Re: [radeonhd] 0x9611:0x1043:0x82EE: ASUS M3A78-CM onboard AMD780V(RS780)
Jakob Bohm writes:
If a video adapter must be set to the same refresh rate for VGA and video output, the NTSC resolution with 60Hz interlaced may be a better choice to match 60Hz VGA signals. As always, a user-configured monitor overrides any compiled-in defaults and may be configured to override autodetection too.
Isn't this how it is working presently? User-supplied modes always have priority; however, they are not used unconditionally but are verified against various other settings.
No that is precisely NOT how it is working presently (at least in 1.2.5): There is no known (==documented) option that makes radeonhd believe my configured monitor over its own flawed autodetection of "no" monitor present. There are options to tell radeonhd that there is no monitor even if it detects one, but no option to tell it there IS a monitor.
This is correct. If no monitor is detected there is presently no way around this. But as soon as a monitor is detected as connected the user supplied modes have precedence.
There should be NO situation in which the autodetection of anything but the adapter itself should cause the driver to refuse to load. The very existence of the execution path that ends by outputting the message "(EE) RADEONHD(0): Failed to detect a connected monitor" is a bug.
The problem here is that DVI connectors can have digital or analog monitors connected. So even if we assume that something is connected we still don't know which output to enable. We could just enable both and hope for the best - the problem here is that due to the chip design it's not always possible to share a crtc between analog and digital outputs as the crtc might have to be programmed slightly differently. This is not true in all cases. We could of course add an elaborate option where the user can specify what sort of output is connected.
The inability to identify the connected monitor should lead to "All known modes are potentially valid" (like in old drivers without autodetection), not to "(WW) RADEONHD(0): Unable to find initial modes" "(EE) RADEONHD(0): RandR: No valid modes. Disabling RandR support."
In summary, the radeonhd code currently scheduled for inclusion in the next Debian release seems extremely lopsided in its desire to consider the card useless until otherwise proven.
I do agree that the monitor detection code needs to be more tolerant and at least try to fall back to probing for the presence of a monitor through DDC if other methods fail.

Cheers, Egbert.

--
To unsubscribe, e-mail: radeonhd+unsubscribe@opensuse.org
For additional commands, e-mail: radeonhd+help@opensuse.org
On Nov 09, 09 11:24:45 +0100, Egbert Eich wrote:
No that is precisely NOT how it is working presently (at least in 1.2.5): There is no known (==documented) option that makes radeonhd believe my configured monitor over its own flawed autodetection of "no" monitor present. There are options to tell radeonhd that there is no monitor even if it detects one, but no option to tell it there IS a monitor.
Option "Enable" in the monitor section should do exactly this. If it doesn't, it's a bug in the RandR layer. Wouldn't be the first one, though :-]
There should be NO situation in which the autodetection of anything but the adapter itself should cause the driver to refuse to load. The very existence of the execution path that ends by outputting the message "(EE) RADEONHD(0): Failed to detect a connected monitor" is a bug.
This is a philosophical question (what to do if no monitor is attached) - and apparently upstream has decided that in this case no X should be started by default. Also note Egbert's comment:
The problem here is that DVI connectors can have digital or analog monitors connected. So even if we assume that something is connected we still don't know which output to enable. We could just enable both and hope for the best - the problem here is that due to the chip design it's not always possible to share a crtc between analog and digital outputs as the crtc might have to be programmed slightly differently. This is not true in all cases.
Matthias
--
Matthias Hopf
On Wed, Nov 11, 2009 at 03:40:56PM +0100, Matthias Hopf wrote:
On Nov 09, 09 11:24:45 +0100, Egbert Eich wrote:
No that is precisely NOT how it is working presently (at least in 1.2.5): There is no known (==documented) option that makes radeonhd believe my configured monitor over its own flawed autodetection of "no" monitor present. There are options to tell radeonhd that there is no monitor even if it detects one, but no option to tell it there IS a monitor.
Option "Enable" in the monitor section should do exactly this. If it doesn't, it's a bug in the RandR layer. Wouldn't be the first one, though :-]
I have Option "Enabled" in there; where did you see documentation for an option named "Enable"?
There should be NO situation in which the autodetection of anything but the adapter itself should cause the driver to refuse to load. The very existence of the execution path that ends by outputting the message "(EE) RADEONHD(0): Failed to detect a connected monitor" is a bug.
This is a philosophical question (what to do if no monitor is attached) - and apparently upstream has decided that in this case no X should be started by default. Also note Egbert's comment:
Aren't YOU the upstream?
The problem here is that DVI connectors can have digital or analog monitors connected. So even if we assume that something is connected we still don't know which output to enable. We could just enable both and hope for the best - the problem here is that due to the chip design it's not always possible to share a crtc between analog and digital outputs as the crtc might have to be programmed slightly differently. This is not true in all cases.
Matthias
In the case of DVI-I connectors (not DVI-D connectors like on my card), these could be treated as two logical connectors for purposes of configuration (one analog connector and one digital connector).

But my principal complaint here stands: a computer should be able to start properly, including loading up its GPU drivers, even if the monitor is not plugged in until much later. If the monitor is not plugged in (really not plugged in, or just seeming not plugged in due to minor hardware or software bugs), the computer and its adapter will obviously not be able to autodetect the capabilities of the monitor and will need to rely on configured data or defaults, but failure is not an option.

--
This message is hastily written, please ignore any unpleasant wordings, do not consider it a binding commitment, even if its phrasing may indicate so. Its contents may be deliberately or accidentally untrue. Trademarks and other things belong to their owners, if any.
On Nov 18, 09 00:46:43 +0100, Jakob Bohm wrote:
On Wed, Nov 11, 2009 at 03:40:56PM +0100, Matthias Hopf wrote:
On Nov 09, 09 11:24:45 +0100, Egbert Eich wrote:
No that is precisely NOT how it is working presently (at least in 1.2.5): There is no known (==documented) option that makes radeonhd believe my configured monitor over its own flawed autodetection of "no" monitor present. There are options to tell radeonhd that there is no monitor even if it detects one, but no option to tell it there IS a monitor.
Option "Enable" in the monitor section should do exactly this. If it doesn't, it's a bug in the RandR layer. Wouldn't be the first one, though :-]
I have Option "Enabled" in there; where did you see documentation for an option named "Enable"?
man xorg.conf, section "MONITOR SECTION".
There should be NO situation in which the autodetection of anything but the adapter itself should cause the driver to refuse to load. The very existence of the execution path that ends by outputting the message "(EE) RADEONHD(0): Failed to detect a connected monitor" is a bug.
This is a philosophical question (what to do if no monitor is attached) - and apparently upstream has decided that in this case no X should be started by default. Also note Egbert's comment:
Aren't YOU the upstream?
I'm sort-of project manager for radeonhd, and a developer there. I do some upstream work on RandR-related stuff and in the X server, but I have close to no time for that right now. And no, I haven't been involved in that decision. If somebody like me writes that "upstream has decided", that typically means that somebody did something somewhere, but at the moment nobody knows exactly what, or why, for sure.
In the case of DVI-I connectors (not DVI-D connectors like on my card), these could be treated as two logical connectors for purposes of configuration (one analog connector and one digital connector).
... which is the way we do it in radeonhd. Other drivers decided to do it differently. Again, philosophical. Both ways have positive and negative sides.
Matthias
--
Matthias Hopf
On Mon, Nov 09, 2009 at 11:24:45AM +0100, Egbert Eich wrote:
There should be NO situation in which the autodetection of anything but the adapter itself should cause the driver to refuse to load. The very existence of the execution path that ends by outputting the message "(EE) RADEONHD(0): Failed to detect a connected monitor" is a bug.
The problem here is that DVI connectors can have digital or analog monitors connected. So even if we assume that something is connected we still don't know which output to enable. We could just enable both and hope for the best - the problem here is that due to the chip design it's not always possible to share a crtc between analog and digital outputs as the crtc might have to be programmed slightly differently. This is not true in all cases.
We could of course add an elaborate option where the user can specify what sort of output is connected.
In which case it is easier to fix up the layout description.

Jakob, please think this one through... When the driver starts, it reads in the board layout, and then tries to detect which monitors are connected. If your board layout in ATOM is a mess (which is far too common), then chances are that your monitor cannot properly be detected. This detection influences many things, as often this detected monitor also decides the initial size of the framebuffer, the dpi of the server, etc.

If no monitor is detected, there are basically two options:

* Either you throw up your hands and say "hey, no monitor detected!!!" and then fall back to the console so that the user at least gets to see something and can try to work around the issue from there.
* Or, continue working anyway, initialising a framebuffer, and the hardware without enabling any output, providing the user with blackened displays everywhere.

Decide for yourself what you prefer: all black monitors with no chance of getting out anymore (as ctrl-alt-backspace is often disabled today) and no information on what is going wrong, or the console, where you can try to fix the issue?

Luc Verhaegen.
First, an apology to one of the other posters: I was unaware that the give-up logic was (at least partially) in RandR, not in radeonhd. From the limited end-user documentation I assumed it was a side effect of the documented radeonhd-only rule that autodetection trumps the monitor config by default.

On Mon, Nov 23, 2009 at 06:31:06PM +0100, Luc Verhaegen wrote:
On Mon, Nov 09, 2009 at 11:24:45AM +0100, Egbert Eich wrote:
There should be NO situation in which the autodetection of anything but the adapter itself should cause the driver to refuse to load. The very existence of the execution path that ends by outputting the message "(EE) RADEONHD(0): Failed to detect a connected monitor" is a bug.
The problem here is that DVI connectors can have digital or analog monitors connected. So even if we assume that something is connected we still don't know which output to enable. We could just enable both and hope for the best - the problem here is that due to the chip design it's not always possible to share a crtc between analog and digital outputs as the crtc might have to be programmed slightly differently. This is not true in all cases.
We could of course add an elaborate option where the user can specify what sort of output is connected.
In which case it is easier to fix up the layout description.
Jakob, please think this one through...
When the driver starts, it reads in the board layout, and then tries to detect which monitors are connected. If your board layout in ATOM is a mess (which is far too common), then chances are that your monitor cannot properly be detected.
This detection influences many things, as often, this detected monitor also decides the initial size of the framebuffer, the dpi of the server, etc..
If no monitor is detected, there are basically two options.
* Either you throw up your hands and say "hey, no monitor detected!!!" and then fall back to the console so that the user at least gets to see something and can try to work around the issue from there.
* Or, continue working anyway, initialising a framebuffer, and the hardware without enabling any output, providing the user with blackened displays everywhere.
Decide for yourself what you prefer, all black monitors with no chance of getting out anymore (as ctrl-alt-backspace often is disabled today) and no information on what is going wrong, or the console where you can try to fix the issue?
Luc Verhaegen.
That is a false alternative. The proper alternative to giving up is to assume that the explicit Monitor configuration given in xorg.conf (or a fallback default) is true, and to configure the output, dpi, framebuffer etc. accordingly. For instance, if the config says that there is an analog monitor capable of 1920x1200 @ 60 Hz connected to the VGA connector, then generate output for such a monitor, even if it cannot be detected with current algorithms.

If the config says nothing, then set 640x480 resolution with timing for a generic VGA monitor on the VGA output (if any), or a generic VGA monitor on the analog part of a DVI output (if any), or a generic TV on the video output (if any), or a generic SD (576 lines) flat panel on the digital part of a DVI output (if any), or the HDMI output (if any), or the DisplayPort output (if any). Then, if the config still says nothing, enable the same picture on as many as possible of the other outputs. In most cases this should result in usable output and easy hand-tuning to actual monitor capabilities. This also mirrors the traditional behaviour of X, the BIOS, the console, and non-free operating systems.

Oh, and in my case the problem seems NOT to be an ATOM BIOS issue, but the fact that some widely used ATEN KVM switch ASICs do not load the VGA output heavily enough to be detected by the detection code.
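The fallback ordering described above can be sketched in a few lines of Python. This is an illustration only: the connector names, mode strings, and the ordering table are made-up assumptions for this example, not radeonhd's actual data structures.

```python
# Illustrative sketch of the proposed "never refuse to load" fallback
# policy. Connector names and mode strings are invented for this
# example; a real driver would use its board-layout tables instead.

GENERIC_DEFAULTS = [
    # (connector kind, least-common-denominator mode), best match first
    ("VGA",         "640x480@60"),   # generic VGA monitor
    ("DVI-analog",  "640x480@60"),   # analog half of a DVI-I port
    ("TV",          "720x576@50i"),  # generic SD TV on the video output
    ("DVI-digital", "720x576@50"),   # generic SD (576-line) flat panel
    ("HDMI",        "720x576@50"),
    ("DisplayPort", "720x576@50"),
]

def pick_initial_modes(connectors, configured_mode=None):
    """Return a {connector: mode} plan rather than refusing to load.

    If xorg.conf configured a mode, trust it; otherwise fall back to
    the first generic default matching an available connector, then
    mirror the same picture on every output.
    """
    if configured_mode is None:
        for kind, fallback in GENERIC_DEFAULTS:
            if kind in connectors:
                configured_mode = fallback
                break
        else:
            return {}   # a card with no outputs at all: nothing to drive
    return {c: configured_mode for c in connectors}

print(pick_initial_modes(["DVI-digital", "VGA"]))
# → {'DVI-digital': '640x480@60', 'VGA': '640x480@60'}
```

Note that with no detection and no config, every connector ends up driven with a safe mode, which matches the traditional behaviour of the BIOS and console.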
On Mon, Nov 23, 2009 at 11:09:42PM +0100, Jakob Bohm wrote:
First an apology to one of the other posters: I was unaware that the give-up logic was (at least partially) in Randr, not in radeonhd. From the limited end user documentation I assumed it was a side effect of the documented radeonhd-only rule that autodetection trumps the monitor config by default.
On Mon, Nov 23, 2009 at 06:31:06PM +0100, Luc Verhaegen wrote:
On Mon, Nov 09, 2009 at 11:24:45AM +0100, Egbert Eich wrote:
There should be NO situation in which the autodetection of anything but the adapter itself should cause the driver to refuse to load. The very existence of the execution path that ends by outputting the message "(EE) RADEONHD(0): Failed to detect a connected monitor" is a bug.
The problem here is that DVI connectors can have digital or analog monitors connected. So even if we assume that something is connected we still don't know which output to enable. We could just enable both and hope for the best - the problem here is that due to the chip design it's not always possible to share a crtc between analog and digital outputs as the crtc might have to be programmed slightly differently. This is not true in all cases.
We could of course add an elaborate option where the user can specify what sort of output is connected.
In which case it is easier to fix up the layout description.
Jakob, please think this one through...
When the driver starts, it reads in the board layout, and then tries to detect which monitors are connected. If your board layout in ATOM is a mess (which is far too common), then chances are that your monitor cannot properly be detected.
This detection influences many things, as often, this detected monitor also decides the initial size of the framebuffer, the dpi of the server, etc..
If no monitor is detected, there are basically two options.
* Either you throw up your hands and say "hey, no monitor detected!!!" and then fall back to the console so that the user at least gets to see something and can try to work around the issue from there.
* Or, continue working anyway, initialising a framebuffer, and the hardware without enabling any output, providing the user with blackened displays everywhere.
Decide for yourself what you prefer, all black monitors with no chance of getting out anymore (as ctrl-alt-backspace often is disabled today) and no information on what is going wrong, or the console where you can try to fix the issue?
Luc Verhaegen.
That is a false alternative. The proper alternative to giving up is to assume that the explicit Monitor configuration given in xorg.conf (or a fallback default) is true and configure the output, dpi, framebuffer etc. accordingly. For instance if the config says that there is an analog monitor capable of 1920x1200 @ 60 Hz connected to the VGA connector, then generate output for such a monitor, even if it cannot be detected with current algorithms.
Generate an output? Outputs are _there_ on the chip, and the way they are wired to the connectors is predefined. What do you expect us to do, just randomly pick any output, and then give you the opportunity to restart the X server over and over again until you finally, accidentally, hit the one you need at this time?

Now, about the monitor section: please refer back about 2.5 years and see how much crap I got for using the monitor section when it was present, and we ended up adding an extra option (UseConfiguredMonitor) just to enable monitor sections when they are there. This was not something that I accepted lightly. So do get off your high horse and stop trashing things we had to do because we got a lot more trashing in the other direction already.
If the config says nothing, then set 640x480 resolution with timing for a generic VGA monitor on the VGA output (if any) or a generic VGA monitor on the analog part of a DVI output (if any) or a generic TV on the video output (if any) or a generic SD (576 lines) flat panel on the digital part of a DVI output (if any) or the HDMI output (if any), or the displayport output (if any). Then if the config still says nothing, enable the same picture on as many as possible of the other outputs. In most cases this should result in usable output and easy hand tuning to actual monitor capabilities. This also mirrors the traditional behaviour of X, BIOS, console and non-free operating systems.
Which one of these outputs should we use? Or maybe _all_ of these outputs? Or maybe choose them at random? Or maybe leave them all blank? As said before, under the current circumstances, bailing is a very good option that at least gives the user the opportunity to fix the issue. Way better than presenting the user with a blanked screen and no information and no other option than to reboot into runlevel 3.
Oh, and in my case the problem seems NOT to be an ATOM bios issue, but the fact that some widely used ATEN KVM switch ASICs do not load the VGA output heavily enough to be detected by the detection code.
Aha, so load detection fails, but that still does not mean that our default behaviour is wrong. You should be able to associate a monitor section with an output and then force it on (anyone acquainted with RandR 1.2, please advise).

Now, something else entirely. Are you alone in this world? Are you the only person on this planet who is using this graphics hardware on free software? Are the demands you have the only demands that all graphics drivers should fill? Or could it be that there are many more Jakob Bohms out there, who all think that their situation is the only thing that matters and that the whole world should be geared towards making life perfect for this Jakob Bohm too.

Luc Verhaegen.
On Nov 24, 09 00:03:55 +0100, Luc Verhaegen wrote:
So do get off your high horse and stop trashing things we had to do because we got a lot more trashing in the other direction already.
Luc, calm down. The request is perfectly valid, though not achievable with the current RandR implementation and logic (dunno whether it would be possible at all the way RandR is designed ATM). What we need is:

- Ability to reconfigure DPI, etc. on-the-fly (should be working)
- Ability to probe outputs flicker-free (at least partly working for radeonhd thanks to Egbert's latest changes)
- Resizeable framebuffer (or a reasonably large preallocated fb - the former already working with radeon/KMS, the latter with radeonhd, modulo bugs)
- User interfaces that deal nicely with DPI and screen-space changes (the latter often - not always - working, the former not)
- Some code in RandR, or a yet-to-be-written user-space daemon, or in the display managers *and* desktops, that probes outputs every second or so and reacts on the changes

The last point is the main issue - there is no code. Additionally, some chips (notably Intel) don't allow for flicker-free probing, so you would have to rely on DDC probing. RandR logic was to always switch off a CRTC before doing the probing; I think that has already changed (but I could be mistaken).

Due to this I don't think we will see this soon, as a non-generic solution probably won't be accepted, especially if it hits chips with a large user base as with Intel. So in the end, this is a valid request, but I don't see it coming in the near-term future. And it's nothing to be implemented in radeonhd at all. Well, maybe there are some bugs lurking that have to be fixed, which don't surface yet.
Which one of these outputs should we use? Or maybe _all_ of these outputs? Or maybe choose them at random? Or maybe leave them all blank?
The logic would use none at first, basically using a shadow fb only, and enable them as soon as something is connected, which includes framebuffer size and DPI changes, etc.
As said before, under the current circumstances, bailing is a very good option that at least gives the user the opportunity to fix the issue. Way better than presenting the user with a blanked screen and no information and no other option than to reboot into runlevel 3.
Agreed. ATM nothing else is usable.
Oh, and in my case the problem seems NOT to be an ATOM bios issue, but the fact that some widely used ATEN KVM switch ASICs do not load the VGA output heavily enough to be detected by the detection code.
Tested with latest git master? Egbert changed the load detection code, it's much better now.
You should be able to associate a monitor section with an output and then force it on (anyone acquainted with RandR 1.2, please advise).
Take a look at http://www.x.org/wiki/radeonhd (Examples) - and add an
'Option "Enable"' to the Monitor section. If DDC doesn't work, you will
have to add VertRefresh and HorizSync. man xorg.conf.
Matthias
--
Matthias Hopf
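Putting Matthias' suggestions together, a minimal xorg.conf fragment might look like the following. The identifiers, the sync ranges, and the output name "VGA_1" are illustrative assumptions - check your Xorg log for the actual output names reported by the driver:

```
Section "Monitor"
    Identifier  "KVMMonitor"
    Option      "Enable" "true"
    # Needed when DDC probing fails (e.g. through a KVM switch);
    # substitute your monitor's real ranges:
    HorizSync   31.5-64.0
    VertRefresh 56.0-65.0
EndSection

Section "Device"
    Identifier  "RadeonHD"
    Driver      "radeonhd"
    # Tie the monitor section to a specific output (RandR 1.2 style):
    Option      "Monitor-VGA_1" "KVMMonitor"
EndSection
```

See man xorg.conf, sections "MONITOR SECTION" and "DEVICE SECTION", for the Option "Enable" and Option "Monitor-outputname" mechanisms.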
On Tue, Nov 24, 2009 at 03:06:28PM +0100, Matthias Hopf wrote:
On Nov 24, 09 00:03:55 +0100, Luc Verhaegen wrote:
So do get off your high horse and stop trashing things we had to do because we got a lot more trashing in the other direction already.
Luc, calm down. The request is perfectly valid, though not achievable with current RandR implementation and logic (dunno whether it would be possible at all the way RandR is designed ATM).
Now that we have all had a few months to calm down, here are some supplemental notes for this old thread.

1. My mission in my postings was to help out and to educate, as I had already established a workaround for my personal machine. I am sorry this turned into a flamewar, which is why I backed down from the debate until now.

2. While it may not have come across, I have been doing video adapter programming on other operating systems for a few decades, but I have not yet worked with the innards of X or its driver model, hence my confusion regarding the distribution of tasks between RandR and the driver.
What we need is
- Ability to reconfigure DPI, etc. on-the-fly (should be working)
This would only be needed if changing resolution etc. upon seeing a monitor getting plugged in. A simpler, more traditional approach is to just stick with whatever resolution was chosen at driver startup, even if it is not optimal for the real monitor. For instance, if the driver started with 96dpi 800x600, it would just continue reporting and using those values even if connected to a 120dpi monitor capable of 1600x1200 pixels.
- Ability to probe outputs flicker-free (at least partly working for radeonhd thanks to Egbert's latest changes)
This can be avoided by generating "blind" output even when there is no monitor receiving it. This is the traditional solution for analog / primitive channels such as VGA, CGA/EGA and TV ports. It may not be good for DVI/HDMI/DisplayPort outputs, but those are more likely to facilitate flicker-free detection.
- Resizeable framebuffer (or a reasonably large preallocated fb - the former already working with radeon/KMS, the latter with radeonhd, modulo bugs)
Again this is needed only if recomputing the resolution dynamically.
- User interfaces that deal nicely with DPI and screen-space changes (the latter often - not always - working, the former not)
For comparison, at least one widely used operating system has officially supported this since 1995 via a simple "user preferences/screen resolution changed" event, prompting applications and toolkits to recheck any cached data (or just continue if they don't care). This event was originally just used for the case of an end user manually changing the configuration through a GUI, but was simple and generic enough to work the same in the presence of automatic changes. Despite this simplicity and maturity, there are still (as of 2010) business-critical applications on that OS which crash and burn when they receive the message (I get flak for this due to my own driver work...).
- Some code in RandR or a yet-to-be-written user space daemon or in the display managers *and* desktops that probes output every second or so, and reacts on the changes
Moving probing too far up the layers would make life hard for people not running a fancy desktop or hundreds of desktop-helper daemons. X is supposed to be usable with a single non-WM app or a primitive WM such as twm. I see that another poster already figured out how to keep this at the driver/RandR level.
The last point is the main issue - there is no code. Additionally, some chips (notably Intel) don't allow for flicker-free probing, so you would have to rely on DDC probing. RandR logic was to always switch off a CRTC before doing the probing; I think that has already changed (but I could be mistaken).
Due to this I don't think we will see this soon, as a non-generic solution probably won't be accepted, especially if it hits chips with a large user base as with Intel.
This is why all my posts assumed startup-probing only, with some classic least-common-denominator fallback values in the absence of user overrides. Just as in old X drivers for old chipsets like the ET4000 or S3 Trio. There would basically be multiple sources for the output resolution, with decreasing priority:

1. User-forced configuration with "Enable Yes" (Example: user says force 1280x1024, hardware detection ignored, 1280x1024 wins).
2. User non-forced configuration, if compatible with actual data from detection (Example: user says 1280x1024, hardware says monitor can do up to 1600x1200, 1280x1024 wins).
3. Actual data from detection (Example: user says 1280x1024 or is silent, hardware says monitor can do up to 1024x768, 1024x768 wins).
4. User non-forced configuration (Example: user says 1280x1024, hardware failed to detect what is plugged in or detected no monitor yet, 1280x1024 wins).
5. Cross-CRTC chipset limitation (Example: one of the above rules configured the HDMI output, there is no config for the VGA output and no detection on the VGA output, and the CRTC chip can only do certain resolutions on the VGA port when the HDMI port has that resolution, so it does that).
6. Historic minimum hardware specs (Example: user says nothing, hardware failed to detect what is plugged in or nothing is plugged in yet, 640x480 @ 60Hz wins).

Thanks for your patience.
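The priority cascade Jakob describes can be sketched compactly. This is an illustration under stated assumptions: resolutions are modeled as (width, height) tuples, the function name is invented, and rule 5 (the cross-CRTC limitation) is omitted because it depends on chip-specific constraints.

```python
def choose_resolution(user=None, forced=False, detected_max=None):
    """Sketch of the six-rule priority cascade (rule 5 omitted).

    user:         (w, h) from xorg.conf, or None if unconfigured
    forced:       True when the config carries "Enable Yes"
    detected_max: (w, h) the hardware detected, or None on failure
    """
    FALLBACK = (640, 480)                      # rule 6: historic minimum

    if user and forced:
        return user                            # rule 1: forced config wins

    if detected_max:
        fits = lambda m: m[0] <= detected_max[0] and m[1] <= detected_max[1]
        if user and fits(user):
            return user                        # rule 2: compatible user choice
        return detected_max                    # rule 3: detection wins

    if user:
        return user                            # rule 4: detection failed

    return FALLBACK                            # rule 6: nothing known at all

# Rule 4 in action: detection failed, so the configured mode is trusted.
print(choose_resolution(user=(1280, 1024)))
# → (1280, 1024)
```

The key property is that every branch returns something: the cascade never reaches a "refuse to load" state.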
participants (4)
- Egbert Eich
- Jakob Bohm
- Luc Verhaegen
- Matthias Hopf