On 5 September 2012 18:52, Andy Ritger <aritger@nvidia.com> wrote:
At first glance, I agree that would be easier for applications, but that approach has some drawbacks:
- lies to the user/application about what timings are actually being driven to the monitor
- the above causes confusion: the timings reported in the monitor's on-screen display don't match what the X server reports
- user/application doesn't get complete control over what actual timings are being sent to the monitor
- does not provide the full flexibility of the hardware, e.g., arbitrary positioning of the ViewPortOut within the active raster
Perhaps, but none of that changes, as far as Win32 applications are concerned, if we generate modes in Wine instead of in the kernel. From Wine's point of view, we'd just get a bunch of extra code to maintain because NVIDIA does things differently from everyone else.
I imagine counter arguments include:
- we already have the "scaling mode" output property in most drivers; that is good enough
- Transformation matrix and Border are too low level for most applications
For the first counter argument: I'm trying to make the case that providing the full flexibility, and being truthful about mode timings to users/applications, is valuable enough to merit a change (hopefully even in the drivers that currently expose a "scaling mode" output property).
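For concreteness, here is roughly what that low-level path looks like from the client side. This is a minimal sketch, assuming libXrandr 1.3, a CRTC already driving a real 1920x1080 mode, and the xrandr --scale convention that the visible screen area is the transform applied to the mode size; the Border property is left out since its layout is driver-specific.

#include <string.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrender.h>  /* XTransform, XDoubleToFixed */
#include <X11/extensions/Xrandr.h>

/* Show a 1280x720 desktop area on a CRTC that keeps driving real 1920x1080
 * timings.  Following the xrandr --scale convention, the screen area seen by
 * applications becomes the transform applied to the mode size, i.e. 1280x720,
 * while the monitor keeps receiving 1920x1080 timings.  The transform is
 * "pending"; it takes effect when the CRTC is next configured with
 * XRRSetCrtcConfig. */
static void set_gpu_upscale(Display *dpy, RRCrtc crtc)
{
    XTransform xform;

    memset(&xform, 0, sizeof(xform));
    xform.matrix[0][0] = XDoubleToFixed(1280.0 / 1920.0);
    xform.matrix[1][1] = XDoubleToFixed(720.0 / 1080.0);
    xform.matrix[2][2] = XDoubleToFixed(1.0);

    /* "bilinear" filtering; it takes no extra filter parameters. */
    XRRSetCrtcTransform(dpy, crtc, &xform, "bilinear", NULL, 0);
}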
I must say that I'm having some trouble imagining what not generating standard modes would allow someone to do that they couldn't do before. In terms of figuring out the "real" timings, the RandR "preferred" mode is probably close enough, but I suppose it should be fairly easy to extend RandR to explicitly mark specific modes as "native". I imagine that for most applications it's just an implementation detail whether the display panel has a scaler itself or whether the scaling is done by the GPU, though. Either way, that seems like a discussion more appropriate for e.g. dri-devel.
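For reference, the "preferred" information is already queryable today. A minimal sketch, assuming libXrandr and an already-opened display, that lists an output's preferred (typically native) modes; RandR puts them first in the output's mode list and reports their count in npreferred:

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

/* Print the preferred modes of one output. */
static void print_preferred_modes(Display *dpy, Window root, RROutput output)
{
    XRRScreenResources *res = XRRGetScreenResourcesCurrent(dpy, root);
    XRROutputInfo *info = XRRGetOutputInfo(dpy, res, output);
    int i, j;

    for (i = 0; i < info->npreferred; ++i)
    {
        for (j = 0; j < res->nmode; ++j)
        {
            if (res->modes[j].id == info->modes[i])
                printf("%s: preferred mode %s (%ux%u)\n", info->name,
                       res->modes[j].name, res->modes[j].width, res->modes[j].height);
        }
    }

    XRRFreeOutputInfo(info);
    XRRFreeScreenResources(res);
}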
When the RandR primary output (as queried/set by RR[SG]etOutputPrimary) is non-None, its CRTC will be sorted to the front of the CRTC list reported by RRGetScreenResources{,Current}. However, None is a valid value for the primary output, in which case all bets are off with respect to the CRTC/output sorting order in the RRGetScreenResources{,Current} reply.
Yes, as I said, this is something we'll probably address at some point.
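The eventual fix will presumably look something like the following. A minimal sketch, assuming libXrandr 1.3, that resolves the primary output explicitly instead of relying on the CRTC sort order, with a fallback when no primary is set:

#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

/* Return the CRTC Wine should treat as primary: the primary output's CRTC
 * if one is set, otherwise the first enabled CRTC in the resources list. */
static RRCrtc get_primary_crtc(Display *dpy, Window root)
{
    XRRScreenResources *res = XRRGetScreenResourcesCurrent(dpy, root);
    RROutput primary = XRRGetOutputPrimary(dpy, root);
    RRCrtc crtc = None;
    int i;

    if (primary != None)
    {
        XRROutputInfo *info = XRRGetOutputInfo(dpy, res, primary);
        crtc = info->crtc; /* None if the primary output is currently disabled */
        XRRFreeOutputInfo(info);
    }

    for (i = 0; crtc == None && i < res->ncrtc; ++i)
    {
        XRRCrtcInfo *ci = XRRGetCrtcInfo(dpy, res, res->crtcs[i]);
        if (ci->mode != None)
            crtc = res->crtcs[i];
        XRRFreeCrtcInfo(ci);
    }

    XRRFreeScreenResources(res);
    return crtc;
}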
Further, while the RandR primary output seems like a reasonable default, the spec spells out a focus on window managers (e.g., "primary" is where the menu bar should be placed). It seems like a valid use case would be for the user to have his window manager's primary output on one monitor, but run his full-screen Wine application on another monitor. Given that, would it be reasonable for the user to specify the RandR output he wants Wine to use?
We can probably add an override if there's a lot of demand. It doesn't strike me as a very common use case though.
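If we did add one, it would mostly come down to matching a user-supplied name against the RandR output names. A minimal sketch, where WINE_RANDR_OUTPUT is purely hypothetical and only stands in for whatever override mechanism (registry key, environment variable, etc.) we would actually pick:

#include <stdlib.h>
#include <string.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

/* Resolve a user-requested output by name, e.g. "DP-1" or "HDMI-0".
 * WINE_RANDR_OUTPUT is a hypothetical override, not an existing Wine knob. */
static RROutput find_requested_output(Display *dpy, Window root)
{
    const char *wanted = getenv("WINE_RANDR_OUTPUT");
    XRRScreenResources *res;
    RROutput output = None;
    int i;

    if (!wanted)
        return None; /* no override requested, use the usual default */

    res = XRRGetScreenResourcesCurrent(dpy, root);
    for (i = 0; output == None && i < res->noutput; ++i)
    {
        XRROutputInfo *info = XRRGetOutputInfo(dpy, res, res->outputs[i]);
        if (info->connection != RR_Disconnected && !strcmp(info->name, wanted))
            output = res->outputs[i];
        XRRFreeOutputInfo(info);
    }
    XRRFreeScreenResources(res);
    return output;
}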
I can definitely believe that plumbing RandR outputs to multiple objects in Win32 is not an important/compelling use case, since not many Win32 applications would do useful things with that. What seems more useful, though, is driving multiple RandR outputs and presenting that to Win32 as a single big screen. E.g., "immersive gaming" where your Wine application spans two, three, or more RandR outputs (NVIDIA Kepler GPUs can have up to four heads).
Perhaps there's a use case for a "big screen" setup, but that too is probably best handled at the RandR / X server level rather than in Wine. I don't think you can do "immersive gaming" properly without support from the application, though; you'll get fairly significant distortion at the edges if you just render to such a setup as if it were a single very wide display. (Also, odd numbers of displays are probably more useful for this than even numbers.)
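For what it's worth, presenting several outputs as one big screen would on the Wine side mostly amount to reporting the union of the active CRTC geometries. A minimal sketch, assuming libXrandr and ignoring overlapping or rotated CRTCs:

#include <limits.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

/* Compute the bounding box of all active CRTCs, i.e. the geometry a
 * "single big screen" spanning every enabled output would have. */
static void get_big_screen_rect(Display *dpy, Window root,
                                int *x, int *y, unsigned int *w, unsigned int *h)
{
    XRRScreenResources *res = XRRGetScreenResourcesCurrent(dpy, root);
    int x1 = INT_MAX, y1 = INT_MAX, x2 = INT_MIN, y2 = INT_MIN;
    int i;

    for (i = 0; i < res->ncrtc; ++i)
    {
        XRRCrtcInfo *ci = XRRGetCrtcInfo(dpy, res, res->crtcs[i]);
        if (ci->mode != None)
        {
            if (ci->x < x1) x1 = ci->x;
            if (ci->y < y1) y1 = ci->y;
            if (ci->x + (int)ci->width  > x2) x2 = ci->x + (int)ci->width;
            if (ci->y + (int)ci->height > y2) y2 = ci->y + (int)ci->height;
        }
        XRRFreeCrtcInfo(ci);
    }
    XRRFreeScreenResources(res);

    if (x2 <= x1 || y2 <= y1) /* no enabled CRTC found */
    {
        *x = *y = 0;
        *w = *h = 0;
        return;
    }

    *x = x1;
    *y = y1;
    *w = (unsigned int)(x2 - x1);
    *h = (unsigned int)(y2 - y1);
}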