Some background: I wanted to test the EGL backend on my old laptop with Intel graphics limited to OpenGL 2.1 / GLES 2.0. Running any game crashed with a NULL GL_RENDERER, but not with GLX. Tracing showed that wined3d first tries to create a context with a high `WGL_CONTEXT_MAJOR_VERSION_ARB`/`WGL_CONTEXT_MINOR_VERSION_ARB`, which the hardware does not support. With EGL this results in a nil context, but due to the lack of error handling it is always treated as a success. With GLX the failure is reported, the context creation is not considered successful, and wined3d retries with lower `WGL_CONTEXT_MAJOR_VERSION_ARB`/`WGL_CONTEXT_MINOR_VERSION_ARB` values until an actual context is created. Proper error handling when creating the EGL context, as in this patch, also allows wined3d to work on such hardware.
I think the real issue is somewhere in wined3d/wgl, which initially requests the wrong GL version for the context, but I don't have enough knowledge of the code base to say exactly. In any case, I think error handling when creating a context, like with GLX, is necessary to deal with errors like this.
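For illustration, here is a minimal sketch of the kind of fallback that proper error handling makes possible. This is not the actual Wine code; the helper name and the version list are made up, and it assumes `eglBindAPI(EGL_OPENGL_API)` has already been called.

```c
/* Minimal sketch, not the actual Wine EGL backend: retry context creation
 * with lower GL versions when eglCreateContext() fails. */
#include <EGL/egl.h>

static EGLContext create_context_with_fallback(EGLDisplay display, EGLConfig config)
{
    /* Illustrative version list only. */
    static const struct { EGLint major, minor; } versions[] =
    {
        {4, 6}, {4, 0}, {3, 3}, {3, 0}, {2, 1}, {1, 0},
    };
    unsigned int i;

    for (i = 0; i < sizeof(versions) / sizeof(versions[0]); ++i)
    {
        const EGLint attribs[] =
        {
            EGL_CONTEXT_MAJOR_VERSION, versions[i].major,
            EGL_CONTEXT_MINOR_VERSION, versions[i].minor,
            EGL_NONE,
        };
        EGLContext ctx = eglCreateContext(display, config, EGL_NO_CONTEXT, attribs);

        /* eglCreateContext() returns EGL_NO_CONTEXT on failure; without this
         * check a nil context would be treated as a success. */
        if (ctx != EGL_NO_CONTEXT) return ctx;
    }
    return EGL_NO_CONTEXT;
}
```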
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/9052
On Mon Sep 29 21:59:33 2025 +0000, Matteo Bruni wrote:
> Just a question in passing, unrelated to the patch: do we know if
> `D3DX10_FILTER_POINT` is actually the default filter?
I've yet to find any explicit mention of the default filter type for d3dx10/d3dx11 in the documentation, but I doubt that it's a point filter :) I intend to write some tests to try to figure it out, now that I have some understanding of the different filter types.
Since we don't handle the `Filter` member of the passed-in `D3DX10_IMAGE_LOAD_INFO` and the default type is unknown, I've just defaulted to point here. I can print a `FIXME()` somewhere around here if that'd help clarify, or maybe add a code comment.
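A minimal sketch of what such a `FIXME()` could look like; the helper is hypothetical, not the actual d3dx10 code, and the debug channel name is illustrative:

```c
/* Hypothetical fragment: flag the ignored Filter member while still
 * defaulting to point filtering. */
#include "d3dx10.h"
#include "wine/debug.h"

WINE_DEFAULT_DEBUG_CHANNEL(d3dx);

static UINT get_load_filter(const D3DX10_IMAGE_LOAD_INFO *load_info)
{
    if (load_info->Filter != D3DX10_DEFAULT && load_info->Filter != D3DX10_FILTER_POINT)
        FIXME("Unhandled filter %#x, defaulting to D3DX10_FILTER_POINT.\n", load_info->Filter);
    return D3DX10_FILTER_POINT;
}
```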
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/9034#note_117155
On Mon Sep 29 21:59:33 2025 +0000, Matteo Bruni wrote:
> If I understand correctly, this affects the output. In which case I'd
> avoid calling it an optimization, both in the comment and the commit subject.
> It's a bit of a pet peeve of mine, please bear with me...
I wrote this patch a while ago; I _think_ I used the term "optimization" to bolster my argument for changing this behavior. It technically should be faster, since it lets us get out of this function earlier. The decompressed values don't change with this "optimization".
The actual compressed data values shouldn't really matter; what matters is the decompressed values. But in `check_texture{2d,3d}_data()` we check the compressed values. What would be a better term to use here? Just "Wine-specific change"?
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/9034#note_117153
I split the implementation into 2 parts to make it easier to review. So please review this MR first and make sure it's good, so that we don't have to rebase. The following patches are in !7571.
--
v3: mfreadwrite: Implement sink_writer_SetInputMediaType.
mfreadwrite: Implement IMFSinkWriterEx.
mfreadwrite: Add converter transform to stream.
mfreadwrite: Add attributes member to writer struct.
https://gitlab.winehq.org/wine/wine/-/merge_requests/7570
And set a QS_DRIVER bit when the queue fd is ready for reading, or clear
it if it isn't, before waiting on it.
Clear the bit after waiting, as we expect the client to process events
before calling the server again and before we can poll it again.
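
A minimal sketch of the idea, not the actual server code: the `QS_DRIVER` value, the helper name, and the plain poll()-based wait are placeholders.

```c
/* Sketch only: reflect the queue fd's readability in a QS_DRIVER-style bit
 * before waiting, and clear it again after the wait. */
#include <poll.h>

#define QS_DRIVER 0x80000000u  /* placeholder value, not from the MR */

static void wait_on_queue(int queue_fd, unsigned int *wake_bits)
{
    struct pollfd pfd = { .fd = queue_fd, .events = POLLIN };

    /* Set QS_DRIVER if the queue fd is ready for reading, clear it otherwise. */
    if (poll(&pfd, 1, 0) > 0 && (pfd.revents & POLLIN)) *wake_bits |= QS_DRIVER;
    else *wake_bits &= ~QS_DRIVER;

    /* Block until the fd becomes readable (the actual wait). */
    poll(&pfd, 1, -1);

    /* Clear the bit after waiting: the client is expected to process events
     * before calling the server again and before the fd is polled again. */
    *wake_bits &= ~QS_DRIVER;
}
```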
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/9071