TR1X
OG bug: can't run on older hardware
Running tomb1main on an old computer yields:
```
../src/gfx/context.c 137 GFX_Context_SetWindowSize Window size: 1366x768
../src/game/random.c 10 Random_SeedControl 41937
../src/game/random.c 22 Random_SeedDraw 40483
../src/gfx/context.c 56 GFX_Context_Attach Attaching to HWND 002C01C0
../src/specific/s_shell.c 46 S_Shell_ShowFatalError Shader compilation failed:
0:1(10): error: GLSL 3.30 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.00 ES, and 3.00 ES
```
Instead of exiting, tomb1main should fall back to earlier GLSL versions.
OpenLara has some checks like: https://github.com/XProger/OpenLara/blob/master/src/gapi/gl.h#L1388
https://github.com/XProger/OpenLara/blob/master/src/platform/win/main.cpp#L483
Not sure this is easily solvable but it would be nice to support.
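For what it's worth, the fallback could be as simple as retrying compilation under progressively older #version directives until the driver accepts one. Below is a minimal sketch of that idea; the function names and version list are made up for illustration (this is not tomb1main code), and it assumes the shader body itself carries no #version line:

```c
/* Illustrative sketch of a GLSL version fallback: prepend candidate
 * #version directives, newest first, and keep the first one that
 * compiles. Assumes a GL loader (e.g. glad/GLEW) provides the GL 2.0+
 * entry points. */
#include <stddef.h>
#include <GL/gl.h>

static GLuint try_compile(GLenum type, const char *version_line, const char *body)
{
    const GLchar *sources[2] = { version_line, body };
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 2, sources, NULL);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}

GLuint compile_with_fallback(GLenum type, const char *body)
{
    static const char *versions[] = {
        "#version 330 core\n",
        "#version 130\n",
        "#version 120\n",
    };
    for (size_t i = 0; i < sizeof(versions) / sizeof(versions[0]); i++) {
        GLuint shader = try_compile(type, versions[i], body);
        if (shader != 0) {
            return shader;
        }
    }
    return 0; /* caller can show the existing fatal error */
}
```

Of course, a shader written for 330 may need more than a new version line (e.g. layout qualifiers), so the version list alone is not the whole story.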
I opened a low-quality pull request to address this; I'm against OpenLara's approach of hardcoding the shaders, and I'd rather not maintain multiple built-in shaders. I'd appreciate it if @richard-l could test whether it works on his machine.
@rr- test latest release on potato machine?
Edit: ah you mean downgrade_shaders branch?
Yes that branch :) Tomb1Main-2.3-1-gaf119dc.zip
I tested this build but now it crashes.
```
../src/gfx/context.c 137 GFX_Context_SetWindowSize Window size: 1366x768
../src/game/random.c 10 Random_SeedControl 60283
../src/game/random.c 22 Random_SeedDraw 59161
../src/gfx/context.c 56 GFX_Context_Attach Attaching to HWND 00020222
```
Sadly this tells me nothing. Does original TombATI work for you?
Original TombATI refuses to run as well.
What's your GPU?
Intel(R) UHD Graphics 630
> Yes that branch :) Tomb1Main-2.3-1-gaf119dc.zip

Runs perfectly on this GPU.
I do know from using integrated Intel GPUs, though, that their drivers are some of the worst ever, so I'm not surprised to hear of problems.
I would like to reopen this. With current v2.7 I have these systems that crap out:
- Windows 7 x64, Core i7-930, Geforce 210 (driver 2016) -> S_Shell_ShowFatalError Shader compilation failed
- Windows 7 x64, Core i3-2310M, Intel Graphics -> Shader compilation failed
Meanwhile, these systems run v2.7 fine when I change the #version in the first line of the four shader files back to 330 instead of 130. Of course, if there are examples of the "130" hack being beneficial, I am all ears, but I kinda doubt it.
Edit: Report from someone else with unmodified shaders:
- Windows 7 x64, Core 2 Quad Q6600, GeForce 9600GT (driver 2015) -> Shader compilation failed
Thanks @rr-. For clarity: I do not expect the developer(s) to put effort into the topic title "Can't run on older hardware", because that can mean a lot of things and a lot of work. The same goes for framerate. For now I just propose reverting the shader file version change.
"Windows 7 x64, Core 2 Quad Q6600, GeForce 9600GT (driver 2015) -> Shader compilation failed" The user of this system also just reported back: shader files versioned 330 made the game work here.
We'll have to introduce fallback shaders. (It's such a drag…)
Fallback shaders - I just messed with that and got good results with the GeForce 210, but Intel and AMD Radeon won't render properly. It also did not help a Radeon 9800 or GeForce 7300GT to even start the game. T1M-Shaders_GLSL-100-120-130-330.zip

Edit: What I read is that a proper fallback from GLSL v330 to v130 cannot be achieved solely by changing the external shader files, because v130's lack of "layout(location = something)" must be handled in the source code with glBindAttribLocation; see the sketch below.
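To make that last point concrete, here is a rough, hypothetical sketch of what binding attribute locations in the source code could look like under GLSL 130, where the layout(location = N) qualifier is unavailable. The attribute names are placeholders, not the actual names used by the game's shaders:

```c
/* Hypothetical sketch: with GLSL 130 there is no layout(location = N),
 * so attribute slots must be fixed with glBindAttribLocation before
 * linking. Attribute names here are illustrative placeholders. */
GLuint link_program_gl130(GLuint vertex_shader, GLuint fragment_shader)
{
    GLuint program = glCreateProgram();
    glAttachShader(program, vertex_shader);
    glAttachShader(program, fragment_shader);

    /* These calls only take effect at the next glLinkProgram. */
    glBindAttribLocation(program, 0, "inPosition");
    glBindAttribLocation(program, 1, "inTexCoords");
    glBindAttribLocation(program, 2, "inColor");

    glLinkProgram(program);

    GLint ok = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &ok);
    if (!ok) {
        glDeleteProgram(program);
        return 0;
    }
    return program;
}
```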
The other thing is that the whole effort won't really matter unless the renderer gets a framerate boost; there is not much use in letting systems run this game without any chance of a playable framerate. I am a little mystified in that regard: for example, with an E-350 APU / Radeon HD6310 I can actually maintain 30 FPS everywhere when running windowed at the 640x480 in-game setting, but make the resolution setting a little higher or go fullscreen and the framerate takes a nosedive. The actually rendered window canvas is never the stated 640x480, which I would recognize, but measures 958x718.
This is just for consideration, I am already a happy user of v2.7 as it is.
No offence but are we sure we want to support niche GPUs like that?
BTW, it works perfectly on OGL 2.1 with just cosmetic modifications to the shaders. The only problem seems to be that the executable crashes without an OGL 3.0 context. Right now this can be worked around by setting MESA_GL_VERSION_OVERRIDE=3.3. Note that it runs with full hardware support and is not software-emulated in this case.
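If it helps, below is a minimal sketch of requesting a lower context version explicitly instead of depending on a 3.0 default; it assumes the window and context are created through SDL2, which is an assumption of this sketch rather than a statement about the actual code:

```c
/* Minimal sketch (assuming SDL2): explicitly request an OpenGL 2.1
 * context instead of relying on whatever default the driver hands out,
 * so that machines without GL 3.0 don't crash at context creation. */
#include <SDL2/SDL.h>

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        return 1;
    }

    /* Attributes must be set before the window is created. */
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);

    SDL_Window *window = SDL_CreateWindow(
        "Tomb1Main", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        640, 480, SDL_WINDOW_OPENGL);
    if (window == NULL) {
        return 1;
    }

    /* Returns NULL if even a 2.1 context is unavailable. */
    SDL_GLContext context = SDL_GL_CreateContext(window);
    if (context == NULL) {
        return 1;
    }

    /* ... renderer setup would go here ... */

    SDL_GL_DeleteContext(context);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```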
As we're drawing close to releasing 2.14, I'd like us to tackle this as part of the upcoming 2.15. Does anyone feel up to the task? CC @walkawayy / @vvs- / @lahm86
I can submit a PR with changes to the shaders if nobody objects.
I didn't need any other changes on my side, but I don't know about others.
> I can submit a PR with changes to the shaders if nobody objects.
> I didn't need any other changes on my side, but I don't know about others.
Go ahead! That would be great. This is out of my league. Maybe we can post a build for people to test with to see how it performs on different hardware.
BlackWolfTR had a user report that they can't open the custom game Into the Realm of Darkness. Here's the error:
Any further attempts at tinkering with the shaders should really supply a way to load a few variants rather than trying to come up with a one-size-fits-all solution.
That's some Intel driver incompatibility on some Windows version. I don't even have that hardware to test on.
Implementing a new graphical framework to differentiate driver implementations at runtime would be a huge task, IMHO.
BTW, something similar was implemented in OpenLara by preprocessing shaders at runtime, but you @rr- were against it AFAIK.
Preprocessing how?
Well, it has external "template" shader files which get their text substituted before finally compiling them. In particular, it changes the #version directive on the fly. This is a classical macro preprocessor engine, like e.g. m4 or cpp, but not that advanced of course.
This is not actually hardcoding the shaders, but they are still not "real", complete shaders; they depend on preprocessing logic in the engine.
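As a toy illustration of that kind of substitution (not OpenLara's actual code; the helper name is made up), the engine could rewrite the template's first line to the desired #version before handing the text to the compiler:

```c
/* Toy sketch: replace the placeholder first line of a shader template
 * with a concrete #version directive. Caller frees the result. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *preprocess_shader(const char *template_src, int glsl_version)
{
    const char *body = strchr(template_src, '\n');
    if (body == NULL) {
        return NULL;
    }
    body++; /* skip past the placeholder line */

    char header[32];
    snprintf(header, sizeof(header), "#version %d\n", glsl_version);

    size_t size = strlen(header) + strlen(body) + 1;
    char *out = malloc(size);
    if (out != NULL) {
        snprintf(out, size, "%s%s", header, body);
    }
    return out;
}
```

Calling preprocess_shader(src, 130) on a template whose first line is a placeholder would then yield a source starting with #version 130.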
I'm fine with such a mechanism, as long as we do try to load various shader versions in the hope that the GPU will kindly deem one of them suitable for compilation.
In that Intel case above, I suspect that merely changing #version to 3.1 would be enough.