Ambermoon.net
Minor improvements/fixes
This is WIP for the OpenGLES 3.0 target, as there is still an issue with the character selection screen (it works up to that point). Additionally, a few other fixes are included:
- Use `MaxBy()` instead of `OrderBy(...).LastOrDefault()`
- Changed the OpenGLES target from 2.0 to 3.0
- Fixed missing float casts in the shaders (the RPI4 would complain about them)
- Updated .gitignore to cover more file types and folders that should be ignored
- Added some more scripts to automate building on Windows (including one for building for Linux ARM64/OpenGLES)
The current state is:
- The game starts (in windowed mode - is that configurable?) - we do get a background warning that the current window is not a GLFW window, etc.
- The title screen shows correctly
- The Game Data version menu shows correctly, and we can select Original or Advanced
- The main menu shows up correctly, and I can select Start New Quest
- The character selection screen does NOT show correctly; various elements seem to be missing (or are rendered on an incorrect layer)
I will have a closer look at it soon.

Regarding the configuration: yes, you can change the start mode in `ambermoon.cfg`, which should be located in the Ambermoon.net project folder. You can change the resolution, full-screen mode and a bunch of other things there. Just make sure to modify it while the game is not running, as the game overwrites it on exit.
Btw, the character creator uses the same shader as the main menu (`TextureShader`), so I guess it is not a shader issue. However, there is a layering concept where you can specify a "display layer" for sprites and other render objects to control the z-order.
My guess would be that the depth precision isn't high enough in your case, so some elements are drawn in the wrong order.
You could try playing with the display layer values in the character creator and increasing the gaps between different elements.
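To illustrate why increasing the gaps might help, here is a quick sketch. It assumes, purely for illustration, that z is derived as display layer times a small factor (such as 0.00001) and that the depth buffer quantizes z linearly to n-bit integers - a simplification of what the GPU actually does:

```python
# Sketch: why small display-layer gaps can collide in a low-precision
# depth buffer. The linear z = layer * FACTOR model and the rounding
# quantization are simplifying assumptions, not the engine's exact math.

def quantize(z: float, bits: int) -> int:
    """Map a z value in [0, 1] to the nearest n-bit depth integer."""
    return round(z * (2 ** bits - 1))

FACTOR = 0.00001  # a small per-layer z offset, as an example

for bits in (16, 24):
    depths = [quantize(layer * FACTOR, bits) for layer in range(10)]
    collisions = len(depths) - len(set(depths))
    print(f"{bits}-bit buffer, layers 0-9 -> {depths} ({collisions} collisions)")
```

Under this model, adjacent layers land on the same 16-bit depth value (so their draw order is undefined), while a 24-bit buffer keeps them all distinct - which is why either widening the gaps or getting a deeper buffer could fix the layering.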
The game is designed for a 24-bit depth buffer. Maybe the RPi only supports less, so the precision isn't high enough.
As GLFW is not used, I guess the depth buffer size is not set to 24 bits. I would prefer to somehow make it work with GLFW.
You could test the following; maybe it already helps.
In the file `TextureShader`, inside the function `TextureVertexShader`, you will find a line where "z" is calculated. There is a factor of 0.00001f at the end. Try increasing this factor, e.g. to 0.00002f or even 0.0001f.
With a 16-bit depth buffer the precision is only about 0.000015f, so this might be the problem here. If you even have only an 8-bit depth buffer it would be worse still, and the precision would be about 0.0039f. So if the above values don't work, try a factor of 0.004f as well.
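Those precision figures can be checked with a quick arithmetic sketch (assuming the depth buffer quantizes z linearly over [0, 1], so the smallest distinguishable step is 1/(2^n - 1)):

```python
# Sketch: per-step precision of an n-bit depth buffer, assuming linear
# quantization of z over [0, 1]. This matches the 0.000015 (16-bit) and
# 0.0039 (8-bit) figures quoted in the discussion.

def depth_step(bits: int) -> float:
    """Smallest representable z difference for an n-bit depth buffer."""
    return 1.0 / (2 ** bits - 1)

print(f" 8-bit: {depth_step(8):.6f}")    # roughly 0.0039
print(f"16-bit: {depth_step(16):.8f}")   # roughly 0.000015
print(f"24-bit: {depth_step(24):.10f}")
```

So a z factor of 0.00001f sits below the 16-bit step size, which is why bumping it to 0.00002f (for 16-bit) or 0.004f (for 8-bit) would be the thing to try.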
Thanks, I'll try that and report back. For the record, the RPI can easily handle 8-, 16-, 24- and 32-bit output without a problem. I think the default is already 24-bit, and I'm using 32-bit by default in Amiberry, for example. But I'll check just in case.
Well, it is not only about what the RPi supports but also about the windowing system. In your case it is most likely an SDL window, and Silk.NET only used SDL to support Android/iOS, so it was intended for mobile usage. It is possible that the Silk.NET SDL backend explicitly sets the depth buffer size to something else, or that the SDL window default is lower than 24 bits. I can't tell for sure, but the mentioned test should give us an answer, even if it is "the depth buffer is not the problem". ;)
I had a look at the Silk.NET implementation. It should use the given depth buffer size by calling `SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24)`. It is also passed as 24 from my GameWindow implementation. According to https://wiki.libsdl.org/SDL2/SDL_GLattr, context creation should fail if the given minimum value (here 24) is not possible. So I would expect that the depth buffer of the context does indeed have 24 bits.
But I noticed there is also an SDL_GLES_DEPTH_SIZE attribute which is not used at all (https://community.khronos.org/t/opengl-es-depth-buffer-problem/62724/8). But maybe this is outdated (the link is from 2010), as I found examples where OpenGLES uses SDL_GL_DEPTH_SIZE (https://docs.tizen.org/application/native/guides/graphics/sdl-opengles/).
It would still be better to make GLFW work on the RPi, as the GLFW implementation of Silk.NET is much more robust.
Just tested changing that value in `TextureVertexShader` to 0.0001f, but it made no difference in the result.
Edit: I also tried 0.004f - same result. Perhaps then we can say that it's not an issue with the depth buffer :)
Also, the back-end used is not SDL but X11. I checked what `window.Native` contained on the RPI while it was running :)