Cannot seem to create a High DPI window in MacOSX
I'm trying to create a high DPI window on macOS, but I can't seem to get high DPI output. My window setup code looks like this:
-- imports used by this snippet:
import qualified SDL
import qualified Data.Text as Text
import qualified Linear as L
import qualified Linear.Affine as L

setupSDL :: IO SDL.Window
setupSDL =
  do SDL.initializeAll
     version <- SDL.version
     putStrLn $ "SDL Version: " ++ show version
     let windowConfig = SDL.WindowConfig
           { SDL.windowBorder       = True
           , SDL.windowHighDPI      = True
           , SDL.windowInputGrabbed = False
           , SDL.windowMode         = SDL.Windowed -- SDL.FullscreenDesktop
           , SDL.windowOpenGL       = Nothing
           , SDL.windowPosition     = SDL.Absolute (L.P $ L.V2 10 10)
           , SDL.windowResizable    = True
           , SDL.windowInitialSize  = L.V2 800 600
           }
     window <- SDL.createWindow (Text.pack "MyApp") windowConfig
     config <- SDL.getWindowConfig window
     putStrLn $ "Flags: " ++ show config
     return window
I'm launching the app from a bundle with the 'open' command in Terminal. My Info.plist includes:
<key>NSHighResolutionCapable</key> <true/>
@biglambda is this still an issue for you? For what it's worth, on MacOS 10.12.5 I have high DPI working even without the plist key. I can check, but I believe that while the window size doesn't change when I set windowHighDPI, the pixel dimensions of the GL surface it contains do double.
Hmmm... it is an issue, what version of the libraries etc are you using?
SDL 2.0.5. The latest version on homebrew. Using the haskell sdl2 library from this repo.
Can I see your initialization code?
Just using initializeAll. Nothing special. Using the following configuration for the window:
windowConfig :: WindowConfig
windowConfig = WindowConfig
  { windowBorder       = True
  , windowHighDPI      = True
  , windowInputGrabbed = False
  , windowMode         = Windowed
  , windowOpenGL       = Nothing
  , windowPosition     = Wherever
  , windowResizable    = True
  , windowInitialSize  = V2 800 600
  }
If I print out the dimensions returned by rendererViewport for that window, they will be a full 1600x1200.
Note that if I set windowHighDPI to False in the config above, those dimensions will be 800x600 instead. From that, plus the impression that the bilinear stretching of a texture looks nicer with windowHighDPI (if I get a moment I can take screenshots to compare), I conclude that the high DPI support is working.
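For reference, a minimal sketch of that check, assuming a renderer has already been created for the window (get comes from Data.StateVar, which the sdl2 bindings use for their settable attributes):
import qualified SDL
import Data.StateVar (get)

-- Sketch: print the viewport dimensions for a renderer attached to the window.
-- With windowHighDPI = True this is where I see the full 1600x1200 for an
-- 800x600 window; with False it reports 800x600.
printViewport :: SDL.Renderer -> IO ()
printViewport renderer = do
  viewport <- get (SDL.rendererViewport renderer)
  putStrLn $ "rendererViewport: " ++ show viewport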
@biglambda can you show a screenshot of your window? Perhaps you were in high DPI the whole time but, like #172, didn't know what to look for to see that it's working.

This is my startup code:
startInterface :: Point2 IntSpace -> IO InterfaceState
startInterface screenSize =
do SDL.initializeAll -- [SDL.InitEvents, SDL.InitVideo]
version <- SDL.version
putStrLn $ "SDL Version: " ++ show version
let windowConfig = SDL.WindowConfig
{ SDL.windowBorder = True
, SDL.windowHighDPI = True
, SDL.windowInputGrabbed = False
, SDL.windowMode = SDL.Windowed -- SDL.FullscreenDesktop
, SDL.windowOpenGL = Just SDL.defaultOpenGL
, SDL.windowPosition = SDL.Absolute (Point2 10 10)
, SDL.windowResizable = True
, SDL.windowInitialSize = V2 (fromIntegral . unISpace . unOrtho . pX $ screenSize)
(fromIntegral . unISpace . unOrtho . pY $ screenSize)
}
window <- SDL.createWindow (Text.pack "Semblance") windowConfig
config <- SDL.getWindowConfig window
putStrLn $ "Flags: " ++ show config
-------------------- Create Output Bitmap ------------
surface <- SDL.getWindowSurface window
bitmap <- makeBitmap screenSize surface
return $ InterfaceState window bitmap
Right, so your window title bar is in high DPI so I think the whole window is.
Your screenSize variable is, let's say, 800x600. But OS X will back that window with a drawable roughly twice that size. Your makeBitmap function should take that into account. Try doubling the size of the vector, e.g. makeBitmap (let V2 w h = screenSize in V2 (w*2) (h*2)) surface. If that looks better, then you can get the proper scalex and scaley like I did in #172.
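For example, a minimal sketch of getting those scale factors, assuming the window has already been created (the helper name getDpiScale is made up):
import qualified SDL
import SDL (V2(..))
import Data.StateVar (get)

-- Sketch: derive scalex/scaley from the drawable size vs. the logical window
-- size, as in #172. On a Retina display both factors come out as 2.0.
getDpiScale :: SDL.Window -> IO (Double, Double)
getDpiScale window = do
  V2 dw dh <- SDL.glGetDrawableSize window     -- backing framebuffer, in pixels
  V2 ww wh <- get (SDL.windowSize window)      -- logical window size, in points
  pure ( fromIntegral dw / fromIntegral ww
       , fromIntegral dh / fromIntegral wh )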
I finally have some time to work on this. So I'm not using Cairo, I have my own rasterizer. Essentially all I do is get the buffer from the SDL surface with this code:
makeBitmap :: Point2 IntSpace -> SDL.Surface -> IO Bitmap
makeBitmap size surface =
  do let width  = fromIntegral . unISpace . unOrtho . pX $ size
         height = fromIntegral . unISpace . unOrtho . pY $ size
     --let numPixels = width * height
     --ptr <- mallocBytes (fromIntegral numPixels * sizeOf (undefined :: CUInt))
     ptr <- castPtr <$> SDL.surfacePixels surface
     return Bitmap { bitW   = width
                   , bitH   = height
                   , bitPtr = ptr
                   }
I write pixel values directly to that buffer and then I update the display using this code:
updateDisplay :: StateT InterfaceState IO ()
updateDisplay =
  do window <- use interfaceWindow
     liftIO $ SDL.updateWindowSurface window
And I still get a low DPI output. Most frustrating thing ever :)
Sorry if I missed something or if this is a distraction, but I have what I believe is high DPI working in one of my programs, which I set up with the following:
let ogl = defaultOpenGL { glProfile = Core Debug 3 3 }
    cfg = defaultWindow { windowOpenGL      = Just ogl
                        , windowResizable   = True
                        , windowHighDPI     = True
                        , windowInitialSize = V2 640 480
                        }
Then in my main loop I can query the window with v2Cint <- get $ windowSize window, where v2Cint shows V2 640 480. Similarly, if I use v2Cint <- glGetDrawableSize window I get the size of the entire current framebuffer, which is at 2x (V2 1280 960). Without the windowHighDPI = True entry in cfg, both calls show V2 640 480.
@biglambda, you will get the lower resolution if you create the window with windowHighDPI = True and then call makeBitmap with the value you passed to windowInitialSize, or with the value from windowSize. You should call your makeBitmap function with the value from glGetDrawableSize instead.
Right, @biglambda, you need to make your bitmap at least twice the size; otherwise whatever smaller buffer you give it will get stretched to fill the canvas.
@schell, how do you get access to the 1280x960 framebuffer itself, if you want to write to it directly?
@biglambda, if you use SDL.Video.Renderer with an SDL.Video.Renderer.Texture and your goal is to fill the whole window, you need to create that texture from an SDL.Video.Renderer.Surface whose size comes from SDL.Video.OpenGL.glGetDrawableSize, then call SDL.Video.Renderer.copy renderer texture Nothing Nothing.
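Roughly like this, as a sketch (createRGBSurface taking a size and a PixelFormat is the API in recent sdl2 versions; error handling and reusing the surface/texture across frames are omitted):
import qualified SDL

-- Sketch: draw a full-window frame via a Surface sized to the drawable,
-- upload it to a Texture, and stretch-copy it to the window.
presentFrame :: SDL.Window -> SDL.Renderer -> (SDL.Surface -> IO ()) -> IO ()
presentFrame window renderer drawOn = do
  size    <- SDL.glGetDrawableSize window          -- e.g. V2 1600 1200 on Retina
  surface <- SDL.createRGBSurface size SDL.ARGB8888
  drawOn surface                                   -- write pixels into the surface
  texture <- SDL.createTextureFromSurface renderer surface
  SDL.copy renderer texture Nothing Nothing        -- Nothing/Nothing = whole window
  SDL.present renderer
  SDL.destroyTexture texture
  SDL.freeSurface surface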
If you use the pure OpenGL API, you need to create and fill an OpenGL texture with
foreign import ccall unsafe "glTexImage2D" glTexImage2D ::
GLenum -> GLint -> GLenum -> GLuint -> GLuint -> GLint -> GLenum -> GLenum -> CString -> IO ()
...
V2 w h <- glGetDrawableSize window
glTexImage2D GL_TEXTURE_2D 0 format (fromIntegral w) (fromIntegral h) 0 format GL_UNSIGNED_BYTE ptr
where format is GL_RGBA or GL_RGB, and ptr is a pointer to an array of w * h pixels in that format. Then you need to draw it to the window framebuffer with a shader.
Ok, thanks for that insight. I finally have it working. The secret so far has been to forget about using the surface from the window. Here are the SDL-specific functions I'm currently using:
startInterface :: Point2 IntSpace -> IO InterfaceState
startInterface screenSize =
  do SDL.initializeAll -- [SDL.InitEvents, SDL.InitVideo]
     version <- SDL.version
     putStrLn $ "SDL Version: " ++ show version
     let ogl = SDL.defaultOpenGL { SDL.glProfile = SDL.Core SDL.Normal 3 3 }
     let windowConfig = SDL.WindowConfig
           { SDL.windowBorder       = True
           , SDL.windowHighDPI      = True
           , SDL.windowInputGrabbed = False
           , SDL.windowMode         = SDL.Windowed -- SDL.FullscreenDesktop
           , SDL.windowOpenGL       = Just ogl
           , SDL.windowPosition     = SDL.Absolute (Point2 10 10)
           , SDL.windowResizable    = True
           , SDL.windowInitialSize  = V2 (fromIntegral . unISpace . unOrtho . pX $ screenSize)
                                         (fromIntegral . unISpace . unOrtho . pY $ screenSize)
           }
     window <- SDL.createWindow (Text.pack "Window Title") windowConfig
     let rendererConfig = SDL.RendererConfig
           { SDL.rendererType          = SDL.AcceleratedVSyncRenderer
           , SDL.rendererTargetTexture = True
           }
     renderer <- SDL.createRenderer window 0 rendererConfig
     return $ InterfaceState window renderer

updateDisplay :: (Bitmap -> IO ()) -> StateT InterfaceState IO ()
updateDisplay drawOn =
  do window   <- use interfaceWindow
     renderer <- use interfaceRenderer
     liftIO $ do (V2 width height) <- SDL.glGetDrawableSize window
                 --putStrLn $ "drawableSize: " ++ show (V2 width height)
                 texture  <- SDL.createTexture renderer SDL.ARGB8888 SDL.TextureAccessStreaming (V2 width height)
                 (ptr, _) <- SDL.lockTexture texture Nothing
                 bitmap   <- makeBitmap (fromIntegral width) (fromIntegral height) ptr
                 drawOn bitmap
                 SDL.unlockTexture texture
                 SDL.copy renderer texture Nothing Nothing
                 SDL.present renderer
I'm afraid, though, that this has an extra unnecessary copy operation compared to using SDL.updateWindowSurface. What do you think?
@biglambda, if you create the SDL.Renderer with SDL.rendererType = SDL.AcceleratedVSyncRenderer and have suitable drivers, you will get a hardware-accelerated renderer. You can check whether you got acceleration with SDL.getRendererInfo after creation (a minimal check is sketched after this list); SDL.rendererInfoName will be one of:
- "direct3d" (it is DirectX9);
- "direct3d11" (DirectX11);
- "opengl";
- "opengles";
- "opengles2";
- "PSP";
- "software".
If your renderer is not software, SDL.copy (and the other functions that work with SDL.Renderer) is very fast; it is essentially a shader call. All SDL.Textures are just numbers, "names" for textures stored in video memory. Likewise, SDL.present renderer is just a flip of textures in video memory.
On the other hand, SDL.Surface lives in RAM. So we have three classes of slower functions:
- CPU -> CPU (all SDL.Surface -> SDL.Surface functions);
- CPU -> GPU (SDL.updateWindowSurface, SDL.lockTexture followed by writing, SDL.createTextureFromSurface);
- GPU -> CPU (SDL.getWindowSurface, SDL.lockTexture followed by reading).
I think you should always use SDL.Renderer and SDL.Texture when possible. Use SDL.Surface to load pictures from .jpg, .png or .bmp files, then store them in separate SDL.Textures, or pack them into a texture atlas held in a single SDL.Texture.
For example, to create and use an atlas you can (a condensed sketch follows this list):
1) create a big SDL.Texture with the SDL.TextureAccessTarget flag for the atlas;
2) set it as the SDL.rendererRenderTarget (then all SDL.copy calls will copy into the atlas);
3) load an SDL.Surface from .jpg, .png or .bmp using sdl2-image (for pictures), or from .ttf using sdl2-ttf (for letters);
4) call SDL.createTextureFromSurface to create an SDL.Texture of the same size;
5) call SDL.copy to copy the texture from 4) to the desired place in the atlas; you don't need SDL.present renderer here;
6) go back to 3) if needed;
7) set SDL.rendererRenderTarget $= Nothing (then all SDL.copy calls will copy to the window);
8) in updateDisplay draw textures at any position with any rotation using SDL.copyEx (you can draw one texture, e.g. a letter, many times), then call SDL.present renderer.
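Here is a condensed sketch of that recipe (SDL.Image.load comes from the sdl2-image package; the 1024x1024 atlas size and the (0,0) destination are arbitrary example values, and error handling is omitted):
import qualified SDL
import qualified SDL.Image as Image       -- sdl2-image
import SDL (V2(..), Point(P), Rectangle(..))
import Data.StateVar (($=))
import Foreign.C.Types (CInt)

-- Sketch: build one render-target texture, blit a loaded image into it once,
-- then draw from it every frame.
buildAtlas :: SDL.Renderer -> FilePath -> IO SDL.Texture
buildAtlas renderer path = do
  atlas <- SDL.createTexture renderer SDL.ARGB8888 SDL.TextureAccessTarget (V2 1024 1024)
  SDL.rendererRenderTarget renderer $= Just atlas      -- copies now land in the atlas
  surf  <- Image.load path                             -- step 3
  size  <- SDL.surfaceDimensions surf
  tex   <- SDL.createTextureFromSurface renderer surf  -- step 4
  SDL.copy renderer tex Nothing (Just (Rectangle (P (V2 0 0)) size))  -- step 5
  SDL.destroyTexture tex
  SDL.freeSurface surf
  SDL.rendererRenderTarget renderer $= Nothing         -- step 7: back to the window
  pure atlas

-- Step 8, per frame: draw one region of the atlas to one place in the window.
-- Call SDL.present renderer once per frame, after all the copies.
drawFromAtlas :: SDL.Renderer -> SDL.Texture
              -> Rectangle CInt -> Rectangle CInt -> IO ()
drawFromAtlas renderer atlas src dst =
  SDL.copyEx renderer atlas (Just src) (Just dst) 0 Nothing (V2 False False)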
You can also find some information related to your question here.
Ok, interesting. The last step of my drawOn function is actually an OpenCL kernel that fills a buffer with a value for every pixel in the window. So unfortunately it seems I currently copy that buffer back to host memory and then back to video memory again. Is there a way to allocate a texture and then refer to it as an OpenCL buffer?
@biglambda, you should try the following (a consolidated sketch of steps 1-3 follows this list):
1) Create the SDL.Renderer with SDL.rendererInfoName = "opengles2" or "opengl"; to ensure this before creation you can set the following hint:
SDL.setHintWithPriority SDL.OverridePriority SDL.HintRenderDriver SDL.OpenGLES2
2) Create an empty SDL.Texture with the size from SDL.glGetDrawableSize.
3) Find out the OpenGL "name" of the texture from the previous step:
type GLint = Int32
type GLenum = Word32
pattern GL_TEXTURE_BINDING_2D :: forall a. (Num a, Eq a) => a
pattern GL_TEXTURE_BINDING_2D = 0x8069
pattern GL_TEXTURE_2D :: forall a. (Num a, Eq a) => a
pattern GL_TEXTURE_2D = 0x0DE1
foreign import ccall unsafe "glGetIntegerv" glGetIntegerv :: GLenum -> Ptr GLint -> IO ()
...
SDL.glBindTexture texture
glName <- alloca (\p -> glGetIntegerv GL_TEXTURE_BINDING_2D p >> peek p)
SDL.glUnbindTexture texture
...
4) Use clCreateFromGLTexture2D with texture_target = GL_TEXTURE_2D, miplevel = 0, texture = fromIntegral glName to create an OpenCL image object.
5) In your updateDisplay function, draw into this OpenCL image object, then do SDL.copy renderer texture Nothing Nothing >> SDL.present renderer; you don't need to allocate textures or any other memory during this per-frame drawing step.
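To make the ordering concrete, here is a sketch of steps 1-3 wired together (the TextureAccessStreaming flag and the plain GLenum constant instead of the pattern synonym are my assumptions; everything else follows the steps above):
import qualified SDL
import Foreign (alloca, peek, Ptr)
import Data.Int (Int32)
import Data.Word (Word32)

type GLint  = Int32
type GLenum = Word32

foreign import ccall unsafe "glGetIntegerv"
  glGetIntegerv :: GLenum -> Ptr GLint -> IO ()

-- 0x8069 = GL_TEXTURE_BINDING_2D
glTextureBinding2D :: GLenum
glTextureBinding2D = 0x8069

-- Sketch of steps 1-3: force a GL-backed renderer, make an empty texture at
-- drawable size, and recover its OpenGL "name" for clCreateFromGLTexture2D.
setupInteropTexture :: SDL.Window -> IO (SDL.Renderer, SDL.Texture, GLint)
setupInteropTexture window = do
  _        <- SDL.setHintWithPriority SDL.OverridePriority SDL.HintRenderDriver SDL.OpenGLES2
  renderer <- SDL.createRenderer window (-1) SDL.defaultRenderer
  size     <- SDL.glGetDrawableSize window
  texture  <- SDL.createTexture renderer SDL.ARGB8888 SDL.TextureAccessStreaming size
  SDL.glBindTexture texture
  glName   <- alloca $ \p -> glGetIntegerv glTextureBinding2D p >> peek p
  SDL.glUnbindTexture texture
  pure (renderer, texture, glName)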
Thanks a lot, I'm working on trying to implement this, I'll let you know how it goes.
Ok, I think I'm close to having this working. Currently I'm getting an error from OpenCL when I try to run clCreateFromGLTexture2D:
glName: 1
"[CL_INVALID_GL_OBJECT] : OpenCL Error : Bad texture object"
"[CL_INVALID_GL_OBJECT] : OpenCL Error : Image creation from a GL object failed."
Not sure about the right approach to debug this.
In this old discussion someone described the same behavior in the SDK examples, caused by the driver. Are you able to run some small existing OpenGL-OpenCL interop example?
Ok, thanks for pointing me in the right direction.
I think page 10 of this document http://sa10.idav.ucdavis.edu/docs/sa10-dg-opencl-gl-interop.pdf gets into how to do OpenGL-OpenCL interop on a Mac.
It looks like a few functions are needed that, I think, don't have bindings in the OpenCL package I'm currently using, namely CGLGetCurrentContext and CGLGetShareGroup. It looks like @acowley has been down this road:
https://gist.github.com/acowley/cdac93e3b580b65bd7d2#file-clglinterop-hs
I'm going to see if I can get some of this working in my code.
Indeed I do OpenGL-OpenCL interop on macOS all the time. Let me know if you run into any trouble, but the code you linked should get you going.
Hi, so what I decided to do was modify @acowley's CLUtil package to include TextureObject parameters, and I included an example program that uses SDL to display his QuasiCrystal kernel via the CL-GL interop. You can find the forked repository here: https://github.com/biglambda/CLUtil I wasn't completely sure, but it seems to run a lot faster than the buffer-copying version.
@biglambda, in your example, why do you create the texture every frame? You can create it once at the beginning.
@biglambda, more precisely, you should create it once at the beginning and recreate it on a window resize event.
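For example, a rough sketch of that (the event-payload pattern match uses sdl2's EventPayload type; where you stash the texture is up to your state handling):
import qualified SDL

-- Sketch: keep one texture and only recreate it when the window is resized.
-- Returns the (possibly new) texture to store back in your state.
refreshTextureOnResize :: SDL.Window -> SDL.Renderer -> SDL.Texture -> [SDL.Event] -> IO SDL.Texture
refreshTextureOnResize window renderer texture events
  | any isResize (map SDL.eventPayload events) = do
      SDL.destroyTexture texture
      size <- SDL.glGetDrawableSize window
      SDL.createTexture renderer SDL.ARGB8888 SDL.TextureAccessStreaming size
  | otherwise = pure texture
  where
    isResize (SDL.WindowResizedEvent _) = True
    isResize _                          = False
You would call it once per frame with the events you already poll, e.g. texture' <- refreshTextureOnResize window renderer texture =<< SDL.pollEvents.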
Cool, I just pushed a version that does that.
Something I noticed is that this only runs on the CPU so far.
If you change line 170 in my example program from:
clState <- initFromGL CL_DEVICE_TYPE_ALL
to:
clState <- initFromGL CL_DEVICE_TYPE_GPU
For my first device, an Intel Iris Pro, I get:
[CL_INVALID_DEVICE] : OpenCL Error : clCreateCommandQueue failed: Unable to locate device 0x1024500 in context 0x7fc09516bb30.
TestCLGL: CL_INVALID_DEVICE
For my second device, an AMD Radeon R9 M370X Compute Engine (which my system report lists as driving the display), I get black output. If I switch back to the CPU it works fine.