
investigate: strategies for adjusting to environments getting sub-60 fps

getify opened this issue Feb 05 '16 · 23 comments

Rather than doing artificial timing (e.g., with timers), the game uses the framerate (via requestAnimationFrame(..)) for all its timings, with an assumed target of 60fps. Luckily, most good devices seem to run the game very near that already. But old devices, or those without hardware acceleration enabled by default (like Linux desktops!), routinely run at or below 45fps, which makes current gameplay on those devices quite painful.

Note: Obviously, further optimizing the game's code is a good strategy all around. However, this thread assumes all of that has already been tackled, and that there may still be environments where even the best version of the code runs slower than desired. In those cases, the game should have some strategy for adjusting itself.

Should the game track its own FPS (beyond dev/debugging -- tracked in production, just not displayed)? Should it adjust itself if it determines that a device is running substantially under (or over?) the target 60fps? Should the game refuse to run if the effective fps is just too far below some reasonable threshold (like 20fps)?

The game could monitor fps, round to the nearest 15fps increment (15, 30, 45, 60, etc.), and then scale its timings accordingly. Or, if the fps is at or below 45, it could scale down to 30fps artificially and just drop (i.e., not draw) every other frame of its normal timings.
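For illustration, the monitoring piece might be as simple as a rolling window over requestAnimationFrame timestamps (just a sketch, with made-up names, not actual game code):

var frameTimes = []

function trackFPS(now) {
  // rolling window of the last ~60 frame timestamps
  frameTimes.push(now)
  if (frameTimes.length > 60) frameTimes.shift()
  if (frameTimes.length < 2) return 60 // assume the target until we have samples

  var elapsedMs = now - frameTimes[0]
  return ((frameTimes.length - 1) * 1000) / elapsedMs
}

requestAnimationFrame(function frame(now) {
  var fps = trackFPS(now)
  // ...refuse to run below ~20fps, snap to a 15fps increment, etc.
  requestAnimationFrame(frame)
})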

Or, it could detect an fps below some threshold (like 50) and suggest the user pick a simple-graphics mode that uses lower-quality graphics.

Investigate all these possible strategies (and look for others?) and test them out on old devices (like the iPhone 4, for example).

getify · Feb 05 '16

Even with a supposedly hardware-accelerated 2D canvas, drawing sprites in WebGL is still massively faster, so that's an optimization you could still do.

petkaantonov · Feb 05 '16

Take a cue from streaming video. If you can slightly degrade presentation and improve the experience, you should do so. It's going to have to be on a game-by-game basis, though. Some games rely on minute details for important game mechanics; in those games, players may prefer more detailed graphics over higher frame rates. Then again, if the game is more "twitch"-based, framerate will matter more.

But for the most part, the gaming industry at large has let users change quality settings for a better experience for decades. Whether or not you should automate that is debatable. Any framerate checks will add some minor overhead, as will adjusting quality on the fly.

benlesh · Feb 05 '16

I had this problem with the game. But when I resized the canvas, it could run at 60fps even with no hardware acceleration. So you could check the game's framerate just on the menu screen and then readjust the size, like to half.
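Something like this (just a sketch, assuming a canvas with a hypothetical "game" id): keep the on-screen CSS size, but halve the backing-store resolution the browser has to fill:

var canvas = document.getElementById("game")

// Lock the on-screen (CSS) size to what it already is...
canvas.style.width = canvas.clientWidth + "px"
canvas.style.height = canvas.clientHeight + "px"

// ...then halve the number of pixels actually rendered
canvas.width = Math.floor(canvas.clientWidth / 2)
canvas.height = Math.floor(canvas.clientHeight / 2)

// Existing drawing code can keep its coordinates if the
// context is scaled down to match
var ctx = canvas.getContext("2d")
ctx.scale(0.5, 0.5)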

pixelgrid · Feb 05 '16

drawing sprites in WebGL is still massively faster, so that's an optimization you could still do.

I'm already drawing the SVGs to off-screen canvases first, at load time (pre-caching that work), and then only using those canvases for the real-time drawing. That did indeed result in a huge speedup for Safari engines (not as noticeably different for Chrome).
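The pattern is basically this (a simplified sketch with made-up names like `cachedCloudCanvas`, not the actual game code):

// At load time: rasterize each SVG image once into an off-screen canvas
function preRender(svgImg, width, height) {
  var offscreen = document.createElement("canvas")
  offscreen.width = width
  offscreen.height = height
  offscreen.getContext("2d").drawImage(svgImg, 0, 0, width, height)
  return offscreen
}

// In the render loop: blit the cached canvas (cheap) instead of
// re-rasterizing the SVG (expensive) on every frame
ctx.drawImage(cachedCloudCanvas, x, y)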

I do still have one case where that's not happening yet, and have #7 filed to track fixing it.

getify · Feb 05 '16

So you could check the game's framerate just on the menu screen and then readjust the size, like to half.

The paradox is that you can't tell what framerate you're going to get with complex drawing until... you try to do the complex drawing. So I can't just detect at the beginning and decide. It has to be something figured out as they actually try to play.

So I'd need some sort of more subtle/intricate UX to handle the selection/change once they'd already tried to play at least once.

getify · Feb 05 '16

If you can slightly degrade presentation and improve the experience, you should do so.

Yeah, this seems like a nice option. In practice, with a game, I think this will be quite complex. The first complexity is maintaining a "low quality" version of every single piece of art. But the more complex part would be all the forked code paths that skip various animations and subtle effects like transparency or shadows.

I'd guess this kind of strategy might mean a 30% increase in code just to support it. So it's not a strategy I want to pursue experimentally. IOW, I want to be really sure it's the only/best way first.

For example, I could much more easily just "drop frames" of animation (making it choppier) using that fps math I was alluding to. That's a different kind of quality degradation. I want to investigate whether it's acceptable before I pick it and make such sweeping (pun intended!!) code changes.
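To illustrate (just a sketch, not actual game code -- assume `halfRate` gets flipped on by whatever fps detection is in play, and `updateGame`/`drawGame` stand in for the real loop):

var frameCount = 0
var halfRate = false // flipped on when measured fps dips to ~45 or below

requestAnimationFrame(function frame() {
  frameCount++

  // In half-rate mode, skip every other rAF callback entirely...
  if (!halfRate || frameCount % 2 === 0) {
    // ...and advance the timings two steps per painted frame,
    // so overall game speed stays the same, just choppier
    updateGame(halfRate ? 2 : 1)
    drawGame()
  }

  requestAnimationFrame(frame)
})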

getify · Feb 05 '16

Another option is that the game could default to NOT tracking framerate, but let the user pick an option (after having played at least once) that says something like "adjust game quality to my device", which turns on FPS tracking logic and does the adjustments.

That way users aren't paying the penalty to track FPS if their device is giving them acceptably good perf (for whatever that means to them).

getify · Feb 05 '16

Take a cue from streaming video

Speaking of video, the experience most people are used to now is picking the quality of the streaming feed themselves. In other cases, the player picks a stream for them. But in both cases, the selection is based not so much on device capability as on the network. There's a difference here, I suspect... when you blame the network, it feels OK. But if you have to blame the device for being crappy, that might feel worse.

The thing I don't like about an option like "Use simple/low quality graphics" is that it signals to the user that the game can't keep up on their bad device. It's like making it obvious that their device sucks, and that somehow the blame lies on them for getting a crappier experience in the game.

Saying "adjust quality to my device" reframes (pun intended!) the situation for the user. It's like "hey game, please optimize this for my experience, however you need to." Makes it seem like a helpful assistance rather than blaming.

But then again, if they pick that kind of option and all of a sudden the graphics change quite noticeably and animations go away, they may feel like they got "cheated". So I think the adjustments the game makes in such a case would need to be of the more subtle kind, like dropping frames, rather than using different art and disabling animations.

getify · Feb 05 '16

One thing I noticed is that gameState.speedRatio is fixed. Moreover, runPlaying and runPlayLeaving don't compute (or read) the delta time between frames. This is an issue because it makes your game FPS-dependent.

HTML5 games usually follow this pattern:

// (`entity` here stands for any moving game object with x/y position
// and dx/dy velocity.)
function tick(dt) {
  //          ^^
  //          Elapsed time since the previous frame, in seconds.
  //          Usually clamped to around 1/30 so a long pause
  //          doesn't produce a huge jump.

  // Update positions
  //  - `dx` and `dy` depend on viewport size
  //  - `dt` depends on elapsed time between frames
  entity.x += entity.dx * dt
  entity.y += entity.dy * dt

  // etc...
  // You get the idea.
}

// Wraps `tick` to compute the delta time between frames,
// then keeps scheduling itself via `requestAnimationFrame`.
function raf(tick) {
  var last = performance.now()
  requestAnimationFrame(function frame(now) {
    var dt = Math.min((now - last) / 1000, 1 / 30) // seconds, clamped
    last = now
    tick(dt)
    requestAnimationFrame(frame)
  })
}

raf(tick)

ooflorent · Feb 05 '16

I'm already drawing the SVGs to off-screen canvases first, at load time (pre-caching that work), and then only using those canvases for the real-time drawing. That did indeed result in a huge speedup for Safari engines (not as noticeably different for Chrome).

Yes, I wouldn't expect anything less, but drawImage-ing from an off-screen canvas is still much slower than WebGL, even if the off-screen and on-screen canvases are hardware accelerated (which they might not be if they're smaller than 256x256).

petkaantonov · Feb 05 '16

This is an issue because it makes your game FPS-dependent.

I intentionally made the game timing FPS-dependent rather than using variable frame rates, so that the speed and smoothness of the game stay pretty consistent. I don't like that, with variable timing, a pause makes the plane jump a hundred pixels because you dropped a paint frame but kept the timing the same.

One thing I noticed is that gameState.speedRatio is fixed

Yes, again, this is on purpose (for now), because I calibrated all the other "speeds" in the game to how it feels on my screen, which is ~1200 pixels wide. So if your screen is a different size than that, I want your speed ratio to keep everything relatively the same.

What I'm contemplating is an additional ratio added to the mix, snapped at 0.25 for 15fps devices, 0.5 for 30fps, 0.75 for 45fps, and 1.0 for 60fps devices. Then all speeds would be scaled by that ratio, so that even on a slower-framerate device you'd still get the same relative speeds, just with choppier animations.
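In sketch form (illustrative names only -- and note the per-frame movement has to scale inversely: half the frames means each frame covers twice the distance):

// Snap an observed fps to the nearest 15fps increment, as a 0..1 ratio
function fpsRatio(observedFps) {
  var snapped = Math.round(observedFps / 15) * 15
  snapped = Math.max(15, Math.min(60, snapped))
  return snapped / 60 // 0.25, 0.5, 0.75, or 1.0
}

var ratio = fpsRatio(measuredFps)

// Per-frame movement grows as fps drops (same pixels-per-second):
var step = baseStepPerFrame / ratio

// Frame-count durations shrink as fps drops (same wall-clock length):
var animFrames = Math.round(baseAnimFrames * ratio)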

getify · Feb 05 '16

still much slower than WebGL

Can you explain what you mean by using WebGL here? I'm not doing any 3D, so I don't know why involving the extra complexities of a WebGL context would help. Is it more hardware-accelerated than a 2D canvas? That wasn't what my research showed, but I may be incorrect.

which they might not be if they're smaller than 256x256

I've heard this before, but I'm not really sure I'm noticing that difference. I could refactor so all my off-screen canvases have a floor dimension of 256x256, and it may or may not speed things up. But it's not clear I'd actually benefit much from this optimization.
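If I tried it, it'd amount to something like this when allocating each off-screen canvas (a sketch with made-up names; whether the 256x256 threshold actually triggers GPU backing varies by browser):

// Allocate off-screen canvases with a 256x256 floor, on the theory
// that larger backing stores are more likely to be GPU-backed
var offscreen = document.createElement("canvas")
offscreen.width = Math.max(256, spriteWidth)
offscreen.height = Math.max(256, spriteHeight)
// ...and still only draw into (and blit from) the
// spriteWidth x spriteHeight region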

Again, on most tested devices I'm already getting 60fps pretty easily, even on retina iPhone 6's at 3x density. On a non-hardware-accelerated device (like an iPhone 4), it gets around 30-40fps, which is honestly pretty decent for such a low-powered device (relatively speaking).

This discussion thread is meant to contemplate strategies for adjusting when those rarer, slower-FPS devices are encountered -- not to rethink the entire optimization strategy -- since the game already achieves 60fps in most cases.

getify · Feb 05 '16

@getify,

Running simulations independently of framerate should be part of your strategy for adjusting the play experience to lower-powered devices. If the simulation runs twice as slow on a 30 FPS device, that will drastically affect fundamental design decisions like travel times. Furthermore, spikes in framerate, up or down (unavoidable, due to external causes), will cause changes in timing.

To your original question of automatically and artificially reducing framerate: that's a wonderful idea, as spikes in CPU/simulation load (even if caused by other, background processes) will then have a larger buffer before impacting the framerate. That said, I wouldn't code to a locked set of framerates; if possible, use a multiple of the device's refresh rate instead. If that's not possible, common video framerates (48p, 24p, etc.) should align well with devices, since most are optimized for video as much as or more than they're optimized for games.
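(The browser doesn't expose the display's refresh rate directly, so it would have to be inferred from rAF timing. As a toy sketch, with all names made up, picking the largest sustainable even division of a ~60Hz refresh might look like:)

// Choose a target framerate that divides the refresh rate evenly,
// rather than an arbitrary locked value
function pickTargetFps(refreshHz, sustainedFps) {
  var divisors = [1, 2, 3, 4] // 60Hz -> 60, 30, 20, 15
  for (var i = 0; i < divisors.length; i++) {
    var candidate = refreshHz / divisors[i]
    if (sustainedFps >= candidate) return candidate
  }
  return refreshHz / 4
}

pickTargetFps(60, 42) // -> 30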

In the end, as long as the framerate is relatively consistent and allows for "headroom", you should be good. It's common practice.

Schoonology · Feb 05 '16

This whole problem seems like something where it would be most beneficial to examine prior art in the gaming industry. Not necessarily just web games, but all games.

There are just so many strategies:

  1. Allow users to toggle frame rate information and change their own settings (preferably mid-game).
  2. Run some frame rate and capability test before the game and automatically select the appropriate settings.
  3. Monitor rolling frame rate averages and automatically adjust settings "in the background" while the game is running.

And there are also so many factors here too...

  • Will adjusting settings incur the download of new resources? (Bigger/smaller models, hi/lo res images, etc).
  • What type of game is it?
  • How much does frame rate really matter?
  • What can be adjusted without hurting the player experience? What can't?

I'm unsure there's just one good answer or proper guidance for this particular problem. At least not generically.

benlesh · Feb 05 '16

these are great thoughts! thanks everyone, keep them coming!

getify · Feb 06 '16

@getify as for canvas vs WebGL: while canvas is generally fairly quick, you're not going to beat the GPU for certain things. Texture caching, scaling/transformation, clipping, path drawing, etc. are all going to be quicker in WebGL because you control how much work the GPU really has to do. Hardware-accelerated canvas ops have to be fairly generic, which means they're generally pretty slow. At the end of the day you're still interacting with your GPU; it just depends on whether you do it directly or through the canvas abstraction. Ultimately, being able to control things like draw order and GL state is a huge boon when you have large(ish) transparent textures.

As for "complexity" WebGL really isn't too bad and you should be able to replace your canvas ops fairly easily (assuming that your render code is sufficiently decoupled). Just involves doing more setup ahead of time. This being said, some things like text rendering are definitely more complex but can be sufficiently abstracted.

Also, WebGL may help you by giving you "acceptable" dynamic scaling of your assets at different levels of detail (mipmapping and the like).

Overall, canvas is "OK" speed-wise, but you'll generally do better with some custom WebGL work.

joekarl · Feb 16 '16

@joekarl if i'm already getting 59+ fps on modern devices, how much better could webgl possibly be? it's pretty unlikely to help much on the older devices, right?

getify · Feb 16 '16

I concur with @Schoonology. Decoupling the update cycle from your render cycle is a good way of handling your frame rate issues. If requestAnimationFrame throttles back to 45 or 30fps, you'll still get consistent gameplay (though graphics will get a bit choppier). Here's some pretty good prior art for how to do that: http://www.koonsolo.com/news/dewitters-gameloop/
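The core of that pattern is roughly this (a generic sketch, not cloud-sweeper's code; `updateGame`/`drawGame` are placeholders):

var STEP = 1000 / 60 // fixed simulation step, in ms
var last = performance.now()
var acc = 0

requestAnimationFrame(function frame(now) {
  acc += now - last
  last = now
  if (acc > 200) acc = 200 // clamp after long pauses ("spiral of death")

  // Run as many fixed-size updates as the elapsed time demands...
  while (acc >= STEP) {
    updateGame() // always advances game time by exactly STEP
    acc -= STEP
  }

  // ...then render once per rAF callback, however often that fires
  drawGame()

  requestAnimationFrame(frame)
})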

joekarl · Feb 16 '16

this process of "decoupling" as suggested seems like a way more complex refactoring than just adjusting the ratios based on observed framerate. the former is lots of code rewriting (and i'm not convinced it's better that way), while the latter just requires a few variables being multiplied by some value.

getify · Feb 16 '16

@getify so yeah, you're gonna be vsync'd with WebGL, so you won't get any better on modern hardware. Where you will get better is on machines with slow paths between the CPU and GPU, and with slow CPU performance. I would have to check, but I assume a lot of your graphics-side bottleneck on slow devices is due to CPU rendering of parts of your scene and then trying to combine those areas (especially where transparency is involved).

joekarl · Feb 16 '16

Welp, if you're going for the most compatibility, it looks like canvas is your option... Probably some basic options for image quality and toggling opacity will go a long way toward smoothing out issues on "slow" platforms.

joekarl · Feb 16 '16

+1 :)

getify · Feb 16 '16