
home page should suggest filing compiler bugs for performance issues

Open nlewycky opened this issue 6 years ago • 5 comments

  • If the unsafe block is sound, but can be converted to safe code without losing performance, that's a great thing to do! This is often the case thanks to Rust adding new safe abstractions and improving the optimizer since the code was originally written.
  • It's possible that unsafe can't be eliminated without a performance loss. Unfortunate, but it will happen some of the time. Note that benchmarks must actually be used to back up any performance loss claims. There are already many cases where switching from unsafe to safe alternatives has increased performance, so simply guessing that performance will regress is not enough.

This is great, but it's important to let the compiler developers know what things are out there in the wild where unsafe made real code faster. If nobody tells them, they'll never know.

At the same time, you don't necessarily want the compiler team to receive a lot of reports about the same thing, or things where there's a known-good-reason that the unsafe can be faster than the safe code because they really are doing fundamentally different things under the hood. I don't know how to word this advice on the homepage to strike the right balance, but I think if you're already encouraging people to write benchmarks, they can also look for existing compiler bugs and if there isn't one, file a new bug report with their benchmark.

nlewycky avatar Aug 30 '19 19:08 nlewycky

Yeah, currently it's not really clear what to do about things we could not convert to safe code. We should probably set up a file in the repo to document such things.

Shnatsel avatar Aug 31 '19 14:08 Shnatsel

The main problem is that it's not always a bug.

Zero-initializing memory really is slower than leaving it uninitialized.

Bounds-checked indexing is just slower than not checking the index.

That's just life.
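A minimal sketch of the bounds-check point (function names are illustrative, not from any real crate):

```rust
// Both functions are safe; the indexed version performs a bounds check
// per access that the optimizer must prove redundant, while the
// iterator version never produces an index to check in the first place.
fn sum_indexed(data: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..data.len() {
        total += data[i]; // bounds check here (often, but not always, elided)
    }
    total
}

fn sum_iter(data: &[u64]) -> u64 {
    data.iter().sum() // no bounds checks at all
}

fn main() {
    let data = [1u64, 2, 3];
    assert_eq!(sum_indexed(&data), sum_iter(&data));
    println!("{}", sum_iter(&data)); // 6
}
```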

Lokathor avatar Aug 31 '19 14:08 Lokathor

We can usually make safe abstractions that remove that overhead. It's just that for some use cases they're currently missing; see the Read trait requiring either an unbounded Vec to write to or an initialized fixed-size slice, with no uninitialized-but-fixed-size option.
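A sketch of that Read limitation in safe code (the helper name here is illustrative):

```rust
use std::io::Read;

// Safe code must hand `Read::read` an initialized `&mut [u8]`, so a
// fixed-size buffer has to be zeroed first even though `read_exact`
// will overwrite every byte on success.
fn read_n(mut src: impl Read, n: usize) -> std::io::Result<Vec<u8>> {
    let mut buf = vec![0u8; n]; // zero-initialization the Read API forces on us
    src.read_exact(&mut buf)?;
    Ok(buf)
}

fn main() -> std::io::Result<()> {
    let out = read_n(&b"hello world"[..], 5)?;
    assert_eq!(&out, b"hello");
    println!("{}", String::from_utf8_lossy(&out));
    Ok(())
}
```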

Shnatsel avatar Aug 31 '19 15:08 Shnatsel

I agree it'd be nice if we didn't file bugs for issues the compiler team can't fix, but having worked on a compiler for years, I found it astonishing what things people simply wouldn't bother to tell the compiler team.

Even zero-initialization can be removed if the memory is subsequently written to. Either the zero-initialization is necessary (you would have written the zeroes manually anyway), it consists of dead stores (to be optimized away), or part of it is unnecessary because the allocation could be shrunk. The challenge is when the compiler can't see the code which uses the zero-initialized allocation, but even then that's something the compiler team can work on (say, using link-time optimization to give the compiler more context while optimizing).
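A sketch of the dead-store case (illustrative; in practice `vec![0u8; n]` may lower to `calloc`, but the principle is the same):

```rust
// `vec![0u8; n]` zeroes every byte, but the loop immediately overwrites
// all of them, so the zero stores are dead and an optimizer that can
// see both the allocation and its use is free to remove the zeroing.
fn counting_bytes(n: usize) -> Vec<u8> {
    let mut buf = vec![0u8; n]; // candidate dead stores
    for (i, b) in buf.iter_mut().enumerate() {
        *b = i as u8; // overwrites every zero before it is ever read
    }
    buf
}

fn main() {
    assert_eq!(counting_bytes(4), vec![0, 1, 2, 3]);
    println!("{:?}", counting_bytes(4));
}
```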

I want to say something along the lines of "Your default action should be to file an optimization bug with the compiler, except when, in your judgement, the compiler couldn't do anything about this". I'm not sure that strikes the right balance.

nlewycky avatar Aug 31 '19 15:08 nlewycky

As another former compiler engineer, I want to strongly endorse the idea that there are a remarkable number of people with slow code that never file bugs :-(

alex avatar Aug 31 '19 15:08 alex