feat(linter): no useless assignment
Hi @camc314, my PR didn’t pass `just r`. When I run the local CI check, it generates a bunch of snapshot updates. I’m not sure if that’s expected.
It'd depend on the specific updates to the snapshots, but that's probably fine as long as the only snapshot it's trying to update is the one for your particular rule.
You'll also want to run `just fmt` to format the files, otherwise the CI checks will fail on that
Thanks for the tip!
If you use VS Code or similar, you can change the rust-analyzer extension's checker setting to "clippy" to get the full linting (or just run `cargo clippy` in a terminal to perform the lint checks that CI is failing on).
Unfortunately it's also a lot slower, but that should help you fix the lint errors at least. I'll let cam review further as he sees fit :)
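For reference, that setting lives in the workspace settings. This assumes the current option name; older rust-analyzer versions spelled it `rust-analyzer.checkOnSave.command`:

```jsonc
// .vscode/settings.json
{
  // Run clippy instead of `cargo check` on save, so the editor
  // surfaces the same lints CI enforces.
  "rust-analyzer.check.command": "clippy"
}
```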
Thanks for the quick review! I’m new to Rust and OXC, so I didn’t really have that in mind when implementing this. I understand that performance is critical for the project — is there any way I can check benchmarks for my implementation? I’d love the chance to improve the performance.
Amazing work on this!
Out of interest, did you use AI? If so, how did you use it, and which models did you use?
> I understand that performance is critical for the project — is there any way I can check benchmarks for my implementation? I’d love the chance to improve the performance.
I just approved the CI workflows to run; that will give us a CodSpeed benchmark which should highlight any performance regressions.
Looks like this rule is really struggling on the react benchmark.
CodSpeed Performance Report
Merging #15466 will not alter performance
Comparing zzt1224:feat/no-useless-assignment (292037e) with main (d32d22e)
Summary
✅ 33 untouched
⏩ 4 skipped[^skipped]
[^skipped]: 4 benchmarks were skipped, so the baseline results were used instead.
I’ll see if I can work something out to improve the performance on the React benchmark.
I do use AI — mainly Deepwiki, which Boshen recommended on Discord. I ask all kinds of codebase-related questions, like “How can I detect a loop in a CFG?” or “If I have a node ID, how can I get its CFG block?” It doesn’t give me direct copy-and-paste answers every single time, but it points me to the right parts of the code I can study.
For example, I asked Deepwiki, “How do I get the variable scope of a symbol?” and it gave me your code:
```rust
fn get_parent_variable_scope(&self, scope_id: ScopeId) -> ScopeId {
    self.scoping()
        .scope_ancestors(scope_id)
        // Take the first ancestor that is a var scope, falling back to
        // the last (outermost) scope if none is found.
        .find_or_last(|scope_id| self.scoping().scope_flags(*scope_id).is_var())
        .expect("scope iterator will always contain at least one element")
}
```
> I’ll see if I can work something out to improve the performance on the React benchmark.
yeahh, we may need a different algorithm. admittedly, this was in debug mode, but running this rule on react.development.js was taking 1+ minutes.
i had a local version of this working somewhere that used fixed-point iteration instead, which may be a better approach.
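For readers unfamiliar with the term: fixed-point iteration here is the textbook backward dataflow formulation of liveness, where each block's live-in/live-out sets are recomputed from its successors until a full pass changes nothing. A minimal sketch, using hypothetical `Block` and variable-id types rather than oxc's actual CFG API:

```rust
use std::collections::HashSet;

// Hypothetical CFG block: successor ids, plus the variables this
// block reads (`uses`) and writes (`defs`).
struct Block {
    succs: Vec<usize>,
    uses: HashSet<u32>,
    defs: HashSet<u32>,
}

/// Backward liveness: iterate until the live-in/live-out sets stop changing.
fn liveness(blocks: &[Block]) -> Vec<HashSet<u32>> {
    let n = blocks.len();
    let mut live_in: Vec<HashSet<u32>> = vec![HashSet::new(); n];
    let mut live_out: Vec<HashSet<u32>> = vec![HashSet::new(); n];
    let mut changed = true;
    while changed {
        changed = false;
        // Visiting blocks in reverse tends to converge faster for a backward problem.
        for b in (0..n).rev() {
            // live_out[b] = union of live_in over successors.
            let mut out: HashSet<u32> = HashSet::new();
            for &s in &blocks[b].succs {
                out.extend(live_in[s].iter().copied());
            }
            // live_in[b] = uses(b) ∪ (live_out[b] − defs(b)).
            let mut new_in: HashSet<u32> = blocks[b].uses.clone();
            new_in.extend(out.difference(&blocks[b].defs).copied());
            if new_in != live_in[b] || out != live_out[b] {
                changed = true;
                live_in[b] = new_in;
                live_out[b] = out;
            }
        }
    }
    live_in
}

fn main() {
    // Block 0 reads variable 0 and loops back to itself or exits to block 1.
    let blocks = vec![
        Block { succs: vec![0, 1], uses: HashSet::from([0]), defs: HashSet::new() },
        Block { succs: vec![], uses: HashSet::new(), defs: HashSet::new() },
    ];
    // Variable 0 is live into the loop header.
    assert!(liveness(&blocks)[0].contains(&0));
}
```

Sets only ever grow, so the loop is guaranteed to terminate; it is the "no set changed on a full pass" check that makes this a fixed point.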
Thanks for sharing how you used AI, i do find it interesting how others are using it.
I’ve pinpointed that the part slowing everything down is moving things in and out of the backtracking state. I probably need to refactor the whole thing to get rid of that. I’ll try a new approach to this, so you can close this PR for now. Would you mind sharing your solution and elaborating on what “fixed-point iteration” means? Or, if you’re already working on this, I can move on to something else.
To be honest, I thought this one was labeled “good first issue,” so I expected it to be an easy one.
Hi @camc314, I’ve improved the performance of my previous approach — it should be much faster than before. Could you give it another shot and see if the performance looks acceptable now? Thanks!
I used a bitset data structure from oxc/allocator; I’m not sure if that’s allowed. I also didn’t use fixed-point iteration to resolve the loop issue. Instead, I run a separate analysis on the loop to obtain its live-ins, which I believe should be faster.
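To sketch why a bitset helps here (this is a plain `Vec<u64>` toy, not the actual oxc/allocator type, whose API I haven't checked): each variable maps to one bit, so merging live sets across CFG edges becomes word-wise ORs, and the "did anything change" test used to detect convergence falls out for free.

```rust
/// Toy dense bitset over variable indices: one u64 word per 64 variables.
struct BitSet {
    words: Vec<u64>,
}

impl BitSet {
    fn new(n_vars: usize) -> Self {
        Self { words: vec![0; (n_vars + 63) / 64] }
    }

    fn insert(&mut self, var: usize) {
        self.words[var / 64] |= 1u64 << (var % 64);
    }

    fn contains(&self, var: usize) -> bool {
        self.words[var / 64] & (1u64 << (var % 64)) != 0
    }

    /// self |= other; reports whether any bit changed, which is what
    /// lets a dataflow loop stop as soon as no set grows.
    fn union_with(&mut self, other: &BitSet) -> bool {
        let mut changed = false;
        for (w, o) in self.words.iter_mut().zip(&other.words) {
            let merged = *w | *o;
            changed |= merged != *w;
            *w = merged;
        }
        changed
    }
}

fn main() {
    let mut live = BitSet::new(128);
    let mut incoming = BitSet::new(128);
    incoming.insert(3);
    assert!(live.union_with(&incoming));  // bit 3 was new: the set grew
    assert!(!live.union_with(&incoming)); // second merge changes nothing
    assert!(live.contains(3));
}
```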