wsl
Consider `i++` as an assignment used in the iteration
My original function:
func sortedKeys(m map[string]interface{}) sort.StringSlice {
    keys := make(sort.StringSlice, len(m))
    i := 0
    for k := range m { // ranges should only be cuddled with assignments used in the iteration
        keys[i] = k
        i++
    }
    sort.Sort(keys)
    return keys
}
I have found three possible fixes:
- Isolate `i := 0` with a blank line, but this is the index of the `for` loop:

func sortedKeys(m map[string]interface{}) sort.StringSlice {
    keys := make(sort.StringSlice, len(m))
    i := 0

    for k := range m {
        keys[i] = k
        i++
    }
    sort.Sort(keys)
    return keys
}
- Swap the two assignments `i := 0` and `keys := ...`, but this is weird :-/

func sortedKeys(m map[string]interface{}) sort.StringSlice {
    i := 0
    keys := make(sort.StringSlice, len(m))
    for k := range m {
        keys[i] = k
        i++
    }
    sort.Sort(keys)
    return keys
}
- Put both assignments on one line, but this is the least readable :-(

func sortedKeys(m map[string]interface{}) sort.StringSlice {
    i, keys := 0, make(sort.StringSlice, len(m))
    for k := range m {
        keys[i] = k
        i++
    }
    sort.Sort(keys)
    return keys
}
I think wsl should consider `i++` as an assignment used in the iteration.
@bombsimon What about your opinion?
Thanks for the issue! Similar questions have been addressed in e.g. #81. I think I've never considered this since I think it's a bad pattern to use that kind of indexing. In #81 there are two linked posts benchmarking index assignment versus appending when you set both the length and the capacity of your slice; please give those a read! Basically, I would suggest just writing this:
func sortedKeys(m map[string]interface{}) sort.StringSlice {
    keys := make(sort.StringSlice, 0, len(m))
    for k := range m {
        keys = append(keys, k)
    }
    sort.Sort(keys)
    return keys
}
It's not exactly a response to your question, though, since it's not the same code. If I had no alternative but to use `i`, I think the most readable way would be to group the assignments and then separate them from the loop.
func sortedKeys(m map[string]interface{}) sort.StringSlice {
    keys := make(sort.StringSlice, len(m))
    i := 0

    for k := range m {
        keys[i] = k
        i++
    }
    sort.Sort(keys)
    return keys
}
Or I personally would've grouped them in a `var` block for alignment:

func sortedKeys(m map[string]interface{}) sort.StringSlice {
    var (
        keys = make(sort.StringSlice, len(m))
        i    = 0
    )

    for k := range m {
        keys[i] = k
        i++
    }
    sort.Sort(keys)
    return keys
}
If none of these alternatives feels relevant for you, I think the other valid proposal would be #24, where the idea is to allow anything cuddled to any block as long as it's used in the block itself. This is a quite common pattern for channels and goroutines, or, like in your case, keeping track of a counter in a loop.
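For illustration, a minimal sketch of the channel/goroutine pattern referred to here (the function and variable names are made up): the channel is declared directly above the block that uses it, which is the kind of cuddling #24 would allow.

```go
package main

import (
	"fmt"
	"sync"
)

// squareSum fans out n goroutines and collects their results over a channel.
func squareSum(n int) int {
	results := make(chan int, n) // cuddled with the block below, which uses it
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(v int) {
			defer wg.Done()
			results <- v * v
		}(i)
	}
	wg.Wait()
	close(results)

	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(squareSum(3)) // 0 + 1 + 4 = 5
}
```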
It's incredible! Before reading your answer, I had also implemented exactly the same version with `append()` (`SortedKeysA`) and had written a benchmark. My results using Go 1.15:
go test ./pkg/mymap -bench . -benchmem
goos: linux
goarch: amd64
pkg: myapp/pkg/mymap
BenchmarkSortedKeys-8 1000000 1130 ns/op 96 B/op 2 allocs/op
BenchmarkSortedKeysA-8 1000000 1229 ns/op 96 B/op 2 allocs/op
PASS
ok myapp/pkg/mymap 2.378s
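The benchmark source itself isn't shown in the thread; a standalone harness along these lines (the map size and key names are my assumptions, and `testing.Benchmark` is used here so it runs without `go test`) could produce numbers of this shape:

```go
package main

import (
	"fmt"
	"sort"
	"testing"
)

// sortedKeys fills a pre-sized slice by index (the i++ variant).
func sortedKeys(m map[string]interface{}) sort.StringSlice {
	keys := make(sort.StringSlice, len(m))
	i := 0

	for k := range m {
		keys[i] = k
		i++
	}
	sort.Sort(keys)
	return keys
}

// sortedKeysA appends into a slice with zero length and full capacity.
func sortedKeysA(m map[string]interface{}) sort.StringSlice {
	keys := make(sort.StringSlice, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Sort(keys)
	return keys
}

func main() {
	m := map[string]interface{}{}
	for i := 0; i < 8; i++ {
		m[fmt.Sprintf("key-%d", i)] = i
	}

	rIndex := testing.Benchmark(func(b *testing.B) {
		for n := 0; n < b.N; n++ {
			sortedKeys(m)
		}
	})
	rAppend := testing.Benchmark(func(b *testing.B) {
		for n := 0; n < b.N; n++ {
			sortedKeysA(m)
		}
	})
	fmt.Println("index: ", rIndex)
	fmt.Println("append:", rAppend)
}
```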
The version with `append()` takes 9% more time than the version with `i++` (99 ns out of 1130 ns).
My bad. My old laptop's CPU was heating up and then reducing its frequency. As that may produce biased results, I changed the benchmark order. Over multiple runs, `append()` was slower 3 times and `i++` was slower 6 times.
BenchmarkSortedKeysA-8 1000000 1379 ns/op 24% more time
BenchmarkSortedKeys-8 1866680 1043 ns/op
BenchmarkSortedKeysA-8 1103293 1327 ns/op 35% more time
BenchmarkSortedKeys-8 1199178 866 ns/op
BenchmarkSortedKeysA-8 1000000 1070 ns/op 6% more time
BenchmarkSortedKeys-8 1527624 1003 ns/op
BenchmarkSortedKeysA-8 1204836 1049 ns/op
BenchmarkSortedKeys-8 728811 1631 ns/op 36% more time
BenchmarkSortedKeysA-8 1000000 1201 ns/op
BenchmarkSortedKeys-8 1000000 1288 ns/op 7% more time
BenchmarkSortedKeysA-8 1466814 1254 ns/op
BenchmarkSortedKeys-8 785907 1417 ns/op 12% more time
BenchmarkSortedKeysA-8 1832108 1136 ns/op
BenchmarkSortedKeys-8 1000000 1210 ns/op 6% more time
BenchmarkSortedKeysA-8 1303680 954 ns/op
BenchmarkSortedKeys-8 1000000 1613 ns/op 41% more time
BenchmarkSortedKeysA-8 1310966 893 ns/op
BenchmarkSortedKeys-8 1000000 1497 ns/op 40% more time
Thanks for taking the time to do new benchmarks. I guess this tells us that it's fine to use `append` if it's faster 6 times out of 9?
Also, the times we're talking about are extremely low: a diff of ~100 ns/op (if even that). This would only have any actual effect on very large numbers and big lists; for a list of 1000 items, the added time would be 0.1 ms.
Closing this since it's mostly sorted, and it would be resolved if we land #24.