[Golang] Prevent OOM kills on corrupted data.
Most cases of corrupted data can be recovered from, because they surface as runtime panics. An OOM kill on a very large allocation cannot. There is one case in the object API that calls make([]T, n) with a size n taken directly from the data. Add a sanity check before that allocation.
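As a rough illustration of both the failure mode and the proposed guard, here is a minimal sketch; `decodeVector`, `buf`, and the fixed 4-byte element size are hypothetical stand-ins, not the actual generated code:

```go
package main

import "fmt"

// decodeVector mimics generated object-API code: it takes an element
// count that originates in untrusted bytes and allocates a slice of that
// size. Without the bound below, a corrupt count like 1<<30 would ask the
// runtime for a multi-gigabyte allocation, which the OS may answer with
// an unrecoverable OOM kill.
func decodeVector(buf []byte, count int) ([]uint32, error) {
    // Sanity check: a vector of 4-byte elements can never contain more
    // than len(buf)/4 elements, so any larger count must be corrupt.
    if count < 0 || count > len(buf)/4 {
        return nil, fmt.Errorf("corrupt vector length %d for %d-byte buffer", count, len(buf))
    }
    out := make([]uint32, count)
    // ... per-element decoding elided ...
    return out, nil
}

func main() {
    buf := make([]byte, 64)
    if _, err := decodeVector(buf, 1<<30); err != nil {
        fmt.Println(err) // rejected before the allocation happens
    }
}
```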
- Is there a way to test this within the current test infrastructure?
- Is there a better upper bound for array size?
- If element offsets are guaranteed to be sequentially increasing in memory, then instead of:

```go
if fooLength > len(rcv._tab.Bytes)/4 {
    panic("bad array size")
}
t.Foo = make([]T, fooLength)
for j := 0; j < fooLength; j++ {
    t.Foo[j] = rcv.Foo(j)
}
```
It may be better to emit:

```go
if fooLength > 0 {
    _ = rcv.Foo(fooLength - 1)
}
t.Foo = make([]T, fooLength)
for j := 0; j < fooLength; j++ {
    t.Foo[j] = rcv.Foo(j)
}
```
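The probe presumably works because the generated accessor indexes into the underlying byte slice, so an out-of-range read trips Go's runtime bounds check and panics before the large make() runs; a panic, unlike an OOM kill, can be caught by the caller. A minimal sketch of that recovery path, where `unpack` is a hypothetical wrapper and not part of the generated API:

```go
package main

import "fmt"

// unpack runs a decode function and converts any panic raised by corrupt
// data into an ordinary error. This is why a panicking probe is
// acceptable while a huge make([]T, n) is not: the panic is recoverable,
// whereas an OOM kill terminates the process outright.
func unpack(do func()) (err error) {
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("corrupt flatbuffer: %v", r)
        }
    }()
    do()
    return nil
}

func main() {
    // Stand-in for rcv.Foo(fooLength-1) panicking on corrupt data.
    err := unpack(func() { panic("bad array size") })
    fmt.Println(err)
}
```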
Overall it looks better, but I'm not sure about overflows here. The index functions don't have overflow checks and do their calculations in UOffsetT, so even on a 64-bit arch the multiplication in the offset calculation can overflow.
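To make the overflow concern concrete, here is a sketch; UOffsetT mirrors the uint32 type used by the Go FlatBuffers runtime, and the index arithmetic is simplified from what the accessors actually do:

```go
package main

import "fmt"

// UOffsetT mirrors flatbuffers.UOffsetT, a uint32 in the Go runtime.
type UOffsetT uint32

func main() {
    // With a corrupt length near 2^30, the j*elemSize multiplication in
    // an accessor's offset calculation wraps around in 32-bit arithmetic.
    j := UOffsetT(1 << 30)  // hypothetical element index from corrupt data
    elemSize := UOffsetT(4) // 4-byte elements
    offset := j * elemSize
    // Prints 0: the offset wrapped, so probing rcv.Foo(fooLength-1)
    // could land in bounds and fail to panic even though the data is corrupt.
    fmt.Println(offset)
}
```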
This pull request is stale because it has been open for 6 months with no activity. Please comment or apply the not-stale label, or it will be closed in 14 days.
This pull request was automatically closed due to no activity for 6 months plus the 14-day notice period.