ConcurrentHashMap#values().stream().toArray() is not thread-safe
Bug Reproduction Unit Test
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.IntStream;
import org.junit.jupiter.api.Test;

@Test
void raceConditionReproductionTest() {
    ConcurrentMap<Integer, String> eclipseMap =
            new org.eclipse.collections.impl.map.mutable.ConcurrentHashMap<>();
    IntStream.range(0, 200000).boxed()
            .forEach(i -> eclipseMap.put(i, String.valueOf(i)));
    CompletableFuture.allOf(
            CompletableFuture.runAsync(
                    () -> eclipseMap.values().stream().peek(s -> randomWait()).toArray(String[]::new)),
            CompletableFuture.runAsync(
                    () -> IntStream.range(200000, 400000)
                            .boxed()
                            .peek(i -> randomWait())
                            .forEach(i -> eclipseMap.put(i, String.valueOf(i))))
    ).join();
}

private static void randomWait() {
    try {
        if (ThreadLocalRandom.current().nextInt(1000) == 0) {
            Thread.sleep(1);
        }
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
}
The test above concurrently adds new entries to an Eclipse Collections ConcurrentHashMap while a Stream#toArray() operation is being performed on the map's values in another thread.
Running the test on the latest version of Eclipse Collections (13.0.0) results in an exception like the following being thrown:
java.util.concurrent.CompletionException: java.lang.IllegalStateException: Accept exceeded fixed size of 202893
Note: Substituting java.util.concurrent.ConcurrentHashMap for the Eclipse Collections one makes the test pass, so the vanilla Java concurrent map does not have this issue.
Bug Details
Upon an initial investigation, the problem seems to be that the collection's spliterator reports the collection's size up front, before the Stream pipeline runs, as if it were certain that the size cannot change mid-iteration. Stream#toArray() therefore preallocates a fixed-size array, and the IllegalStateException is thrown when more elements than expected are appended to it.
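For contrast, a minimal stdlib-only sketch (class and variable names are mine, not from the report) showing why the JDK map avoids this: java.util.concurrent.ConcurrentHashMap's values spliterator deliberately does not report Spliterator.SIZED, so its size is treated as only an estimate and Stream#toArray() falls back to a growable buffer rather than a preallocated fixed-size array:

```java
import java.util.Spliterator;
import java.util.concurrent.ConcurrentHashMap;

public class ValuesSpliteratorDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<Integer, String> jdkMap = new ConcurrentHashMap<>();
        jdkMap.put(1, "one");

        Spliterator<String> split = jdkMap.values().spliterator();

        // The JDK spliterator reports CONCURRENT but not SIZED: because the
        // map may grow or shrink mid-traversal, its reported size is only an
        // estimate, and the stream machinery cannot preallocate a fixed-size
        // result array.
        System.out.println("SIZED?      " + split.hasCharacteristics(Spliterator.SIZED));      // false
        System.out.println("CONCURRENT? " + split.hasCharacteristics(Spliterator.CONCURRENT)); // true
    }
}
```

A spliterator that reports SIZED while the underlying collection can be mutated concurrently breaks the contract that the reported size is exact, which is the root of the exception above.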