
Duplicate key error in YCSB Workload A

Open rijan0619 opened this issue 5 years ago • 5 comments

Hi, I am simply trying to load workload A with a record count of 10000, but I ran into this error. Can anyone help me?

Loading workload...
Starting test.
mongo client connection created with mongodb://localhost:27017/ycsb?w=1
DBWrapper: report latency for each error is false and specific error codes to track for latency are: []
Exception while trying bulk insert with 0
com.mongodb.MongoWriteException: E11000 duplicate key error collection: ycsb.usertable index: id dup key: { id: "user6284781860667377211" }
    at com.mongodb.client.internal.MongoCollectionImpl.executeSingleWriteRequest(MongoCollectionImpl.java:967)
    at com.mongodb.client.internal.MongoCollectionImpl.executeInsertOne(MongoCollectionImpl.java:494)
    at com.mongodb.client.internal.MongoCollectionImpl.insertOne(MongoCollectionImpl.java:478)
    at com.mongodb.client.internal.MongoCollectionImpl.insertOne(MongoCollectionImpl.java:472)
    at site.ycsb.db.MongoDbClient.insert(MongoDbClient.java:270)
    at site.ycsb.DBWrapper.insert(DBWrapper.java:221)
    at site.ycsb.workloads.CoreWorkload.doInsert(CoreWorkload.java:601)
    at site.ycsb.ClientThread.run(ClientThread.java:135)
    at java.base/java.lang.Thread.run(Thread.java:832)
Error inserting, not retrying any more. number of attempts: 1Insertion Retry Limit: 0
[OVERALL], RunTime(ms), 1327
[OVERALL], Throughput(ops/sec), 0.0
[TOTAL_GCS_G1_Young_Generation], Count, 2
[TOTAL_GC_TIME_G1_Young_Generation], Time(ms), 13
[TOTAL_GC_TIME%G1_Young_Generation], Time(%), 0.9796533534287867
[TOTAL_GCS_G1_Old_Generation], Count, 0
[TOTAL_GC_TIME_G1_Old_Generation], Time(ms), 0
[TOTAL_GC_TIME%G1_Old_Generation], Time(%), 0.0
[TOTAL_GCs], Count, 2
[TOTAL_GC_TIME], Time(ms), 13
[TOTAL_GC_TIME%], Time(%), 0.9796533534287867
[CLEANUP], Operations, 1
[CLEANUP], AverageLatency(us), 4234.0
[CLEANUP], MinLatency(us), 4232
[CLEANUP], MaxLatency(us), 4235
[CLEANUP], 95thPercentileLatency(us), 4235
[CLEANUP], 99thPercentileLatency(us), 4235
[INSERT], Operations, 0
[INSERT], AverageLatency(us), NaN
[INSERT], MinLatency(us), 9223372036854775807
[INSERT], MaxLatency(us), 0
[INSERT], 95thPercentileLatency(us), 0
[INSERT], 99thPercentileLatency(us), 0
[INSERT], Return=ERROR, 1
[INSERT-FAILED], Operations, 1
[INSERT-FAILED], AverageLatency(us), 117856.0
[INSERT-FAILED], MinLatency(us), 117824
[INSERT-FAILED], MaxLatency(us), 117887
[INSERT-FAILED], 95thPercentileLatency(us), 117887
[INSERT-FAILED], 99thPercentileLatency(us), 117887

rijan0619 (Jun 07 '20)
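For context, the load invocation was presumably something like the following; the flags are standard YCSB CLI usage, but the exact values are reconstructed from the connection string and record count in the output above, not copied from the reporter's command line:

    # Reconstructed load phase for workload A (assumed; not the reporter's exact command)
    ./bin/ycsb load mongodb -s \
        -P workloads/workloada \
        -p recordcount=10000 \
        -p mongodb.url="mongodb://localhost:27017/ycsb?w=1"

The E11000 error means the insert hit a user key (user6284781860667377211) that already exists in ycsb.usertable, which is what happens when records from an earlier load are still in the collection.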

Usually that happens when you've already loaded that workload into the same database. Try deleting the database and creating it again, then load and run the workload with the 10k records.

andrehgdias (Jun 07 '20)
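A minimal sketch of that reset with the mongo shell, assuming the default ycsb database and usertable collection shown in the log:

    # Drop the ycsb database (or just the usertable collection) before reloading
    mongosh ycsb --eval "db.dropDatabase()"

After the drop, re-running the load phase should insert all 10000 records into a fresh collection.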

I also ran into this issue but have not been able to solve it.

cc-a100 (Sep 17 '20)

How did deleting the database and starting fresh go?

busbey (Nov 27 '20)

@busbey usually deleting the database works just fine for workloads A, B, C, and F. The problem is with workloads D and E, as noted in #1472.

andrehgdias (Jan 29 '21)

You can run mongosh ycsb --eval "db.usertable.drop()" beforehand to make sure the collection you are loading into is empty.

anish-palakurthi (Feb 18 '25)
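For example, chaining the drop in front of the load phase keeps each run starting from an empty collection; the workload file and record count below are just the values from this issue and would need adjusting for other setups:

    # Drop usertable, then reload workload A (values taken from this issue; adjust as needed)
    mongosh ycsb --quiet --eval "db.usertable.drop()" && \
      ./bin/ycsb load mongodb -s -P workloads/workloada \
        -p recordcount=10000 -p mongodb.url="mongodb://localhost:27017/ycsb?w=1"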