Apache Ignite 3.0 - Reading Data from Ignite via C++ Client
I have 500,000+ records in Ignite in a table called mytable.
I need to read them from my C++ program, for which I am using the C++ thin client, which I have configured successfully within my application.
Initially I was able to run basic SQL queries, e.g. fetching 1000 records and writing them to a file.
For querying and processing all of the records I tried two approaches, and both result in the same error:
org.apache.ignite3.lang.IgniteException: IGN-TX-13 TraceId:613fb353-aa2f-4cca-bde1-87f661c45ad9 org.apache.ignite.sql.SqlException: Transaction is already finished () [txId=0196d8d1-26ae-0000-e5f1-efe300000001, readOnly=true].
Context: I am a beginner with Ignite. I have the Ignite engine running locally with the default node, I have not specified any custom configuration, and I am using it as extracted from the release distribution. The C++ application is also running locally.
The first approach I tried was a query like "Select data from mytable", which fetches a page whose size is implicitly limited to 1024 records by Ignite. I then iterate over the records and write them to a file as per my use case. At the end I do something like this (checking for more pages, if any, and looping again to fetch the next page):
```cpp
result_set result = client.get_sql().execute(nullptr, {"SELECT DATA FROM MYTABLE"}, std::vector<ignite::primitive>{});
do {
    std::vector<ignite_tuple> page;
    try {
        page = result.current_page();
    } catch (const std::exception& e) {
        std::cerr << "Error fetching current page: " << e.what() << std::endl;
        break;
    }
    if (page.empty())
        break;
    for (const auto& row : page) {
        try {
            auto data = row.get("DATA");
            // {hidden processing}
        } catch (const std::exception& e) {
            std::cerr << "Error extracting row: " << e.what() << std::endl;
        }
    }
    try {
        if (result.has_more_pages())
            result.fetch_next_page();
        else
            break;
    } catch (const std::exception& e) {
        std::cerr << "Error fetching next page: " << e.what() << std::endl;
        break;
    }
} while (result.has_more_pages());
```
This approach works for a few iterations, but after one or two pages I get the error mentioned earlier.
In the second approach I tried SQL-style pagination by providing an offset and a record limit, where the cursor I was saving was the ID. The query looked like:

```sql
SELECT DATA FROM "MYTABLE" WHERE ID > lastIDSaved ORDER BY ID ASC LIMIT batchSize
```

This query failed immediately, and even running it in the GridGain console returns the same error.
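For reference, this is roughly the keyset-pagination loop I had in mind, sketched here in Java for brevity since I tried the same thing from a Java app as well (see below). The address, batch size, and the assumption that ID is a BIGINT are just illustrative for my local setup:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.sql.ResultSet;
import org.apache.ignite.sql.SqlRow;

public class KeysetPaginationSketch {
    public static void main(String[] args) {
        try (IgniteClient client = IgniteClient.builder().addresses("127.0.0.1:10800").build()) {
            long lastId = 0;      // keyset cursor: highest ID seen so far
            int batchSize = 2000; // illustrative batch size
            List<String> data = new ArrayList<>();

            while (true) {
                // Parameterized keyset query: only rows after the last seen ID.
                try (ResultSet<SqlRow> rs = client.sql().execute(null,
                        "SELECT ID, DATA FROM MYTABLE WHERE ID > ? ORDER BY ID ASC LIMIT ?",
                        lastId, batchSize)) {
                    if (!rs.hasNext()) {
                        break; // no more rows
                    }
                    while (rs.hasNext()) {
                        SqlRow row = rs.next();
                        lastId = row.longValue("ID");
                        data.add(row.stringValue("DATA"));
                    }
                }
            }
            System.out.println("Read " + data.size() + " rows");
        }
    }
}
```

The idea is to avoid large OFFSET scans entirely, but as described above, the underlying query fails for me with the same IGN-TX-13 error.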
Expectation: I need a way to read 500,000+ records from Ignite and process them within my app. I read about scan queries in Ignite 2 as well, but couldn't find anything directly related to reading large amounts of data via Ignite 3.
As a follow-up to this question, I tried to do the same in a Java app and I face the same issue.
```java
try (ResultSet<SqlRow> rs = client.sql().execute(null, "SELECT id FROM mytable")) {
    while (rs.hasNext()) {
        SqlRow row = rs.next();
        String id = row.stringValue("id"); // Extract the ID
        idList.add(id);                    // Add the ID to the list
    }
} catch (Exception e) {
    System.err.println("Error fetching ids using SQL: " + e.getMessage());
    e.printStackTrace();
}
```
The error I get is:
```
Caused by: org.apache.ignite.internal.sql.engine.exec.RemoteFragmentExecutionException: IGN-TX-13 TraceId:7ce7b882-b2cd-45db-bc88-e746352e0b3f Transaction is already finished () [txId=0196ee68-1f6c-0000-e5f1-efe300000001, readOnly=true].
    at org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl.onMessage(ExecutionServiceImpl.java:610)
    at org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl.lambda$start$4(ExecutionServiceImpl.java:302)
    at org.apache.ignite.internal.sql.engine.message.MessageServiceImpl.onMessageInternal(MessageServiceImpl.java:166)
    at org.apache.ignite.internal.sql.engine.message.MessageServiceImpl.lambda$onMessage$2(MessageServiceImpl.java:132)
    at org.apache.ignite.internal.sql.engine.exec.QueryTaskExecutorImpl.lambda$execute$0(QueryTaskExecutorImpl.java:86)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 1 more
```
The above is from the cluster logs, and everything just fails after this. What I don't understand (or am confused about) is that this seems to be a simple read operation. Is there no straightforward way of reading data from Ignite like this? Am I missing something completely and going the wrong way? In the Ignite documentation I searched for the error **IGN-TX-13**, but there are no leads on how to resolve it, although the error is listed under transactions.
My use case is very basic, i.e. writing some data and reading it back. I am not even using multiple nodes; everything is running locally. Any suggestions/leads would be helpful.
Looks like a known bug: https://issues.apache.org/jira/browse/IGNITE-21861
Does it happen every time or randomly?
Every time.
Someone on Stack Overflow suggested increasing the transaction timeout, like:

```java
Transaction tx = client.transactions().begin(new TransactionOptions().readOnly(true).timeoutMillis(1000000));
```
This seems to work for now, and I can read data.
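For completeness, the full read pattern that now works for me on the Java side looks roughly like this (just a sketch of what I am doing; the address, table name, and timeout value are the ones I happen to use locally):

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.sql.ResultSet;
import org.apache.ignite.sql.SqlRow;
import org.apache.ignite.tx.Transaction;
import org.apache.ignite.tx.TransactionOptions;

public class LongReadSketch {
    public static void main(String[] args) {
        try (IgniteClient client = IgniteClient.builder().addresses("127.0.0.1:10800").build()) {
            // Begin an explicit read-only transaction with a generous timeout so the
            // read snapshot stays valid while all the rows are paged through.
            Transaction tx = client.transactions().begin(
                    new TransactionOptions().readOnly(true).timeoutMillis(1_000_000));
            List<String> ids = new ArrayList<>();
            try (ResultSet<SqlRow> rs = client.sql().execute(tx, "SELECT id FROM mytable")) {
                while (rs.hasNext()) {
                    ids.add(rs.next().stringValue("id"));
                }
            } finally {
                tx.commit(); // read-only transaction: commit just releases it
            }
            System.out.println("Fetched " + ids.size() + " ids");
        }
    }
}
```

The only change from my earlier snippet is passing the explicit transaction (with the long timeoutMillis) to execute() instead of null.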
I still can't seem to read data with the C++ app.
@hsadia538 Is the exception still the same?
P.S. The result.current_page() method is just a getter and cannot fail; you don't have to surround it with try-catch.
I've filed a ticket to implement transaction timeouts for C++ Client: https://issues.apache.org/jira/browse/IGNITE-25488
Thank you.
Yep, the same. On the Java end I can read and write, but with C++ I am still struggling.
It seems to happen when I pass a really large offset, e.g.:

```sql
SELECT id FROM mytable LIMIT 2000 OFFSET 200000
```
The intention is to read a total of 100,000 records in batches of 2000.
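Concretely, the batching loop I am trying to run looks roughly like this in Java (a sketch; the batch size and total are from my test, the starting offset is illustrative, and the query mirrors the literal LIMIT/OFFSET form above):

```java
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.sql.ResultSet;
import org.apache.ignite.sql.SqlRow;

public class OffsetBatchSketch {
    public static void main(String[] args) {
        try (IgniteClient client = IgniteClient.builder().addresses("127.0.0.1:10800").build()) {
            int batchSize = 2000;      // rows per batch
            int totalToRead = 100_000; // overall target for this run
            int read = 0;
            int offset = 0;            // the failure shows up once this gets large (e.g. ~200000)

            while (read < totalToRead) {
                // Mirrors the literal LIMIT/OFFSET query; note that without an ORDER BY
                // the row order between batches is not guaranteed to be stable.
                String query = "SELECT id FROM mytable LIMIT " + batchSize + " OFFSET " + offset;
                int inBatch = 0;
                try (ResultSet<SqlRow> rs = client.sql().execute(null, query)) {
                    while (rs.hasNext()) {
                        SqlRow row = rs.next();
                        // process row.stringValue("id") here
                        inBatch++;
                    }
                }
                if (inBatch == 0) {
                    break; // ran out of rows before hitting the target
                }
                read += inBatch;
                offset += batchSize;
            }
            System.out.println("Processed " + read + " rows");
        }
    }
}
```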