Program is nested too deep error
Description: When I query more than 200 fields, the query fails and raises an error like "Program is nested too deep"; the full message is:
compilation failed: error @4:22-4:1706: Program is nested too deep error @4:1710-4:1737: Program is nested too deep error @4:1741-4:1749: Program is nested too deep error @4:1753-4:1768: Program is nested too deep error @4:1772-4:1773: Program is nested too deep error @4:1774-4:1780: Program is nested too deep
The Flux query only contains fields combined with "or", like the following:
from(bucket: "my_bucket") |> range(start: 1641353027) |> filter(fn: (r) => r._measurement == "component") |> filter(fn: (r) => r._field == "ds" or r._field == "fg") |> sort(columns: ["_time"], desc: true) |> limit(n:2)
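Until the parser limit changes, one workaround is to split a long field list across several smaller queries so each "or" chain stays below the nesting limit. A minimal sketch in plain JavaScript, assuming the Flux query string is assembled client-side (the helper names `chunk` and `buildFieldFilter` are mine, not part of any InfluxDB library):

```javascript
// Split a long field list into batches, then build one filter clause
// per batch; each batch becomes its own query instead of one giant
// "or" chain that trips the "Program is nested too deep" limit.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

function buildFieldFilter(fields) {
  const clauses = fields.map(f => `r._field == "${f}"`).join(' or ');
  return `|> filter(fn: (r) => ${clauses})`;
}

// Example: 120 fields, batched 50 per query -> 3 separate queries.
const fields = Array.from({ length: 120 }, (_, i) => `f${i}`);
const filters = chunk(fields, 50).map(batch => buildFieldFilter(batch));
console.log(filters.length); // 3
```

The batch size that stays under the limit is not documented, so it would need to be found empirically (the thread suggests failures start somewhere above 200 clauses).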
Hi, I have got the exact same problem here. Any update? I use influx-client-js to query; is there a workaround using a direct HTTP request, which might be more permissive? Since there is no way to retag existing data, it is not possible to create a more global filtering tag. Any suggestions? Good evening, Loic
The Flux parser limits the depth that the AST can reach, which could happen when operating on a large number of fields. My only real suggestion is to consider how you could restructure your schema to split those fields out into different measurements, thus allowing you to query more easily. This is a good post that discusses how schema layout can affect things like performance and size.
For all who are still looking for a solution, I can suggest this code: |> filter(fn: (r) => r["device"] =~ /deviceId1|deviceId2|deviceId3/)
I have tested it with 3,800 deviceIds without any problem; I don't know if there is a limit even with this method. About performance, my request was quite simple, but it took less than 1s to process it (even with the 3,800 IDs).
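When the ID list comes from a relational database, the regex filter above can be generated programmatically. A sketch in plain JavaScript (the helper `buildDeviceFilter` is hypothetical, not from any library; unlike the bare /id1|id2/ form above, it escapes regex metacharacters and anchors the pattern so each ID matches literally and exactly, e.g. "dev1" does not also match "dev10"):

```javascript
// Build the body of a Flux regex filter from a list of device IDs.
function buildDeviceFilter(deviceIds) {
  // Escape regex metacharacters so IDs are matched literally.
  const escaped = deviceIds.map(id => id.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'));
  // Anchor each alternative so partial matches are excluded.
  return `|> filter(fn: (r) => r["device"] =~ /^(${escaped.join('|')})$/)`;
}

console.log(buildDeviceFilter(['deviceId1', 'deviceId2', 'deviceId3']));
// -> |> filter(fn: (r) => r["device"] =~ /^(deviceId1|deviceId2|deviceId3)$/)
```

The resulting string can then be concatenated into the Flux query sent through influx-client-js or a direct HTTP request.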
Restructuring my schema is not an option. I think many others hit the same problem I had. I have a service which pushes data from many devices to Influx directly, without querying my relational database; doing it this way I save money by avoiding a huge number of queries, and I respect the separation-of-concerns principle. Currently I use my RDB to store all the relations and metadata (location, owner...), and Influx stores the time-series data of all my devices; the unique correspondence key is the deviceId. It's really easy to maintain and efficient, since I query my RDB to get the list of IDs matching specific relations and then I query Influx. To be constructive with the Influx team, I would suggest this:
- develop the possibility to bulk-update metadata (I mean fields) on many devices RETROACTIVELY (which is not possible for now)
- or develop something optimized as an equivalent of the "IN" operator (maybe my suggestion above already is; it could deserve to be communicated more). My suggestion comes from a simple need: as Influx is not a relational database, we need to store those relations somewhere. There are two possibilities: in Influx as a field, or in a separate RDB. For the first, we need a way to update and manage fields efficiently (retroactively too); for the second, we need a correspondence key with the RDB (like a foreign key). Hope it helps someone ;-) Good evening
The problem has been around for a long time, but is there really no good solution? I'm encountering this issue as well.
Hi, I'm encountering this issue too.
I got this error when specifying 373 field values in one query.
So, are there any updates on this issue?