[Bug]: Cannot use 'in' operator to search for 'content'
What component(s) are affected?
- [ ] Opik Python SDK
- [ ] Opik Typescript SDK
- [ ] Opik Agent Optimizer SDK
- [x] Opik UI
- [ ] Opik Server
- [ ] Documentation
Opik version
- Opik version: 1.7.26
Describe the problem
I get an error when trying to open a project to see the traces in it, so I'm unable to open any project that has 'in' in the content, even though this is a frequently used word in chatbots.
I'm getting this with a newly created project as well, so it is not an issue limited to one project on the platform.
Reproduction steps and code snippets
Run a LangGraph agent where the original input content contains the word "in".
Error logs or stack trace
Example error: Cannot use 'in' operator to search for 'content' in hey, ik merk dat XYZ traag loopt, is dit iets lang jullie kant of wat kan hier aan gedaan worden? (The Dutch trace input roughly translates to: "hey, I notice that XYZ is running slowly, is this something on your end or what can be done about it?")
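For context, this is the TypeError JavaScript throws when the right-hand operand of the `in` operator is a primitive (here, the plain-string trace input) rather than an object, which is consistent with the full-page crash described above. Below is a minimal sketch of the failure mode and of a guarded check; all variable and function names are illustrative and are not taken from the actual Opik UI code:

```typescript
// Minimal sketch of the crash (illustrative names, not the Opik UI source).
// In JavaScript, `"content" in value` throws when `value` is a string:
//   TypeError: Cannot use 'in' operator to search for 'content' in <the string>

const traceInput: unknown = "hey, ik merk dat XYZ traag loopt, ...";

// Unsafe check: throws a TypeError at runtime for plain-string trace inputs.
// const hasContentUnsafe = "content" in (traceInput as object);

// Guarded check: only apply `in` once the value is known to be a non-null object.
function hasContent(value: unknown): value is { content: unknown } {
  return typeof value === "object" && value !== null && "content" in value;
}

if (hasContent(traceInput)) {
  console.log("structured message content:", traceInput.content);
} else {
  console.log("plain string input:", traceInput);
}
```

With a guard along these lines, a plain-string input would fall back to being rendered as text instead of crashing the traces page.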
Healthcheck results
opik healthcheck
*** HEALTHCHECK STARTED ***
Python version: 3.12.8
Opik version: 1.7.1
*** CONFIGURATION FILE ***
Config file path: /Users/cedric/.opik.config
Config file exists: yes
*** CURRENT CONFIGURATION ***
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Setting ┃ Value ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ api_key │ *** HIDDEN *** │
│ background_workers │ 4 │
│ check_tls_certificate │ True │
│ console_logging_level │ INFO │
│ default_flush_timeout │ None │
│ enable_litellm_models_monitoring │ True │
│ file_logging_level │ None │
│ logging_file │ opik.log │
│ project_name │ gl-generalizer │
│ pytest_experiment_enabled │ True │
│ sentry_enable │ True │
│ track_disable │ False │
│ url_override │ https://www.comet.com/opik/api/ │
│ workspace │ xxx │
└──────────────────────────────────┴─────────────────────────────────┘
*** CONFIGURATION SCAN ***
Configuration issues: not found
*** BACKEND WORKSPACE AVAILABILITY ***
--> Checking backend workspace availability at:
https://www.comet.com/opik/api/
Backend workspace available: yes
*** HEALTHCHECK COMPLETED ***
Hey @se-bright, thank you for reporting this issue, we'll look into it!
Hi @se-bright, apologies for the delay on this; we misunderstood the issue and didn't realize you were facing a full-page crash.
It took us a while to track it down, but I opened a PR for it and it should be fixed promptly.
This fix has been released.