Skill: Deribit API Integration
Feature: Deribit API Integration Skill
Overview
This issue proposes the development of a new IntentKit skill to integrate with Deribit’s public API. The objective is to enable AI agents to retrieve and analyze crypto options trading data to surface high-quality trading opportunities and generate intelligent reports for users.
Use Cases
This skill will support AI agents in:
- Fetching real-time and historical options market data.
- Performing analysis on implied volatility, historical volatility, funding rates, and option pricing behavior.
- Identifying high-potential trades based on expiry, strike, and volatility anomalies.
- Generating structured trading reports or insights.
- Comparing pricing across different expiries for the same underlying asset.
Core API Endpoints to Integrate
Market Summary and Instrument Discovery
- /public/get_instruments – Full list of instruments (including strikes, expiries, types).
- /public/get_expirations – Clean access to all expiry dates for a given currency.
- /public/get_book_summary_by_instrument – Core pricing/IV data for individual options.
- /public/get_book_summary_by_currency – Batch summary of all option instruments under a currency (e.g. BTC, ETH).
Trade and Pricing Data
- /public/get_last_trades_by_instrument – Last trades for individual instruments.
- /public/get_last_trades_by_currency_and_time – Grouped trades across instruments (e.g. all options expiring next Friday).
- /public/get_order_book – Full order depth for an instrument.
- /public/get_order_book_by_instrument_id – A more targeted version of the above.
Volatility and Reference Data
- /public/get_index_price – Reference price for the underlying asset.
- /public/get_volatility_index_data – Market-wide IV benchmark.
- /public/get_historical_volatility – Historical realized volatility for backtesting IV-HV spreads.
- /public/get_mark_price_history – Time-series data for fair-pricing behavior.
- /public/get_funding_rate_value – Additional context for perpetual vs. options bias.
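As a rough sketch of how a skill might call one of these endpoints, the snippet below just builds the GET URL for /public/get_instruments. The base URL and the `currency`/`kind`/`expired` parameters are assumptions based on Deribit's public v2 REST interface, so verify them against the API docs before relying on them.

```python
from urllib.parse import urlencode

DERIBIT_BASE = "https://www.deribit.com/api/v2"  # assumption: production host


def build_request_url(endpoint: str, **params) -> str:
    """Build a GET URL for a Deribit public endpoint, e.g. /public/get_instruments."""
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{DERIBIT_BASE}{endpoint}?{query}"


url = build_request_url(
    "/public/get_instruments",
    currency="BTC",
    kind="option",
    expired="false",
)
print(url)
```

The actual HTTP call (via requests, httpx, etc.) would then return a JSON-RPC response whose "result" key holds the instrument list.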
🧠 Example Agent Flow: Surfacing Expiry-Based Trade Opportunities
1. Discover all option instruments using /public/get_instruments.
2. Filter relevant expiries using /public/get_expirations or instrument metadata.
3. Batch fetch summaries with /public/get_book_summary_by_currency to analyze pricing across expiries.
4. Compare IV vs. HV using /public/get_volatility_index_data and /public/get_historical_volatility.
5. Analyze trade activity using /public/get_last_trades_by_currency_and_time.
6. Drill into specific opportunities using /public/get_order_book_by_instrument_id.
7. Generate a report summarizing:
   - Best IV discounts or spikes
   - Unusual volume clusters by expiry
   - Mispricing relative to mark/index price
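The IV vs. HV comparison in step 4 can be sketched as a simple ranking. The rows, field names (`iv`, `hv`), and numbers below are invented for illustration; real values would come from get_volatility_index_data and get_historical_volatility.

```python
def rank_iv_hv_spreads(rows):
    """Sort rows by (implied vol - historical vol); negative = IV trades below HV."""
    return sorted(rows, key=lambda r: r["iv"] - r["hv"])


sample = [
    {"expiry": "27JUN25", "iv": 0.55, "hv": 0.62},  # IV discount vs. realized
    {"expiry": "26SEP25", "iv": 0.71, "hv": 0.60},  # IV premium
    {"expiry": "25JUL25", "iv": 0.58, "hv": 0.58},  # flat
]
ranked = rank_iv_hv_spreads(sample)
print(ranked[0]["expiry"])  # cheapest IV relative to HV comes first
```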
Skill Output Expectations
- JSON-formatted structured responses for all API queries
- Helper functions to:
- Filter instruments by type, expiry, and strike
- Normalize trade and IV data
- Generate rankings or summaries for agents
- Optional: Prompt templates for reports like:
- “Top 3 underpriced BTC call options expiring next week”
- “Options volume spikes by expiry”
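A filter helper along these lines could lean on Deribit's option naming convention (e.g. BTC-26SEP25-60000-C encodes currency, expiry, strike, and call/put). A minimal sketch, assuming that naming format holds:

```python
def parse_instrument_name(name: str) -> dict:
    """Split 'BTC-26SEP25-60000-C' into its components."""
    currency, expiry, strike, opt_type = name.split("-")
    return {
        "currency": currency,
        "expiry": expiry,
        "strike": float(strike),
        "type": "call" if opt_type == "C" else "put",
    }


def filter_instruments(names, expiry=None, opt_type=None, min_strike=None):
    """Filter instrument names by expiry, option type, and minimum strike."""
    out = []
    for name in names:
        info = parse_instrument_name(name)
        if expiry and info["expiry"] != expiry:
            continue
        if opt_type and info["type"] != opt_type:
            continue
        if min_strike is not None and info["strike"] < min_strike:
            continue
        out.append(name)
    return out


names = ["BTC-26SEP25-60000-C", "BTC-26SEP25-50000-P", "BTC-27JUN25-60000-C"]
print(filter_instruments(names, expiry="26SEP25", opt_type="call"))
```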
Additional Resources
- 📘 Deribit Institutional Setup Guide (PDF)
- 📘 API Doc
Priority
Medium – Valuable for financial and analytics agents. Adds deep support for volatility trading and DeFi quant logic.
I've built the skills, but some endpoints return so much JSON data that some LLMs can't handle it.
My solution is to convert it to CSV, which saves up to 80% of the tokens, while still sending field descriptions so the LLM knows what each field means:
return {
    "descriptions": ...,  # description of each field (from the Pydantic model)
    "data_csv": ...,      # JSON from the API, converted to CSV
}
Full code that converts it to CSV:
import csv
import io
import json
from typing import Any, Dict, Optional, Type

from pydantic import BaseModel


def json_result_to_csv_string(json_data, descriptions: Optional[Type[BaseModel]] = None) -> Dict[str, Any]:
    """
    Extracts the 'result' key from JSON and converts it to a dict containing
    a CSV string and field descriptions taken from a Pydantic model.

    :param json_data: JSON data (string or dict)
    :param descriptions: Pydantic BaseModel class with field descriptions
    :return: Dict with 'descriptions' and 'data_csv' keys
    """
    if isinstance(json_data, str):
        json_data = json.loads(json_data)
    if not isinstance(json_data, dict):
        raise ValueError("JSON data must be a dictionary.")
    if "result" not in json_data:
        raise ValueError("JSON data must contain a 'result' key.")

    result = json_data["result"]
    if isinstance(result, dict):
        result = [result]
    elif not isinstance(result, list):
        raise ValueError("'result' must be a dictionary or a list of dictionaries.")
    if len(result) == 0:
        raise ValueError("'result' is empty.")

    # Collect the union of keys across all rows so sparse entries still line up
    keys = set()
    for item in result:
        if not isinstance(item, dict):
            raise ValueError("Each item in 'result' must be a dictionary.")
        keys.update(item.keys())
    keys = sorted(keys)

    field_descriptions = {}
    if descriptions:
        # Check if there's a 'result' field annotated as a List[Model]
        result_field = descriptions.model_fields.get("result")
        if result_field and hasattr(result_field.annotation, "__args__"):
            inner_model = result_field.annotation.__args__[0]  # type inside List[]
            # Guard with isinstance(..., type): __args__[0] may be a typing
            # construct rather than a class, and issubclass would raise then
            if isinstance(inner_model, type) and issubclass(inner_model, BaseModel):
                field_descriptions = {
                    key: (inner_model.model_fields[key].description or "")
                    for key in keys
                    if key in inner_model.model_fields
                }

    def serialize_value(value):
        # Nested structures are embedded as JSON strings inside the CSV cell
        if isinstance(value, (dict, list)):
            return json.dumps(value)
        return value

    serialized_result = [
        {k: serialize_value(item.get(k)) for k in keys} for item in result
    ]

    output = io.StringIO()
    dict_writer = csv.DictWriter(output, fieldnames=keys)
    dict_writer.writeheader()
    if field_descriptions:
        dict_writer.writerow({k: field_descriptions.get(k, "") for k in keys})
    dict_writer.writerows(serialized_result)
    csv_string = output.getvalue()
    output.close()

    return {
        "descriptions": field_descriptions,
        "data_csv": csv_string,
    }
Would you be okay with this?
Would we then create a URL for the user to download the CSV? What are we doing with the CSV? The goal is that the agent can use the info and create a summary answer. Is that possible with the CSV?
-- Marouen Zelleg
Of course, you can process the API result before sending it to the AI. As long as you roughly describe the format and structure of the result in the skill prompt, the AI can understand it.
On the other hand, you can control the request parameters to ensure that you only obtain the necessary data.
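For example, a hypothetical helper could cap how much data a single tool call requests before the response ever reaches the LLM. The `currency`/`kind`/`count` parameter names below mirror common Deribit endpoint parameters, but treat them as assumptions and check each endpoint's docs.

```python
def build_params(currency: str, kind: str = "option", count: int = 20) -> dict:
    """Cap `count` so one tool call never pulls more rows than the LLM needs."""
    return {"currency": currency, "kind": kind, "count": min(count, 100)}


print(build_params("BTC", count=500))  # count capped at 100
```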
- The user doesn't have to download anything.
- The CSV in this context is not written to any file; it's a regular string that is passed directly to the LLM. I just convert the JSON format into CSV format.

Example:
[
{
"name": "Alice",
"age": 30,
"city": "New York"
},
{
"name": "Bob",
"age": 25,
"city": "Los Angeles"
},
{
"name": "Charlie",
"age": 35,
"city": "Chicago"
}
]
converted to
name,age,city
Alice,30,New York
Bob,25,Los Angeles
Charlie,35,Chicago
As you can see in my return statement above:

return {
    "descriptions": field_descriptions,
    "data_csv": csv_string,
}
So the final tool return would look like this:

{
    "descriptions": {
        "name": "The person's name",
        "age": "The person's age",
        "city": "The city where the person lives"
    },
    "data_csv": "name,age,city\nAlice,30,New York\nBob,25,Los Angeles\nCharlie,35,Chicago"
}
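For what it's worth, the data_csv string also round-trips cleanly through the stdlib csv module, which is handy for testing or for post-processing before summarization. A self-contained sketch using the exact payload above:

```python
import csv
import io

payload = {
    "descriptions": {
        "name": "The person's name",
        "age": "The person's age",
        "city": "The city where the person lives",
    },
    "data_csv": "name,age,city\nAlice,30,New York\nBob,25,Los Angeles\nCharlie,35,Chicago",
}

# DictReader recovers one dict per row; all values come back as strings
rows = list(csv.DictReader(io.StringIO(payload["data_csv"])))
print(rows[0])  # {'name': 'Alice', 'age': '30', 'city': 'New York'}
```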
This return value will be passed to the LLM to read and analyze.

Why did I come up with this solution?
- JSON repeats the same keys on every entry, while in CSV format we only define them once at the very top.
- Fewer keys = smaller size = fewer tokens used.
- We can also pass a description for every field of the CSV, like this:

"descriptions": {
    "name": "The person's name",
    "age": "The person's age",
    "city": "The city where the person lives"
},
"data_csv": "name,age,city\n....."
Now the LLM should have the context of what each field actually means.
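A quick way to sanity-check the size argument on this small example (the 80% figure is the author's estimate for large option-chain payloads; actual savings vary with the data):

```python
import csv
import io
import json

rows = [
    {"name": "Alice", "age": 30, "city": "New York"},
    {"name": "Bob", "age": 25, "city": "Los Angeles"},
    {"name": "Charlie", "age": 35, "city": "Chicago"},
]

# Encode the same rows both ways and compare character counts
json_text = json.dumps(rows)

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "age", "city"], lineterminator="\n")
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()

print(len(json_text), len(csv_text))  # CSV is noticeably shorter
```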