Handling pre-flight requests directly
Some endpoints require pre-flight requests. Can iBind handle these? I'm currently implementing a solution like this:
response = client.live_marketdata_snapshot(undConid, ["31"])
while "31" not in response.data[0]:
    sleep(0.1)
    print("not ready ...")
    response = client.live_marketdata_snapshot(undConid, ["31"])
Is this the recommended approach?
We could, yeah. I can implement it, it shouldn't be too much work. Do you think you could help out prepping a list of endpoints that need pre-flight requests?
As for what's recommended - I suggest you ask IBKR support about this, I don't know how their server side is implemented.
Ok. After looking at the code, the pre-flight checks for the /iserver/marketdata/snapshot endpoint are present in the higher-level live_marketdata_snapshot_by_symbol method, which takes a symbol as input. However, these checks are absent in the lower-level live_marketdata_snapshot method, which calls the endpoint directly. For consistency, and to ensure the pre-flight logic is always executed, it would be better to move it into live_marketdata_snapshot. As for the list of endpoints, I see three with pre-flight logic:
- GET /iserver/marketdata/snapshot
- GET /iserver/account/orders
- GET /iserver/contract/{{ conid }}/algos
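To visualise what moving the pre-flight into live_marketdata_snapshot could look like, here is a minimal hedged sketch: a wrapper that issues one throwaway request before the real one. The wrapper name and the settle delay are illustrative assumptions; only the live_marketdata_snapshot call shape is taken from the code earlier in this thread.
import time

def snapshot_with_preflight(client, conid, fields, settle=0.2):
    # Hypothetical wrapper - not part of iBind - showing the pre-flight pattern:
    # the first call merely initialises the server-side subscription, the second
    # call returns the (hopefully) populated snapshot.
    client.live_marketdata_snapshot(conid, fields)  # pre-flight, response discarded
    time.sleep(settle)                              # brief pause before the real request
    return client.live_marketdata_snapshot(conid, fields)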
Thanks for these observations.
Right, I'm trying to think how to implement this correctly. I remember considering doing the pre-flights in the library before, but deciding not to because of the following limitations. We could:
- Do what you suggest initially: check if a field is present in the response. For example, we could check the first field in the fields parameter.
  - However, there's an issue with this. Sometimes that endpoint will return incomplete data, in that only some fields will be made available.
  - I've seen it before myself: out of 10 fields specified, 1 or 2 are never returned. Calling the endpoint repeatedly didn't help.
  - If that's the case, we'd be gambling that the value we check for would be in the batch that does get returned.
- Check for any data being available.
  - But sometimes some fields do take longer to propagate. The following case may occur when trying to query 5 different fields:
    - response one - contains no fields, pre-flight
    - responses two and three - contain fields 1, 2 and 5
    - responses four and onwards - contain fields 1, 2, 3 and 5
  - As per the previous point, field 4 is never returned; field 3 gets returned only after the fourth request.
Here lies the issue: if we communicate to the user that we take care of pre-flights, they may not be inclined to continue calling the endpoint to collect more data, which may be advisable in some cases. I've seen a similar issue with live orders pre-flights.
I'm on the fence, since I do think it would be a good idea to automate things if possible. Not doing it, however, would mean each user has to read about the pre-flights individually and discover, through hands-on trial and error, how the data is fed back to them - allowing them to adjust their system accordingly.
Thoughts?
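To make the trade-off described above concrete, here is a hedged sketch of the first strategy: polling until a chosen subset of fields appears, with an attempt cap so fields that never arrive don't stall the caller. The helper name, the required-fields parameter and the wait/attempt defaults are illustrative only; the live_marketdata_snapshot call shape is taken from the thread.
import time

def poll_snapshot_fields(client, conid, fields, required, max_attempts=10, wait=0.2):
    """Hypothetical helper: poll until every field in `required` is present in
    the first record, or until `max_attempts` requests have been made."""
    response = None
    for _ in range(max_attempts):
        response = client.live_marketdata_snapshot(conid, fields)
        record = response.data[0] if response.data else {}
        if all(field in record for field in required):
            break  # all required fields present - stop polling
        time.sleep(wait)  # incomplete response - wait and try again
    return response  # may still lack some fields; the caller can keep polling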
That's a very interesting point. I think the responsibility for pre-flight requests should fall to the developer rather than be handled automatically by iBind. As you pointed out, we don't always know how the server works, and too many requests could get us IP-banned by IBKR. While automatic pre-flight handling in iBind would be very convenient, it might be better to disable it by default, with an option to enable it when needed.
Right, thanks for sharing your view on this. I think I agree, we shouldn't take care of this for users.
I updated the wiki to explain the pre-flight situation better. I think this is a responsible solution to the problem. Please have a look and let me know if anything could be modified there: https://github.com/Voyz/ibind/wiki/Advanced-REST#-pre-flight-requests
It may be overkill, but here is the retry routine that I have implemented to test data keys for presence / truthiness and retry if they are missing. I have done preliminary testing, and so far so good. Any feedback / refinement welcomed.
# Assumes the enclosing module provides `import time`, a module-level `logger`
# and a unique `_SENTINEL = object()` marker; `self._retries` is set in __init__.
def _retry_call(self, func, keys=None, retries=None, sleep=None, default=_SENTINEL, **kwargs):
    """
    Wrapper for API calls with retries.

    :param func: API function to call.
    :param keys: List or tuple of keys to test. If any key is itself a list or tuple, index 1 onward
                 of that key is used for a recursive key test of data[index 0]. If any lookup fails
                 or yields a falsy result, a retry is attempted.
    :param retries: Retries override. If not specified, the initialised retry count is used.
    :param sleep: Sleep time in seconds between retries.
    :param default: Default value to return when the retry limit is exceeded.
    :param kwargs: Arguments passed to the API function call.
    :return: Result of the API call.
    """
    retries = self._retries if retries is None else retries

    def test_keys(data, keys):
        # Normalise a single key into a list, then check every provided key for a truthy value.
        keys = keys if isinstance(keys, (list, tuple)) else [keys]
        for k in keys:
            if isinstance(k, (list, tuple)):
                # Sequence key: drill down into data[k[0]] and test the remaining keys recursively.
                test_keys(data[k[0]], k[1:] if len(k) > 1 else [])
            else:
                assert data[k]

    attempt = 0
    while True:
        attempt += 1
        data = func(**kwargs).data
        if keys is not None:
            try:
                test_keys(data, keys)
                break  # All provided keys are present and truthy - response complete, stop retrying.
            except (KeyError, IndexError, TypeError, AssertionError):
                pass  # A provided key is missing or falsy - response incomplete, retry.
        elif data:
            break  # No keys to test and non-empty data - stop retrying.
        if attempt <= retries:
            logger.info(f"{func.__name__}() incomplete response, attempt {attempt} of {retries + 1}")
            if sleep:
                time.sleep(sleep)
        else:
            logger.warning(f"{func.__name__}() incomplete response after maximum attempts ({retries + 1})")
            data = default if default is not _SENTINEL else data
            break
    return data
@zacciep what is the code trying to assert here? That there is data in every list? i.e. is this the core of your test: assert data[k]? Wouldn't it break if data[k] is a boolean with value False?
For simplicity it is looking for a "truthy" value (non-zero or True). From memory, some requests can erroneously return zero values, so key presence alone is not always sufficient to signal a complete response. I anticipate that nearly all requests should include at least one field that is reliably not None, zero or False. If a user wanted to handle False then some isinstance(bool) logic could be added.
The keys argument only tests the keys provided - not every key. It does include the ability to drill down / recurse (though perhaps not the most elegant implementation).
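For completeness, here is a hedged sketch of what that isinstance(bool) tweak might look like as a replacement for the assert inside test_keys; the helper name is hypothetical and this is only a suggestion, not part of the routine posted above.
def _assert_key_value(data, k):
    # Hypothetical variant of the truthiness check: an explicit boolean (even
    # False) counts as a valid value, while missing keys or other falsy values
    # still trigger a retry via the exceptions caught in _retry_call.
    value = data[k]      # raises KeyError/IndexError/TypeError if missing
    if not isinstance(value, bool):
        assert value     # non-boolean values must be truthy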
Thanks for the explanation. You say:
For simplicity it is looking for a "truthy" value (non-zero or True)
But reading the code, I think it tries to ensure that all of them are "truthy", not just looking for "a truthy" value - is that correct?
Unless I've implemented it incorrectly, it should test all "keys" as provided by the "keys" argument - not all keys in the data response - so the user can selectively test keys. Key items that are sequences are used to initiate a recursive drill-down into the data. Some examples:
- keys=[0]: test data[0] for truthiness
- keys=[0, 3]: test data[0] and data[3] for truthiness
- keys=[[0, 'account', 'order_ref'], 1]: test data[0]['account'], data[0]['order_ref'] and data[1] for truthiness (e.g. for data that is a list of dicts)
Hope that helps. But if you think this doesn't align with the code then let me know. It passed my initial (admittedly non-exhaustive) testing.
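To round off the thread, here is a hedged usage sketch of _retry_call against the snapshot endpoint discussed at the top; the wrapper method name, the keyword argument names passed through to live_marketdata_snapshot and the retry settings are assumptions for illustration, not part of iBind or the code above.
# Hypothetical method on the same class that defines _retry_call and
# live_marketdata_snapshot; keyword names are assumed, not taken from iBind.
def snapshot_when_ready(self, conid, fields=("31",)):
    # Re-request the snapshot until data[0]["31"] is present and truthy,
    # or until the retry limit is reached; returns the snapshot data.
    return self._retry_call(
        self.live_marketdata_snapshot,
        keys=[[0, "31"]],   # drill into data[0] and test field "31"
        retries=5,
        sleep=0.2,
        conid=conid,
        fields=list(fields),
    )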