Rblpapi
Exception handling of error messages
I stumbled upon an interesting issue today. I constantly got the message "Error: Choice sub-element not found for name 'securityData'.", which, based on #137, made me think that I had breached the daily/monthly data limit. But I could still get data in Excel and through Rblpapi on smaller queries. Working backwards, I discovered that there was likely a limit of at most 25 fields per request, which Bloomberg support later confirmed.
Here is what Bloomberg support came back with after running my example in Python:
HistoricalDataResponse = {
    RESPONSEERROR = {
        SOURCE = "BBDBH5"
        CODE = 19
        CATEGORY = "BAD_ARGS"
        MESSAGE = "NUMBER OF FIELDS EXCEEDS MAX OF 25 [NID:107] "
        SUBCATEGORY = "TOO_MANY_FIELDS"
    }
}
I was wondering whether it's possible to capture these event types generically.
If the event type returned by Session::nextEvent() is Event.EventType.RESPONSE or Event.EventType.PARTIAL_RESPONSE, loop through the messages of those two event types, check asElement().hasElement("responseError"), and if present throw an exception carrying getElement("responseError").getElement("message"). Could this not generically catch the different error messages that Bloomberg throws back through the EventQueue?
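The proposed control flow could be sketched as follows. This is a minimal Python sketch, using plain dicts to stand in for blpapi Message elements; in the real SDK the checks would be msg.asElement().hasElement("responseError") and so on, and the names BloombergError and check_for_response_error are hypothetical, not Rblpapi's actual code:

```python
# Sketch of the proposed generic error check. Plain dicts stand in for
# blpapi Message elements; all names here are illustrative assumptions.

class BloombergError(Exception):
    """Raised when a (PARTIAL_)RESPONSE event carries a responseError."""

RESPONSE, PARTIAL_RESPONSE = "RESPONSE", "PARTIAL_RESPONSE"

def check_for_response_error(event_type, messages):
    """Throw if any message in a response event has a responseError element."""
    if event_type not in (RESPONSE, PARTIAL_RESPONSE):
        return
    for msg in messages:
        if "responseError" in msg:                # hasElement("responseError")
            err = msg["responseError"]            # getElement("responseError")
            raise BloombergError(err["message"])  # getElement("message")

# Example: the response error Bloomberg support showed above.
msg = {"responseError": {
    "source": "BBDBH5", "code": 19, "category": "BAD_ARGS",
    "message": "NUMBER OF FIELDS EXCEEDS MAX OF 25 [NID:107]",
    "subcategory": "TOO_MANY_FIELDS",
}}
try:
    check_for_response_error(RESPONSE, [msg])
except BloombergError as e:
    print(e)  # NUMBER OF FIELDS EXCEEDS MAX OF 25 [NID:107]
```

A check like this runs once per event type rather than per request type, which is what would let it surface TOO_MANY_FIELDS, BAD_ARGS, and similar errors generically instead of failing later with the opaque "Choice sub-element not found" message.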
I would love to contribute directly to this repository, but I have zero experience with C++, so it would take me considerable time to implement and would likely be ugly code.
Here is my example if you want to reproduce the error in R:
Rblpapi::blpConnect()
fields <- c("CURRENT_EV_TO_T12M_EBITDA", "CURRENT_EV_TO_12M_SALES", "PX_TO_BOOK_RATIO", "PE_RATIO", "PX_TO_SALES_RATIO", "PX_TO_CASH_FLOW", "SHAREHOLDER_YIELD", "DIVIDEND_YIELD", "T12_FCF_YIELD", "CAPEX_TO_DEPR_EXPN_RATIO", "CUR_MKT_CAP", "OPERATING_ROIC", "NORMALIZED_ROE", "PX_LAST", "VOLATILITY_90D", "EQY_FREE_FLOAT_PCT", "ASSET_TURNOVER", "TRAIL_12M_GROSS_MARGIN", "BEST_TARGET_PRICE", "BEST_ANALYST_RATING", "CUR_RATIO", "RETURN_ON_ASSET", "NORMALIZED_ACCRUALS_BS_METHOD", "COM_EQY_TO_TOT_ASSET", "LT_DEBT_TO_COM_EQY", "TRAILING_12M_SALES_GROWTH")
Rblpapi::bdh(securities = "MMM US Equity", fields = fields, start.date = as.Date("2014-01-01"), options = structure("MONTHLY", names = "periodicitySelection"))
This throws an error.
But requesting only the first 25 fields...
Rblpapi::bdh(securities = "MMM US Equity", fields = fields[1:25], start.date = as.Date("2014-01-01"), options = structure("MONTHLY", names = "periodicitySelection"))
...works, and so does the 26th field on its own:
Rblpapi::bdh(securities = "MMM US Equity", fields = fields[26], start.date = as.Date("2014-01-01"), options = structure("MONTHLY", names = "periodicitySelection"))
This is low priority, since everyone following this repository now knows that 25 fields is the limit and that larger requests should be broken into multiple data requests. But a general solution for error messages would still be awesome.
Thanks for that. We should definitely make the code more robust. At a minimum we could do something like stopifnot(length(fields) <= 25)
in R before we call. But I also think you gave us something actionable at the C++ level. Time permitting, I'll try to take a closer look, but I'm a little bogged down with other projects right now, so if somebody else wants to play -- it should make for a nice clean PR.
Does this work in Excel? Do they account for >25 fields in their code?
@armstrtw Just checked in Excel, and apparently it works with more than 25 fields. It is strange to constrain API data access through C++ and Python but not in Excel. Maybe it is because they know that API requests through C++ or Python potentially generate much larger data requests (many fields across many securities), and do so more often, compared to Excel. Or maybe they break the Excel request into multiple requests in the backend (when the number of fields exceeds 25) and then merge the data for the final output in Excel.
Just checked it on my side and we DO take account of this. This is in line with my understanding of how we chunk requests in Excel. The Excel add-in is in effect a consumer of the SDK as much as Rblpapi and the SDK limits historical requests to maximum 25 fields.
I guess we could do the same and just stack 25-column chunks.
Are we guaranteed the same number of rows?
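The chunk-and-stack idea could be sketched roughly as below. This is a Python sketch under stated assumptions: `fetch` is a hypothetical stand-in for one bdh()-style call, and all names are illustrative, not Rblpapi's actual implementation. Merging rows on date (an outer merge) is one answer to the row-count question: if chunks come back with differing dates, the merge leaves gaps rather than misaligning rows.

```python
# Sketch: split a >25-field request into chunks of at most 25 fields,
# issue one request per chunk, and outer-merge the results by date.
# `fetch` is a hypothetical placeholder for a single bdh()-style call.

def chunked(fields, size=25):
    """Yield successive chunks of at most `size` fields."""
    for i in range(0, len(fields), size):
        yield fields[i:i + size]

def fetch(fields):
    """Placeholder data source: each field gets a value on two dates."""
    return {"2014-01-31": {f: 1.0 for f in fields},
            "2014-02-28": {f: 2.0 for f in fields}}

def bdh_chunked(fields, size=25):
    """Run one request per chunk and merge the per-chunk rows on date."""
    merged = {}
    for chunk in chunked(fields, size):
        for date, row in fetch(chunk).items():
            merged.setdefault(date, {}).update(row)
    return merged

fields = [f"FIELD_{i}" for i in range(26)]  # 26 fields -> 2 chunks
result = bdh_chunked(fields)
print(len(result))                  # 2 dates
print(len(result["2014-01-31"]))    # 26 columns after merging
```

In Rblpapi itself the same pattern would presumably be a split of `fields` into groups of 25, one bdh() call per group, and a merge on the date column of the returned data frames.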