alpha_vantage
extended time series support
I could be doing something wrong, but I'm getting a JSONDecodeError when trying to use the extended time-series API:
envs\DataStore\lib\json\decoder.py in raw_decode(self, s, idx)
    353             obj, end = self.scan_once(s, idx)
    354         except StopIteration as err:
--> 355             raise JSONDecodeError("Expecting value", s, err.value) from None
    356         return obj, end

JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Sample code:

import alpha_vantage as av
from alpha_vantage.timeseries import TimeSeries
import os

source_key = os.getenv('ALPHAVANTAGE_API_KEY')
ts = TimeSeries(source_key)
ts.get_intraday_extended('SPY', interval='15min', slice='year1month1')
I am surprised no one responded in two months, but I face the same problem.
After debugging the sources and reading the documentation, it is clear that either this endpoint never worked or Alpha Vantage made a breaking change, because per the docs:
...this endpoint uses the CSV format which is more memory-efficient than JSON...
This wrapper naively appends &datatype=json to the end of the URI, which works for the regular API but not for the extended one.
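The mismatch is easy to reproduce locally (a sketch with made-up CSV data, no API call involved): feeding a CSV body to json.loads raises exactly the error shown in the traceback above.

```python
import json

# The extended endpoint returns CSV regardless of &datatype=json,
# so the wrapper ends up trying to JSON-decode a CSV body like this
# (sample rows invented for illustration):
csv_body = (
    "time,open,high,low,close,volume\n"
    "2021-03-19 16:00:00,389.50,389.80,389.10,389.62,1200000\n"
)

try:
    json.loads(csv_body)
except json.JSONDecodeError as err:
    # Same message as in the issue: Expecting value: line 1 column 1 (char 0)
    print(err)
```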
So a hotfix would be to request output_format="csv" and then convert the result with pandas, for example.
Note that if you don't have a premium API key, you get only 5 API requests per minute.
Not an API issue. I can get a response if I specify CSV format; the issue becomes converting it. As mentioned in https://github.com/RomelTorres/alpha_vantage/issues/287#issuecomment-803411808, the output is a _csv.reader object, so pipe it through list() and then use pandas.DataFrame.from_records to convert it back to a DataFrame:
csv_list = list(_csv.reader)
df = pandas.DataFrame.from_records(csv_list[1:], columns=csv_list[0])
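Putting the two lines above into a runnable sketch (the _csv.reader returned by the wrapper is simulated here with csv.reader over an in-memory string, with invented sample rows, so nothing hits the live API):

```python
import csv
import io

import pandas as pd

# Simulated response body; the real get_intraday_extended(..., output_format='csv')
# call returns a one-shot _csv.reader over text shaped like this.
body = (
    "time,open,high,low,close,volume\n"
    "2021-03-19 16:00:00,389.50,389.80,389.10,389.62,1200000\n"
    "2021-03-19 15:45:00,389.00,389.55,388.90,389.50,980000\n"
)
reader = csv.reader(io.StringIO(body))

csv_list = list(reader)  # materialize the reader; row 0 is the header
df = pd.DataFrame.from_records(csv_list[1:], columns=csv_list[0])
print(df.shape)  # → (2, 6)
```

Note that from_records leaves every column as strings; you would still cast the numeric columns (e.g. df.astype({'open': float})) before doing any math on them.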
As for your API limit: if you want to iterate through all 24 slices, you can simply add a throttling delay between requests to avoid hitting it.
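A minimal sketch of that throttled loop, assuming a hypothetical fetch_slice callable that wraps the actual API call (names and the 12.5 s delay are my own choices; 24 requests at 5 per minute needs at least 12 s between calls on the free tier):

```python
import time

# The 24 slices the extended endpoint exposes: year1month1 .. year2month12.
SLICES = [f"year{y}month{m}" for y in (1, 2) for m in range(1, 13)]

def fetch_all(fetch_slice, delay=12.5):
    """Call fetch_slice(slice_name) for every slice, sleeping between
    requests to stay under the free tier's 5-requests-per-minute limit."""
    results = []
    for slice_name in SLICES:
        results.append(fetch_slice(slice_name))
        time.sleep(delay)
    return results
```

Usage would be something like fetch_all(lambda s: ts.get_intraday_extended('SPY', interval='15min', slice=s)), then concatenating the per-slice DataFrames.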
Why not implement this in the package? This solution works great.
[2023-07] Temporarily closing this issue due to inactivity.