Ian Dees
Unfortunately we don't have the funding to support historical data releases in the public API. This means the API endpoints for older releases won't work. I can update the docs...
On the ACS data endpoints we have a "fake release" called "latest" that looks for the most recent data that contains the geoid you're asking for. Perhaps we could add...
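To illustrate the idea, here's a minimal sketch of how a "latest" pseudo-release could fall back through known releases until it finds one containing the requested geoid. The release list, schema names, and `geoheader` lookup are assumptions for illustration, not the actual census-api code:

```python
# Hypothetical sketch: resolve the "latest" pseudo-release by walking
# releases from newest to oldest until one contains the requested geoid.
RELEASES = ["acs2015_1yr", "acs2015_5yr", "acs2014_1yr", "acs2014_5yr"]  # assumed names

def resolve_latest(cursor, geoid):
    """Return the newest release whose geoheader contains this geoid, or None."""
    for release in RELEASES:
        cursor.execute(
            "SELECT 1 FROM {}.geoheader WHERE geoid = %s LIMIT 1".format(release),
            (geoid,),
        )
        if cursor.fetchone():
            return release
    return None  # geoid not present in any release we know about
```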
After doing another data update for 2015 ACS data, I'm really interested in making this happen. I'd like to figure out a way to keep Census Reporter running for longer...
The way I like to describe it is that it's one huge spreadsheet with thousands of _columns_ (columns are grouped into _tables_ of related columns – these are only...
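As a rough mental model, the shape of one geography's row looks something like this. The table and column IDs follow the ACS naming pattern, but the values are dummy placeholders, not real estimates:

```python
# Illustrative shape only: one row per geography, with columns grouped
# into tables (e.g. B01001 "Sex by Age" is a table, B01001001 a column).
# Values below are made-up placeholders.
row_for_geoid = {
    "geoid": "04000US02",
    "B01001": {
        "B01001001": 1000.0,  # total (dummy value)
        "B01001002": 500.0,   # male (dummy value)
    },
    "B01003": {
        "B01003001": 1000.0,  # total population (dummy value)
    },
}
```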
The current setup is one EC2 instance querying an RDS instance. The 300GB EBS volume attached to the RDS instance is currently the biggest expense we have and what I...
It's only ever an ID lookup. There's currently no functionality to do things like `WHERE population > 50000`, only `WHERE geo_id IN ('04055', '04023')`.
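In other words, every query the API issues looks roughly like this. This is a sketch using psycopg2; the schema and table names are assumptions:

```python
import psycopg2

# Sketch of the only query shape needed: fetch rows for a fixed set of
# geoids. There is never a filter on data values, only on geo_id.
conn = psycopg2.connect("dbname=census")
with conn.cursor() as cur:
    # psycopg2 adapts a Python tuple to a SQL list for the IN clause.
    cur.execute(
        "SELECT * FROM acs2015_5yr.B01003 WHERE geoid IN %s",  # assumed schema/table
        (("04055", "04023"),),
    )
    rows = cur.fetchall()
```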
There are many, many more in the 5-year, but splitting into files per geoid does make sense for the kinds of queries people would tend to do (focused on geographies).
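A hedged sketch of what that per-geoid split could look like on S3; the bucket layout, key pattern, and helper name are just assumptions for illustration:

```python
import gzip
import json

import boto3

s3 = boto3.client("s3")

def write_geoid_blob(bucket, release, geoid, data):
    """Write one gzipped JSON file per geoid, keyed by release and geoid."""
    key = "{}/{}.json".format(release, geoid)  # e.g. acs2015_5yr/04000US02.json
    body = gzip.compress(json.dumps(data).encode("utf-8"))
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=body,
        ContentType="application/json",
        ContentEncoding="gzip",  # lets clients decompress transparently
    )
```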
You can probably leave out geospatial indexing for now. We can pre-generate vector tiles for all of TIGER.
Thanks for doing this, @migurski. This is really great. I will take a closer look tomorrow, but this seems like a great start. Some first thoughts:

- To match our...
I tried the gzipped JSON blob idea. Example is posted on S3 [here](https://s3.amazonaws.com/embed.censusreporter.org/test/04000US02.json). It's 249KB gzipped on S3 and 1.1MB expanded in the browser. Timing a fetch, decompress, parse says...
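For reference, a rough Python equivalent of that fetch/decompress/parse timing (not the browser-side measurement described above, just a sketch against the same S3 URL):

```python
import json
import time

import requests

url = "https://s3.amazonaws.com/embed.censusreporter.org/test/04000US02.json"

t0 = time.time()
resp = requests.get(url)   # requests decompresses gzip transparently
t1 = time.time()
data = json.loads(resp.text)
t2 = time.time()

print("fetch+decompress: {:.0f} ms".format((t1 - t0) * 1000))
print("parse:            {:.0f} ms".format((t2 - t1) * 1000))
print("top-level keys:   {}".format(len(data)))
```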