geodatadownloader
Implement server crawling
"Pick Layer" isn't really the best UX for those who don't understand the intricacies of ArcGIS REST services.
When somebody specifies a URL, GDD should provide a good UX. It should crawl the server and list the layers, letting the user choose which layer to download from.
General Idea:

Url: https://gismaps.kingcounty.gov/arcgis/rest/services/
- Hydro
  - drainage_basins (MapServer)
    - drainage_basin_boundaries <-- clickable link to geodatadownloader.com?layer=<drainage_layer>
- Property
  - KingCo_Parcels (MapServer)
    - King County Parcels <-- clickable link to geodatadownloader.com?layer=<king_county_parcel>
This way, somebody can quickly start the download process for several layers in different browser tabs at once.
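For anyone implementing this, the standard ArcGIS REST catalog makes the crawl fairly mechanical: every endpoint returns JSON when queried with `?f=json`, the root and each folder list `folders` and `services`, and each MapServer/FeatureServer lists its `layers`. Here's a minimal TypeScript sketch of that walk; note the `geodatadownloader.com?layer=` deep-link format is the hypothetical one from the mock-up above, not an existing route:

```ts
// Sketch: crawl an ArcGIS REST services catalog and list layers as
// download links. Types below are a minimal subset of the real responses.

interface Catalog {
  folders?: string[];
  services?: { name: string; type: string }[];
}

interface Service {
  layers?: { id: number; name: string }[];
}

async function getJson<T>(url: string): Promise<T> {
  const res = await fetch(`${url}?f=json`);
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
  return (await res.json()) as T;
}

// Walk the catalog: recurse into folders, then enumerate the layers of
// each MapServer/FeatureServer so the UI can render them as links.
async function crawl(root: string, folder = ""): Promise<void> {
  const catalog = await getJson<Catalog>(folder ? `${root}/${folder}` : root);

  for (const sub of catalog.folders ?? []) {
    await crawl(root, sub); // folder names are given relative to the root
  }

  for (const svc of catalog.services ?? []) {
    if (svc.type !== "MapServer" && svc.type !== "FeatureServer") continue;
    const serviceUrl = `${root}/${svc.name}/${svc.type}`;
    const service = await getJson<Service>(serviceUrl);
    for (const layer of service.layers ?? []) {
      const layerUrl = `${serviceUrl}/${layer.id}`;
      // Hypothetical deep link into GDD's existing download flow.
      console.log(`${layer.name} -> https://geodatadownloader.com?layer=${encodeURIComponent(layerUrl)}`);
    }
  }
}

crawl("https://gismaps.kingcounty.gov/arcgis/rest/services").catch(console.error);
```

For a big server like King County's, this would probably want to fetch lazily (only crawl a folder when the user expands it) or with a concurrency cap, rather than eagerly walking the whole tree.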
I have been building a search engine for geospatial data, currently focused on ArcGIS REST services. It can be found at https://search.gisdata.io
I have also been building some SDKs so that developers can integrate it into their applications. The Python SDK is available now and can be found here: https://github.com/OmniverseXYZ/gisapi-sdk-python
The JS SDK should be coming very soon.
@pdinkins
Oh this is so interesting! Love the concept of a geospatial search engine.
I've been wanting to build something similar for years now.
I'm curious if you're planning on adding any kind of geospatial aspect to searching for data? Like, zoom on a map and only pull relevant data from the extent on the map.
Also, how are you sourcing your data?
Yes, I plan on building a spatial index of the data and allowing for a map-based search. I am sourcing the data from a list created by a community member, and I am also working on a web-crawling bot that will discover sources automatically.
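To make the map-based search concrete, the extent filter it boils down to could look something like this (a minimal sketch: the `Extent` shape mirrors ArcGIS envelope JSON, but `searchByExtent` and the linear scan are illustrative, not the actual index):

```ts
// Illustration only: filter indexed layers down to those whose bounding
// box intersects the current map viewport. A real deployment would back
// this with a spatial index (e.g. an R-tree) instead of a linear scan.

interface Extent {
  xmin: number;
  ymin: number;
  xmax: number;
  ymax: number;
}

// Two axis-aligned boxes intersect iff they overlap on both axes.
function intersects(a: Extent, b: Extent): boolean {
  return a.xmin <= b.xmax && a.xmax >= b.xmin &&
         a.ymin <= b.ymax && a.ymax >= b.ymin;
}

function searchByExtent<T extends { extent: Extent }>(
  layers: T[],
  view: Extent,
): T[] {
  return layers.filter((layer) => intersects(layer.extent, view));
}
```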
Oh nice! What's your method for crawling?
I wrote a paper back in college for ACM SIGSPATIAL around using a chrome extension to have users do the discovery for you, instead of needing to do it yourself. I have a feeling that crowdsourcing is likely important for keeping high quality data.
@pdinkins idk if you got a notification for this
I saw it but I was headed out of town so I haven't had a chance to read the paper yet. I plan on reading it Monday morning. Would you like to have a quick chat sometime in the next couple weeks? I don't know what your schedule is like because of the holidays but I am available this week Monday-Wednesday.
@pdinkins, email me at [email protected] and we can set up a time to chat!