Builtwith API Project
Overview
Project: Open Community Survey
Volunteer Opportunity: Create a scraper to get information from builtwith.com on the technologies used by neighborhood council (NC) websites, organize the data (create categories for the tech), and automate the scrape job to run periodically. Additionally, we want to display this information with a dashboard (see the Google Data Studio Dashboard linked below under "Project output" for an example). A minimal API sketch follows this overview.
Contact: ~Ryan Swan (data science), Kaylani (open community survey)~ Bonnie
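A minimal sketch of the kind of scrape job described above, in Python. The endpoint shown is BuiltWith's public free-tier lookup; the exact endpoint, parameters, response shape, example domain, and the BUILTWITH_API_KEY environment variable are all assumptions to verify against BuiltWith's current API docs, not the project's actual script.
```python
# Minimal sketch: pull technology data for one NC-related site from the
# BuiltWith API. Endpoint path, params, and response shape are assumptions.
import json
import os

import requests

API_KEY = os.environ["BUILTWITH_API_KEY"]  # assumed env var, not in the repo
SITE = "empowerla.org"                     # example domain

resp = requests.get(
    "https://api.builtwith.com/free1/api.json",
    params={"KEY": API_KEY, "LOOKUP": SITE},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

# Dump the raw response so it can be categorized / analyzed later.
with open(f"{SITE}.json", "w") as f:
    json.dump(data, f, indent=2)

# The free-tier response is assumed to contain a list of technology groups.
for group in data.get("groups", []):
    print(group.get("name"))
```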
Action Items
- [x] Create a wiki page
- [x] Build a scraper that we can reuse to get the data on the NC site technologies
- [x] Add the scripts and other code to the data science repo, or if another repo is required, let the leads know.
- [x] Create a Spreadsheet from the results of initial scrape
- [x] ~Create a set of categories in the spreadsheet~
- [x] Rework the script to grab the category as well as the technology (it's available in the API)
- [x] Add the category to each technology so that the data can be grouped and analyzed; this happens automatically once the prior item is done
- [ ] Assess code for current scraper to determine if it still functions properly
- [ ] Perform additional analysis on the Widgets technology category (see the sketch after this list): Which sites are using calendars? What are the calendars used for (events of the NC or local events)? Which sites use chatbots? Which sites have search functionality? How many sites use translation widgets?
- [ ] Finish analysis of the following technology categories: Content Management System (CMS), Mobile, SSL, Payment, Framework, and Copyright
- [ ] Fix directory issues with code. Currently, it's in the 311 directory but needs to be moved to the open community survey directory.
- [ ] ~Create a reusable matching table of technology to category~
- [ ] ~Create a script to be able to create a new spreadsheet with the matching table so that the technologies are already categorized (except of course the ones that are new).~
- [ ] ~create instructions for updating matching table and running scripts.~
- [ ] Make sure the wiki is updated.
- [ ] Release dependency on https://github.com/hackforla/open-community-survey/issues/28
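A rough pandas sketch of the Widgets analysis item above, assuming the scrape results have been exported to a CSV with hypothetical site / technology / category columns (the real spreadsheet may use different names):
```python
# Rough sketch of the Widgets-category questions above; column names and the
# CSV file name are assumptions about the scraper output, not the real schema.
import pandas as pd

df = pd.read_csv("builtwith_results.csv")  # assumed export of the scrape results
widgets = df[df["category"].str.lower() == "widgets"]

def sites_using(keyword: str) -> list[str]:
    """Return the NC sites whose widget technology name contains `keyword`."""
    hits = widgets[widgets["technology"].str.contains(keyword, case=False, na=False)]
    return sorted(hits["site"].unique())

for feature in ["calendar", "chat", "search", "translate"]:
    print(feature, "->", sites_using(feature))
```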
Resources/Instructions
External Tools
- Builtwith
  - https://builtwith.com
  - BuiltWith API
- API limitations: Some sites are resistant to being crawled (WordPress, for instance: https://atwatervillage.org/calendar/), so we need a list of all the sites that can't be put through the sitemap maker (a quick check for this is sketched after this list). See notes about WordPress site crawling: https://community.funnelback.com/knowledge-base/implementation/Gather-And-Index/integration/crawl-wordpress-sites
- Selenium
- Docker
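As a companion to the API-limitations note above, a hedged sketch of flagging which NC sites lack a usable /sitemap.xml; the nc_sites.txt input file is hypothetical and the check is deliberately crude:
```python
# Quick check for which NC sites expose a /sitemap.xml, to build the list of
# sites that can't be put through the sitemap maker. Purely illustrative.
import requests

with open("nc_sites.txt") as f:            # one domain or URL per line (assumed file)
    sites = [line.strip() for line in f if line.strip()]

no_sitemap = []
for site in sites:
    url = site if site.startswith("http") else f"https://{site}"
    try:
        r = requests.get(f"{url.rstrip('/')}/sitemap.xml", timeout=10)
        # Accept either a plain urlset or a sitemap index; anything else fails.
        if r.status_code != 200 or ("<urlset" not in r.text and "<sitemapindex" not in r.text):
            no_sitemap.append(site)
    except requests.RequestException:
        no_sitemap.append(site)

print("Sites without a usable sitemap:", no_sitemap)
```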
Tutorial
Project input (data)
- Target Website List Here - this is one tab on a larger analysis workbook.
Project output
- Data Science wiki, 99 NC project
- Spreadsheet of Rajinder's script results
- Spreadsheet of updated script results
- Example of Google Data Studio Dashboard
Rajinder's code
- code on data-science repo with Rajinder + Willa's code - this will need to be moved to another directory. It has nothing to do with 311; it's a project for Open Community Survey.
- Rajinder's personal repo - this seems to be updated more recently than the one on data-science.
Current presentation
- OCS: Tech usage insights NCs
- Analytics Analysis Workbook
- Widgets Analysis Workbook
Related issues from OCS
- https://github.com/hackforla/open-community-survey/issues/25
Past Collaborators:
@akibrhast, @ava li, @Sarah Williams, @wendywilhelm10 @rajindermavi @ShikaZzz @JessicaFB @Poorvi Rao
@ryanswan @salice @JessicaFB @poorvi4 @akibrhast, @ava li, @sarah Williams, @wendywilhelm10 I forgot to mention that @mattyweb has a report tool that he has set up on our AWS, and it might be a good place to dump all this data (from the Comparative analysis of features and then the technologies from the builtwith API). Then we can define the types of reports we want to display, and it would allow people to look at a single NC, as well as aggregate stats, etc. Think of it as a real-time data visualizer for end users once we have figured out what is worth looking at. At least that's my understanding of how it works. It would be good to ask him to come to the Data Science Community of Practice at some point to discuss it.
@salice Please provide update
- Progress
- Blockers
- Availability
- ETA
@ryanmswan Please provide update
- Progress
- Blockers
- Availability
- ETA
@rajindermavi here is the drive with the tutorials. They're large files so they might be still uploading for a few minutes.
@chelseybeck The folder is empty.
@rajindermavi Please provide update:
- Progress
- Blockers
- Availability
- ETA
@ebele-oputa
I got the Docker/Selenium version working. I can scrape builtwith and output data to a JSON file. I am now going to try to get all websites from the online source using Selenium. I'll be at the data science meeting tonight to discuss.
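For context, a hedged sketch of what a Docker-friendly Selenium scrape of a builtwith.com results page might look like; the example domain, the h6 selector, and the JSON layout are placeholders, not the code from the actual pull request:
```python
# Sketch of the Selenium approach described above: load a BuiltWith page for
# one site in headless Chrome and dump some rendered text to JSON.
import json

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.add_argument("--headless=new")
options.add_argument("--no-sandbox")       # typically needed inside Docker

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://builtwith.com/empowerla.org")   # example lookup
    # Grab the headings of each technology group on the results page
    # (the tag/selector is an assumption and may need updating).
    groups = [el.text for el in driver.find_elements(By.CSS_SELECTOR, "h6")]
    with open("empowerla.org.json", "w") as f:
        json.dump({"site": "empowerla.org", "groups": groups}, f, indent=2)
finally:
    driver.quit()
```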
@ebele-oputa
I made a pull request with the web scraping for all websites. It includes a Dockerfile and a script that produces a JSON file, which I included.
Thanks @rajindermavi for the updates and the work done! Looking forward to receiving a readable file containing the list in a Google sheet.
@rajindermavi If my understanding is correct, the current issue is that we need to extract the link of each webpage in an NC website and run them through builtwith, because builtwith can only analyze the technology used for a specific link rather than the entire website.
In order to extract the links of each page, we could use an online tool, but the running time is super long and it also extracts the PDF file links. Instead, we could use a Python package or /sitemap.xml, but neither of these two methods can return results if a website does not have a sitemap.
What would be the ETA for the next step?
The issue is not that the site does not have a sitemap; it's that some sites are resistant to being crawled (WordPress, for instance: https://atwatervillage.org/calendar/). So what we need is a list of all the sites that can't be put through the sitemap maker.
See notes about WordPress site crawling: https://community.funnelback.com/knowledge-base/implementation/Gather-And-Index/integration/crawl-wordpress-sites
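To make the two approaches in this thread concrete, here is a hedged sketch that tries /sitemap.xml first and falls back to pulling homepage links (skipping PDFs) for sites that resist the sitemap route; the example domain comes from the comment above and everything else is illustrative:
```python
# Sketch of the link-extraction step: sitemap first, shallow homepage crawl
# as a fallback. Not the project's actual crawler.
from urllib.parse import urljoin
import xml.etree.ElementTree as ET

import requests
from bs4 import BeautifulSoup

def page_urls(site: str) -> list[str]:
    base = site if site.startswith("http") else f"https://{site}"
    # 1) Try the sitemap.
    try:
        r = requests.get(f"{base.rstrip('/')}/sitemap.xml", timeout=10)
        if r.ok and "<urlset" in r.text:
            ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
            root = ET.fromstring(r.content)
            return [loc.text for loc in root.findall("sm:url/sm:loc", ns)]
    except requests.RequestException:
        pass
    # 2) Fall back to homepage links (same-site only, skip PDF links).
    r = requests.get(base, timeout=10)
    soup = BeautifulSoup(r.text, "html.parser")
    links = {urljoin(base, a["href"]) for a in soup.find_all("a", href=True)}
    return sorted(u for u in links if u.startswith(base) and not u.endswith(".pdf"))

print(page_urls("atwatervillage.org")[:20])
```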
@rajindermavi can you provide a progress report on this issue?
- Rajinder's personal webscraping repo - seems to be ahead of the repo https://github.com/hackforla/data-science/tree/main/311-data/webscraping
Finished scraping and is working on extracting data from the JSON file and on data analysis. Plans: extract the data, possibly arrange the data into a unified dataframe, then do the data analysis. ETA for the data analysis is about a week.
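A sketch of the "unified dataframe" step mentioned above, assuming one JSON file per site with a hypothetical site / technologies layout (the real scraper output may differ):
```python
# Flatten one JSON file per site into a single tidy DataFrame. The JSON layout
# shown here is an assumption about the scraper output, not its actual schema.
import glob
import json

import pandas as pd

rows = []
for path in glob.glob("output/*.json"):   # assumed location of per-site results
    with open(path) as f:
        record = json.load(f)
    for tech in record.get("technologies", []):
        rows.append(
            {
                "site": record.get("site"),
                "technology": tech.get("name"),
                "category": tech.get("category"),
            }
        )

df = pd.DataFrame(rows)
df.to_csv("builtwith_results.csv", index=False)
print(df.head())
```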
Abe and Bonnie will clean this issue up. Objective is that this spreadsheet OCS: Builtwith data on 99 NCs technologies will have the data on the NCs that is needed for understanding what they use tech for.
At the top of this issue there was a link for [OCS - NC: Competitive or Comparative Analysis Template] but it went to https://www.sciencedirect.com/science/article/abs/pii/S0161642016307321, which is clearly a mistake, so we removed it.
@akhaleghi - we finished reviewing this issue
Resources we have questions about
Why are we linking to a specific branch in our repo? Selenium Scraping Tools - see branch. It's not necessarily a problem, but it would be good to know why and document it here.
Rajinder's code seems to be in two places. Please sort this out:
- code on data-science repo with Rajinder's code - this will need to be moved to another directory. It has nothing to do with 311; it's a project for Open Community Survey.
- Rajinder's personal repo - this seems to be updated more recently than the one on data-science.
Review Action Items above
Please review the action items at the top of this issue with Ryan and Sophia so that they can identify any new steps we need to add to the issue, either because they are missing or because we now need to do something to get it back on track (e.g., sorting out the difference between old and new code from Rajinder).
Missing notes
Also, I remember Rajinder saying something about the API timing out, or having a limit to how many API calls you could make from one IP. So it's possible that we will need to build throttling or IP hopping into the script if there is none, or ask them for a non-profit license to use their API in exchange for credit/logo placement in our final published public report. But it would be good to get Rajinder to document what he was experiencing, so we don't have to recreate the issue.
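If throttling does turn out to be needed, a sketch along these lines could be a starting point; the delay values, retry policy, and the fetch_tech_data callable are all assumptions, not documented BuiltWith limits:
```python
# Hedged sketch of throttling the BuiltWith calls in case the API really does
# rate-limit per IP. `fetch_tech_data` stands in for the real API call.
import time

import requests

def fetch_with_throttle(sites, fetch_tech_data, delay=2.0, max_retries=3):
    """Call `fetch_tech_data(site)` for each site, pausing between calls and
    backing off exponentially when the API pushes back (HTTP 429 / timeouts)."""
    results = {}
    for site in sites:
        for attempt in range(max_retries):
            try:
                results[site] = fetch_tech_data(site)
                break
            except requests.HTTPError as err:
                if err.response is not None and err.response.status_code == 429:
                    time.sleep(delay * 2 ** attempt)   # exponential backoff
                else:
                    raise
            except requests.Timeout:
                time.sleep(delay * 2 ** attempt)
        time.sleep(delay)   # base pause between sites
    return results
```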
Update: Messaged Rajinder to get him to update the files in the data science repository.
Updated the files in the data science repo. The scraper now produces a table including tech categories, tech URLs, and the total usage count of each tech (total_count). Linked the new output as a Google Sheet in the README.
Added the new spreadsheet to the OCS folder and updated the project wiki page.
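For reference, an illustrative reconstruction of how a total_count column like the one described above could be produced from the unified rows; column names are assumptions matching the earlier sketch, not necessarily the columns in the published sheet:
```python
# Count how many NC sites use each technology, keeping the category alongside.
import pandas as pd

df = pd.read_csv("builtwith_results.csv")   # assumed unified site/technology/category rows

tech_table = (
    df.groupby(["technology", "category"])["site"]
      .nunique()
      .reset_index(name="total_count")
      .sort_values("total_count", ascending=False)
)
tech_table.to_csv("tech_table.csv", index=False)
print(tech_table.head(10))
```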
Hi @willa-mannering are there any updates to the issue for this week?
No new updates for this issue, it should be finished now.
@willa-mannering We just looked at the results and it looks like we will need to dive deeper into the results that come back for framework. For instance
When clicking on
Organization Schema | https://trends.builtwith.com/framework/Organization-Schema | framework
the section with a red outline (in the attached screenshot) tells us what we need to know about the framework. In this instance, it's schema.org.
[screenshot: schema.org]
In the next example it's WordPress: Elegant Themes | https://trends.builtwith.com/framework/Elegant-Themes | framework
[screenshot: wordpress.org]
Potential additional information I can collect from each tech type includes: subcategories (i.e. WordPress Theme), tech description, tech website link, number of sites currently using tech, and competing/similar techs.
Pull all the additional info available with the script, and then decide what information is needed at a future meeting.
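A very rough sketch of pulling extra detail (description, outbound website link) for one technology from its trends.builtwith.com page; the page structure is an assumption and the selectors would need to be confirmed against the live site before relying on this:
```python
# Illustrative only: the real script may use different selectors or the API.
import requests
from bs4 import BeautifulSoup

url = "https://trends.builtwith.com/framework/Elegant-Themes"   # example from above
r = requests.get(url, timeout=15)
soup = BeautifulSoup(r.text, "html.parser")

# Meta description as a stand-in for the "tech description" field.
meta = soup.find("meta", attrs={"name": "description"})
description = meta.get("content") if meta else None

# First external link on the page as a stand-in for the "tech website" field.
external = next(
    (a["href"] for a in soup.find_all("a", href=True)
     if a["href"].startswith("http") and "builtwith.com" not in a["href"]),
    None,
)

print({"url": url, "description": description, "tech_website": external})
```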
@willa-mannering @ambersu123 are there updates on what additional information needs to be pulled with this script?
No decision on what additional info to pull. I've written a script to pull all options mentioned in my previous comment and am now waiting on input from the OCS team.
@willa-mannering you said
Potential additional information I can collect from each tech type includes: subcategories (i.e. WordPress Theme), tech description, tech website link, number of sites currently using tech, and competing/similar techs.
OCS team said this in response
Pull all the additional info available with the script and then decide what information is needed in the future meeting
So to be clear, we are saying yes, please pull all the information you said you could pull.
@akhaleghi please add this as a recurring reporting item to our DS/Org agenda
@willa-mannering It looks like this got discussed at a meeting but never annotated on this issue: we only need the above subcategories for the items marked TRUE in the OCS: Builtwith tech_table, tech_categories tab.
The different columns are for our own reference and have no significance for you. Just grab more info for any of the columns marked TRUE.
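A small sketch of applying the "items marked TRUE" rule, assuming the tech_categories tab has been exported to CSV; the file name and the technology column are assumptions:
```python
# Keep only the technologies flagged TRUE in any reference column, so the
# extra-info pull runs just for those rows.
import pandas as pd

categories = pd.read_csv("tech_categories.csv")   # assumed export of the OCS sheet tab

# Columns other than the name columns are assumed to be the TRUE/FALSE flags.
flag_cols = [c for c in categories.columns if c not in ("technology", "category")]
mask = (
    categories[flag_cols]
    .astype(str)
    .apply(lambda col: col.str.upper() == "TRUE")
    .any(axis=1)
)
wanted = categories[mask]
print("Technologies to enrich:", wanted["technology"].tolist())
```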