page-lab
PageLab enables web performance, accessibility, and SEO testing at scale.
Set up a model for logging miscellaneous errors, warnings, debug messages, etc., with fields such as timestamp, message, traceback, and level:
```
LOG_LEVEL_CHOICES = (
    ('0', 'info',),
    ('1', 'log',),
    ('2', 'warn',),
    ('3', 'error',),
    ('4', 'debug',),
)
```
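A minimal sketch of what such a model might look like, reusing the LOG_LEVEL_CHOICES tuple above (the EventLog name and exact field definitions are assumptions, not the project's actual schema):
```
from django.db import models

class EventLog(models.Model):
    """Catch-all log entry for errors, warnings, and debug output."""
    timestamp = models.DateTimeField(auto_now_add=True)
    level = models.CharField(max_length=1, choices=LOG_LEVEL_CHOICES, default='0')  # choices as defined above
    message = models.TextField()
    traceback = models.TextField(blank=True, default='')
```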
Currently these are open, non-authenticated APIs. Ideally, they should require authentication.
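As a sketch, if the APIs were served through Django REST Framework, token authentication could be enforced per view roughly like this (assumes DRF and its token-auth app are installed; the view name is illustrative):
```
from rest_framework.authentication import TokenAuthentication
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView

class WorkerStatusView(APIView):
    # Reject requests that do not carry a valid API token.
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]

    def get(self, request):
        return Response({'status': 'ok'})
```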
Some APIs that will be needed for the queen server to manage the worker servers:
- Get number of workers and which URLs are being processed
- Pause all workers: ...
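A rough sketch of how such endpoints might be wired up on the queen server; the URL paths, view names, and the Worker model tracking registered workers are assumptions:
```
# urls.py (sketch)
from django.urls import path
from . import views

urlpatterns = [
    path('api/workers/', views.worker_status),        # how many workers, which URLs in flight
    path('api/workers/pause/', views.pause_workers),  # pause all workers
]

# views.py (sketch)
from django.http import JsonResponse
from django.views.decorators.http import require_POST

def worker_status(request):
    workers = Worker.objects.all()   # hypothetical model tracking registered workers
    return JsonResponse({
        'count': workers.count(),
        'urls_in_progress': list(workers.values_list('current_url', flat=True)),
    })

@require_POST
def pause_workers(request):
    Worker.objects.update(paused=True)   # workers would check this flag before claiming new URLs
    return JsonResponse({'paused': True})
```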
Currently, if a worker fails for some reason (an uncaught exception, for example), it does not re-spawn. Thus, over time the number of concurrent workers has the potential to slowly...
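One way to address this would be a supervisor loop that re-spawns a worker process whenever one exits; a minimal sketch, where the worker command and pool size are placeholders:
```
import subprocess
import time

WORKER_CMD = ['node', 'worker.js']   # placeholder for the actual worker command
POOL_SIZE = 4

def supervise():
    procs = [subprocess.Popen(WORKER_CMD) for _ in range(POOL_SIZE)]
    while True:
        for i, proc in enumerate(procs):
            if proc.poll() is not None:                      # worker exited (crash or otherwise)
                procs[i] = subprocess.Popen(WORKER_CMD)      # re-spawn to keep the pool size constant
        time.sleep(5)
```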
Add a collector for localStorage length at the end of each run. This makes it possible to report on which pages store data in localStorage and how much (poor perf for...
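On the Django side, the collected value could then be reported on with a simple aggregation; a sketch assuming each run is stored as a hypothetical TestRun row with a local_storage_bytes field populated from the collector:
```
from django.db.models import Avg

heaviest_pages = (
    TestRun.objects
    .values('page__url')                                   # hypothetical FK from run to page
    .annotate(avg_ls_bytes=Avg('local_storage_bytes'))     # average localStorage size per page
    .order_by('-avg_ls_bytes')[:20]
)
```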
On the report detail page, create a run-history datatable showing user-timing measures for each run. This would be identical to the existing KPI run history table, but showing all user-timing...
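A sketch of how the per-run measures might be pulled out of the stored Lighthouse JSON for such a table; the audit path follows Lighthouse's `user-timings` audit, while the report/run model names are assumptions:
```
def user_timing_rows(report):
    """Yield one row per run with its user-timing measures."""
    for run in report.testrun_set.order_by('-created'):          # hypothetical related runs
        audit = run.lighthouse_json.get('audits', {}).get('user-timings', {})
        items = audit.get('details', {}).get('items', [])
        measures = {
            item['name']: item.get('duration')
            for item in items
            if item.get('timingType') == 'Measure'               # skip marks, keep measures
        }
        yield {'run_id': run.id, 'created': run.created, 'measures': measures}
```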
Set up the API/view to pass a Lighthouse config profile ID and its settings along with each URL it provides to the node test runners. This allows different config settings to...
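A sketch of a queue endpoint handing each URL to the node runners along with a profile ID and its settings; the QueueItem model, the page's lighthouse_profile FK, and the settings field are assumptions:
```
from django.http import JsonResponse

def next_url(request):
    """Hand the next queued URL to a test runner, including its Lighthouse profile."""
    item = QueueItem.objects.filter(claimed=False).first()   # hypothetical queue model
    if item is None:
        return JsonResponse({'url': None})
    profile = item.page.lighthouse_profile                   # hypothetical FK to a config profile
    return JsonResponse({
        'url': item.page.url,
        'profile_id': profile.id,
        'settings': profile.settings,                        # JSON blob of Lighthouse settings
    })
```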
Title says it all. Allows you to see, within the run history table, which profile was used for each report. This gets messy when you start talking about the "average"...
Enable test runners to post the ID of the config settings profile used for each URL when posting the Lighthouse report data back to the Django app. This allows the relationship...
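On the Django side, the report-ingest view could read the profile ID out of the POSTed payload and store the relationship; a sketch in which the view, payload keys, and TestRun model are assumptions:
```
import json
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST

@csrf_exempt          # sketch only; see the authentication issue above
@require_POST
def ingest_report(request):
    payload = json.loads(request.body)
    run = TestRun.objects.create(                       # hypothetical model
        page_url=payload['url'],
        lighthouse_json=payload['report'],
        config_profile_id=payload.get('profile_id'),    # links the run back to the profile used
    )
    return JsonResponse({'id': run.id})
```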
Enable the node app to accept a Lighthouse run config object that contains the settings to use for the test run of each URL it gets from the queue.
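For reference, the config object stored on a profile and shipped to the node app could mirror a standard Lighthouse config; the keys below are real Lighthouse config options, while how PageLab names and stores the profile field is an assumption:
```
# Example value for a profile's settings JSON field, shaped like a Lighthouse config object.
MOBILE_PERF_PROFILE = {
    'extends': 'lighthouse:default',
    'settings': {
        'onlyCategories': ['performance', 'accessibility'],
        'formFactor': 'mobile',
        'throttlingMethod': 'simulate',
    },
}
```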