algolia-webcrawler
Amazon Lambda
Works perfectly locally, but any attempt to run it as a Lambda on AWS is unsuccessful. Execution starts and ends before parsing, without any error messages:
START RequestId: a966b45b-42df-4591-8d5b-4b49495a486e Version: $LATEST
2020-02-06T14:44:01.116Z a966b45b-42df-4591-8d5b-4b49495a486e INFO Welcome to "myapp" algolia-webcrawler v3.2.3
2020-02-06T14:44:01.173Z a966b45b-42df-4591-8d5b-4b49495a486e INFO undefined
2020-02-06T14:44:01.193Z a966b45b-42df-4591-8d5b-4b49495a486e INFO Loaded "./config.json" configuration
2020-02-06T14:44:01.193Z a966b45b-42df-4591-8d5b-4b49495a486e INFO undefined
2020-02-06T14:44:01.274Z a966b45b-42df-4591-8d5b-4b49495a486e INFO Configuring your index batt
END RequestId: a966b45b-42df-4591-8d5b-4b49495a486e
REPORT RequestId: a966b45b-42df-4591-8d5b-4b49495a486e Duration: 6671.46 ms Billed Duration: 6700 ms Memory Size: 128 MB Max Memory Used: 101 MB Init Duration: 119.32 ms XRAY TraceId: 1-5e3c262a-dd0d270c69f75a30a718ae98 SegmentId: 0bd2d5500772a425 Sampled: true
Any ideas?
I've never even tried it!
But it may take time to read the sitemap and fetch all the pages, so maybe the Lambda function kills the process before all the fetches have completed?
It was never meant to be run in an HTTP handler; it's more of a command-line program.