datajob

Build and deploy a serverless data pipeline on AWS with no effort.

Results: 20 datajob issues, sorted by recently updated

```
with StepfunctionsWorkflow(
    datajob_stack=mailswitch_stack, name="workflow"
) as step_functions_workflow:
    join_labels >> ...
```

It might also be easier to execute a workflow that has the same name as the stack.
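For context, a minimal sketch of how tasks are chained with `>>` inside the workflow context, based on the datajob README examples; the stack id, task names, and job paths here are illustrative, not taken from this issue:

```
from aws_cdk import core
from datajob.datajob_stack import DataJobStack
from datajob.glue.glue_job import GlueJob
from datajob.stepfunctions.stepfunctions_workflow import StepfunctionsWorkflow

app = core.App()

with DataJobStack(scope=app, id="my-stack") as datajob_stack:
    # two illustrative glue jobs; the paths are placeholders
    extract = GlueJob(datajob_stack=datajob_stack, name="extract", job_path="jobs/extract.py")
    load = GlueJob(datajob_stack=datajob_stack, name="load", job_path="jobs/load.py")

    # ">>" defines the execution order of the steps in the workflow
    with StepfunctionsWorkflow(datajob_stack=datajob_stack, name="workflow") as sfn:
        extract >> load

app.synth()
```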

- to avoid errors when executing the datajob CLI:
  - set the region per project
  - set `AWS_PROFILE` per project
  - maybe in a `.datajob/` folder? (a sketch of what that could look like follows)
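A hypothetical sketch of per-project configuration; neither the `.datajob/config` file (assumed INI format here) nor this loading code exists in datajob today:

```
# Hypothetical .datajob/config contents:
#   [aws]
#   region = eu-west-1
#   profile = my-profile
import configparser
import os
from pathlib import Path

def load_project_config(project_root: str = ".") -> None:
    """Read region/profile from .datajob/config and export them as env vars."""
    config_file = Path(project_root) / ".datajob" / "config"
    if not config_file.exists():
        return
    parser = configparser.ConfigParser()
    parser.read(config_file)
    aws = parser["aws"] if "aws" in parser else {}
    # set the env vars the AWS SDK and CDK already understand
    if "region" in aws:
        os.environ.setdefault("AWS_DEFAULT_REGION", aws["region"])
    if "profile" in aws:
        os.environ.setdefault("AWS_PROFILE", aws["profile"])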

- start an execution from a specific step in the workflow (see the sketch after this list):
  - `--from "job-name"`
  - start from the last error: `--from-last-error`
  - get the current workflow and save it, construct an...
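A rough sketch of how `--from-last-error` could locate the step to restart from, using plain boto3 (this is not implemented in datajob; the state machine ARN is a placeholder supplied by the caller):

```
import boto3

def find_last_failed_state(state_machine_arn: str) -> str:
    """Return the name of the state entered last in the most recent failed execution."""
    sfn = boto3.client("stepfunctions")
    executions = sfn.list_executions(
        stateMachineArn=state_machine_arn, statusFilter="FAILED", maxResults=1
    )["executions"]
    if not executions:
        raise RuntimeError("no failed executions found")
    # walk the history backwards to find the last task state that was entered
    history = sfn.get_execution_history(
        executionArn=executions[0]["executionArn"], reverseOrder=True
    )["events"]
    for event in history:
        if event["type"] == "TaskStateEntered":
            return event["stateEnteredEventDetails"]["name"]
    raise RuntimeError("could not determine the failed state")
```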

```
from datajob.stepfunctions import stepfunctions_workflow
# assuming the generic Task state from the AWS Step Functions Data Science SDK
from stepfunctions.steps import Task

@stepfunctions_workflow.task
class SomeMockedClass(object):
    def __init__(self, unique_name):
        self.unique_name = unique_name
        self.sfn_task = Task(state_id=unique_name)
```

so that the mocked tasks better resemble reality.
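A short sketch of how such a mock could be exercised in a test; the task name and assertions are illustrative, not from datajob's test suite:

```
def test_mocked_task_carries_a_step_functions_task():
    some_task = SomeMockedClass("some-task")

    # the mock exposes the same attributes the real tasks do
    assert some_task.unique_name == "some-task"
    assert some_task.sfn_task.state_id == "some-task"
```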

Let the user add `**kwargs` to:

- all the CDK objects and their create functions
- all the Step Functions objects; check `stepfunctions_workflow`

(a sketch of what this could look like follows)
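A hypothetical sketch of forwarding `**kwargs` to the underlying CDK construct; `create_glue_job` and its parameters are illustrative, not datajob's actual API:

```
from aws_cdk import core
from aws_cdk import aws_glue as glue

def create_glue_job(scope: core.Construct, name: str, role_arn: str,
                    script_location: str, **kwargs) -> glue.CfnJob:
    """Create a Glue job and pass any extra keyword arguments to the CDK object."""
    return glue.CfnJob(
        scope,
        name,
        command=glue.CfnJob.CommandProperty(
            name="glueetl", script_location=script_location
        ),
        role=role_arn,
        **kwargs,  # e.g. worker_type="G.1X", number_of_workers=2
    )
```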

```
(node:17808) ExperimentalWarning: The fs.promises API is experimental
python: can't open file 'deployment_glue_datajob.py': [Errno 2] No such file or directory
Subprocess exited with error 2
DVCL643@10NB03610:~/workspace/python/aws_best_practices$ cd glue
DVCL643@10NB03610:~/workspace/python/aws_best_practices/glue$ cdk...
```

The `None` value has a capital letter, which is invalid. I ran:

```
export AWS_DEFAULT_ACCOUNT=_____________29
export AWS_PROFILE=my-profile
export AWS_DEFAULT_REGION=your-region  # e.g. eu-west-1

/datajob/examples/data_pipeline_simple$ datajob deploy --config datajob_stack.py
cdk command: cdk deploy...
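For anyone hitting this, a quick sanity check with plain boto3 (not a datajob command) of what region and account a profile actually resolves to before deploying:

```
import boto3

session = boto3.session.Session()
# if this prints None, no region is configured for the profile or environment
print("region:", session.region_name)
print("account:", session.client("sts").get_caller_identity()["Account"])
```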

On a Linux box with conda, this could serve as an explanation of how to get pytest running.

```
/home/peter_v/anaconda3/bin/python -m pip install --upgrade pip  # to avoid warnings about spyder...

- context
- stage
- ...

What is this all about? (a sketch of where these appear follows)
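A hedged sketch of where these two concepts could surface; whether `DataJobStack` accepts a `stage=` argument like this is an assumption based on the datajob examples:

```
from aws_cdk import core
from datajob.datajob_stack import DataJobStack

app = core.App()

# "context" = CDK context values, e.g. passed on the CLI as `cdk deploy -c stage=dev`
stage = app.node.try_get_context("stage")

# "stage" suffixes resource names so dev/stg/prod deployments don't collide
with DataJobStack(scope=app, id="data-pipeline", stage=stage) as datajob_stack:
    pass

app.synth()
```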