superbowleto
:football: A microservice to issue, register and manage boletos
Table of Contents
- Technology
- Developing
  - First Install
  - Running the server
  - Running tests
  - Installing new dependencies
- Testing
- Data Flow
  - Server
    - 1. POST /boletos
      - a) Provider could process the boleto
      - b) Provider could not process the boleto
    - 2. GET /boletos
    - 3. GET /boletos/:id
  - Worker
    - 1. Process boletos-to-register queue
- Staging
- Accessing the Pedrero
Technology
Here's a brief overview of our technology stack:
- Docker and Docker Compose to create our development and test environments.
- AWS Fargate to manage and deploy our containers.
- AWS SQS as a queue manager to process things like boletos to register, and sqs-quooler as a package to make interacting with queues easier.
- Postgres to store our data and Sequelize as a Node.js ORM.
- Ava as a test runner and Chai to do some more advanced test assertions.
- Express as a tool to build the web server that handles our boleto endpoints.
Developing
In order to develop for this project you must have Docker and Docker Compose installed.
First Install
If you never developed in this repo before:
- Clone the repository:
$ git clone git@github.com:pagarme/superbowleto
- Build the base images:
$ docker-compose build superbowleto-web
Running the server
To run the server, you will have to start the database and run the migrations.
- Start database and run migrations in one command:
$ make setup-db
- Or start database and run migrations separately:
-
Start database (postgres):
$ make start-db
-
Run the migrations:
$ make migrate
- Then finally run the server:
$ make superbowleto-web
Running tests
Tests are separated into functional, integration and unit. You can either run them separately or run them all.
- Run all tests:
  $ docker-compose run test
- Run only functional tests:
  $ docker-compose run test npm run test-functional
- Run only integration tests:
  $ docker-compose run test npm run test-integration
- Run only unit tests:
  $ docker-compose run test npm run test-unit
For CI purposes we have a specific command that generates a coverage report and an XML test results report to be published on CircleCI.
$ docker-compose run --entrypoint="npm run test-ci" test --abort-on-container-exit
Installing new dependencies
We install our dependencies (aka npm dependencies) inside the Docker image (see our Dockerfile to understand it better).
This gives us the advantage of caching the dependencies installation process, so when we build the image again, it's already cached by Docker and the image can be easily distributed with all its dependencies installed.
However, if you need to install any new dependency, you must rebuild the image, otherwise, your dependency will not be available inside the container.
You can install dependencies and rebuild the image by running:
$ docker-compose run test npm install --save ramda
$ docker-compose build test
Testing
Tests are found inside the test/ directory and are separated by type: functional, integration and unit. It's also common to have some helpers folders alongside the tests.
- Unit tests are used to test the smallest units of functionality, typically a method or a function (ref). The folder structure of the unit tests tends to mirror the folder structure of the src folder. For instance, we generally see the following folder structure:

  ├── src
  │   ├── index.js
  │   └── lib
  │       └── http.js
  └── test
      └── unit
          ├── index.js
          └── lib
              └── http.js
- Integration tests build on unit tests by combining the units of code and testing that the resulting combination functions correctly (ref). The folder structure of the integration tests tends to mirror the folder structure of the src folder. For instance, we generally see the following folder structure:

  ├── src
  │   ├── index.js
  │   └── lib
  │       └── http.js
  └── test
      └── integration
          ├── index.js
          └── lib
              └── http.js
- Functional tests check a particular feature for correctness by comparing the results for a given input against the specification. Functional tests don't concern themselves with intermediate results or side effects, just the result (ref). The folder structure of functional tests does not need to mirror the source folder, and the files can be organized as seems fit. One way to organize these files is by feature or user story. For instance, take a look at the example below, where boleto/create.js and boleto/register.js are complete user stories:

  └── test
      └── functional
          └── boleto
              ├── create.js
              └── register.js
- Helpers do not test anything, but instead provide tools for the tests. Inside the helpers folders one can have fixtures (also known as "mocks"), or some util functions. For instance, if you need credit card information to perform various tests in many different places, or if you need a util function that is called before your tests are run, you could place them inside a helpers folder in order to not repeat yourself:

  const creditCardMock = {
    number: 4242424242424242,
    holder_name: "David Bowie",
    expiration_date: 1220,
    cvv: 123,
  };

  const cleanUpBeforeTests = () => {
    db.reset();
  };

  module.exports = {
    creditCardMock,
    cleanUpBeforeTests,
  };
  Helpers folders can be created at any level within the test folder structure. If some helper is used only for unit tests, it should reside within test/unit/helpers. If a helper is used across all tests, it should reside within test/helpers. If there's a helper that is used only for testing the http module on integration tests, then it should reside within test/integration/http/helpers.
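The placement rules above can be summed up as "scope the helper as narrowly as its users". A minimal sketch of that rule (the function is hypothetical, for illustration only; the paths follow this document's conventions):

```javascript
// Decide where a helper file should live, following the conventions above.
// Hypothetical utility, not part of the repo.
const helperFolderFor = ({ testType = null, moduleName = null } = {}) => {
  if (!testType) return 'test/helpers'            // shared by all tests
  if (!moduleName) return `test/${testType}/helpers` // shared within one test type
  return `test/${testType}/${moduleName}/helpers`    // specific to one module
}

module.exports = { helperFolderFor }
```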
Data Flow
This project has two programs: the worker and the server.
Server
This section documents what every endpoint of the server does.
1. POST /boletos
Create a new boleto.
After creating the boleto (on our database), we will try to register the boleto within the provider. Here, there are two possible outcomes: a) the provider could be reached, could process the boleto and gave us a status (either registered or refused); or b) the provider could not be reached or could not process the boleto (giving us an unknown/undefined/try_later status).
a) Provider could process the boleto
The following steps illustrate the case where the provider could be reached and it could process the boleto.
- The Client makes an HTTP request to create a boleto.
- We create the boleto in the Database with status issued.
- We try to register the boleto within the Provider.
- The provider returns an answer (either registered or refused).
- We update the boleto status in the Database.
- We return the response to the Client (HTTP response).
Diagram built with mermaid.js. Check out the source code at docs/diagrams/server
b) Provider could not process the boleto
The following steps illustrate the case where the provider could not be reached or could not process the boleto.
- The Client makes an HTTP request to create a boleto.
- We create the boleto in the Database with status issued.
- We try to register the boleto within the Provider.
- The provider could not be reached or could not process the boleto.
- We update the boleto status in the Database to pending_registration.
- We send the boleto (boleto_id and issuer) to an SQS queue called boletos-to-register. This queue will be processed by the worker later.
- We return the response to the Client (HTTP response) with status = pending_registration.
Diagram built with mermaid.js. Check out the source code at docs/diagrams/server
Example:
POST /boletos
Content-Type: application/json
{
"queue_url": "http://yopa/queue/test",
"expiration_date": "Tue Apr 18 2017 18:46:59 GMT-0300 (-03)",
"amount": 2000,
"instructions": "Please do not accept after expiration_date",
"issuer": "bradesco",
"payer_name": "David Bowie",
"payer_document_type": "cpf",
"payer_document_number": "98154524872"
}
201 Created
Content-Type: application/json
{
"queue_url": "http://yopa/queue/test",
"status": "issued | registered | refused",
"expiration_date": "Tue Apr 18 2017 18:46:59 GMT-0300 (-03)",
"amount": 2000,
"instructions": "Please do not accept after expiration_date",
"issuer": "bradesco",
"issuer_id": null,
"title_id": "null",
"payer_name": "David Bowie",
"payer_document_type": "cpf",
"payer_document_number": "98154524872"
}
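The two outcomes of POST /boletos boil down to a small branching decision on the provider's answer. A minimal sketch of that decision (the status names come from this document; the helper itself is hypothetical, not the actual implementation):

```javascript
// Decide the boleto's new status and whether it must be queued for retry,
// based on the provider's answer. Hypothetical helper, illustrative only.
const resolveRegistration = (providerResponse) => {
  // Case a) the provider processed the boleto and gave us a final status
  if (providerResponse && ['registered', 'refused'].includes(providerResponse.status)) {
    return { status: providerResponse.status, sendToQueue: false }
  }

  // Case b) unreachable provider or unknown/undefined/try_later status:
  // mark as pending_registration so the worker can retry it later
  return { status: 'pending_registration', sendToQueue: true }
}

module.exports = { resolveRegistration }
```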
2. GET /boletos
Retrieve all boletos.
Diagram built with mermaid.js. Check out the source code at docs/diagrams/server
Example:
GET /boletos
Content-Type: application/json
{
"count": "10",
"page": "1"
}
200 Ok
Content-Type: application/json
[{
"id": "bol_cj1o33xuu000001qkfmlc6m5c",
"status": "issued",
"queue_url": "http://yopa/queue/test",
"expiration_date": "Tue Apr 18 2017 18:46:59 GMT-0300 (-03)",
"amount": 2000,
"instructions": "Please do not accept after expiration_date",
"issuer": "bradesco",
"issuer_id": null,
"title_id": "null",
"payer_name": "David Bowie",
"payer_document_type": "cpf",
"payer_document_number": "98154524872"
}]
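The count/page parameters above suggest plain offset pagination. A sketch of how such parameters could translate into a query window (hypothetical helper; the real implementation queries through Sequelize):

```javascript
// Translate count/page query parameters into the limit/offset values
// a Sequelize-style query would consume. Hypothetical helper, illustrative only.
const toQueryWindow = ({ count = '10', page = '1' } = {}) => {
  const limit = parseInt(count, 10)
  const pageNumber = parseInt(page, 10)

  return {
    limit,
    offset: (pageNumber - 1) * limit,
  }
}

module.exports = { toQueryWindow }
```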
3. GET /boletos/:id
Find one boleto by id.
Diagram built with mermaid.js. Check out the source code at docs/diagrams/server
Example:
GET /boletos/:id
Content-Type: application/json
{
"id": "bol_cj1o33xuu000001qkfmlc6m5c"
}
200 Ok
Content-Type: application/json
{
"id": "bol_cj1o33xuu000001qkfmlc6m5c",
"status": "issued",
"queue_url": "http://yopa/queue/test",
"expiration_date": "Tue Apr 18 2017 18:46:59 GMT-0300 (-03)",
"amount": 2000,
"instructions": "Please do not accept after expiration_date",
"issuer": "bradesco",
"issuer_id": null,
"title_id": "null",
"payer_name": "David Bowie",
"payer_document_type": "cpf",
"payer_document_number": "98154524872"
}
Worker
This section documents what the worker processes.
1. Process boletos-to-register queue
This worker consumes the queue of boletos to register and effectively registers them.
When a boleto can't be registered within the provider at the moment of its creation, it is posted to an SQS queue called boletos-to-register. This worker is responsible for processing that queue. Here are the steps:
- The worker's consumer function is triggered when an item is on the queue.
- Using sqs-quooler we then start to poll items from SQS (sqs.receiveMessage).
- Each message received has a boleto ({ id, issuer }) and a message (the raw SQS message from the boletos-to-register queue).
- We use the boleto id to find the boleto on the Database.
- We check if the boleto can be registered, i.e. if the status of the boleto is either issued or pending_registration.
- If the boleto can be registered, we try to register it within the provider.
- If the provider could not process the boleto, we stop executing here. The SQS message will then go back to the boletos-to-register queue and will be processed later.
- We update the boleto status with either registered or refused.
- IMPORTANT: After the boleto is updated, we notify the boleto owner by sending an SQS message to the queue the owner specified on boleto creation (aka boleto.queue_url). The owner will then handle the processing of these SQS messages. That's the only way we can notify the boleto owner that a boleto that went to the boletos-to-register queue was updated. That's why it's mandatory to pass a queue at the moment of the boleto creation.
Diagram built with mermaid.js. Check out the source code at docs/diagrams/worker
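The "can be registered" check in the steps above is a simple status guard. A minimal sketch (the statuses come from this document; the function name is hypothetical):

```javascript
// A boleto can (re)enter registration only while it is still 'issued'
// or waiting as 'pending_registration'. Hypothetical helper, illustrative only.
const canBeRegistered = boleto =>
  ['issued', 'pending_registration'].includes(boleto.status)

module.exports = { canBeRegistered }
```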
Staging
To publish the application in staging, we must follow some steps:
- Generate a tag:
  We must generate the tag in the pattern below and select the pre-release option:
  v0.0.0-rc1
  With each new tag of the same version, it is only necessary to change the ending rc1 by incrementing the number. Example: v0.0.0-rc2.
- Enable staging services:
  To activate the services we need to talk to the Mr Krabs application in Slack, sending these messages:
  /* Activate the database */
  /mr-krabs start live-superbowleto-stg rds
  /* Activate the superbowleto server */
  /mr-krabs start superbowleto-s-stg ecs
  /* Activate the superbowleto worker */
  /mr-krabs start superbowleto-w-stg ecs
  We can then send a command to list the online services, or check directly in AWS:
  /mr-krabs list all ecs
  /mr-krabs list all rds
- IP release on the pedrero:
  Within the mason application (EC2) on AWS, we will add a rule allowing ssh access from your machine.
  In the pedrero we have to:
  - Go to the Security tab and access security groups.
  - Access Edit inbound rules and add an ssh rule for your computer's IP.
- Start the pedrero:
  In this step, we need to start the mason service so that we can make requests to superbowleto through it.
  We have to perform the following steps:
  - Access the pedrero from the terminal:
    $ ssh ubuntu@<ip-public-pedrero>
  - Enter the folder:
    $ cd luciano/proxier
  - Start the service (the proxy):
    $ npm start
  After that, you can test the API using the URL:
  http://<ip_public_pedrero>:3003/https://superbowleto.stg.pagarme.net
- Approve deployment on CI:
  After these processes, you can start the deploy by approving the job in CircleCI.
- Shutdown of services:
  After use and testing, call the Mr Krabs app again in Slack with the following commands:
  /* Disable the database */
  /mr-krabs stop live-superbowleto-stg rds
  /* Disable the superbowleto server */
  /mr-krabs stop superbowleto-s-stg ecs
  /* Disable the superbowleto worker */
  /mr-krabs stop superbowleto-w-stg ecs
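The pre-release tag bump in step 1 is mechanical and easy to get wrong by hand. A small sketch of the increment rule (hypothetical helper, not part of the repo; tags are normally typed by hand when creating the release):

```javascript
// Given a pre-release tag like 'v0.0.0-rc1', produce the next one ('v0.0.0-rc2').
// Hypothetical helper, illustrative only.
const nextRcTag = (tag) => {
  const match = tag.match(/^(v\d+\.\d+\.\d+-rc)(\d+)$/)
  if (!match) throw new Error(`not a release-candidate tag: ${tag}`)
  return match[1] + (parseInt(match[2], 10) + 1)
}

module.exports = { nextRcTag }
```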
Accessing the Pedrero
To access the pedrero for the first time, you will have to ask someone for help to register your ssh key in the container.
What you have to do:
- Copy your ssh public key from your computer and forward it to the person helping you.
- The person will perform the following steps:
  - Access the pedrero's machine:
    $ ssh ubuntu@<ip-public-pedrero>
  - Access the directory:
    $ cd .ssh/
  - Edit the authorized_keys file, adding your key to it.
- Test access using the command:
  $ ssh ubuntu@<ip-public-pedrero>