go-delve-reload
Node API?
I like this example and have learned a lot just from reading through your code to see how you've set it up, but I'm unfamiliar with Go's syntax. Would you foresee any snagging points if I replaced the backend with what I know of a Node API? Do you have any other repos that sample that setup? I'll be reading your blog post next; maybe someone has already asked this in the comments there.
I'm migrating a Node backend to Golang sometime this year. This is the main reason I'm blogging about it, actually. I recommend you take Bret's Docker for Node.js course. He goes into a lot of detail and has been Dockerizing Node apps since Node first came out. He keeps the content up to date.
My Node app uses TypeScript. Here's what the Dockerfile and docker-compose file look like:
```dockerfile
FROM node:10 as base
ENV NODE_ENV=production
EXPOSE 4000
WORKDIR /api
COPY package*.json ./
RUN npm ci \
    && npm cache clean --force

FROM base as dev
ENV NODE_ENV=development
ENV PATH /api/node_modules/.bin:$PATH
EXPOSE 9229
RUN mkdir /api/app && chown -R node:node .
USER node
RUN npm i --only=development \
    && npm cache clean --force
RUN npm config ls -l
WORKDIR /api/app
CMD ["../node_modules/.bin/ts-node-dev", "--inspect=0.0.0.0:9229", "--respawn", "--transpileOnly", "./src/app.ts"]

FROM dev as test
COPY --chown=node:node . .
RUN npm audit

FROM test as build-stage
RUN npm run build

FROM base as prod
ARG backend
ARG frontend
ARG api_namespace
ENV REACT_APP_BACKEND=$backend
ENV REACT_APP_FRONTEND=$frontend
ENV API_NAMESPACE=$api_namespace
COPY --from=build-stage /api/app/dist /api/dist
CMD ["node", "./dist/src/app.js"]
```
```yaml
version: "3.7"

services:
  api:
    build:
      context: ./api
      target: dev
    secrets:
      - jwt_secret_local
      - sendgrid_api_key
    environment:
      MONGODB_URI: $MONGODB_URI
      CLIENT_PORT: 3000
      API_NAMESPACE: $API_NAMESPACE
      REACT_APP_BACKEND: $REACT_APP_LOCAL_BACKEND
      REACT_APP_FRONTEND: $REACT_APP_LOCAL_FRONTEND
      JWT_SECRET: /run/secrets/jwt_secret_local
      SENDGRID_API_KEY: /run/secrets/sendgrid_api_key
    restart: always
    ports:
      - 4000:4000
      - 9229:9229
    volumes:
      - ./api:/api/app
      - /api/app/node_modules

  mongo:
    image: mongo:3.6.13
    restart: always
    ports:
      - 27017:27017
    volumes:
      - mongodb:/data/db

volumes:
  mongodb:

secrets:
  jwt_secret_local:
    file: ./secrets/jwt_secret_local
  sendgrid_api_key:
    file: ./secrets/sendgrid_api_key
```
It's not perfect. I haven't touched this in over a year, but it did the job.
Some Caveats
1) Don't run integration tests in a build stage
I have integration tests where I programmatically create a temporary database using the testcontainers-go library (testcontainers-node also exists).
The issue with running integration tests in a build stage is that you'll get "unix socket not found" errors: the build stage can't communicate with the Docker daemon, so it can't start containers. I don't think this is even possible, so I avoided it by not running integration tests in a build stage at all. But if you only have unit tests (i.e., no containers are started), it's fine to run them in a test stage.
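If you still want the integration tests in CI, one workaround (a sketch, not part of my setup; the service name and npm script are made up) is to run them in a regular container via compose instead of a build stage, mounting the Docker socket so testcontainers can reach the daemon:

```yaml
# docker-compose.test.yml -- hypothetical sketch
services:
  api-test:
    build:
      context: ./api
      target: test
    command: npm run test:integration
    volumes:
      # testcontainers talks to the Docker daemon through this socket,
      # which exists at runtime but not during an image build
      - /var/run/docker.sock:/var/run/docker.sock
```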
2) Sometimes an API cannot connect to a database on the first try. I handle this programmatically.
I have a database connection retry function. This saved me in production.
```typescript
// database.ts
import mongoose from 'mongoose';
import 'colors'; // adds the .red/.green string helpers used below

const mongoUri = process.env.MONGODB_URI;
const log = (msg: string) => console.log(msg);

mongoose.Promise = global.Promise;
mongoose.set('useFindAndModify', false);

mongoose.connection.on('disconnected', () => log('\nMongo disconnected'.red));
mongoose.connection.on('connected', () => log(`Mongo Connected`.green));

const options = {
  autoIndex: false, // don't build indexes
  reconnectTries: 30, // retry up to 30 times
  reconnectInterval: 500, // reconnect every 500ms
  poolSize: 10, // maintain up to 10 socket connections
  bufferMaxEntries: 0, // return errors immediately rather than waiting for reconnect
  useCreateIndex: true,
  useNewUrlParser: true
};

function connectWithRetry() {
  console.log('MongoDB connection with retry');
  mongoUri &&
    mongoose
      .connect(mongoUri, options)
      .then(() => {
        console.log('MongoDB is connected');
      })
      .catch(_ => {
        console.log('MongoDB connection unsuccessful, retry after 5 seconds.');
        setTimeout(connectWithRetry, 5000);
      });
}

connectWithRetry();
```
3) Gracefully shut down your backend
You'll want to capture app termination and nodemon restart events to shut down your database connection gracefully.
```typescript
// ...

// on process restart or termination
function gracefulShutdown(msg: string, callback: any) {
  mongoose.connection.close(() => {
    log(`Mongo disconnected through ${msg}`.red);
    callback();
  });
}

// on nodemon restarts
process.once('SIGUSR2', () => {
  gracefulShutdown('nodemon restart', () => {
    process.kill(process.pid, 'SIGUSR2');
  });
});

// on app termination
process.on('SIGINT', () => {
  gracefulShutdown('App termination (SIGINT)', () => {
    process.exit(0);
  });
});

// on Heroku app termination
process.on('SIGTERM', () => {
  gracefulShutdown('App termination (SIGTERM)', () => {
    process.exit(0);
  });
});
```
4) Docker secrets
When using Docker Swarm mode, Docker secrets aren't just magically available to your app. You need to find them and add their values to `process.env`. I used the public docker-secret library on npm, but depending on a third-party package for secrets handling isn't great. I recommend copying the module's code and keeping it as a private package. That way is much safer.
```typescript
// secrets.ts
import path from 'path';
import { getSecret } from 'docker-secret';

function isSecret(key: string) {
  const env = process.env[key];
  return env && env.includes('/run/secrets') ? true : false;
}

function getSecrets(): NodeJS.ProcessEnv {
  return Object.keys(process.env).reduce(
    (env: any, key: string): NodeJS.ProcessEnv => {
      const envValue = process.env[key];
      if (envValue && isSecret(key)) {
        env[key] = getSecret(path.basename(envValue));
      }
      return env;
    },
    {}
  );
}

process.env = { ...process.env, ...getSecrets() };
```
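If you do copy the module into a private package, the core of getSecret is only a few lines. This is my own sketch, not the actual docker-secret source; the secretsDir parameter is something I added so the function is testable (Docker's convention is /run/secrets):

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Read a single Docker secret by name. Secrets are mounted as plain files,
// one per secret, under /run/secrets by default.
export function getSecret(name: string, secretsDir = '/run/secrets'): string | undefined {
  try {
    // trim the trailing newline Docker appends to the secret file
    return fs.readFileSync(path.join(secretsDir, name), 'utf8').trim();
  } catch {
    // missing secret file: return undefined rather than throwing
    return undefined;
  }
}
```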
Have fun and experiment!
I've taken his Node and Mastery courses and I'm about to start the Swarm course soon. I appreciate the very detailed response; all great advice.
Hey, random question, maybe I'm just not as familiar with the JavaScript ecosystem as I thought I was, but I noticed in both of these Dockerfiles that partway through we create a new app folder inside /client and copy everything in there, after first copying the package.json and package-lock.json into /client and downloading all the dependencies. Why is that?
Ok, upon closer inspection I think I see how the app files get into that /client/app folder, thanks to the volume mount in the docker-compose.yml file, but I'm still not 100% sure why that gets done in the first place. Is it just to get the binaries from node_modules before the actual dev server/build starts up?
@leggettc18 No. Sorry for the late reply.
There can be an issue with mounting node_modules from your host machine into the container. Here's the thing: some node modules are compiled for your host architecture (anything built with node-gyp, for example). So if your host platform differs from the platform inside the container, your app will crash on startup. For example, you might use Windows or macOS locally, but the container uses Linux.
Here's the trick: create an anonymous volume to hide the host machine's node_modules folder from the container (/api/app/node_modules). Then add the node_modules folder to a .dockerignore file so you don't accidentally copy it into the image during a docker build. Then install the real node_modules in a parent folder inside the container. Node will see the empty volume, realize there's no node_modules there, and go up a level to find them in the parent directory.
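Concretely, the masking is just the two volume lines from the api service in the compose file, plus the ignore entry:

```yaml
volumes:
  - ./api:/api/app          # bind-mount the source into the container
  - /api/app/node_modules   # anonymous volume masks the host's node_modules
```

and in .dockerignore:

```
node_modules
```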
You can see this approach here as well.