keystone-5
Keystone build needs access to mongo if there is an external session store
Bug report
Describe the bug
During the build phase, the entry file has to be imported. If I use the connect-mongo library to create an external session store, `new MongoStore({ url: config.mongoUri })` is called during the creation of the Keystone instance, so access to a MongoDB database is required. This is a problem when I need to build the app with the `docker build` command.
```
[5/5] Building fresh packages...
success Saved lockfile.
Done in 107.07s.
yarn run v1.21.1
$ cross-env NODE_ENV=production keystone build --entry src/index.js
- Initialising Keystone CLI
ℹ Command: keystone build --entry=src/index.js
-
✔ Validated project entry file ./src/index.js
- Initialising Keystone instance
✔ Initialised Keystone instance
- Exporting Keystone build to ./dist
/home/node/node_modules/connect-mongo/src/index.js:135
      throw err
      ^

MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
    at Timeout._onTimeout (/home/node/node_modules/mongodb/lib/core/sdam/topology.js:448:30)
    at listOnTimeout (internal/timers.js:531:17)
    at processTimers (internal/timers.js:475:7) {
  name: 'MongoServerSelectionError',
```
To Reproduce
- Add a mongo session store to the Keystone constructor
- Create a Dockerfile as described here
- Run

```
docker build -t localhost/api-server .
```
My understanding is that the docker container needs access to the mongoUri?
If that is correct, is this a Docker configuration requirement and not a Keystone responsibility?
It is absolutely fine that access to the Mongo database is required from the running Docker container, but here I'm talking about the build phase of the Docker image: it is not ordinary practice to provide external services like a database at that point.
Here is an example of my Dockerfile:
```dockerfile
# Builder container
FROM node:12-alpine AS builder
ENV BUILD_STAGE=true
RUN apk add --no-cache python make g++
COPY . .
RUN yarn install --production && yarn build && yarn cache clean

# Runtime container
FROM node:12-alpine
WORKDIR /usr/src/app
COPY --from=builder . .
EXPOSE 3000
CMD [ "yarn", "start" ]
```
I had to pass a BUILD_STAGE variable and a config.isBuildStage flag to skip calling the mongodb session store constructor, so that no attempt to connect to the database is made during the build process.
```javascript
const keystone = new Keystone({
  name: config.projectName,
  adapter: new MongooseAdapter({ mongoUri: config.mongoUri }),
  onConnect: initialiseData,
  cookieSecret: config.cookieSecret,
  sessionStore: !config.isBuildStage ? new MongoStore({ url: config.mongoUri }) : null,
});
```
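The `config.isBuildStage` flag isn't shown in the issue; a minimal sketch of how it could be derived from the `BUILD_STAGE` variable that the builder stage of the Dockerfile sets (the flag name is the reporter's, the derivation is an assumption):

```javascript
// Sketch (assumption): derive config.isBuildStage from the BUILD_STAGE
// env var set in the builder stage (ENV BUILD_STAGE=true in the Dockerfile).
const config = {
  isBuildStage: process.env.BUILD_STAGE === 'true',
};

// During `docker build` this logs true; in the runtime container, false.
console.log(config.isBuildStage);
```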
A similar issue I'm trying to solve concerns the secrets of the S3 Adapter:
```
- Initialising Keystone CLI
ℹ Command: keystone build --entry=src/index.js
-
✔ Validated project entry file ./src/index.js
- Initialising Keystone instance
✖ Initialising Keystone instance
Error: S3Adapter requires accessKeyId, secretAccessKey, region, bucket, folder.
```
Why should I need to provide any secrets during the build stage of a keystone app? At the moment I import them from a config file like this:
```javascript
module.exports = (folder = null) => new S3Adapter({
  accessKeyId: config.user,
  secretAccessKey: config.token,
  region: 'us-east-1',
  bucket: config.bucket,
  folder,
  getFilename: ({ id, originalFilename }) => `${id}-${originalFilename.replace(/\s+/g, '')}`,
  publicUrl: ({ filename }) => urlJoin(`https://${config.bucket}.${config.hostname}`, folder, filename),
  s3Options: {
    signatureVersion: 'v2',
    endpoint: `https://${config.hostname}`,
  },
  uploadParams: () => ({
    ACL: 'public-read',
  }),
});
```
Same problem here. It happens because the session store is created at build time. For connect-mongo and other session stores that need to reach some sort of backend (connect-mongo needs to be able to connect to mongodb), this means the backend must be available at build time. However, it's not always possible to provide a backend at build time, and you can't provide different backends (e.g. connect to a dev or a prod mongodb instance). For these reasons, it would be great to delay this connection until run time.
I ran into pretty much the same scenario building a docker image: if you build before the underlying backend is up (mongodb, in the case of connect-mongo), the build will fail, since the mongodb instance is available at run time, not build time.
To be clear, it's not keystone breaking; it's that keystone wants a session store at build time, which can't be configured properly until run time.
Imagine a build process like this:
- run `keystone build` on a build machine (doesn't have access to mongodb)
- copy the `dist` folder to a separate machine (which does have access to mongodb)
- run `keystone start`

Currently this wouldn't be possible, and a docker build process faces the same problem.
One possible solution would be to accept a function as the session store (or introduce a new `runtimeSessionStore` parameter) that `keystone start` calls to set up the session store at that point.
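A minimal sketch of what that might look like. `runtimeSessionStore` is hypothetical (Keystone has no such option), and the returned object is a stand-in for a real `new MongoStore(...)`:

```javascript
// Hypothetical API sketch: pass a factory instead of an instance, so that
// nothing connects to mongodb until `keystone start` invokes the factory.
let connected = false;

const keystoneConfig = {
  name: 'my-project',
  // Stand-in for: () => new MongoStore({ url: config.mongoUri })
  runtimeSessionStore: () => {
    connected = true; // a real factory would open the mongodb connection here
    return { kind: 'mongo-session-store' };
  },
};

// `keystone build` would only read the config; the factory never runs:
console.log(connected); // false

// `keystone start` would call the factory to create the real store:
const store = keystoneConfig.runtimeSessionStore();
console.log(connected, store.kind); // true mongo-session-store
```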
I had a little help from a friend here, so I'm passing it on. The solution was to create a docker-compose file that runs a mongo instance alongside your app, with both tied to the same sub-network.
- Add a docker-compose.yml file
  - Add a mongo service
  - Add your appService and get the context from the Dockerfile
```yaml
version: "3.3"
services:
  mongo:
    image: mongo
    networks:
      - appNetwork
    ports:
      - "27017:27017"
    volumes:
      - "~/data:/data/db"
  appService:
    build:
      context: ./
      dockerfile: Dockerfile
    networks:
      - appNetwork
    ports:
      - "3000:3000"
networks:
  appNetwork:
    external: false
```
- Update your index.js to make use of the mongo service:

```javascript
// const adapterConfig = { mongoUri: "mongodb://localhost:27017/your-database" };
const adapterConfig = { mongoUri: "mongodb://mongo/your-database" };
```

- Build it: `(sudo) docker-compose build`
- Run it: `(sudo) docker-compose up -d`
- See them; there should be 2 new docker containers for this project (mongo, your app): `docker ps`
- Log it: `docker logs containerid`
You should be good to visit the localhost:3000 url now. At this point, however, you will need to make sure that NODE_ENV=production is configured correctly, or you will run into this issue where you cannot access the admin after logging in.
Is there anything more we can do with this issue? Would anyone suggest a change to the documentation here: https://www.keystonejs.com/guides/deployment
If not, can I close this issue?
The suggested workaround doesn't really address the underlying issue.
Say I have a production mongodb instance running (could be cloud or on prem).
And say I want to build a docker image that will use that production database.
I want to be able to run `keystone build` during my docker build (the docker image isn't the important part here, just a very common use case; the important part is running `keystone build` on a machine that shouldn't be accessing mongodb).
The problem is that I can only run the build if the build machine has access to the production mongodb instance.
Not only does this not sound like a good idea (the build machine being able to access a production database), it isn't necessarily even possible all the time. If I start the mongodb instance and the CMS at the same time, the database can't have existed at build time; and access to my mongodb instance may not be possible from the machine the CMS is built on (e.g. a CI machine running GitHub Actions can't access non-public services in an AWS Kubernetes cluster).
The workaround above creates a cluster for running docker build; the idea is to create a mongodb instance to trick `keystone build` into thinking it has access to the production database (which is not always possible, depending on what the value of `mongoUri` should be).
What I'm currently doing is running `keystone build` and `keystone start` when the image is executed.
This workaround means a mongodb instance doesn't need to be created or available during the build, but it defeats the point of `keystone build` being a build step and increases startup time significantly.
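That startup-time workaround can be sketched as a single-stage Dockerfile. Assumptions not from the issue: yarn scripts `build` and `start` wrap `keystone build` and `keystone start` respectively.

```dockerfile
FROM node:12-alpine
WORKDIR /usr/src/app
COPY . .
RUN yarn install --production
# No `yarn build` here: the database isn't reachable at image build time.
EXPOSE 3000
# Build at container start, when mongodb is reachable. Every container
# start pays the full build cost, which is the downside described above.
CMD [ "sh", "-c", "yarn build && yarn start" ]
```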
What we need is a solution where we don't need to pass the adapter in straight away: e.g. an option called `runtimeAdapter` that takes a function returning an Adapter, with `keystone start` responsible for calling that function to create the actual Adapter (same sort of deal for the session store).
Another option depends on what `keystone build` actually needs: if it only needs the lists themselves to generate assets, then maybe they could be passed as positional arguments to the command (no positional arguments would keep the current behaviour).
To get a better understanding of the problem, try to run `keystone build` without creating a database.
If you think it's reasonable to enforce the existence and availability of a database, then that point is worth discussing, because in my opinion at least, a build process like `keystone build` shouldn't be accessing production mongodb.
To me, this isn't simply a documentation thing. A build process shouldn't need access to a production database, and hacking around it by faking the production database during build, or by running the build process on startup, aren't good solutions either.
`keystone build` shouldn't need access to any external services like mongodb or Amazon S3.
OMG, I'm currently facing the same issue. How can this be happening? DB access at build time breaks every DevOps pipeline and goes against every design rule. This is a huge bug impacting the production deployment of keystonejs apps.
Quoting from the readme file:
> It builds on the lessons we learned over the last 5 years of the KeystoneJS' history and focuses on the things we believe are the most powerful features for modern web and mobile applications.

This stuff needs immediate fixing, please.
After some playing around, for me at least, things seem to work okay as long as a valid mongodb uri is present (no instance needs to exist), suggesting that the build process doesn't try to connect to the database (a big sigh of relief). If that's the case, a workaround is to pass a (localhost) uri that doesn't have anything running on it.
The proper solution would be to not have the adapter check the uri format and let things fail if a wrong uri is passed. If a developer configures 'mysuperweirdurl://that.doesnt.exist', then either it's their fault for doing that, or somehow they know better and that connection happens to work for them.
That is of course assuming keystone is in control of the uri validation logic; if not, then my previous idea of adapter-producing functions could still work, I think.
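The adapter-producing-function idea can even be approximated in userland with a lazy proxy. This is a sketch under the assumption that the consumer only touches the store at request time (express-session may probe stores earlier, so treat it as illustrative, not a drop-in fix):

```javascript
// Wrap an expensive constructor in a Proxy so it only runs on first use,
// i.e. at request time rather than at build time.
function lazy(factory) {
  let instance = null;
  return new Proxy({}, {
    get(_target, prop) {
      if (instance === null) instance = factory(); // first touch connects
      const value = instance[prop];
      return typeof value === 'function' ? value.bind(instance) : value;
    },
  });
}

let connected = false;
// Stand-in for `new MongoStore({ url: ... })`, which connects eagerly.
const store = lazy(() => {
  connected = true;
  return { get: (sid) => `session:${sid}` };
});

console.log(connected);        // false: nothing connected at "build" time
console.log(store.get('abc')); // session:abc
console.log(connected);        // true: connected on first use
```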
I got the same problem! @cowlingj 's way to use a fake uri worked for me.
I think it's a better solution to go on at this stage. However, we do need a more proper way.
I agree that this should be fixed.
I'm also having the same issue! so is this the way to go until then?
```javascript
const keystone = new Keystone({
  name: config.projectName,
  adapter: new MongooseAdapter({ mongoUri: config.mongoUri }),
  onConnect: initialiseData,
  cookieSecret: config.cookieSecret,
  sessionStore: !config.isBuildStage ? new MongoStore({ url: config.mongoUri }) : null,
});
```
That's how I solved it. Ultimately I also had to avoid running keystone.prepare during build, because I use a custom express instance, which I consider necessary. So basically: instantiate keystone and the apps you want during build, but don't start them unless !config.isBuildStage. That's my advice currently, anyway.
I am running into the same problem, and I agree that the way `keystone build` is implemented goes against most devops best practices. I need to use `connect-pg-simple` as the `sessionStore`, and it creates a PG pool when the code runs... and the code runs during build. This means I need a running DB instance on my build server, and all my other `process.env` requirements for my production server become requirements for the build server too. In the end this means duplicating my secrets across multiple machines. I really love the initial developer experience of using keystone, but the deployment process has been really painful. I am happy to try to write some documentation around this when I figure out how to fix my issues.
Yeah, ultimately I went with something like this. I'm not happy with it, but it was necessary to work around this issue. BUILD_STAGE was true in dockerhub, and false when running.
```javascript
let client
if (process.env.BUILD_STAGE !== 'true')
  client = redis.createClient({
    host: process.env.REDIS_HOST,
    port: process.env.REDIS_PORT,
    password: process.env.REDIS_PW,
  })

const keystone = new Keystone({
  appVersion: {
    ...
  },
  cookie: {
    ...
  },
  cookieSecret: process.env.COOKIE_SECRET || 'development',
  name: '',
  adapter: new Adapter(adapterConfig),
  sessionStore:
    process.env.BUILD_STAGE === 'true'
      ? undefined
      : new redisStore({
          ...
        }),
  onConnect: dbUpdates,
})

const lists = [
]

keystone.createLists()

const authStrategy = keystone.createAuthStrategy({
  type: PasswordAuthStrategy,
  list: 'User',
})

const apps = [
  new GraphQLApp({
    ...
  }),
  new AdminUIApp({
    ...
  }),
]

if (process.env.BUILD_STAGE !== 'true') {
  keystone
    .prepare({apps, dev: process.env.NODE_ENV !== 'production'})
    .then(async ({middlewares}) => {
      app
        .use(middlewares)
        .listen(process.env.KEYSTONE_DEV_PORT, () =>
          console.log(
            `Running on: http://localhost:${process.env.KEYSTONE_DEV_PORT}`,
          ),
        )
    })
}
```
Thanks so much for this @savager. That's really helpful!
It looks like there hasn't been any activity here in over 6 months. Sorry about that! We've flagged this issue for special attention. It will be manually reviewed by maintainers, not automatically closed. If you have any additional information please leave us a comment. It really helps! Thank you for your contribution. :)