grunt-contrib-connect
Add the ability to stop the server.
I'd like to add the ability to stop a currently-running server. I propose that:
- the plugin keeps a cache of running server instances by target name
- if the stop flag is specified and a server with the same target has been started, it will be stopped and removed from the cache; this also happens automatically before the task starts a server (enabled by default, disableable with an option). A sketch follows the examples below.
// Stopping a server explicitly.
grunt.registerTask('do_something', ['connect:dev', 'something', 'connect:dev:stop']);
// Stopping a server implicitly (without this, there would be a "port in use" error here).
grunt.registerTask('do_something', ['connect:dev', 'something']);
grunt.registerTask('do_something_else', ['connect:dev', 'something_else']);
grunt.registerTask('do_everything', ['do_something', 'do_something_else']);
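A rough sketch of how that could look inside the task, assuming a module-level cache and Grunt's this.flags for the stop flag (the task body and names here are hypothetical, not the plugin's actual code):

var http = require('http');
var running = {}; // hypothetical cache of live servers, keyed by target name

grunt.registerMultiTask('connect_sketch', function () {
  var done = this.async();
  var target = this.target;
  var options = this.options({ port: 8000 });

  // Close and forget the cached server for this target, if any.
  var stop = function (next) {
    if (!running[target]) { return next(); }
    running[target].close(function () {
      delete running[target];
      next();
    });
  };

  // Explicit stop: connect_sketch:dev:stop
  if (this.flags.stop) { return stop(done); }

  // Implicit stop of any previous instance, then start a fresh server.
  stop(function () {
    running[target] = http.createServer(function (req, res) {
      res.end('ok');
    }).listen(options.port, done);
  });
});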
Additionally, I'd like to consider this:
- in addition to the server-by-target cache, what if there were also a server-by-port cache, so that starting a second server on a port already in use would first kill whatever was running on it (enabled by default, disableable with an option)? A sketch follows the example below.
// Stopping a server by port, not target (dev and prod servers have the same port).
grunt.registerTask('do_something', ['connect:dev', 'something']);
grunt.registerTask('do_something_else', ['do_something', 'something_else', 'connect:prod']);
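The port cache could sit alongside the target cache; a minimal sketch (again hypothetical):

// Hypothetical second cache, keyed by port, so starting a server on a busy
// port evicts whatever held it, regardless of target.
var byPort = {};

var stopPort = function (port, next) {
  if (!byPort[port]) { return next(); }
  byPort[port].close(function () {
    delete byPort[port];
    next();
  });
};

// The task would call stopPort(options.port, ...) before listening, then
// record the new server in byPort[options.port] as well.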
My use-case:
Let's say I need to run my dev server when I do my integration tests:
grunt.registerTask('test-integration', ['connect:dev', 'mochaTest']);
And let's say both my dev and prod servers use the same port, but have otherwise different configuration:
grunt.initConfig({
  connect: {
    options: {
      port: 8000
    },
    dev: {
      options: {
        base: ['prod', '.']
      }
    },
    prod: {
      options: {
        keepalive: true,
        base: ['prod']
      }
    }
  }
});
What happens when I want to run my integration tests before doing my production build, which includes starting the prod web server?
grunt.registerTask('prod', ['test-integration', 'build_tasks', 'connect:prod']);
Because I haven't stopped the dev server (because I can't) at the end of test-integration, the prod server can't run because the port is already in use. And it's not practical to change the port between the two just to work around this issue.
Thoughts?
/cc @jugglinmike @tkellen @shama @sindresorhus @vladikoff
Just a heads up: keeping the servers in a cache will only work with watch when spawn: false is enabled. The other option, to have it work with both watch modes, is to ping the server to check if it's active and have the server provide a POST endpoint to call so the server stops itself. Or maybe there is a better solution, maybe cluster?
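A rough sketch of that ping-plus-endpoint idea with a bare http server (the /stop route is hypothetical, not something the plugin provides):

var http = require('http');

var server = http.createServer(function (req, res) {
  // Hypothetical control endpoint: a watch child (or anything else) can
  // POST /stop over HTTP to make the server shut itself down.
  if (req.method === 'POST' && req.url === '/stop') {
    res.end('stopping');
    server.close();
    return;
  }
  // Any normal response doubles as the "is it still up?" ping.
  res.end('ok');
});

server.listen(8000);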
Man, I REALLY wish spawn: false was the default.
@shama have you thought about using the vm module as an alternative to spawning?
I'm fine switching the default to spawn: false. I just don't want to deal with the support issues it will incur when users don't understand why their modules are bleeding into each other, especially while running test suites. I already get quite a few spawn: false issues opened even with it not being the default.
I'm open to trying vm. It's marked unstable, and I've noticed that any core node module marked unstable is aptly so. But still, any solution that involves sharing the context won't fix the issue of the contexts affecting each other.
@shama it was rewritten in 0.12, so it might be more stable now, dunno. Isn't the whole point of the vm module that contexts aren't shared?
Grunt still supports node >= v0.8. If we used vm with unshared contexts, then it would have the same issue: a task couldn't cache server instances across subsequent runs.
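For what it's worth, the caching problem with unshared contexts is easy to see with plain node (this is just the vm API, nothing Grunt-specific):

var vm = require('vm');

function runTaskOnce() {
  var sandbox = { count: 0 }; // a brand-new context every run
  vm.runInNewContext('count += 1;', sandbox);
  return sandbox.count;
}

console.log(runTaskOnce()); // 1
console.log(runTaskOnce()); // 1 again; nothing survives between runs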
The best solution, IMO, is to pull out the task-running parts of Grunt and have the watch use that rather than its own wrapper around the current system.
Another use case for this, if I am reading it right, is that you can have the server restart when changes are made to the Gruntfile. Watch can trigger the server to stop and then restart with the newly loaded Gruntfile. Unless you can do this already and I am in the dark about it.
This can be done; currently the server just needs to provide a way to halt itself rather than storing variables within a single process context.
FWIW, the watch in grunt-next doesn't spawn by default.
I'm running parameterised tests, and the inability to stop a connect server means I've had to move the connect task out to a parent task, so the child tasks can't be run separately anymore.
Ideally we could have something like connect:connect followed, optionally, by connect:disconnect (a one-line example follows this comment).
Broadcasting an event as part of done, so that we could enqueue the next task only once the server had stopped, would also work, I think (for my use case at least: I want to run a set of e2e tests in angular and restart the server with new config each time).
Edit: actually, because of the way that grunt treats the async task, this won't work :-1:
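For reference, the connect:connect / connect:disconnect shape suggested above would read something like this (hypothetical task and target names):

// Hypothetical usage: start, run the tests, then stop explicitly.
grunt.registerTask('e2e', ['connect:connect', 'run_e2e_tests', 'connect:disconnect']);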
:thumbsup:
I need to be able to stop the server too. It seems that the pull request never made it. Any workaround?
Is this issue still open? Is there any current solution for it?
I could use a feature like that too.
I have this...
connect: {
  server: {
    options: {
      port: 9001,
      base: '',
      open: true,
      keepalive: true
    }
  }
},
So how do I disconnect each time before connecting?