datatracker
TRACKER_BASE_URL should be overridden to the dev instance when in dev mode
Right now, links produced for meeting materials (or any link constructed with settings.TRACKER_BASE_URL) by a dev container point back to the production datatracker. They should instead point to the local instance.
So it seems overriding the ID_TRACKER_BASE in settings_local.py is the thing to do. Getting the correct URL might be tricky; the Django runserver command has a hairy regexp to parse the host/port args. It would be nice if they exported the base-url that gets displayed at startup into the environment. (I'll make a PR over there)
We could add something like these few lines to manage.py:
if len(sys.argv) > 1 and sys.argv[1] == 'runserver':
    # assumes the optional argument, if present, is a bare port number
    port = sys.argv[2] if len(sys.argv) > 2 else '8000'
    os.environ['SERVER_BASE_URL'] = 'http://localhost:' + port + '/'
Or I can pull down the hairy regexp and code listed above.
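For illustration, a rough sketch of that kind of parsing (an approximation only, not Django's actual regexp; it assumes the argument looks like [addr:]port and ignores IPv6 literals) could be:

# Approximation of runserver's optional [addr:]port argument handling;
# not Django's actual regexp, just a sketch of the idea.
import os
import re
import sys

def server_base_url(argv):
    port = '8000'
    if len(argv) > 2:
        m = re.match(r'(?:[^:]+:)?(?P<port>\d+)$', argv[2])
        if m:
            port = m.group('port')
    return 'http://localhost:' + port + '/'

if len(sys.argv) > 1 and sys.argv[1] == 'runserver':
    os.environ['SERVER_BASE_URL'] = server_base_url(sys.argv)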
Thoughts?
@alireza83 are you still working on this?
Not at this time; we can delete the branch.
Maybe we should adjust the deployment scripts to add an ID_TRACKER_BASE to settings_local.py when the app container starts up. It currently copies a standard settings_local.py into place at startup. It could do something like
echo "ID_TRACKER_BASE = 'http://localhost:8000/'" >> $WORKSPACEDIR/ietf/settings_local.py
after the copy. This will only mostly work, though - it's possible to run the dev datatracker on a different port.
I necessarily end up with dev instances on multiple ports at the same time as I'm working on branches and testing other people's PRs. Something that forces me to only be able to run one instance of the datatracker at a time will have a major negative impact on my workflow.
The proposed PR doesn't limit you to only one instance; it just prints a warning when you start an instance that uses a port other than the default.
I'm not sure it does either - this is watching the port in the container, which is always 8001. Docker then tunnels a port on the host interface to that container port. By default that's 8000, but it's configurable. If I understand correctly, Robert runs several containers with different host ports.
I think the fix is just the addition to settings_local plus some plumbing to find the port that docker is using on the host interface. For the docker/run script, that's the CUSTOM_PORT variable in the compose file. I'm not sure if it's the same for dev container users.
Also, port 8001 is talking to nginx, which provides the service that actually gets tunneled out to the host.
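One possible shape for that plumbing (a sketch only; it assumes the chosen host port is exported into the container's environment, here as CUSTOM_PORT, which may or may not already happen for dev container users) would be to have settings_local.py read the port from the environment instead of hardcoding it:

# Sketch for settings_local.py; CUSTOM_PORT here is an assumption about what
# the container environment exposes, not something the current setup provides.
import os

_host_port = os.environ.get('CUSTOM_PORT', '8000')
ID_TRACKER_BASE = 'http://localhost:%s/' % _host_port

That keeps the hardcoded 8000 as a fallback for the default setup while letting a non-default host port flow through without editing the file.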