[Feat] Adding GLOP model
Description
This PR adds an implementation of Global and Local Optimization Policies (GLOP), together with an environment for the Shortest Hamiltonian Path Problem (SHPP).
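For context, the SHPP asks for the shortest path that visits every node exactly once between two fixed endpoints (in GLOP, sub-paths with fixed start and end nodes are revised this way). A brute-force reference sketch, useful only for checking tiny instances (the function name and interface are mine, not from this PR):

```python
from itertools import permutations
import math

def shpp_brute_force(coords, start, end):
    """Shortest Hamiltonian path from `start` to `end` visiting all nodes once.

    coords: list of (x, y) tuples; start/end: node indices.
    Exponential time, intended only as a correctness reference.
    """
    def dist(i, j):
        return math.dist(coords[i], coords[j])

    middle = [i for i in range(len(coords)) if i not in (start, end)]
    best_len, best_path = float("inf"), None
    for perm in permutations(middle):
        path = (start, *perm, end)
        length = sum(dist(a, b) for a, b in zip(path, path[1:]))
        if length < best_len:
            best_len, best_path = length, path
    return best_path, best_len
```

A reference like this can serve as the ground truth when unit-testing the SHPP environment's reward on small instances.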
Motivation and Context
GLOP is an important non-autoregressive (NAR) model for routing problems. For more details, please refer to the original paper.
Types of changes
- [x] New feature (non-breaking change which adds core functionality)
- [x] Documentation (update in the documentation)
- [x] Example (update in the folder of examples)
Checklist
- [x] My change requires a change to the documentation.
- [ ] I have updated the tests accordingly (required for a bug fix or a new feature). The test for SHPP is added, but the test for GLOP is not added yet.
- [ ] I have updated the documentation accordingly.
⚠️ Working on Debugging
The current implementation of GLOP is runnable, but it does not learn yet.
I added a test notebook at `examples/other/3-glop.ipynb`. It covers a test of the SHPP environment, a greedy rollout with an untrained GLOP policy (with visualizations for better understanding), and launching GLOP training. Please take a look and play with it.
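For readers unfamiliar with greedy rollouts: at each decoding step the policy scores the remaining nodes and the highest-scoring unvisited node is selected. A minimal sketch of that loop, with `score_fn` as a hypothetical stand-in for the (neural) policy's per-step scores:

```python
import numpy as np

def greedy_rollout(score_fn, num_nodes):
    """Greedy decoding: repeatedly pick the best-scoring unvisited node.

    score_fn(visited) -> np.ndarray of per-node scores; a placeholder
    for the policy network (the real GLOP policy is neural).
    """
    visited = []
    mask = np.zeros(num_nodes, dtype=bool)  # True = already visited
    for _ in range(num_nodes):
        scores = score_fn(visited)
        scores = np.where(mask, -np.inf, scores)  # forbid revisits
        node = int(np.argmax(scores))
        visited.append(node)
        mask[node] = True
    return visited
```

With an untrained policy the scores are essentially arbitrary, so the resulting tour is a useful sanity check of feasibility (each node visited once) rather than of quality.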
Compared with the original GLOP, the following components are not implemented yet:
- [ ] Maximum number of vehicles constraint;
- [x] Polar coordinates embedding;
- [x] Sparsify the input graph;
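On the polar-coordinates item: the idea is to feed each node's position relative to the depot as (radius, angle) features. A minimal sketch of such a feature transform (the exact feature layout is my assumption, not necessarily GLOP's embedding):

```python
import math

def polar_features(coords, depot):
    """Map Cartesian node coordinates to (radius, angle) relative to the depot.

    coords: list of (x, y) tuples; depot: (x, y).
    Angle is in radians in (-pi, pi], from math.atan2.
    """
    offsets = [(x - depot[0], y - depot[1]) for x, y in coords]
    return [(math.hypot(dx, dy), math.atan2(dy, dx)) for dx, dy in offsets]
```

In practice these two scalars would be appended to (or replace) the raw coordinates before the initial node embedding layer.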
I will add these missing parts soon. Here are some ideas that may help reproduce the results:
- [ ] Use the same number of nodes and capacity settings as the original GLOP;
- [x] Instead of using the AM for SHPP as the reviser, use an insertion heuristic to solve sub-TSPs for more efficient training;
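For the insertion idea above, a cheapest-insertion sketch for a closed sub-TSP tour (a generic heuristic standing in for the learned reviser during debugging; not taken from the official GLOP code):

```python
import math

def cheapest_insertion_tsp(coords):
    """Build a closed tour by repeatedly inserting the node whose
    insertion increases the tour length the least.

    coords: list of (x, y) tuples. Returns a node-index tour.
    """
    n = len(coords)
    def dist(i, j):
        return math.dist(coords[i], coords[j])

    tour = [0, 1] if n > 1 else [0]
    remaining = set(range(2, n))
    while remaining:
        best = None  # (cost increase, node, insertion position)
        for k in remaining:
            for pos in range(len(tour)):
                a, b = tour[pos], tour[(pos + 1) % len(tour)]
                delta = dist(a, k) + dist(k, b) - dist(a, b)
                if best is None or delta < best[0]:
                    best = (delta, k, pos + 1)
        _, k, pos = best
        tour.insert(pos, k)
        remaining.discard(k)
    return tour
```

Swapping in a deterministic reviser like this removes one learned component from the loop, which makes it easier to isolate whether the partitioning policy itself is failing to learn.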
@henry-yeh @Furffico, if you have time, could you take a look at the implementation? We need to closely reproduce GLOP's results soon.
After some experiments with the original GLOP implementation, I found that none of the current discrepancies (maximum number of vehicles constraint; polar coordinates embedding; input graph sparsification; ...) should be the cause of the training failure on CVRP100. It's strange. @Furffico
Could you push the latest version of GLOP?
We were experimenting with a debug version that incorporates part of the code from the official GLOP implementation. The "pure RL4CO" version still does not learn and requires further debugging.
I see. Given that it's still in RL4CO (just not fully refactored), I'd suggest merging it now, and merging the pure RL4CO version later when it's ready.
What do you think?
Cc: @cbhua
GLOP worked in the submission version, so we will clean up this branch and then push a clean final implementation.
Addressed by #253 !
Codecov Report
Attention: Patch coverage is 53.54331% with 118 lines in your changes missing coverage. Please review.
| Files with missing lines | Coverage Δ |
|---|---|
| rl4co/envs/__init__.py | 60.00% <ø> (ø) |
| rl4co/envs/routing/__init__.py | 100.00% <100.00%> (ø) |
| rl4co/models/__init__.py | 100.00% <100.00%> (ø) |
| rl4co/models/nn/env_embeddings/context.py | 83.33% <100.00%> (+1.44%) :arrow_up: |
| rl4co/models/nn/env_embeddings/dynamic.py | 95.45% <ø> (ø) |
| rl4co/models/nn/env_embeddings/init.py | 75.32% <100.00%> (+1.47%) :arrow_up: |
| rl4co/models/zoo/__init__.py | 100.00% <100.00%> (ø) |
| rl4co/models/zoo/glop/__init__.py | 100.00% <100.00%> (ø) |
| rl4co/utils/test_utils.py | 97.22% <100.00%> (+0.16%) :arrow_up: |
| rl4co/envs/routing/shpp/env.py | 98.14% <98.14%> (ø) |
| ... and 4 more | |