Separate NAMD GlobalMaster and colvarproxy_namd classes
Currently, colvarproxy_namd (also known as GlobalMasterColvars) is a derived class of GlobalMaster, consistent with other NAMD features based on global forces. This inheritance worked well as long as GlobalMaster was the API used for communicating with NAMD. However, it presented significant issues in #783, which added support for CudaGlobalMasterClient. The most significant was the duplication of functionality that is native to colvarproxy_namd but could not be used without also enabling the traditional GlobalMaster class.
This PR removes the inheritance between GlobalMaster and colvarproxy_namd, replacing it with a friend declaration. GlobalMasterColvars is now an actual class, acting as a thin wrapper around colvarproxy_namd.
@HanatoK @jhenin For now there is no change in functionality.
The draft status is because I would also like to make colvarproxy_namd support either GlobalMaster or CudaGlobalMasterClient dynamically. If that turns out to be too much work, I would consider merging this with few changes and doing the rest in a different PR.
I guess the following functions can be shared between colvarproxy_impl in colvarproxy_cudaglobalmaster.C and colvarproxy_namd:
load_atoms_pdb
load_coords_pdb
update_target_temperature
log
error
check_atom_id
update_atom_properties
output_stream
flush_output_stream
flush_output_streams
close_output_stream
close_output_streams
backup_file
set_unit_system
I guess the following functions can be shared between colvarproxy_impl in colvarproxy_cudaglobalmaster.C and colvarproxy_namd: ...
Exactly! Plus much of the code in the initialization of individual atoms, etc.
I'm excited to see the code deduplication that will result from this!
Current state is that the two classes are now separated, and things seem to be working.
Next I'll be taking a look at consolidating with the CUDAGlobalMaster-based equivalent (see https://github.com/Colvars/colvars/pull/803#issuecomment-2874036919) to see how much that would entail, i.e. whether it can fit in this PR or needs another one.
@HanatoK I'm considering keeping the files under the folder namd/cudaglobalmaster rather than moving them to namd/src, so that this repo can track their history correctly. Any thoughts?
If this is a complete functioning change set, I would recommend merging now, and doing the next step separately. The next PR will be more legible if it is limited in scope.
Also, please move the files early, for the sake of long-term cleanliness of the directory tree. File history becomes less relevant over time, the directory tree remains.
If this is a complete functioning change set, I would recommend merging now, and doing the next step separately. The next PR will be more legible if it is limited in scope.
Sounds good: lifting the draft status.
Also, please move the files early, for the sake of long-term cleanliness of the directory tree. File history becomes less relevant over time, the directory tree remains.
I'm okay with that, but it would have to be done separately from this PR. Moving the files now would break the CMake recipe that currently builds the plugin.
Current state is that the two classes are now separated, and things seem to be working.
Next I'll be taking a look at consolidating with the CUDAGlobalMaster-based equivalent (see #803 (comment)) to see how much that would entail, i.e. whether it can fit in this PR or needs another one.
@HanatoK I'm considering keeping the files under the folder namd/cudaglobalmaster rather than moving them to namd/src, so that this repo can track their history correctly. Any thoughts?
I agree that it is better to keep the files of the CUDAGlobalMaster-based interface under its own directory.
This is ready to merge for me.