DumpSchema / LoadSchema add destDir param
Is your feature request related to a problem? Please describe.
We sync the df-files from our Progress databases to a Git repo. Since we have a lot of tables and fields, these files have become way too large. Some Git plugins get really slow processing big files. Also, a single file in a repo defeats the purpose of some of Git's nicer features.
Describe the solution you'd like
I would like to use PCT like this:
<PCTDumpSchema destDir="${destDir}/${dbName}" dlcHome="${dlcHome}" cpInternal="utf-8" cpStream="utf-8" cpColl="basic" tempDir="${tempDir}">
If I have 10 tables, I expect to find 10 df-files in "${destDir}/${dbName}", named like ${destDir}/${dbName}/<tableName>.df.
The same goes for loading the schema.
For PCTIncrementalDump, I propose a new destDir parameter as well, which would save the delta per table (only if there is a delta, of course).
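To make the loading side concrete, a hypothetical per-table load could look like this (the srcDir parameter does not exist today and simply mirrors the proposed destDir; the PCTConnection attributes are only illustrative):
<PCTLoadSchema srcDir="${destDir}/${dbName}" dlcHome="${dlcHome}" cpInternal="utf-8" cpStream="utf-8" tempDir="${tempDir}">
  <!-- hypothetical: load every <tableName>.df found in srcDir into the connected database -->
  <PCTConnection dbName="${dbName}" dbDir="${dbDir}" singleUser="true"/>
</PCTLoadSchema>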
Describe alternatives you've considered
1) I tried to use the tables param, but to do so I first needed to run
<PCTDumpData tables="_File" destDir="${destDir}" dlcHome="${dlcHome}" cpInternal="utf-8" cpStream="utf-8" cpColl="basic" tempDir="${tempDir}">
and build a comma-separated list of table names out of the resulting file. This list could then be fed back into <PCTDumpSchema>, one invocation per entry in the table list, as sketched below.
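A rough sketch of that loop, assuming the table list has already been extracted from the _File dump, ant-contrib's foreach task is available, and PCTDumpSchema writes its output to a destFile attribute (only the tables parameter is confirmed in this thread):
<foreach list="${tableList}" delimiter="," param="table" target="dump-single-table"/>

<target name="dump-single-table">
  <!-- dump the schema of one table into its own .df file -->
  <PCTDumpSchema tables="${table}" destFile="${destDir}/${dbName}/${table}.df" dlcHome="${dlcHome}" cpInternal="utf-8" cpStream="utf-8" cpColl="basic" tempDir="${tempDir}">
    <PCTConnection dbName="${dbName}" dbDir="${dbDir}" singleUser="true"/>
  </PCTDumpSchema>
</target>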
Additional context
I looked inside your src and ended up in dmpSch.p:
RUN prodict/dump_df.p (INPUT cTables, INPUT cFile, INPUT SESSION:CPSTREAM).
I assume that the dump logic invoked there (prodict/dump_df.p) is under the control of Progress.
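For illustration, a per-table variant of that call might look roughly like this (a minimal sketch: only the dump_df.p call signature comes from dmpSch.p; the _File filter, the destination-directory variable and the file naming are assumptions):
/* Sketch: dump each user table into its own .df file */
DEFINE VARIABLE cDestDir AS CHARACTER NO-UNDO. /* would replace the single cFile */
FOR EACH _File NO-LOCK WHERE _File._Tbl-Type = "T": /* user tables only */
  RUN prodict/dump_df.p (INPUT _File._File-Name,
                         INPUT cDestDir + "/" + _File._File-Name + ".df",
                         INPUT SESSION:CPSTREAM).
END.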
Sounds useful.
Could be done. How do you plan to manage incremental dumps in your repo (and how will you generate them)?
@gquerret Not sure if I understand you correctly, but our environment works like this:
Our devs have a local DB based on a df-file provided by the Git repository. There's a script which allows them to sync the DB with the Git repo like this:
- Dump the local DB as a df-file into the Git repository (commit).
- Pull the latest df-file from the repo and merge it with our local changes (fixing merge conflicts, if any).
- Create a temporary, second DB based on the merged df-file.
- Create an incremental df between those two local DBs.
- Load the incremental df into the dev DB.
- Finally, push the changes.
Now the Git repo and the local DB should be the same. It may sound complicated, but we automated this process pretty smoothly (including opening the IDE for commit messages / conflicts) and it has improved our CI/CD capabilities a lot.
If you are interested in the CD processes (deploying to customers' DBs), let me know. Cheers!
@gquerret I think those are different topics.
Versioning the .df per DB table will simplify things for the SCM side of life:
- single history per file
- faster diff view in the fancy graphical Git clients
- smaller files in pull requests
Your delta.df question is very valid too. We also deploy a full DF with our application framework and create a delta.df on the fly in production.
But for that purpose, we can just concatenate all the .df files into a single .df.
@mikefechner @CIenthusiast I understand the need (or the preference) for a single DF per table. If you also store the incremental DF in the repo, would you want to generate them per table or per database?
> you also store the incremental DF in the repo
I never do.
> I never do.
I think we already had this discussion, but this means that you have to handle field renames or adds/deletes during deployment.
field renames? Better start with good names from the start ;)
> @mikefechner @CIenthusiast I understand the need (or the preference) for a single DF per table. If you also store the incremental DF in the repo, would you want to generate them per table or per database?
I think both ways should be possible. There might be cases where you want a single file as well (for CD purposes, maybe).
Incremental dump per table would require significant work.
This would be a bonus really. What matters is dump / load per table.
Any news / planned release on this?
Unfortunately, I had no chance to work on that. You can open a PR if you want.
Closed as won't fix.