Add requirements traceability
I'm sure there are more elegant ways of producing the two-way traceability file than with a temporary dox file that is removed later, but I simply implemented the workflow the way I had been doing it manually: with aliases, xrefitems, and two separate passes, one to produce the trace files and one for the @refs to resolve. So I just followed the workflow I was used to, however kludgy.
I guess I should have described the purpose of this pull request first. For many safety-critical systems there is a need to provide traceability to design and code. The config items I have introduced here: REQUIREMENTS_TAGFILES allows the user to import requirements from an external source and produce a traceability table to the requirements contained within, and REQUIREMENTS_FILES provides a list of files to treat as sources of requirements.
The requirements are parsed out from anchors within the tagfiles or the files. The requirements are expected to adhere to the format (PREFIX)(NUMBER), where PREFIX is an optional string that groups requirements together (typically by file) and NUMBER is unique within the prefix. The combination of (PREFIX)(NUMBER) is referred to herein as REQUIREMENT_ID.
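For illustration, the two options might be set in a Doxyfile roughly like this (the file names are just an example; the exact value syntax may differ from what the PR implements):

# Import requirements exported from an external source (example path)
REQUIREMENTS_TAGFILES = srs_export.xml
# Treat these input files as sources of requirement anchors (example path)
REQUIREMENTS_FILES    = docs/requirements.md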
To create the traceability to the source and tests, make use of the two new keywords I have introduced:
@satisfies REQUIREMENT_ID - indicates that the REQUIREMENT_ID is satisfied in this region of code
@verifies REQUIREMENT_ID - indicates that the REQUIREMENT_ID is verified by test in this region of code
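For example, in source and test code the commands could be used in comments like the following (the requirement ID and function names are purely illustrative):

/** \brief De-energises the output stage when a fault is detected.
 *  @satisfies PREFIX_4
 */
void disable_output_on_fault(void);

/** \brief Test: fault injection forces the output off within the required time.
 *  @verifies PREFIX_4
 */
void test_fault_reaction_time(void);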
There are new nav menu items which show the requirements and the traceability.
I hope this new feature is accepted, as it makes doxygen incredibly powerful for safety-critical validation. I am already using my fork, and it would be nice for it to be mainlined. I understand that the way I'm generating the intermediate files is probably less than optimal, but as mentioned I'm just trying to replicate a manual approach I was using in the past by creating intermediate files and double processing.
Thanks for the effort.
I don't think this is the way to go. I'm thinking more of something where the @satisfies and @verifies commands are gathered in commentscan.l, and at the end of processing the inputs (but before the start of writing out the documentation) all the information is collected and sorted accordingly. This is similar to the way it is done for \section etc.
I placed just a few remarks in the code about things / problems that immediately caught my eye.
OK, thank you for your quick feedback on the PR. I will address the low-hanging fruit of the Tags History and the translator (I'm sure there will be comments on what I do and don't put in there),
then I will look at \section to see how that uses commentscan.l.
I don't think this is the way to go. I'm thinking more of something where the @satisfies and @verifies commands are gathered in commentscan.l, and at the end of processing the inputs (but before the start of writing out the documentation) all the information is collected and sorted accordingly. This is similar to the way it is done for \section etc.
Can I please get some additional information on how you envisage the workflow, and the correct modification path?
I have been using the following approach for years to make a hack job of the traceability.
I would import the requirements as TAG files from an external source to make them linkable by refs in doxygen, then use ALIASES for satisfies and verifies to create xrefitems as follows:
"satisfies{1}=\xrefitem satisfies \"Satisfies requirement\" \"Requirement Implementation\" \1" \
"verifies{1}=\xrefitem verifies \"Verifies requirement\" \"Requirement Verification\" \1"
I would then parse out the results and use a post-processing Python script to write the external document in the form of a Markdown doc, which is then parsed again when I run doxygen a second time.
Clearly this isn't the best way to do it, so I dove into attempting to modify doxygen. Unfortunately my modification was blinkered by that same initial approach, and I tried to follow how xrefitems worked rather than sections.
Now to go read up on flex.
- How did you translate all the languages? (Google Translate? I'm not a fan of using such tools for production texts.)
- One important file has not been updated: translator_adapter.h. (You didn't notice as you provided all translations.)
- How did you translate all the languages? (Google Translate? I'm not a fan of using such tools for production texts.)
- One important file has not been updated: translator_adapter.h. (You didn't notice as you provided all translations.)
Yes, Google Translate. It's the standard way of making an Android app multilingual, and I assumed that the pull request would be rejected without all translations.
I will have a look at translator_adapter.h
OH! ok
/*! An auxiliary inline method used by the updateNeededMessage()
 *  for building a warning message.
 */
inline QCString createUpdateNeededMessage(const QCString & languageName,
                                          const QCString & versionString)
{
  return QCString("The selected output language \"")
         + languageName
         + "\" has not been updated\nsince "
         + versionString
         + ". As a result some sentences may appear in English.\n\n";
}
Understood. Wait for proper translators to do the native translation, and the default adapter will take effect!
I will revert all the other languages and let native speakers perform the translation.
- How did you translate all the languages? (Google Translate? I'm not a fan of using such tools for production texts.)
- One important file has not been updated: translator_adapter.h. (You didn't notice as you provided all translations.)
removed Google Translations, made use of the translator_adapter.h and bumped rev history of the languages which are now affected.
removed Google Translations, made use of the translator_adapter.h and bumped rev history of the languages which are now affected.
You removed all my Latvian translations from version 1.8.4 till 1.11.0, and reverted to some mistakes in the Russian language.
removed Google Translations, made use of the translator_adapter.h and bumped rev history of the languages which are now affected.
You removed all my Latvian translations from version 1.8.4 till 1.11.0, and reverted to some mistakes in the Russian language.
Will fix. I will rebase the branch back to before I messed up the whole translation thing and try to merge back clean.
Rebased on the main branch, updated with previous comments in mind.
@dropbearrob Nice work. I need to chew on it a bit, but I think this is something that can be merged in.
fixing multi-target build workflows
using regex.h and made title regex more windows line ending friendly
@dropbearrob Nice work. I need to chew on it a bit, but I think this is something that can be merged in.
Thank you. I've been a massive fan of your project for at least 20 years, and I mandate that all our projects are commented using doxygen-style comments. I've used it in a very hacky way to do what this PR is trying to do for about 7 years. I thought it was about time I tried to actually put the feature in as a first-class citizen rather than as aliases, xreflists, Python scripts, and running doxygen twice to re-link it all.
@dropbearrob Can you attach an example project that shows the use of REQUIREMENTS_FILES and REQUIREMENTS_TAGFILES in combination with the \verifies and \satisfies commands?
It is not entirely clear to me what the expected syntax is of the requirement files.
Looking at the implementation a bit more, I feel the need to integrate things a bit more tightly.
My idea is the following:
- Introduce a new \requirement command that can be used to specify requirements. A bit similar to \section or \xrefitem but then with a unique user-defined ID. A requirement should have an (optional?) title and a description.
- Introduce the \satisfies and \verifies commands similar to this PR. These commands can link to requirements. \ref should also be able to link to requirements (i.e. the requirement ID acts as an anchor/section).
- Introduce a requirement page similar to this PR that lists the requirements and their coverage, and connect requirements with the implementation and test code.
- Allow importing externally defined requirements via an XML file (similar to current tag files, or maybe even the same). These external requirements can then come from another Doxygen project or from an external requirements tool.
- Allow exporting the requirements as an XML file, for referencing them from another project (maybe use the existing tag files?).
- The XML file needs to contain a list of requirements, where each requirement has ID, title, and an anchor (and maybe a URL if we want to link to external tools).
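As a rough illustration of that last point, an exported requirements XML entry could look something like this (element and attribute names are only a sketch, not an existing format):

<requirements>
  <requirement id="PREFIX_1">
    <title>Output de-energised on fault</title>
    <anchor file="requirements.html">PREFIX_1</anchor>
    <!-- optional link into an external requirements tool -->
    <url>https://example.com/reqs/PREFIX_1</url>
  </requirement>
</requirements>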
Let me know what you think.
Looking at the implementation a bit more, I feel the need to integrate things a bit more tightly.
My idea is the following:
- Introduce a new \requirement command that can be used to specify requirements. A bit similar to \section or \xrefitem but then with a unique user-defined ID. A requirement should have an (optional?) title and a description.
- Introduce the \satisfies and \verifies commands similar to this PR. These commands can link to requirements. \ref should also be able to link to requirements (i.e. the requirement ID acts as an anchor/section).
- Introduce a requirement page similar to this PR that lists the requirements and their coverage, and connect requirements with the implementation and test code.
- Allow importing externally defined requirements via an XML file (similar to current tag files, or maybe even the same). These external requirements can then come from another Doxygen project or from an external requirements tool.
- Allow exporting the requirements as an XML file, for referencing them from another project (maybe use the existing tag files?).
- The XML file needs to contain a list of requirements, where each requirement has ID, title, and an anchor (and maybe a URL if we want to link to external tools).
Let me know what you think.
You aren't wrong to think this. I was circling around this 2 days ago when I was writing a set of examples and realised I had made a bunch of errors in the way I've done REQUIREMENTS_FILES; it has lots of errors and limitations (not being able to ref to them, for a start).
Let me add some context.
I started with just REQUIREMENTS_TAGFILES, as this is how I was working historically, where I would import external requirements documents into doxygen. The tag files would look like this:
<?xml version='1.0' encoding='UTF-8' standalone='yes' ?>
<tagfile doxygen_version="1.9.4">
  <compound kind="page">
    <name>md_requirements_requirements</name>
    <title>SRS SOME PROJECT TITLE</title>
    <filename>md_requirements_requirements.html</filename>
  </compound>
  <compound kind="page">
    <name>index</name>
    <title>SRS SOME PROJECT TITLE</title>
    <filename>index.html</filename>
    <docanchor file="index.html">mainpage</docanchor>
    <docanchor file="index.html">PREFIX_1</docanchor>
    <docanchor file="index.html">PREFIX_2</docanchor>
    <docanchor file="index.html">PREFIX_3</docanchor>
...
Hence this starting point coloured my view of everything, and REQUIREMENTS_TAGFILES is the most complete part of this PR. During the examples I quickly realised that by trying to do too much I failed in a lot of how REQUIREMENTS_FILES works. With that admission of failure out of the way: the reason for the PREFIX_ID regex when parsing the requirements is that in an external document there are potentially lots of anchors. I want the imported document left alone as a standalone requirements document which reads as it was authored (in my case an export from DOORS, if that gives away the industry I work in).
So I started from the working version of this using just REQUIREMENTS_TAGFILES, but then I thought, hey, why don't I just process the inputs too!? and added REQUIREMENTS_FILES as an afterthought. If I stripped out the REQUIREMENTS_FILES functionality, I think we may be able to accept this as a functional pull request, then delegate the REQUIREMENTS_FILES part to a separate PR once we have a better fleshed-out native way of defining the requirements.
The fact that I was parsing an imported tagfile was what led to the mental model of "I'll just use anchors, as I'm already using docanchors in the TAGFILE, and then it's all the same if I just process the input files as per the TAGFILES." This is not an ideal solution at all, because the following became very obvious to me when doing the examples:
- If I am defining the requirements in the project and not getting them from a tagfile, then there is no need to enforce the naming rules of the regex at all, and they should be able to be ref'd without being a trace (importing from tagfiles already allows them to be ref'd; at least at one point it did, I will re-test).
- For that matter the default regex could be a configuration option if it was desired.
- If we are building the requirements in-source, then why not have a specific requirement keyword which could have a much more verbose specification now that it's under our control, such as Requirement{id, description, type} where type could be things like "functional", "safety", "security", etc. It could even have things like parent requirements, where it's derived or cascaded from an upstream source; a bunch of verbose yet optional fields.
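For instance, an in-source definition along those lines might look something like the following (the syntax is only a sketch of the fields mentioned above; nothing like this exists yet):

/** Sketch only: a hypothetical in-source requirement with id, description and type.
 *  \requirement{PREFIX_9, "The output shall be de-energised within 10 ms of a fault.", safety}
 */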
These were just some of the things which popped up when I was doing the examples.
And with all this in mind, @doxygen, is the BEST path of action here to split this PR into 2 separate PRs? REQUIREMENTS_TAGFILES, which I think is complete, and then work on the notion of how to implement requirement definitions internal to doxygen rather than imported from external sources?
Let me know if splitting it seems right to you. I think the current tagfiles with the satisfies/verifies solution works quite well as a standalone feature. But I think creating and managing the requirements internally requires the requirement keyword (as you point out), and bigger discussions about what fields it contains, and whether the requirements appear in a separate document in the order they are defined, or are put together like an xreflist where they are pulled from wherever they are defined in the code and there's just a collection of requirements rather than a requirements spec.
@dropbearrob I agree that splitting off the more finished part in a separate PR makes sense. I have some time during the Christmas holiday period to work on this, so I can take this part as a starting point and add the remainder if you like.