Introduce DeepEcalProcessFilter
I am updating ldmx-sw; here are the details.
What are the issues that this addresses?
Resolves https://github.com/LDMX-Software/ldmx-sw/issues/1288
Check List
- [x] I successfully compiled ldmx-sw with my developments
- [x] I ran my developments and the following shows that they are successful.
Produced events with a process depth of 400 mm: it took 2,962,289 simulated events to produce 10,000 events passing the filter (a config sketch follows the check list).
@tylerhoroho-UVA is testing them now
- [x] I attached any sub-module related changes to this PR.
N/A
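For orientation, here is a minimal sketch of how that depth knob could be set in a config. The class and parameter names are assumptions following the pattern of the existing filter wrappers in Biasing/python/filters.py, not necessarily what this PR implements:

```python
# Hypothetical configuration sketch; all names here are assumptions.
from LDMX.SimCore import simcfg

deep_filter = simcfg.UserAction('deep_ecal_filter', 'biasing::DeepEcalProcessFilter')
deep_filter.minimum_process_z = 400.  # mm; keep only events whose hard-brem process happens deeper than this

# the filter would then be attached to the simulator, e.g.
# sim.actions.append(deep_filter)
```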
As validation I ran the config at 400 mm; the hard brem seems to stop there, and that is where the daughters start.
@tylerhoroho-UVA I put the file with 276 events here (I meant to have 1000, but out of 10,000,000 tries it only got up to 276 events, which also means that z = 500 will take a long time to produce statistics):
/u/gu/tamasvami/events_pn_deep_test_276events.root
In https://github.com/LDMX-Software/ldmx-sw/pull/1289/commits/24f9e0ba0a0b7a15b4ca80ae642c25c9a8447974?diff=split&w=1 I updated the DQM plots to show the particle types; they now look like this:
OK, I'm done with the filter variant that doesn't require the photon to come from the target. Regarding file size: the brem photon is tagged and no other brem is saved; saving all conversions rather than tagging only the relevant one costs about a 15% increase in size (152 KB vs 130 KB per event), so I am comfortable saving all of them. It also makes the code easier to read (I can say more about why the code gets complicated otherwise, if needed) and makes sure we don't miss any of the relevant physics.
In the meantime, Tyler is taking an event from my event.root and resimulating it with the target filter on to see what's happening. We may want to decouple that part from this PR, although we are not in a rush.
OK, in the last push I introduced require_photon_fromTarget_, which governs whether we force the photon to come from the target. It worked nicely on one event; I'll produce a few more and then move the PR to "ready for review".
There is also a new config that runs the from-target photon case:
Biasing/test/ecal_conv_deep_fromTarget.py
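For illustration, the main difference relative to the default config would presumably be flipping the new flag. A minimal sketch, where the Python-side spelling is an assumption based on the C++ member require_photon_fromTarget_:

```python
# Hypothetical sketch of the from-target variant; names are assumptions.
from LDMX.SimCore import simcfg

deep_filter = simcfg.UserAction('deep_ecal_filter', 'biasing::DeepEcalProcessFilter')
deep_filter.minimum_process_z = 400.          # mm, same depth as the default test config
deep_filter.require_photon_fromTarget = True  # additionally require the brem photon to originate in the target
```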
Interesting: volume.contains("target") actually brings in much more than just the target.
@tylerhoroho-UVA I assume this is not good. Maybe we could instead enforce that the photon comes from around z = 0 rather than going through the volume definition. Alternatively, if those volumes are called TS_trgt or something like that, I could require "target" while excluding anything TS related. But since the target will always be at z = 0 (roughly), I think the z cut is the less involved solution. What do you think?
Interesting. From looking at the .gdml files I don't see any objects (other than the target) that contain a "target" substring in their volume name. Maybe it's something on the G4 level that's assigning the name?
If z values can be used instead of volume-name substrings, I think -0.5 < z < 0.5 (mm) would fully contain everything in the target plus a little extra room.
Yes, I think everything in target.gdml, i.e. these volumes https://github.com/LDMX-Software/ldmx-sw/blob/trunk/Detectors/data/ldmx-det-v14-8gev/target.gdml#L72-L91, is going to be called target by G4.
OK, let's do it with -0.5 < z < 0.5; I'll push the commit when I have it.
I talked to Tyler about the issue above, and we concluded that it's better to keep this as it is (in real data we won't know whether the interaction happened in the TS or the target), and we can always remove these events at the analysis level. It's also a small effect.
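If we ever want to drop the non-target cases offline, a minimal sketch of such an analysis-level cut could look like the following. This is pure illustration: the array is dummy data, and in practice the photon vertex z would be read from the output file.

```python
import numpy as np

# Dummy stand-in for the vertex z (mm) of the tagged brem photon, one per event;
# in a real analysis this would come from the simulated-particle collection in the output file.
photon_vertex_z = np.array([0.12, -0.31, 4.8, 0.05, -180.0])

# Keep only events where the photon originates in/near the target,
# i.e. the -0.5 mm < z < 0.5 mm window discussed above.
in_target = np.abs(photon_vertex_z) < 0.5
print(f"kept {in_target.sum()} of {in_target.size} events")
```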
I would encourage you to survey values of the biasing threshold and the sorting threshold to see how high you can raise them before losing access to the events you want to study.
Maybe this could be a good study for Anmol, what do you think @tylerhoroho-UVA? (This can certainly be done after the PR is merged since it's a configuration variable.)
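One way to set up such a scan, as a sketch: expose the two thresholds as command-line arguments of the test config so each scan point is just another fire invocation. The argument names, defaults, and the way the filter would consume them are placeholders, not values from this PR.

```python
# Hypothetical sketch for the threshold survey, e.g.
#   fire ecal_conv_deep_fromTarget.py --bias-threshold 2000 --sort-threshold 1500
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--bias-threshold', type=float, default=1500.,
                    help='minimum photon energy (MeV) that gets biased')
parser.add_argument('--sort-threshold', type=float, default=1000.,
                    help='energy threshold (MeV) used when selecting the hard brem')
args = parser.parse_args()

# the values would then be handed to the biasing operator / filter, e.g.
# deep_filter.bias_threshold = args.bias_threshold
# deep_filter.sort_threshold = args.sort_threshold
```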
I ran the test file again after the changes (with a start z of 240 mm); the DQM histograms look good.