OpticSim.jl
Do not ignore detector in the middle of an optical system
Currently a detector can only be hit if there is no other element left for the ray to hit.
Consider this system:
w1 = leaf(Cuboid(20.0, 20.0, 2.0;
        interface = FresnelInterface{Float64}(SCHOTT.N_BK7, Air;
            reflectance = 0.0, transmission = 1.0)),
    translation(0.0, 0.0, -30.0))
w2 = leaf(Cuboid(20.0, 20.0, 2.0;
        interface = FresnelInterface{Float64}(SCHOTT.N_BK7, Air;
            reflectance = 0.0, transmission = 1.0)),
    translation(0.0, 0.0, -60.0))
la = LensAssembly(w1(), w2())
detector = Rectangle(20.0, 20.0, SVector(0.0, 0.0, 1.0), SVector(0.0, 0.0, -50.0); interface = opaqueinterface())
sys = CSGOpticalSystem(la, detector)
Vis.drawtracerays(sys; trackallrays=true, rayfilter=nothing, colorbynhits=true)
Vis.drawtraceimage(sys)
The drawtracerays visualization shows rays passing through the detector and hitting the lower window. The drawtraceimage visualization stays dark. Tracing a single ray will return nothing. This seems to happen because the trace function in OpticalSystem only considers intersections with the detector after the ray has been traced through all elements of the lens assembly. At this point the rays are already below the detector and propagating in the wrong direction.
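For reference, a minimal way to reproduce the single-ray case (a sketch assuming the system sys from above and the standard OpticalRay(origin, direction, power, wavelength) constructor):

r = OpticalRay(SVector(0.0, 0.0, 10.0), SVector(0.0, 0.0, -1.0), 1.0, 0.55)
res = trace(sys, r) # returns nothing even though the ray crosses the detector plane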
This behavior is not intuitive for me. There are also use cases where one needs to place a detector at an intermediate position.
How I would like it to work:
- The detector is treated like a normal surface during raytracing of the lens assembly
- If its interface is opaque, the ray is terminated at the detector
- Otherwise the ray can continue to propagate through the system, possibly hitting other detectors (once OpticSim has this capability)
- A ray is terminated if it hits an opaque detector, if its power drops below a user-defined threshold, or if the recursion depth is too large (see the sketch below)
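A sketch of the desired termination test (hypothetical names, not existing OpticSim API):

function shouldterminate(intsct, raypower, depth, minpower, maxdepth)
    # stop at an opaque detector, below the power threshold, or past the recursion limit
    return isopaquedetector(intsct) || raypower < minpower || depth > maxdepth
end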
Yeah this is something which came up quite a while ago and never got fixed. I certainly agree that detectors should work within a system. We should also support multiple detectors in a single system. This might require reasonable restructuring to make work, but is probably worth it
Agreed this needs to be fixed. It will take a bit of time to figure out how to restructure the code.
@cdhf your deterministic ray tracing example was very nice. Thank you for doing this.
It's on the dev branch of the documentation now and will show up in the stable branch as soon as we increment the version number.
Detectors are currently handled specially in the trace function defined for CSGOpticalSystems (src/Optical/OpticalSystem.jl line 100).
Each CSGOpticalSystem object stores a detector, which is just a Surface, and a detectorimage. The update of the detectorimage happens in the trace function. A ray is traced through the lens assembly and then the last ray is intersected with the detector.
To make this work more generically we would probably need to make a Detector type, which stores the detectorimage, something like this:
abstract type AbstractDetector{T} <: Surface{T} end
#typical user defined detector object
struct BasicDetector{S<:Surface,I<:Union{HierarchicalImage,Image},T<:Number} <: AbstractDetector{T}
    detectorsurface::S
    detectorimage::I
end
The detector image could get updated when surfaceintersection is called on the detector object. A typical surfaceintersection function is in src/Optical/Paraxial.jl line 70.
This would be a little weird because surfaceintersection can, in general, return a DisjointUnion of Intervals representing all the intersections with the object. Then closestintersection picks out the one closest to the origin of the intersecting ray.
Logically the code to handle the update of the Detector should be in closestintersection, but that function currently gets a DisjointUnion of intervals and works through them to find the closest point.
Probably we need some other function designed explicitly for updating detectors that gets called in the trace function after all the surfaceintersection processing is done, maybe something like this:
update(a::AbstractDetector) = nothing
update(a::BasicDetector) = #user defined update code
and then in the trace code something like this after closestintersection is called:
if isa(surf, AbstractDetector)
    update(surf)
end
Then users can define their own detector types with whatever properties they want and the detector will be updated by the user defined code. Detectors could store phase, power, rays, ray counts, ray angles, whatever. We supply BasicDetector with the system to handle the most common cases.
Does this seem reasonable? We'd have to document this well. It would be hard for new users to figure out how to use it on their own.
One other thing we'd have to keep track of is that the current system marks traces that eventually end up hitting the detector. Otherwise when you visualize the trace there are tons of generally irrelevant rays that are not on a path that eventually ends up at a detector.
What if any surface could be a detector?
Let's say that any surface can have an optional associated detector. After finding the next surface with surfaceintersection and closestintersection, the associated detector is updated. The actual way the detector is updated is based on the geometry of the surface and the information that needs to be stored. Something roughly like this:
function update(surface)
    update(detectortype(surface), detectorgeometry(surface))
end

function detectortype(surface)
    if !hasdetector(surface)
        nothing
    else
        surface.detectortype
    end
end

function detectorgeometry(surface)
    if !hasdetector(surface)
        nothing
    else
        surface.detectorgeometry
    end
end

update(dtype::Nothing, dgeometry::Nothing) = nothing
The Detector for the rim of a lens might only be interested in the total number of rays hitting it and would therefore not need to store any position information. The detector for a lens surface might want to store the complete spatial ray distribution with a user defined pixel layout.
If something like this is feasible, this would neatly support simulation of power induced thermal aberrations.
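For instance, a hypothetical rim detector under this pattern could be as simple as (none of these names exist in OpticSim yet):

# a detector that only counts hits, e.g. for a lens rim; it stores no positional information
mutable struct RayCounter
    count::Int
end

update(dtype::RayCounter, dgeometry::Nothing) = (dtype.count += 1; nothing)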
@cdhf that is an intriguing idea. Shouldn't be that hard to do. Let me noodle on it for a day and then we should be able to nail down exactly how the code has to change.
Do you want to take a crack at the actual coding? I'll help out as much as you need.
@BrianGun sure, why not. It's another good chance to understand the code base better.
Great, I've been thinking about how to do this and it looks like there is a way that won't require major changes to the code but will give you, and others, all the flexibility you could want for detectors. Should have something by Monday or Tuesday, we can iterate a few times till it looks good and then you can start coding.
How about something like this?
abstract type AbstractDetector end
"""interface for AbstractDetector """
surface(a::AbstractDetector) #returns the surface contained in the Detector
update!(a::AbstractDetector, b::LensTrace) #update the detector object with the information contained in the LensTrace
#example detector that records an image representing the power of the rays that intersect it.
#Detectors could store any information, such as phase, ray angle, wavelength so long as
#they define an update function that does the processing.
struct ImageDetector{S<:Surface} <: AbstractDetector
    name::String # or maybe Symbol
    surface::S
    image::HierarchicalImage
end
CSGOpticalSystem changes to have a dict that stores all the detectors in the system, indexed by surface, instead of the single detector it has now. The detectorimage field goes away:
struct CSGOpticalSystem{T,D<:Number,S<:Surface{T},L<:LensAssembly{T}} <: OpticalSystem{T}
    assembly::L
    detectors::Dict{Surface,AbstractDetector}
    # detectorimage::HierarchicalImage{D} # removed
    temperature::T
    pressure::T
    ...
When you construct the optical system you create detectors for whichever surfaces you want and insert them into a Dict{Surface,AbstractDetector}. This then is passed as the detector argument to the CSGOpticalSystem constructor.
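Usage would then look something like this (a sketch of the proposed API; ImageDetector and the detectors argument do not exist yet, and the HierarchicalImage constructor arguments are illustrative):

imgdet = ImageDetector("focalplane", detectorsurface, HierarchicalImage{Float64}(512, 512))
detectors = Dict{Surface,AbstractDetector}(detectorsurface => imgdet)
sys = CSGOpticalSystem(assembly, detectors)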
Details of the implementation
The trace function is currently divided into two main parts: (LensAssembly.jl, line 327) and (OpticalSystem.jl, line 100).
The trace function in OpticalSystem.jl calls the trace function in LensAssembly.jl to trace rays through all the optical elements. Then it sees if the last ray to exit the lens assembly intersects the detector and, if it does, accumulates ray power in the HierarchicalImage of the detector (OpticalSystem.jl, lines 121-164).
The trace function in LensAssembly.jl traces rays through all the optical elements in the system and returns the last ray that intersected a surface in the list of optical elements.
New optical systems would be constructed without a distinguished detector object. Instead a dictionary of detector objects would be passed as an argument to the constructor of CSGOpticalSystem. Any surface can be marked as a detector surface by adding it to the detectors dictionary.
The new trace function would trace a ray through the system. When an intersection with a surface was recorded you'd perform this test:
if in(surface, keys(detectors)) # a detector is attached to this surface so update it
    update!(detectors[surface], lenstrace)
end
Most of the code in the trace function (OpticalSystem.jl, lines 121-164) would move inside the other trace function (LensAssembly.jl, line 327) except for the code that explicitly updates the image in the detector. This would move into the user defined update!() function.
Advantages
This solution would allow a user to attach a detector to any surface in the system and to do arbitrary processing to record detector information when a ray intersects that surface. There would no longer be a distinguished last element which would be the detector.
This would also simplify the trace code which is now split into two large pieces spread across two files. The new architecture would make the CSGOpticalSystem trace function just a call to the LensAssembly trace function.
Perhaps we could have a keyword argument to the AxiSymmetricOpticalSystem constructor with a default value that constructs a single-entry detectors dictionary, to make it more convenient for people who are working only with simple conventional lens systems with a single image plane.
It would be possible to add a surface to an optical system that has a null interface so it doesn't affect the ray path or power. This would allow you to make measurements anywhere, not just on optical surfaces defined by your lens csg tree.
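For example, a measurement plane could look something like this (a sketch assuming the existing NullInterface type, which lets rays pass through unaffected):

probe = Rectangle(20.0, 20.0, SVector(0.0, 0.0, 1.0), SVector(0.0, 0.0, -45.0); interface = NullInterface(Float64))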
Does this seem reasonable? I'm happy to iterate a few more times if you'd like but this seems to capture the gist of what you wanted.
This will be a breaking change so we will have to increment the version number. We have another breaking change in the works, the new emitter api. Perhaps the two of these could be combined somehow so we do a single increment.
Combining the changes is feasible (see https://github.com/SciML/ColPrac#unreleased-changes-and--dev). But I also think that it'd be good to get a bug fix for #146 released asap, and this would be difficult to do in parallel. We can talk about it today at our meeting.
@BrianGun This sounds very good.
I have one more suggestion: I think it would make sense if not only the detector but also the surfaces had a name field. The detector keys would be names of surfaces instead of the actual surfaces. The use case I have in mind is modular optical systems, where you store optomechanical subassemblies in functions and later combine these subassemblies into complete systems. If you do the lookup by surface object identity, you have to decide upfront which surfaces may be interesting later on, and return these surfaces as well. If you can give the surfaces a name, the return value does not need to be more complicated. The names could in principle be autogenerated. Many optics designers have some informal naming scheme anyway.
@cdhf I like your naming suggestion. But I'm leery of detectors indexing on user supplied name fields unless we have some way of guaranteeing that the name fields are unique. If they aren't this will cause unpredictable behavior that might be hard to report back to the user in a way that would be easy to understand.
Naming will require additional thought and should be the subject of a separate PR. Maybe we'll decide that the naming should be done through dictionary fields in CSGOpticalSystem that translate strings to objects and objects to strings.
Or maybe it is better to add a name field to all Surface types. We want to be careful how often we do this or Surface will bristle with fields.
Or maybe a better solution is a mapping from strings to surfaces which is external to the optical system struct, along with utility functions that make it easy for the user to set up detectors. There are many possibilities to consider.
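A minimal sketch of that external mapping (hypothetical; imgdet as in the ImageDetector example above):

surfacenames = Dict{String,Surface}("focalplane" => detectorsurface)
detectors = Dict{Surface,AbstractDetector}(surfacenames["focalplane"] => imgdet)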
Also, we don't want to fold too much into one PR or it becomes hard to review and hard for people to follow what the PR is about.
For now could we index on surfaces rather than names of surfaces? You can start the discussion about naming with a new PR and we can work through those issues separately. When we figure out a reliable naming mechanism that works smoothly for the user, then we can make the change.
Once you start coding on the detector issue don't hesitate to ping me if you have questions.
@BrianGun These are valid concerns. I'll open a separate issue for this.
@BrianGun I have a question regarding the update code for the HierarchicalImage in OpticalSystem.jl at lines 157-158:
pixu, pixv = uvtopix(detector(system), uv(detintsct), size(system.detectorimage))
system.detectorimage[pixv, pixu] += D(sourcepower(r))
I think that the sourcepower function gives the power of the ray as it was emitted at the source. If this is correct, shouldn't this be just the actual power of the ray incident on the detector? Like this:
pixu, pixv = uvtopix(detector(system), uv(detintsct), size(system.detectorimage))
system.detectorimage[pixv, pixu] += D(power(r))
Otherwise, any apodization due to e.g. coating effects would not be accounted for.
This is a symptom of a wider issue that needs to be addressed - the way we simulate energy on the detector is through Monte Carlo integration. Rather than attenuating the power of each ray when it hits an interface, we instead kill rays in proportion to the transmission/reflection efficiency of the interface. This means we need to count each ray as having power 1 when it hits the detector, except it may emit with lower power, so we use the source power... this was always a huge ugly hack and should be addressed. What exactly a neat solution looks like I'm not sure. Perhaps it's worth addressing polarisation at the same time?
The ‘power’ of the ray here is the ‘instantaneous power’, i.e. the attenuated power carried with the ray - this is just for visualisation purposes. I think we couldn't use this directly (and not kill any rays) because at a partially reflective surface we don't split rays, meaning the power would essentially be attenuated by the efficiency squared.
Sorry if this isn’t super clear - I can try and clarify anything if needed
@friggog Ok, this makes sense. I did not think of the interaction with the nondeterministic nature of the raytracing method, but of course Zemax is doing the same if raysplitting is disabled.
It seems to me that there is an interesting API design question in the future. If raysplitting is implemented or rays are traced deterministically, one has to use the attenuated power. This can probably be worked around by using another detector implementation, but it would be fairly easy to use this wrong.
@cdhf your observation about using attenuated power if we implement raysplitting is correct.
Looking over the code you submitted in PR #63 exactly this type of error seems to have occurred. I noticed it when first looking over your PR and then got distracted and neglected to fix it before accepting the PR. And there it remained. My bad.
In the code below the power of the ray is attenuated by the transmission or reflection Fresnel coefficient in the case of mode ReflectOrTransmit. In this mode the ray power should not be attenuated because the attenuation is being accounted for by Monte Carlo integration. @cdhf do you want to raise an issue about this and make the fix? Or do you want me to fix it?
Either we need better documentation about how to write custom OpticalInterface types or a rethink of the API. An API rethink is a longer term problem, but better docs could happen any time.
if interfacemode(opticalinterface) == Transmit || (interfacemode(opticalinterface) == ReflectOrTransmit && r < powₜ)
    # refraction
    raydirection = refractedray(nᵢ, nₜ, normal, direction(incidentray))
    raypower = powₜ * incident_pow # ERROR: this attenuation should not happen in ReflectOrTransmit mode
elseif interfacemode(opticalinterface) == Reflect || (interfacemode(opticalinterface) == ReflectOrTransmit && r < (powᵣ + powₜ))
    # reflection
    raypower = powᵣ * incident_pow # ERROR: this attenuation should not happen in ReflectOrTransmit mode
    raydirection = reflectedray(normal, direction(incidentray))
else
@BrianGun I fear I do not understand why this code should be wrong.
In ReflectOrTransmit mode the code behaves as previously, randomly choosing the reflected or transmitted ray according to the reflection/transmission probability. If I understand the explanation of what the power field means correctly, this is what should happen. The power of the LensTrace would give the correct attenuation of a single ray by the optical system, and the integrated power of the detector would be correct by using the source power when updating the detector. In the trace function in LensAssembly.jl the sourcepower field seems to be simply handed over unmodified for each new ray segment. Is this understanding not correct?
@cdhf
Imagine we have a FresnelInterface that transmits 70% of the power and reflects 30%. All of the transmitted power is focused on pixel T and the reflected power is focused on a different pixel R.
Assume we shoot 100 rays.
If the mode is Transmit then 100 of the rays hit pixel T. Each ray has its power reduced by .7 so the power on pixel T is 100*.7 = 70. If the mode is Reflect then 100 of the rays hit pixel R. Each ray has its power reduced by .3 so the power on pixel R is 100*.3 = 30. This is exactly as one would expect.
If the mode is TransmitOrReflect then, on average, 100*.7 = 70 of the rays are transmitted. In the current code their power is also reduced by .7 so the total power on pixel T is 70*.7 = 49. Similarly, on average 100*.3 = 30 of the rays are reflected. Their power is reduced by .3 so the total power on pixel R is 30*.3 = 9.
We expect the power measured at pixels T and R to be consistent regardless of which mode (Transmit, Reflect, TransmitOrReflect) is used. However with the code as it is, not only are the absolute values different but even the ratio of transmitted to reflected power is different:
TransmitOrReflect case: transmitted power/reflected power = 49/9 = 5.44
Transmit/Reflect case: (transmitted power in Transmit case)/(reflected power in Reflect case) = 70/30 = 2.33
The reason for this difference is that in TransmitOrReflect mode the Fresnel attenuation is already accounted for by the stochastic choice of taking the reflected or transmitted path. Multiplying by the Fresnel attenuation factor on top of that squares the attenuation (notice that in the example above 5.44 = 2.33^2).
To correct this the ray power should not be scaled by the Fresnel attenuation terms in the TransmitOrReflect case.
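A quick numeric check of the above (plain arithmetic, not OpticSim code):

t, r = 0.7, 0.3
n = 100
n * t, n * r # deterministic Transmit/Reflect modes: (70.0, 30.0)
n * t * t, n * r * r # current TransmitOrReflect code: (49.0, 9.0)
(n * t * t) / (n * r * r) # 5.44 ≈ (t / r)^2, the squared ratio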
@BrianGun I understand this, but from friggog's comment above I understood that this is handled by the detector update code in OpticalSystem.jl, which ignores the instantaneous ray power completely and uses the power at the source:
pixu, pixv = uvtopix(detector(system), uv(detintsct), size(system.detectorimage))
system.detectorimage[pixv, pixu] += D(sourcepower(r))
Maybe @friggog can comment?
I only had chance to read this quickly but I think the error is actually the inverse effect to what @BrianGun is saying above?
The power attribute of the ray represents the "instantaneous" power and is not accumulated on the detector (as quoted by @cdhf), so there is no squaring effect of the attenuation as far as I can tell. The power attribute is used for visualization only, so it is useful for it to be attenuated by the reflectance/transmission at the surface, though strictly it is not correct according to how we accumulate power at detectors. Tbh I think the nicest solution is to either:
- implement ray splitting and have power carried at the ray, attenuated at each interface and accumulated at the detector
- get rid of the power attribute completely to make it clearer that everything is happening through Monte Carlo integration and proportional rejection of rays; this should probably even extend to emitters so we can get rid of sourcepower too, just having the emitter emit rays in directions distributed according to the power profile
I think the issue with the new code is that neither the ratio of power nor the total power on either detector pixel will be consistent with ReflectOrTransmit mode, as Brian identified - but for the opposite reason. In Reflect-only mode all rays are reflected, not a number proportional to the reflectance, and the Fresnel effect is not incorporated; likewise for transmission. I.e. no rays are thrown away, meaning the Monte Carlo integration to get accurate power at the detector is broken.
Possibly the conditions should be (excuse python syntax):
if r < powₜ and interfacemode(opticalinterface) in [ReflectOrTransmit, Transmit]:
# refract
elseif r < (powᵣ + powₜ) and interfacemode(opticalinterface) in [ReflectOrTransmit, Reflect]:
# reflect
else:
# stop
This way the same behaviour as ReflectOrTransmit mode is maintained, but rays which don't conform to the desired operation (transmit or reflect) are just thrown away
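In Julia that would be roughly (a sketch using the variable names from the snippets above):

if r < powₜ && interfacemode(opticalinterface) in (ReflectOrTransmit, Transmit)
    # refract, leaving the ray power unchanged
elseif r < powᵣ + powₜ && interfacemode(opticalinterface) in (ReflectOrTransmit, Reflect)
    # reflect, leaving the ray power unchanged
else
    # terminate the ray
end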
Perhaps my understanding here is wrong, but based on the snippets above I think this makes sense?
Definitely worth reiterating that refactoring power more comprehensively (maybe one of my two suggestions above) should be high priority. It really shouldn't be this hard for all of us to understand what is happening in such a simple case...
@cdhf thanks for pointing out the use of sourcepower for updating the image. This looks incorrect for the new version of the code that added support for deterministic raytracing in PR #101.
Prior to this the assumption was that all surfaces used Monte Carlo sampling to account for transmission/reflection coefficients so it was approximately correct to use sourcepower rather than instantaneous power. However the current code doesn't include the generally small effect of absorption in the glass. This seems to be an independent error.
In the current code sourcepower remains unchanged from whatever it was initially set to before ray tracing begins. I added an assertion to the OpticalRay constructors to check this and for your draw_stackedbeamsplitters example sourcepower is always 1.0. This is clearly not the correct result to record on the image detector.
Since PR #101 one can have a mix of deterministic and non-deterministic tracing. Now it is necessary to use instantaneous power for updating the detector image.
For Monte Carlo sampling (ReflectOrTransmit mode) the instantaneous power should include the effect of absorption but not the Fresnel transmission/reflection coefficients, since those are accounted for by Monte Carlo integration.
For deterministic sampling the instantaneous power should include both absorption and transmission/reflection coefficient contributions.
If processintersection in Fresnel.jl is modified to leave power unchanged for ReflectOrTransmit mode, and to include the Fresnel attenuation coefficients for Transmit and Reflect modes, then the current code should be correct once the update to the image detector is changed to use instantaneous power rather than source power.
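Roughly, the corrected power update in processintersection would look like this (a sketch using powₜ, powᵣ and incident_pow from the earlier snippet; absorption is applied separately during propagation through the glass):

if interfacemode(opticalinterface) == ReflectOrTransmit
    raypower = incident_pow # Fresnel coefficients are accounted for by the Monte Carlo sampling
elseif interfacemode(opticalinterface) == Transmit
    raypower = powₜ * incident_pow # deterministic: apply the Fresnel transmission coefficient
else # Reflect
    raypower = powᵣ * incident_pow # deterministic: apply the Fresnel reflection coefficient
end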
We should probably get rid of the sourcepower field in OpticalRay. It doesn't seem to have a purpose anymore.
@BrianGun This sounds like the right way to do it now.
It still bothers me that it is so easy to use the machinery wrong. In particular, if you do deterministic raytracing, you probably do trace single rays through the system for some custom analysis. If there is only one Monte Carlo traced interface in the system, the power recorded for your analysis is wrong, and in many cases you may not even notice it. But I don't have a good idea of how to do it better at the moment.
@cdhf could you expand on the example where you think the power recorded for the system will be wrong? If we make the suggested changes then mixing deterministic and MonteCarlo surfaces should generate correct power results. Could you create an example where this isn't true so we can figure out how to make it work correctly all the time?
@BrianGun I think the problem is not that the recorded power is wrong, but that it is easy to use it wrong.
Thinking in more detail about this, it seems to me that there are actually three cases to distinguish:
- Purely Transmit/Reflect modes as a replacement for deterministic, sequential raytracing. The kind of analysis done in these modes will probably not rely on any kind of accumulating detector, instead using the raw ray data. An example might be calculating wavefront error and apodization and then using both of these simulation results in some kind of (partially) coherent image simulation. I would just trace a bunch of rays through the system, then trace them back to the exit pupil and retrieve the OPD difference to the reference sphere and the relative power distribution over the pupil. These two can then be used to do the diffraction step to the image plane. Since you use the raw ray data without any eventually converging accumulating detector, the power information for each ray must be correct.
- ReflectOrTransmit nondeterministic mode, which I think only makes sense in the context of using accumulating detectors for analysis. As discussed, in this mode the power of each individual ray cannot be correct.
- A hypothetical ReflectAndTransmit raysplitting mode, which would be deterministic, and again each individual power record for each ray must be correct. But the difference with 1 is that you would use this mode for the same type of analysis as 2.
Now, when it comes to mixing different interface modes in one system, it gets somewhat complicated. When doing the type of analysis described in 1, you probably want neither ReflectOrTransmit (wrong per-ray power) nor ReflectAndTransmit (too much work) interfaces in your system. But I can imagine cases where it makes sense to mix ReflectOrTransmit and ReflectAndTransmit, for example simulating systems with very tight requirements on straylight reduction. These different modes are to me just ways to trade off between memory and CPU requirements.
So with your proposed changes, simulations are always correct if the person who creates a lens assembly in a function ensures that all surfaces in all configurations of the assembly are consistently of Reflect/Transmit type or ReflectOrTransmit/ReflectAndTransmit type. And the one doing the simulations, who might be another person, has to ensure that the lens assembly is created in the correct configuration for the analysis that is to be performed.
@cdhf Changing interface modes is inconvenient now because we don't have helper functions that make it easy to recreate a system with different interface modes, nor do we have a UI to allow users to set interface modes interactively. We are working right now on the latter. This will at least make it easier for the two hypothetical users in your example to communicate the nature of the system and analysis to each other.
However, it seems that even for case 1, in the limit the ray power determined by summing a mix of deterministic and stochastic ray samples gives the correct power.
When you trace rays back to the exit pupil to compute power distribution over the pupil you will get point samples on the pupil which you then explicitly or implicitly interpolate to compute power over an area.
Using Reflect/Transmit mode gives you precise power for a single point sample but you still need many samples spread across the exit pupil to get an accurate measure of relative power, because it is the spatial distribution of rays that determines the power, in many cases to a much greater extent than the power of each ray.
Let's assume we have an optical system with good antireflection coatings and glass of high transmission, no partially mirrored surfaces. In this case the vast majority of the power hitting the sensor will be transmitted and the power of all transmitted rays will be almost the same. The primary factor determining power/unit area on the exit pupil will be the spatial distribution of rays across the exit pupil, not the power of each ray.
In this case there will be only a small difference between ReflectOrTransmit and Reflect/Transmit: all the ray powers of the transmitted rays that make it to the sensor are nearly the same in either case, because they will all be close to 1. The spatial distribution will also be nearly the same, since most rays will transmit.
When you have partially mirrored surfaces then you need more stochastic samples to approximate power accurately than you might with just Reflect/Transmit samples. If you send more samples you should converge to the correct power.
The answer should not be wrong if you mix stochastic and deterministic tracing. It might be noisy but it's not wrong. If you continue sending more samples you will eventually converge to the correct result. It comes down to choosing your sampling rate correctly.
@BrianGun You are right, if you use enough samples, you will converge to the correct result. On the other hand, the primary motivation to trace rays deterministically in my view is to avoid the need for stochastic sampling completely. For example, if I am looking at apodization in a system with typical lenses and coatings, I can assume smoothness over the pupil and rotational symmetry of the coatings, and I can get away with tracing only a handful of rays.
But if the problem as you wrote is communicating intent between two users, maybe a short term solution is easy. Maybe we can just add another power field that captures the effect of all Fresnel coefficients regardless of interface mode, in addition to the one that ignores them for ReflectOrTransmit mode. Then, the user doing the analysis can actually check that the assumptions of the analysis are correct.
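Something like this, perhaps (illustrative field names only, following the abbreviated struct style used above):

struct OpticalRay{T}
    ...
    power::T # skips Fresnel coefficients at ReflectOrTransmit interfaces (Monte Carlo convention)
    physicalpower::T # always attenuated by all Fresnel coefficients, for per-ray analysis
end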
@BrianGun I've started to implement multiple detectors now, and I am at the point where I need to lookup detectors by surface. My idea was to add a new field surface_id to the Intersection struct, and then add constructors to fill that field using objectid like this:
function Intersection(α::T, point::SVector{N,T}, normal::SVector{N,T}, u::T, v::T, surface::Surface{T}, interface::AllOpticalInterfaces{T}; flippednormal = false) where {T<:Real,N}
    new{T,N}(α, point, normalize(normal), u, v, objectid(surface), interface, flippednormal)
end
The optical system constructor would store a Dict of surface_id to detectors, and the LensAssembly trace function would look up the detector from the intersection object and update it, if it exists.
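The lookup in the trace function would then be something like this (sketch; names are illustrative):

det = get(system.detectors, surface_id(intsct), nothing)
if det !== nothing
    update!(det, lenstrace)
end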
Does this seem to be a sensible way to do this? I found one usage of the Intersection struct in BoundingBox, which isn't a Surface at all. I'm not sure how intersections with BoundingBox are used. Would using objectid(nothing) for the surface_id be reasonable in this case?
Great to hear about progress on this! It sounds ok to me but @BrianGun will be able to give better insight I’m sure. Possibly the detectors in the optical system wouldn’t be accessible from the lens assembly though?
Regarding the bounding box intersection, we use this to speed up ray tracing of complex CSG objects by first checking intersection with the bounding box of each child of each binary node and discarding that subtree if there is no hit. I don't think the result of the intersection is ever propagated (it's just used for this binary check). Based on this your suggestion to just leave the objectid blank for these cases seems reasonable.
@cdhf sorry it's taken me so long to reply. Things are temporarily insanely busy at work.
Here's my interpretation of what you are proposing:
When a surface is created it is assigned a unique id. This id is returned by the function objectid which is used to generate the index to the dictionary which maps surfaces to detectors.
Every surface type would have to be modified to contain an id field and all the constructors would change to generate this id field automatically when the surface is constructed, preferably in a thread safe way. I believe the uuid4() function from the UUIDs standard library is thread safe so it might be a good way to generate ids.
The Intersection type would also be modified to add an id field.
We would also have to update the Surface type doc string to include the new objectid requirement.
Is this what you are thinking? Quite a few changes but not unmanageable.
When you suggested using objectid(nothing) for an intersection with a BoundingBox what would objectid return? Would it return 0 or a negative number or something else that indicated no object?
@BrianGun Actually I wanted to use the builtin Base.objectid function, but unfortunately since then I've found out that I've misunderstood the semantics of this function. So all you have written applies.
The semantics I want is that of a GUID. Surfaces that would hash to the same value would be 100% a problem. So I agree that the functions from UUIDs would be good candidates to create unique tags. For a bounding box one could either simply call uuid4(), since the value should be unique every time, or one could make the surface_id field of the Intersection struct a Union{UUID, Nothing} and return nothing.
So in short, the proposed solution is to add a surface_id::UUID field to every Surface type, fill it in the constructor, and return that surface_id in the Intersection struct.
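On a hypothetical surface type the pattern would be (a sketch; uuid4 comes from the UUIDs standard library):

using UUIDs

struct ExampleSurface{T} <: Surface{T}
    halfsizeu::T
    halfsizev::T
    surface_id::UUID
end

ExampleSurface(u::T, v::T) where {T<:Real} = ExampleSurface{T}(u, v, uuid4())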