
Random Ray Transport

jtramm opened this issue 2 years ago • 11 comments

Introduction of Basic Random Ray Solver Mode in OpenMC

What is Random Ray?

Random ray is a stochastic transport method, closely related to the deterministic Method of Characteristics (MOC). Rather than representing a single neutron, as a particle does in Monte Carlo, each ray represents a characteristic line through the reactor along which the transport equation can be written as an ordinary differential equation and solved analytically (though discretization is still required in energy space, making it a multigroup method). The behavior of the governing transport equation can be approximated by solving along many characteristic tracks (rays) through the reactor. Unlike particles in Monte Carlo, rays in random ray or MOC are not affected by the material characteristics of the simulated problem -- rays are selected so as to explore the full simulation problem with a statistically uniform distribution in space and angle.
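Concretely, the equation solved along each characteristic takes the standard MOC form (sketched here in common MOC notation, with the flat source approximation this solver uses; this is a generic statement of the method, not copied from OpenMC's documentation). For energy group $g$, with total macroscopic cross section $\Sigma_{t,g}$ and source $Q_g$:

```math
\frac{d\psi_g}{ds} + \Sigma_{t,g}(s)\,\psi_g(s) = Q_g(s)
```

In a flat source region $i$, this ODE has the analytic solution

```math
\psi_g(s) = \frac{Q_{i,g}}{\Sigma_{t,i,g}} + \left(\psi_g(0) - \frac{Q_{i,g}}{\Sigma_{t,i,g}}\right) e^{-\Sigma_{t,i,g}\,s}
```

which is the attenuation law each ray integrates as it crosses successive regions.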

An example of the transport process, showing two rays being processed, is given below:

https://github.com/openmc-dev/openmc/assets/1009059/087d5a2e-6c6f-4f07-b6d7-5c81aa59b3fb

Where can I read more about this method?

The papers below have some good background information for those who are interested. The third one may make a good starting point, as it introduces the method in the context of MC and compares/contrasts the two methods.

Why is it being added to OpenMC?

There are a few good reasons:

  1. The random ray solver complements the capabilities of MC nicely. One area where MC struggles is maintaining accuracy in regions of low physical particle flux. Random ray, on the other hand, has approximately even variance throughout the entire global simulation domain, such that areas of low neutron flux are no less well known than areas of high neutron flux. Absent weight windows in MC, random ray can be several orders of magnitude faster than multigroup Monte Carlo in classes of problems where areas of low physical neutron flux need to be resolved. While MC uncertainty can be greatly improved with variance reduction techniques, they add some user complexity, and weight windows can often be expensive to generate via MC transport alone (e.g., MAGIC). While not yet implemented in this PR, I also plan to add the capability for the random ray solver to generate fast, high-quality weight windows via FW-CADIS that can then be used to accelerate convergence in MC. Early work in the field has shown significant speedup in weight window generation and weight window quality with random ray and FW-CADIS as compared to MAGIC.

  2. In practical implementation terms, random ray is mechanically very similar to how Monte Carlo works, in terms of the process of ray tracing on CSG geometry and handling stochastic convergence, etc. In the original 1972 paper by Askew that introduces MOC (which random ray is a variant of), he stated:

One of the features of the method proposed [MoC] is that ... the tracking process needed to perform this operation is common to the proposed method ... and to Monte Carlo methods. Thus a single tracking routine capable of recognizing a geometric arrangement could be utilized to service all types of solution, choice being made depending which was more appropriate to the problem size and required accuracy.

This prediction holds up -- the additional requirements needed in OpenMC to handle random ray transport turned out to be fairly small, only in the range of 800 lines of code for the first implementation I did. Additionally, most of the solver was able to be added in a neatly "silo'd" fashion that doesn't complicate the MC portion of OpenMC. The current PR has increased in size to around 2400 lines of code, but much of that is due to the new example, regression test, and other boilerplate interface code etc. Thus, for relatively few additional lines of code, users will have capabilities to more efficiently handle certain types of simulation problems that are more computationally challenging to solve with MC, and will definitely have a faster transport method for generating weight windows.

  3. It amortizes the code complexity in OpenMC for representing multigroup cross sections. There is a significant amount of interface code, documentation, and complexity in allowing OpenMC to generate and use multigroup XS data in its MGMC mode. Random ray allows the same multigroup data to be used, making full reuse of these existing capabilities.

Why not make this a standalone fork of OpenMC instead of adding it in?

This might have also been a reasonable approach. It could be argued that the new random ray solver creates bloat and might make it harder to maintain the repo. However, as the new solver is very small, and is written in a silo'd approach, there is very minimal added code complexity for anyone that is only concerned with developing MC features. As only a handful of very small changes are made to the main body of the repo (with the rest sequestered in the random_ray source folder), my guess is most developers won't even be aware that the solver was added unless they go looking for the changes.

Another key argument in favor of the random ray inclusion directly into OpenMC is to (eventually) enable relatively automatic generation of good weight windows, which are a key necessity in many fusion neutronics simulation problems. Requiring that weight windows be generated in an external code and maintaining the interface between the codes is likely much more work (both for users and developers) as compared to integrating weight window generation in the main OpenMC branch.

Why not add random ray to OpenMOC instead?

I had considered doing this, but I found it was a great deal easier to add this to OpenMC than to OpenMOC. The reason is that OpenMC is both inherently 3D and inherently stochastic, whereas the 3D extruded-geometry ray tracing approach in OpenMOC would raise a lot of challenges for the on-the-fly ray tracing that random ray requires. Additionally, it would take significant work in OpenMOC to handle inactive vs. active batches, accumulate unknowns with uncertainties, report standard deviations, etc., all of which we get for free in OpenMC.

How do I use the random ray mode?

Generally the inputs are all just about identical to what you'd see if doing a multigroup MC run. I've added a new pincell example to the repo to demonstrate its basic usage. The main changes you'll see are when configuring the settings for a run:

# Instantiate a Settings object, set all runtime parameters, and export to XML
settings = openmc.Settings()
settings.energy_mode = "multi-group"
settings.batches = 600
settings.inactive = 300
settings.particles = 50
settings.solver_type = 'random ray'
settings.random_ray_distance_inactive = 40.0
settings.random_ray_distance_active = 400.0
  • particles setting now controls the number of rays
  • solver_type is a new flag that can be used to specify random ray mode (defaults to Monte Carlo)
  • random_ray_distance_inactive is the inactive (dead zone) distance that each ray travels before beginning integration (so as to build an estimate of the ray's starting angular flux on-the-fly).
  • random_ray_distance_active is the total active distance that each ray travels while performing integration.

(If the dead zone and active length ideas are new to you, this paper has more info on these random ray concepts.)

That is about it in terms of the interface changes required to use the random ray mode. Beyond this, the only other major change you might need to make when converting an MGMC input deck into a random ray input deck is to subdivide cells into smaller regions, so as to reduce the error associated with the flat source approximation used in this solver. In the future, we may add higher-order sources that will likely alleviate some of these requirements. We are also planning to add the ability to automatically overlay a Cartesian mesh over the global geometry to more easily subdivide cells into flat source regions, but that is left for a future PR. The animation towards the top of the repo gives an idea of the level of subdivision typically used in light water reactor problems.
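As a sketch of the kind of subdivision meant here (the radii and the material names `uo2` and `water` are hypothetical placeholders, not taken from the PR), a fuel pincell might be split into radial rings with concentric cylinders:

```python
import openmc

# Hypothetical materials -- in practice these would carry macroscopic MGXS data.
uo2 = openmc.Material(name='uo2')
water = openmc.Material(name='water')

# Concentric cylinders defining radial rings (radii are illustrative).
ring1 = openmc.ZCylinder(r=0.30)
ring2 = openmc.ZCylinder(r=0.42)
fuel_or = openmc.ZCylinder(r=0.54)

# Each ring becomes its own cell, and therefore its own flat source region.
fuel_inner = openmc.Cell(fill=uo2, region=-ring1)
fuel_mid = openmc.Cell(fill=uo2, region=+ring1 & -ring2)
fuel_outer = openmc.Cell(fill=uo2, region=+ring2 & -fuel_or)
moderator = openmc.Cell(fill=water, region=+fuel_or)
```

Finer subdivision reduces flat-source error at the cost of more unknowns; rings of roughly equal volume are a common choice.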

How are tallies handled?

Most tallies, filters, and scores that you would expect to work with a multigroup solver like random ray should work. E.g., you can define 3D mesh tallies with energy filters and flux, fission, and nu-fission scores, etc. There are some restrictions though. For starters, it is assumed that all filter mesh boundaries will conform to physical surface boundaries (or lattice boundaries) in the simulation geometry. It is acceptable for multiple cells (FSRs) to be contained within a filter mesh cell (e.g., pincell-level or assembly-level tallies should work), but it is currently left as undefined behavior if a single simulation cell is able to score to multiple filter mesh cells. In the future, we plan to add the capability to fully support mesh tallies, but for now this restriction needs to be respected.
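For instance, a pincell-level mesh tally whose boundaries line up with the lattice pitch might look like the following sketch (the 17x17 layout and 21.42 cm width match a typical C5G7 assembly and are illustrative assumptions, not anything mandated by the PR):

```python
import openmc

# 17x17 mesh aligned with the pincell lattice, so that each mesh cell
# contains whole FSRs (the conformality restriction described above).
mesh = openmc.RegularMesh()
mesh.dimension = (17, 17)
mesh.lower_left = (-10.71, -10.71)
mesh.upper_right = (10.71, 10.71)

tally = openmc.Tally(name='pin powers')
tally.filters = [openmc.MeshFilter(mesh),
                 openmc.EnergyFilter([0.0, 0.625, 2.0e7])]  # thermal/fast split
tally.scores = ['flux', 'fission', 'nu-fission']

openmc.Tallies([tally]).export_to_xml()
```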

How is plotting handled?

Visualization of geometry via openmc --plot is handled without any modifications, as the random ray solver uses the same geometry as in MC. However, any voxel plots that are defined will also be output at the end of the simulation as .vtk files that can be directly read and plotted with ParaView. The purpose of this secondary (post-simulation) plotting round is to output flat source region (FSR), power, and multigroup flux spectrum data on the voxel grid along with material information.
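A voxel plot that triggers this post-simulation .vtk output is defined the same way as a standard OpenMC voxel plot (the extents and resolution below are placeholders, not values from the PR):

```python
import openmc

plot = openmc.Plot()
plot.type = 'voxel'
plot.origin = (0.0, 0.0, 0.0)
plot.width = (21.42, 21.42, 1.0)  # region to cover (illustrative)
plot.pixels = (200, 200, 1)       # voxel resolution

openmc.Plots([plot]).export_to_xml()
```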

Why is plotting done like this?

In fixed source MC, by default the only thing we know after a simulation is the escape fraction. In a k-eigenvalue solve, by default all we know is the eigenvalue. Spatial flux information is left entirely up to the user to record, and fine-grained spatial meshes are often considered costly or unnecessary, so it makes no sense in MC mode to attempt to plot any spatial flux or power info by default. Conversely, random ray functions by estimating the multigroup source and flux spectra in every (fairly small) cell each iteration. Thus, in random ray, in both fixed source and eigenvalue simulations, we always finish the simulation with a flux estimate for all areas. As such, it is much more common in MOC and other deterministic codes to plot in situ, since spatial flux information is always available. Thus, OpenMC quickly writes out .vtk files at the end. In the future, all FSR data will be made available in the statepoint file, so that users will still have the ability to plot/manipulate it on the Python end, although statepoint support is not yet available.

Why define a new plotter from scratch? Why not re-use the existing OpenMC plotter?

This is a fair point -- as the PR is being reviewed, I may try to see if there is an elegant way of making use of the existing plotter. The main issue that caused me to write my own is that the current voxel plotting is focused only on plotting geometry, and is only run at the beginning of the simulation, whereas for random ray we want to plot the spatial flux/power data after the simulation. In the interest of getting something done quickly, I simply reused a binary VTK plotter that I'd written previously, and found that it worked well and got the job done in ~150 lines of code.

What are the current limitations to the solver?

There are a bunch, currently! This is just the first basic implementation, more features and support will be added in subsequent PRs. However, current limitations are:

  1. Mesh tallies currently have some limitations to them, as described earlier.

  2. Only the following scores are currently supported: flux, total, fission, nu_fission, and events.

  3. Only the following spatial tally filters are supported: cell, cell instance, distribcell, energy, material, mesh, and universe.

  4. MGXS data must be isotropic and isothermal.

  5. Fixed source transport is not yet supported, but will be soon.

  6. You must always specify a single uniform SpatialBox isotropic source term for random ray, so that the solver knows where to sample ray starting points/angles from. The source term must not be biased: e.g., it should not be limited to fissionable regions, nor should it be a point source. It should fill the entire spatial simulation domain.

  7. Only voxel plots are used for generating data outputs at the end of the simulation, though all normal pre-simulation plots in plots.xml are supported.

  8. MPI via domain replication is supported. However, domain replication does not scale well in random ray due to the need to all-reduce the full scalar flux vector after each power iteration. Unlike in MC, where ranks only need to exchange a few fission sites with their neighbors to even the load, random ray requires a full all-reduction of the scalar flux, which is not scalable past a node or two. Domain replication was added for now, though, as it can be useful for very small problems (e.g., a 2D pincell) that have very few cells, causing memory contention issues. In these cases, domain replication can greatly speed up the runtime, so it does have its uses. Longer term, we plan on considering the addition of domain decomposition (for both MC and random ray) so as to make the random ray solver more scalable.

  9. User tallies will be written to statepoint files (and tallies.out, if enabled) as in a normal MC simulation. However, statepoints do not contain any other random ray state information as of yet (e.g., no cell-wise flux information or iteration source information is stored in the statepoint). As such, restart mode is not yet supported, and FSR flux/power information is not output anywhere yet. Results therefore are left to the contents of any tallies and any .vtk voxel plots.
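Following limitation 6 above, the required uniform source might be defined as follows (box extents are placeholders and must cover the full simulation domain; this sketch constructs a fresh Settings object for self-containment):

```python
import openmc

# Uniform, isotropic box source spanning the entire geometry.
# Note that only_fissionable must be False so ray sampling is unbiased.
lower_left = (-10.71, -10.71, -1.0)
upper_right = (10.71, 10.71, 1.0)
uniform_dist = openmc.stats.Box(lower_left, upper_right, only_fissionable=False)

settings = openmc.Settings()
settings.source = openmc.IndependentSource(space=uniform_dist)
```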

What will be added in subsequent PRs? Is there a roadmap?

This PR is generally a "minimum viable product", with the basics of random ray added. A more robust feature set will be added in subsequent PRs over the next few months. The strategy of releasing a smaller basic PR first is to allow the community to try the solver out and give feedback before the layout of the solver starts to solidify as more advanced features are rolled out. Additionally, the smaller PR size makes it easier to review than if the branch were held until "feature complete." Nonetheless, the current PR does have enough features to accomplish useful work. For instance, 2D C5G7 has been run and validated:

[Image: 2D C5G7 thermal flux distribution]

The main features slated for the next few months are:

  • Shannon entropy reporting for evaluating inactive batch convergence.
  • Cylindrical lattice definition for making FSR subdivision of pincells much easier and more performant.
  • Improvements in mesh tallying capabilities so as to fully support all mesh types, and remove restriction that tally meshes conform to surfaces in the geometry.
  • Performance optimization. Vectorization over energy groups at the inner loop may give significant speedup. Additionally, access of MGXS data (a frequent operation) currently goes directly through the MGMC interface, which is not highly efficient for the random ray use case. In the future, I may either optimize the MGMC interface or copy the data into a data structure better suited to random ray's access pattern.
  • Fixed source transport.
  • Automatic mesh overlay to allow for easy subdivision of FSRs. This is likely to be a key feature for making fusion simulation problems that use DAGMC work with random ray.
  • Statepoint storage of all flux/source data for postprocessing and for restarting.
  • Adjoint solver
  • In-depth theory and methods documentation.
  • Weight window generation via FW-CADIS
  • Full python interface

A few longer term stretch goals for random ray are also:

  • Domain decomposition (for MC as well). Currently only MPI domain replication is supported.
  • Streamlining of MGXS generation/usage workflow.

What is the runtime performance like?

The implementation in the current PR is lacking a few optimizations, but runtime info for random ray in OpenMC can be found in this paper. I'll plan on appending some formal C5G7 accuracy and timing results for this specific PR as a comment in the next few days.

Checklist

  • [x] I have performed a self-review of my own code
  • [x] I have run clang-format (version 15) on any C++ source files (if applicable)
  • [x] I have followed the style guidelines for Python source files (if applicable)
  • [x] I have made corresponding changes to the documentation (if applicable)
  • [x] I have added tests that prove my fix is effective or that my feature works (if applicable)

jtramm avatar Jan 09 '24 06:01 jtramm

This is awesome. CSG is extremely buggy in OpenMOC anyway from my experience, so this will be a useful tool for sure! I'm definitely planning on using this when it's ready.

Requests

Right now I only have one request. Can we merge #2744 and rebase this on it to have the RandomRay inherit from Geometron? IMO we should have our overarching simulation logic abstracted a bit better so that Particle doesn't have to be a mega class, but this PR probably isn't the place for that. Looking at what @stchaker has to do for a tentative transient simulation mode, this would really clean stuff up a lot long-term.

Some other thoughts

Plotting at user-defined times

Since we are defining our geometry in a way entirely different from most other simulation codes, it makes sense to me that we'd have some specialized plotting code to handle CSG. That was sort of my motivation for the recent ray-based plotters I've made PRs for.

I just wanted to say: we could add something like MOOSE's ExecuteOn to the PlottableInterface base class. Then we'd be able to straightforwardly control plotting on, for example, each power iteration, only the end of the calculation, at each depletion step, etc. Even if it's not as powerful as making voxel files to view in some fancy GUI plotter, IMO this is really useful for having an efficient workflow where you integrate it under one roof.

The possibility of using deterministic tracks in the future

I personally am a fan of fixed track sets, which make parameter sweeps fast to carry out: you don't have to re-accumulate statistics and can instead just re-converge from the last step's flux results. Not that it has to happen in this PR, but it shouldn't be too bad to abstract out the track generation method. In the case of pure vacuum BCs, deterministic track setup can be very, very easy. While it may seem that random ray beats deterministic product angular quadrature approaches to 3D MOC, IMO this comes down to product quadratures generally being very inefficient for 3D. I've not seen any reasonable non-product quadrature get applied to 3D MOC problems before. And the level symmetric quadratures suck bigtime, so I'm not talking about those either. In the presence of good CMFD acceleration, it seems to be an open question whether RR is better as a production tool for 3D reactor analysis. So, what I'm advocating here is that designing the code to plan for an abstracted track generation method that can work with either deterministic or random track creation might be worthwhile, both from a research and a practical point of view.

Domain decomposition

Secondly, concerning domain decomposition: I've been thinking about how to do this cleanly in OpenMC. Leaving aside the problem of decomposition of spatial filters over domains, I think this would be straightforward to add to OpenMC in a few steps that could be split into their own PRs. It also would enable a fully general decomposition that doesn't rely on an hexagonal or Cartesian grouping of assemblies, for example.

  1. Put everything in the openmc::model namespace into a single global variable, something like Model that has a variable called Geometry within it. Then allow different MPI ranks to read different geometry.xml files. You'd create a mapping of MPI rank ID to geometry.xml files in the settings.xml file, or perhaps in another file entirely.
  2. Add a surface boundary condition called InterprocessBC that queues arriving particles for sending to another rank. The boundary condition would specify the other MPI rank's ID to send to.
  2.5. Periodically, some communication is carried out to revive particles out of the send/receive queues that InterprocessBC has created. In event mode this is as straightforward as treating communication as its own event, if we want a blocking approach. In history mode, we could probably come up with some nonblocking way to do this. Threads would pull new particles either from the fission bank or from particles sent from other ranks to revive.
  3. The surfaces and their respective BCs can be either set up manually or through some simple Cartesian decomposition that the python API would ideally set up for you. The python API would ideally also handle splitting spatial filters across domains with some local-to-global mapping.

So, the aforementioned steps could be carried out as three separate PRs, with steps 2 and 2.5 combined into one. There are a few advantages to what I'm proposing. Firstly, it's a fully general decomposition. Because the surface BC approach does not require that the sending surface coincide with a surface on the receiving end, this would enable both overlapping and non-overlapping decompositions. Secondly, we can handle the particle communication in one PR without touching tallies at all: I'm pretty sure we should be able to maintain eigenvalue reproducibility when splitting in this way, since we're already doing a parallel reduce on contributions to k over the different ranks. I guess making sure the fission bank sites stay in the right domain would require some nontrivial changes to the parallel fission bank algorithm, though. Thirdly, this BC-based approach to communication could be used with any ray-based solver. Lastly, since tally decomposition might be the most complicated part, it can be handled last, separately from figuring out how to communicate particles or rays.

gridley avatar Jan 09 '24 21:01 gridley

Can we merge #2744 and rebase this on it to have the RandomRay inherit from Geometron?

Good idea -- apologies, I've wanted to give that one a review but have been slammed with so many other things. I'll try to prioritize that in the next few days and help to get that one merged.

paulromano avatar Jan 09 '24 21:01 paulromano

Agreed on getting #2744 in first. I'll plan on updating this PR to use the Geometron once that gets rolled in, as it should help with performance and memory usage.

Thanks @gridley for the other thoughts:

  • It would definitely be nice to be able to generate data plots at user specified times, and would be cool to use projection plots to show data as well. Hopefully we'll find the time to add that stuff in!
  • Deterministic tracks would also be really nice to have. I almost named the RandomRay object in this PR as CharacteristicTrack to allow for more natural extension there, haha. For the present "minimal" PR though, I think it may not be worth adding in the extra abstraction layer that accounts for a deterministic solver (given changes in convergence etc), but is something that could potentially be added in as a separate PR down the line.
  • Domain decomposition is definitely something that would be great to make work for both MC and random ray. I haven't started work on that yet, but if you're interested perhaps we could work together on getting that in. The ideas you're proposing sound like a reasonable strategy.

jtramm avatar Jan 10 '24 19:01 jtramm

> So, if a problem has any vacuum BCs, why do we even need an inactive ray length? Why don't we just start all rays headed in from the vacuum BC in order to avoid that? I suppose generating rays coming in like that is a little nontrivial.
>
> However, in a problem bounded by a user-defined surface, it might be nice to allow that capability.

This is a great idea! I had actually tried this when originally experimenting with random ray, as it seemed like an easy win. While it sounded great on paper, it ended up showing significant bias in the simulation. I wondered for a while why this was, but figured out that you end up with far higher ray density in regions near the vacuum boundary, and much lower ray density elsewhere. The numerics demand that the rays be of statistically uniform density in space and angle, so biasing the sampling process like this ended up violating that.

That said, it is OK, I think, to sample on the boundaries if all boundaries are vacuum, provided that rays are then transported from vacuum to vacuum instead of for a set length. However, most reactors tend to have symmetry, and as such reflective boundaries are very common, so it's maybe not worth adding special treatment for the all-vacuum BC case. Even if added, it would likely only speed things up by the fraction of time spent in the inactive length, which usually gets set to around 10%-20% of the active length. Section 2.4 of this paper has more discussion on this stuff if you're interested.
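The uniform-density requirement being discussed corresponds to sampling ray starting points uniformly in the problem volume and directions isotropically. A minimal illustrative sketch (not OpenMC's internal implementation):

```python
import math
import random

def sample_ray(lower_left, upper_right, rng=random):
    """Sample a ray start uniformly in a box, with an isotropic direction."""
    # Position: uniform in each coordinate of the bounding box
    r = tuple(lo + rng.random() * (hi - lo)
              for lo, hi in zip(lower_left, upper_right))
    # Direction: uniform in mu = cos(theta) and in azimuthal angle phi
    mu = 2.0 * rng.random() - 1.0
    phi = 2.0 * math.pi * rng.random()
    sin_theta = math.sqrt(1.0 - mu * mu)
    omega = (sin_theta * math.cos(phi), sin_theta * math.sin(phi), mu)
    return r, omega
```

Starting all rays on a vacuum boundary instead concentrates track density near that boundary, which is exactly the bias described above.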

> Also, if a ray is in its inactive length and reflects off a vacuum BC, we can immediately set it to being active. It seems like the code does not do that at present. We know the fluxes exactly as soon as we're coming off a vac BC.

This is another great idea! Similar to the one above, I had also tried this one out when originally developing random ray, as it seemed like the obvious thing to do. However, I also noticed bias issues when doing this. After investigating, it turns out that it causes the same sort of quadrature biasing, where far more rays begin their active lengths near the vacuum BCs, with cells farther away from the vacuum BCs having lower than expected quadrature/ray density. With long enough active ray lengths, the bias becomes pretty minor, but it gets quite severe for shorter ray lengths.

> Single temperature is a bit of a bummer to see! What is the difficulty in having this work in general with multiple temperatures?

I don't think this should be a big problem to add. I can definitely add it to the near-future work list, as it should be useful to have. For multigroup data, though, you'll still have to regenerate XSs at each temperature level one way or another, so lacking this feature is not as big of a deal as with CE data. As a workaround for now, you can just define the macroscopic XS sets at different temperatures as different materials, rather than combining them into one material with multiple temperature levels.
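The workaround mentioned can be sketched as one macroscopic-XS material per temperature level (the names `fuel_600K` and `fuel_1200K` are hypothetical and would need to match data sets in the MGXS library):

```python
import openmc

# One material per temperature level, each pointing at its own macroscopic set.
fuel_600 = openmc.Material(name='fuel_600K')
fuel_600.set_density('macro', 1.0)
fuel_600.add_macroscopic('fuel_600K')

fuel_1200 = openmc.Material(name='fuel_1200K')
fuel_1200.set_density('macro', 1.0)
fuel_1200.add_macroscopic('fuel_1200K')

openmc.Materials([fuel_600, fuel_1200]).export_to_xml()
```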

> Another point: it would be very nice to allow linear sources at some point, considering that OpenMC CSG cells are going to be pretty coarse if naively adapting from a MC model. It would be nice to not need to manually subdivide so much.

Yes, linear sources are planned for inclusion. They actually already got linear source random ray running in SCONE, so hopefully it should be a snap to add here now that the math is all figured out.

jtramm avatar Jan 10 '24 20:01 jtramm

Thanks for taking the time to answer my questions in detail John!

I think overall the code looks good. One thing we'd need as part of the MVP is documentation in the theory manual, IMO. Perhaps just some pasted stuff from your thesis or paper! Or perhaps references to your paper with very brief high-level explanations as to how the method works.

gridley avatar Jan 15 '24 16:01 gridley

Thanks for the review @gridley! Yes, some sort of docs on the theory and methodology is going to be necessary - it is on the roadmap as #2836. I had originally planned on adding it a little later on (once more features are in, especially fixed source), but it is probably a good call to have some basic docs go in with this PR.

jtramm avatar Jan 16 '24 15:01 jtramm

To validate the implementation, I ran the 2D C5G7 benchmark on a dual-socket AMD Rome 7742 node (128 cores total). The FSR mesh featured 142,964 FSRs, generated by manually subdividing the pincells in the benchmark specification into finer FSRs, such that each pincell featured three radial rings in both the fuel and moderator regions, further subdivided into 8 octants. This is a standard mesh used in previous publications on random ray, as well as in publications on OpenMOC.

In all tests, I kept the number of inactive batches at 600, the dead zone (inactive) ray length at 20 cm, the active ray length at 200 cm, and the number of rays per batch at 1,750. I then varied the number of active batches and reported the results below. Several metrics for accuracy are given beyond the eigenvalue, which measure the spatial power distribution. In particular, they measure the pin-integrated total fission source distribution (as computed using a mesh fission tally). Metrics are given in terms of the RMS pin power error, the average (absolute value) percent pin power error (AAPE), and the maximum percent pin power error.

Due to the bias inherent in the FSR mesh, the errors do not trend all the way to zero, but rather converge to the solution for the FSR mesh. As can be seen, there are pretty minimal gains from running more than 1600 active batches, so to reduce the error further you would be better off subdividing the FSR mesh. Overall, the numerical results look as expected to me. There is definitely still room for improvement on runtime performance, though.

[Image: table of C5G7 eigenvalue and pin power error results vs. number of active batches]

The thermal flux is given below for illustration (as plotted via Paraview, which can read the generated .vtk files natively).

[Screenshot: 2D C5G7 thermal flux plotted in ParaView]

My input deck for 2D C5G7 is at: https://github.com/jtramm/trrm_examples/tree/main/2D_c5g7

jtramm avatar Jan 19 '24 20:01 jtramm

I've added in documentation for the random ray solver. There is a new theory section that contains the derivation and other discussions on the methods, as well as a new section in the user guide covering random ray. @gridley, let me know if there's any other areas/topics I should add in -- happy to add more!

jtramm avatar Jan 30 '24 20:01 jtramm

After working on implementing fixed source random ray transport as part of an upcoming PR, I found that the way I am using external sources in the present PR would benefit from a change. Currently, we are assuming there is a single external source that will be used to sample starting ray locations. Once fixed source gets added in, we'll also need to represent neutron/photon sources along with the integration ray source.

To avoid breaking input files etc when more features go in, I've added a new "random_ray" particle type that can be specified when creating an IndependentSource object, so as to mark a particular source for use when sampling the integrating ray starting positions/directions, e.g.,

    uniform_dist = openmc.stats.Box(lower_left, upper_right, only_fissionable=False)
    settings.source = openmc.IndependentSource(space=uniform_dist, particle="random_ray")

I also updated the docs/tests/examples to use the new interface.

jtramm avatar Feb 06 '24 17:02 jtramm

Hey John, sorry I've been away for a bit! That sqrtkT accessor bug seemed like a tough one to catch.

I'll be back to review this soon!

gridley avatar Feb 12 '24 22:02 gridley

@gridley -- I've refactored the random ray solver into classes, so the random ray stuff is no longer adding any global variables to OpenMC (besides a handful of things to the settings namespace). Definitely a little more organized now, and should make it fairly natural to add linear sources in down the line. The new organization should also help with serializing things to/from statepoint as part of #2833.

As for the doc rendering issue, it is looking good to me with firefox, safari, edge, and chrome. To narrow down the issue, let's see if the bad rendering is due to your specific browser vs. differences in the docs build package versions etc. Here's a link to my built .html files -- let me know how they look in your browser:

https://drive.google.com/file/d/1MKxcbhhE1LfROJCkKV2OfDlVkaXyto_U/view?usp=share_link

jtramm avatar Feb 14 '24 16:02 jtramm

Hi @yardasol -- if you're happy with the changes I've made, please switch your review status from "requested changes" to "approved". Thanks!

jtramm avatar Apr 05 '24 14:04 jtramm

Good to go; thanks again for your work on this and especially for your patience @jtramm!

paulromano avatar Apr 18 '24 20:04 paulromano