Mili integration point advancement
At some point, we will need to let users choose which integration points are displayed in VisIt. Because of how the Mili data is organized, accessed, and displayed, this is a surprisingly difficult task. There are some discussions on this topic in issue #3370.
Originally, we were planning on using the array_decompose expression to allow users to set integration points. Unfortunately, this doesn't really work out for reasons outlined here: https://github.com/visit-dav/visit/issues/3370#issuecomment-497010889
Some other potential avenues that come to mind:
- Update VisIt to allow creating mutable database expressions (able to be manipulated by users), and have the Mili plugin create mutable expressions for each available class as constant mesh scalars representing the desired integration point for that class; the user could then update/change the integration point through the expression definition (see the sketch after this list). The plugin would then check the expression list to determine which integration point is desired for a given class. One downside to this is that we would be creating an entire field to represent a single variable. It also just feels like a bit of a hack.
- Create a SetAuxiliaryData method within the database plugin infrastructure. This would be similar to GetAuxiliaryData, except it would send information to the database plugin rather than retrieve information from it. If this was in place, we could create some kind of interface that allows users to send strings of information to the current plugin to try to interpret. In the Mili case, we could just allow the same commands that are used in Griz (something like set_ipt number).
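A minimal sketch of what the first avenue might look like from the CLI, assuming the plugin published and scanned such expressions; the expression name, database name, and the plugin-side behavior are all hypothetical:

```python
# Hypothetical workflow for the mutable-expression avenue. The expression
# name and database are invented; today nothing in the plugin would react
# to this expression.
OpenDatabase("example.plt.mili")
# The plugin would create this; the user would only edit its definition.
DefineScalarExpression("Bricks_integration_point", "3")   # request IP 3
AddPlot("Pseudocolor", "Primal/Bricks/foo")
DrawPlots()
# Changing the IP would mean redefining the expression and redrawing.
DefineScalarExpression("Bricks_integration_point", "5")
```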
Why not just introduce a new expression function that is similar to array_decompose except it is designed (and named) to handle integration points? The new function might be select_integration_points(<varname>, [str0,str1,...], [p00,p01,...], [p10,p11,...]), where varname is a VisIt variable name, [str0,str1,...] is a list of arbitrary strings (strings like brick, shell, etc. are recognized), and the lists [p00,p01,...], [p10,p11,...] are the integration point lists to handle on each class. If the variable is not shared, the classname arg can be "" or some other proxy for don't care.
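For concreteness, here is roughly how the proposed function might be invoked from the CLI; select_integration_points() does not exist, and the variable, class names, and point lists are illustrative:

```python
# Hypothetical: select_integration_points() is only proposed above, not
# implemented. Select IP 4 on bricks and IPs 2 and 3 on shells for the
# shared variable Primal/shared/bar.
DefineScalarExpression("bar_ips",
    'select_integration_points(<Primal/shared/bar>, '
    '["brick", "shell"], [4], [2, 3])')
AddPlot("Pseudocolor", "bar_ips")
DrawPlots()
```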
We should take care to generalize away from Mili-specific verbiage, and so maybe instead of treating those strings as classnames we treat them as element type names and use something more VTK-esque or VisIt-esque, such as hex (instead of brick) and quad (instead of shell) -- though shell may in fact be more general because most users, I believe, think a quad is a 2D object whereas a shell can really exist in a 3D geometric space (and has thickness).
Yeah, something like that could work. It seems a bit strange, though, because the variable in <varname> would have nothing to do with setting the integration points.
I am not sure I get that. I mean, referring to the comment you ref above, in the example of a shared variable, it would be Primal/shared/bar and then the expression would be selecting the integration points to display from this variable for each class it is defined on. In the case of a non-shared variable, it would be something like Primal/Bricks/foo and the expression would be selecting integration points for that variable, though the syntax of the expression function for that case could be simplified a bit.
So, the approach I suggest would get problematic if you happened to have a number of shared variables and you wanted to display them all with one set of integration points and then make a change in IPs and display them all again. That would be a lot of expression editing.
So, if we stay away from handling this in variable world and instead seek a way to handle it in mesh land, then it starts to feel much more like a SIL controls issue...to specify which "parts" (IPs) of the mesh we want served up each time we request variable(s). This is more like your suggestion b above, except that it gets codified somehow in the SIL. I can't remember...did we already go down that path and beat it to death before we abandoned it?
> I am not sure I get that. I mean, referring to the comment you ref above, in the example of a shared variable, it would be Primal/shared/bar and then the expression would be selecting the integration points to display from this variable for each class it is defined on. In the case of a non-shared variable, it would be something like Primal/Bricks/foo and the expression would be selecting integration points for that variable, though the syntax of the expression function for that case could be simplified a bit.
Kind of, except setting the integration point for 'Primal/Bricks/foo' would actually set the integration point for all element sets belonging to Bricks, not just foo (at least that's how it's currently set up in Griz world).
> Kind of, except setting the integration point for 'Primal/Bricks/foo' would actually set the integration point for all element sets belonging to Bricks, not just foo (at least that's how it's currently set up in Griz world).
Yeah, so this remark is highly suggestive of the notion that integration points are really more naturally associated with the mesh than they are with the variable, and that seems consistent with your earlier observations. I keep losing sight of that fact.
So, I guess there really are a few parts to the puzzle here.
Characterization of the Integration Points
There is the mesh, its finite elements and their types and the integration (e.g. quadrature) schemes (which specify the locations in parametric coordinates within an element of a given class where values -- degrees of freedom or dofs -- for a field are given). So, the points within each class of element at which field dofs are given (e.g. the integration points) are really an artifact of the element class(es) (e.g. mesh-land kinds of stuff) used to represent the given field(s). In almost every way they are like the piecewise-constant (cell-centered) and piecewise-linear (node-centered) interpolation schemes we commonly use for other fields.
Field Dofs at the Integration Points
Then, each field that follows a given integration scheme stores multiple dofs internal to each zone for that field. Instead of a single dof as we might ordinarily have for a common cell-centered field (and for which the interpolation scheme for the field's actual value over the zone is a box integration), we have multiple dofs for the field.
Display of IP fields by IP-Selection
This is the functionality you are aiming at providing here, and it looks like expressions (alone) are not sufficient. It's the answer to the question...how do we display a field in VisIt that has multiple internal dofs? Answer...pick one and treat it like any other cell-centered field. Is that really right? Because the points are definitely not all at the cell's center, we in fact might have to do some amount of interpolation/extrapolation of any given selected IP to the cell's center. Also, if we haven't considered adding some SIL functionality to support this, we should probably discuss further. It smells somewhat similar to species data, which we also handle via SIL (though, honestly, handling species as mesh-like stuff has never really made much sense to me because I see species as more of a field-like thing).
Interpolating IP Fields to the Nodes
This is something new I am throwing into the discussion. This is the equivalent of a re-centering operation, except for fields with IP dofs instead of fields with cell-centered dofs. Interpolating IP data to the nodes is subtle. See quadrature_interp.pdf for some discussion of one example. If we had such an ability, would that obsolete displaying IP data by IP-selection? Probably not, as analysts might occasionally want to inspect specific IP dofs of any given field.
Accurate Query/Contour of Native IP Data
Also something new I am throwing into the discussion. At some point, it could make sense for us to support accurate/correct contour plotting of the native IP data (you'd get curved contours inside individual elements) and/or performing Queries that do the integrations the IP data exists to represent to begin with.
Also, I happened to find this study of how to modify VTK to support integration points.
How about this:
- I could create a variable in the Mili plugin for each class and call it something like <class_name>_integration_point to represent the chosen integration point for each class, and I could also create a variable called something like integration_points to represent each class' chosen integration points all together (which is similar to how Mili already visualizes classes as both separate and combined).
- We could then create a new expression similar to your first suggestion, but we could alter it slightly to be something like select_integration_point(<varname>, <point>). Using this expression on <class_name>_integration_point would result in setting the integration point for <class_name>, and using the expression on <integration_points> would set the integration point for all classes. This would also allow users to visualize the current integration points so that they could see what was actually chosen (since the nearest IP to whatever is requested is chosen rather than explicitly using what the user requests).
Also, I'm not too familiar with the expression system in general, so I'm not actually clear on how easy it would be to implement an expression like this.
So I like the intention of these suggested expressions, but I honestly don't think that will pan out as you expect, because what we need is for the selected integration point(s) to then influence the behavior of any subsequent variable (e.g. Primal/Bricks/foo or Primal/shared/bar) we try to plot, but it will have the effect of influencing only the behavior of the <class_name>_integration_point or integration_points variables.
However, if these expressions are to somehow be used kinda-sorta like an arithmetic expression operator (not a VisIt Operator, but like +, /, *, -)...that might work. Let's call it the IP expression operator, denoted by #. Then the expression Primal/Bricks/foo # bricks_integration_point would have the effect of selecting the desired integration point(s) for a bricks-only variable named foo, whereas Primal/shared/bar # integration_points would have the effect of selecting the desired integration points for the shared variable bar. Further, if we had a slew of variables of the form Primal/Bricks/XXX # bricks_integration_point and then wanted to change the selected integration point for Bricks, we could effect that by adjusting the one expression for bricks_integration_point.
So, maybe that's the right way to go...to sort of treat the IP selection process similarly to an arithmetic expression operator and then provide the expressions you propose and the new # operator to the expression system. Of course, instead of adding a # operator, we could just add an expression function, select_ips(<var>, <ip-selector-var>), too.
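A sketch of that workflow using the select_ips() spelling; neither select_ips() nor the selector variables exist in VisIt, so everything here is hypothetical:

```python
# Hypothetical select_ips() workflow. One selector expression drives many
# derived variables, so changing the Bricks IP is a single edit.
DefineScalarExpression("bricks_integration_point", "2")
DefineScalarExpression("foo_sel",
    "select_ips(<Primal/Bricks/foo>, bricks_integration_point)")
DefineScalarExpression("XXX_sel",
    "select_ips(<Primal/Bricks/XXX>, bricks_integration_point)")
# Later: switch every Bricks variable to IP 7 by editing one expression,
# then do the Clear Plots / Draw Plots cycle to refresh derived plots.
DefineScalarExpression("bricks_integration_point", "7")
```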
> So I like the intention of these suggested expressions, but I honestly don't think that will pan out as you expect, because what we need is for the selected integration point(s) to then influence the behavior of any subsequent variable (e.g. Primal/Bricks/foo or Primal/shared/bar) we try to plot, but it will have the effect of influencing only the behavior of the <class_name>_integration_point or integration_points variables.
I'm not quite sure I follow this, but I think it's because there's some confusion on both sides here. <class_name>_integration_point and <integration_points> would technically be "real" variables defined on the database, but they wouldn't map to real variables. I was hoping that there was some way to have an expression variable sent to a plugin (or at least its definition) before being requested for visualization, but maybe that's not possible right now. If, for instance, the expression variable could be force-requested when defined, it would immediately send the new integration point for the given class (or all classes) when a user updated it. In the plugin, instead of sending back a dataset to visualize, it could just know that variables of type integration_point are meant to set integration points rather than send back data for visualization. It seems like this is not how the expression system is set up, though.
> However, if these expressions are to somehow be used kinda-sorta like an arithmetic expression operator (not a VisIt Operator, but like +, /, *, -)...that might work. Let's call it the IP expression operator, denoted by #. Then the expression Primal/Bricks/foo # bricks_integration_point would have the effect of selecting the desired integration point(s) for a bricks-only variable named foo, whereas Primal/shared/bar # integration_points would have the effect of selecting the desired integration points for the shared variable bar. Further, if we had a slew of variables of the form Primal/Bricks/XXX # bricks_integration_point and then wanted to change the selected integration point for Bricks, we could effect that by adjusting the one expression for bricks_integration_point.
I think I'm following this, but it seems to go back in a direction that we want to get away from. The first part goes back to the idea of setting integration points for variables, which is something we want to avoid. The last sentence talks about setting integration points for an entire class (we basically always have a slew of variables for each class), which is what we want, but I don't see how that's being accomplished without still requesting a single variable.
> Of course, instead of adding a # operator, we could just add an expression function, select_ips(<var>, <ip-selector-var>), too.
One of the things I don't like about this approach is that it requires us to load all integration points into memory. I'm not sure how many that is on average, but I'm guessing it could be quite a bit(?). It seems like this whole process would be much easier/more efficient if we could somehow just tell the plugin which integration points to use for each class and only load those arrays into memory. This is how it's currently set up, with the midpoint being the default, but there's no way to change it right now.
I spoke with Kevin about how many integration points we should expect to see for a given element set. It sounds like it's not yet entirely clear because of how new this is, but there are some general ideas. The default is typically 3-5, but Ed says that it can conceivably reach up to 200.
Reading Just Some (of the IPs) of a Variable
We can't really do this with current plugin interfaces, for multiple reasons. First, does the file format even support reading just some of a variable? When VisIt reads a vector variable from a plugin, it reads all components even if a subsequent expression is using just one. When VisIt reads a material variable from a plugin, it reads information about all the constituent materials, not just some, even if in the current SIL selection only some materials are selected. When VisIt reads an enumerated scalar variable from a plugin, it reads information about all the associated subsets, not just some of them. When VisIt reads an array variable from a plugin, it reads all the array members, not just some.
It is certainly reasonable to enhance the plugin interface to support asking for just part of some variable. In some very restricted cases (e.g. hyperslabbing of a structured mesh variable), we already do handle part of a variable because we are spatially subsetting it (e.g. restricting the domain space of the variable). However, we don't ever do this for the range space of the variable. Doing that at all might be easily possible. Doing it correctly, including changes to plugin public interfaces, I suspect is quite a bit more involved. It would make sense to understand the broader set of use cases such an enhancement might be useful for before we did that, IMHO. Side note: it's conceivable that adopting some kind of a convention in the string variable name that gets passed to the plugin could meet your needs (e.g. avtMiliFileFormat::GetVar("stress") means to read all of stress (as we normally would) but avtMiliFileFormat::GetVar("stress[1,5-8,17]") means to read it for only those IPs). How the extra "[1,5-8,17]" gets tacked onto the variable name before sending to the plugin would involve some work.
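To illustrate that side note, a small sketch (in Python for brevity, though the plugin itself is C++) of parsing such a bracketed suffix; the naming convention is the hypothetical one above:

```python
import re

def parse_ip_selection(varname):
    """Split a name like 'stress[1,5-8,17]' into ('stress', [1,5,6,7,8,17]).
    A name with no bracket suffix means read all IPs (returns None)."""
    m = re.match(r"^(.*)\[([\d,\-]+)\]$", varname)
    if m is None:
        return varname, None
    base, spec = m.group(1), m.group(2)
    ips = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = (int(s) for s in part.split("-"))
            ips.extend(range(lo, hi + 1))
        else:
            ips.append(int(part))
    return base, ips

print(parse_ip_selection("stress[1,5-8,17]"))  # ('stress', [1, 5, 6, 7, 8, 17])
print(parse_ip_selection("stress"))            # ('stress', None)
```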
We are handling variables with IPs as Array variables and yes, we'd read all of them, even if they had 200 members. The same is true for scattering angle variables, which often contain several hundred entries for each scattering angle. If memory is a problem...more processors.
Treating IPs in Variable World
I get that IPs aren't really part of the variables. They are really part of the mesh (elements that comprise the mesh).
Well, my proposed new expression operator (combined with your proposed new variables representing which IPs are selected) has the desired effect even if it is conceptually not quite where we want it. Think of the proposed # operator as something that operates on the domain space of the variables it is combined with. When you change the definition of either <class_name>_integration_point or integration_points (which is selecting which IPs are involved), any expression derived from these using the # operator is effectively updated. Now, one would probably have to hit the Clear Plots and Draw Plots buttons to see the updated plots because we currently don't have support for updating the plots automatically when the expressions they are derived from change. But, I think this is totally doable and probably the lowest cost enhancement.
Treating IPs in Mesh World
This would be ideal, I guess. We'd handle it via SIL controls in a manner similar to how we handle Species variables, I think. I admit I haven't thought this through, but it is probably a more natural conceptual fit. I just think it would involve more work. Bonus, though, that changing SIL controls automatically updates displayed plots, so you wouldn't have this Clear Plots/Draw cycle to go through to see updates after changing IPs.
If you wanna set up a WebEx (inviting anyone else to join who's interested), maybe we should do that? We might make faster progress.
> Think of the proposed # operator as something that operates on the domain space of the variables it is combined with.
Oops...I mean range space there.
> First, does the file format even support reading just some of a variable?
The Mili plugin does, but we don't currently have a way of telling it which part (element set) to read, so it takes the midpoint.
> We are handling variables with IPs as Array variables and yes, we'd read all of them, even if they had 200 members.
This seems a bit odd to me. If we loaded a shared variable into memory with all IPs, we could potentially have something that looks like Variable[num_elements][vector_size][es_X] (or Variable[num_elements][es_X]), where es_X is the number of integration points for a given element set. This last dimension can be ragged, since we can have different element sets defined on different areas of the mesh with different numbers of integration points, and they're all combined in the shared variable. Can VisIt handle something like this? I suppose we could pad the last dimension, but that also seems a bit odd.
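To make the raggedness concrete, here is a small sketch (numpy, made-up values) contrasting an offsets-based layout with the padded rectangular layout a fixed-width array variable would force:

```python
import numpy as np

# Two ways to hold per-element IP data when element sets have different
# IP counts (es_X varies). Values here are made up for illustration.
ip_counts = np.array([4, 4, 1, 8])            # IPs per element (ragged)
values = np.random.rand(int(ip_counts.sum())) # all IP dofs, flattened

# (a) CSR-style: flat value array plus offsets, no wasted memory.
offsets = np.concatenate(([0], np.cumsum(ip_counts)))
elem2 = values[offsets[2]:offsets[3]]         # the single IP of element 2

# (b) Padded: rectangular [num_elements][max_ips] array, NaN-padded,
# which is what a fixed-width array variable would force on us.
padded = np.full((len(ip_counts), int(ip_counts.max())), np.nan)
for e, n in enumerate(ip_counts):
    padded[e, :n] = values[offsets[e]:offsets[e + 1]]
```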
Am attaching a paper I found that goes into some details regarding visualization of bricks with IPs.
Full disclosure...the abstract mentions fiber bundles of all things, but that isn't what caught my eye 😮
Also, the MFEM team has been investigating some similar issues...
- https://github.com/mfem/mfem/issues/1477
- https://github.com/mfem/mfem/issues/679.
Including here some remarks from an email exchange with @tzanio...
> If you have a first-order (8-node) hex with data at 8 interior (e.g. Gauss-Legendre) quadrature points, then you have several options:
> - Visualize the data at these 8 points as a “point cloud” i.e. small spheres colored accordingly
> - Interpolate the 8 point data as a Q1 L2 field (tri-linear discontinuous finite elements) and visualize that
> - Interpolate the 8 point data as a Q0 LOR field (piecewise-constants on a once refined version of the hex)
>
> There could be other options, but these are the ones that were discussed in the issues below and make sense for more quadrature points and/or higher-order elements.
>
> I will very much advise against using only one of the points (or averaging to the vertices).
And, in response to my questions...
- But, is there a 4th option...interpolate the 8 point data to the nodes and then perform a “nodal-averaging perhaps weighted by zone volume” to arrive at a tri-linear continuous representation?
- Also, what about numerical operations like Queries in VisIt? In these cases, we’re computing things like sums (integrals), etc. Presumably, such operations could be performed “better” than doing their equivs on one of the 3 modalities you suggest below.
> Option 1 is really the only true vis; anything else attributes properties to the data that it may not have. For example, in BLAST we have the artificial viscosity coefficient as a scalar at the quadrature points. That is a true point value, not a function, so it will be mathematically incorrect to interpret it as option 2. Option 3 will be acceptable for this.
>
> In the case where the quadrature data does correspond to a function, that function is typically discontinuous. Again, that is essentially the case for all such fields in BLAST, so option 2 will be the most appropriate in this case. Integrals should work just fine. Option 3 will be acceptable for this too.
>
> If you prefer to reconstruct an H1 (continuous) field, then you should do exactly what you said: option 2, followed by simple averaging of the values at shared degrees of freedom on element boundaries (vertices in this case).
These Abaqus refs, 1 and 2, explain some aspects of integration points on different element types.
There is some other stuff in #2602
Page 31 of this ref describes in some good detail procedures for extrapolating integration point data to nodes and then averaging nodal values to produce a (quadratic) continuous field.
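As a concrete instance of that kind of procedure for a first-order hex with 2x2x2 Gauss data, here is a small numpy sketch (the standard textbook construction, not VisIt code): build the matrix of trilinear shape functions evaluated at the Gauss points and invert it to extrapolate IP values to the nodes.

```python
import numpy as np
from itertools import product

# Corner signs for the 8 nodes of a trilinear hex, in (xi, eta, zeta).
nodes = np.array(list(product([-1, 1], repeat=3)), dtype=float)
gauss = nodes / np.sqrt(3.0)     # 2x2x2 Gauss-Legendre points

def N(p):
    """Trilinear shape functions evaluated at parametric point p."""
    return 0.125 * (1 + nodes[:, 0] * p[0]) \
                 * (1 + nodes[:, 1] * p[1]) \
                 * (1 + nodes[:, 2] * p[2])

# M[j, i] = N_i(gauss_j); nodal dofs follow from inverting interpolation.
M = np.array([N(g) for g in gauss])
ip_values = np.random.rand(8)            # stand-in stress values at the 8 IPs
node_values = np.linalg.solve(M, ip_values)

# Sanity check: interpolating the extrapolated nodal field back to the
# Gauss points reproduces the original IP data.
assert np.allclose(M @ node_values, ip_values)
```

Averaging such per-element nodal values at shared nodes (perhaps volume-weighted) then yields the continuous field discussed above.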
This ref describes a nodal-based method (on tetrahedra) for doing the same thing.
These notes from a researchgate.net discussion are also useful...
> In most software, the stress is already calculated at the gauss points as noted by Alejandro. In general, stress is not directly solved for by the finite element method. Displacements are calculated at the nodes. All other displacements are interpolated from the shape functions. The stress (and resulting strain) is found by taking the derivative terms from the shape functions. Hence stress and strain are never calculated directly, but are interpolated from these derivative terms and the nodal displacements. It turns out, the stress happens to be most accurately interpolated at the Gauss points in the isoparametric formulation, and most commercial software will output the stress at these locations in a straightforward manner. Please note that if stress at a particular node is desired, it will be different when calculated from each element attached to that node. Care should be taken when combining the stress from multiple elements at a node.
>
> In FEM, nodes are connected to more than one element (in general) and have no volume. So basically, we can calculate only the displacements/forces at the nodes. We cannot define a strain or stress at a volume-less, infinitesimal point. The strain and stress can only be defined on the (volume of the) element (i.e. Gauss point or integration point). When you see nodal stresses and strains in a FEM analysis, that is actually an averaged (smoothed) value of the stresses/strains obtained in the neighboring integration points of the elements to which the node is connected.
This discussion considers integration points in a more general setting than just solid mechanics and is closer to the level of abstraction at which I think any implementation in VisIt should be pitched.
Notes from Ed
Based upon input from others and myself, this is how MDG would like to render results involving multiple integration points.
Beam elements contain one or more locations along their length where integration points (IPs) exist. At each location, there are typically multiple IPs within the cross section (e.g., at the top left, at the bottom middle, in the center, etc.), and the number of points can vary between element sets. When examining results, it is convenient to select a particular IP in the cross section and display a certain result from it on the entire beam. Every beam in that element set should display the same IP/result pair. If multiple integration locations exist along the length, the element nodal position functions (or others) can be used to interpolate/extrapolate integration point values along the beam length to generate a location-specific value/color. While the result is not smooth between adjacent elements, the rendered result jump is an indicator of the mesh “accuracy” and is useful.
Shells contain one or more in-plane locations where IPs exist, and at those locations, there are normally multiple through-thickness integration points. In low order quad shells, 2x2 in-plane integration is used. It is useful to render data from the same through-thickness location/IP in all shell elements in an element set. Again, interpolation/extrapolation of the IP quantities across the 2-D element surface provides valuable information.
In many problems there is other non-IP data also put out for an element. This data is commonly constant in the element.
Brick elements can contain multiple IPs too. Standard 8-node bricks commonly use a 2x2x2 integration scheme (8 IPs) within the element. It is useful to display the same result from all IPs within the element set and interpolate/extrapolate as with the other element types. In higher-order elements, the same approach is desirable too.
For bricks with only 1 IP and shells with only 1 in-plane IP, it may be preferred not to interpolate/extrapolate certain results as it may yield more meaningful images - especially for unitary quantities. Having these options is desirable. This can be achieved in elements with multiple volume or in-plane IPs by creating “sub-elements” – one for each separate integration point in the element. Here a constant value/color is rendered on each sub-element. The trade-off with this approach is that jumps across parent element boundaries are not readily discerned and variations across an element are not smooth. Nonetheless, it has uses in certain cases.
How LLNL Engineering codes deal with multiple integration points today (sub-optimal).
ParaDyn and Diablo only output 1 IP data set for low-order solid elements even if 8-point integration is used. ParaDyn can output multiple IP data sets for solids, but as Griz does not support it today, it is not the default. For shells, the codes write out only one in-plane location. Diablo limits the through-thickness IP data to just 3 IPs (top, bottom and middle), and by default ParaDyn does the same thing. Again, ParaDyn has the ability to write one or all in-plane locations and any or all through thickness IP data sets. For beams, ParaDyn can write out some or all the IPs, and, to the best of my knowledge, Diablo does not put out any IP data. That said, being able to put out some or all the IP data is essential, and Diablo will be upgraded appropriately. For NIKE3D’s higher order elements, low-order sub-elements are created each with a single “integration point”.
The post-processing tool Griz has two rendering modes, given how the codes usually supply data. Griz can render a single result color on the entire element (the current default mode), or it can construct result-averaged nodal values and then interpolate the nodal values across the element. This introduces diffusion and has problems at material or element set boundaries.
I note that in engineering applications, the number of integration points can vary greatly between element sets - more so for the structural elements and less so for the solid elements. Global and element-set specific integration-point selection commands are essential for working with such data.
See these Abaqus notes regarding their implementation of thick shell elements.
The last few slides in this slide deck describe a method for plotting C0 continuous stresses from the Gauss quadrature data.
From a Quora discussion...
> In an FEA model, you are basically just solving the spring equation, F = kx. However, you are solving the matrix form of this equation, which accommodates the model being broken into many elements.
>
> The model is discretized into elements that each act as a small spring. A stiffness matrix is built for each element, and then a global stiffness matrix is composed for the overall structure by combining the individual element stiffness matrices. Now you have the "k" for the structure that fits into F = kx. If you have boundary conditions / constraints in your model (and you will), then those are applied by modifying the global stiffness matrix.
>
> Since you know what the applied forces are, you can also build the {F} vector. You can then solve for the displacements. You start with:
>
> {F} = [K]*{U}
>
> where {F} is the applied force vector, [K] is the global stiffness matrix, and {U} is the nodal displacement vector. Solve for the displacements:
>
> {U} = [K]^-1 * {F}
>
> Now that you know the displacements of each node, you can solve for the strain in each element. Once the strains are known, they can be used in combination with the material elastic modulus to solve for stresses. Alternatively, the nodal displacements can be used in combination with the individual element stiffness matrices to solve for the internal forces on each node. These nodal forces, combined with element geometry, can then be used to solve for the stresses on each element.
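A tiny numeric instance of the above (numpy; stiffnesses and load are made up): assemble two springs in series, constrain one end, and solve {U} = [K]^-1 {F}.

```python
import numpy as np

# Two springs in series (stiffnesses k1, k2), node 0 fixed, force on node 2.
k1, k2, F2 = 10.0, 20.0, 5.0
K = np.zeros((3, 3))
for (a, b), k in {(0, 1): k1, (1, 2): k2}.items():
    # Each element contributes k * [[1, -1], [-1, 1]] to the global matrix.
    K[np.ix_([a, b], [a, b])] += k * np.array([[1, -1], [-1, 1]])

# Apply the constraint u0 = 0 by reducing to the free dofs, then solve
# {U} = [K]^-1 {F} for the remaining displacements.
free = [1, 2]
F = np.array([0.0, F2])
U = np.linalg.solve(K[np.ix_(free, free)], F)
print(U)   # [0.5, 0.75]: u1 = F2/k1, u2 = F2/k1 + F2/k2
```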
This ref discusses the importance of UNaveraged stress contours (to highlight how discontinuous stress may be) in assessing quality of results.
Wanted to also capture here this ref, which relates element stiffness matrices to Voigt notation, which VisIt uses elsewhere for symmetric tensors.
Structural engineering and solid mechanics problems are often concerned with deformation of objects composed of materials due to applied/external forces and boundary conditions. The simplest and canonical example of a deforming object is a spring, where the force, F, required to deform the spring a distance x is related by Hooke's law, yielding the equation F = kx, where k is the spring constant, a characteristic of the spring (including the material(s) comprising it) also known as its stiffness. Typically, in FEM problems, F and k are known and the problem is to compute x, the displacement.
These slides give numerous, detailed examples of using the analogy of springs to understand the basics of FEM analysis.
In FEM analysis, a large, complex (in geometry and/or material composition) object is decomposed into many small pieces (e.g. elements) for numerical computations. Given applied loads and/or boundary conditions, the forces, {F}, on each element are known (at the nodes). We have {F} = [K]{x} and need to solve for the displacements, {x}, and so have {x} = [K]^-1 {F}. Here, the object's global stiffness takes the form of a massive matrix, [K], called the stiffness matrix.
In general, building the stiffness matrix involves the integration of the individual elements' basis (aka interpolation) functions and then assembling those individual integrations into the global matrix. It is the integration involved in this step that introduces integration point (aka Gauss quadrature point or just Gauss point -- note that depending on the element types, there can be a variety of different kinds of integration points, not necessarily all called Gauss points) data into the handling of stress data.
Both computational efficiency and accuracy demand that the underlying functions be evaluated at integration points which are typically interior to the element's geometrical boundary; multiple points along a 1D beam, points interior to triangles and quads in 2D, points along the thickness of shell elements or internal to bricks in 3D, etc. The number of such points typically depends on the order of the basis functions.
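For reference, the two-point Gauss-Legendre rule underlying the common 2x2x2 brick scheme is:

```latex
\int_{-1}^{1} f(\xi)\, d\xi \;\approx\; \sum_{i=1}^{n} w_i\, f(\xi_i),
\qquad n = 2:\; \xi_i = \pm\tfrac{1}{\sqrt{3}},\; w_i = 1
```

An n-point rule is exact for polynomials up to degree 2n-1, and the 3D 2x2x2 rule is the tensor product of this 1D rule, which is why 8 IPs show up at (±1/√3, ±1/√3, ±1/√3) in a first-order hex.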
From these notes in a researchgate.net discussion...
> In general, stress is not directly solved for by the finite element method. Displacements are calculated at the nodes. Displacements at other places internal to an element are interpolated from the shape functions. The stress (and resulting strain) is found by taking the derivative terms from the shape functions. Hence stress and strain are never calculated directly, but are interpolated from these derivative terms and the nodal displacements. It turns out, the stress happens to be most accurately interpolated at the Gauss points in the isoparametric formulation, and most commercial software will output the stress at these locations in a straightforward manner. Please note that if stress at a particular node is desired, it will be different when calculated from each element attached to that node. Care should be taken when combining the stress from multiple elements at a node.
>
> In FEM, nodes are connected to more than one element (in general) and have no volume. So basically, we can calculate only the displacements/forces at the nodes. We cannot define a strain or stress at a volume-less, infinitesimal point. The strain and stress can only be defined on the (volume of the) element (i.e. Gauss point or integration point). When you see nodal stresses and strains in a FEM analysis, that is actually an averaged (smoothed) value of the stresses/strains obtained in the neighboring integration points of the elements to which the node is connected.
For VisIt, the upshot is that key solid mechanics data, like stresses, tends to be known in downstream viz tools at points other than the mesh nodes. Methods for visualizing such data introduce new concerns different from those typical of zone-centered and node-centered fields. In particular, such fields are discontinuous across element boundaries. In some sense, this makes these fields similar to piecewise constant (e.g. zone-centered) fields, which are also discontinuous across element boundaries. However, for stress data for example, such discontinuities are important for users to see in order to assess the quality and convergence of the mesh and simulated results.
But, this also depends on the specific element types used in the data producer and how the data producer winds up storing data for downstream visualization tools. These element types include such funny-sounding things as thin and thick beams and shells, which utilize integration points to capture various cross-sectional stress information. Note that in general, the element types used in structural mechanics may vary far beyond those supported in the Mili database. In fact, the Mili database itself may be capable of handling a much wider variety of element types than we currently encounter from LLNL's engineering codes. Some data producers may effectively resample such data to the nodes, and then VisIt won't be able to offer any choices other than to treat it as any other normal nodal field. In other cases, a data producer could wind up storing stress data both at the integration points and resampled to the nodes to give users an option in what to plot during visualization. VisIt should be able to offer the option of resampling to the nodes.
While integration point data is not higher order than nodal data when interpreted on an element-by-element basis (e.g. in an iso-parametric brick element, there are often 8 dof-points for the geometry and 8 dof-points for the stress) it is higher order than nodal when interpreted in a global mesh sense because the global function is piecewise-DIScontinuous. There are more total dofs than any nodal field. Each element has a unique set of dofs for the stress field whereas in a typical nodal field, nodal dofs are shared among all the elements that share each node.
Specific notes from Ed Z.
Based upon input from others and myself, this is how MDG would like to render results involving multiple integration points.
Beam elements contain one or more locations along their length where integration points (IPs) exist. At each location, there are typically multiple IPs within the cross section (e.g., at the top left, at the bottom middle, in the center, etc.), and the number of points can vary between element sets. When examining results, it is convenient to select a particular IP in the cross section and display a certain result from it on the entire beam. Every beam in that element set should display the same IP/result pair. If multiple integration locations exist along the length, the element nodal position functions (or others) can be used to interpolate/extrapolate integration point values along the beam length to generate a location-specific value/color. While the result is not smooth between adjacent elements, the rendered result jump is an indicator of the mesh “accuracy” and is useful.
Shells contain one or more in-plane locations where IPs exist, and at those locations, there are normally multiple through-thickness integration points. In low order quad shells, 2x2 in-plane integration is used. It is useful to render data from the same through-thickness location/IP in all shell elements in an element set. Again, interpolation/extrapolation of the IP quantities across the 2-D element surface provides valuable information.
In many problems there is other non-IP data also put out for an element. This data is commonly constant in the element.
Brick elements can contain multiple IPs too. Standard 8-node bricks commonly use a 2x2x2 integration scheme (8 IPs) within the element. It is useful to display the same result from all IPs within the element set and interpolate/extrapolate as with the other element types. In higher-order elements, the same approach is desirable too.
For bricks with only 1 IP and shells with only 1 in-plane IP, it may be preferred not to interpolate/extrapolate certain results as it may yield more meaningful images - especially for unitary quantities. Having these options is desirable. This can be achieved in elements with multiple volume or in-plane IPs by creating “sub-elements” – one for each separate integration point in the element. Here a constant value/color is rendered on each sub-element. The trade-off with this approach is that jumps across parent element boundaries are not readily discerned and variations across an element are not smooth. Nonetheless, it has uses in certain cases.
Ed Z. on How LLNL Engineering codes deal with multiple integration points today (sub-optimal).
ParaDyn and Diablo only output 1 IP data set for low-order solid elements even if 8-point integration is used. ParaDyn can output multiple IP data sets for solids, but as Griz does not support it today, it is not the default. For shells, the codes write out only one in-plane location. Diablo limits the through-thickness IP data to just 3 IPs (top, bottom and middle), and by default ParaDyn does the same thing. Again, ParaDyn has the ability to write one or all in-plane locations and any or all through thickness IP data sets. For beams, ParaDyn can write out some or all the IPs, and, to the best of my knowledge, Diablo does not put out any IP data. That said, being able to put out some or all the IP data is essential, and Diablo will be upgraded appropriately. For NIKE3D’s higher order elements, low-order sub-elements are created each with a single “integration point”.
The post-processing tool Griz has two rendering modes, given how the codes usually supply data. It can render a single result color on the entire element (the current default mode), or it can construct result-averaged nodal values and then interpolate the nodal values across the element. This introduces diffusion and has problems at material or element set boundaries.
I note that in engineering applications, the number of integration points can vary greatly between element sets - more so for the structural elements and less so for the solid elements. Global and element-set specific integration-point selection commands are essential for working with such data.
Plotting in VisIt
So, for Mili data, we will have a mesh organized into element sets. Each element set is homogeneous in element type and therefore IP configuration. A mesh can have a mix of 1D (beam), 2D (shell) and 3D (brick) elements. It may be the case that IPs from different dimensioned element sets are somehow related (e.g. the middle IP in a beam goes with the middle IP in a shell), but other than perhaps the use of a common numbering or naming convention, it isn't clear that Mili would capture that relationship.
As Ed Z. describes in his notes, above, for each element set, users will need the ability to select IPs that are currently active. That sounds, smells and tastes a lot like VisIt subset (SIL) controls or the use of VisIt SIL controls for species data. But, let's set that aside for the moment. However, some of the options for plotting IP data could also involve all the IPs. For example, @tzanio proposed three common options to plot 3D IP data on bricks/hexes...
> If you have a first-order (8-node) hex with data at 8 interior (e.g. Gauss-Legendre) quadrature points, then you have several options:
> - Visualize the data at these 8 points as a “point cloud” i.e. small spheres colored accordingly
> - Interpolate the 8 point data as a Q1 L2 field (tri-linear discontinuous finite elements) and visualize that
> - Interpolate the 8 point data as a Q0 LOR field (piecewise-constants on a once refined version of the hex)
Note that option 3 does require that VisIt know enough about the element types to compute the proper once-refined element. That refinement probably varies with element type and IP configuration.
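For the first-order hex case, option 3 can be sketched entirely in parametric space (numpy, made-up values); the octant decomposition here is my reading of "once refined" for a 2x2x2 rule:

```python
import numpy as np
from itertools import product

# Split the reference hex into its 8 octants and give each octant the value
# of the one 2x2x2 Gauss point that falls inside it. gauss[i] lies in the
# octant whose signs are octant_signs[i], so the pairing is by index.
octant_signs = np.array(list(product([-1, 1], repeat=3)))
gauss = octant_signs / np.sqrt(3.0)
ip_values = np.random.rand(8)              # stand-in IP data

def lor_value(p):
    """Piecewise-constant LOR field: value of the octant containing p."""
    signs = np.where(np.asarray(p) < 0, -1, 1)
    idx = int(np.all(octant_signs == signs, axis=1).argmax())
    return ip_values[idx]

print(lor_value((0.4, -0.2, 0.9)))   # value of the (+, -, +) Gauss point
```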
In addition to options 1-3, above, there are others including simply picking one of the IPs (e.g. the selected or active IP) to be used as the zone-centered value for each element as well as possibly re-centering that field to the nodes, using @tzanio's option 2, above, but averaging the values at the nodes to arrive at a node-centered field. These approaches that involve averaging to the nodes are probably useful for pretty pictures but are probably not good for analysis in the same way certain MIR algorithms are useful for pretty pictures but not analysis.
While it will be useful for users to select specific IPs for PC plotting, what about contour plotting? Does it really make any sense to NOT use all the IPs for contour plots? If we want continuous contours, we'll need to do something similar to what is described in the paragraph above for continuous colors in a PC plot. OTOH, what if we contoured the dual mesh (see below)?
Picks and Queries
For a Pick operation, I don't think IP selection is needed. I mean, I think a Pick should return all known information for the picked element and simply ensure that, in whatever possibly large textual result including IP data, the user can associate numbers with IPs correctly. Or, maybe a Pick should restrict itself to returning information only about selected or active IPs.
For a Lineout operation, it would seem we might also want to use all the IPs in an element: as the ray for the line passes through an element, it will pass near each IP, suggesting the lineout value should look close to the IP it is near.
For Min and Max queries, if specific IP selection(s) are active, then the query should return information specific to the selected IPs. Otherwise, it should return information for all IPs. In other words, Min/Max among the values of selected IPs or Min/Max of all IPs.
What happens for things like the Variable Sum query (which is basically an integral) and its friend, the WVS query? I am still not clear on this, but because the IP data represents values of a field at the integration points of a quadrature scheme, it would seem that WVS should use IP data, with weights determined by the IP configuration, to perform the sum (integration). We should confirm this, though.
Would a Dual Mesh for Visualization Operations Work Better?
Why not visualize IP data on a dual mesh? The IP data does in fact represent actual values of a field. They are field samples. We can choose to interpolate those values over an element however we wish to fill in (color, for example) the field's variation across an element of the mesh. One option would be to compute a dual mesh such that the IPs of the original mesh are the nodes of this dual mesh. Then, VisIt can treat all IP data as a normal nodal field on this dual mesh. The resulting field would be continuous and so likely useful for pretty pictures, but maybe less so for analysis. OTOH, it could represent an approach that would eliminate the need for IP selection as every plot would involve all the IPs.
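A small sketch of how the dual-mesh nodes might be computed for one trilinear hex (numpy; the element coordinates are made up): push the parametric Gauss points through the element's geometry map.

```python
import numpy as np
from itertools import product

# Dual-mesh idea: the nodes of the dual mesh are the physical locations of
# the IPs, obtained by pushing the parametric Gauss points through the
# element's trilinear geometry map. hex_corners is an invented element.
corner_signs = np.array(list(product([-1, 1], repeat=3)), dtype=float)
hex_corners = corner_signs + np.array([10.0, 0.0, 0.0])

def to_physical(p):
    """Trilinear map from parametric point p to physical coordinates."""
    weights = 0.125 * np.prod(1 + corner_signs * np.asarray(p), axis=1)
    return weights @ hex_corners

dual_nodes = np.array([to_physical(g) for g in corner_signs / np.sqrt(3.0)])
print(dual_nodes.shape)   # (8, 3): one dual-mesh node per IP
```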
VisIt needs deeper understanding of element types and/or IP configurations
A lot of the above indicates a need for VisIt to understand more about element types than it currently does. For example, it would need to know the local coordinates of IPs as well as possibly the weights used in the data producer's integration schemes. This tends to be information associated with each element type. Certainly, it needs to know Mili's element types and IP configurations but we should take care to not be too Mili specific in this regard and have some broader understanding of structural engineering and solid mechanics element types in general.
SIL Controls for IP Selection
Different element sets will have different IP configurations from which to select IPs. This is almost identical to different species having different constituents. We won't have a unique IP configuration for each element set. But, we will have some number of IP configurations, and if a user has two element sets with identical IP configurations, they may still want to manipulate the selected IPs on each set independently. SIL controls for this seem like they would be bi-level. At the top would be all the unique IP configurations (which represent the various element types in the mesh, I think). Maybe we have a mesh with 4-node fully integrated (4 IP) quads and reduced integrated (1 IP) quads and also 8-node fully integrated (8 IP) and reduced integrated (1 IP) bricks. So, the top of the SIL would be these 4 element types. IP selection at that level would select the IP for that element type regardless of element set. The next level down would be the element sets that are comprised of elements of one of the types at the top level. At this level, a user could have two element sets of fully integrated quads but nonetheless select different IPs for those sets. This is very much how material-cross-domain selection works in the SIL if, for example, each domain consisted of only a single material. At the top, you can select from among materials or domains. Below each domain you can select various materials. Below each material, you can select various domains. Likewise for IP selection, you would select from among element types or element sets. Below an element set, you can select specific IPs, and below an element type, you can select specific element sets.
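The SIL manipulation calls below are real VisIt CLI functions, but the IP-related category and set names are hypothetical since no such SIL exists today; this is just what scripted IP selection might feel like:

```python
# SILRestriction()/TurnOffSet()/SetPlotSILRestriction() are real CLI calls;
# the "fully_integrated_quads" category and "ip_*" set names are invented.
AddPlot("Pseudocolor", "Primal/shared/bar")
silr = SILRestriction()
for setid in silr.SetsInCategory("fully_integrated_quads"):
    if silr.SetName(setid) != "ip_3":    # keep only the desired IP
        silr.TurnOffSet(setid)
SetPlotSILRestriction(silr)   # SIL changes update drawn plots automatically
DrawPlots()
```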
For a first step/quick fix to get some functionality to users, we've decided to add a database read option that allows users to globally set the integration point across all element sets (choose the nearest). We can then move forward with adding the remaining derived variables.
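From the CLI, that read option would presumably be set through the standard open-options mechanism; the calls are real, but the option name and value here are guesses:

```python
# GetDefaultFileOpenOptions()/SetDefaultFileOpenOptions() are real VisIt
# CLI calls; the option key and value below are hypothetical.
opts = GetDefaultFileOpenOptions("Mili")
opts["Global integration point"] = "middle"
SetDefaultFileOpenOptions("Mili", opts)
OpenDatabase("example.plt.mili")
```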
@JustinPrivitera one thing worth summarizing out of this long dialog is that my thoughts and comments at the time were motivated mostly by not modifying much internally in VisIt...so no new variable types or mesh types or whatever and using what we already have in VisIt that is mostly like this...which was array variables at the time. An array variable is like a vector variable but with an arbitrarily large number of components (maybe hundreds).
So, the idea is that any Mili variable with IPs would be read from the Mili plugin as an array variable (one array entry per IP). Then, we could use any of our existing array functionality to decide which IPs to actually render in a plot.
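For example, with the array-variable mapping, IP selection reduces to the existing array_decompose() expression (the variable name below is illustrative):

```python
# array_decompose() is an existing VisIt expression function. If stress
# with 8 IPs were exposed as an array variable, rendering IP 3 would be:
DefineScalarExpression("stress_ip3",
    "array_decompose(<Primal/Bricks/stress>, 3)")
AddPlot("Pseudocolor", "stress_ip3")
DrawPlots()
```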
There are two issues with that approach...first there may be some performance (or logistical API) considerations in reading such data from the Mili library in the plugin that Alister was dealing with that I didn't fully understand. Next, there is probably a significant semantic mismatch between a VisIt array variable and IPs. One I can think of is that IPs have actually two kinds of data associated with them. One is the variable values at each IP. The other is the IP positions within a class of element (fixed for each element class in a given Mili database). VisIt array variables would only handle the first, not the second.
Some other usability issues involve how users tend to identify IPs...by a number, by ordinal moniker (e.g. first, middle, last or smallest, middle, largest), or by some other user-defined names (e.g. Devon, Latisha, Tre), and whether our interface (selection and pick) will support those.
Cyrus had me assign both of us so that it does not fall off the radar - engineering folks would like us to investigate this again. It is not pressing. I haven't read all of the discussion yet.
Probably this most recent detailed comment is the best place to start. The remaining dialog is probably useful for seeing what was considered and why it will or won't work.
And, I forgot to emphasize above...shoehorning everything into VisIt array variables is ultimately unsatisfactory for us and for users. I believe the reality is that suitable visualization of IP data will require some novel approaches to how we actually plot them.