Theory behind mmg meshing algorithm

Hi all, I am Francesco and I am using mmg3d coupled to FreeFem++.

In particular, I use 3D adaptation with a metric prescription in order to control an error estimator. As I would like to understand the inner workings of mmg3d better, I would like to know exactly which method is used to produce an output mesh that satisfies the given metric field.

This is because I need to estimate, as precisely as possible, the number of elements using only the prescribed metric, before handing it to Mmg. For this reason I need to know exactly how mmg3d works when a prescribed metric is given.

I have not been able to find any reference, apart from the two papers listed on the website under Community >> Publications and several others in the user guide, but none of them describes the exact method.

I would be very thankful to anyone who can give additional info.


EDIT: taking a look at the source code on GitHub, as far as I understand there are two algorithms, Delaunay and pattern splitting, depending on the definition of the PATTERN variable in CMakeLists.txt. I have downloaded the binary file: was it compiled with PATTERN=ON or OFF?
Is it possible to have any references about the two algorithms above?

Hi Francesco,

It is a very hard task to estimate a priori the number of elements created by Mmg. I would be very interested in your method and results once it is done.

To answer your question, I will first detail the entire Mmg process, then give you a few pieces of advice (which you can of course ignore :wink: ).

The Mmg remeshing process (note that each wave works iteratively on the entire mesh until it reaches its objective, and that new nodes are always inserted at edge mid-points, except for surface edges, where they are placed on the “ideal” surface):

  • In a first step, Mmg tries to obtain a good approximation of the surface (with respect to the Hausdorff parameter):

    • In a first wave, it remeshes according to a geometric criterion. Mmg scans the surface tetrahedra and splits a tetrahedron using predefined patterns if the Hausdorff distance between its surface triangle(s) and their curved (ideal) representation does not respect the Hausdorff parameter (all edges at a Hausdorff distance larger than 2.5 x hausd are split, hausd being the Hausdorff parameter). Then Mmg scans the surface tetrahedra again and collapses all edges at a Hausdorff distance smaller than 0.3 x hausd. This step is important because it allows the curvature tensor (needed for the next step) to be computed at each surface point with sufficient accuracy.

    • Mmg intersects the “physical” metric (provided by the user, or computed by Mmg itself otherwise) with a surface metric computed at each point from the Hausdorff parameter and the curvature tensor at that point (a sketch of such a metric intersection is given after this list).

    • Mmg smooths the metric to respect the gradation parameter: for isotropic metrics, if I denote by m(a) and m(b) the metric sizes at the extremities of the edge [a,b], hgrad the gradation parameter, and suppose that m(a) is greater than m(b), then if m(a) > m(b) + hgrad ||ab||, m(a) is replaced by m(b) + hgrad ||ab||. The metrics are propagated iteratively until the gradation is respected everywhere (a sketch of this rule also follows the list).

    • In a second wave, Mmg remeshes the surface tetrahedra in order to roughly respect this new metric (with the same tolerance as before).

  • Last, Mmg remeshes both the volume and the surface so that edge lengths lie between 0.6 and 1.3 (measured in the metric): it scans each tetrahedron and each of its edges and computes the edge length in the metric. If the length is smaller than 0.6 the edge is collapsed; if it is greater than 1.3 the edge is split (a sketch of this rule is given below as well).
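
To illustrate the intersection step, here is a minimal sketch of the classical metric intersection by simultaneous reduction. The function name is mine and I do not claim this is the exact formula used inside Mmg; it is simply the standard way of keeping, in every direction, the most restrictive of two metrics:

    import numpy as np
    from scipy.linalg import eigh

    def intersect_metrics(m1, m2):
        """Classical simultaneous-reduction intersection of two SPD metrics:
        the result prescribes, in every direction, the smaller of the two
        target sizes (i.e. it keeps the more restrictive metric)."""
        # Generalized eigenproblem m2 p = lam * m1 p; the eigenvectors are
        # m1-orthonormal: P^T m1 P = I and P^T m2 P = diag(lam).
        lam, p = eigh(m2, m1)
        p_inv = np.linalg.inv(p)
        # Keep the larger (more restrictive) eigenvalue in each common direction.
        return p_inv.T @ np.diag(np.maximum(1.0, lam)) @ p_inv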
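Similarly, a rough sketch of the isotropic gradation smoothing described above, assuming the metric is stored as one target size per node and the mesh is given as a list of edges with their lengths (the names and the data layout are mine):

    def apply_gradation(h, edges, lengths, hgrad, max_iter=100):
        """Iteratively smooth an isotropic size field so that, along every
        edge [a, b], the rule described above holds:
            h[a] <= h[b] + hgrad * length(a, b)
        Sizes are propagated until the rule is satisfied everywhere."""
        for _ in range(max_iter):
            changed = False
            for (a, b), l in zip(edges, lengths):
                for i, j in ((a, b), (b, a)):
                    bound = h[j] + hgrad * l
                    if h[i] > bound:
                        h[i] = bound
                        changed = True
            if not changed:
                break
        return h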
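Finally, a sketch of the decision rule of the last wave, again for an isotropic size field; the edge length is computed here with a simple two-point approximation (the exact length formula used by Mmg may differ):

    import numpy as np

    def edge_length_in_metric(xa, xb, ha, hb):
        """Length of the edge [xa, xb] measured in an isotropic size field,
        averaging the lengths seen from the sizes at its two endpoints."""
        l = np.linalg.norm(np.asarray(xb) - np.asarray(xa))
        return 0.5 * (l / ha + l / hb)

    def classify_edge(l_metric, l_min=0.6, l_max=1.3):
        """Decision applied edge by edge during the last remeshing wave."""
        if l_metric < l_min:
            return "collapse"
        if l_metric > l_max:
            return "split at the mid-point"
        return "keep"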

Now, as you can see, there are multiple issues when trying to evaluate the number of elements created by Mmg:

  • First, the remeshing linked to the Hausdorff parameter;
  • Second, the fact that Mmg modifies the input metric;
  • Last, the interpolation method used to compute the metric at new points (with an isotropic metric it is very easy, we perform a linear interpolation; with an anisotropic metric, the method depends on the regularity of the point; see the sketch below).
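
Regarding this last point, a tiny sketch of the two interpolation flavours: linear interpolation of the size in the isotropic case (as described above), and log-Euclidean interpolation of the tensor in the anisotropic case. The log-Euclidean rule is only one common choice, not necessarily what Mmg does at every point type:

    from scipy.linalg import expm, logm

    def interp_size_iso(ha, hb, t=0.5):
        """Isotropic case: linear interpolation of the size along the edge."""
        return (1.0 - t) * ha + t * hb

    def interp_metric_log_euclidean(ma, mb, t=0.5):
        """Anisotropic case: log-Euclidean interpolation of two SPD tensors
        (a common choice; the actual rule may depend on the point type)."""
        return expm((1.0 - t) * logm(ma) + t * logm(mb))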

So if I were you, I would begin by working:

  • on a mesh whose surface has already been adapted by Mmg with the Hausdorff parameter that you will use,
  • with the gradation disabled (-hgrad -1)
  • with an isotropic metric
  • This way, you can try to evaluate the number of elements created by your metric only.
  • If you get good results with that, it should not be too hard to add the influence of the gradation.
  • That leaves the evaluation of the number of elements created by the Hausdorff remeshing… which is the hardest part, I think.
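
In case it helps, with the command-line binaries this corresponds to a call along the lines of mmg3d_O3 input.mesh -sol metric.sol -hgrad -1 -v 5 (an isotropic metric being simply one scalar size per vertex in the .sol file); please double-check the exact option names against the help of your version.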

I hope this helps you (and that I have not discouraged you).

Regards,

Algiane

Hi Algiane,

Thank you for your answer, it clarifies a lot of ideas.

I am currently solving PDEs on parallelepipeds, so the Hausdorff parameter should always be respected, at least far from the domain edges, shouldn't it?

Regarding the gradation option, I have always set it to 0.0 in order to avoid its use; has this been wrong all this time? If 0.0 is considered a non-disabling value, then Mmg would produce adapted meshes that vary very little, right? (This would explain why my adapted meshes have a low gradation rate, at most 1.2, while still varying in accordance with my error estimator, also considering the [0.6, 1.3] feature.)

Apart from this, does Mmg also limit the gradation with respect to the orientation (in anisotropic adaptation)? E.g., suppose I have two adjacent tetrahedra with the same diagonal values (the same shape) but totally different orientations, say orthogonal ones: does Mmg adjust them?

What is the justification for 0.6 and 1.3? Is it possible to tune them without modifying the source code? If not, where are they located? I suppose the computational cost of the algorithm rises if the interval [0.6, 1.3] is narrowed; have there been studies on this?

Up to now I have focused on the fact that the product of the eigenvalues of the metric determines the future cell volume (the target volume scales as 1/sqrt(det M)). Integrating this quantity over the domain therefore gives an a priori estimate of the number of elements; this is done with the metric on the old mesh, before adaptation. However, this estimate jumps around a lot and is not reliable for this purpose. The estimator assumes that the future mesh is built on a non-graded metric and that all new edges are perfectly unitary. To be more precise, it captures the mean behaviour, but it has too much variability (I performed some preliminary statistical exploration).
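
To make this concrete, here is a minimal sketch of the estimator I am describing, assuming the metric is stored as one 3x3 tensor per vertex of the current mesh (the function name and the calibration constant are mine):

    import numpy as np

    def estimate_element_count(points, tets, metrics, calibration=1.0):
        """Integrate sqrt(det(M)) over the current mesh: the metric asks for
        cells of volume ~ 1/sqrt(det(M)), so this integral counts how many
        such cells fit in the domain, up to a calibration constant."""
        n_est = 0.0
        for tet in tets:
            v = points[tet]                                   # 4 x 3 coordinates
            vol = abs(np.linalg.det(v[1:] - v[0])) / 6.0      # tetra volume
            # vertex-averaged metric density sqrt(det(M))
            dens = np.mean([np.sqrt(np.linalg.det(metrics[i])) for i in tet])
            n_est += dens * vol
        return calibration * n_est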

So a good idea could be: disable hgrad and focus on the distribution of the edge lengths in the future mesh. I think that if many edges are in [0.6, 1] w.r.t. the metric in the new mesh, then the estimator underestimates; on the contrary, if many edges are in [1, 1.3], then it overestimates. Is that right? If so, it would be great to obtain in advance some measure of how many edges will miss the value 1.0, and by how much.

Anyway, I will go back to the isotropic metric and follow your advice. Thank you!

Hi Francesco,

  1. If your domain is a parallelepiped, yes, your Hausdorff parameter is always respected (it has no effect on planar surfaces).

  2. Setting the gradation to 0.0 is equivalent to -1 (as it is impossible for Mmg to satisfy, it disables the gradation). The gradation parameter is a ratio between the largest and the smallest edge, so it is always greater than 1… For example, a gradation of 1.05 forces Mmg to create very small variations between edges, while a gradation of 2.3 authorizes large variations.

  3. The anisotropic gradation tries to preserve the metric directions. For example, if the metric in A is larger than the metric in B and does not respect the gradation along the edge [AB], the metric in A is modified but its directions are preserved (we reduce the eigenvalue associated with the « most significant » eigenvector, i.e. the eigenvector closest to the considered edge). Just note that I assume you don't have required edges in your mesh. Be careful too if you have sharp angles and/or singular points in your mesh, because at such points the prescribed metric is modified to satisfy the Mmg surface metric (an isotropic metric is imposed at singular points, while at ridge points two metrics are defined, one for each portion of surface delimited by the ridge, with one direction matching the ridge tangent and one direction corresponding to the surface normal).

  4. You can't tune the 0.6 and 1.3 values. All the code is fitted on thresholds that work together, and if you modify one of these thresholds you will obtain bad behaviour (for example, you can degrade the Mmg convergence, or the quality of the Mmg results). These values allow enough degrees of freedom while trying to target an edge length, and give a reasonable convergence rate when Mmg is run on its own output.

  5. You are right, the product of the eigenvalues of the metric determines the future cell volume and should give an idea of the final number of elements in the mesh. I think that if you start from a mesh not too far from your final mesh, it should give you reasonably good results (at least in the isotropic case). Starting from a mesh far from the final one is more problematic, because the metric interpolation and the remeshing steps will have a large influence on the final result. This is particularly true for anisotropic metrics because, depending on their orientations, the « temporary » edges (the edges created during the remeshing iterations) will create different metrics (and this phenomenon is very sensitive).

  6. To print the histogram of the output edge lengths, you can run with a high level of verbosity (-v 5). It prints the mean edge length too.
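
If you want to cross-check that histogram against your own metric field, here is a small sketch, assuming one 3x3 metric tensor per node of the output mesh and a simple two-point approximation of each edge length (names are mine):

    import numpy as np

    def edge_lengths_in_metric(points, edges, metrics):
        """Length of every edge measured in the metric field, using the simple
        approximation l = (||e||_Ma + ||e||_Mb) / 2."""
        lengths = []
        for a, b in edges:
            e = points[b] - points[a]
            la = np.sqrt(e @ metrics[a] @ e)
            lb = np.sqrt(e @ metrics[b] @ e)
            lengths.append(0.5 * (la + lb))
        return np.array(lengths)

    # Histogram comparable to the one printed with -v 5:
    # hist, bins = np.histogram(edge_lengths_in_metric(pts, edgs, mets),
    #                           bins=[0.0, 0.6, 1.0, 1.3, 5.0])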

I am sorry not to have more tips to help you, but I don't know how to compute the final number of elements reliably.

Regards,

Algiane

Hi Algiane,

doing several tests I have found out that multiplying the eigenvalue-number-of-elements estimator by 4.5 gives acceptable accuracy. The issue is that this only holds with the gradation bounded to 1.3; increasing the gradation makes the computation more and more inaccurate.

For instance, bounding the number of elements at 1,200,000, it gives the following results:

1,002,676
1,080,769
1,108,892
1,127,852
1,136,581
1,146,442
1,152,794
1,161,362
1,167,988

Between each iteration I only solved a PDE problem on the new mesh. These results were obtained in a 3D setting using “real” data coming from a physical experiment. I noticed that the noise in the real data has a strong regularizing effect on the gradients, and therefore on the final mesh, bounding the gradation in a natural way. On the contrary, when adapting on an indicator function, this procedure is basically useless.

This magic number works also with another mesher, Omega_h (https://github.com/SNLComputation/omega_h).

This is more an alchemical formula than a rigorous procedure, but it seems to work quite well in practice. I would like to get my hands on the Mmg source to test this, but I fear I have zero time, as I am currently finishing my M.Sc. thesis (I still don't know what comes after, so I'll keep it on my to-do list!).
I have very little knowledge of Mmg (basically everything I know derives from this discussion), but my intuition is that there should be a way to progressively adjust the metric between meshing iterations using the integral of the eigenvalues and some magic parameter derived from the matching conditions.
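
Just to write that intuition down (this is only my guess, not something Mmg does for you): one could rescale the metric between two adaptation iterations so that the estimated complexity matches a target number of elements, the 2/3 exponent coming from the fact that the estimate scales like the integral of sqrt(det M) in 3D:

    def rescale_metric_to_target(metrics, n_estimated, n_target):
        """Scale every metric tensor by (n_target / n_estimated)^(2/3):
        in 3D, det(s * M) = s^3 * det(M), so multiplying M by s multiplies
        the sqrt(det M)-based element-count estimate by s^(3/2)."""
        s = (n_target / n_estimated) ** (2.0 / 3.0)
        return [s * m for m in metrics]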

Hi Francesco,

It is good news that it works for the 1.3 gradation value. I have no idea why accuracy is lost when the gradation increases… Is it possible that you simply need another “magic factor”?

I am not sure I understand your test case:

  • You start from experimental data
  • you compute a metric
  • and you bound it in order to respect the gradation and a maximal number of points?

I would be happy if you find time to test this in Mmg but I understand that it is hard ;-)!
Your intuition about the metric seems valid to me.
Do you have a public repository with your eigenvalue-number-of-elements-estimator? I would be very interested to look at it.

Good luck with your M.Sc. thesis (it seems you have done a great job)!
Regards,

Algiane