@kephale: I'm researching various data structures for meshes. There are tradeoffs to each. Thinking about supporting several and making it easy to pick and choose the style of representation that one wants. Since a mesh is usually an immutable structure, I could see the benefit of being able to switch data structures midstream.
For example, right now I'm creating a mesh by starting with a seed platonic solid to which I apply various operators until, in the end, I have a fully triangulated mesh that gets output to an X3D file.
Along the way I'm controlling these mesh operators using parameters like the number of edges of a face. So at that point in the process I need a data structure that allows a face to be more than just a triangle, and I need to be able to access various properties of a face.
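To make the idea concrete, here's a minimal sketch of what that general-face stage could look like: an n-gon mesh where faces are arbitrary-length index lists with per-face properties. All names (`PolyMesh`, `face_edge_count`) are hypothetical, not from any particular library.

```python
# Hypothetical sketch of an n-gon mesh: shared vertices plus faces of any size.
from dataclasses import dataclass, field

@dataclass
class PolyMesh:
    vertices: list   # list of (x, y, z) tuples
    faces: list      # each face is a list of vertex indices, length >= 3
    face_props: dict = field(default_factory=dict)  # per-face metadata

    def face_edge_count(self, f):
        # An n-gon has as many edges as vertices.
        return len(self.faces[f])

# A unit square as a single quad face:
quad = PolyMesh(
    vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    faces=[[0, 1, 2, 3]],
)
print(quad.face_edge_count(0))  # 4
```

An operator driven by "number of edges of a face" would just query `face_edge_count` and act on the face's index list directly.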
When I get towards the end of the process I start using operators that only work on triangles. So when that happens it makes sense to start creating those meshes using a data structure optimized for triangles.
It's the same mesh, and yet it isn't: at every step the process creates a new mesh based on the previous one. So at any point the next iteration of the mesh could use a different data structure, a different underlying matrix library, whatever.
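The switch-over point could be as simple as a conversion pass that rebuilds the n-gon faces as an indexed triangle list. A hedged sketch, assuming convex faces so a simple fan triangulation is valid (the function name is hypothetical):

```python
# Convert n-gon faces (lists of vertex indices) into an indexed triangle
# list by fan triangulation: face [v0, v1, ..., vk] becomes triangles
# (v0, vi, vi+1). Only correct for convex faces.
def fan_triangulate(faces):
    tris = []
    for face in faces:
        for i in range(1, len(face) - 1):
            tris.append((face[0], face[i], face[i + 1]))
    return tris

# One quad becomes two triangles:
print(fan_triangulate([[0, 1, 2, 3]]))  # [(0, 1, 2), (0, 2, 3)]
```

Since the vertex array carries over unchanged, the triangle-only operators downstream can work on flat index buffers without caring how the earlier stages represented faces.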
Not sure if that applies to the type of mesh deformations you apply in your simulations.
There could also be the idea of having one data representation for the core mesh data that gets operated on, and a different representation (fully triangulated, for example) that is used for rendering purposes only. And then mapping from the core mesh to the display mesh in real time.
I think that's how some video games are rendered: smoothing algorithms like Catmull-Clark are applied on the fly to a simpler/rougher underlying mesh.
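The core/display split above can be sketched as a thin wrapper that derives a triangulated display mesh on demand and caches it until the core mesh is edited. For brevity this uses plain fan triangulation as the mapping; a game engine would run a subdivision/smoothing pass (e.g. Catmull-Clark) at the same spot. All names here are hypothetical.

```python
# Hypothetical core-mesh / display-mesh wrapper: edits touch the core
# n-gon faces, and the triangulated display mesh is rebuilt lazily.
class MeshView:
    def __init__(self, core_faces):
        self.core_faces = core_faces  # n-gon faces (vertex index lists)
        self._display = None          # cached display triangles

    def edit(self, new_faces):
        self.core_faces = new_faces
        self._display = None          # invalidate the display cache

    def display_mesh(self):
        if self._display is None:
            # Stand-in for the real mapping (subdivision, smoothing, ...):
            self._display = [
                (f[0], f[i], f[i + 1])
                for f in self.core_faces
                for i in range(1, len(f) - 1)
            ]
        return self._display

view = MeshView([[0, 1, 2, 3]])
print(view.display_mesh())  # [(0, 1, 2), (0, 2, 3)]
```

The point of the cache invalidation in `edit` is that the display mesh is purely derived data: operators never touch it, so the core representation stays free to change underneath.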