Much to my surprise, it seems there hasn't been much movement there. ANSYS still seems to be the leader for general simulation and multi-physics. NASTRAN still popular. Still no viable open-source solution.
The only new player seems to be COMSOL. Does anyone have experience with it? Would it be worth a try for someone who knows ANSYS and NASTRAN well?
For the more low-level stuff there's the FEniCS project[1], for solving PDEs using fairly straightforward Python code like this[2] (a minimal sketch follows the links below). When I say fairly straightforward, I mean it follows the math pretty closely; it's not exactly high-school level stuff.
[1]: https://fenicsproject.org/
[2]: https://jsdokken.com/dolfinx-tutorial/chapter2/linearelastic...
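For a taste of what that code looks like, here's the classic Poisson starter problem in the older dolfin flavor of the API (the linked tutorial [2] uses the newer DOLFINx API, which reads similarly). A minimal sketch, not production code:

```python
# Classic FEniCS Poisson demo (legacy dolfin API): solve -laplace(u) = f on the
# unit square with u = 1 + x^2 + 2y^2 on the boundary, so f = -6 and the exact
# solution is known. Note how the code mirrors the weak form a(u, v) = L(v).
from fenics import *

mesh = UnitSquareMesh(8, 8)                # 8x8 triangulated unit square
V = FunctionSpace(mesh, "P", 1)            # piecewise-linear Lagrange elements

u_D = Expression("1 + x[0]*x[0] + 2*x[1]*x[1]", degree=2)
bc = DirichletBC(V, u_D, "on_boundary")    # Dirichlet condition on the whole boundary

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(-6.0)
a = dot(grad(u), grad(v)) * dx             # bilinear form
L = f * v * dx                             # linear form

u = Function(V)
solve(a == L, u, bc)                       # assemble and solve the linear system
print(errornorm(u_D, u, "L2"))             # small O(h^2) discretization error
```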
So to have anything useful, you need that entire pipeline? I assume hobbyists need this stack too. What are the popular modelling tools?
But, I guess you get what you pay for in this space still.
Just curious what kind of hobby leads to a finite element analysis?
Unfortunately the barrier to entry is too high for most hobbyists in these fields to use FEM right now.
[0] I’m pretty sure “insufferability” isn’t a real word. [Edit: don’t use an asterisk for footnotes.]
Hey, I resemble that remark! I'd be maybe a little less armchair with more surplus equipment access, but maybe no less insufferable.
By all accounts, though, a degree of insufferability is no bar to doing worthwhile work; Socrates, Galileo, Newton, Babbage, and Heaviside were all apparently quite insufferable, perhaps as much so as that homeless guy who yells at you about adrenochrome when you walk by his park encampment. (Don't fall into the trap of thinking it's an advantage, though.) Getting sidetracked by trivialities and delusions is a greater risk. Most people spend their whole lives on it.
As for how little any person can know, you can certainly know more than anyone who lived a century ago: more than Einstein, more than Edison, more than Noether, more than Tesla, more than Gauss. Any one of the hobbies you named will put you in contact with information they never had, and you can draw on a century or more of academic literature they didn't have, thanks to Libgen and Sci-Hub (and thus Bitcoin).
And it's easy to know more than an average doctorate holder; all you have to do is study, but not forget everything you study the way university students do, and not fall into traps like ancient aliens and the like. I mean, you can still do good work if you believe in ancient aliens (Newton and Tesla certainly believed dumber things) but probably not good archeological work.
Don't be discouraged by prejudice against autodidacts. Lagrange, Heaviside, and du Châtelet were autodidacts, and Ptolemy seems to have been as well. And they didn't even have Wikipedia or Debian! Nobody gets a Nobel for passing a lot of exams.
They buy "vertically aligned" software, integrate it, then slowly let it die. They just announced they're killing off one of these next year, a product they bought ten years ago, because they want to push a competing product with 20% of the features.
I've been using NASTRAN for half as long, but it isn't much better. It's all sales.
I dabbled a bit in Abaqus, and that seems nice. Probably because I just dabbled in it.
But here I'm just trying to do my work, and all these companies do is move capabilities around their license tiers and boil the frog as fast as they can get away with.
Abaqus is more difficult to get up to speed in, but it's really nice from an advanced usability standpoint. They struggle due to cost though; it is hugely expensive and we've had to fight hard to keep it time and time again.
LS-Dyna is similar to Abaqus (though I'm not fully up to speed on it yet), but we're all just waiting to see how Ansys ruins it, especially now that they got bought out by Synopsys.
Each individual physics regime is not particularly good on its own - there are far better mechanical, CFD, electromagnetism, etc solvers out there - but they're all made by different vendors and don't play nicely with each other.
Ouch. I kind of know COMSOL because it was already taught in my engineering school 15 years ago, so the fact that it still counts as a "new entrant" really gives an idea of how slowly the field evolves.
But they changed the name to COMSOL because they didn't have the trademark in Japan, and "FEM" also carried associations with the feminine gender.
Wait? What? NASTRAN was originally developed by NASA and open-sourced over two decades ago. Is this commercial software built on top of it that is closed source?
I’m astonished ANSYS and NASTRAN are still the only players in town. I remember using NASTRAN 20 years ago for FE of structures while doing aero engineering. And even then NASTRAN was almost 40 years old and ancient.
There might be better programs for some problems, but COMSOL is quite nice.
Happy to hear if people have good resources!
It's written in C++, makes heavy use of templates, and has been in development since 2000. It's not meant for solid mechanics or fluid mechanics specifically, but for FEM solutions of general PDEs.
The documentation is vast, the examples are numerous, and the library interfaces with other libraries like PETSc, Trilinos, etc. You can output results to a variety of formats.
I believe support for triangular and tetrahedral elements was added only recently. Despite that, one quirk of the library is that meshes are called "triangulations".
And with that you wrote the best reply to your own comment. Great programmers of the past wrote amazing systems just in assembly. But you needed to be a great programmer just to get anything done at all.
Nowadays dunces like me can write reasonable software in high level languages with plenty of libraries. That's progress.
Similar for mechanical engineering.
(Doing prototypes etc might still be a good idea, of course. My argument is mainly that what works for the best engineers doesn't necessarily work for the masses.)
Certainly computers allow more complexity, so there is interplay between what it enables and what’s driven by good engineering.
It doesn't replace things like actual tests, but it makes designing and understanding testing more efficient and more effective. It is also much easier to convince reviewers you've done your job correctly with them.
I'd argue computer simulation has been an important component of the majority of mechanical engineering innovation in the last century. If you asked a mechanical engineer to ignore those tools in their job, they'd (rightly) throw a fit. We did "just fine" without cars for the majority of humanity, but motorized vehicles significantly changed how we do things and changed the reach of what we can do.
In other words, the work that doesn't change the underlying reality of the product?
> We did "just fine" without cars for the majority of humanity
We went to the moon and invented aircraft, bridges, skyscrapers, etc., all without FEM. That's why this is a bad comparison.
> If you asked a mechanical engineer to ignore those tools in their job they'd (rightly) throw a fit.
Of course. That's what they are accustomed to. The 80/20 paper techniques that software replaced have been forgotten.
When tests are cheap, you make a lot of them. When they are expensive, you do a few and maximize the information you learn from them.
I'm not arguing FEM doesn't provide net benefit to the industry.
Things like modern automotive structural safety or passenger aircraft safety are leagues better today than even as recently as the 1980s because engineers can perform many high-fidelity simulations long before they get to integrated system test. When integrated system test is so expensive, you're not going to explore a lot of new ideas that way.
The argument that computational tools are eroding deep engineering understanding is long-standing, and has aspects of both truth and falsity. Yep, they designed the SR-71 without FEA, but you would never do that today because for the same inflation-adjusted budget, we'd expect a lot more out of the design. Tools like FEA are what help engineers fulfill those expectations today.
My point is that the original comment I replied to is false: "Good luck designing crash resilient structures without simulating it on FEM based software."
Now what's my opinion? FEM raises the quality floor of engineering output overall, and more rarely the ceiling. But, excessive reliance on computer simulation often incentivizes complex, fragile, and expensive designs.
> passenger aircraft safety are leagues better today
Yep, but that's just restating the pros. Local iteration and testing.
> You're replying to a practicing mechanical engineer
Oh drpossum and I are getting to know each other.
I agree with his main point. It's an essential tool for combatting certifications and reviews in the world of increasing regulatory and policy-based governance.
> That the original comment I replied to is false: "Good luck designing crash resilient structures without simulating it on FEM based software."
In refuting the original casually-worded blanket statement, yes, you're right. You can indeed design crash resilient structures without FEA. Especially if they are terrestrial (i.e., civil engineering).
In high-performance applications like aerospace vehicles (excluding general aviation) or automobiles, you will not achieve the required performance on any kind of acceptable timeline or budget without FEA. In these kinds of high-performance applications, the original statement is valid.
> FEM raises the quality floor of engineering output overall, and more rarely the ceiling. But, excessive reliance on computer simulation often incentivizes complex, fragile, and expensive designs.
Do you have any experience in aerospace applications? Because quite often, we reliably achieve structural efficiencies, at prescribed levels of robustness, that we would not achieve sans FEA. It's a matter of making the performance bar, not a matter of simple vs. complex solutions.
> I agree with his main point. It's an essential tool for combatting certifications and reviews in the world of increasing regulatory and policy-based governance.
That was one of his points, not the main one. The idea that its primary value is pandering to paper-pushing regulatory bodies and "policy-based governance" is specious. Does it help with your certification case? Of course. But the real value is that analyses from these tools are the substantiation we use to determine whether the (expensive) design will meet requirements and survive all its stressing load cases before we approve building it. We then have a high likelihood that what we build, assuming it conforms to design intent, will perform as expected.
Engineering is designing a bridge that holds up to a certain load, with the least amount of material and/or cost. FEM gives you tighter bounds on that.
https://www.infrastructurereportcard.org/wp-content/uploads/...
I'd posit a large fraction were designed with FEM.
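To make the "tighter bounds" point above concrete, here's a toy 1D sketch of the machinery, nothing like a real bridge model, just the assemble-and-solve pattern with made-up numbers:

```python
import numpy as np

# Toy 1D FEM: an elastic bar fixed at x = 0 with an axial tip load, modeled
# with two-node linear elements. All numbers are illustrative only.
E, A, Lbar, P = 210e9, 1e-4, 2.0, 1e4   # modulus (Pa), area (m^2), length (m), load (N)
n = 8                                   # number of elements
k = E * A / (Lbar / n)                  # stiffness of one element

K = np.zeros((n + 1, n + 1))
for e in range(n):                      # assemble element matrices [[k,-k],[-k,k]]
    K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

f = np.zeros(n + 1)
f[-1] = P                               # point load at the free end

u = np.zeros(n + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])   # impose u(0) = 0 and solve K u = f

print(u[-1], P * Lbar / (E * A))        # FE tip displacement vs. exact PL/EA
```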
FEM is the tool you'd use to tell when and where the mechanical linkage assembly will break.
Actual, practical use of FEM has been stagnant for quite some time. There have been some nice stability improvements to the numerical algorithms that make highly nonlinear problems a little easier; solvers are more optimized; and hardware is of course dramatically more capable (flash storage has been a godsend).
Basically every advanced/"next generation" thing the article touts has fallen flat on its face when applied to real problems. They have some nice results on the world's simplest "laboratory" problems, but accuracy is abysmal on most real-world problems: e.g., a method might give good results on a cylinder in simple tension, but fail horribly when bending is added.
There's still nothing better, but looking back I'm pretty surprised I'm still basically doing things the same way I was as an Engineer 1; and not for lack of trying. I've been on countless development projects that seem promising but just won't validate in the real world.
Industry focus has been far more on Verification and Validation (ASME V&V 10/20/40) which has done a lot to point out the various pitfalls and limitations. Academic research and the software vendors haven't been particularly keen to revisit the supposedly "solved" problems we're finding.
Another group within my company is evaluating them right now, and the early results seem to be "not very accurate, but directionally correct and very fast", so there may be some value in non-FEM experts using them to quickly tell whether A or B is the better design; but you'd still need a more proper analysis in more accurate tools.
It's still early though and we're just starting to see the first non-research solvers hitting the market.
You could do the shallow water equations (SWE) with finite elements, but generally finite volumes would be your choice: they handle any potential discontinuities and are more stable and accurate for practical problems.
Here is a tutorial. https://www.tfd.chalmers.se/~hani/kurser/OS_CFD_2010/johanPi...
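To make the finite-volume point concrete, here's a toy 1D dam-break sketch with a first-order Lax-Friedrichs flux, assuming only numpy (boundary cells are simply frozen; purely illustrative, not a production scheme):

```python
import numpy as np

# Toy 1D shallow-water dam break, first-order finite volumes, Lax-Friedrichs flux.
g = 9.81
nx, length, t_end = 200, 10.0, 0.5
dx = length / nx
x = np.linspace(0.0, length, nx)

h = np.where(x < length / 2, 2.0, 1.0)       # step in depth: the "dam"
hu = np.zeros(nx)                            # momentum, initially at rest

def flux(h, hu):
    u = hu / h
    return np.stack([hu, hu * u + 0.5 * g * h * h])

t = 0.0
while t < t_end:
    c = np.abs(hu / h) + np.sqrt(g * h)      # characteristic speeds
    dt = min(0.4 * dx / c.max(), t_end - t)  # CFL-limited time step
    U, F = np.stack([h, hu]), flux(h, hu)
    # Lax-Friedrichs numerical flux at the nx-1 interior interfaces
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * (dx / dt) * (U[:, 1:] - U[:, :-1])
    h[1:-1] -= dt / dx * (Fi[0, 1:] - Fi[0, :-1])
    hu[1:-1] -= dt / dx * (Fi[1, 1:] - Fi[1, :-1])
    t += dt

print(h.min(), h.max())   # depth stays between the two initial levels (monotone scheme)
```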
What's unclear to me is how to model the spherical geometry without exploding the complexity of the solution. I know that a fully custom mesh with a pile of formulas for something like the Laplace-Beltrami operator would work, but I want something more elegant than that. For example, can I use the Fibonacci spiral to generate a uniform spherical mesh, and then somehow compute gradients and the Laplacian?
I suspect that the stability of FE or FV methods is rooted in the fact that the FE functions slightly overlap, so computing the next step is a lot like using an implicit FD scheme, or better, a variation of the compact FD scheme. However, I'm interested in how an adept in the field would solve this problem in practice. Again, I'm aware that there are methods of solving such systems (Jacobi, etc.), but those make the solution 10x more complex, buggier, and slower.
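Not a proper answer, but one crude way to experiment with the Fibonacci-spiral idea, assuming numpy/scipy: build the point set with the golden-angle recipe and estimate the Laplacian with a k-nearest-neighbour "umbrella" operator. The scaling constant is only approximate and this is not a convergent Laplace-Beltrami discretization, but it's enough to play with:

```python
import numpy as np
from scipy.spatial import cKDTree

def fibonacci_sphere(n):
    """Near-uniform points on the unit sphere via the golden-angle spiral."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i        # golden-angle longitude steps
    z = 1.0 - 2.0 * (i + 0.5) / n                 # uniform spacing in z
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def umbrella_laplacian(points, values, k=8):
    """Crude graph-Laplacian estimate from k nearest neighbours."""
    dist, idx = cKDTree(points).query(points, k=k + 1)  # idx[:, 0] is the point itself
    neigh_mean = values[idx[:, 1:]].mean(axis=1)
    h2 = (dist[:, 1:] ** 2).mean(axis=1)                # squared local spacing
    return 4.0 * (neigh_mean - values) / h2             # ~ Laplacian up to an O(1) factor

pts = fibonacci_sphere(4000)
u = pts[:, 2]                            # u = cos(theta): eigenfunction, lap(u) = -2u
lap = umbrella_laplacian(pts, u)
print(np.corrcoef(lap, -2.0 * u)[0, 1])  # close to 1: right shape, rough constant
```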
Unless "normally" you mean the normal distribution, which indeed has zero skewness.
Basic politeness is absolutely dead, nobody has any concept of acknowledging they are asking for a favour; we just blast Instagram/TikTok reels at top volume and smoke next to children and elderly in packed public spaces etc. I'm 100% sure it's not rose-tinted memories of the 90s making me think, it wasn't always like this...
Typically, the finite volume method is used for fluid flow problems. It is possible to use finite element methods, but it is rare.
Even Arnold's work? FEEC seemed quite promising last time I was reading about it, but never seemed to get much traction in the wider FEM world.
It seems like a strong candidate to yield better results in the future.
Unfortunately my experience is that FEA is a blunt instrument with narrow practical applications. Where it’s needed, it is absolutely fantastic. Where it’s used when it isn’t needed, it’s quite the albatross.
> FEM: Finite Element Method: https://en.wikipedia.org/wiki/Finite_element_method
>> FEM: Finite Element Method (for ~solving coupled PDEs (Partial Differential Equations))
>> FEA: Finite Element Analysis (applied FEM)
> awesome-mecheng > Finite Element Analysis: https://github.com/m2n037/awesome-mecheng#fea
And also, "Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 re: the "relaxation technique" .. https://news.ycombinator.com/item?id=40396171
The OOP framework I created was based on Petrov-Galerkin FEM. (Both proper 2D and "layered" 3D.)
Before my PhD work, the people I worked with (worked for) used spectral methods and alternating-direction FEM (i.e., using 1D to approximate 2D).
In some conferences and interviews, certain scientists would tell me that programming FEM is easy (for LSAP). I always sort of agreed and asked how many times they had done it (for LSAP or anything else). I never got an answer from those scientists...
Applying FEM to real-life problems can involve resolving quite a lot of "little" practical and theoretical gotchas, bugs, etc.
FEM at its core ends up being just a technique for finding approximate solutions to problems expressed as partial differential equations.
Finding analytical solutions that satisfy both the boundary conditions and the domain geometry is practically impossible for real problems. FEM trades correctness for an approximation: it can be exact at prescribed boundary conditions, but it approximates both how the domain is expressed and the solution itself. It also has nice properties, such as the approximation error converging to the exact solution as you refine the approximation, though refinement means exponentially larger computational budgets.
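As a miniature of that convergence property, here's a sketch with linear elements for a 1D Poisson problem (lumped load vector for brevity); halving the element size should roughly quarter the nodal error:

```python
import numpy as np

# Linear FEM for -u'' = pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0;
# the exact solution is u = sin(pi x).
def max_nodal_error(n):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Stiffness matrix of linear elements: (1/h) * tridiag(-1, 2, -1)
    K = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
    f = np.pi**2 * np.sin(np.pi * x[1:-1]) * h    # lumped load vector
    u = np.linalg.solve(K, f)
    return np.abs(u - np.sin(np.pi * x[1:-1])).max()

for n in (8, 16, 32, 64):
    print(n, max_nodal_error(n))   # error drops ~4x each time n doubles
```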
Isn't IGA's shtick just replacing classical shape functions with the splines used to specify the geometry?
If I recall correctly, convergence rates are exactly the same, but the whole approach fails to recognize that, other than at boundaries, the geometry and the fields of the quantities of interest do not have the same spatial distributions.
IGA has been around for ages and never materialized beyond the "let's reuse the CAD functions" trick, which ends up making the problem more complex without any tangible return compared with plain old p-refinement. What is left in terms of potential?
> Tom Hughes, who is mentioned several times, is now the de facto leader of the IGA research community.
I recall the name Tom Hughes. I have his FEM book, and he's been for years (decades) the only one pushing the concept. The reason being that the whole computational mechanics community looked at it, found it interesting, but ultimately decided it wasn't worth the trouble. There are far more interesting and promising ideas in FEM than using splines to build elements.
That's how it started, yes. The splines used to specify the geometry are trimmed surfaces, and IGA has expanded from there to the use of splines generally as the shape functions, as well as trimming of volumes, etc. This use of smooth splines as shape functions improves the accuracy per degree of freedom.
> If I recall correctly convergence rates are exactly the same
Okay, looks like I remembered wrong here. What we do definitely see is that in IGA you get the convergence rates of higher degrees without drastically increasing your degree of freedom, meaning that there is better accuracy per degree of freedom for any degree above 1. See for example Figures 16 and 18 in this paper: https://www.researchgate.net/profile/Laurens-Coox/publicatio...
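A back-of-the-envelope way to see the "accuracy per degree of freedom" argument, assuming scipy (degree-of-freedom counts only, no solve): on the same mesh, smooth cubic B-splines need far fewer unknowns than C0 cubic Lagrange elements of the same degree.

```python
import numpy as np
from scipy.interpolate import BSpline

# On n elements of (0, 1): cubic C^2 B-splines need n + 3 basis functions,
# while C^0 cubic Lagrange elements need 3n + 1 nodes.
n = 8
knots = np.concatenate([[0, 0, 0], np.linspace(0, 1, n + 1), [1, 1, 1]])
n_spline_dofs = len(knots) - 4        # 11 cubic B-splines (clamped knot vector)
n_lagrange_dofs = 3 * n + 1           # 25 nodes for C^0 cubic FEM
print(n_spline_dofs, n_lagrange_dofs)

# One basis function: smooth (C^2) across element boundaries, unlike Lagrange.
c = np.zeros(n_spline_dofs)
c[n_spline_dofs // 2] = 1.0
print(BSpline(knots, c, 3)(np.linspace(0.0, 1.0, 5)))
```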
> geometry and the fields of quantities of interest do not have the same spatial distributions.
Using the same shape functions doesn't automatically mean that they will have the same spatial distributions. In fact, with hierarchical refinement in splines you can refine the geometry and any single field of interest separately.
> What is left in terms of potential?
The biggest potential other than higher accuracy per degree of freedom is perhaps trimming. In FEM, trimming your shape functions makes the solution unusable. In IGA, you can immerse your model in a "brick" of smooth spline shape functions, trim off the region outside, and run the simulation while still getting optimal convergence properties. This effectively means little to no meshing required. For a company that is readying this for use in industry, take a look at https://coreform.com/ (disclosure, I used to be a software developer there).