I'm creating a library of components in Modelica, and would appreciate some input on techniques for unit testing the package.
So far I have a test package, consisting of …
I like how the OpenModelica testing results look (see the regression-test reports they publish). No idea how they are doing it, though. Obviously some kind of regression testing is done, with previous results stored for comparison, but I can't tell whether that comes from a testing library or is self-made.
In general, I find it somewhat sad/suboptimal that there isn't "the one" testing solution everybody can/should use (cf. e.g. nose or pytest in the Python ecosystem). Instead, everybody seems to cook up their own solution (or tries to), and all you find are Modelica conference papers (often without a trace of an implementation) or unmaintained libraries of unknown status.
Off the top of my head, here is what I found/know of (some already linked in other answers here):
This seems like a pathological instance of https://xkcd.com/927/. It's nearly impossible for a (non-developer) user to know which of these to choose, or which are actually good/usable/available/...
(Not real testing, but also relevant: parsing and semantic analysis using ANTLR: https://modelica.org/events/Conference2003/papers/h31_parser_Tiller.pdf)
If you have Mathematica and SystemModeler, you can run the simulation from Mathematica and use the VerificationTest function to test it:
    VerificationTest[Abs[WSMSimulate["HelloWorld"]["x", .1] - .90] < .01]
Multiple tests can then be collected and run with TestReport[].
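For example, a minimal sketch of grouping several checks into one report (this reuses the WSMSimulate call from above; the second check and its expected value 0.37 are made-up placeholders):

    sim = WSMSimulate["HelloWorld"];
    TestReport[{
      VerificationTest[Abs[sim["x", .1] - .90] < .01],
      VerificationTest[Abs[sim["x", 1.] - .37] < .01] (* hypothetical expected value *)
    }]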
Writing a .mos script would be one way, but there is also a small proof-of-concept library by Michael Tiller, XogenyTest, which you could use as a basis; a sketch of the general idea follows below.
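As a hedged illustration of what an assertion-based test model can look like (this uses only the built-in assert() and the Modelica Standard Library, not XogenyTest's actual API; names and tolerances are illustrative):

    model FirstOrderTest "Check the settled step response of a first-order block"
      Modelica.Blocks.Continuous.FirstOrder firstOrder(
        T=1,
        initType=Modelica.Blocks.Types.Init.InitialOutput,
        y_start=0);
    equation
      firstOrder.u = 1;
      // After 5 time constants, 1 - exp(-5) ~ 0.993, so y should be within 0.01 of 1
      when time >= 5 then
        assert(abs(firstOrder.y - 1) < 0.01,
          "FirstOrder step response did not settle to the expected value");
      end when;
      annotation (experiment(StopTime=6));
    end FirstOrderTest;

Simulating the model then fails loudly when the reference value is violated, which is exactly what a scripted test runner needs to detect.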
I prefer using the .mos script; it works pretty well once you integrate your test framework into a continuous integration tool. BuildingsPy is a good example of this approach: even though it is not wired into a CI tool out of the box, it is still a good tool.
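To illustrate, a minimal OpenModelica-flavoured .mos script (file and model names are placeholders; loadModel(), loadFile(), simulate(), val(), and getErrorString() are OpenModelica scripting functions):

    // run_tests.mos -- build and run one test model, then check a result value
    loadModel(Modelica); getErrorString();
    loadFile("MyLibrary/package.mo"); getErrorString();
    simulate(MyLibrary.Tests.FirstOrderTest, stopTime=6.0); getErrorString();
    // val(variable, time) reads from the result file; the printed true/false
    // is easy for a CI job to check
    val(firstOrder.y, 6.0) > 0.99;

A CI job can then run omc run_tests.mos and fail the build on any false or error output.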
Here's a reference for a good framework design: UnitTesting: A Library for Modelica Unit Testing.