Automated Performance Testing

As John pointed out here, we would have caught the regression I introduced that inhibits the formation of neighbor lists (now fixed) if we had automated performance testing. We can use this thread to collect ideas on how to implement this and work out the details.

For instance… would we ever automatically fail a PR for a substantial performance degradation? I think it would make more sense to automatically generate a report showing how the code runs on a few problems before and after the merge, preferably covering features like fixed-source mode, DagMC, WMP, etc.

It appears this is feasible in GitHub Actions.
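As a rough illustration, a workflow along these lines could run the same benchmark script on a PR's head and its merge base and attach the comparison as an artifact. Everything here is hypothetical — in particular the `benchmarks/run.py` script does not exist yet:

```yaml
# Sketch only: benchmarks/run.py is a hypothetical script that runs a
# battery of problems and writes timing results to a JSON file.
name: performance

on:
  pull_request:

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # need history so we can also check out the merge base
      - name: Run benchmarks on PR head
        run: python benchmarks/run.py --output head.json
      - name: Run benchmarks on merge base
        run: |
          git checkout ${{ github.event.pull_request.base.sha }}
          python benchmarks/run.py --output base.json
      - name: Upload comparison report
        uses: actions/upload-artifact@v4
        with:
          name: perf-report
          path: |
            head.json
            base.json
```

This would only report, not gate the merge, which fits the idea above of generating a before/after report rather than failing PRs outright.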


If we could capture something like that within GitHub Actions, it would be great, but it is a little constraining to run on the resources allotted (essentially 2 cores). My dream is to have a dedicated node with a fair number of cores so that we could tease out any potential performance issues with parallelism. That would likely require a more elaborate solution, though. Running GitHub Actions on a self-hosted runner for a public repo is discouraged due to potential security issues. In lieu of that, we could have a manually triggered job that reports a status back to GitHub via its APIs.
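For the "reports a status back via its APIs" piece, one option is GitHub's commit status API (`POST /repos/{owner}/{repo}/statuses/{sha}`). A minimal sketch, where the repo names, the status `context` string, and the 5% threshold are all placeholders, and a token is assumed to be available in `GITHUB_TOKEN`:

```python
import json
import os
import urllib.request

API = "https://api.github.com/repos/{owner}/{repo}/statuses/{sha}"


def build_status(owner, repo, sha, slowdown_pct, threshold_pct=5.0):
    """Build the URL and payload for a commit status from a benchmark run.

    `slowdown_pct` is the measured change vs. the base commit (positive =
    slower); the 5% threshold is an arbitrary placeholder.
    """
    state = "failure" if slowdown_pct > threshold_pct else "success"
    payload = {
        "state": state,
        "context": "performance/benchmarks",  # hypothetical status name
        "description": f"{slowdown_pct:+.1f}% vs. base commit",
    }
    return API.format(owner=owner, repo=repo, sha=sha), payload


def post_status(url, payload):
    """Send the status; the token needs permission to write statuses."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The manually triggered job on the dedicated node would call `build_status` after its benchmark run and `post_status` to make the result show up on the commit or PR.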

That’s definitely a solution. Another would be simply running a cron job on something like the NSE cluster; I’m thinking weekly. It could log the commit it was compiled from and save performance results for a battery of tests. I think that would be pretty neat. This might be a reasonable approach, since minor performance degradations aren’t really a reason to turn away a PR; rather, we just want to keep track of general trends over time.
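A sketch of what the weekly job might record — the test names, file paths, and the idea of one CSV row per test are all placeholders, and the cron side would just be an entry like `0 3 * * 0 /path/to/weekly_perf.sh`:

```python
import csv
import datetime
import pathlib
import subprocess


def current_commit(repo_dir="."):
    """Commit the benchmarks were compiled from, via `git rev-parse HEAD`."""
    return subprocess.check_output(
        ["git", "-C", repo_dir, "rev-parse", "HEAD"], text=True
    ).strip()


def log_results(results, commit, path="perf_history.csv"):
    """Append one row per test so trends can be plotted over time.

    `results` maps test name -> wall-clock seconds.
    """
    path = pathlib.Path(path)
    new_file = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "commit", "test", "seconds"])
        today = datetime.date.today().isoformat()
        for test, seconds in results.items():
            writer.writerow([today, commit, test, f"{seconds:.2f}"])
```

An append-only CSV keyed by date and commit is enough to plot trends later and to bisect roughly when a slowdown appeared, without needing any infrastructure beyond the cluster itself.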

Similarly, with your dedicated-node approach, it could only do performance testing after a PR is accepted, not before. That should mitigate the security concern. Manual invocations could likewise be done if there’s a concern that a given PR may affect performance.
