As John pointed out here, automated performance testing would have caught the regression I introduced that inhibited the formation of neighbor lists (now fixed). We can use this thread to collect ideas on implementing this and to work out the details.
For instance… would we ever automatically fail a PR for a substantial performance degradation? I think it would make more sense to automatically generate a report comparing how the code runs before and after the merge on a few problems, preferably covering a few cases like fixed source mode, DagMC, WMP, etc.
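As a rough illustration of what such a report could contain, here is a minimal Python sketch that compares per-problem timings from a "before" and "after" run and flags substantial slowdowns. The problem names, timing values, and the 10% threshold are all hypothetical placeholders, not proposals for actual project settings.

```python
def compare_timings(before, after, threshold=0.10):
    """Compare runtime dicts mapping problem name -> wall-clock seconds.

    Returns (report_lines, regressions), where a problem is flagged as a
    regression when its runtime grew by more than `threshold` (fractional).
    """
    report, regressions = [], []
    for problem in sorted(before):
        old, new = before[problem], after[problem]
        change = new / old - 1.0
        line = f"{problem}: {old:.1f}s -> {new:.1f}s ({change:+.1%})"
        if change > threshold:
            line += "  <-- slower"
            regressions.append(problem)
        report.append(line)
    return report, regressions

if __name__ == "__main__":
    # Made-up timings for a few of the problem types mentioned above.
    before = {"fixed_source": 40.0, "dagmc": 120.0, "wmp": 65.0}
    after = {"fixed_source": 41.0, "dagmc": 150.0, "wmp": 64.0}
    report, regressions = compare_timings(before, after)
    print("\n".join(report))
```

In CI, the two timing dicts would come from running the same benchmark problems on the base branch and the PR head; the report could then be attached to the PR rather than failing it outright.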
It appears this is feasible in GitHub Actions:
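A minimal sketch of what such a workflow might look like, purely for discussion: the benchmark script path and job names are placeholders, not an agreed-upon design. The results are written outside the workspace (`$RUNNER_TEMP`) so the second checkout doesn't clean them away.

```yaml
# Hypothetical sketch: run the same (placeholder) benchmark script on the
# PR head and on the base branch, then upload both results as an artifact
# for comparison, rather than failing the build.
name: performance-report
on: pull_request

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run benchmarks on PR branch
        run: ./tools/run_benchmarks.sh > "$RUNNER_TEMP/after.txt"  # placeholder script
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.base_ref }}
      - name: Run benchmarks on base branch
        run: ./tools/run_benchmarks.sh > "$RUNNER_TEMP/before.txt"  # placeholder script
      - name: Upload comparison
        uses: actions/upload-artifact@v4
        with:
          name: performance-report
          path: ${{ runner.temp }}/*.txt
```

A follow-up step (or a separate job) could diff the two files and post the summary as a PR comment.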