Improve testing times using pytest-xdist or pytest-parallel

Definitely agree that we need a consistent limit on how many cores are occupied, which will depend on, e.g., how many cores the machine has and how many executors we have. The higher levels of build parallelization already present in some cases may also explain the variance in load. It looks like Docker should make this easy to control: https://docs.docker.com/config/containers/resource_constraints/
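For illustration, a minimal sketch of capping a container's CPU share with Docker's documented `--cpus` flag; the image name and test script here are placeholders, not real targets in our setup:

```shell
# Hedged sketch: cap how many cores a CI container may occupy using
# Docker's --cpus flag (see the resource_constraints docs linked above).
# "ci-image" and "./run_tests.sh" are placeholder names.
CPU_LIMIT=4
DOCKER_CMD="docker run --cpus=${CPU_LIMIT} ci-image ./run_tests.sh"
echo "would run: ${DOCKER_CMD}"
```

The same limit could be passed by whatever launches the container, so every executor on a shared machine gets the same cap.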

Limiting it in Docker doesn't solve the problem of parallelizing to the same degree everywhere, though; going either above or below the limit is unfortunate. We don't have consistent control of this today:

* build 1-way parallel (i.e. not parallel)
* build 2-way parallel
* build 4-way parallel
* build 8-way parallel

A consistent place where each thing gets run, and/or a consistent set of environment variables, should help here.
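One way to get that consistency is a single environment variable that every build and test step reads. A minimal sketch, assuming a hypothetical `NUM_JOBS` variable (not something our scripts define today):

```shell
# Derive one job count and reuse it everywhere, so builds and tests
# never over- or under-subscribe the machine. NUM_JOBS is a
# hypothetical name; fall back to nproc, then to 4.
NUM_JOBS="${NUM_JOBS:-$(nproc 2>/dev/null || echo 4)}"
echo "using ${NUM_JOBS} jobs"
# make -j "${NUM_JOBS}"       # build step
# pytest -n "${NUM_JOBS}" ... # test step (pytest-xdist)
```

CI could export the variable once per executor, and developers could override it locally, with both going through the same scripts.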

The last couple of days have been a bit busy for me and I’ve not been able to pay attention to this thread at all.

Having read this thread again, it feels like this is growing into a wider set of issues with the test harness. The use cases I see are:

  1. Running the tests as part of the upstream PR CI pipeline. The upstream CI pipeline needs to be considered, and it appears to me that the current test infrastructure is set up to make that case efficient.

  2. Running the tests as a developer on the machine I'm working on, which is the use case that originally started this conversation. As a developer, I would like to run all the CI CPU tests with a single command-line option and get the results for every test in a single run. It would also be nice if the run reported all test failures clearly, rather than forcing us to fix one test group at a time. Preferably this interface would be the same one CI uses, just with an easier set of command-line options.

  3. Having a standard way of specifying the environment and the target architecture to build and test on, and differentiating between architecture-specific and architecture-independent test runs.

  4. Being able to specify a single test to execute, through the scripts and environment, in a standardized manner, so that we don't have to run every test while iterating on one.

  5. From the conversation so far, pytest-parallel may not cut it, because a large number of our tests are not thread-safe.

  6. Using pytest-xdist or other forms of parallelism is hard because of the varied ways we run our tests today, as in the examples @broune showed in the previous post.
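To make points 4 and 5 concrete, here is a sketch of the two invocations a unified interface could standardize on; the test file and test names are placeholders, not paths in our tree:

```shell
# Hedged sketch (file and test names are placeholders).

# Run one specific test by its pytest node ID instead of the whole suite:
SINGLE="pytest tests/python/test_example.py::test_one -q"

# Run the suite across processes with pytest-xdist, one worker per core.
# xdist isolates workers in separate processes, which sidesteps the
# thread-safety concerns raised for pytest-parallel:
PARALLEL="pytest tests/python -n auto -q"

echo "single test:  ${SINGLE}"
echo "parallel run: ${PARALLEL}"
```

A wrapper script could accept a test selector and a job count and expand to these commands, so CI and developers share one entry point.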

Regards, Ramana