===============
Benchmarking
===============

This document describes the current approach to benchmarking in PMMoTo using
the ``pytest_benchmark`` plugin. Please note that this is a work in progress,
and improvements are ongoing.

Overview
--------

Benchmarking is essential for evaluating the performance of the PMMoTo
framework. The ``pytest_benchmark`` plugin integrates with the existing test
suite and provides a simple way to measure the runtime of your code.

Usage
-----

To run benchmarks, create benchmark tests in your test files. A test requests
the ``benchmark`` fixture and passes the function under test, along with its
arguments, to ``benchmark()``. Here is a basic example, with a stand-in
function in place of actual PMMoTo code:

.. code-block:: python

    import pytest

    def my_function(n):
        # Stand-in for the PMMoTo code under test
        return sum(range(n))

    @pytest.mark.benchmark
    def test_my_function(benchmark):
        # benchmark() calls my_function repeatedly and records timing statistics
        result = benchmark(my_function, 100_000)
        # The function's return value is passed through, so results can be checked
        assert result == sum(range(100_000))
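When a benchmark needs tighter control over how many times the code runs (for
example, for expensive operations that should only execute a few rounds), the
plugin also provides ``benchmark.pedantic``. The sketch below is illustrative
only; ``run_simulation`` and ``setup_grid`` are hypothetical placeholders, not
part of PMMoTo's API:

.. code-block:: python

    import pytest

    def run_simulation(grid):
        # Hypothetical stand-in for an expensive PMMoTo operation
        return sum(grid)

    def setup_grid():
        # Hypothetical setup; runs before each round, outside the timed region.
        # Returning (args, kwargs) passes them to the benchmarked function.
        return (list(range(1_000)),), {}

    @pytest.mark.benchmark
    def test_simulation_pedantic(benchmark):
        # pedantic() gives explicit control over rounds, warmup, and setup
        result = benchmark.pedantic(
            run_simulation,
            setup=setup_grid,
            rounds=5,
            warmup_rounds=1,
        )
        assert result == sum(range(1_000))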
Running Benchmarks
-------------------

To execute the benchmarks, run the following command:

.. code-block:: shell

    pytest benchmark/
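If benchmark tests live alongside ordinary tests, the plugin's standard
options can run them in isolation and save results for later comparison. For
example:

.. code-block:: shell

    # Run only the benchmark tests, skipping ordinary tests
    pytest benchmark/ --benchmark-only

    # Save the results under .benchmarks/ for later comparison
    pytest benchmark/ --benchmark-autosave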
Interpreting Results
---------------------

The results include various metrics, reported per benchmarked function:

- **Mean**: The average time per round of the benchmarked function.
- **Standard Deviation**: The variability of the timings across rounds.
- **Minimum and Maximum**: The fastest and slowest recorded rounds.
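By default the report includes additional columns (median, IQR, outliers,
rounds, iterations). The plugin's built-in options can trim and sort the
table; for example:

.. code-block:: shell

    # Show only the metrics above, sorted by mean time
    pytest benchmark/ --benchmark-columns=min,max,mean,stddev --benchmark-sort=mean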
Future Improvements
--------------------

As this benchmarking framework is still under development, future
enhancements may include:

- More detailed reporting options.
- Integration with continuous integration tools for automated performance
  testing (see the sketch below).
- Additional benchmarks for various components of PMMoTo.
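One possible shape for the continuous-integration step, based on the plugin's
built-in comparison options, is sketched below. This is not a configured
workflow in PMMoTo, only an illustration:

.. code-block:: shell

    # Compare against the most recently saved run and fail the job if the
    # mean time of any benchmark regresses by more than 10%
    pytest benchmark/ --benchmark-compare --benchmark-compare-fail=mean:10%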
Conclusion
----------

Benchmarking is a critical aspect of maintaining performance standards in
PMMoTo. We encourage contributions and feedback as we continue to refine this
process.