.mpt-coveragerc (150 bytes)

.raptor-coveragerc (129 bytes)

__init__.py (493 bytes)

argparser.py (15841 bytes)
    %(prog)s [options] [test paths]
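The description line above is the parser's usage template: %(prog)s is
argparse's placeholder for the program name, interpolated at runtime. A
minimal sketch of a parser declared with that template; the option and
positional names here are illustrative, not the real mozperftest flags:

    import argparse

    # Hypothetical parser mirroring the usage string above; the real
    # argparser.py defines many more options.
    parser = argparse.ArgumentParser(
        prog="mach perftest",
        usage="%(prog)s [options] [test paths]",
    )
    parser.add_argument("tests", nargs="*", help="test paths to run")
    parser.add_argument("--verbose", action="store_true", help="verbose output")

    args = parser.parse_args(["--verbose", "perftest_example.js"])
    print(args.tests, args.verbose)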
environment.py (3502 bytes)
    Sets the argument

fzf/

hooks.py (2045 bytes)

layers.py (5076 bytes)
    Sets the argument

mach_commands.py (10593 bytes)

metadata.py (1690 bytes)

metrics/
runner.py (10288 bytes)
    Pure Python runner so we can execute perftest in CI without depending
    on a full mach toolchain, which is not fully available in all worker
    environments.

    This runner can be executed in two different ways:
    - by calling run_tests() from the mach command
    - by executing this module directly

    When the module is executed directly, if the --on-try option is used,
    it fetches its arguments from Taskcluster's parameters, which were
    populated by a local --push-to-try call.

    The --push-to-try flow is:
    - a user calls ./mach perftest --push-to-try --option1 --option2
    - a new push-to-try commit is made that includes all options in its
      parameters
    - a generic Taskcluster job triggers perftest by calling this module
      with --on-try
    - run_tests() grabs the parameters artifact and converts it into args
      for perftest
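A minimal sketch of the --on-try half of that flow. Everything specific
here is an assumption rather than the actual runner.py implementation:
the artifact path (a JSON parameters artifact is assumed to keep the
sketch dependency-free), the "perftest_options" parameter key, and
reading the decision task id from a TASK_ID environment variable:

    import json
    import os
    import sys
    import urllib.request

    QUEUE = "https://firefox-ci-tc.services.mozilla.com/api/queue/v1"


    def fetch_on_try_args(decision_task_id):
        """Rebuild perftest args from the decision task's parameters artifact."""
        # ASSUMPTION: a JSON parameters artifact at this path.
        url = f"{QUEUE}/task/{decision_task_id}/artifacts/public/parameters.json"
        with urllib.request.urlopen(url) as response:
            params = json.load(response)
        # ASSUMPTION: --push-to-try stored the options under this key.
        return list(params.get("perftest_options", []))


    if __name__ == "__main__":
        if "--on-try" in sys.argv:
            # ASSUMPTION: the worker exposes the decision task id this way.
            args = fetch_on_try_args(os.environ["TASK_ID"])
            print("reconstructed perftest args:", args)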
schemas/

script.py (12606 bytes)
    %(filename)s
    %(filename_underline)s
    :owner: %(owner)s
    :name: %(name)s
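That excerpt is a %-style template for rendering a script's metadata as
reStructuredText; %(filename_underline)s suggests the filename is drawn
as a reST section title. A sketch of how such a template could be
filled; the helper function and its arguments are hypothetical:

    # Template copied from the excerpt above; "\" after the opening
    # quotes suppresses the leading newline.
    TEMPLATE = """\
    %(filename)s
    %(filename_underline)s
    :owner: %(owner)s
    :name: %(name)s
    """


    def render_script_doc(filename, owner, name):
        """Hypothetical helper: render one script's metadata as reST."""
        return TEMPLATE % {
            "filename": filename,
            # Underline the filename so reST treats it as a section title.
            "filename_underline": "=" * len(filename),
            "owner": owner,
            "name": name,
        }


    print(render_script_doc("perftest_example.js", "Performance Team", "example"))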
system/

test/

tests/
tools.py (5361 bytes)
    Raised when a performance change is detected.

    This failure happens for both regressions and improvements; there is
    no unique failure for each of them.

    TODO: We eventually need to be able to distinguish between these. To
    do so, we would need to incorporate the "lower_is_better" settings
    into the detection tooling.
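A sketch of what that TODO could look like: classify the change by
combining its direction with lower_is_better. The class name, fields,
and threshold-free comparison are illustrative, not mozperftest's
actual detection tooling:

    class PerformanceChangeDetected(Exception):
        """Illustrative failure that tells regressions and improvements apart."""

        def __init__(self, metric, old_value, new_value, lower_is_better=True):
            self.metric = metric
            self.old_value = old_value
            self.new_value = new_value
            # A value that went up is a regression only when lower is better.
            got_worse = (new_value > old_value) == lower_is_better
            self.kind = "regression" if got_worse else "improvement"
            super().__init__(
                f"{self.kind} detected on {metric}: {old_value} -> {new_value}"
            )


    try:
        raise PerformanceChangeDetected("fcp", 120.0, 150.0, lower_is_better=True)
    except PerformanceChangeDetected as exc:
        print(exc.kind, exc)  # -> regression ...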
utils.py (19439 bytes)
    Raised when perfMetrics were not found or were not output during a
    test run.
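A sketch of the condition that error guards against: a test run whose
output carries no perfMetrics payload. The exception name and the
log-line format are assumptions:

    import json


    class MissingPerfMetricsError(Exception):
        """Illustrative version of the error described above."""


    def extract_perf_metrics(log_lines):
        """Collect perfMetrics payloads from test output, or fail loudly.

        ASSUMPTION: metrics appear on lines shaped like
        'perfMetrics: {"name": "fcp", "values": [120, 118]}'.
        """
        metrics = []
        for line in log_lines:
            if line.startswith("perfMetrics:"):
                metrics.append(json.loads(line[len("perfMetrics:"):]))
        if not metrics:
            raise MissingPerfMetricsError(
                "perfMetrics were not found in the test output"
            )
        return metrics


    print(extract_perf_metrics(['perfMetrics: {"name": "fcp", "values": [120]}']))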