| Name | Description | Size | Coverage |
| --- | --- | --- | --- |
| expect_helper.py | Asserts that the content of the file at `path` matches `actual`. If the environment variable `UPDATE_EXPECT` is set, the file at `path` is instead rewritten to `actual`, which makes it easy to refresh expected files. Args: `path`: path to the expected file; `actual`: the actual content to compare; `is_json`: if True, performs a JSON comparison instead of a string comparison. (See the sketches after this table.) | 1625 | - |
| gifft_output_Event | | 1823 | - |
| gifft_output_Histogram | | 1945 | - |
| gifft_output_Scalar | | 2850 | - |
| jogfactory_output | | 24610 | - |
| jogfile_output | | 7646 | - |
| metrics_expires_versions_test.yaml | | 1865 | - |
| metrics_test.yaml | | 12083 | - |
| metrics_test_output | | 87173 | - |
| metrics_test_output_cpp | | 15908 | - |
| metrics_test_output_js_cpp | | 45069 | - |
| metrics_test_output_js_h | | 3180 | - |
| metrics2_test.yaml | | 11193 | - |
| pings_test.yaml | | 5398 | - |
| pings_test_output | | 6856 | - |
| pings_test_output_cpp | | 2332 | - |
| pings_test_output_js_cpp | | 13827 | - |
| pings_test_output_js_h | | 877 | - |
| pings_use_ohttp.yaml | | 981 | - |
| python.toml | | 222 | - |
| test_gifft.py | A regression test. Very fragile. It generates the GIFFT C++ for metrics_test.yaml and compares it byte-for-byte with the expected output C++ files. To generate new expected output files, set `UPDATE_EXPECT=1` when running the test suite: `UPDATE_EXPECT=1 mach test toolkit/components/glean/tests/pytest` | 1540 | - |
| test_glean_parser_cpp.py | Honestly, this is a pretty bad test. It generates C++ for a given test metrics.yaml and compares it byte-for-byte with an expected output C++ file. Expect it to be fragile. To generate new expected output files, set `UPDATE_EXPECT=1` when running the test suite: `UPDATE_EXPECT=1 mach test toolkit/components/glean/tests/pytest` | 2611 | - |
| test_glean_parser_js.py | Honestly, this is a pretty bad test. It generates the C++ that backs the JS bindings for a given test metrics.yaml and compares it byte-for-byte with expected output files. Expect it to be fragile. To generate new expected output files, set `UPDATE_EXPECT=1` when running the test suite: `UPDATE_EXPECT=1 mach test toolkit/components/glean/tests/pytest` | 2815 | - |
| test_glean_parser_rust.py | Honestly, this is a pretty bad test. It generates Rust for a given test metrics.yaml and compares it byte-for-byte with an expected output Rust file. Expect it to be fragile. To generate new expected output files, set `UPDATE_EXPECT=1` when running the test suite: `UPDATE_EXPECT=1 mach test toolkit/components/glean/tests/pytest` | 3960 | - |
| test_jogfile_output.py | A regression test. Very fragile. It generates a jogfile for metrics_test.yaml and compares it byte-for-byte with an expected output file. It is also part one of a two-part test: the generated jogfile is consumed by Rust_TestJogfile in toolkit/components/glean/tests/gtest/test.rs, which ensures that the jogfile generated in Python can be consumed in Rust. To generate new expected output files, set `UPDATE_EXPECT=1` when running the test suite: `UPDATE_EXPECT=1 mach test toolkit/components/glean/tests/pytest` | 2364 | - |
| test_no_expired_metrics.py | Of all the metrics included in this build, are any expired? If so, they must be removed or renewed. (This also checks other lints, as a treat.) | 1592 | - |
| test_yaml_indices.py | Ensures the YAML files' indices are sorted lexicographically. | 1210 | - |
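
The expect_helper.py entry above describes an assert-or-update helper. Below is a minimal sketch of that pattern, not the actual helper: the function name `expect` and its exact behavior are assumptions based on the description, and the real signature may differ.

```python
import json
import os
from pathlib import Path


def expect(path, actual, is_json=False):
    """Assert that the file at `path` contains `actual` (hypothetical sketch).

    If the UPDATE_EXPECT environment variable is set, rewrite the file to
    `actual` instead of asserting, so expected files are easy to refresh.
    """
    path = Path(path)
    if os.environ.get("UPDATE_EXPECT"):
        path.write_text(actual, encoding="utf-8")
        return
    expected = path.read_text(encoding="utf-8")
    if is_json:
        # Compare parsed values so insignificant JSON formatting
        # differences don't cause failures.
        assert json.loads(expected) == json.loads(actual)
    else:
        assert expected == actual
```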
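
The test_*.py entries share the same generate-then-compare shape, refreshed via `UPDATE_EXPECT`. The pytest-style sketch below only illustrates that shape under stated assumptions: `generate_output` is a stand-in for the real glean_parser invocation, and the expected file is seeded in a temporary directory rather than checked into the tree as the real expected-output files are.

```python
import os


# Stand-in for the real code-generation step (glean_parser in the actual
# tests); it returns a fixed string so this sketch can run anywhere.
def generate_output(metrics_yaml: str) -> str:
    return f"// generated from {metrics_yaml}\n"


def test_output_matches_expected(tmp_path):
    # In the real tests the expected file is checked into the tree
    # (e.g. metrics_test_output_cpp); here we seed a temporary one.
    expected_path = tmp_path / "expected_output"
    expected_path.write_text("// generated from metrics_test.yaml\n", encoding="utf-8")

    actual = generate_output("metrics_test.yaml")

    if os.environ.get("UPDATE_EXPECT"):
        # UPDATE_EXPECT=1 rewrites the expected file instead of letting
        # the comparison below fail against stale contents.
        expected_path.write_text(actual, encoding="utf-8")

    assert expected_path.read_text(encoding="utf-8") == actual
```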