Name Description Size Coverage
ABIArgGenerator.h jit_ABIArgGenerator_h 2040 92 %
ABIFunctionList-inl.h 16777 -
ABIFunctions.h jit_VMFunctions_h 2888 100 %
ABIFunctionType.h jit_ABIFunctionType_h 1882 -
ABIFunctionType.yaml 6733 -
AliasAnalysis.cpp 11355 96 %
AliasAnalysis.h jit_AliasAnalysis_h 1491 100 %
AlignmentMaskAnalysis.cpp 3231 41 %
AlignmentMaskAnalysis.h namespace jit 694 100 %
arm -
arm64 -
Assembler.h jit_Assembler_h 1098 -
AtomicOp.h jit_AtomicOp_h 3454 100 %
AtomicOperations.h [SMDOC] Atomic Operations The atomic operations layer defines types and functions for JIT-compatible atomic operations. The fundamental constraints on the functions are: - That their realization here MUST be compatible with code the JIT generates for its Atomics operations, so that an atomic access from the interpreter or runtime - from any C++ code - really is atomic relative to a concurrent, compatible atomic access from jitted code. That is, these primitives expose JIT-compatible atomicity functionality to C++. - That accesses may race without creating C++ undefined behavior: atomic accesses (marked "SeqCst") may race with non-atomic accesses (marked "SafeWhenRacy"); overlapping but non-matching, and hence incompatible, atomic accesses may race; and non-atomic accesses may race. The effects of races need not be predictable, so garbage can be produced by a read or written by a write, but the effects must be benign: the program must continue to run, and only the memory in the union of addresses named in the racing accesses may be affected. The compatibility constraint means that if the JIT makes dynamic decisions about how to implement atomic operations then corresponding dynamic decisions MUST be made in the implementations of the functions below. The safe-for-races constraint means that by and large, it is hard to implement these primitives in C++. See "Implementation notes" below. The "SeqCst" suffix on operations means "sequentially consistent" and means such a function's operation must have "sequentially consistent" memory ordering. See mfbt/Atomics.h for an explanation of this memory ordering. Note that a "SafeWhenRacy" access does not provide the atomicity of a "relaxed atomic" access: it can read or write garbage if there's a race. Implementation notes. It's not a requirement that these functions be inlined; performance is not a great concern. On some platforms these functions may call functions that use inline assembly. See GenerateAtomicOperations.py. In principle these functions will not be written in C++, thus making races defined behavior if all racy accesses from C++ go via these functions. (Jitted code will always be safe for races and provides the same guarantees as these functions.) The appropriate implementations will be platform-specific and there are some obvious implementation strategies to choose from, sometimes a combination is appropriate: - generating the code at run-time with the JIT; - hand-written assembler (maybe inline); or - using special compiler intrinsics or directives. Trusting the compiler not to generate code that blows up on a race definitely won't work in the presence of TSan, or even of optimizing compilers in seemingly-"innocuous" conditions. (See https://www.usenix.org/legacy/event/hotpar11/tech/final_files/Boehm.pdf for details.) 13069 92 %
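The interface shape described above can be illustrated with a minimal std::atomic-based sketch. The names here are hypothetical; the real primitives are generated per-platform by GenerateAtomicOperations.py precisely because portable C++ like this cannot by itself guarantee JIT compatibility or benign races:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Illustrative stand-ins for the SeqCst primitives (hypothetical names).
// The real implementations are emitted by GenerateAtomicOperations.py so
// they stay compatible with the code the JIT generates for Atomics.
static inline uint32_t AtomicLoad32SeqCst(std::atomic<uint32_t>* addr) {
  return addr->load(std::memory_order_seq_cst);
}

static inline void AtomicStore32SeqCst(std::atomic<uint32_t>* addr,
                                       uint32_t val) {
  addr->store(val, std::memory_order_seq_cst);
}

static inline uint32_t AtomicExchange32SeqCst(std::atomic<uint32_t>* addr,
                                              uint32_t val) {
  return addr->exchange(val, std::memory_order_seq_cst);
}
```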
AutoWritableJitCode.h jit_AutoWritableJitCode_h 2256 85 %
BacktrackingAllocator.cpp 184110 92 %
BacktrackingAllocator.h 34025 97 %
Bailouts.cpp 13398 99 %
Bailouts.h 8583 100 %
BaselineBailouts.cpp BaselineStackBuilder helps abstract the process of rebuilding the C stack on the heap. It takes a bailout iterator and keeps track of the point on the C stack from which the reconstructed frames will be written. It exposes methods to write data into the heap memory storing the reconstructed stack. It also exposes methods to easily calculate addresses. This includes both the virtual address that a particular value will be at when it's eventually copied onto the stack, as well as the current actual address of that value (whether on the heap allocated portion being constructed or the existing stack). The abstraction handles transparent re-allocation of the heap memory when it needs to be enlarged to accommodate new data. Similarly to the C stack, the data that's written to the reconstructed stack grows from high to low in memory. The lowest region of the allocated memory contains a BaselineBailoutInfo structure that points to the start and end of the written data. 74149 92 %
BaselineCacheIRCompiler.cpp 142830 97 %
BaselineCacheIRCompiler.h 7385 100 %
BaselineCodeGen.cpp HandlerArgs = 216130 93 %
BaselineCodeGen.h 21069 95 %
BaselineCompileQueue.h 2151 19 %
BaselineCompileTask.cpp static 4823 11 %
BaselineCompileTask.h 5408 55 %
BaselineDebugModeOSR.cpp 19301 88 %
BaselineDebugModeOSR.h 1090 -
BaselineFrame-inl.h jit_BaselineFrame_inl_h 4616 95 %
BaselineFrame.cpp 5877 97 %
BaselineFrame.h 14962 100 %
BaselineFrameInfo-inl.h jit_BaselineFrameInfo_inl_h 1372 100 %
BaselineFrameInfo.cpp 6453 83 %
BaselineFrameInfo.h 13118 100 %
BaselineIC.cpp 84973 96 %
BaselineIC.h 18064 100 %
BaselineICList.h jit_BaselineICList_h 2233 -
BaselineJIT.cpp 43250 74 %
BaselineJIT.h 21157 96 %
BitSet.cpp 2627 18 %
BitSet.h jit_BitSet_h 4250 100 %
BranchHinting.cpp This pass propagates branch hints across the control flow graph using dominator information. Branch hints are read at compile-time for specific basic blocks. This pass propagates this property to successor blocks in a conservative way. The algorithm works as follows: - The CFG is traversed in reverse-post-order (RPO). Dominator parents are visited before the blocks they dominate. - For each basic block, if we have a hint, it is propagated to the blocks it immediately dominates (its children in the dominator tree). - The pass will then continue to work its way through the CFG. Because we only propagate along dominator-tree edges (parent -> child), each block receives information from exactly one source. This avoids conflicts that would otherwise arise at CFG join points. 2522 100 %
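The dominator-based propagation that BranchHinting.cpp's description outlines can be sketched as follows. The types and layout here are hypothetical (the real pass operates on MIRGraph blocks); what carries over is the invariant that parents precede children in the visit order and that each block receives a hint from exactly one dominator parent:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

enum class Hint : uint8_t { None, Likely, Unlikely };

struct Block {
  Hint hint = Hint::None;
  std::vector<int> idomChildren;  // blocks this one immediately dominates
};

// blocks must be ordered so a dominator parent is visited before the
// blocks it dominates (as RPO guarantees). Because hints flow only along
// dominator-tree edges, no block can receive conflicting hints.
void PropagateHints(std::vector<Block>& blocks) {
  for (Block& b : blocks) {
    if (b.hint == Hint::None) continue;
    for (int child : b.idomChildren) {
      if (blocks[child].hint == Hint::None) {
        blocks[child].hint = b.hint;
      }
    }
  }
}
```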
BranchHinting.h jit_BranchHinting_h 638 -
BytecodeAnalysis.cpp stackDepth= 11005 98 %
BytecodeAnalysis.h jit_BytecodeAnalysis_h 2604 100 %
CacheIR.cpp 566538 95 %
CacheIR.h 20044 95 %
CacheIRAOT.cpp 5851 -
CacheIRAOT.h jit_CacheIRAOT_h 1106 -
CacheIRCloner.h jit_CacheIRCloner_h 2217 100 %
CacheIRCompiler.cpp 400268 94 %
CacheIRCompiler.h 50034 97 %
CacheIRGenerator.h 46729 100 %
CacheIRHealth.cpp 13515 0 %
CacheIRHealth.h JS_CACHEIR_SPEW 4484 -
CacheIROps.yaml 67440 -
CacheIRReader.h 5547 99 %
CacheIRSpewer.cpp 21116 22 %
CacheIRSpewer.h JS_CACHEIR_SPEW 3656 26 %
CacheIRWriter.h 26281 98 %
CalleeToken.h namespace js::jit 2242 100 %
CodeGenerator.cpp 804509 95 %
CodeGenerator.h 20642 100 %
CompactBuffer.h 6615 86 %
CompilationDependencyTracker.h jit_CompilationDependencyTracker_h 2990 96 %
CompileInfo.h env chain and argument obj 14802 100 %
CompileWrappers.cpp static 7492 89 %
CompileWrappers.h 4772 100 %
Disassemble.cpp 4485 100 %
Disassemble.h jit_Disassemble_h 680 -
DominatorTree.cpp virtual root 10878 93 %
DominatorTree.h jit_DominatorTree_h 618 -
EdgeCaseAnalysis.cpp 1493 100 %
EdgeCaseAnalysis.h jit_EdgeCaseAnalysis_h 712 -
EffectiveAddressAnalysis.cpp 7651 89 %
EffectiveAddressAnalysis.h namespace jit 886 100 %
ExecutableAllocator.cpp willDestroy = 10594 93 %
ExecutableAllocator.h 6348 100 %
FixedList.h jit_FixedList_h 2249 89 %
FlushICache.cpp 5548 -
FlushICache.h Flush the instruction cache of instructions in an address range. 3288 100 %
FoldLinearArithConstants.cpp namespace jit 3697 96 %
FoldLinearArithConstants.h namespace jit 689 -
GenerateABIFunctionType.py \ /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #ifndef %(includeguard)s #define %(includeguard)s /* This file is generated by jit/GenerateABIFunctionType.py. Do not edit! */ %(contents)s #endif // %(includeguard)s 20944 -
GenerateAtomicOperations.py INLINE_ATTR void %(fun_name)s() { asm volatile ("mfence\n\t" ::: "memory"); } 36179 -
GenerateCacheIRFiles.py \ /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #ifndef %(includeguard)s #define %(includeguard)s /* This file is generated by jit/GenerateCacheIRFiles.py. Do not edit! */ %(contents)s #endif // %(includeguard)s 23537 -
GenerateLIRFiles.py \ /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #ifndef %(includeguard)s #define %(includeguard)s /* This file is generated by jit/GenerateLIRFiles.py. Do not edit! */ %(contents)s #endif // %(includeguard)s 17727 -
GenerateMIRFiles.py \ /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #ifndef %(includeguard)s #define %(includeguard)s /* This file is generated by jit/GenerateMIRFiles.py. Do not edit! */ %(contents)s #endif // %(includeguard)s 14526 -
GraphSpewer.cpp 7965 -
GraphSpewer.h indent 1674 -
ICState.h 7449 100 %
ICStubSpace.h jit_ICStubSpace_h 1185 100 %
InlinableNatives.cpp 14104 66 %
InlinableNatives.h 14686 -
InlineList.h 16224 100 %
InlineScriptTree-inl.h jit_InlineScriptTree_inl_h 2269 59 %
InlineScriptTree.h jit_InlineScriptTree_h 4393 100 %
InstructionReordering.cpp 8524 98 %
InstructionReordering.h 612 -
InterpreterEntryTrampoline.cpp countIncludesThis = 8621 3 %
InterpreterEntryTrampoline.h The EntryTrampolineMap is used to cache the trampoline code for each script as they are created. These trampolines are created only under --emit-interpreter-entry and are used to identify which script is being interpreted when profiling with external profilers such as perf. The map owns the JitCode objects that are created for each script, and keeps them alive at least as long as the script associated with it in case we need to re-enter the trampoline again. As each script is finalized, the entry is manually removed from the table in BaseScript::finalize which will also release the trampoline code associated with it. During a moving GC, the table is rekeyed in case any scripts have relocated. 2388 0 %
Invalidation.h jit_Invalidation_h 2253 100 %
InvalidationScriptSet.h jit_InvalidationScriptSet_h 1576 100 %
Ion.cpp 86591 91 %
Ion.h this 4276 88 %
IonAnalysis.cpp 199854 90 %
IonAnalysis.h 8167 92 %
IonCacheIRCompiler.cpp 84109 91 %
IonCacheIRCompiler.h jit_IonCacheIRCompiler_h 3153 100 %
IonCompileTask.cpp 7719 97 %
IonCompileTask.h jit_IonCompileTask_h 3423 94 %
IonGenericCallStub.h jit_IonGenericCallStub_h 1423 100 %
IonIC.cpp 22298 89 %
IonIC.h 21358 100 %
IonOptimizationLevels.cpp static 3736 98 %
IonOptimizationLevels.h 6670 100 %
IonScript.h 19341 90 %
IonTypes.h [SMDOC] Avoiding repeated bailouts / invalidations To avoid getting trapped in a "compilation -> bailout -> invalidation -> recompilation -> bailout -> invalidation -> ..." loop, every snapshot in Warp code is assigned a BailoutKind. If we bail out at that snapshot, FinishBailoutToBaseline will examine the BailoutKind and take appropriate action. In general: 1. If the bailing instruction comes from transpiled CacheIR, then when we bail out and continue execution in the baseline interpreter, the corresponding stub should fail a guard. As a result, we will either increment the enteredCount for a subsequent stub or attach a new stub, either of which will prevent WarpOracle from transpiling the failing stub when we recompile. Note: this means that every CacheIR op that can bail out in Warp must have an equivalent guard in the baseline CacheIR implementation. FirstExecution works according to the same principles: we have never hit this IC before, but after we bail to baseline we will attach a stub and recompile with better CacheIR information. 2. If the bailout occurs because an assumption we made in WarpBuilder was invalidated, then FinishBailoutToBaseline will set a flag on the script to avoid that assumption in the future: for example, UninitializedLexical. 3. Similarly, if the bailing instruction is generated or modified by a MIR optimization, then FinishBailoutToBaseline will set a flag on the script to make that optimization more conservative in the future. Examples include LICM, EagerTruncation, and HoistBoundsCheck. 4. Some bailouts can't be handled in Warp, even after a recompile. For example, Warp does not support catching exceptions. If this happens too often, then the cost of bailing out repeatedly outweighs the benefit of Warp compilation, so we invalidate the script and disable Warp compilation. 5. Some bailouts don't happen in performance-sensitive code: for example, the |debugger| statement. We just ignore those. 28237 89 %
Jit.cpp osrFrame = 7994 93 %
Jit.h jit_Jit_h 1149 -
JitAllocPolicy.h 5710 96 %
JitCode.h 6163 97 %
JitcodeMap.cpp 37587 92 %
JitcodeMap.h The jitcode map implements tables to allow mapping from addresses in jitcode to the list of scripts that are implicitly active in the frame at that point in the native code. To represent this information efficiently, a multi-level table is used. At the top level, a global AVL tree of JitcodeGlobalEntry entries describes the mapping for each individual JitCode generated by compilation. The entries are ordered by their nativeStartAddr. Every entry in the table is of fixed size, but there are different entry types, distinguished by the kind field. 28454 92 %
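The top-level lookup idea in JitcodeMap.h — an ordered tree keyed by nativeStartAddr, queried with a code address that may fall anywhere inside an entry's range — can be sketched with std::map standing in for the AVL tree (Entry and its fields are hypothetical simplifications):

```cpp
#include <cassert>
#include <cstdint>
#include <map>

struct Entry {
  uint64_t start, end;  // [nativeStartAddr, nativeEndAddr)
  int scriptId;         // placeholder for the per-entry script info
};

// Find the entry whose [start, end) range contains addr, or nullptr.
const Entry* Lookup(const std::map<uint64_t, Entry>& table, uint64_t addr) {
  auto it = table.upper_bound(addr);  // first entry starting after addr
  if (it == table.begin()) return nullptr;
  --it;                               // greatest start <= addr
  return addr < it->second.end ? &it->second : nullptr;
}
```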
JitCommon.h 2292 -
JitContext.cpp 3961 85 %
JitContext.h jit_JitContext_h 4749 -
Jitdump.h This file provides the necessary data structures to meet the JitDump specification as of https://github.com/torvalds/linux/blob/f2906aa863381afb0015a9eb7fefad885d4e5a56/tools/perf/Documentation/jitdump-specification.txt 1592 -
JitFrames-inl.h jit_JitFrames_inl_h 862 80 %
JitFrames.cpp 94054 83 %
JitFrames.h 26054 100 %
JitHints-inl.h jit_JitHints_inl_h 1949 100 %
JitHints.cpp 4697 91 %
JitHints.h [SMDOC] JitHintsMap The JIT hints map is an in-process cache used to collect Baseline and Ion JIT hints to try to skip as much of the warmup as possible and jump straight into those tiers. Whenever a script enters one of these tiers a hint is recorded in this cache using the script's filename+sourceStart value, and if we ever encounter this script again later, e.g. during a navigation, then we try to eagerly compile it into baseline and ion based on its previous execution history. 6623 90 %
JitOptions.cpp 17452 55 %
JitOptions.h 6197 88 %
JitRuntime.h 18395 94 %
JitScript-inl.h jit_JitScript_inl_h 1144 100 %
JitScript.cpp depth= 33433 86 %
JitScript.h [SMDOC] ICScript Lifetimes An ICScript owns an array of ICEntries, each of which owns a linked list of ICStubs. A JitScript contains an embedded ICScript. If it has done any trial inlining, it also owns an InliningRoot. The InliningRoot owns all of the ICScripts that have been created for inlining into the corresponding JitScript. This ties the lifetime of the inlined ICScripts to the lifetime of the JitScript itself. We store pointers to ICScripts in two other places: on the stack in BaselineFrame, and in IC stubs for CallInlinedFunction. The ICScript pointer in a BaselineFrame either points to the ICScript embedded in the JitScript for that frame, or to an inlined ICScript owned by a caller. In each case, there must be a frame on the stack corresponding to the JitScript that owns the current ICScript, which will keep the ICScript alive. Each ICStub is owned by an ICScript and, indirectly, a JitScript. An ICStub that uses CallInlinedFunction contains an ICScript for use by the callee. The ICStub and the callee ICScript are always owned by the same JitScript, so the callee ICScript will not be freed while the ICStub is alive. The lifetime of an ICScript is independent of the lifetimes of the BaselineScript and IonScript/WarpScript to which it corresponds. They can be destroyed and recreated, and the ICScript will remain valid. When we discard JIT code, we mark ICScripts that are active on the stack as active and then purge all of the inactive ICScripts. We also purge ICStubs, including the CallInlinedFunction stub at the trial inlining call site, and reset the ICStates to allow trial inlining again later. If there's a BaselineFrame for an inlined ICScript, we'll preserve both this ICScript and the IC chain for the call site in the caller's ICScript. See ICScript::purgeStubs and ICScript::purgeInactiveICScripts. 21957 92 %
JitSpewer.cpp 20302 -
JitSpewer.h Information during sinking 11272 86 %
JitZone.h 12178 96 %
JSJitFrameIter-inl.h jit_JSJitFrameIter_inl_h 1806 96 %
JSJitFrameIter.cpp 22800 77 %
JSJitFrameIter.h 27232 88 %
KnownClass.cpp 3243 85 %
KnownClass.h 879 -
Label.cpp 883 -
Label.h 3224 100 %
LICM.cpp OUT 13210 100 %
LICM.h jit_LICM_h 635 -
Linker.cpp 2339 83 %
Linker.h jit_Linker_h 1248 100 %
LIR.cpp 23367 92 %
LIR.h 73947 97 %
LIROps.yaml 91595 -
loong64 -
Lowering.cpp useAtStart = 294855 94 %
Lowering.h useAtStart = 2849 100 %
MachineState.h jit_MachineState_h 3604 100 %
MacroAssembler-inl.h 40470 95 %
MacroAssembler.cpp 386885 97 %
MacroAssembler.h 278619 95 %
mips-shared -
mips64 -
MIR-wasm.cpp 35733 85 %
MIR-wasm.h Everything needed to build actual MIR instructions: the actual opcodes and instructions, the instruction interface, and use chains. 110738 95 %
MIR.cpp 245191 89 %
MIR.h Everything needed to build actual MIR instructions: the actual opcodes and instructions, the instruction interface, and use chains. 325360 92 %
MIRGenerator.h 6450 91 %
MIRGraph.cpp 43295 89 %
MIRGraph.h 32430 98 %
MIROps.yaml 95261 -
MoveEmitter.h jit_MoveEmitter_h 1094 -
MoveResolver.cpp 14054 63 %
MoveResolver.h 9924 83 %
moz.build 10084 -
none -
OffthreadSnapshot.h jit_OffthreadSnapshot_h 1950 100 %
PerfSpewer.cpp 39569 53 %
PerfSpewer.h 9079 96 %
ProcessExecutableMemory.cpp Inspiration is V8's OS::Allocate in platform-win32.cc. VirtualAlloc takes 64K chunks out of the virtual address space, so we keep 16b alignment. x86: V8 comments say that keeping addresses in the [64MiB, 1GiB) range tries to avoid system default DLL mapping space. In the end, we get 13 bits of randomness in our selection. x64: [2GiB, 4TiB), with 25 bits of randomness. 32365 90 %
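The x86 address-selection idea summarized in the ProcessExecutableMemory.cpp comment — pick a 64 KiB-aligned base in the [64 MiB, 1 GiB) range to avoid the default DLL mapping space — can be sketched like this (the function name and use of std::mt19937 are illustrative assumptions, not the actual implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <random>

// Pick a random 64 KiB-aligned address in [64 MiB, 1 GiB), mirroring the
// x86 strategy described above. About 2^13 distinct choices are possible,
// matching the "13 bits of randomness" in the comment.
uint64_t RandomAllocationAddress(std::mt19937& rng) {
  const uint64_t kChunk = 64 * 1024;               // VirtualAlloc granularity
  const uint64_t kLo = 64ull * 1024 * 1024;        // 64 MiB
  const uint64_t kHi = 1ull * 1024 * 1024 * 1024;  // 1 GiB
  std::uniform_int_distribution<uint64_t> dist(0, (kHi - kLo) / kChunk - 1);
  return kLo + dist(rng) * kChunk;
}
```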
ProcessExecutableMemory.h 5687 100 %
RangeAnalysis.cpp 122678 91 %
RangeAnalysis.h 25473 99 %
ReciprocalMulConstants.cpp 5888 100 %
ReciprocalMulConstants.h jit_ReciprocalMulConstants_h 1478 100 %
Recover.cpp 75616 77 %
Recover.h 33763 82 %
RegExpStubConstants.h jit_RegExpStubConstants_h 1231 -
RegisterAllocator.cpp 20822 33 %
RegisterAllocator.h 10046 92 %
Registers.h 9353 98 %
RegisterSets.h 45036 97 %
RematerializedFrame-inl.h 746 100 %
RematerializedFrame.cpp static 6430 63 %
RematerializedFrame.h 7432 77 %
riscv64 -
SafepointIndex-inl.h jit_SafepointIndex_inl_h 694 100 %
SafepointIndex.cpp 732 100 %
SafepointIndex.h namespace jit 2343 100 %
Safepoints.cpp 19675 86 %
Safepoints.h jit_Safepoints_h 4253 100 %
ScalarReplacement.cpp 137116 90 %
ScalarReplacement.h jit_ScalarReplacement_h 761 -
ScalarTypeUtils.h jit_ScalarTypeUtils_h 1207 100 %
ScriptFromCalleeToken.h namespace js::jit 1023 83 %
ShapeList.cpp static 5598 94 %
ShapeList.h 1779 -
shared 78 %
SharedICHelpers-inl.h jit_SharedICHelpers_inl_h 1298 -
SharedICHelpers.h jit_SharedICHelpers_h 1250 -
SharedICRegisters.h jit_SharedICRegisters_h 1264 -
ShuffleAnalysis.cpp 28239 89 %
ShuffleAnalysis.h 4756 100 %
SimpleAllocator.cpp 46040 86 %
SimpleAllocator.h 14285 100 %
Simulator.h jit_Simulator_h 959 -
Sink.cpp 9489 91 %
Sink.h jit_Sink_h 604 -
Snapshots.cpp 24298 96 %
Snapshots.h 18895 96 %
SparseBitSet.h 5729 99 %
StackSlotAllocator.h jit_StackSlotAllocator_h 3416 86 %
StubFolding.cpp 25119 91 %
StubFolding.h 793 -
TemplateObject-inl.h jit_TemplateObject_inl_h 3392 100 %
TemplateObject.h jit_TemplateObject_h 2589 100 %
Trampoline.cpp 14524 93 %
TrampolineNatives.cpp 13427 99 %
TrampolineNatives.h jit_TrampolineNatives_h 1925 -
TrialInlining.cpp 35676 86 %
TrialInlining.h [SMDOC] Trial Inlining WarpBuilder relies on transpiling CacheIR. When inlining scripted functions in WarpBuilder, we want our ICs to be as monomorphic as possible. Functions with multiple callers complicate this. An IC in such a function might be monomorphic for any given caller, but polymorphic overall. This makes the input to WarpBuilder less precise. To solve this problem, we do trial inlining. During baseline execution, we identify call sites for which it would be useful to have more precise inlining data. For each such call site, we allocate a fresh ICScript and replace the existing call IC with a new specialized IC that invokes the callee using the new ICScript. Other callers of the callee will continue using the default ICScript. When we eventually Warp-compile the script, we can generate code for the callee using the IC information in our private ICScript, which is specialized for its caller. The same approach can be used to inline recursively. 6068 100 %
TypeData.h jit_TypeData_h 1314 100 %
TypePolicy.cpp 46098 83 %
TypePolicy.h 20321 96 %
UnrollLoops.cpp 78818 91 %
UnrollLoops.h jit_UnrollLoops_h 635 -
ValueNumbering.cpp [SMDOC] IonMonkey Value Numbering Some notes on the main algorithm here: - The SSA identifier id() is the value number. We do replaceAllUsesWith as we go, so there's always at most one visible value with a given number. - Consequently, the GVN algorithm is effectively pessimistic. This means it is not as powerful as an optimistic GVN would be, but it is simpler and faster. - We iterate in RPO, so that when visiting a block, we've already optimized and hashed all values in dominating blocks. With occasional exceptions, this allows us to do everything in a single pass. - When we do use multiple passes, we just re-run the algorithm on the whole graph instead of doing sparse propagation. This is a tradeoff to keep the algorithm simpler and lighter on inputs that don't have a lot of interesting unreachable blocks or degenerate loop induction variables, at the expense of being slower on inputs that do. The loop for this always terminates, because it only iterates when code is or will be removed, so eventually it must stop iterating. - Values are not immediately removed from the hash set when they go out of scope. Instead, we check for dominance after a lookup. If the dominance check fails, the value is removed. 49932 89 %
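The core pessimistic-GVN move described in ValueNumbering.cpp's notes — hash values as they are visited in RPO so that a later congruent expression reuses the number of the first occurrence (the analogue of replaceAllUsesWith keeping at most one visible value per number) — can be shown with a toy value numberer; the class and its key encoding are hypothetical simplifications:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Toy pessimistic value numbering: expressions are keyed by their opcode
// and the value numbers of their operands. The first occurrence defines a
// fresh number; later congruent expressions map to the same number.
struct ValueNumberer {
  std::unordered_map<std::string, int> table;
  int next = 0;

  int valueOf(const std::string& op, int lhs, int rhs) {
    std::string key =
        op + "," + std::to_string(lhs) + "," + std::to_string(rhs);
    auto [it, inserted] = table.try_emplace(key, next);
    if (inserted) next++;
    return it->second;
  }
};
```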
ValueNumbering.h jit_ValueNumbering_h 4647 -
VMFunctionList-inl.h 29925 -
VMFunctions.cpp Unexpected return type for a VMFunction. 109426 89 %
VMFunctions.h 29661 90 %
WarpBuilder.cpp maybePred = 116608 94 %
WarpBuilder.h Intentionally not implemented 12535 100 %
WarpBuilderShared.cpp 4418 99 %
WarpBuilderShared.h 12747 98 %
WarpCacheIRTranspiler.cpp 232198 97 %
WarpCacheIRTranspiler.h jit_WarpCacheIRTranspiler_h 913 -
WarpOracle.cpp 61510 88 %
WarpOracle.h jit_WarpOracle_h 3311 100 %
WarpSnapshot.cpp 15787 92 %
WarpSnapshot.h 22588 100 %
wasm32 -
WasmBCE.cpp 4869 91 %
WasmBCE.h jit_wasmbce_h 970 -
x64 88 %
x86 -
x86-shared 82 %
XrayJitInfo.cpp 622 100 %