Name Description Size
AliasAnalysis.cpp 16342
AliasAnalysis.h jit_AliasAnalysis_h 1605
AlignmentMaskAnalysis.cpp 3213
AlignmentMaskAnalysis.h namespace jit 726
arm 26
arm64 22
AtomicOp.h jit_AtomicOp_h 2983
AtomicOperations.h [SMDOC] Atomic Operations The atomic operations layer defines types and functions for JIT-compatible atomic operations. The fundamental constraints on the functions are: - That their realization here MUST be compatible with code the JIT generates for its Atomics operations, so that an atomic access from the interpreter or runtime - from any C++ code - really is atomic relative to a concurrent, compatible atomic access from jitted code. That is, these primitives expose JIT-compatible atomicity functionality to C++. - That accesses may race without creating C++ undefined behavior: atomic accesses (marked "SeqCst") may race with non-atomic accesses (marked "SafeWhenRacy"); overlapping but non-matching, and hence incompatible, atomic accesses may race; and non-atomic accesses may race. The effects of races need not be predictable, so garbage can be produced by a read or written by a write, but the effects must be benign: the program must continue to run, and only the memory in the union of addresses named in the racing accesses may be affected. The compatibility constraint means that if the JIT makes dynamic decisions about how to implement atomic operations, then corresponding dynamic decisions MUST be made in the implementations of the functions below. The safe-for-races constraint means that, by and large, it is hard to implement these primitives in C++. See "Implementation notes" below. The "SeqCst" suffix on operations means that such a function's operation must have "sequentially consistent" memory ordering; see mfbt/Atomics.h for an explanation of this memory ordering. Note that a "SafeWhenRacy" access does not provide the atomicity of a "relaxed atomic" access: it can read or write garbage if there's a race. Implementation notes. It's not a requirement that these functions be inlined; performance is not a great concern. On some platforms these functions may call out to code that's generated at run time. In principle these functions will not be written in C++, thus making races defined behavior if all racy accesses from C++ go via these functions. (Jitted code will always be safe for races and provides the same guarantees as these functions.) The appropriate implementations will be platform-specific, and there are some obvious implementation strategies to choose from; sometimes a combination is appropriate: - generating the code at run-time with the JIT; - hand-written assembler (maybe inline); or - using special compiler intrinsics or directives. Trusting the compiler not to generate code that blows up on a race definitely won't work in the presence of TSan, or even of optimizing compilers in seemingly-"innocuous" conditions. (See https://www.usenix.org/legacy/event/hotpar11/tech/final_files/Boehm.pdf for details.) A compiler-intrinsic sketch of this interface appears after this listing. 15330
BacktrackingAllocator.cpp 106924
BacktrackingAllocator.h 26516
Bailouts.cpp 11294
Bailouts.h 9200
BaselineBailouts.cpp BaselineStackBuilder helps abstract the process of rebuilding the C stack on the heap. It takes a bailout iterator and keeps track of the point on the C stack from which the reconstructed frames will be written. It exposes methods to write data into the heap memory storing the reconstructed stack. It also exposes methods to easily calculate addresses. These include both the virtual address that a particular value will be at when it's eventually copied onto the stack, and the current actual address of that value (whether in the heap-allocated portion being constructed or on the existing stack). The abstraction handles transparent re-allocation of the heap memory when it needs to be enlarged to accommodate new data. As with the C stack, the data that's written to the reconstructed stack grows from high to low in memory. The lowest region of the allocated memory contains a BaselineBailoutInfo structure that points to the start and end of the written data. A simplified C++ sketch of this growth pattern appears after this listing. 81082
BaselineCacheIRCompiler.cpp 74358
BaselineCacheIRCompiler.h jit_BaselineCacheIRCompiler_h 1029
BaselineCompiler.cpp 154105
BaselineCompiler.h 17656
BaselineDebugModeOSR.cpp 36388
BaselineDebugModeOSR.h mustUnwindActivation 2427
BaselineFrame-inl.h jit_BaselineFrame_inl_h 2844
BaselineFrame.cpp 4724
BaselineFrame.h 13470
BaselineFrameInfo-inl.h jit_BaselineFrameInfo_inl_h 1139
BaselineFrameInfo.cpp 4979
BaselineFrameInfo.h 8788
BaselineIC.cpp 205142
BaselineIC.h 89578
BaselineICList.h jit_BaselineICList_h 3324
BaselineInspector.cpp 47435
BaselineInspector.h 5013
BaselineJIT.cpp 41697
BaselineJIT.h 23441
BitSet.cpp 2573
BitSet.h jit_BitSet_h 4187
BytecodeAnalysis.cpp 7400
BytecodeAnalysis.h jit_BytecodeAnalysis_h 1854
CacheIR.cpp 195442
CacheIR.h 84114
CacheIRCompiler.cpp 129047
CacheIRCompiler.h 35034
CacheIRSpewer.cpp 5341
CacheIRSpewer.h JS_CACHEIR_SPEW 2704
CodeGenerator.cpp 472454
CodeGenerator.h 14293
CompactBuffer.h 5740
CompileInfo-inl.h jit_CompileInfo_inl_h 2258
CompileInfo.h JavaScript execution, not analysis. 17185
CompileWrappers.cpp 7305
CompileWrappers.h 3686
EdgeCaseAnalysis.cpp 1431
EdgeCaseAnalysis.h jit_EdgeCaseAnalysis_h 710
EffectiveAddressAnalysis.cpp 7001
EffectiveAddressAnalysis.h namespace jit 884
ExecutableAllocator.cpp 10181
ExecutableAllocator.h 9963
FixedList.h jit_FixedList_h 2183
FoldLinearArithConstants.cpp namespace jit 3624
FoldLinearArithConstants.h namespace jit 649
GenerateOpcodeFiles.py /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #ifndef %(includeguard)s #define %(includeguard)s /* This file is generated by jit/GenerateOpcodeFiles.py. Do not edit! */ #define %(listname)s(_) \\ %(ops)s #endif // %(includeguard)s 1997
ICState.h jit_ICState_h 4335
ICStubSpace.h jit_ICStubSpace_h 2101
InlinableNatives.h 8710
InlineList.h 15530
InstructionReordering.cpp 6538
InstructionReordering.h 592
Ion.cpp 100010
Ion.h 6030
IonAnalysis.cpp 166026
IonAnalysis.h 6192
IonBuilder.cpp 459277
IonBuilder.h 62999
IonCacheIRCompiler.cpp 82257
IonCode.h 22201
IonControlFlow.cpp 64529
IonControlFlow.h 24060
IonIC.cpp 21487
IonIC.h 17690
IonInstrumentation.h jit_IonInstrumentation_h 854
IonOptimizationLevels.cpp 5420
IonOptimizationLevels.h 9010
IonTypes.h 26985
Jit.cpp 5282
Jit.h jit_Jit_h 985
JitAllocPolicy.h 5600
JitcodeMap.cpp 54268
JitcodeMap.h The Ion jitcode map implements tables that map addresses in Ion jitcode to the list of (JSScript*, jsbytecode*) pairs that are implicitly active in the frame at that point in the native code. To represent this information efficiently, a multi-level table is used. At the top level, a global splay tree of JitcodeGlobalEntry records describes the mapping for each individual IonCode script produced by compilation. The entries are ordered by their nativeStartAddr. Every entry in the table is of fixed size, but there are different entry types, distinguished by the kind field. A simplified C++ sketch of the address lookup appears after this listing. 46628
JitCommon.h 2276
JitFrames-inl.h jit_JitFrames_inl_h 1008
JitFrames.cpp 81844
JitFrames.h 28103
JitOptions.cpp 11267
JitOptions.h jit_JitOptions_h 3715
JitRealm.h 22477
JitSpewer.cpp 17224
JitSpewer.h Information during sinking 8987
JSJitFrameIter-inl.h jit_JSJitFrameIter_inl_h 1816
JSJitFrameIter.cpp 23387
JSJitFrameIter.h 27200
JSONSpewer.cpp 7029
JSONSpewer.h JS_JITSPEW 1140
Label.h 3443
LICM.cpp 9023
LICM.h jit_LICM_h 661
Linker.cpp 2271
Linker.h jit_Linker_h 1171
LIR.cpp 18625
LIR.h 59469
LoopUnroller.cpp 14669
LoopUnroller.h 629
Lowering.cpp 161174
Lowering.h 2529
MacroAssembler-inl.h 30864
MacroAssembler.cpp 131594
MacroAssembler.h 143241
MCallOptimize.cpp 132014
mips-shared 19
mips32 20
mips64 20
MIR.cpp 187235
MIR.h Everything needed to build actual MIR instructions: the opcodes and instructions, the instruction interface, and use chains. 389001
MIRGenerator.h 6005
MIRGraph.cpp 47729
MIRGraph.h 33287
MoveEmitter.h jit_MoveEmitter_h 918
MoveResolver.cpp 13763
MoveResolver.h 9655
moz.build 7923
none 11
OptimizationTracking.cpp 40931
OptimizationTracking.h 19665
PcScriptCache.h jit_PcScriptCache_h 2432
PerfSpewer.cpp 9016
PerfSpewer.h jit_PerfSpewer_h 2888
ProcessExecutableMemory.cpp Inspired by V8's OS::Allocate in platform-win32.cc. VirtualAlloc takes 64K chunks out of the virtual address space, so we keep 64KiB (16-bit) alignment. x86: V8 comments say that keeping addresses in the [64MiB, 1GiB) range tries to avoid the system's default DLL mapping space. In the end, we get 13 bits of randomness in our selection. x64: [2GiB, 4TiB), with 25 bits of randomness. A simplified C++ sketch of the randomized, aligned selection appears after this listing. 23482
ProcessExecutableMemory.h 2516
RangeAnalysis.cpp 113711
RangeAnalysis.h 25305
Recover.cpp 46989
Recover.h 21990
RegisterAllocator.cpp 21394
RegisterAllocator.h 11773
Registers.h 9859
RegisterSets.h 38304
RematerializedFrame.cpp 6622
RematerializedFrame.h 6881
Safepoints.cpp 16020
Safepoints.h jit_Safepoints_h 3724
ScalarReplacement.cpp 48249
ScalarReplacement.h jit_ScalarReplacement_h 707
shared 14
SharedICHelpers-inl.h jit_SharedICHelpers_inl_h 1050
SharedICHelpers.h jit_SharedICHelpers_h 1014
SharedICRegisters.h jit_SharedICRegisters_h 1074
Sink.cpp 9274
Sink.h jit_Sink_h 630
Snapshots.cpp 21398
Snapshots.h 16194
StackSlotAllocator.h jit_StackSlotAllocator_h 3010
StupidAllocator.cpp 14484
StupidAllocator.h jit_StupidAllocator_h 2824
TemplateObject-inl.h jit_TemplateObject_inl_h 4741
TemplateObject.h jit_TemplateObject_h 3701
TypedObjectPrediction.cpp 8529
TypedObjectPrediction.h 6242
TypePolicy.cpp 49177
TypePolicy.h 17809
ValueNumbering.cpp [SMDOC] IonMonkey Value Numbering Some notes on the main algorithm here: - The SSA identifier id() is the value number. We do replaceAllUsesWith as we go, so there's always at most one visible value with a given number. - Consequently, the GVN algorithm is effectively pessimistic. This means it is not as powerful as an optimistic GVN would be, but it is simpler and faster. - We iterate in RPO, so that when visiting a block, we've already optimized and hashed all values in dominating blocks. With occasional exceptions, this allows us to do everything in a single pass. - When we do use multiple passes, we just re-run the algorithm on the whole graph instead of doing sparse propagation. This is a tradeoff to keep the algorithm simpler and lighter on inputs that don't have a lot of interesting unreachable blocks or degenerate loop induction variables, at the expense of being slower on inputs that do. The loop for this always terminates, because it only iterates when code is or will be removed, so eventually it must stop iterating. - Values are not immediately removed from the hash set when they go out of scope. Instead, we check for dominance after a lookup. If the dominance check fails, the value is removed. A simplified C++ sketch of this pessimistic scheme appears after this listing. 46601
ValueNumbering.h jit_ValueNumbering_h 4440
VMFunctions.cpp 58889
VMFunctions.h 42788
WasmBCE.cpp 4683
WasmBCE.h jit_wasmbce_h 964
x64 16
x86 16
x86-shared 23
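The AtomicOperations.h notes above list "special compiler intrinsics or directives" as one possible implementation strategy. Below is a minimal, hypothetical sketch of that strategy using GCC/Clang __atomic builtins; none of these names are SpiderMonkey's real API, and the notes themselves warn that trusting the compiler this way may not hold up under TSan or aggressive optimization, which is why real backends may generate this code at run time instead.

```cpp
// Hypothetical sketch only - not SpiderMonkey's AtomicOperations API.
#include <cstddef>
#include <cstdint>

namespace sketch {

// Sequentially consistent load; the ordering must match whatever the JIT
// emits for its Atomics code so mixed C++/JIT accesses stay atomic.
template <typename T>
inline T AtomicLoadSeqCst(const T* addr) {
  return __atomic_load_n(addr, __ATOMIC_SEQ_CST);
}

// Sequentially consistent store.
template <typename T>
inline void AtomicStoreSeqCst(T* addr, T value) {
  __atomic_store_n(addr, value, __ATOMIC_SEQ_CST);
}

// "SafeWhenRacy"-style byte copy: a race may produce garbage bytes, but the
// program keeps running and only the touched addresses are affected. Relaxed
// per-byte accesses are used here only to keep the C++ well defined; the real
// primitives make weaker promises than relaxed atomics.
inline void MemcpySafeWhenRacy(uint8_t* dst, const uint8_t* src, size_t n) {
  for (size_t i = 0; i < n; i++) {
    uint8_t b = __atomic_load_n(&src[i], __ATOMIC_RELAXED);
    __atomic_store_n(&dst[i], b, __ATOMIC_RELAXED);
  }
}

}  // namespace sketch
```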
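The BaselineBailouts.cpp description above explains that the reconstructed stack is built in heap memory that grows from high to low and is reallocated transparently. The sketch below illustrates just that growth pattern with a hypothetical StackRebuilder class; the real BaselineStackBuilder has a different interface and also tracks the BaselineBailoutInfo header.

```cpp
// Hypothetical illustration of a downward-growing, reallocatable buffer.
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

class StackRebuilder {
  std::vector<uint8_t> buf_;  // heap storage for the reconstructed frames
  size_t top_;                // offset of the most recent write (grows downward)
  uintptr_t virtualBase_;     // stack address the high end of the buffer maps to

 public:
  StackRebuilder(size_t initialSize, uintptr_t virtualBase)
      : buf_(initialSize), top_(initialSize), virtualBase_(virtualBase) {}

  // Write a value below everything written so far, transparently enlarging
  // the allocation (at the low end) when it is full.
  template <typename T>
  void writeValue(const T& v) {
    if (top_ < sizeof(T)) {
      size_t grow = buf_.size();           // double the allocation
      buf_.insert(buf_.begin(), grow, 0);  // new space below existing data
      top_ += grow;
    }
    top_ -= sizeof(T);
    std::memcpy(&buf_[top_], &v, sizeof(T));
  }

  // Current heap address of the most recently written data (valid once
  // something has been written).
  uint8_t* currentHeapAddress() { return &buf_[top_]; }

  // Virtual address this data will have once the buffer is copied onto the
  // C stack whose high end is virtualBase_.
  uintptr_t currentVirtualAddress() const {
    return virtualBase_ - (buf_.size() - top_);
  }
};
```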
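The JitcodeMap.h description above says the top-level table keys its entries by nativeStartAddr. The sketch below shows the resulting range lookup with a hypothetical EntrySketch type and a std::map standing in for the real splay tree of JitcodeGlobalEntry.

```cpp
// Hypothetical sketch of the top-level range lookup; std::map stands in for
// the splay tree, and EntrySketch is not the real JitcodeGlobalEntry.
#include <cstdint>
#include <map>
#include <optional>

struct EntrySketch {
  uintptr_t nativeStartAddr;
  uintptr_t nativeEndAddr;
  int kind;  // discriminates the different fixed-size entry types
};

class GlobalTableSketch {
  // Keyed by nativeStartAddr, so finding the entry covering an address is a
  // predecessor query.
  std::map<uintptr_t, EntrySketch> entries_;

 public:
  void add(const EntrySketch& e) { entries_[e.nativeStartAddr] = e; }

  // Find the entry whose [nativeStartAddr, nativeEndAddr) range contains a
  // jitcode address, if any.
  std::optional<EntrySketch> lookup(uintptr_t addr) const {
    auto it = entries_.upper_bound(addr);  // first entry starting after addr
    if (it == entries_.begin()) return std::nullopt;
    --it;                                  // greatest start <= addr
    if (addr < it->second.nativeEndAddr) return it->second;
    return std::nullopt;
  }
};
```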
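The ProcessExecutableMemory.cpp description above combines a platform-specific address range, 64KiB chunk alignment, and a fixed number of random bits. The sketch below shows one way those ingredients could produce an allocation hint; the constants come from the comment, but the function and its exact selection logic are hypothetical and differ from the real code.

```cpp
// Hypothetical sketch of a randomized, 64KiB-aligned allocation hint.
#include <cstdint>
#include <random>

constexpr unsigned kChunkShift = 16;  // VirtualAlloc hands out 64KiB chunks

uintptr_t RandomAllocationHint(uintptr_t rangeStart, unsigned randomBits,
                               std::mt19937_64& rng) {
  // Take `randomBits` bits of randomness and scale them to a 64KiB-aligned
  // offset from the start of the range.
  uint64_t bits = rng() & ((uint64_t(1) << randomBits) - 1);
  return rangeStart + static_cast<uintptr_t>(bits << kChunkShift);
}

// Example hints matching the comment's ranges:
//   x86: RandomAllocationHint(64 * 1024 * 1024, 13, rng)           // 13 bits, inside [64MiB, 1GiB)
//   x64: RandomAllocationHint(2ULL * 1024 * 1024 * 1024, 25, rng)  // 25 bits, inside [2GiB, 4TiB)
```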
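The ValueNumbering.cpp notes above describe a pessimistic, RPO-driven GVN in which stale hash-set entries are only detected by a dominance check at lookup time. Below is a heavily simplified, hypothetical sketch of that scheme; DefSketch, BlockSketch, and the remap table are illustrative stand-ins for the real MIR definitions, blocks, and replaceAllUsesWith.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

struct DefSketch {
  uint32_t id;                     // SSA id, doubles as the value number
  int op;                          // operation kind
  std::vector<uint32_t> operands;  // ids of the inputs
};

struct BlockSketch {
  const BlockSketch* immediateDominator = nullptr;  // nullptr for the entry
  std::vector<DefSketch*> defs;

  // True if this block dominates `other`: walk other's idom chain.
  bool dominates(const BlockSketch* other) const {
    for (const BlockSketch* b = other; b; b = b->immediateDominator) {
      if (b == this) return true;
    }
    return false;
  }
};

// One pessimistic pass over the blocks in reverse postorder. The `remap`
// table plays the role of replaceAllUsesWith: later reads of a redundant id
// are redirected to the dominating representative as we go.
void NumberValues(const std::vector<BlockSketch*>& rpo,
                  std::unordered_map<uint32_t, uint32_t>& remap) {
  struct Rep { DefSketch* def; const BlockSketch* block; };
  std::unordered_map<std::string, Rep> table;

  for (const BlockSketch* block : rpo) {
    for (DefSketch* def : block->defs) {
      // Build the congruence key from the op and the (remapped) operand ids.
      std::string key = std::to_string(def->op);
      for (uint32_t& operand : def->operands) {
        auto r = remap.find(operand);
        if (r != remap.end()) operand = r->second;  // "replaceAllUsesWith"
        key += ',' + std::to_string(operand);
      }

      auto it = table.find(key);
      if (it != table.end()) {
        // Entries are not removed when they go out of scope; instead, check
        // dominance now and discard stale representatives.
        if (it->second.block->dominates(block)) {
          remap[def->id] = it->second.def->id;  // def is redundant
          continue;
        }
        table.erase(it);
      }
      table[key] = Rep{def, block};
    }
  }
}
```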