Name Description Size
AliasAnalysis.cpp 16328
AliasAnalysis.h jit_AliasAnalysis_h 1605
AlignmentMaskAnalysis.cpp 3213
AlignmentMaskAnalysis.h namespace jit 726
arm 25
arm64 20
AtomicOp.h jit_AtomicOp_h 2983
AtomicOperations.h [SMDOC] Atomic Operations The atomic operations layer defines types and functions for JIT-compatible atomic operations. The fundamental constraints on the functions are: - That their realization here MUST be compatible with code the JIT generates for its Atomics operations, so that an atomic access from the interpreter or runtime - from any C++ code - really is atomic relative to a concurrent, compatible atomic access from jitted code. That is, these primitives expose JIT-compatible atomicity functionality to C++. - That accesses may race without creating C++ undefined behavior: atomic accesses (marked "SeqCst") may race with non-atomic accesses (marked "SafeWhenRacy"); overlapping but non-matching, and hence incompatible, atomic accesses may race; and non-atomic accesses may race. The effects of races need not be predictable, so garbage can be produced by a read or written by a write, but the effects must be benign: the program must continue to run, and only the memory in the union of addresses named in the racing accesses may be affected. The compatibility constraint means that if the JIT makes dynamic decisions about how to implement atomic operations then corresponding dynamic decisions MUST be made in the implementations of the functions below. The safe-for-races constraint means that by and large, it is hard to implement these primitives in C++. See "Implementation notes" below. The "SeqCst" suffix on operations means "sequentially consistent" and means such a function's operation must have "sequentially consistent" memory ordering. See mfbt/Atomics.h for an explanation of this memory ordering. Note that a "SafeWhenRacy" access does not provide the atomicity of a "relaxed atomic" access: it can read or write garbage if there's a race. Implementation notes. It's not a requirement that these functions be inlined; performance is not a great concern. On some platforms these functions may call out to code that's generated at run time. In principle these functions will not be written in C++, thus making races defined behavior if all racy accesses from C++ go via these functions. (Jitted code will always be safe for races and provides the same guarantees as these functions.) The appropriate implementations will be platform-specific and there are some obvious implementation strategies to choose from; sometimes a combination is appropriate: - generating the code at run-time with the JIT; - hand-written assembler (maybe inline); or - using special compiler intrinsics or directives. Trusting the compiler not to generate code that blows up on a race definitely won't work in the presence of TSan, or even of optimizing compilers in seemingly-"innocuous" conditions. (See https://www.usenix.org/legacy/event/hotpar11/tech/final_files/Boehm.pdf for details.) A hedged sketch of the API shape follows this listing. 14998
BacktrackingAllocator.cpp 106924
BacktrackingAllocator.h 26520
Bailouts.cpp excInfo = 11311
Bailouts.h 9203
BaselineBailouts.cpp BaselineStackBuilder helps abstract the process of rebuilding the C stack on the heap. It takes a bailout iterator and keeps track of the point on the C stack from which the reconstructed frames will be written. It exposes methods to write data into the heap memory storing the reconstructed stack. It also exposes methods to easily calculate addresses. This includes both the virtual address that a particular value will be at when it's eventually copied onto the stack, as well as the current actual address of that value (whether on the heap-allocated portion being constructed or the existing stack). The abstraction handles transparent re-allocation of the heap memory when it needs to be enlarged to accommodate new data. Similarly to the C stack, the data that's written to the reconstructed stack grows from high to low in memory. The lowest region of the allocated memory contains a BaselineBailoutInfo structure that points to the start and end of the written data. A simplified sketch of this pattern follows this listing. 75632
BaselineCacheIRCompiler.cpp 98030
BaselineCacheIRCompiler.h jit_BaselineCacheIRCompiler_h 3759
BaselineCodeGen.cpp HandlerArgs = 207053
BaselineCodeGen.h 27997
BaselineDebugModeOSR.cpp 19127
BaselineDebugModeOSR.h 1026
BaselineFrame-inl.h jit_BaselineFrame_inl_h 2852
BaselineFrame.cpp 5510
BaselineFrame.h 14405
BaselineFrameInfo-inl.h jit_BaselineFrameInfo_inl_h 1155
BaselineFrameInfo.cpp 6642
BaselineFrameInfo.h 12718
BaselineIC.cpp 132514
BaselineIC.h 65592
BaselineICList.h jit_BaselineICList_h 4492
BaselineInspector.cpp 47474
BaselineInspector.h jit_BaselineInspector_h 4734
BaselineJIT.cpp 34175
BaselineJIT.h 20397
BitSet.cpp 2573
BitSet.h jit_BitSet_h 4187
BytecodeAnalysis.cpp stackDepth= 7874
BytecodeAnalysis.h jit_BytecodeAnalysis_h 2073
CacheIR.cpp 214129
CacheIR.h 105010
CacheIRCompiler.cpp 145721
CacheIRCompiler.h 40890
CacheIRSpewer.cpp 7175
CacheIRSpewer.h JS_CACHEIR_SPEW 3167
CodeGenerator.cpp 478499
CodeGenerator.h 15394
CompactBuffer.h 5741
CompileInfo-inl.h jit_CompileInfo_inl_h 2258
CompileInfo.h JavaScript execution, not analysis. 17389
CompileWrappers.cpp static 6990
CompileWrappers.h 3465
EdgeCaseAnalysis.cpp 1431
EdgeCaseAnalysis.h jit_EdgeCaseAnalysis_h 710
EffectiveAddressAnalysis.cpp 7001
EffectiveAddressAnalysis.h namespace jit 884
ExecutableAllocator.cpp willDestroy = 10188
ExecutableAllocator.h 9987
FixedList.h jit_FixedList_h 2183
FoldLinearArithConstants.cpp namespace jit 3624
FoldLinearArithConstants.h namespace jit 649
GenerateOpcodeFiles.py /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #ifndef %(includeguard)s #define %(includeguard)s /* This file is generated by jit/GenerateOpcodeFiles.py. Do not edit! */ #define %(listname)s(_) \\ %(ops)s #endif // %(includeguard)s 1997
ICState.h jit_ICState_h 4335
ICStubSpace.h jit_ICStubSpace_h 1998
InlinableNatives.h 8922
InlineList.h 15530
InstructionReordering.cpp 6538
InstructionReordering.h 592
Ion.cpp 95226
Ion.h 7692
IonAnalysis.cpp 167516
IonAnalysis.h 6227
IonBuilder.cpp 452677
IonBuilder.h 63036
IonCacheIRCompiler.cpp 80460
IonCacheIRCompiler.h jit_IonCacheIRCompiler_h 2467
IonCode.h 22373
IonControlFlow.cpp 64515
IonControlFlow.h 23807
IonIC.cpp static 21397
IonIC.h 17690
IonInstrumentation.h jit_IonInstrumentatjit_h 854
IonOptimizationLevels.cpp 6212
IonOptimizationLevels.h 9607
IonTypes.h 27357
Jit.cpp osrFrame = 6092
Jit.h jit_Jit_h 985
JitAllocPolicy.h 5607
JitcodeMap.cpp 52190
JitcodeMap.h The Ion jitcode map implements tables to allow mapping from addresses in ion jitcode to the list of (JSScript*, jsbytecode*) pairs that are implicitly active in the frame at that point in the native code. To represent this information efficiently, a multi-level table is used. At the top level, a global splay tree of JitcodeGlobalEntry records describes the mapping for each individual IonCode script generated by compiles. The entries are ordered by their nativeStartAddr. Every entry in the table is of fixed size, but there are different entry types, distinguished by the kind field. A simplified lookup sketch follows this listing. 48451
JitCommon.h 2324
JitFrames-inl.h jit_JitFrames_inl_h 1008
JitFrames.cpp 81709
JitFrames.h 28270
JitOptions.cpp 11403
JitOptions.h jit_JitOptions_h 4321
JitRealm.h 24827
JitScript-inl.h Note: for non-escaping arguments, argTypes reflect only the initial type of the variable (e.g. passed values for argTypes, or undefined for localTypes) and not types from subsequent assignments. 3887
JitScript.cpp static 23983
JitScript.h 21989
JitSpewer.cpp 18172
JitSpewer.h Information during sinking 9149
JSJitFrameIter-inl.h jit_JSJitFrameIter_inl_h 1828
JSJitFrameIter.cpp 23309
JSJitFrameIter.h 27202
JSONSpewer.cpp 7138
JSONSpewer.h JS_JITSPEW 1192
Label.h 3506
LICM.cpp 8966
LICM.h jit_LICM_h 661
Linker.cpp 2227
Linker.h jit_Linker_h 1235
LIR.cpp 18696
LIR.h 60781
Lowering.cpp useAtStart = 163261
Lowering.h useAtStart = 2545
MacroAssembler-inl.h 31904
MacroAssembler.cpp 124981
MacroAssembler.h 147218
MCallOptimize.cpp 140503
mips-shared 19
mips32 20
mips64 20
MIR.cpp 183312
MIR.h Everything needed to build actual MIR instructions: the actual opcodes and instructions, the instruction interface, and use chains. 390947
MIRGenerator.h 6007
MIRGraph.cpp 47552
MIRGraph.h 33097
MoveEmitter.h jit_MoveEmitter_h 932
MoveResolver.cpp 13763
MoveResolver.h 9655
moz.build 8311
none 10
OptimizationTracking.cpp 41011
OptimizationTracking.h 19665
PcScriptCache.h jit_PcScriptCache_h 2432
PerfSpewer.cpp 9048
PerfSpewer.h jit_PerfSpewer_h 2892
ProcessExecutableMemory.cpp Inspiration is V8's OS::Allocate in platform-win32.cc. VirtualAlloc takes 64K chunks out of the virtual address space, so we keep allocations 64KiB-aligned (the low 16 bits of each address are zero). x86: V8 comments say that keeping addresses in the [64MiB, 1GiB) range tries to avoid the system default DLL mapping space. In the end, we get 13 bits of randomness in our selection. x64: [2GiB, 4TiB), with 25 bits of randomness. An illustrative address-picking sketch follows this listing. 24609
ProcessExecutableMemory.h 4321
RangeAnalysis.cpp 114933
RangeAnalysis.h 25305
Recover.cpp 46183
Recover.h 22014
RegisterAllocator.cpp 21394
RegisterAllocator.h 11773
Registers.h 9873
RegisterSets.h 38376
RematerializedFrame.cpp static 6429
RematerializedFrame.h 7445
Safepoints.cpp 16024
Safepoints.h jit_Safepoints_h 3724
ScalarReplacement.cpp 42819
ScalarReplacement.h jit_ScalarReplacement_h 707
shared 19
SharedICHelpers-inl.h jit_SharedICHelpers_inl_h 1064
SharedICHelpers.h jit_SharedICHelpers_h 1028
SharedICRegisters.h jit_SharedICRegisters_h 1090
Sink.cpp 9274
Sink.h jit_Sink_h 630
Snapshots.cpp 21451
Snapshots.h 16194
StackSlotAllocator.h jit_StackSlotAllocator_h 2978
StupidAllocator.cpp 14484
StupidAllocator.h jit_StupidAllocator_h 2824
TemplateObject-inl.h jit_TemplateObject_inl_h 4245
TemplateObject.h jit_TemplateObject_h 3467
TIOracle.cpp 1551
TIOracle.h jit_TIOracle_h 3305
TypedObjectPrediction.cpp 8529
TypedObjectPrediction.h 6242
TypePolicy.cpp 49244
TypePolicy.h 17800
ValueNumbering.cpp [SMDOC] IonMonkey Value Numbering Some notes on the main algorithm here: - The SSA identifier id() is the value number. We do replaceAllUsesWith as we go, so there's always at most one visible value with a given number. - Consequently, the GVN algorithm is effectively pessimistic. This means it is not as powerful as an optimistic GVN would be, but it is simpler and faster. - We iterate in RPO, so that when visiting a block, we've already optimized and hashed all values in dominating blocks. With occasional exceptions, this allows us to do everything in a single pass. - When we do use multiple passes, we just re-run the algorithm on the whole graph instead of doing sparse propagation. This is a tradeoff to keep the algorithm simpler and lighter on inputs that don't have a lot of interesting unreachable blocks or degenerate loop induction variables, at the expense of being slower on inputs that do. The loop for this always terminates, because it only iterates when code is or will be removed, so eventually it must stop iterating. - Values are not immediately removed from the hash set when they go out of scope. Instead, we check for dominance after a lookup. If the dominance check fails, the value is removed. A minimal self-contained sketch of this pass follows the listing. 46601
ValueNumbering.h jit_ValueNumbering_h 4440
VMFunctionList-inl.h 22869
VMFunctions.cpp extraValuesToPop = 63538
VMFunctions.h 39227
WasmBCE.cpp 4683
WasmBCE.h jit_wasmbce_h 964
x64 16
x86 16
x86-shared 21
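
AtomicOperations.h (above) stresses that these primitives can rarely be written portably in plain C++, since racy accesses are undefined behavior there; the real implementations may be JIT-generated or hand-written assembler. The sketch below only illustrates the shape of such an API using GCC/Clang __atomic builtins; the namespace and function names are stand-ins, not the header's actual interface.

```cpp
// Illustrative only: the real primitives may be JIT-generated or written in
// assembler so that racy accesses are not C++ undefined behavior.
#include <cstddef>
#include <cstdint>

namespace atomics_sketch {

// Sequentially consistent load, compatible with what jitted Atomics code emits.
inline uint32_t loadSeqCst(uint32_t* addr) {
  return __atomic_load_n(addr, __ATOMIC_SEQ_CST);
}

// Sequentially consistent store.
inline void storeSeqCst(uint32_t* addr, uint32_t val) {
  __atomic_store_n(addr, val, __ATOMIC_SEQ_CST);
}

// A "SafeWhenRacy" byte copy: a race may produce garbage bytes, but it must
// stay benign and touch only [src, src+n) and [dst, dst+n).
inline void memcpySafeWhenRacy(uint8_t* dst, const uint8_t* src, size_t n) {
  for (size_t i = 0; i < n; i++) {
    __atomic_store_n(dst + i, __atomic_load_n(src + i, __ATOMIC_RELAXED),
                     __ATOMIC_RELAXED);
  }
}

}  // namespace atomics_sketch
```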
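
BaselineBailouts.cpp (above) describes BaselineStackBuilder. The class below is a hypothetical sketch of the same pattern: a heap buffer that grows from high to low, is transparently enlarged, and can report both the current heap address of a value and the virtual address it will have once copied onto the C stack. Names and signatures are illustrative only.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

class StackBuilderSketch {
  std::vector<uint8_t> buf_;  // heap storage for the reconstructed frames
  size_t used_ = 0;           // bytes written so far, counted from the high end
  uint8_t* targetTop_;        // where the data will eventually sit on the C stack

 public:
  explicit StackBuilderSketch(uint8_t* targetTop) : targetTop_(targetTop) {}

  // Write a value at the next (lower) position, enlarging the buffer if needed.
  template <typename T>
  void write(const T& value) {
    if (used_ + sizeof(T) > buf_.size()) {
      size_t oldSize = buf_.size();
      buf_.resize(std::max(oldSize * 2, used_ + sizeof(T)));
      // Keep the already-written data at the high end of the enlarged buffer.
      if (used_ > 0) {
        std::memmove(buf_.data() + buf_.size() - used_,
                     buf_.data() + oldSize - used_, used_);
      }
    }
    used_ += sizeof(T);
    std::memcpy(buf_.data() + buf_.size() - used_, &value, sizeof(T));
  }

  // Current heap address of the last value written.
  uint8_t* currentAddress() { return buf_.data() + buf_.size() - used_; }

  // Virtual address that same value will have once copied onto the C stack.
  uint8_t* virtualAddress() const { return targetTop_ - used_; }
};
```

A caller would stream frame data through write() and finally copy the used tail of the buffer onto the stack ending at targetTop_.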
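
JitcodeMap.h (above) describes a multi-level table keyed by native code addresses. The sketch below shows the lookup shape under stated assumptions: std::map stands in for the splay tree, and ScriptSketch/BytecodeOffset stand in for JSScript* and jsbytecode*.

```cpp
#include <cstdint>
#include <iterator>
#include <map>
#include <vector>

struct ScriptSketch {};           // stands in for JSScript
using BytecodeOffset = uint32_t;  // stands in for jsbytecode*

struct InlineFrame {
  ScriptSketch* script;
  BytecodeOffset pc;
};

struct CodeEntrySketch {
  uintptr_t nativeStartAddr;
  uintptr_t nativeEndAddr;
  // Second level: native-code offset ranges -> frames active at that point.
  std::map<uint32_t, std::vector<InlineFrame>> regions;

  const std::vector<InlineFrame>* lookup(uintptr_t addr) const {
    uint32_t offset = uint32_t(addr - nativeStartAddr);
    auto it = regions.upper_bound(offset);
    if (it == regions.begin()) return nullptr;
    return &std::prev(it)->second;  // last region starting at or before offset
  }
};

struct GlobalTableSketch {
  // Top level: entries ordered by nativeStartAddr (a splay tree in the real code).
  std::map<uintptr_t, CodeEntrySketch> entries;

  const std::vector<InlineFrame>* lookup(uintptr_t addr) const {
    auto it = entries.upper_bound(addr);
    if (it == entries.begin()) return nullptr;
    const CodeEntrySketch& e = std::prev(it)->second;
    if (addr >= e.nativeEndAddr) return nullptr;  // falls in a gap
    return e.lookup(addr);
  }
};
```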
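
ProcessExecutableMemory.cpp (above) describes randomizing VirtualAlloc placement. The sketch below picks a 64KiB-aligned address in the ranges the comment names; RandomAllocationHint, the use of std::mt19937_64, and the exact amount of randomness are assumptions of the sketch, not the real code.

```cpp
#include <cstdint>
#include <random>

// Pick a 64KiB-aligned allocation hint in a platform-dependent range
// (illustrative only; the real code derives its randomness differently).
uintptr_t RandomAllocationHint() {
  static std::mt19937_64 rng{std::random_device{}()};
#ifdef _WIN64
  // x64: [2GiB, 4TiB).
  const uint64_t lo = uint64_t(2) << 30;
  const uint64_t hi = uint64_t(4) << 40;
#else
  // x86: [64MiB, 1GiB), to dodge the default DLL mapping space.
  const uint64_t lo = uint64_t(64) << 20;
  const uint64_t hi = uint64_t(1) << 30;
#endif
  const uint64_t chunk = 64 * 1024;  // VirtualAlloc granularity
  std::uniform_int_distribution<uint64_t> dist(0, (hi - lo) / chunk - 1);
  return uintptr_t(lo + dist(rng) * chunk);
}
```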
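
ValueNumbering.cpp (above) outlines the pessimistic, RPO-ordered GVN. The sketch below is a minimal, self-contained rendering of that loop over a toy SSA IR: a hash set keyed by congruence, replacement when a dominating equivalent value exists, and stale entries dropped only when a lookup's dominance check fails. ToyDef, ToyBlock, and the replacement field are stand-ins for the MIR classes and replaceAllUsesWith.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_set>
#include <vector>

struct ToyBlock;

struct ToyDef {
  ToyBlock* block;
  int opcode;
  std::vector<uint32_t> operandVns;  // value numbers of the operands
  ToyDef* replacement = nullptr;     // set when subsumed by an equivalent def
};

struct ToyBlock {
  ToyBlock* idom = nullptr;  // immediate dominator
  std::vector<ToyDef*> defs;

  bool dominates(const ToyBlock* other) const {
    for (const ToyBlock* b = other; b; b = b->idom) {
      if (b == this) return true;
    }
    return false;
  }
};

struct DefHash {
  size_t operator()(const ToyDef* d) const {
    size_t h = std::hash<int>()(d->opcode);
    for (uint32_t vn : d->operandVns) h = h * 31 + vn;
    return h;
  }
};
struct DefEq {  // congruent: same opcode and operand value numbers
  bool operator()(const ToyDef* a, const ToyDef* b) const {
    return a->opcode == b->opcode && a->operandVns == b->operandVns;
  }
};

// Visit blocks in reverse post-order so every dominating block has already
// been processed; a single pass then suffices for most graphs.
void RunGVN(const std::vector<ToyBlock*>& rpo) {
  std::unordered_set<ToyDef*, DefHash, DefEq> values;
  for (ToyBlock* block : rpo) {
    for (ToyDef* def : block->defs) {
      auto it = values.find(def);
      if (it != values.end()) {
        // Stale entries are not removed eagerly; check dominance at lookup.
        if ((*it)->block->dominates(block)) {
          def->replacement = *it;  // stands in for replaceAllUsesWith()
          continue;
        }
        values.erase(it);
      }
      values.insert(def);
    }
  }
}
```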