Name Description Size
AliasAnalysis.cpp 16051
AliasAnalysis.h jit_AliasAnalysis_h 1605
AlignmentMaskAnalysis.cpp 3204
AlignmentMaskAnalysis.h namespace jit 726
AtomicOp.h jit_AtomicOp_h 2983
AtomicOperations.h [SMDOC] Atomic Operations The atomic operations layer defines types and functions for JIT-compatible atomic operations. The fundamental constraints on the functions are: - Their realization here MUST be compatible with the code the JIT generates for its Atomics operations, so that an atomic access from the interpreter or runtime - from any C++ code - really is atomic relative to a concurrent, compatible atomic access from jitted code. That is, these primitives expose JIT-compatible atomicity functionality to C++. - Accesses may race without creating C++ undefined behavior: atomic accesses (marked "SeqCst") may race with non-atomic accesses (marked "SafeWhenRacy"); overlapping but non-matching, and hence incompatible, atomic accesses may race; and non-atomic accesses may race. The effects of races need not be predictable, so garbage can be produced by a read or written by a write, but the effects must be benign: the program must continue to run, and only the memory in the union of addresses named in the racing accesses may be affected. The compatibility constraint means that if the JIT makes dynamic decisions about how to implement atomic operations, then corresponding dynamic decisions MUST be made in the implementations of the functions below. The safe-for-races constraint means that, by and large, it is hard to implement these primitives in C++. See "Implementation notes" below. The "SeqCst" suffix on operations means "sequentially consistent": such a function's operation must have "sequentially consistent" memory ordering. See mfbt/Atomics.h for an explanation of this memory ordering. Note that a "SafeWhenRacy" access does not provide the atomicity of a "relaxed atomic" access: it can read or write garbage if there's a race. Implementation notes: It's not a requirement that these functions be inlined; performance is not a great concern. On some platforms these functions may call out to code that's generated at run time. In principle these functions will not be written in C++, thus making races defined behavior if all racy accesses from C++ go via these functions. (Jitted code will always be safe for races and provides the same guarantees as these functions.) The appropriate implementations will be platform-specific, and there are some obvious implementation strategies to choose from; sometimes a combination is appropriate: - generating the code at run time with the JIT; - hand-written assembler (maybe inline); or - using special compiler intrinsics or directives. Trusting the compiler not to generate code that blows up on a race definitely won't work in the presence of TSan, or even of optimizing compilers in seemingly "innocuous" conditions. (See for details.) 14998
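One of the strategies listed above, special compiler intrinsics, can be sketched as follows. This is an illustrative example built on the GCC/Clang `__atomic` builtins; the function name is hypothetical and not the real AtomicOperations API:

```cpp
#include <cstdint>

// Hypothetical sketch of the "compiler intrinsics" strategy: a sequentially
// consistent fetch-and-add built on the GCC/Clang __atomic builtins. To
// satisfy the compatibility constraint, the JIT's generated Atomics code
// would have to use an instruction sequence compatible with this one.
inline uint32_t AtomicFetchAddSeqCst(uint32_t* addr, uint32_t val) {
  // __ATOMIC_SEQ_CST requests "sequentially consistent" memory ordering.
  return __atomic_fetch_add(addr, val, __ATOMIC_SEQ_CST);
}
```

The intrinsics route avoids hand-written assembler, but as the note above says, it still depends on the compiler not splitting or widening the access.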
BacktrackingAllocator.cpp 109961
BacktrackingAllocator.h 27719
Bailouts.cpp excInfo = 11790
Bailouts.h 9198
BaselineBailouts.cpp BaselineStackBuilder helps abstract the process of rebuilding the C stack on the heap. It takes a bailout iterator and keeps track of the point on the C stack from which the reconstructed frames will be written. It exposes methods to write data into the heap memory storing the reconstructed stack. It also exposes methods to easily calculate addresses. This includes both the virtual address that a particular value will be at when it's eventually copied onto the stack, as well as the current actual address of that value (whether on the heap-allocated portion being constructed or the existing stack). The abstraction handles transparent re-allocation of the heap memory when it needs to be enlarged to accommodate new data. Similarly to the C stack, the data that's written to the reconstructed stack grows from high to low in memory. The lowest region of the allocated memory contains a BaselineBailoutInfo structure that points to the start and end of the written data. 74173
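The high-to-low writing scheme described above can be modeled with a toy builder. This is a simplified sketch, not the real BaselineStackBuilder (which also handles transparent reallocation and virtual-address calculation):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Toy model of the reconstructed-stack writer: data is written into a heap
// buffer from high addresses toward low, mirroring how the C stack grows.
class StackBuilder {
  std::vector<uint8_t> buf_;
  size_t frameSize_ = 0;  // bytes written so far, counted from the top down

 public:
  explicit StackBuilder(size_t capacity) : buf_(capacity) {}

  // Write a word immediately below everything written so far.
  void writeWord(uintptr_t w) {
    frameSize_ += sizeof(uintptr_t);
    assert(frameSize_ <= buf_.size());
    std::memcpy(buf_.data() + buf_.size() - frameSize_, &w, sizeof(w));
  }

  // Current actual (heap) address of the most recently written value.
  const uint8_t* top() const { return buf_.data() + buf_.size() - frameSize_; }
  size_t frameSize() const { return frameSize_; }
};
```

Each successive `writeWord` lands at a lower address than the last, so the finished buffer can later be copied onto the real C stack unchanged.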
BaselineCacheIRCompiler.cpp 100310
BaselineCacheIRCompiler.h jit_BaselineCacheIRCompiler_h 4405
BaselineCodeGen.cpp HandlerArgs = 200844
BaselineCodeGen.h 18515
BaselineDebugModeOSR.cpp 19488
BaselineDebugModeOSR.h 1026
BaselineFrame-inl.h jit_BaselineFrame_inl_h 2852
BaselineFrame.cpp 5450
BaselineFrame.h 15321
BaselineFrameInfo-inl.h jit_BaselineFrameInfo_inl_h 1155
BaselineFrameInfo.cpp 6675
BaselineFrameInfo.h 12992
BaselineIC.cpp 123617
BaselineIC.h 64962
BaselineICList.h jit_BaselineICList_h 4363
BaselineInspector.cpp 47987
BaselineInspector.h jit_BaselineInspector_h 4653
BaselineJIT.cpp 34064
BaselineJIT.h 21630
BitSet.cpp 2573
BitSet.h jit_BitSet_h 4187
BytecodeAnalysis.cpp stackDepth= 7939
BytecodeAnalysis.h jit_BytecodeAnalysis_h 2057
CacheIR.cpp 248544
CacheIR.h 61388
CacheIRCompiler.cpp 202876
CacheIRCompiler.h 39504
CacheIROps.yaml 26368
CacheIRSpewer.cpp 13333
CacheIRSpewer.h JS_CACHEIR_SPEW 3043
CodeGenerator.cpp 509189
CodeGenerator.h 15504
CompactBuffer.h 5747
CompileInfo-inl.h jit_CompileInfo_inl_h 2258
CompileInfo.h JavaScript execution, not analysis. 17231
CompileWrappers.cpp static 7578
CompileWrappers.h 3713
Disassemble.cpp 2976
Disassemble.h jit_Disassemble_h 649
EdgeCaseAnalysis.cpp 1431
EdgeCaseAnalysis.h jit_EdgeCaseAnalysis_h 710
EffectiveAddressAnalysis.cpp 6987
EffectiveAddressAnalysis.h namespace jit 884
ExecutableAllocator.cpp willDestroy = 10523
ExecutableAllocator.h 6386
FixedList.h jit_FixedList_h 2183
FlushICache.h Flush the instruction cache of instructions in an address range. 1131
FoldLinearArithConstants.cpp namespace jit 3592
FoldLinearArithConstants.h namespace jit 649
/* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at */ #ifndef %(includeguard)s #define %(includeguard)s /* This file is generated by jit/ Do not edit! */ %(contents)s #endif // %(includeguard)s 16574
/* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at */ #ifndef %(includeguard)s #define %(includeguard)s /* This file is generated by jit/ Do not edit! */ #define %(listname)s(_) \\ %(ops)s #endif // %(includeguard)s 2033
ICState.h jit_ICState_h 4335
ICStubSpace.h jit_ICStubSpace_h 1998
InlinableNatives.h 9825
InlineList.h 15550
InstructionReordering.cpp 7248
InstructionReordering.h 592
Ion.cpp 86462
Ion.h this 5094
IonAnalysis.cpp 168296
IonAnalysis.h 6277
IonBuilder.cpp 411265
IonBuilder.h 53279
IonCacheIRCompiler.cpp 81520
IonCacheIRCompiler.h jit_IonCacheIRCompiler_h 2714
IonCompileTask.cpp 6779
IonCompileTask.h jit_IonCompileTask_h 2719
IonIC.cpp static 22487
IonIC.h 17497
IonInstrumentation.h jit_IonInstrumentation_h 854
IonOptimizationLevels.cpp 6432
IonOptimizationLevels.h 9607
IonScript.h 19066
IonTypes.h 31079
JSJitFrameIter-inl.h jit_JSJitFrameIter_inl_h 1992
JSJitFrameIter.cpp 24090
JSJitFrameIter.h 27698
JSONSpewer.cpp 7138
JSONSpewer.h JS_JITSPEW 1192
Jit.cpp osrFrame = 6189
Jit.h jit_Jit_h 985
JitAllocPolicy.h 5615
JitCode.h 6077
JitCommon.h 2324
JitContext.cpp 4707
JitContext.h jit_JitContext_h 4372
JitFrames-inl.h jit_JitFrames_inl_h 1025
JitFrames.cpp 82320
JitFrames.h 28641
JitOptions.cpp 12488
JitOptions.h jit_JitOptions_h 4543
JitRealm.h 25407
JitScript-inl.h Note: for non-escaping arguments, argTypes reflect only the initial type of the variable (e.g. passed values for argTypes, or undefined for localTypes) and not types from subsequent assignments. 3873
JitScript.cpp static 24339
JitScript.h 22552
JitSpewer.cpp 18090
JitSpewer.h Information during sinking 9599
JitcodeMap.cpp 44787
JitcodeMap.h The Ion jitcode map implements tables to allow mapping from addresses in ion jitcode to the list of (JSScript*, jsbytecode*) pairs that are implicitly active in the frame at that point in the native code. To represent this information efficiently, a multi-level table is used. At the top level, a global splay-tree of JitcodeGlobalEntry describes the mapping for each individual IonCode script generated by compiles. The entries are ordered by their nativeStartAddr. Every entry in the table is of fixed size, but there are different entry types, distinguished by the kind field. 38479
LICM.cpp 8967
LICM.h jit_LICM_h 661
LIR.cpp 18948
LIR.h 64221
Label.h 3513
Linker.cpp 2386
Linker.h jit_Linker_h 1171
Lowering.cpp useAtStart = 177053
Lowering.h useAtStart = 2403
MCallOptimize.cpp 142476
MIR.cpp 183353
MIR.h Everything needed to build actual MIR instructions: the actual opcodes and instructions, the instruction interface, and use chains. 410862
MIRBuilderShared.h 8200
MIRGenerator.h 5258
MIRGraph.cpp 46522
MIRGraph.h 33198
MacroAssembler-inl.h 32660
MacroAssembler.cpp 129441
MacroAssembler.h 168055
MoveEmitter.h jit_MoveEmitter_h 932
MoveResolver.cpp 14064
MoveResolver.h 9749
PcScriptCache.h jit_PcScriptCache_h 2432
PerfSpewer.cpp 9048
PerfSpewer.h jit_PerfSpewer_h 2892
ProcessExecutableMemory.cpp Inspiration is V8's OS::Allocate in VirtualAlloc takes 64K chunks out of the virtual address space, so we keep 16b alignment. x86: V8 comments say that keeping addresses in the [64MiB, 1GiB) range tries to avoid system default DLL mapping space. In the end, we get 13 bits of randomness in our selection. x64: [2GiB, 4TiB), with 25 bits of randomness. 25362
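The randomness figures quoted above can be sanity-checked by counting the 64 KiB-aligned choices in each range and taking the floor of the base-2 logarithm. The helper below is purely illustrative, not code from the file:

```cpp
#include <cstdint>

// Back-of-the-envelope check of the "bits of randomness" figures: the
// number of align-sized choices in [lo, hi), expressed as a power of two.
int bitsOfRandomness(uint64_t lo, uint64_t hi, uint64_t align) {
  uint64_t choices = (hi - lo) / align;
  int bits = 0;
  while (choices >>= 1) ++bits;  // floor(log2(choices))
  return bits;
}
```

With 64 KiB granularity, [64 MiB, 1 GiB) gives 2^14 - 2^10 = 15360 choices, i.e. 13 bits, and [2 GiB, 4 TiB) gives about 2^26 choices, i.e. 25 bits, matching the comments.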
ProcessExecutableMemory.h 4437
RangeAnalysis.cpp 115797
RangeAnalysis.h 25423
Recover.cpp 46288
Recover.h 22016
RegisterAllocator.cpp 19849
RegisterAllocator.h 11543
RegisterSets.h 38912
Registers.h 10004
RematerializedFrame-inl.h 746
RematerializedFrame.cpp static 6455
RematerializedFrame.h 7369
Safepoints.cpp 16048
Safepoints.h jit_Safepoints_h 3724
ScalarReplacement.cpp 43409
ScalarReplacement.h jit_ScalarReplacement_h 707
SharedICHelpers-inl.h jit_SharedICHelpers_inl_h 1064
SharedICHelpers.h jit_SharedICHelpers_h 1028
SharedICRegisters.h jit_SharedICRegisters_h 1090
Simulator.h jit_Simulator_h 778
Sink.cpp 9274
Sink.h jit_Sink_h 630
Snapshots.cpp 21637
Snapshots.h 16194
StackSlotAllocator.h jit_StackSlotAllocator_h 3481
TIOracle.cpp 1551
TIOracle.h jit_TIOracle_h 3305
TemplateObject-inl.h jit_TemplateObject_inl_h 4192
TemplateObject.h jit_TemplateObject_h 3424
TypePolicy.cpp 48819
TypePolicy.h 18160
VMFunctionList-inl.h 24176
VMFunctions.cpp extraValuesToPop = 67699
VMFunctions.h 40395
ValueNumbering.cpp [SMDOC] IonMonkey Value Numbering Some notes on the main algorithm here: - The SSA identifier id() is the value number. We do replaceAllUsesWith as we go, so there's always at most one visible value with a given number. - Consequently, the GVN algorithm is effectively pessimistic. This means it is not as powerful as an optimistic GVN would be, but it is simpler and faster. - We iterate in RPO, so that when visiting a block, we've already optimized and hashed all values in dominating blocks. With occasional exceptions, this allows us to do everything in a single pass. - When we do use multiple passes, we just re-run the algorithm on the whole graph instead of doing sparse propagation. This is a tradeoff to keep the algorithm simpler and lighter on inputs that don't have a lot of interesting unreachable blocks or degenerate loop induction variables, at the expense of being slower on inputs that do. The loop for this always terminates, because it only iterates when code is or will be removed, so eventually it must stop iterating. - Values are not immediately removed from the hash set when they go out of scope. Instead, we check for dominance after a lookup. If the dominance check fails, the value is removed. 46601
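The core congruence idea behind the notes above can be sketched in a hugely simplified form: straight-line code only, with none of the real pass's RPO traversal, dominance-scoped hashing, or replaceAllUsesWith rewriting. All names here are illustrative:

```cpp
#include <map>
#include <tuple>
#include <vector>

// Toy pessimistic value numbering over straight-line code. Operands are
// value numbers; the program's inputs occupy numbers [0, numInputs).
struct Ins { char op; int lhs, rhs; };

// Congruent instructions (same opcode, same operand value numbers) receive
// the same value number, so the second of two identical computations is
// recognized as redundant the moment it is visited.
std::vector<int> numberValues(const std::vector<Ins>& code, int numInputs) {
  std::map<std::tuple<char, int, int>, int> seen;
  std::vector<int> vn;
  int next = numInputs;
  for (const Ins& ins : code) {
    auto [it, fresh] =
        seen.emplace(std::make_tuple(ins.op, ins.lhs, ins.rhs), next);
    if (fresh) ++next;         // a genuinely new value
    vn.push_back(it->second);  // a repeat reuses the earlier number
  }
  return vn;
}
```

Because numbering proceeds in program order and repeats are collapsed immediately, at most one visible value carries a given number, which is the "pessimistic" single-pass behavior the comment describes.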
ValueNumbering.h jit_ValueNumbering_h 4440
WarpBuilder.cpp maybePred = 85613
WarpBuilder.h Intentionally not implemented 6908
WarpBuilderShared.cpp 2794
WarpBuilderShared.h 1254
WarpCacheIRTranspiler.cpp 36163
WarpCacheIRTranspiler.h jit_WarpCacheIRTranspiler_h 1551
WarpOracle.cpp 25588
WarpOracle.h jit_WarpOracle_h 1473
WarpSnapshot.cpp 8179
WarpSnapshot.h 12871
WasmBCE.cpp 4518
WasmBCE.h jit_wasmbce_h 964
arm 25
arm64 20
mips-shared 19
mips32 20
mips64 20
8471
none 10
shared 18
x64 16
x86 16
x86-shared 21