Name Description Size (bytes for files, entry count for subdirectories)
ABIFunctionList-inl.h 12192
ABIFunctions.h jit_VMFunctions_h 2678
AliasAnalysis.cpp 10499
AliasAnalysis.h jit_AliasAnalysis_h 1453
AlignmentMaskAnalysis.cpp 3205
AlignmentMaskAnalysis.h namespace jit 694
AtomicOp.h jit_AtomicOp_h 2983
AtomicOperations.h [SMDOC] Atomic Operations The atomic operations layer defines types and functions for JIT-compatible atomic operations. The fundamental constraints on the functions are: - That their realization here MUST be compatible with code the JIT generates for its Atomics operations, so that an atomic access from the interpreter or runtime - from any C++ code - really is atomic relative to a concurrent, compatible atomic access from jitted code. That is, these primitives expose JIT-compatible atomicity functionality to C++. - That accesses may race without creating C++ undefined behavior: atomic accesses (marked "SeqCst") may race with non-atomic accesses (marked "SafeWhenRacy"); overlapping but non-matching, and hence incompatible, atomic accesses may race; and non-atomic accesses may race. The effects of races need not be predictable, so garbage can be produced by a read or written by a write, but the effects must be benign: the program must continue to run, and only the memory in the union of addresses named in the racing accesses may be affected. The compatibility constraint means that if the JIT makes dynamic decisions about how to implement atomic operations, then corresponding dynamic decisions MUST be made in the implementations of the functions below. The safe-for-races constraint means that, by and large, it is hard to implement these primitives in C++. See "Implementation notes" below. The "SeqCst" suffix on operations means "sequentially consistent": such a function's operation must have sequentially consistent memory ordering. See mfbt/Atomics.h for an explanation of this memory ordering. Note that a "SafeWhenRacy" access does not provide the atomicity of a "relaxed atomic" access: it can read or write garbage if there's a race. Implementation notes. It's not a requirement that these functions be inlined; performance is not a great concern. On some platforms these functions may call out to code that's generated at run time. In principle these functions will not be written in C++, thus making races defined behavior if all racy accesses from C++ go via these functions. (Jitted code will always be safe for races and provides the same guarantees as these functions.) The appropriate implementations will be platform-specific, and there are some obvious implementation strategies to choose from (sometimes a combination is appropriate): - generating the code at run time with the JIT; - hand-written assembler (maybe inline); or - using special compiler intrinsics or directives. Trusting the compiler not to generate code that blows up on a race definitely won't work in the presence of TSan, or even of optimizing compilers in seemingly "innocuous" conditions. (See https://www.usenix.org/legacy/event/hotpar11/tech/final_files/Boehm.pdf for details.) (A hedged sketch of the compiler-intrinsics strategy appears after this listing.) 14516
AutoJitContextAlloc.h jit_AutoJitContextAlloc_h 984
AutoWritableJitCode.h jit_AutoWritableJitCode_h 2645
BacktrackingAllocator.cpp 112224
BacktrackingAllocator.h 26862
Bailouts.cpp exceptionInfo= 11032
Bailouts.h 8877
BaselineBailouts.cpp BaselineStackBuilder helps abstract the process of rebuilding the C stack on the heap. It takes a bailout iterator and keeps track of the point on the C stack from which the reconstructed frames will be written. It exposes methods to write data into the heap memory storing the reconstructed stack. It also exposes methods to easily calculate addresses, including both the virtual address that a particular value will have once it is eventually copied onto the stack and the current actual address of that value (whether in the heap-allocated portion being constructed or on the existing stack). The abstraction handles transparent reallocation of the heap memory when it needs to be enlarged to accommodate new data. As with the C stack, the data written to the reconstructed stack grows from high to low in memory. The lowest region of the allocated memory contains a BaselineBailoutInfo structure that points to the start and end of the written data. (A hedged sketch of this high-to-low writing scheme appears after this listing.) 75015
BaselineCacheIRCompiler.cpp 99314
BaselineCacheIRCompiler.h jit_BaselineCacheIRCompiler_h 4660
BaselineCodeGen.cpp HandlerArgs = 200970
BaselineCodeGen.h 18458
BaselineDebugModeOSR.cpp 19275
BaselineDebugModeOSR.h 962
BaselineFrame-inl.h jit_BaselineFrame_inl_h 3642
BaselineFrame.cpp 5584
BaselineFrame.h 15451
BaselineFrameInfo-inl.h jit_BaselineFrameInfo_inl_h 1155
BaselineFrameInfo.cpp 6887
BaselineFrameInfo.h 13617
BaselineIC.cpp 82266
BaselineIC.h 28868
BaselineICList.h jit_BaselineICList_h 4000
BaselineJIT.cpp 34616
BaselineJIT.h 21635
BitSet.cpp 2627
BitSet.h jit_BitSet_h 4250
BytecodeAnalysis.cpp stackDepth= 9137
BytecodeAnalysis.h jit_BytecodeAnalysis_h 2291
CacheIR.cpp 366150
CacheIR.h 72148
CacheIRCompiler.cpp 275857
CacheIRCompiler.h 44118
CacheIRHealth.cpp 10224
CacheIRHealth.h JS_CACHEIR_SPEW 3870
CacheIROps.yaml 45385
CacheIRSpewer.cpp 13032
CacheIRSpewer.h JS_CACHEIR_SPEW 3099
CalleeToken.h namespace js::jit 2242
CodeGenerator.cpp 522021
CodeGenerator.h 14773
CompactBuffer.h 6478
CompileInfo.h JavaScript execution, not analysis. 14828
CompileWrappers.cpp static 6744
CompileWrappers.h 4013
Disassemble.cpp 3139
Disassemble.h jit_Disassemble_h 680
EdgeCaseAnalysis.cpp 1461
EdgeCaseAnalysis.h jit_EdgeCaseAnalysis_h 700
EffectiveAddressAnalysis.cpp 7037
EffectiveAddressAnalysis.h namespace jit 874
ExecutableAllocator.cpp willDestroy = 10497
ExecutableAllocator.h 6626
FixedList.h jit_FixedList_h 2249
FlushICache.h Flush the instruction cache of instructions in an address range. 1274
FoldLinearArithConstants.cpp namespace jit 3672
FoldLinearArithConstants.h namespace jit 639
GenerateCacheIRFiles.py \ /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #ifndef %(includeguard)s #define %(includeguard)s /* This file is generated by jit/GenerateCacheIRFiles.py. Do not edit! */ %(contents)s #endif // %(includeguard)s 19271
GenerateOpcodeFiles.py \ /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #ifndef %(includeguard)s #define %(includeguard)s /* This file is generated by jit/GenerateOpcodeFiles.py. Do not edit! */ #define %(listname)s(_) \\ %(ops)s #endif // %(includeguard)s 2085
ICState.h 6420
ICStubSpace.h jit_ICStubSpace_h 1998
InlinableNatives.cpp 12005
InlinableNatives.h 10586
InlineList.h 15550
InlineScriptTree-inl.h jit_InlineScriptTree_inl_h 2097
InlineScriptTree.h jit_InlineScriptTree_h 3338
InstructionReordering.cpp 7666
InstructionReordering.h 593
Invalidation.h jit_Invalidation_h 1881
Ion.cpp 83904
Ion.h this 4313
IonAnalysis.cpp 138733
IonAnalysis.h 6048
IonCacheIRCompiler.cpp shouldDiscardStack = 69005
IonCacheIRCompiler.h jit_IonCacheIRCompiler_h 1914
IonCompileTask.cpp 6475
IonCompileTask.h jit_IonCompileTask_h 2528
IonIC.cpp static 20563
IonIC.h 19341
IonOptimizationLevels.cpp 4628
IonOptimizationLevels.h 5128
IonScript.h 19984
IonTypes.h [SMDOC] Avoiding repeated bailouts / invalidations To avoid getting trapped in a "compilation -> bailout -> invalidation -> recompilation -> bailout -> invalidation -> ..." loop, every snapshot in Warp code is assigned a BailoutKind. If we bail out at that snapshot, FinishBailoutToBaseline will examine the BailoutKind and take appropriate action. In general: 1. If the bailing instruction comes from transpiled CacheIR, then when we bail out and continue execution in the baseline interpreter, the corresponding stub should fail a guard. As a result, we will either increment the enteredCount for a subsequent stub or attach a new stub, either of which will prevent WarpOracle from transpiling the failing stub when we recompile. Note: this means that every CacheIR op that can bail out in Warp must have an equivalent guard in the baseline CacheIR implementation. FirstExecution works according to the same principles: we have never hit this IC before, but after we bail to baseline we will attach a stub and recompile with better CacheIR information. 2. If the bailout occurs because an assumption we made in WarpBuilder was invalidated, then FinishBailoutToBaseline will set a flag on the script to avoid that assumption in the future. Examples include NotOptimizedArgumentsGuard and UninitializedLexical. 3. Similarly, if the bailing instruction is generated or modified by a MIR optimization, then FinishBailoutToBaseline will set a flag on the script to make that optimization more conservative in the future. Examples include LICM, EagerTruncation, and HoistBoundsCheck. 4. Some bailouts can't be handled in Warp, even after a recompile. For example, Warp does not support catching exceptions. If this happens too often, then the cost of bailing out repeatedly outweighs the benefit of Warp compilation, so we invalidate the script and disable Warp compilation. 5. Some bailouts don't happen in performance-sensitive code: for example, the |debugger| statement. We just ignore those. (A hypothetical sketch of this dispatch appears after this listing.) 31087
JSJitFrameIter-inl.h jit_JSJitFrameIter_inl_h 2031
JSJitFrameIter.cpp 24352
JSJitFrameIter.h 27693
JSONSpewer.cpp 7021
JSONSpewer.h JS_JITSPEW 1192
Jit.cpp osrFrame = 6393
Jit.h jit_Jit_h 1013
JitAllocPolicy.h 5279
JitCode.h 5633
JitCommon.h 2292
JitContext.cpp 3374
JitContext.h jit_JitContext_h 4888
JitFrames-inl.h jit_JitFrames_inl_h 862
JitFrames.cpp 82186
JitFrames.h cached saved frame bit 24982
JitOptions.cpp 12561
JitOptions.h jit_JitOptions_h 4378
JitRealm.h 5614
JitRuntime.h 15748
JitScript-inl.h jit_JitScript_inl_h 1159
JitScript.cpp depth= 21167
JitScript.h [SMDOC] ICScript Lifetimes An ICScript owns an array of ICEntries, each of which owns a linked list of ICStubs. A JitScript contains an embedded ICScript. If it has done any trial inlining, it also owns an InliningRoot. The InliningRoot owns all of the ICScripts that have been created for inlining into the corresponding JitScript. This ties the lifetime of the inlined ICScripts to the lifetime of the JitScript itself. We store pointers to ICScripts in two other places: on the stack in BaselineFrame, and in IC stubs for CallInlinedFunction. The ICScript pointer in a BaselineFrame either points to the ICScript embedded in the JitScript for that frame, or to an inlined ICScript owned by a caller. In each case, there must be a frame on the stack corresponding to the JitScript that owns the current ICScript, which will keep the ICScript alive. Each ICStub is owned by an ICScript and, indirectly, a JitScript. An ICStub that uses CallInlinedFunction contains an ICScript for use by the callee. The ICStub and the callee ICScript are always owned by the same JitScript, so the callee ICScript will not be freed while the ICStub is alive. The lifetime of an ICScript is independent of the lifetimes of the BaselineScript and IonScript/WarpScript to which it corresponds. They can be destroyed and recreated, and the ICScript will remain valid. 18334
JitSpewer.cpp 18048
JitSpewer.h Information during sinking 9843
JitZone.h 5775
JitcodeMap.cpp 43851
JitcodeMap.h The Ion jitcode map implements tables that allow mapping from addresses in Ion jitcode to the list of (JSScript*, jsbytecode*) pairs that are implicitly active in the frame at that point in the native code. To represent this information efficiently, a multi-level table is used. At the top level, a global splay-tree of JitcodeGlobalEntry describes the mapping for each individual IonCode script generated by compilation. The entries are ordered by their nativeStartAddr. Every entry in the table is of fixed size, but there are different entry types, distinguished by the kind field. (A hedged sketch of the top-level lookup appears after this listing.) 36800
KnownClass.cpp 3042
KnownClass.h 879
LICM.cpp 9035
LICM.h jit_LICM_h 629
LIR.cpp 19246
LIR.h 62790
Label.cpp 883
Label.h 3186
Linker.cpp 2373
Linker.h jit_Linker_h 1248
Lowering.cpp useAtStart = 194882
Lowering.h useAtStart = 2476
MIR.cpp 154128
MIR.h Everything needed to build actual MIR instructions: the actual opcodes and instructions, the instruction interface, and use chains. 437933
MIRGenerator.h 5311
MIRGraph.cpp 36929
MIRGraph.h 30086
MacroAssembler-inl.h 35684
MacroAssembler.cpp 157094
MacroAssembler.h 220482
MoveEmitter.h jit_MoveEmitter_h 932
MoveResolver.cpp 14032
MoveResolver.h 9753
PcScriptCache.h jit_PcScriptCache_h 2434
PerfSpewer.cpp 9048
PerfSpewer.h jit_PerfSpewer_h 2885
ProcessExecutableMemory.cpp Inspiration is V8's OS::Allocate in platform-win32.cc. VirtualAlloc takes 64K chunks out of the virtual address space, so we keep 16-bit (64 KiB) alignment. x86: V8 comments say that keeping addresses in the [64MiB, 1GiB) range tries to avoid the system default DLL mapping space. In the end, we get 13 bits of randomness in our selection. x64: [2GiB, 4TiB), with 25 bits of randomness. 26410
ProcessExecutableMemory.h 4933
RangeAnalysis.cpp 119522
RangeAnalysis.h 25980
Recover.cpp 57030
Recover.h 27173
RegisterAllocator.cpp 21719
RegisterAllocator.h 11519
RegisterSets.h 39304
Registers.h 10004
RematerializedFrame-inl.h 746
RematerializedFrame.cpp static 6478
RematerializedFrame.h 7690
SafepointIndex-inl.h jit_SafepointIndex_inl_h 694
SafepointIndex.cpp 732
SafepointIndex.h namespace jit 2376
Safepoints.cpp 16215
Safepoints.h jit_Safepoints_h 3826
ScalarReplacement.cpp 58078
ScalarReplacement.h jit_ScalarReplacement_h 675
ScalarTypeUtils.h jit_ScalarTypeUtils_h 1328
ScriptFromCalleeToken.h namespace js::jit 1023
SharedICHelpers-inl.h jit_SharedICHelpers_inl_h 1064
SharedICHelpers.h jit_SharedICHelpers_h 1028
SharedICRegisters.h jit_SharedICRegisters_h 1090
Simulator.h jit_Simulator_h 875
Sink.cpp 9515
Sink.h jit_Sink_h 598
Snapshots.cpp 21640
Snapshots.h 16195
StackSlotAllocator.h jit_StackSlotAllocator_h 3481
TemplateObject-inl.h jit_TemplateObject_inl_h 3424
TemplateObject.h jit_TemplateObject_h 2633
TrialInlining.cpp 25030
TrialInlining.h [SMDOC] Trial Inlining WarpBuilder relies on transpiling CacheIR. When inlining scripted functions in WarpBuilder, we want our ICs to be as monomorphic as possible. Functions with multiple callers complicate this. An IC in such a function might be monomorphic for any given caller, but polymorphic overall. This makes the input to WarpBuilder less precise. To solve this problem, we do trial inlining. During baseline execution, we identify call sites for which it would be useful to have more precise inlining data. For each such call site, we allocate a fresh ICScript and replace the existing call IC with a new specialized IC that invokes the callee using the new ICScript. Other callers of the callee will continue using the default ICScript. When we eventually Warp-compile the script, we can generate code for the callee using the IC information in our private ICScript, which is specialized for its caller. The same approach can be used to inline recursively. 5151
TypePolicy.cpp 42099
TypePolicy.h 19011
VMFunctionList-inl.h 23524
VMFunctions.cpp Unexpected return type for a VMFunction. 87206
VMFunctions.h 26362
ValueNumbering.cpp [SMDOC] IonMonkey Value Numbering Some notes on the main algorithm here: - The SSA identifier id() is the value number. We do replaceAllUsesWith as we go, so there's always at most one visible value with a given number. - Consequently, the GVN algorithm is effectively pessimistic. This means it is not as powerful as an optimistic GVN would be, but it is simpler and faster. - We iterate in RPO, so that when visiting a block, we've already optimized and hashed all values in dominating blocks. With occasional exceptions, this allows us to do everything in a single pass. - When we do use multiple passes, we just re-run the algorithm on the whole graph instead of doing sparse propagation. This is a tradeoff to keep the algorithm simpler and lighter on inputs that don't have a lot of interesting unreachable blocks or degenerate loop induction variables, at the expense of being slower on inputs that do. The loop for this always terminates, because it only iterates when code is or will be removed, so eventually it must stop iterating. - Values are not immediately removed from the hash set when they go out of scope. Instead, we check for dominance after a lookup. If the dominance check fails, the value is removed. (A toy local-value-numbering sketch appears after this listing.) 45780
ValueNumbering.h jit_ValueNumbering_h 4469
WarpBuilder.cpp maybePred = 106407
WarpBuilder.h Intentionally not implemented 13035
WarpBuilderShared.cpp 2839
WarpBuilderShared.h 10335
WarpCacheIRTranspiler.cpp 152654
WarpCacheIRTranspiler.h jit_WarpCacheIRTranspiler_h 905
WarpOracle.cpp 39410
WarpOracle.h jit_WarpOracle_h 2233
WarpSnapshot.cpp 12881
WarpSnapshot.h 19322
WasmBCE.cpp 4584
WasmBCE.h jit_wasmbce_h 964
XrayJitInfo.cpp 613
arm 25
arm64 20
mips-shared 19
mips32 20
mips64 20
moz.build 8554
none 10
shared 18
x64 16
x86 16
x86-shared 22
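
Illustrative sketches follow for the entries that reference them above. None of this is SpiderMonkey's actual code; every type, function, and constant introduced here is a hypothetical stand-in and is named as such.

The AtomicOperations.h entry names special compiler intrinsics as one implementation strategy. This is a minimal sketch of that strategy using the GCC/Clang __atomic builtins; the sketch namespace and function names are invented, and a real implementation must additionally match the code the JIT emits for its Atomics operations, which a sketch like this cannot guarantee.

    #include <cstddef>
    #include <cstdint>

    namespace sketch {

    // Sequentially consistent ("SeqCst") load. Only meaningful for the small
    // integral sizes the JIT's atomics operate on (1, 2, 4, 8 bytes).
    template <typename T>
    inline T AtomicLoadSeqCst(const T* addr) {
      return __atomic_load_n(addr, __ATOMIC_SEQ_CST);
    }

    // Sequentially consistent store.
    template <typename T>
    inline void AtomicStoreSeqCst(T* addr, T val) {
      __atomic_store_n(addr, val, __ATOMIC_SEQ_CST);
    }

    // "SafeWhenRacy" byte copy: may read or write garbage under a race, but
    // must not be C++ undefined behavior. Copying through relaxed atomic byte
    // accesses keeps the compiler from assuming the memory is race-free.
    inline void MemcpySafeWhenRacy(std::uint8_t* dst, const std::uint8_t* src,
                                   std::size_t n) {
      for (std::size_t i = 0; i < n; i++) {
        __atomic_store_n(dst + i, __atomic_load_n(src + i, __ATOMIC_RELAXED),
                         __ATOMIC_RELAXED);
      }
    }

    }  // namespace sketch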
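
The BaselineBailouts.cpp entry describes a buffer that is written from high addresses to low, mirroring the C stack. This is a minimal sketch, with invented names, of that writing scheme only; the real BaselineStackBuilder also reallocates transparently and tracks the virtual stack address each value will eventually have.

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    class DownwardWriterSketch {
      std::vector<std::uint8_t> buf_;  // heap block standing in for the rebuilt stack
      std::size_t top_;                // offset of the lowest byte written so far

     public:
      explicit DownwardWriterSketch(std::size_t capacity)
          : buf_(capacity), top_(capacity) {}

      // Write a value below everything written so far, the way a push grows a
      // C stack toward lower addresses. (The real builder enlarges the buffer
      // instead of failing when it runs out of room.)
      template <typename T>
      bool write(const T& v) {
        if (top_ < sizeof(T)) {
          return false;
        }
        top_ -= sizeof(T);
        std::memcpy(buf_.data() + top_, &v, sizeof(T));
        return true;
      }

      // Current heap address of the most recently written slot; distinct from
      // the virtual address the slot will occupy once copied onto the stack.
      void* frameTop() { return buf_.data() + top_; }
    };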
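
The IonTypes.h entry lists five ways a bailout is handled. This sketch only restates that dispatch as code so the cases are concrete; the enumerators, helper methods, and retry threshold are all invented, not the real BailoutKind values or FinishBailoutToBaseline logic.

    // Hypothetical stand-ins for the five cases described in the listing.
    enum class BailoutKindSketch {
      TranspiledCacheIR,      // 1: the failing baseline IC guard respecializes
      InvalidatedAssumption,  // 2: e.g. UninitializedLexical
      MirOptimization,        // 3: e.g. LICM, EagerTruncation, HoistBoundsCheck
      Unsupported,            // 4: e.g. exceptions Warp cannot catch
      Benign                  // 5: e.g. the |debugger| statement
    };

    struct ScriptSketch {
      int unsupportedBailouts = 0;
      void flagAvoidAssumption() {}
      void flagConservativeOptimization() {}
      void invalidateAndDisableWarp() {}
    };

    void FinishBailoutSketch(ScriptSketch& script, BailoutKindSketch kind) {
      switch (kind) {
        case BailoutKindSketch::TranspiledCacheIR:
          // Nothing recorded here: resuming in baseline fails the matching IC
          // guard, which updates the CacheIR data seen on recompilation.
          break;
        case BailoutKindSketch::InvalidatedAssumption:
          script.flagAvoidAssumption();
          break;
        case BailoutKindSketch::MirOptimization:
          script.flagConservativeOptimization();
          break;
        case BailoutKindSketch::Unsupported:
          if (++script.unsupportedBailouts > 10) {  // made-up threshold
            script.invalidateAndDisableWarp();
          }
          break;
        case BailoutKindSketch::Benign:
          break;  // ignored
      }
    }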
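
The JitcodeMap.h entry describes a top-level table of entries ordered by nativeStartAddr. This sketch shows only the shape of that lookup, with an ordered std::map standing in for the real splay tree and invented type names: the entry covering a code address is the last entry starting at or before it, provided the address is below its end.

    #include <cstdint>
    #include <map>

    struct EntrySketch {
      std::uintptr_t nativeStartAddr;
      std::uintptr_t nativeEndAddr;
      // ... per-kind payload mapping native offsets to (JSScript*, jsbytecode*)
    };

    class GlobalTableSketch {
      std::map<std::uintptr_t, EntrySketch> entries_;  // keyed by nativeStartAddr

     public:
      void add(const EntrySketch& e) { entries_[e.nativeStartAddr] = e; }

      // Return the entry whose [nativeStartAddr, nativeEndAddr) contains addr.
      const EntrySketch* lookup(std::uintptr_t addr) const {
        auto it = entries_.upper_bound(addr);  // first entry starting after addr
        if (it == entries_.begin()) {
          return nullptr;
        }
        --it;  // last entry starting at or before addr
        return addr < it->second.nativeEndAddr ? &it->second : nullptr;
      }
    };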
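
The ValueNumbering.cpp entry sketches a pessimistic GVN that hashes values and redirects uses as it goes. This toy handles straight-line code only (no RPO walk, no dominance checks) and uses an invented three-address IR; it shows just the core idea that a definition's value number is the id of the first equivalent definition.

    #include <cstdio>
    #include <map>
    #include <string>
    #include <tuple>
    #include <vector>

    struct Ins {
      std::string op;      // "const" or "add"
      int a = -1, b = -1;  // operand ids (earlier instructions), -1 if unused
      int imm = 0;         // payload for "const"
    };

    int main() {
      // t0 = const 1; t1 = const 2; t2 = add t0, t1; t3 = add t0, t1
      std::vector<Ins> code = {{"const", -1, -1, 1},
                               {"const", -1, -1, 2},
                               {"add", 0, 1},
                               {"add", 0, 1}};

      std::map<std::tuple<std::string, int, int, int>, int> table;
      std::vector<int> vn(code.size());

      for (int i = 0; i < static_cast<int>(code.size()); i++) {
        Ins& ins = code[i];
        // Redirect operands through their value numbers first, the eager
        // analogue of replaceAllUsesWith in the notes above.
        if (ins.a >= 0) ins.a = vn[ins.a];
        if (ins.b >= 0) ins.b = vn[ins.b];
        auto key = std::make_tuple(ins.op, ins.a, ins.b, ins.imm);
        auto [it, fresh] = table.emplace(key, i);
        vn[i] = it->second;  // this id, or the id of the earlier equivalent def
        std::printf("t%d -> t%d%s\n", i, vn[i], fresh ? "" : "  (redundant)");
      }
      return 0;
    }

Compiled as C++17, the duplicated add prints as t3 -> t2 (redundant), which is the redirection the real pass would perform on MIR.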