Name Description Size
ABIArgGenerator.h 2623
ABIFunctionList-inl.h 12521
ABIFunctions.h 2678
AliasAnalysis.cpp 10458
AliasAnalysis.h 1453
AlignmentMaskAnalysis.cpp 3205
AlignmentMaskAnalysis.h 694
arm 25
arm64 20
Assembler.h 940
AtomicOp.h 2983
AtomicOperations.h [SMDOC] Atomic Operations 14537
    The atomic operations layer defines types and functions for JIT-compatible
    atomic operations. The fundamental constraints on the functions are:
    - Their realization here MUST be compatible with code the JIT generates for
      its Atomics operations, so that an atomic access from the interpreter or
      runtime - from any C++ code - really is atomic relative to a concurrent,
      compatible atomic access from jitted code. That is, these primitives
      expose JIT-compatible atomicity functionality to C++.
    - Accesses may race without creating C++ undefined behavior: atomic
      accesses (marked "SeqCst") may race with non-atomic accesses (marked
      "SafeWhenRacy"); overlapping but non-matching, and hence incompatible,
      atomic accesses may race; and non-atomic accesses may race. The effects
      of races need not be predictable, so garbage can be produced by a read or
      written by a write, but the effects must be benign: the program must
      continue to run, and only the memory in the union of addresses named in
      the racing accesses may be affected.
    The compatibility constraint means that if the JIT makes dynamic decisions
    about how to implement atomic operations, then corresponding dynamic
    decisions MUST be made in the implementations of the functions below. The
    safe-for-races constraint means that, by and large, it is hard to implement
    these primitives in C++; see the implementation notes below.
    The "SeqCst" suffix on operations means such a function's operation must
    have "sequentially consistent" memory ordering; see mfbt/Atomics.h for an
    explanation of this ordering. Note that a "SafeWhenRacy" access does not
    provide the atomicity of a "relaxed atomic" access: it can read or write
    garbage if there's a race.
    Implementation notes: it is not a requirement that these functions be
    inlined; performance is not a great concern. On some platforms these
    functions may call out to code that's generated at run time. In principle
    these functions will not be written in C++, thus making races defined
    behavior if all racy accesses from C++ go via these functions. (Jitted code
    is always safe for races and provides the same guarantees as these
    functions.) The appropriate implementations will be platform-specific, and
    there are some obvious strategies to choose from, sometimes in combination:
    - generating the code at run time with the JIT;
    - hand-written assembler (maybe inline); or
    - using special compiler intrinsics or directives.
    Trusting the compiler not to generate code that blows up on a race
    definitely won't work in the presence of TSan, or even of optimizing
    compilers in seemingly-"innocuous" conditions. (See
    https://www.usenix.org/legacy/event/hotpar11/tech/final_files/Boehm.pdf
    for details.)
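To make the "SeqCst" vs. "SafeWhenRacy" split concrete, here is a minimal C++ sketch. All names are hypothetical stand-ins, not the real js::jit::AtomicOperations API, and std::atomic/volatile stand in for the platform-specific or runtime-generated implementations the notes call for:

```cpp
#include <atomic>
#include <cstdint>

namespace sketch {

// "SeqCst": sequentially consistent, atomic relative to jitted Atomics code.
// (A real implementation must match the code the JIT emits; std::atomic is
// only an illustration and assumes |addr| is suitably aligned.)
inline uint32_t LoadSeqCst(uint32_t* addr) {
  return reinterpret_cast<std::atomic<uint32_t>*>(addr)->load(
      std::memory_order_seq_cst);
}

inline void StoreSeqCst(uint32_t* addr, uint32_t val) {
  reinterpret_cast<std::atomic<uint32_t>*>(addr)->store(
      val, std::memory_order_seq_cst);
}

// "SafeWhenRacy": may read or write garbage under a race, but the race must
// be benign - no C++ undefined behavior. A volatile access approximates that
// here; as the notes say, plain C++ cannot truly guarantee it, hence the
// assembler or runtime-generated implementations.
inline uint32_t LoadSafeWhenRacy(uint32_t* addr) {
  return *reinterpret_cast<volatile uint32_t*>(addr);
}

}  // namespace sketch
```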
AutoJitContextAlloc.h 984
AutoWritableJitCode.h 2999
BacktrackingAllocator.cpp 113222
BacktrackingAllocator.h 27148
Bailouts.cpp 11121
Bailouts.h 8877
BaselineBailouts.cpp 73397
    BaselineStackBuilder helps abstract the process of rebuilding the C stack
    on the heap. It takes a bailout iterator and keeps track of the point on
    the C stack from which the reconstructed frames will be written. It exposes
    methods to write data into the heap memory storing the reconstructed stack,
    and methods to easily calculate addresses: both the virtual address that a
    particular value will have when it is eventually copied onto the stack, and
    the current actual address of that value (whether in the heap-allocated
    portion being constructed or on the existing stack). The abstraction
    handles transparent reallocation of the heap memory when it needs to be
    enlarged to accommodate new data. As with the C stack, the data written to
    the reconstructed stack grows from high to low in memory. The lowest region
    of the allocated memory contains a BaselineBailoutInfo structure that
    points to the start and end of the written data.
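A minimal sketch of the high-to-low writing scheme described above, under hypothetical names (the real BaselineStackBuilder also tracks frame pointers and writes the BaselineBailoutInfo header; this is not its actual interface):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical: values are written into a heap buffer from the top down,
// mirroring a C stack that grows toward lower addresses.
class DownwardStackWriter {
  std::vector<uint8_t> buf_;
  size_t cursor_;  // offset of the lowest byte written so far

 public:
  explicit DownwardStackWriter(size_t capacity)
      : buf_(capacity), cursor_(capacity) {}

  // Write |n| bytes immediately below everything written so far,
  // transparently enlarging the buffer if needed.
  void write(const void* src, size_t n) {
    if (n > cursor_) grow(n);
    cursor_ -= n;
    std::memcpy(buf_.data() + cursor_, src, n);
  }

  // The eventual on-stack address of the byte at buffer offset |off|, given
  // where the top (highest address) of the reconstructed data will land;
  // cf. the "virtual address" in the description above.
  uintptr_t virtualAddressOf(uintptr_t eventualTop, size_t off) const {
    return eventualTop - (buf_.size() - off);
  }

 private:
  void grow(size_t need) {
    const size_t written = buf_.size() - cursor_;
    std::vector<uint8_t> bigger(buf_.size() + std::max(buf_.size(), need));
    // Keep already-written data at the high end of the enlarged buffer.
    std::memcpy(bigger.data() + bigger.size() - written,
                buf_.data() + cursor_, written);
    cursor_ = bigger.size() - written;
    buf_ = std::move(bigger);
  }
};
```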
BaselineCacheIRCompiler.cpp 105435
BaselineCacheIRCompiler.h 4584
BaselineCodeGen.cpp 200612
BaselineCodeGen.h 18373
BaselineDebugModeOSR.cpp 19218
BaselineDebugModeOSR.h 962
BaselineFrame-inl.h 3951
BaselineFrame.cpp 5548
BaselineFrame.h 15590
BaselineFrameInfo-inl.h 1372
BaselineFrameInfo.cpp 6887
BaselineFrameInfo.h 13570
BaselineIC.cpp 76641
BaselineIC.h 16252
BaselineICList.h 1991
BaselineJIT.cpp 34446
BaselineJIT.h 21613
BitSet.cpp 2627
BitSet.h 4250
BytecodeAnalysis.cpp 8867
BytecodeAnalysis.h 2291
CacheIR.cpp 375577
CacheIR.h 73006
CacheIRCompiler.cpp 289592
CacheIRCompiler.h 43962
CacheIRHealth.cpp 13172
CacheIRHealth.h 4353
CacheIROps.yaml 47629
CacheIRSpewer.cpp 12009
CacheIRSpewer.h 3099
CalleeToken.h 2242
CodeGenerator.cpp 553654
CodeGenerator.h 15650
CompactBuffer.h 7020
CompileInfo.h 14327
CompileWrappers.cpp 7175
CompileWrappers.h 4152
Disassemble.cpp 3139
Disassemble.h 680
EdgeCaseAnalysis.cpp 1461
EdgeCaseAnalysis.h 700
EffectiveAddressAnalysis.cpp 7037
EffectiveAddressAnalysis.h 874
ExecutableAllocator.cpp 10476
ExecutableAllocator.h 6626
FixedList.h 2249
FlushICache.h Flush the instruction cache of instructions in an address range. 1274
FoldLinearArithConstants.cpp 3672
FoldLinearArithConstants.h 639
GenerateCacheIRFiles.py Generates CacheIR headers; emitted files are stamped "This file is generated by jit/GenerateCacheIRFiles.py. Do not edit!" 19465
GenerateLIRFiles.py Generates LIR headers; emitted files are stamped "This file is generated by jit/GenerateLIRFiles.py. Do not edit!" 9677
GenerateMIRFiles.py Generates MIR headers; emitted files are stamped "This file is generated by jit/GenerateMIRFiles.py. Do not edit!" 12632
ICState.h 6530
ICStubSpace.h 2013
InlinableNatives.cpp 12107
InlinableNatives.h 10851
InlineList.h 15550
InlineScriptTree-inl.h 2097
InlineScriptTree.h 3338
InstructionReordering.cpp 7739
InstructionReordering.h 593
Invalidation.h 1881
Ion.cpp 83561
Ion.h 4528
IonAnalysis.cpp 131484
IonAnalysis.h 5972
IonCacheIRCompiler.cpp 70014
IonCacheIRCompiler.h 1922
IonCompileTask.cpp 6446
IonCompileTask.h 2528
IonIC.cpp 20526
IonIC.h 19341
IonOptimizationLevels.cpp 4628
IonOptimizationLevels.h 5128
IonScript.h 19984
IonTypes.h [SMDOC] Avoiding repeated bailouts / invalidations 31179
    To avoid getting trapped in a "compilation -> bailout -> invalidation ->
    recompilation -> bailout -> invalidation -> ..." loop, every snapshot in
    Warp code is assigned a BailoutKind. If we bail out at that snapshot,
    FinishBailoutToBaseline will examine the BailoutKind and take appropriate
    action. In general:
    1. If the bailing instruction comes from transpiled CacheIR, then when we
       bail out and continue execution in the baseline interpreter, the
       corresponding stub should fail a guard. As a result, we will either
       increment the enteredCount for a subsequent stub or attach a new stub,
       either of which will prevent WarpOracle from transpiling the failing
       stub when we recompile. Note: this means that every CacheIR op that can
       bail out in Warp must have an equivalent guard in the baseline CacheIR
       implementation. FirstExecution works according to the same principles:
       we have never hit this IC before, but after we bail to baseline we will
       attach a stub and recompile with better CacheIR information.
    2. If the bailout occurs because an assumption we made in WarpBuilder was
       invalidated, then FinishBailoutToBaseline will set a flag on the script
       to avoid that assumption in the future: for example,
       UninitializedLexical.
    3. Similarly, if the bailing instruction is generated or modified by a MIR
       optimization, then FinishBailoutToBaseline will set a flag on the script
       to make that optimization more conservative in the future. Examples
       include LICM, EagerTruncation, and HoistBoundsCheck.
    4. Some bailouts can't be handled in Warp, even after a recompile. For
       example, Warp does not support catching exceptions. If this happens too
       often, then the cost of bailing out repeatedly outweighs the benefit of
       Warp compilation, so we invalidate the script and disable Warp
       compilation.
    5. Some bailouts don't happen in performance-sensitive code: for example,
       the |debugger| statement. We just ignore those.
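The five cases above amount to a switch over the snapshot's kind. A hedged sketch follows; the kinds, fields, and function below are hypothetical simplifications, not the real BailoutKind enum or FinishBailoutToBaseline:

```cpp
// Hypothetical kinds standing in for the five cases enumerated above.
enum class KindSketch {
  TranspiledCacheIR,          // 1. baseline stub fails a guard; nothing to do
  SpeculativeWarpAssumption,  // 2. e.g. UninitializedLexical: flag the script
  MirOptimization,            // 3. e.g. LICM: be more conservative next time
  Unhandleable,               // 4. e.g. exceptions: may disable compilation
  Ignorable,                  // 5. e.g. |debugger|: ignore
};

struct ScriptSketch {
  int bailoutCount = 0;
  bool warpDisabled = false;
  unsigned flags = 0;  // per-script "avoid this assumption" bits
};

void HandleBailoutSketch(ScriptSketch& script, KindSketch kind,
                         unsigned flagForKind, int tooMany) {
  switch (kind) {
    case KindSketch::TranspiledCacheIR:
      break;  // the failing baseline stub records the new information
    case KindSketch::SpeculativeWarpAssumption:
    case KindSketch::MirOptimization:
      script.flags |= flagForKind;  // avoid the assumption / tame the pass
      break;
    case KindSketch::Unhandleable:
      if (++script.bailoutCount > tooMany) script.warpDisabled = true;
      break;
    case KindSketch::Ignorable:
      break;
  }
}
```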
Jit.cpp 6520
Jit.h 1013
JitAllocPolicy.h 5279
JitCode.h 5633
JitcodeMap.cpp 43865
JitcodeMap.h 36796
    The Ion jitcode map implements tables that allow mapping from addresses in
    Ion jitcode to the list of (JSScript*, jsbytecode*) pairs that are
    implicitly active in the frame at that point in the native code. To
    represent this information efficiently, a multi-level table is used. At the
    top level, a global splay tree of JitcodeGlobalEntry instances describes
    the mapping for each individual IonCode script generated by compilation.
    The entries are ordered by their nativeStartAddr. Every entry in the table
    is of fixed size, but there are different entry types, distinguished by the
    kind field.
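The top-level lookup described above can be sketched with std::map standing in for the splay tree; the types and names below are hypothetical:

```cpp
#include <cstdint>
#include <map>

// Hypothetical entry: the real JitcodeGlobalEntry also carries a kind field
// and per-kind payload resolving addresses to (JSScript*, jsbytecode*) pairs.
struct GlobalEntrySketch {
  uintptr_t nativeStartAddr;
  uintptr_t nativeEndAddr;
};

// Keyed (and therefore ordered) by nativeStartAddr, as in the text.
using GlobalTableSketch = std::map<uintptr_t, GlobalEntrySketch>;

const GlobalEntrySketch* Lookup(const GlobalTableSketch& table,
                                uintptr_t addr) {
  auto it = table.upper_bound(addr);  // first entry starting strictly after
  if (it == table.begin()) return nullptr;
  --it;  // entry with the greatest nativeStartAddr <= addr
  return addr < it->second.nativeEndAddr ? &it->second : nullptr;
}
```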
JitCommon.h 2292
JitContext.cpp 3374
JitContext.h 4888
JitFrames-inl.h 862
JitFrames.cpp 82154
JitFrames.h 24982
JitOptions.cpp 12435
JitOptions.h 4347
JitRealm.h 5649
JitRuntime.h 15586
JitScript-inl.h 1159
JitScript.cpp 20961
JitScript.h [SMDOC] ICScript Lifetimes 19504
    An ICScript owns an array of ICEntries, each of which owns a linked list of
    ICStubs. A JitScript contains an embedded ICScript. If it has done any
    trial inlining, it also owns an InliningRoot. The InliningRoot owns all of
    the ICScripts that have been created for inlining into the corresponding
    JitScript. This ties the lifetime of the inlined ICScripts to the lifetime
    of the JitScript itself.
    We store pointers to ICScripts in two other places: on the stack in
    BaselineFrame, and in IC stubs for CallInlinedFunction. The ICScript
    pointer in a BaselineFrame either points to the ICScript embedded in the
    JitScript for that frame, or to an inlined ICScript owned by a caller. In
    each case, there must be a frame on the stack corresponding to the
    JitScript that owns the current ICScript, which will keep the ICScript
    alive.
    Each ICStub is owned by an ICScript and, indirectly, a JitScript. An ICStub
    that uses CallInlinedFunction contains an ICScript for use by the callee.
    The ICStub and the callee ICScript are always owned by the same JitScript,
    so the callee ICScript will not be freed while the ICStub is alive.
    The lifetime of an ICScript is independent of the lifetimes of the
    BaselineScript and IonScript/WarpScript to which it corresponds. They can
    be destroyed and recreated, and the ICScript will remain valid.
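The ownership graph described above, as a hedged sketch; the types are simplified hypothetical stand-ins for the real ICScript, InliningRoot, and JitScript classes:

```cpp
#include <memory>
#include <vector>

struct ICStubSketch {};  // stands in for an ICStub

struct ICScriptSketch {
  // Each ICEntry owns a linked list of stubs; a plain vector of vectors
  // stands in for that here.
  std::vector<std::vector<ICStubSketch>> icEntries;
};

struct InliningRootSketch {
  // Owns every ICScript created for inlining into the owning JitScript,
  // tying inlined-ICScript lifetime to the JitScript itself.
  std::vector<std::unique_ptr<ICScriptSketch>> inlinedICScripts;
};

struct JitScriptSketch {
  ICScriptSketch embedded;                       // always present
  std::unique_ptr<InliningRootSketch> inlining;  // only after trial inlining
};
```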
JitSpewer.cpp 18702
JitSpewer.h 9920
JitZone.h 5775
JSJitFrameIter-inl.h 2031
JSJitFrameIter.cpp 24286
JSJitFrameIter.h 27663
JSONSpewer.cpp 6998
JSONSpewer.h 1192
KnownClass.cpp 3037
KnownClass.h 879
Label.cpp 883
Label.h 3186
LICM.cpp 13204
LICM.h 629
Linker.cpp 2354
Linker.h 1248
LIR.cpp 19664
LIR.h 62962
LIROps.yaml 67959
Lowering.cpp 205241
Lowering.h 2550
MacroAssembler-inl.h 36821
MacroAssembler.cpp 181296
MacroAssembler.h 235146
mips-shared 19
mips32 20
mips64 20
MIR.cpp 179478
MIR.h Everything needed to build actual MIR instructions: the actual opcodes and instructions, the instruction interface, and use chains. 335247
MIRGenerator.h 5409
MIRGraph.cpp 36880
MIRGraph.h 30086
MIROps.yaml 55196
MoveEmitter.h 932
MoveResolver.cpp 14032
MoveResolver.h 9753
moz.build 8182
none 11
PcScriptCache.h 2434
PerfSpewer.cpp 9048
PerfSpewer.h 2885
ProcessExecutableMemory.cpp 26383
    Inspiration is V8's OS::Allocate in platform-win32.cc. VirtualAlloc takes
    64 KiB chunks out of the virtual address space, so we keep 64 KiB (16-bit)
    alignment. x86: V8's comments say that keeping addresses in the
    [64 MiB, 1 GiB) range tries to avoid the system default DLL mapping space;
    in the end, we get 13 bits of randomness in our selection. x64:
    [2 GiB, 4 TiB), with 25 bits of randomness.
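A minimal sketch of the x86 randomization described above. The helper below is hypothetical; the real code uses the process's CSPRNG and the platform allocation APIs rather than std::mt19937_64:

```cpp
#include <cstdint>
#include <random>

// Pick a 64 KiB-aligned allocation hint in [64 MiB, 1 GiB); the ~2^14
// possible slots correspond to the "13 bits of randomness" cited above.
uintptr_t RandomAllocationHintX86() {
  constexpr uintptr_t kAlign = 64 * 1024;         // VirtualAlloc granularity
  constexpr uintptr_t kLo = 64u * 1024 * 1024;    // 64 MiB
  constexpr uintptr_t kHi = 1024u * 1024 * 1024;  // 1 GiB
  static std::mt19937_64 rng{std::random_device{}()};
  const uintptr_t slots = (kHi - kLo) / kAlign;
  return kLo + (rng() % slots) * kAlign;
}
```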
ProcessExecutableMemory.h 4933
RangeAnalysis.cpp 119581
RangeAnalysis.h 25980
Recover.cpp 59566
Recover.h 28026
RegisterAllocator.cpp 21719
RegisterAllocator.h 11519
Registers.h 10029
RegisterSets.h 40011
RematerializedFrame-inl.h 746
RematerializedFrame.cpp 6447
RematerializedFrame.h 7690
SafepointIndex-inl.h 694
SafepointIndex.cpp 732
SafepointIndex.h 2376
Safepoints.cpp 16576
Safepoints.h 3826
ScalarReplacement.cpp 59037
ScalarReplacement.h 675
ScalarTypeUtils.h 1328
ScriptFromCalleeToken.h 1023
shared 19
SharedICHelpers-inl.h 1064
SharedICHelpers.h 1028
SharedICRegisters.h 1090
ShuffleAnalysis.cpp 25145
ShuffleAnalysis.h 4368
Simulator.h 875
Sink.cpp 9457
Sink.h 598
Snapshots.cpp 21148
Snapshots.h 16205
StackSlotAllocator.h 3426
TemplateObject-inl.h 3227
TemplateObject.h 2528
TrialInlining.cpp 25946
TrialInlining.h [SMDOC] Trial Inlining 5283
    WarpBuilder relies on transpiling CacheIR. When inlining scripted functions
    in WarpBuilder, we want our ICs to be as monomorphic as possible. Functions
    with multiple callers complicate this: an IC in such a function might be
    monomorphic for any given caller, but polymorphic overall, which makes the
    input to WarpBuilder less precise.
    To solve this problem, we do trial inlining. During baseline execution, we
    identify call sites for which it would be useful to have more precise
    inlining data. For each such call site, we allocate a fresh ICScript and
    replace the existing call IC with a new specialized IC that invokes the
    callee using the new ICScript. Other callers of the callee will continue
    using the default ICScript. When we eventually Warp-compile the script, we
    can generate code for the callee using the IC information in our private
    ICScript, which is specialized for its caller. The same approach can be
    used to inline recursively.
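A hedged sketch of the call-site specialization step described above; the types and function are hypothetical simplifications, not the real TrialInliner logic:

```cpp
#include <memory>
#include <vector>

struct ICScriptSketch {};  // stands in for an ICScript

// A call IC either uses the callee's default ICScript (nullptr here) or a
// private ICScript specialized for this particular caller.
struct CallICSketch {
  ICScriptSketch* calleeICScript = nullptr;
};

struct InliningRootSketch {
  std::vector<std::unique_ptr<ICScriptSketch>> owned;
};

// Allocate a fresh ICScript for one interesting call site and point its IC
// at it; other callers of the callee keep using the default ICScript.
void TrialInlineSketch(InliningRootSketch& root, CallICSketch& ic) {
  root.owned.push_back(std::make_unique<ICScriptSketch>());
  ic.calleeICScript = root.owned.back().get();
}
```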
TypeData.h 1314
TypePolicy.cpp 42330
TypePolicy.h 19011
ValueNumbering.cpp [SMDOC] IonMonkey Value Numbering 45775
    Some notes on the main algorithm here:
    - The SSA identifier id() is the value number. We do replaceAllUsesWith as
      we go, so there's always at most one visible value with a given number.
    - Consequently, the GVN algorithm is effectively pessimistic. This means it
      is not as powerful as an optimistic GVN would be, but it is simpler and
      faster.
    - We iterate in RPO, so that when visiting a block, we've already optimized
      and hashed all values in dominating blocks. With occasional exceptions,
      this allows us to do everything in a single pass.
    - When we do use multiple passes, we just re-run the algorithm on the whole
      graph instead of doing sparse propagation. This is a tradeoff to keep the
      algorithm simpler and lighter on inputs that don't have a lot of
      interesting unreachable blocks or degenerate loop induction variables, at
      the expense of being slower on inputs that do. The loop for this always
      terminates, because it only iterates when code is or will be removed, so
      eventually it must stop iterating.
    - Values are not immediately removed from the hash set when they go out of
      scope. Instead, we check for dominance after a lookup. If the dominance
      check fails, the value is removed.
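A minimal, self-contained sketch of one pessimistic GVN pass as described above: visit blocks in RPO, look each instruction up in a hash set keyed on opcode and operand identities, and reuse a hit only if its defining block dominates the current one, evicting stale entries lazily. All types are simplified hypothetical stand-ins for the real MIR classes:

```cpp
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <vector>

struct Block;
struct Instr {
  int op;
  std::vector<Instr*> operands;
  Block* block;
  Instr* repl = nullptr;  // stands in for replaceAllUsesWith
};

struct Block {
  Block* idom = nullptr;  // immediate dominator (nullptr at the entry block)
  std::vector<Instr*> instrs;
  bool dominates(const Block* other) const {
    for (const Block* b = other; b; b = b->idom)
      if (b == this) return true;
    return false;
  }
};

// Congruence key: opcode plus operand identities. Operands defined in
// dominating blocks have already been numbered, since we visit in RPO.
struct Key {
  int op;
  std::vector<Instr*> operands;
  bool operator==(const Key& k) const {
    return op == k.op && operands == k.operands;
  }
};
struct KeyHash {
  size_t operator()(const Key& k) const {
    size_t h = std::hash<int>()(k.op);
    for (Instr* o : k.operands) h = h * 31 + std::hash<Instr*>()(o);
    return h;
  }
};

void RunGVN(const std::vector<Block*>& rpo) {
  std::unordered_map<Key, Instr*, KeyHash> values;
  for (Block* block : rpo) {
    for (Instr* ins : block->instrs) {
      Key key{ins->op, ins->operands};
      auto it = values.find(key);
      if (it != values.end() && !it->second->block->dominates(block)) {
        values.erase(it);  // out of scope: evict after the failed check
        it = values.end();
      }
      if (it != values.end()) {
        ins->repl = it->second;  // at most one visible value per number
      } else {
        values.emplace(key, ins);
      }
    }
  }
}
```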
ValueNumbering.h 4469
VMFunctionList-inl.h 24186
VMFunctions.cpp 90248
VMFunctions.h 26966
WarpBuilder.cpp 105535
WarpBuilder.h 13097
WarpBuilderShared.cpp 2842
WarpBuilderShared.h 10139
WarpCacheIRTranspiler.cpp 162852
WarpCacheIRTranspiler.h 905
WarpOracle.cpp 38425
WarpOracle.h 2233
WarpSnapshot.cpp 12588
WarpSnapshot.h 17636
WasmBCE.cpp 4735
WasmBCE.h 964
x64 16
x86 16
x86-shared 22
XrayJitInfo.cpp 613