Name Description Size Coverage
ABIArgGenerator.h jit_ABIArgGenerator_h 1925 92 %
ABIFunctionList-inl.h 16662 -
ABIFunctions.h jit_VMFunctions_h 2773 100 %
ABIFunctionType.h jit_ABIFunctionType_h 1767 -
ABIFunctionType.yaml 6771 -
AliasAnalysis.cpp 11240 96 %
AliasAnalysis.h jit_AliasAnalysis_h 1376 100 %
AlignmentMaskAnalysis.cpp 3116 41 %
AlignmentMaskAnalysis.h namespace jit 579 100 %
arm -
arm64 -
Assembler.h jit_Assembler_h 983 -
AtomicOp.h jit_AtomicOp_h 3339 100 %
AtomicOperations.h [SMDOC] Atomic Operations The atomic operations layer defines types and functions for JIT-compatible atomic operations. The fundamental constraints on the functions are: - That their realization here MUST be compatible with code the JIT generates for its Atomics operations, so that an atomic access from the interpreter or runtime - from any C++ code - really is atomic relative to a concurrent, compatible atomic access from jitted code. That is, these primitives expose JIT-compatible atomicity functionality to C++. - That accesses may race without creating C++ undefined behavior: atomic accesses (marked "SeqCst") may race with non-atomic accesses (marked "SafeWhenRacy"); overlapping but non-matching, and hence incompatible, atomic accesses may race; and non-atomic accesses may race. The effects of races need not be predictable, so garbage can be produced by a read or written by a write, but the effects must be benign: the program must continue to run, and only the memory in the union of addresses named in the racing accesses may be affected. The compatibility constraint means that if the JIT makes dynamic decisions about how to implement atomic operations then corresponding dynamic decisions MUST be made in the implementations of the functions below. The safe-for-races constraint means that by and large, it is hard to implement these primitives in C++. See "Implementation notes" below. The "SeqCst" suffix on operations means "sequentially consistent" and means such a function's operation must have "sequentially consistent" memory ordering. See mfbt/Atomics.h for an explanation of this memory ordering. Note that a "SafeWhenRacy" access does not provide the atomicity of a "relaxed atomic" access: it can read or write garbage if there's a race. Implementation notes. It's not a requirement that these functions be inlined; performance is not a great concern. On some platforms these functions may call functions that use inline assembly. See GenerateAtomicOperations.py. In principle these functions will not be written in C++, thus making races defined behavior if all racy accesses from C++ go via these functions. (Jitted code will always be safe for races and provides the same guarantees as these functions.) The appropriate implementations will be platform-specific, and there are some obvious implementation strategies to choose from, sometimes in combination: - generating the code at run-time with the JIT; - hand-written assembler (maybe inline); or - using special compiler intrinsics or directives. Trusting the compiler not to generate code that blows up on a race definitely won't work in the presence of TSan, or even of optimizing compilers in seemingly-"innocuous" conditions. (See https://www.usenix.org/legacy/event/hotpar11/tech/final_files/Boehm.pdf for details.) 12954 92 %
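The SeqCst operations this layer exposes can be approximated in portable C++ for illustration. This is only a sketch, not the generated implementation (see GenerateAtomicOperations.py): the function names are hypothetical, and casting plain memory to `std::atomic<T>` assumes matching layout — exactly the kind of compiler-trusting shortcut the notes above warn against for production use.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Hypothetical names modeled on the layer described above; the real
// functions are emitted per-platform by GenerateAtomicOperations.py.
template <typename T>
T AtomicLoadSeqCst(T* addr) {
  // Assumes std::atomic<T> has the same layout as T (true on the
  // platforms this sketch targets, but not guaranteed by the standard).
  return reinterpret_cast<std::atomic<T>*>(addr)->load(
      std::memory_order_seq_cst);
}

template <typename T>
void AtomicStoreSeqCst(T* addr, T val) {
  reinterpret_cast<std::atomic<T>*>(addr)->store(val,
                                                 std::memory_order_seq_cst);
}

template <typename T>
T AtomicExchangeSeqCst(T* addr, T val) {
  // Returns the previous value, atomically replacing it with val.
  return reinterpret_cast<std::atomic<T>*>(addr)->exchange(
      val, std::memory_order_seq_cst);
}
```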
AutoWritableJitCode.h jit_AutoWritableJitCode_h 2141 85 %
BacktrackingAllocator.cpp 183995 92 %
BacktrackingAllocator.h 33910 97 %
Bailouts.cpp exceptionInfo= 13283 99 %
Bailouts.h 8468 100 %
BaselineBailouts.cpp BaselineStackBuilder helps abstract the process of rebuilding the C stack on the heap. It takes a bailout iterator and keeps track of the point on the C stack from which the reconstructed frames will be written. It exposes methods to write data into the heap memory storing the reconstructed stack. It also exposes methods to easily calculate addresses. This includes both the virtual address that a particular value will be at when it's eventually copied onto the stack, as well as the current actual address of that value (whether on the heap-allocated portion being constructed or the existing stack). The abstraction handles transparent re-allocation of the heap memory when it needs to be enlarged to accommodate new data. Similarly to the C stack, the data that's written to the reconstructed stack grows from high to low in memory. The lowest region of the allocated memory contains a BaselineBailoutInfo structure that points to the start and end of the written data. 74027 92 %
BaselineCacheIRCompiler.cpp 143293 97 %
BaselineCacheIRCompiler.h 7507 100 %
BaselineCodeGen.cpp HandlerArgs = 216885 95 %
BaselineCodeGen.h 20811 95 %
BaselineCompileQueue.h 1625 69 %
BaselineCompileTask.cpp static 4708 11 %
BaselineCompileTask.h 5293 55 %
BaselineDebugModeOSR.cpp 19120 88 %
BaselineDebugModeOSR.h 975 -
BaselineFrame-inl.h jit_BaselineFrame_inl_h 4501 95 %
BaselineFrame.cpp 5713 98 %
BaselineFrame.h 14833 100 %
BaselineFrameInfo-inl.h jit_BaselineFrameInfo_inl_h 1257 100 %
BaselineFrameInfo.cpp 6338 83 %
BaselineFrameInfo.h 13003 100 %
BaselineIC.cpp 84936 96 %
BaselineIC.h 17949 100 %
BaselineICList.h jit_BaselineICList_h 2118 -
BaselineJIT.cpp 44444 80 %
BaselineJIT.h 21042 96 %
BitSet.cpp 2512 18 %
BitSet.h jit_BitSet_h 4006 100 %
BranchHinting.cpp This pass propagates branch hints across the control flow graph using dominator information. Branch hints are read at compile-time for specific basic blocks. This pass propagates this property to successor blocks in a conservative way. The algorithm works as follows: - The CFG is traversed in reverse-post-order (RPO). Dominator parents are visited before the blocks they dominate. - For each basic block, if we have a hint, it is propagated to the blocks it immediately dominates (its children in the dominator tree). - The pass will then continue to work its way through the CFG. Because we only propagate along dominator-tree edges (parent -> child), each block receives information from exactly one source. This avoids conflicts that would otherwise arise at CFG join points. 2407 71 %
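The propagation rule described for BranchHinting.cpp can be sketched in a few lines. This is a toy model, not the real pass: `Hint`, `Block`, and `PropagateHints` are hypothetical names, and a vector of blocks indexed in RPO stands in for the MIR control flow graph.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

enum class Hint : uint8_t { None, Likely, Unlikely };

struct Block {
  Hint hint = Hint::None;
  std::vector<int> idomChildren;  // blocks this block immediately dominates
};

// Blocks are stored in RPO, so a dominator parent is visited before its
// children. A block's hint flows only along dominator-tree edges, so each
// block receives a hint from exactly one source and joins cause no conflicts.
void PropagateHints(std::vector<Block>& rpo) {
  for (Block& b : rpo) {
    if (b.hint == Hint::None) continue;
    for (int c : b.idomChildren) {
      if (rpo[c].hint == Hint::None) {  // keep an explicit compile-time hint
        rpo[c].hint = b.hint;
      }
    }
  }
}
```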
BranchHinting.h jit_BranchHinting_h 523 -
BranchPruning.cpp 19775 95 %
BranchPruning.h jit_BranchPruning_h 878 -
BytecodeAnalysis.cpp stackDepth= 10890 98 %
BytecodeAnalysis.h jit_BytecodeAnalysis_h 2489 100 %
CacheIR.cpp 567974 95 %
CacheIR.h 19929 95 %
CacheIRAOT.cpp 5736 -
CacheIRAOT.h jit_CacheIRAOT_h 991 -
CacheIRCloner.h jit_CacheIRCloner_h 2102 100 %
CacheIRCompiler.cpp 400247 94 %
CacheIRCompiler.h 49919 97 %
CacheIRGenerator.h 49886 100 %
CacheIRHealth.cpp 13400 0 %
CacheIRHealth.h JS_CACHEIR_SPEW 4369 -
CacheIROps.yaml 67440 -
CacheIRReader.h 5432 99 %
CacheIRSpewer.cpp 21001 22 %
CacheIRSpewer.h JS_CACHEIR_SPEW 3541 26 %
CacheIRWriter.h 26820 98 %
CalleeToken.h namespace js::jit 2127 100 %
CodeGenerator.cpp 805054 95 %
CodeGenerator.h 20527 100 %
CompactBuffer.h 6500 86 %
CompilationDependencyTracker.h jit_CompilationDependencyTracker_h 2882 96 %
CompileInfo.h env chain and argument obj 14687 100 %
CompileWrappers.cpp static 7479 89 %
CompileWrappers.h 4657 100 %
Disassemble.cpp 4370 100 %
Disassemble.h jit_Disassemble_h 565 -
DominatorTree.cpp virtual root 10763 93 %
DominatorTree.h jit_DominatorTree_h 503 -
EdgeCaseAnalysis.cpp 1378 100 %
EdgeCaseAnalysis.h jit_EdgeCaseAnalysis_h 597 -
EffectiveAddressAnalysis.cpp 7536 89 %
EffectiveAddressAnalysis.h namespace jit 771 100 %
ExecutableAllocator.cpp Copyright (C) 2008 Apple Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 10467 93 %
ExecutableAllocator.h Copyright (C) 2008 Apple Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 6233 100 %
FixedList.h jit_FixedList_h 2134 89 %
FlushICache.cpp 5433 -
FlushICache.h Flush the instruction cache of instructions in an address range. 3167 100 %
FoldLinearArithConstants.cpp namespace jit 3582 96 %
FoldLinearArithConstants.h namespace jit 574 -
FoldTests.cpp 25198 93 %
FoldTests.h jit_FoldTests_h 561 -
GenerateABIFunctionType.py \ /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #ifndef %(includeguard)s #define %(includeguard)s /* This file is generated by jit/GenerateABIFunctionType.py. Do not edit! */ %(contents)s #endif // %(includeguard)s 20944 -
GenerateAtomicOperations.py INLINE_ATTR void %(fun_name)s() { asm volatile ("mfence\n\t" ::: "memory"); } 36179 -
GenerateCacheIRFiles.py \ /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #ifndef %(includeguard)s #define %(includeguard)s /* This file is generated by jit/GenerateCacheIRFiles.py. Do not edit! */ %(contents)s #endif // %(includeguard)s 23537 -
GenerateLIRFiles.py \ /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #ifndef %(includeguard)s #define %(includeguard)s /* This file is generated by jit/GenerateLIRFiles.py. Do not edit! */ %(contents)s #endif // %(includeguard)s 17736 -
GenerateMIRFiles.py \ /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #ifndef %(includeguard)s #define %(includeguard)s /* This file is generated by jit/GenerateMIRFiles.py. Do not edit! */ %(contents)s #endif // %(includeguard)s 17903 -
GraphSpewer.cpp 7850 -
GraphSpewer.h indent 1559 -
ICState.h 7334 100 %
ICStubSpace.h jit_ICStubSpace_h 1070 100 %
InlinableNatives.cpp 13989 66 %
InlinableNatives.h 14571 -
InlineList.h 16109 100 %
InlineScriptTree-inl.h jit_InlineScriptTree_inl_h 2154 59 %
InlineScriptTree.h jit_InlineScriptTree_h 4278 100 %
InstructionReordering.cpp 8409 98 %
InstructionReordering.h 497 -
InterpreterEntryTrampoline.cpp countIncludesThis = 8474 3 %
InterpreterEntryTrampoline.h The EntryTrampolineMap is used to cache the trampoline code for each script as they are created. These trampolines are created only under --emit-interpreter-entry and are used to identify which script is being interpreted when profiling with external profilers such as perf. The map owns the JitCode objects that are created for each script, and keeps them alive at least as long as the scripts associated with them in case we need to re-enter the trampoline again. As each script is finalized, the entry is manually removed from the table in BaseScript::finalize, which also releases the trampoline code associated with it. During a moving GC, the table is rekeyed in case any scripts have relocated. 2273 0 %
Invalidation.h jit_Invalidation_h 2138 100 %
InvalidationScriptSet.h jit_InvalidationScriptSet_h 1461 100 %
Ion.cpp 86601 91 %
Ion.h this 4161 88 %
IonAnalysis.cpp 78881 92 %
IonAnalysis.h 5522 92 %
IonCacheIRCompiler.cpp 83994 91 %
IonCacheIRCompiler.h jit_IonCacheIRCompiler_h 3038 100 %
IonCompileTask.cpp 7604 97 %
IonCompileTask.h jit_IonCompileTask_h 3308 94 %
IonGenericCallStub.h jit_IonGenericCallStub_h 1308 100 %
IonIC.cpp static 22183 89 %
IonIC.h 21243 100 %
IonOptimizationLevels.cpp static 3621 98 %
IonOptimizationLevels.h 6555 100 %
IonScript.h 19234 90 %
IonTypes.h [SMDOC] Avoiding repeated bailouts / invalidations To avoid getting trapped in a "compilation -> bailout -> invalidation -> recompilation -> bailout -> invalidation -> ..." loop, every snapshot in Warp code is assigned a BailoutKind. If we bail out at that snapshot, FinishBailoutToBaseline will examine the BailoutKind and take appropriate action. In general: 1. If the bailing instruction comes from transpiled CacheIR, then when we bail out and continue execution in the baseline interpreter, the corresponding stub should fail a guard. As a result, we will either increment the enteredCount for a subsequent stub or attach a new stub, either of which will prevent WarpOracle from transpiling the failing stub when we recompile. Note: this means that every CacheIR op that can bail out in Warp must have an equivalent guard in the baseline CacheIR implementation. FirstExecution works according to the same principles: we have never hit this IC before, but after we bail to baseline we will attach a stub and recompile with better CacheIR information. 2. If the bailout occurs because an assumption we made in WarpBuilder was invalidated, then FinishBailoutToBaseline will set a flag on the script to avoid that assumption in the future: for example, UninitializedLexical. 3. Similarly, if the bailing instruction is generated or modified by a MIR optimization, then FinishBailoutToBaseline will set a flag on the script to make that optimization more conservative in the future. Examples include LICM, EagerTruncation, and HoistBoundsCheck. 4. Some bailouts can't be handled in Warp, even after a recompile. For example, Warp does not support catching exceptions. If this happens too often, then the cost of bailing out repeatedly outweighs the benefit of Warp compilation, so we invalidate the script and disable Warp compilation. 5. Some bailouts don't happen in performance-sensitive code: for example, the |debugger| statement. We just ignore those. 28122 89 %
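The five cases above amount to a dispatch on the bailout's kind. A hypothetical distillation follows; the real BailoutKind enum and FinishBailoutToBaseline logic are far richer, and every name here is illustrative only.

```cpp
#include <cassert>
#include <string>

// Illustrative only: stand-ins for a few representative BailoutKinds.
enum class Kind {
  TranspiledCacheIR,     // case 1: guard fails in the baseline stub
  FirstExecution,        // case 1: never hit this IC before
  UninitializedLexical,  // case 2: a WarpBuilder assumption was invalidated
  EagerTruncation,       // case 3: a MIR optimization was too aggressive
  Uncatchable,           // case 4: Warp can't handle this at all
  Debugger               // case 5: not performance-sensitive
};

std::string ActionFor(Kind k) {
  switch (k) {
    case Kind::TranspiledCacheIR:
    case Kind::FirstExecution:
      return "fail a baseline guard; attach a better stub before recompiling";
    case Kind::UninitializedLexical:
      return "flag the script to drop the invalidated assumption";
    case Kind::EagerTruncation:
      return "flag the script to make the optimization more conservative";
    case Kind::Uncatchable:
      return "invalidate and disable Warp if this happens too often";
    case Kind::Debugger:
      return "ignore";
  }
  return "";
}
```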
Jit.cpp osrFrame = 7879 93 %
Jit.h jit_Jit_h 1034 -
JitAllocPolicy.h 5595 96 %
JitCode.h 6048 97 %
JitcodeMap.cpp 37472 92 %
JitcodeMap.h The jitcode map implements tables that allow mapping from addresses in jitcode to the list of scripts that are implicitly active in the frame at that point in the native code. To represent this information efficiently, a multi-level table is used. At the top level, a global AVL tree of JitcodeGlobalEntry entries describes the mapping for each individual JitCode generated by compilation. The entries are ordered by their nativeStartAddr. Every entry in the table is of fixed size, but there are different entry types, distinguished by the kind field. 28339 92 %
JitCommon.h 2177 -
JitContext.cpp 3846 85 %
JitContext.h jit_JitContext_h 4634 -
Jitdump.h This file provides the necessary data structures to meet the JitDump specification as of https://github.com/torvalds/linux/blob/f2906aa863381afb0015a9eb7fefad885d4e5a56/tools/perf/Documentation/jitdump-specification.txt 1477 -
JitFrames-inl.h jit_JitFrames_inl_h 747 80 %
JitFrames.cpp 94506 83 %
JitFrames.h 25939 100 %
JitHints-inl.h jit_JitHints_inl_h 1834 100 %
JitHints.cpp 4619 91 %
JitHints.h [SMDOC] JitHintsMap The JIT hints map is an in-process cache used to collect Baseline and Ion JIT hints, so that we can skip as much of the warmup as possible and jump straight into those tiers. Whenever a script enters one of these tiers, a hint is recorded in this cache using the script's filename+sourceStart value. If we ever encounter this script again later, e.g. during a navigation, we try to eagerly compile it into baseline and ion based on its previous execution history. 6507 90 %
JitOptions.cpp 18133 55 %
JitOptions.h 6316 88 %
JitRuntime.h 18280 94 %
JitScript-inl.h jit_JitScript_inl_h 1029 100 %
JitScript.cpp depth= 33399 86 %
JitScript.h [SMDOC] ICScript Lifetimes An ICScript owns an array of ICEntries, each of which owns a linked list of ICStubs. A JitScript contains an embedded ICScript. If it has done any trial inlining, it also owns an InliningRoot. The InliningRoot owns all of the ICScripts that have been created for inlining into the corresponding JitScript. This ties the lifetime of the inlined ICScripts to the lifetime of the JitScript itself. We store pointers to ICScripts in two other places: on the stack in BaselineFrame, and in IC stubs for CallInlinedFunction. The ICScript pointer in a BaselineFrame either points to the ICScript embedded in the JitScript for that frame, or to an inlined ICScript owned by a caller. In each case, there must be a frame on the stack corresponding to the JitScript that owns the current ICScript, which will keep the ICScript alive. Each ICStub is owned by an ICScript and, indirectly, a JitScript. An ICStub that uses CallInlinedFunction contains an ICScript for use by the callee. The ICStub and the callee ICScript are always owned by the same JitScript, so the callee ICScript will not be freed while the ICStub is alive. The lifetime of an ICScript is independent of the lifetimes of the BaselineScript and IonScript/WarpScript to which it corresponds. They can be destroyed and recreated, and the ICScript will remain valid. When we discard JIT code, we mark ICScripts that are active on the stack as active and then purge all of the inactive ICScripts. We also purge ICStubs, including the CallInlinedFunction stub at the trial inlining call site, and reset the ICStates to allow trial inlining again later. If there's a BaselineFrame for an inlined ICScript, we'll preserve both this ICScript and the IC chain for the call site in the caller's ICScript. See ICScript::purgeStubs and ICScript::purgeInactiveICScripts. 22363 94 %
JitSpewer.cpp 20187 -
JitSpewer.h Information during sinking 11157 86 %
JitZone.h 12063 96 %
JSJitFrameIter-inl.h jit_JSJitFrameIter_inl_h 1691 96 %
JSJitFrameIter.cpp 22685 76 %
JSJitFrameIter.h 27117 88 %
KnownClass.cpp 3128 85 %
KnownClass.h 764 -
Label.cpp 768 -
Label.h 3109 100 %
LICM.cpp OUT 12879 100 %
LICM.h jit_LICM_h 520 -
Linker.cpp 2224 83 %
Linker.h jit_Linker_h 1133 100 %
LIR.cpp 23252 92 %
LIR.h 73832 97 %
LIROps.yaml 91057 -
loong64 -
Lowering.cpp useAtStart = 296288 94 %
Lowering.h useAtStart = 2734 100 %
MachineState.h jit_MachineState_h 3489 100 %
MacroAssembler-inl.h 40346 95 %
MacroAssembler.cpp 386939 97 %
MacroAssembler.h 279815 95 %
mips-shared -
mips64 -
MIR-wasm.cpp 35126 85 %
MIR-wasm.h Everything needed to build actual MIR instructions: the actual opcodes and instructions, the instruction interface, and use chains. 116111 94 %
MIR.cpp 221597 90 %
MIR.h Everything needed to build actual MIR instructions: the actual opcodes and instructions, the instruction interface, and use chains. 311526 92 %
MIRGenerator.h 6335 91 %
MIRGraph.cpp 64758 88 %
MIRGraph.h 34128 98 %
MIROps.yaml 104087 -
MoveEmitter.h jit_MoveEmitter_h 979 -
MoveResolver.cpp 13939 63 %
MoveResolver.h 9809 83 %
moz.build 10096 -
none -
OffthreadSnapshot.h jit_OffthreadSnapshot_h 1835 100 %
PerfSpewer.cpp 39657 53 %
PerfSpewer.h 8964 96 %
ProcessExecutableMemory.cpp Inspiration is V8's OS::Allocate in platform-win32.cc. VirtualAlloc takes 64K chunks out of the virtual address space, so we keep 16b alignment. x86: V8 comments say that keeping addresses in the [64MiB, 1GiB) range tries to avoid system default DLL mapping space. In the end, we get 13 bits of randomness in our selection. x64: [2GiB, 4TiB), with 25 bits of randomness. 32250 90 %
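The randomness figures quoted for ProcessExecutableMemory.cpp follow from the 64K allocation granularity: the number of candidate base addresses is the range size divided by 64KiB, and the bits of randomness are the base-2 logarithm of that count. A quick back-of-the-envelope check (the helper name is hypothetical):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Number of randomness bits available when picking a 64KiB-aligned base
// address uniformly from [start, end). Truncates to whole bits.
uint64_t RandomnessBits(uint64_t start, uint64_t end) {
  uint64_t slots = (end - start) >> 16;  // 64KiB allocation granularity
  return static_cast<uint64_t>(std::log2(static_cast<double>(slots)));
}
```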
ProcessExecutableMemory.h 5566 100 %
RangeAnalysis.cpp 122528 91 %
RangeAnalysis.h 25289 99 %
ReciprocalMulConstants.cpp 5786 100 %
ReciprocalMulConstants.h jit_ReciprocalMulConstants_h 1363 100 %
Recover.cpp 75501 76 %
Recover.h 33648 82 %
RegExpStubConstants.h jit_RegExpStubConstants_h 1116 -
RegisterAllocator.cpp 20690 33 %
RegisterAllocator.h 9931 92 %
Registers.h 9238 98 %
RegisterSets.h [SMDOC] Allocating registers by hand To reduce our maintenance burden, much of our codegen is written using an architecture-independent macroassembler layer. All abstractions are leaky; one of the main ways that the masm abstraction leaks is when it comes to finding registers. The main constraint is 32-bit x86. There are eight general purpose registers on 32-bit x86, but two of those (esp and ebp) are permanently claimed for the stack pointer and the frame pointer, leaving six useful registers. JS::Value is eight bytes, so 32-bit architectures require two registers to store Values (one for the tag and one for the payload). Therefore, the most that architecture-independent codegen can keep alive at one time is: +------+---------+ | GPRs | Values | +------+---------+ | 6 | 0 | | 4 | 1 | | 2 | 2 | | 0 | 3 | +------+---------+ Hand-written masm (eg trampolines, stubs) often requires agreement between the caller and the code itself about which values will be passed in which registers. In such cases, we usually define a named register in a header: either an arch-specific header, like RegExpMatcherRegExpReg in jit/<arch>/Assembler-<arch>.h, or an arch-independent header like the registers in jit/IonGenericCallStub.h. For scratch registers in such code, AllocatableGeneralRegisterSet can be used. Baseline deals primarily with boxed values, so we define three architecture-independent ValueOperands (R0, R1, and R2). When individual registers are needed, R<N>.scratchReg() can be used. If the sum of live Values and GPRs is greater than three, then AllocatableGeneralRegisterSet may be required. In baseline IC code, one additional register is dedicated to ICStubReg, so at most five registers (or 3 registers + 1 Value, ...) are available. Temps can be allocated using AutoScratchRegister. It's common for IC stubs to return a Value; to free up the register pair used for that output on 32-bit platforms, AutoScratchRegisterMaybeOutput/AutoScratchRegisterMaybeOutputType can be used for values that are dead before writing to the output. If more registers are necessary, there are a variety of workarounds. In some cases, the simplest answer is to disable an optimization on x86. We still support it, but it's not a performance priority. For example, we don't attach some specialized stubs for Map.get/has/set on x86. In other cases, it may be possible to manually spill a register to the stack to temporarily free it up. One useful pattern is to pass InvalidReg in cases where a register is not available, and decide whether to spill depending on whether a real register is free. See, for example, CacheIRCompiler::emitDataViewBoundsCheck. ### CallTempReg, ABINonArgReg, ABINonArgReturnReg, et al There are other more-or-less architecture-independent register definitions that can be useful. These include: - CallTempReg<0-5>: Six registers that are not part of the call machinery itself (not the stack pointer, frame pointer, or the link register). They are useful when setting up a call that does not use the system ABI. In Ion, JS uses these to allocate temporary registers for LIR ops that will generate a call. No more CallTempRegs can be added for use in architecture-independent code, because x86 doesn't have enough registers. - ABINonArgReg<0-3>: Four registers that are not part of the call machinery, and are also not used for passing arguments according to the system/Wasm ABI. These are primarily used for Wasm. ABINonArgReg4 could be added if needed. After that, we run out of registers on x86, because Wasm also pins the InstanceReg. See the "Wasm ABIs" SMDOC for more information. - ABINonArgReturnReg<0-1> / ABINonVolatileReg: Three registers that are not part of the call machinery, and not used by the system ABI for passing arguments or return values. ABINonVolatileReg is also required to have its value preserved over a call. This set is (again) constrained by x86: esp and ebp are claimed by the call machinery, eax/edx are return registers, and esi is the instance register. ABINonArgReturnVolatileReg is a register that is not used for calls but is *not* preserved over a call; it may or may not be the same as ABINonArgReturn<0-1>. 44921 97 %
RematerializedFrame-inl.h 631 100 %
RematerializedFrame.cpp static 6315 63 %
RematerializedFrame.h 7317 77 %
riscv64 -
SafepointIndex-inl.h jit_SafepointIndex_inl_h 579 100 %
SafepointIndex.cpp 617 100 %
SafepointIndex.h namespace jit 2228 100 %
Safepoints.cpp 19560 86 %
Safepoints.h jit_Safepoints_h 4138 100 %
ScalarReplacement.cpp 136609 90 %
ScalarReplacement.h jit_ScalarReplacement_h 646 -
ScalarTypeUtils.h jit_ScalarTypeUtils_h 1092 100 %
ScriptFromCalleeToken.h namespace js::jit 908 83 %
ShapeList.cpp static 5483 94 %
ShapeList.h 1664 -
shared 78 %
SharedICHelpers-inl.h jit_SharedICHelpers_inl_h 1183 -
SharedICHelpers.h jit_SharedICHelpers_h 1135 -
SharedICRegisters.h jit_SharedICRegisters_h 1149 -
ShuffleAnalysis.cpp 28136 89 %
ShuffleAnalysis.h 4676 100 %
SimpleAllocator.cpp 45925 86 %
SimpleAllocator.h 14170 100 %
Simulator.h jit_Simulator_h 844 -
Sink.cpp 9374 91 %
Sink.h jit_Sink_h 489 -
Snapshots.cpp 24183 96 %
Snapshots.h 18780 98 %
SparseBitSet.h 5540 99 %
StackSlotAllocator.h jit_StackSlotAllocator_h 3301 86 %
StubFolding.cpp 26370 91 %
StubFolding.h 952 -
TemplateObject-inl.h jit_TemplateObject_inl_h 3277 100 %
TemplateObject.h jit_TemplateObject_h 2474 100 %
Trampoline.cpp 14409 93 %
TrampolineNatives.cpp 13312 99 %
TrampolineNatives.h jit_TrampolineNatives_h 1810 -
TrialInlining.cpp 35843 89 %
TrialInlining.h [SMDOC] Trial Inlining WarpBuilder relies on transpiling CacheIR. When inlining scripted functions in WarpBuilder, we want our ICs to be as monomorphic as possible. Functions with multiple callers complicate this. An IC in such a function might be monomorphic for any given caller, but polymorphic overall. This makes the input to WarpBuilder less precise. To solve this problem, we do trial inlining. During baseline execution, we identify call sites for which it would be useful to have more precise inlining data. For each such call site, we allocate a fresh ICScript and replace the existing call IC with a new specialized IC that invokes the callee using the new ICScript. Other callers of the callee will continue using the default ICScript. When we eventually Warp-compile the script, we can generate code for the callee using the IC information in our private ICScript, which is specialized for its caller. The same approach can be used to inline recursively. 5998 100 %
TypeAnalysis.cpp anonymous namespace 43206 90 %
TypeAnalysis.h jit_TypeAnalysis_h 775 -
TypeData.h jit_TypeData_h 1199 100 %
TypePolicy.cpp 45983 83 %
TypePolicy.h 20346 96 %
UnrollLoops.cpp 78711 91 %
UnrollLoops.h jit_UnrollLoops_h 520 -
ValueNumbering.cpp [SMDOC] IonMonkey Value Numbering Some notes on the main algorithm here: - The SSA identifier id() is the value number. We do replaceAllUsesWith as we go, so there's always at most one visible value with a given number. - Consequently, the GVN algorithm is effectively pessimistic. This means it is not as powerful as an optimistic GVN would be, but it is simpler and faster. - We iterate in RPO, so that when visiting a block, we've already optimized and hashed all values in dominating blocks. With occasional exceptions, this allows us to do everything in a single pass. - When we do use multiple passes, we just re-run the algorithm on the whole graph instead of doing sparse propagation. This is a tradeoff to keep the algorithm simpler and lighter on inputs that don't have a lot of interesting unreachable blocks or degenerate loop induction variables, at the expense of being slower on inputs that do. The loop for this always terminates, because it only iterates when code is or will be removed, so eventually it must stop iterating. - Values are not immediately removed from the hash set when they go out of scope. Instead, we check for dominance after a lookup. If the dominance check fails, the value is removed. 50014 89 %
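The "at most one visible value with a given number" discipline described for ValueNumbering.cpp is essentially hash-consing on (opcode, operands). A toy illustration, with hypothetical names and strings standing in for MIR opcodes; the real pass also handles dominance scoping, congruence rules, and unreachable-block cleanup:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <tuple>

// Key identifies a value by its operation and the ids of its operands.
// Two definitions with equal keys get the same number, mirroring how
// GVN's replaceAllUsesWith leaves one representative per value number.
using Key = std::tuple<std::string, int, int>;  // (opcode, lhsId, rhsId)

struct ValueTable {
  std::map<Key, int> table;
  int nextId = 0;

  // Returns the existing id for an equivalent definition, or a fresh one.
  int number(const std::string& op, int lhs, int rhs) {
    auto [it, inserted] = table.try_emplace(Key{op, lhs, rhs}, nextId);
    if (inserted) nextId++;
    return it->second;
  }
};
```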
ValueNumbering.h jit_ValueNumbering_h 4532 -
VMFunctionList-inl.h 29729 -
VMFunctions.cpp Unexpected return type for a VMFunction. 109232 89 %
VMFunctions.h 29431 90 %
WarpBuilder.cpp maybePred = 116359 94 %
WarpBuilder.h Intentionally not implemented 12420 100 %
WarpBuilderShared.cpp 4303 99 %
WarpBuilderShared.h 12632 98 %
WarpCacheIRTranspiler.cpp 232069 97 %
WarpCacheIRTranspiler.h jit_WarpCacheIRTranspiler_h 798 -
WarpOracle.cpp 61869 88 %
WarpOracle.h jit_WarpOracle_h 3261 100 %
WarpSnapshot.cpp 15672 88 %
WarpSnapshot.h 22473 100 %
wasm32 -
WasmBCE.cpp 4754 91 %
WasmBCE.h jit_wasmbce_h 855 -
WasmRefTypeAnalysis.cpp 13416 83 %
WasmRefTypeAnalysis.h jit_WasmRefTypeAnalysis_h 655 -
x64 88 %
x86 -
x86-shared 82 %
XrayJitInfo.cpp 507 100 %