/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*-
* vim: set ts=8 sts=2 et sw=2 tw=80:
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
/*
* [SMDOC] Garbage Collector
*
* This code implements an incremental mark-and-sweep garbage collector, with
* most sweeping carried out in the background on a parallel thread.
*
* Full vs. zone GC
* ----------------
*
* The collector can collect all zones at once, or a subset. These types of
* collection are referred to as a full GC and a zone GC respectively.
*
* It is possible for an incremental collection that started out as a full GC to
* become a zone GC if new zones are created during the course of the
* collection.
*
* Incremental collection
* ----------------------
*
* For a collection to be carried out incrementally the following conditions
* must be met:
* - the collection must be run by calling js::GCSlice() rather than js::GC()
* - the GC parameter JSGC_INCREMENTAL_GC_ENABLED must be true.
*
* The last condition is an engine-internal mechanism to ensure that incremental
* collection is not carried out without the correct barriers being implemented.
* For more information see 'Incremental marking' below.
*
* If the collection is not incremental, all foreground activity happens inside
* a single call to GC() or GCSlice(). However, the collection is not complete
* until the background sweeping activity has finished.
*
* An incremental collection proceeds as a series of slices, interleaved with
* mutator activity, i.e. running JavaScript code. Slices are limited by a time
* budget. The slice finishes as soon as possible after the requested time has
* passed.
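*
* As an illustration only (the exact embedder-facing API lives in
* js/GCAPI.h and its signatures have changed over time), driving the
* collector incrementally looks roughly like:
*
*   JS::SliceBudget budget{JS::TimeBudget(10)};  // aim for ~10ms slices
*   JS::StartIncrementalGC(cx, options, reason, budget);
*   while (JS::IsIncrementalGCInProgress(cx)) {
*     // ... mutator runs between slices ...
*     JS::IncrementalGCSlice(cx, reason, budget);
*   }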
*
* Collector states
* ----------------
*
* The collector proceeds through the following states, the current state being
* held in JSRuntime::gcIncrementalState:
*
* - Prepare - unmarks GC things, discards JIT code and other setup
* - MarkRoots - marks the stack and other roots
* - Mark - incrementally marks reachable things
* - Sweep - sweeps zones in groups and continues marking unswept zones
* - Finalize - performs background finalization, concurrent with mutator
* - Compact - incrementally compacts by zone
* - Decommit - performs background decommit and chunk removal
*
* Roots are marked in the first MarkRoots slice; this is the start of the GC
* proper. The following states can take place over one or more slices.
*
* In other words an incremental collection proceeds like this:
*
* Slice 1: Prepare: Starts background task to unmark GC things
*
* ... JS code runs, background unmarking finishes ...
*
* Slice 2: MarkRoots: Roots are pushed onto the mark stack.
* Mark: The mark stack is processed by popping an element,
* marking it, and pushing its children.
*
* ... JS code runs ...
*
* Slice 3: Mark: More mark stack processing.
*
* ... JS code runs ...
*
* Slice n-1: Mark: More mark stack processing.
*
* ... JS code runs ...
*
* Slice n: Mark: Mark stack is completely drained.
* Sweep: Select first group of zones to sweep and sweep them.
*
* ... JS code runs ...
*
* Slice n+1: Sweep: Mark objects in unswept zones that were newly
* identified as alive (see below). Then sweep more zone
* sweep groups.
*
* ... JS code runs ...
*
* Slice n+2: Sweep: Mark objects in unswept zones that were newly
* identified as alive. Then sweep more zones.
*
* ... JS code runs ...
*
* Slice m: Sweep: Sweeping is finished, and background sweeping
* is started on the helper thread.
*
* ... JS code runs, remaining sweeping done on background thread ...
*
* When background sweeping finishes the GC is complete.
*
* Incremental marking
* -------------------
*
* Incremental collection requires close collaboration with the mutator (i.e.,
* JS code) to guarantee correctness.
*
* - During an incremental GC, if a memory location (except a root) is written
* to, then the value it previously held must be marked. Write barriers
* ensure this.
*
* - Any object that is allocated during incremental GC must start out marked.
*
* - Roots are marked in the first slice and hence don't need write barriers.
* Roots are things like the C stack and the VM stack.
*
* The problem that write barriers solve is that between slices the mutator can
* change the object graph. We must ensure that it cannot do this in such a way
* that makes us fail to mark a reachable object (marking an unreachable object
* is tolerable).
*
* We use a snapshot-at-the-beginning algorithm to do this. This means that we
* promise to mark at least everything that is reachable at the beginning of
* collection. To implement it we mark the old contents of every non-root memory
* location written to by the mutator while the collection is in progress, using
* write barriers. This is described in gc/Barrier.h.
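*
* Conceptually a pre-write barrier does something like the following (a
* sketch only; slotZone and markForSnapshot are illustrative names, not the
* real implementation):
*
*   void preWriteBarrier(JSObject** slot, JSObject* newValue) {
*     if (slotZone(slot)->needsIncrementalBarrier()) {
*       markForSnapshot(*slot);  // keep the snapshot's referent alive
*     }
*     *slot = newValue;
*   }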
*
* Incremental sweeping
* --------------------
*
* Sweeping is difficult to do incrementally because object finalizers must be
* run at the start of sweeping, before any mutator code runs. The reason is
* that some objects use their finalizers to remove themselves from caches. If
* mutator code was allowed to run after the start of sweeping, it could observe
* the state of the cache and create a new reference to an object that was just
* about to be destroyed.
*
* Sweeping all finalizable objects in one go would introduce long pauses, so
* instead sweeping is broken up into groups of zones. Zones which are not yet
* being swept are still marked, so the issue above does not apply.
*
* The order of sweeping is restricted by cross compartment pointers - for
* example say that object |a| from zone A points to object |b| in zone B and
* neither object was marked when we transitioned to the Sweep phase. Imagine we
* sweep B first and then return to the mutator. It's possible that the mutator
* could cause |a| to become alive through a read barrier (perhaps it was a
* shape that was accessed via a shape table). Then we would need to mark |b|,
* which |a| points to, but |b| has already been swept.
*
* So if there is such a pointer then marking of zone B must not finish before
* marking of zone A. Pointers which form a cycle between zones therefore
* restrict those zones to being swept at the same time, and these are found
* using Tarjan's algorithm for finding the strongly connected components of a
* graph.
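*
* For example, if |a| in zone A points to |b| in zone B and some object in
* zone B points back into zone A, the two zones form a strongly connected
* component and are placed in the same sweep group.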
*
* GC things without finalizers, and things with finalizers that are able to run
* in the background, are swept on the background thread. This accounts for most
* of the sweeping work.
*
* Reset
* -----
*
* During incremental collection it is possible, although unlikely, for
* conditions to change such that incremental collection is no longer safe. In
* this case, the collection is 'reset' by resetIncrementalGC(). If we are in
* the mark state, this just stops marking, but if we have started sweeping
* already, we continue non-incrementally until we have swept the current sweep
* group. Following a reset, a new collection is started.
*
* Compacting GC
* -------------
*
* Compacting GC happens at the end of a major GC as part of the last slice.
* There are three parts:
*
* - Arenas are selected for compaction.
* - The contents of those arenas are moved to new arenas.
* - All references to moved things are updated.
*
* Collecting Atoms
* ----------------
*
* Atoms are collected differently from other GC things. They are contained in
* a special zone and things in other zones may have pointers to them that are
* not recorded in the cross compartment pointer map. Each zone holds a bitmap
* with the atoms it might be keeping alive, and atoms are only collected if
* they are not included in any zone's atom bitmap. See AtomMarking.cpp for how
* this bitmap is managed.
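*
* Conceptually the liveness check is (a sketch; the names are illustrative):
*
*   bool atomIsLive(Atom* atom) {
*     for (Zone* zone : allZones()) {
*       if (zone->markedAtomsBitmap().contains(atom)) {
*         return true;  // some zone may be keeping this atom alive
*       }
*     }
*     return false;
*   }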
*/
#include "gc/GC-inl.h"
#include "mozilla/Range.h"
#include "mozilla/ScopeExit.h"
#include "mozilla/TextUtils.h"
#include "mozilla/TimeStamp.h"
#include <algorithm>
#include <initializer_list>
#include <iterator>
#include <stdlib.h>
#include <string.h>
#include <utility>
#include "jsapi.h" // JS_AbortIfWrongThread
#include "jstypes.h"
#include "debugger/DebugAPI.h"
#include "gc/ClearEdgesTracer.h"
#include "gc/GCContext.h"
#include "gc/GCInternals.h"
#include "gc/GCLock.h"
#include "gc/GCProbes.h"
#include "gc/Memory.h"
#include "gc/ParallelMarking.h"
#include "gc/ParallelWork.h"
#include "gc/WeakMap.h"
#include "jit/ExecutableAllocator.h"
#include "jit/JitCode.h"
#include "jit/JitRuntime.h"
#include "jit/ProcessExecutableMemory.h"
#include "js/HeapAPI.h" // JS::GCCellPtr
#include "js/Printer.h"
#include "js/SliceBudget.h"
#include "util/DifferentialTesting.h"
#include "vm/BigIntType.h"
#include "vm/EnvironmentObject.h"
#include "vm/GetterSetter.h"
#include "vm/HelperThreadState.h"
#include "vm/JitActivation.h"
#include "vm/JSObject.h"
#include "vm/JSScript.h"
#include "vm/PropMap.h"
#include "vm/Realm.h"
#include "vm/Shape.h"
#include "vm/StringType.h"
#include "vm/SymbolType.h"
#include "vm/Time.h"
#include "gc/Heap-inl.h"
#include "gc/Nursery-inl.h"
#include "gc/ObjectKind-inl.h"
#include "gc/PrivateIterators-inl.h"
#include "vm/GeckoProfiler-inl.h"
#include "vm/JSContext-inl.h"
#include "vm/Realm-inl.h"
#include "vm/Stack-inl.h"
#include "vm/StringType-inl.h"
using namespace js;
using namespace js::gc;
using mozilla::EnumSet;
using mozilla::MakeScopeExit;
using mozilla::Maybe;
using mozilla::Nothing;
using mozilla::Some;
using mozilla::TimeDuration;
using mozilla::TimeStamp;
using JS::AutoGCRooter;
using JS::SliceBudget;
using JS::TimeBudget;
using JS::WorkBudget;
const AllocKind gc::slotsToThingKind[] = {
// clang-format off
/* 0 */ AllocKind::OBJECT0, AllocKind::OBJECT2, AllocKind::OBJECT2, AllocKind::OBJECT4,
/* 4 */ AllocKind::OBJECT4, AllocKind::OBJECT8, AllocKind::OBJECT8, AllocKind::OBJECT8,
/* 8 */ AllocKind::OBJECT8, AllocKind::OBJECT12, AllocKind::OBJECT12, AllocKind::OBJECT12,
/* 12 */ AllocKind::OBJECT12, AllocKind::OBJECT16, AllocKind::OBJECT16, AllocKind::OBJECT16,
/* 16 */ AllocKind::OBJECT16
// clang-format on
};
static_assert(std::size(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
"We have defined a slot count for each kind.");
// A table converting an object size in "slots" (increments of
// sizeof(js::Value)) to the total number of bytes in the corresponding
// AllocKind. See gc::slotsToThingKind. This primarily allows wasm jit code to
// remain compliant with the AllocKind system.
//
// To use this table, subtract sizeof(NativeObject) from your desired allocation
// size, divide by sizeof(js::Value) to get the number of "slots", and then
// index into this table. See gc::GetGCObjectKindForBytes.
const constexpr uint32_t gc::slotsToAllocKindBytes[] = {
// These entries correspond exactly to gc::slotsToThingKind. The numeric
// comments therefore indicate the number of slots that the "bytes" would
// correspond to.
// clang-format off
/* 0 */ sizeof(JSObject_Slots0), sizeof(JSObject_Slots2), sizeof(JSObject_Slots2), sizeof(JSObject_Slots4),
/* 4 */ sizeof(JSObject_Slots4), sizeof(JSObject_Slots8), sizeof(JSObject_Slots8), sizeof(JSObject_Slots8),
/* 8 */ sizeof(JSObject_Slots8), sizeof(JSObject_Slots12), sizeof(JSObject_Slots12), sizeof(JSObject_Slots12),
/* 12 */ sizeof(JSObject_Slots12), sizeof(JSObject_Slots16), sizeof(JSObject_Slots16), sizeof(JSObject_Slots16),
/* 16 */ sizeof(JSObject_Slots16)
// clang-format on
};
static_assert(std::size(slotsToAllocKindBytes) == SLOTS_TO_THING_KIND_LIMIT);
MOZ_THREAD_LOCAL(JS::GCContext*) js::TlsGCContext;
JS::GCContext::GCContext(JSRuntime* runtime) : runtime_(runtime) {}
JS::GCContext::~GCContext() {
MOZ_ASSERT(!hasJitCodeToPoison());
MOZ_ASSERT(!isCollecting());
MOZ_ASSERT(gcUse() == GCUse::None);
MOZ_ASSERT(!gcSweepZone());
MOZ_ASSERT(!isTouchingGrayThings());
}
void JS::GCContext::poisonJitCode() {
if (hasJitCodeToPoison()) {
jit::ExecutableAllocator::poisonCode(runtime(), jitPoisonRanges);
jitPoisonRanges.clearAndFree();
}
}
#ifdef DEBUG
void GCRuntime::verifyAllChunks() {
AutoLockGC lock(this);
fullChunks(lock).verifyChunks();
availableChunks(lock).verifyChunks();
emptyChunks(lock).verifyChunks();
}
#endif
void GCRuntime::setMinEmptyChunkCount(uint32_t value, const AutoLockGC& lock) {
minEmptyChunkCount_ = value;
if (minEmptyChunkCount_ > maxEmptyChunkCount_) {
maxEmptyChunkCount_ = minEmptyChunkCount_;
}
MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
}
void GCRuntime::setMaxEmptyChunkCount(uint32_t value, const AutoLockGC& lock) {
maxEmptyChunkCount_ = value;
if (minEmptyChunkCount_ > maxEmptyChunkCount_) {
minEmptyChunkCount_ = maxEmptyChunkCount_;
}
MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
}
inline bool GCRuntime::tooManyEmptyChunks(const AutoLockGC& lock) {
return emptyChunks(lock).count() > minEmptyChunkCount(lock);
}
ChunkPool GCRuntime::expireEmptyChunkPool(const AutoLockGC& lock) {
MOZ_ASSERT(emptyChunks(lock).verify());
MOZ_ASSERT(minEmptyChunkCount(lock) <= maxEmptyChunkCount(lock));
ChunkPool expired;
while (tooManyEmptyChunks(lock)) {
TenuredChunk* chunk = emptyChunks(lock).pop();
prepareToFreeChunk(chunk->info);
expired.push(chunk);
}
MOZ_ASSERT(expired.verify());
MOZ_ASSERT(emptyChunks(lock).verify());
MOZ_ASSERT(emptyChunks(lock).count() <= maxEmptyChunkCount(lock));
MOZ_ASSERT(emptyChunks(lock).count() <= minEmptyChunkCount(lock));
return expired;
}
static void FreeChunkPool(ChunkPool& pool) {
for (ChunkPool::Iter iter(pool); !iter.done();) {
TenuredChunk* chunk = iter.get();
iter.next();
pool.remove(chunk);
MOZ_ASSERT(chunk->unused());
UnmapPages(static_cast<void*>(chunk), ChunkSize);
}
MOZ_ASSERT(pool.count() == 0);
}
void GCRuntime::freeEmptyChunks(const AutoLockGC& lock) {
FreeChunkPool(emptyChunks(lock));
}
inline void GCRuntime::prepareToFreeChunk(TenuredChunkInfo& info) {
MOZ_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted);
numArenasFreeCommitted -= info.numArenasFreeCommitted;
stats().count(gcstats::COUNT_DESTROY_CHUNK);
#ifdef DEBUG
/*
* Let FreeChunkPool detect a missing prepareToFreeChunk call before it
* frees chunk.
*/
info.numArenasFreeCommitted = 0;
#endif
}
void GCRuntime::releaseArena(Arena* arena, const AutoLockGC& lock) {
MOZ_ASSERT(arena->allocated());
MOZ_ASSERT(!arena->onDelayedMarkingList());
MOZ_ASSERT(TlsGCContext.get()->isFinalizing());
arena->zone->gcHeapSize.removeGCArena(heapSize);
arena->release(lock);
arena->chunk()->releaseArena(this, arena, lock);
}
GCRuntime::GCRuntime(JSRuntime* rt)
: rt(rt),
systemZone(nullptr),
mainThreadContext(rt),
heapState_(JS::HeapState::Idle),
stats_(this),
sweepingTracer(rt),
fullGCRequested(false),
helperThreadRatio(TuningDefaults::HelperThreadRatio),
maxHelperThreads(TuningDefaults::MaxHelperThreads),
helperThreadCount(1),
maxMarkingThreads(TuningDefaults::MaxMarkingThreads),
markingThreadCount(1),
createBudgetCallback(nullptr),
minEmptyChunkCount_(TuningDefaults::MinEmptyChunkCount),
maxEmptyChunkCount_(TuningDefaults::MaxEmptyChunkCount),
rootsHash(256),
nextCellUniqueId_(LargestTaggedNullCellPointer +
1), // Ensure disjoint from null tagged pointers.
numArenasFreeCommitted(0),
verifyPreData(nullptr),
lastGCStartTime_(TimeStamp::Now()),
lastGCEndTime_(TimeStamp::Now()),
incrementalGCEnabled(TuningDefaults::IncrementalGCEnabled),
perZoneGCEnabled(TuningDefaults::PerZoneGCEnabled),
numActiveZoneIters(0),
cleanUpEverything(false),
grayBitsValid(true),
majorGCTriggerReason(JS::GCReason::NO_REASON),
minorGCNumber(0),
majorGCNumber(0),
number(0),
sliceNumber(0),
isFull(false),
incrementalState(gc::State::NotActive),
initialState(gc::State::NotActive),
useZeal(false),
lastMarkSlice(false),
safeToYield(true),
markOnBackgroundThreadDuringSweeping(false),
useBackgroundThreads(false),
#ifdef DEBUG
hadShutdownGC(false),
#endif
requestSliceAfterBackgroundTask(false),
lifoBlocksToFree((size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
lifoBlocksToFreeAfterFullMinorGC(
(size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
lifoBlocksToFreeAfterNextMinorGC(
(size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
sweepGroupIndex(0),
sweepGroups(nullptr),
currentSweepGroup(nullptr),
sweepZone(nullptr),
abortSweepAfterCurrentGroup(false),
sweepMarkResult(IncrementalProgress::NotFinished),
#ifdef DEBUG
testMarkQueue(rt),
#endif
startedCompacting(false),
zonesCompacted(0),
#ifdef DEBUG
relocatedArenasToRelease(nullptr),
#endif
#ifdef JS_GC_ZEAL
markingValidator(nullptr),
#endif
defaultTimeBudgetMS_(TuningDefaults::DefaultTimeBudgetMS),
compactingEnabled(TuningDefaults::CompactingEnabled),
parallelMarkingEnabled(TuningDefaults::ParallelMarkingEnabled),
rootsRemoved(false),
#ifdef JS_GC_ZEAL
zealModeBits(0),
zealFrequency(0),
nextScheduled(0),
deterministicOnly(false),
zealSliceBudget(0),
selectedForMarking(rt),
#endif
fullCompartmentChecks(false),
gcCallbackDepth(0),
alwaysPreserveCode(false),
lowMemoryState(false),
lock(mutexid::GCLock),
storeBufferLock(mutexid::StoreBuffer),
delayedMarkingLock(mutexid::GCDelayedMarkingLock),
allocTask(this, emptyChunks_.ref()),
unmarkTask(this),
markTask(this),
sweepTask(this),
freeTask(this),
decommitTask(this),
nursery_(this),
storeBuffer_(rt),
lastAllocRateUpdateTime(TimeStamp::Now()) {
}
bool js::gc::SplitStringBy(const char* text, char delimiter,
CharRangeVector* result) {
return SplitStringBy(CharRange(text, strlen(text)), delimiter, result);
}
bool js::gc::SplitStringBy(const CharRange& text, char delimiter,
CharRangeVector* result) {
auto start = text.begin();
for (auto ptr = start; ptr != text.end(); ptr++) {
if (*ptr == delimiter) {
if (!result->emplaceBack(start, ptr)) {
return false;
}
start = ptr + 1;
}
}
return result->emplaceBack(start, text.end());
}
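// For example, splitting "a,,b" on ',' yields the ranges "a", "" and "b";
// empty ranges around consecutive delimiters are preserved.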
static bool ParseTimeDuration(const CharRange& text,
TimeDuration* durationOut) {
const char* str = text.begin().get();
char* end;
long millis = strtol(str, &end, 10);
*durationOut = TimeDuration::FromMilliseconds(double(millis));
return str != end && end == text.end().get();
}
static void PrintProfileHelpAndExit(const char* envName, const char* helpText) {
fprintf(stderr, "%s=N[,(main|all)]\n", envName);
fprintf(stderr, "%s", helpText);
exit(0);
}
void js::gc::ReadProfileEnv(const char* envName, const char* helpText,
bool* enableOut, bool* workersOut,
TimeDuration* thresholdOut) {
*enableOut = false;
*workersOut = false;
*thresholdOut = TimeDuration::Zero();
const char* env = getenv(envName);
if (!env) {
return;
}
if (strcmp(env, "help") == 0) {
PrintProfileHelpAndExit(envName, helpText);
}
CharRangeVector parts;
if (!SplitStringBy(env, ',', &parts)) {
MOZ_CRASH("OOM parsing environment variable");
}
if (parts.length() == 0 || parts.length() > 2) {
PrintProfileHelpAndExit(envName, helpText);
}
*enableOut = true;
if (!ParseTimeDuration(parts[0], thresholdOut)) {
PrintProfileHelpAndExit(envName, helpText);
}
if (parts.length() == 2) {
const char* threads = parts[1].begin().get();
if (strcmp(threads, "all") == 0) {
*workersOut = true;
} else if (strcmp(threads, "main") != 0) {
PrintProfileHelpAndExit(envName, helpText);
}
}
}
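// For example, setting the variable named by |envName| to "5,all" enables
// profiling with a 5ms threshold for both the main runtime and workers, while
// "5" or "5,main" restricts output to the main runtime.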
bool js::gc::ShouldPrintProfile(JSRuntime* runtime, bool enable,
bool profileWorkers, TimeDuration threshold,
TimeDuration duration) {
return enable && (runtime->isMainRuntime() || profileWorkers) &&
duration >= threshold;
}
#ifdef JS_GC_ZEAL
void GCRuntime::getZealBits(uint32_t* zealBits, uint32_t* frequency,
uint32_t* scheduled) {
*zealBits = zealModeBits;
*frequency = zealFrequency;
*scheduled = nextScheduled;
}
// Please also update jit-test/tests/gc/gczeal.js when updating this help text.
// clang-format off
const char gc::ZealModeHelpText[] =
" Specifies how zealous the garbage collector should be. Some of these modes\n"
" can be set simultaneously, by passing multiple level options, e.g. \"2;4\"\n"
" will activate both modes 2 and 4. Modes can be specified by name or\n"
" number.\n"
" \n"
" Values:\n"
" 0: (None) Normal amount of collection (resets all modes)\n"
" 1: (RootsChange) Collect when roots are added or removed\n"
" 2: (Alloc) Collect when every N allocations (default: 100)\n"
" 4: (VerifierPre) Verify pre write barriers between instructions\n"
" 6: (YieldBeforeRootMarking) Incremental GC in two slices that yields\n"
" before root marking\n"
" 7: (GenerationalGC) Collect the nursery every N nursery allocations\n"
" 8: (YieldBeforeMarking) Incremental GC in two slices that yields\n"
" between the root marking and marking phases\n"
" 9: (YieldBeforeSweeping) Incremental GC in two slices that yields\n"
" between the marking and sweeping phases\n"
" 10: (IncrementalMultipleSlices) Incremental GC in many slices\n"
" 11: (IncrementalMarkingValidator) Verify incremental marking\n"
" 12: (ElementsBarrier) Use the individual element post-write barrier\n"
" regardless of elements size\n"
" 13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"
" 14: (Compact) Perform a shrinking collection every N allocations\n"
" 15: (CheckHeapAfterGC) Walk the heap to check its integrity after every\n"
" GC\n"
" 17: (YieldBeforeSweepingAtoms) Incremental GC in two slices that yields\n"
" before sweeping the atoms table\n"
" 18: (CheckGrayMarking) Check gray marking invariants after every GC\n"
" 19: (YieldBeforeSweepingCaches) Incremental GC in two slices that yields\n"
" before sweeping weak caches\n"
" 21: (YieldBeforeSweepingObjects) Incremental GC that yields once per\n"
" zone before sweeping foreground finalized objects\n"
" 22: (YieldBeforeSweepingNonObjects) Incremental GC that yields once per\n"
" zone before sweeping non-object GC things\n"
" 23: (YieldBeforeSweepingPropMapTrees) Incremental GC that yields once\n"
" per zone before sweeping shape trees\n"
" 24: (CheckWeakMapMarking) Check weak map marking invariants after every\n"
" GC\n"
" 25: (YieldWhileGrayMarking) Incremental GC in two slices that yields\n"
" during gray marking\n";
// clang-format on
// The set of zeal modes that yield at specific points in collection.
static const EnumSet<ZealMode> YieldPointZealModes = {
ZealMode::YieldBeforeRootMarking,
ZealMode::YieldBeforeMarking,
ZealMode::YieldBeforeSweeping,
ZealMode::YieldBeforeSweepingAtoms,
ZealMode::YieldBeforeSweepingCaches,
ZealMode::YieldBeforeSweepingObjects,
ZealMode::YieldBeforeSweepingNonObjects,
ZealMode::YieldBeforeSweepingPropMapTrees,
ZealMode::YieldWhileGrayMarking};
// The set of zeal modes that control incremental slices.
static const EnumSet<ZealMode> IncrementalSliceZealModes =
YieldPointZealModes +
EnumSet<ZealMode>{ZealMode::IncrementalMultipleSlices};
// The set of zeal modes that trigger GC periodically.
static const EnumSet<ZealMode> PeriodicGCZealModes =
IncrementalSliceZealModes + EnumSet<ZealMode>{ZealMode::Alloc,
ZealMode::GenerationalGC,
ZealMode::Compact};
// The set of zeal modes that are mutually exclusive. All of these trigger GC
// except VerifierPre.
static const EnumSet<ZealMode> ExclusiveZealModes =
PeriodicGCZealModes + EnumSet<ZealMode>{ZealMode::VerifierPre};
void GCRuntime::setZeal(uint8_t zeal, uint32_t frequency) {
MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
if (verifyPreData) {
VerifyBarriers(rt, PreBarrierVerifier);
}
if (zeal == 0) {
if (hasZealMode(ZealMode::GenerationalGC)) {
clearZealMode(ZealMode::GenerationalGC);
}
if (isIncrementalGCInProgress()) {
finishGC(JS::GCReason::DEBUG_GC);
}
zealModeBits = 0;
zealFrequency = 0;
nextScheduled = 0;
return;
}
// Modes that trigger periodically are mutually exclusive. If we're setting
// one of those, we first reset all of them.
ZealMode zealMode = ZealMode(zeal);
if (ExclusiveZealModes.contains(zealMode)) {
for (auto mode : ExclusiveZealModes) {
if (hasZealMode(mode)) {
clearZealMode(mode);
}
}
}
if (zealMode == ZealMode::GenerationalGC) {
evictNursery(JS::GCReason::EVICT_NURSERY);
nursery().enterZealMode();
}
zealModeBits |= 1 << zeal;
zealFrequency = frequency;
if (PeriodicGCZealModes.contains(zealMode)) {
nextScheduled = frequency;
}
}
void GCRuntime::unsetZeal(uint8_t zeal) {
MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
ZealMode zealMode = ZealMode(zeal);
if (!hasZealMode(zealMode)) {
return;
}
if (verifyPreData) {
VerifyBarriers(rt, PreBarrierVerifier);
}
clearZealMode(zealMode);
if (zealModeBits == 0) {
if (isIncrementalGCInProgress()) {
finishGC(JS::GCReason::DEBUG_GC);
}
zealFrequency = 0;
nextScheduled = 0;
}
}
void GCRuntime::setNextScheduled(uint32_t count) { nextScheduled = count; }
static bool ParseZealModeName(const CharRange& text, uint32_t* modeOut) {
struct ModeInfo {
const char* name;
size_t length;
uint32_t value;
};
static const ModeInfo zealModes[] = {{"None", strlen("None"), 0},
# define ZEAL_MODE(name, value) {#name, strlen(#name), value},
JS_FOR_EACH_ZEAL_MODE(ZEAL_MODE)
# undef ZEAL_MODE
};
for (auto mode : zealModes) {
if (text.length() == mode.length &&
memcmp(text.begin().get(), mode.name, mode.length) == 0) {
*modeOut = mode.value;
return true;
}
}
return false;
}
static bool ParseZealModeNumericParam(const CharRange& text,
uint32_t* paramOut) {
if (text.length() == 0) {
return false;
}
for (auto c : text) {
if (!mozilla::IsAsciiDigit(c)) {
return false;
}
}
*paramOut = atoi(text.begin().get());
return true;
}
static bool PrintZealHelpAndFail() {
fprintf(stderr, "Format: JS_GC_ZEAL=level(;level)*[,N]\n");
fputs(ZealModeHelpText, stderr);
return false;
}
bool GCRuntime::parseAndSetZeal(const char* str) {
// Set the zeal mode from a string consisting of one or more mode specifiers
// separated by ';', optionally followed by a ',' and the trigger frequency.
// The mode specifiers can be a mode name or its number.
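// For example, "IncrementalMultipleSlices;CheckHeapAfterGC,50" enables modes
// 10 and 15 with a trigger frequency of 50.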
auto text = CharRange(str, strlen(str));
CharRangeVector parts;
if (!SplitStringBy(text, ',', &parts)) {
return false;
}
if (parts.length() == 0 || parts.length() > 2) {
return PrintZealHelpAndFail();
}
uint32_t frequency = JS::ShellDefaultGCZealFrequency;
if (parts.length() == 2 && !ParseZealModeNumericParam(parts[1], &frequency)) {
return PrintZealHelpAndFail();
}
CharRangeVector modes;
if (!SplitStringBy(parts[0], ';', &modes)) {
return false;
}
for (const auto& descr : modes) {
uint32_t mode;
if (!ParseZealModeName(descr, &mode) &&
!(ParseZealModeNumericParam(descr, &mode) &&
mode <= unsigned(ZealMode::Limit))) {
return PrintZealHelpAndFail();
}
setZeal(mode, frequency);
}
return true;
}
bool GCRuntime::needZealousGC() {
if (nextScheduled > 0 && --nextScheduled == 0) {
if (hasAnyZealModeOf(PeriodicGCZealModes)) {
nextScheduled = zealFrequency;
}
return true;
}
return false;
}
bool GCRuntime::zealModeControlsYieldPoint() const {
// Indicates whether a zeal mode is enabled that controls the point at which
// the collector yields to the mutator. Yield can happen once per collection
// or once per zone depending on the mode.
return hasAnyZealModeOf(YieldPointZealModes);
}
bool GCRuntime::hasZealMode(ZealMode mode) const {
static_assert(size_t(ZealMode::Limit) < sizeof(zealModeBits) * 8,
"Zeal modes must fit in zealModeBits");
return zealModeBits & (1 << uint32_t(mode));
}
bool GCRuntime::hasAnyZealModeOf(EnumSet<ZealMode> modes) const {
return zealModeBits & modes.serialize();
}
void GCRuntime::clearZealMode(ZealMode mode) {
MOZ_ASSERT(hasZealMode(mode));
if (mode == ZealMode::GenerationalGC) {
evictNursery();
nursery().leaveZealMode();
}
zealModeBits &= ~(1 << uint32_t(mode));
MOZ_ASSERT(!hasZealMode(mode));
}
const char* js::gc::AllocKindName(AllocKind kind) {
static const char* const names[] = {
# define EXPAND_THING_NAME(allocKind, _1, _2, _3, _4, _5, _6) #allocKind,
FOR_EACH_ALLOCKIND(EXPAND_THING_NAME)
# undef EXPAND_THING_NAME
};
static_assert(std::size(names) == AllocKindCount,
"names array should have an entry for every AllocKind");
size_t i = size_t(kind);
MOZ_ASSERT(i < std::size(names));
return names[i];
}
void js::gc::DumpArenaInfo() {
fprintf(stderr, "Arena header size: %zu\n\n", ArenaHeaderSize);
fprintf(stderr, "GC thing kinds:\n");
fprintf(stderr, "%25s %8s %8s %8s\n",
"AllocKind:", "Size:", "Count:", "Padding:");
for (auto kind : AllAllocKinds()) {
fprintf(stderr, "%25s %8zu %8zu %8zu\n", AllocKindName(kind),
Arena::thingSize(kind), Arena::thingsPerArena(kind),
Arena::firstThingOffset(kind) - ArenaHeaderSize);
}
}
#endif // JS_GC_ZEAL
bool GCRuntime::init(uint32_t maxbytes) {
MOZ_ASSERT(!wasInitialized());
MOZ_ASSERT(SystemPageSize());
Arena::checkLookupTables();
if (!TlsGCContext.init()) {
return false;
}
TlsGCContext.set(&mainThreadContext.ref());
updateHelperThreadCount();
#ifdef JS_GC_ZEAL
const char* size = getenv("JSGC_MARK_STACK_LIMIT");
if (size) {
maybeMarkStackLimit = atoi(size);
}
#endif
if (!updateMarkersVector()) {
return false;
}
{
AutoLockGCBgAlloc lock(this);
MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_BYTES, maxbytes));
if (!nursery().init(lock)) {
return false;
}
}
#ifdef JS_GC_ZEAL
const char* zealSpec = getenv("JS_GC_ZEAL");
if (zealSpec && zealSpec[0] && !parseAndSetZeal(zealSpec)) {
return false;
}
#endif
for (auto& marker : markers) {
if (!marker->init()) {
return false;
}
}
if (!initSweepActions()) {
return false;
}
UniquePtr<Zone> zone = MakeUnique<Zone>(rt, Zone::AtomsZone);
if (!zone || !zone->init()) {
return false;
}
// The atoms zone is stored as the first element of the zones vector.
MOZ_ASSERT(zone->isAtomsZone());
MOZ_ASSERT(zones().empty());
MOZ_ALWAYS_TRUE(zones().reserve(1)); // ZonesVector has inline capacity 4.
zones().infallibleAppend(zone.release());
gcprobes::Init(this);
initialized = true;
return true;
}
void GCRuntime::finish() {
MOZ_ASSERT(inPageLoadCount == 0);
MOZ_ASSERT(!sharedAtomsZone_);
// Wait for nursery background free to end and disable it to release memory.
if (nursery().isEnabled()) {
nursery().disable();
}
// Wait until the background finalization and allocation stops and the
// helper thread shuts down before we forcefully release any remaining GC
// memory.
sweepTask.join();
markTask.join();
freeTask.join();
allocTask.cancelAndWait();
decommitTask.cancelAndWait();
#ifdef DEBUG
{
MOZ_ASSERT(dispatchedParallelTasks == 0);
AutoLockHelperThreadState lock;
MOZ_ASSERT(queuedParallelTasks.ref().isEmpty(lock));
}
#endif
releaseMarkingThreads();
#ifdef JS_GC_ZEAL
// Free memory associated with GC verification.
finishVerifier();
#endif
// Delete all remaining zones.
for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
AutoSetThreadIsSweeping threadIsSweeping(rt->gcContext(), zone);
for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
js_delete(realm.get());
}
comp->realms().clear();
js_delete(comp.get());
}
zone->compartments().clear();
js_delete(zone.get());
}
zones().clear();
FreeChunkPool(fullChunks_.ref());
FreeChunkPool(availableChunks_.ref());
FreeChunkPool(emptyChunks_.ref());
TlsGCContext.set(nullptr);
gcprobes::Finish(this);
nursery().printTotalProfileTimes();
stats().printTotalProfileTimes();
}
bool GCRuntime::freezeSharedAtomsZone() {
// This is called just after permanent atoms and well-known symbols have been
// created. At this point all existing atoms and symbols are permanent.
//
// This method makes the current atoms zone into a shared atoms zone and
// removes it from the zones list. Everything in it is marked black. A new
// empty atoms zone is created, where all atoms local to this runtime will
// live.
//
// The shared atoms zone will not be collected until shutdown when it is
// returned to the zone list by restoreSharedAtomsZone().
MOZ_ASSERT(rt->isMainRuntime());
MOZ_ASSERT(!sharedAtomsZone_);
MOZ_ASSERT(zones().length() == 1);
MOZ_ASSERT(atomsZone());
MOZ_ASSERT(!atomsZone()->wasGCStarted());
MOZ_ASSERT(!atomsZone()->needsIncrementalBarrier());
AutoAssertEmptyNursery nurseryIsEmpty(rt->mainContextFromOwnThread());
atomsZone()->arenas.clearFreeLists();
for (auto kind : AllAllocKinds()) {
for (auto thing =
atomsZone()->cellIterUnsafe<TenuredCell>(kind, nurseryIsEmpty);
!thing.done(); thing.next()) {
TenuredCell* cell = thing.getCell();
MOZ_ASSERT((cell->is<JSString>() &&
cell->as<JSString>()->isPermanentAndMayBeShared()) ||
(cell->is<JS::Symbol>() &&
cell->as<JS::Symbol>()->isPermanentAndMayBeShared()));
cell->markBlack();
}
}
sharedAtomsZone_ = atomsZone();
zones().clear();
UniquePtr<Zone> zone = MakeUnique<Zone>(rt, Zone::AtomsZone);
if (!zone || !zone->init()) {
return false;
}
MOZ_ASSERT(zone->isAtomsZone());
zones().infallibleAppend(zone.release());
return true;
}
void GCRuntime::restoreSharedAtomsZone() {
// Return the shared atoms zone to the zone list. This allows the contents of
// the shared atoms zone to be collected when the parent runtime is shut down.
if (!sharedAtomsZone_) {
return;
}
MOZ_ASSERT(rt->isMainRuntime());
MOZ_ASSERT(rt->childRuntimeCount == 0);
// Insert at start to preserve invariant that atoms zones come first.
AutoEnterOOMUnsafeRegion oomUnsafe;
if (!zones().insert(zones().begin(), sharedAtomsZone_)) {
oomUnsafe.crash("restoreSharedAtomsZone");
}
sharedAtomsZone_ = nullptr;
}
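// Embedders typically reach this through the public API (for example
// JS_SetGCParameter(cx, JSGC_INCREMENTAL_GC_ENABLED, 1)). Any in-progress GC
// is finished first so that parameters do not change mid-collection.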
bool GCRuntime::setParameter(JSContext* cx, JSGCParamKey key, uint32_t value) {
MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
AutoStopVerifyingBarriers pauseVerification(rt, false);
FinishGC(cx);
waitBackgroundSweepEnd();
AutoLockGC lock(this);
return setParameter(key, value, lock);
}
static bool IsGCThreadParameter(JSGCParamKey key) {
return key == JSGC_HELPER_THREAD_RATIO || key == JSGC_MAX_HELPER_THREADS ||
key == JSGC_MAX_MARKING_THREADS;
}
bool GCRuntime::setParameter(JSGCParamKey key, uint32_t value,
AutoLockGC& lock) {
switch (key) {
case JSGC_SLICE_TIME_BUDGET_MS:
defaultTimeBudgetMS_ = value;
break;
case JSGC_INCREMENTAL_GC_ENABLED:
setIncrementalGCEnabled(value != 0);
break;
case JSGC_PER_ZONE_GC_ENABLED:
perZoneGCEnabled = value != 0;
break;
case JSGC_COMPACTING_ENABLED:
compactingEnabled = value != 0;
break;
case JSGC_PARALLEL_MARKING_ENABLED:
setParallelMarkingEnabled(value != 0);
break;
case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
for (auto& marker : markers) {
marker->incrementalWeakMapMarkingEnabled = value != 0;
}
break;
case JSGC_SEMISPACE_NURSERY_ENABLED: {
AutoUnlockGC unlock(lock);
nursery().setSemispaceEnabled(value);
break;
}
case JSGC_MIN_EMPTY_CHUNK_COUNT:
setMinEmptyChunkCount(value, lock);
break;
case JSGC_MAX_EMPTY_CHUNK_COUNT:
setMaxEmptyChunkCount(value, lock);
break;
default:
if (IsGCThreadParameter(key)) {
return setThreadParameter(key, value, lock);
}
if (!tunables.setParameter(key, value)) {
return false;
}
updateAllGCStartThresholds();
}
return true;
}
bool GCRuntime::setThreadParameter(JSGCParamKey key, uint32_t value,
AutoLockGC& lock) {
if (rt->parentRuntime) {
// Don't allow these to be set for worker runtimes.
return false;
}
switch (key) {
case JSGC_HELPER_THREAD_RATIO:
if (value == 0) {
return false;
}
helperThreadRatio = double(value) / 100.0;
break;
case JSGC_MAX_HELPER_THREADS:
if (value == 0) {
return false;
}
maxHelperThreads = value;
break;
case JSGC_MAX_MARKING_THREADS:
maxMarkingThreads = std::min(size_t(value), MaxParallelWorkers);
break;
default:
MOZ_CRASH("Unexpected parameter key");
}
updateHelperThreadCount();
initOrDisableParallelMarking();
return true;
}
void GCRuntime::resetParameter(JSContext* cx, JSGCParamKey key) {
MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
AutoStopVerifyingBarriers pauseVerification(rt, false);
FinishGC(cx);
waitBackgroundSweepEnd();
AutoLockGC lock(this);
resetParameter(key, lock);
}
void GCRuntime::resetParameter(JSGCParamKey key, AutoLockGC& lock) {
switch (key) {
case JSGC_SLICE_TIME_BUDGET_MS:
defaultTimeBudgetMS_ = TuningDefaults::DefaultTimeBudgetMS;
break;
case JSGC_INCREMENTAL_GC_ENABLED:
setIncrementalGCEnabled(TuningDefaults::IncrementalGCEnabled);
break;
case JSGC_PER_ZONE_GC_ENABLED:
perZoneGCEnabled = TuningDefaults::PerZoneGCEnabled;
break;
case JSGC_COMPACTING_ENABLED:
compactingEnabled = TuningDefaults::CompactingEnabled;
break;
case JSGC_PARALLEL_MARKING_ENABLED:
setParallelMarkingEnabled(TuningDefaults::ParallelMarkingEnabled);
break;
case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
for (auto& marker : markers) {
marker->incrementalWeakMapMarkingEnabled =
TuningDefaults::IncrementalWeakMapMarkingEnabled;
}
break;
case JSGC_SEMISPACE_NURSERY_ENABLED: {
AutoUnlockGC unlock(lock);
nursery().setSemispaceEnabled(TuningDefaults::SemispaceNurseryEnabled);
break;
}
case JSGC_MIN_EMPTY_CHUNK_COUNT:
setMinEmptyChunkCount(TuningDefaults::MinEmptyChunkCount, lock);
break;
case JSGC_MAX_EMPTY_CHUNK_COUNT:
setMaxEmptyChunkCount(TuningDefaults::MaxEmptyChunkCount, lock);
break;
default:
if (IsGCThreadParameter(key)) {
resetThreadParameter(key, lock);
return;
}
tunables.resetParameter(key);
updateAllGCStartThresholds();
}
}
void GCRuntime::resetThreadParameter(JSGCParamKey key, AutoLockGC& lock) {
if (rt->parentRuntime) {
return;
}
switch (key) {
case JSGC_HELPER_THREAD_RATIO:
helperThreadRatio = TuningDefaults::HelperThreadRatio;
break;
case JSGC_MAX_HELPER_THREADS:
maxHelperThreads = TuningDefaults::MaxHelperThreads;
break;
case JSGC_MAX_MARKING_THREADS:
maxMarkingThreads = TuningDefaults::MaxMarkingThreads;
break;
default:
MOZ_CRASH("Unexpected parameter key");
}
updateHelperThreadCount();
initOrDisableParallelMarking();
}
uint32_t GCRuntime::getParameter(JSGCParamKey key) {
MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
AutoLockGC lock(this);
return getParameter(key, lock);
}
uint32_t GCRuntime::getParameter(JSGCParamKey key, const AutoLockGC& lock) {
switch (key) {
case JSGC_BYTES:
return uint32_t(heapSize.bytes());
case JSGC_NURSERY_BYTES:
return nursery().capacity();
case JSGC_NUMBER:
return uint32_t(number);
case JSGC_MAJOR_GC_NUMBER:
return uint32_t(majorGCNumber);
case JSGC_MINOR_GC_NUMBER:
return uint32_t(minorGCNumber);
case JSGC_SLICE_NUMBER:
return uint32_t(sliceNumber);
case JSGC_INCREMENTAL_GC_ENABLED:
return incrementalGCEnabled;
case JSGC_PER_ZONE_GC_ENABLED:
return perZoneGCEnabled;
case JSGC_UNUSED_CHUNKS:
return uint32_t(emptyChunks(lock).count());
case JSGC_TOTAL_CHUNKS:
return uint32_t(fullChunks(lock).count() + availableChunks(lock).count() +
emptyChunks(lock).count());
case JSGC_SLICE_TIME_BUDGET_MS:
MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ >= 0);
MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ <= UINT32_MAX);
return uint32_t(defaultTimeBudgetMS_);
case JSGC_MIN_EMPTY_CHUNK_COUNT:
return minEmptyChunkCount(lock);
case JSGC_MAX_EMPTY_CHUNK_COUNT:
return maxEmptyChunkCount(lock);
case JSGC_COMPACTING_ENABLED:
return compactingEnabled;
case JSGC_PARALLEL_MARKING_ENABLED:
return parallelMarkingEnabled;
case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
return marker().incrementalWeakMapMarkingEnabled;
case JSGC_SEMISPACE_NURSERY_ENABLED:
return nursery().semispaceEnabled();
case JSGC_CHUNK_BYTES:
return ChunkSize;
case JSGC_HELPER_THREAD_RATIO:
MOZ_ASSERT(helperThreadRatio > 0.0);
return uint32_t(helperThreadRatio * 100.0);
case JSGC_MAX_HELPER_THREADS:
MOZ_ASSERT(maxHelperThreads <= UINT32_MAX);
return maxHelperThreads;
case JSGC_HELPER_THREAD_COUNT:
return helperThreadCount;
case JSGC_MAX_MARKING_THREADS:
return maxMarkingThreads;
case JSGC_MARKING_THREAD_COUNT:
return markingThreadCount;
case JSGC_SYSTEM_PAGE_SIZE_KB:
return SystemPageSize() / 1024;
default:
return tunables.getParameter(key);
}
}
#ifdef JS_GC_ZEAL
void GCRuntime::setMarkStackLimit(size_t limit, AutoLockGC& lock) {
MOZ_ASSERT(!JS::RuntimeHeapIsBusy());
maybeMarkStackLimit = limit;
AutoUnlockGC unlock(lock);
AutoStopVerifyingBarriers pauseVerification(rt, false);
for (auto& marker : markers) {
marker->setMaxCapacity(limit);
}
}
#endif
void GCRuntime::setIncrementalGCEnabled(bool enabled) {
incrementalGCEnabled = enabled;
}
void GCRuntime::updateHelperThreadCount() {
if (!CanUseExtraThreads()) {
// startTask will run the work on the main thread if the count is 1.
MOZ_ASSERT(helperThreadCount == 1);
markingThreadCount = 1;
AutoLockHelperThreadState lock;
maxParallelThreads = 1;
return;
}
// Number of extra threads required during parallel marking to ensure we can
// start the necessary marking tasks. Background free and background
// allocation may already be running and we want to avoid these tasks blocking
// marking. In real configurations there will be enough threads that this
// won't affect anything.
static constexpr size_t SpareThreadsDuringParallelMarking = 2;
// Calculate the target thread count for GC parallel tasks.
size_t cpuCount = GetHelperThreadCPUCount();
helperThreadCount =
std::clamp(size_t(double(cpuCount) * helperThreadRatio.ref()), size_t(1),
maxHelperThreads.ref());
// Calculate the target thread count for parallel marking, which uses separate
// parameters to let us adjust this independently.
markingThreadCount = std::min(cpuCount / 2, maxMarkingThreads.ref());
// Calculate the overall target thread count taking into account the separate
// target for parallel marking threads. Add spare threads to avoid blocking
// parallel marking when there is other GC work happening.
size_t targetCount =
std::max(helperThreadCount.ref(),
markingThreadCount.ref() + SpareThreadsDuringParallelMarking);
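// For example (illustrative numbers): with cpuCount == 8, helperThreadRatio
// == 0.5 and maxMarkingThreads == 4, we get helperThreadCount == 4,
// markingThreadCount == min(8 / 2, 4) == 4 and targetCount == max(4, 4 + 2)
// == 6.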
// Attempt to create extra threads if possible. This is not supported when
// using an external thread pool.
AutoLockHelperThreadState lock;
(void)HelperThreadState().ensureThreadCount(targetCount, lock);
// Limit all thread counts based on the number of threads available, which may
// be fewer than requested.
size_t availableThreadCount = GetHelperThreadCount();
MOZ_ASSERT(availableThreadCount != 0);
targetCount = std::min(targetCount, availableThreadCount);
helperThreadCount = std::min(helperThreadCount.ref(), availableThreadCount);
if (availableThreadCount < SpareThreadsDuringParallelMarking) {
markingThreadCount = 1;
} else {
markingThreadCount =
std::min(markingThreadCount.ref(),
availableThreadCount - SpareThreadsDuringParallelMarking);
}
// Update the maximum number of threads that will be used for GC work.
maxParallelThreads = targetCount;
}
size_t GCRuntime::markingWorkerCount() const {
if (!CanUseExtraThreads() || !parallelMarkingEnabled) {
return 1;
}
if (markingThreadCount) {
return markingThreadCount;
}
// Limit parallel marking to use at most two threads initially.
return 2;
}
#ifdef DEBUG
void GCRuntime::assertNoMarkingWork() const {
for (const auto& marker : markers) {
MOZ_ASSERT(marker->isDrained());
}
MOZ_ASSERT(!hasDelayedMarking());
}
#endif
bool GCRuntime::setParallelMarkingEnabled(bool enabled) {
if (enabled == parallelMarkingEnabled) {
return true;
}
parallelMarkingEnabled = enabled;
return initOrDisableParallelMarking();
}
bool GCRuntime::initOrDisableParallelMarking() {
// Attempt to initialize parallel marking state or disable it on failure. This
// is called when parallel marking is enabled or disabled.
MOZ_ASSERT(markers.length() != 0);
if (updateMarkersVector()) {
return true;
}
// Failed to initialize parallel marking so disable it instead.
MOZ_ASSERT(parallelMarkingEnabled);
parallelMarkingEnabled = false;
MOZ_ALWAYS_TRUE(updateMarkersVector());
return false;
}
void GCRuntime::releaseMarkingThreads() {
MOZ_ALWAYS_TRUE(reserveMarkingThreads(0));
}
bool GCRuntime::reserveMarkingThreads(size_t newCount) {
if (reservedMarkingThreads == newCount) {
return true;
}
// Update the helper thread system's global count by subtracting this
// runtime's current contribution |reservedMarkingThreads| and adding the new
// contribution |newCount|.
AutoLockHelperThreadState lock;
auto& globalCount = HelperThreadState().gcParallelMarkingThreads;
MOZ_ASSERT(globalCount >= reservedMarkingThreads);
size_t newGlobalCount = globalCount - reservedMarkingThreads + newCount;
if (newGlobalCount > HelperThreadState().threadCount) {
// Not enough total threads.
return false;
}
globalCount = newGlobalCount;
reservedMarkingThreads = newCount;
return true;
}
size_t GCRuntime::getMaxParallelThreads() const {
AutoLockHelperThreadState lock;
return maxParallelThreads.ref();
}
bool GCRuntime::updateMarkersVector() {
MOZ_ASSERT(helperThreadCount >= 1,
"There must always be at least one mark task");
MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
assertNoMarkingWork();
// Limit worker count to number of GC parallel tasks that can run
// concurrently, otherwise one thread can deadlock waiting on another.
size_t targetCount = std::min(markingWorkerCount(), getMaxParallelThreads());
if (rt->isMainRuntime()) {
// For the main runtime, reserve helper threads as long as parallel marking
// is enabled. Worker runtimes may not mark in parallel if there are
// insufficient threads available at the time.
size_t threadsToReserve = targetCount > 1 ? targetCount : 0;
if (!reserveMarkingThreads(threadsToReserve)) {
return false;
}
}
if (markers.length() > targetCount) {
return markers.resize(targetCount);
}
while (markers.length() < targetCount) {
auto marker = MakeUnique<GCMarker>(rt);
if (!marker) {
return false;
}
#ifdef JS_GC_ZEAL
if (maybeMarkStackLimit) {
marker->setMaxCapacity(maybeMarkStackLimit);
}
#endif
if (!marker->init()) {
return false;
}
if (!markers.emplaceBack(std::move(marker))) {
return false;
}
}
return true;
}
template <typename F>
static bool EraseCallback(CallbackVector<F>& vector, F callback) {
for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
if (p->op == callback) {
vector.erase(p);
return true;
}
}
return false;
}
template <typename F>
static bool EraseCallback(CallbackVector<F>& vector, F callback, void* data) {
for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
if (p->op == callback && p->data == data) {
vector.erase(p);
return true;
}
}
return false;
}
bool GCRuntime::addBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
AssertHeapIsIdle();
return blackRootTracers.ref().append(Callback<JSTraceDataOp>(traceOp, data));
}
void GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
// Can be called from finalizers
MOZ_ALWAYS_TRUE(EraseCallback(blackRootTracers.ref(), traceOp));
}
void GCRuntime::setGrayRootsTracer(JSGrayRootsTracer traceOp, void* data) {
AssertHeapIsIdle();
grayRootTracer.ref() = {traceOp, data};
}
void GCRuntime::clearBlackAndGrayRootTracers() {
MOZ_ASSERT(rt->isBeingDestroyed());
blackRootTracers.ref().clear();
setGrayRootsTracer(nullptr, nullptr);
}
void GCRuntime::setGCCallback(JSGCCallback callback, void* data) {
gcCallback.ref() = {callback, data};
}
void GCRuntime::callGCCallback(JSGCStatus status, JS::GCReason reason) const {
const auto& callback = gcCallback.ref();
MOZ_ASSERT(callback.op);
callback.op(rt->mainContextFromOwnThread(), status, reason, callback.data);
}
void GCRuntime::setObjectsTenuredCallback(JSObjectsTenuredCallback callback,
void* data) {
tenuredCallback.ref() = {callback, data};
}
void GCRuntime::callObjectsTenuredCallback() {
JS::AutoSuppressGCAnalysis nogc;
const auto& callback = tenuredCallback.ref();
if (callback.op) {
callback.op(&mainThreadContext.ref(), callback.data);
}
}
bool GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data) {
return finalizeCallbacks.ref().append(
Callback<JSFinalizeCallback>(callback, data));
}
void GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback) {
MOZ_ALWAYS_TRUE(EraseCallback(finalizeCallbacks.ref(), callback));
}
void GCRuntime::callFinalizeCallbacks(JS::GCContext* gcx,
JSFinalizeStatus status) const {
for (const auto& p : finalizeCallbacks.ref()) {
p.op(gcx, status, p.data);
}
}
void GCRuntime::setHostCleanupFinalizationRegistryCallback(
JSHostCleanupFinalizationRegistryCallback callback, void* data) {
hostCleanupFinalizationRegistryCallback.ref() = {callback, data};
}
void GCRuntime::callHostCleanupFinalizationRegistryCallback(
JSFunction* doCleanup, GlobalObject* incumbentGlobal) {
JS::AutoSuppressGCAnalysis nogc;
const auto& callback = hostCleanupFinalizationRegistryCallback.ref();
if (callback.op) {
callback.op(doCleanup, incumbentGlobal, callback.data);
}
}
bool GCRuntime::addWeakPointerZonesCallback(JSWeakPointerZonesCallback callback,
void* data) {
return updateWeakPointerZonesCallbacks.ref().append(
Callback<JSWeakPointerZonesCallback>(callback, data));
}
void GCRuntime::removeWeakPointerZonesCallback(
JSWeakPointerZonesCallback callback) {
MOZ_ALWAYS_TRUE(
EraseCallback(updateWeakPointerZonesCallbacks.ref(), callback));
}
void GCRuntime::callWeakPointerZonesCallbacks(JSTracer* trc) const {
for (auto const& p : updateWeakPointerZonesCallbacks.ref()) {
p.op(trc, p.data);
}
}
bool GCRuntime::addWeakPointerCompartmentCallback(
JSWeakPointerCompartmentCallback callback, void* data) {
return updateWeakPointerCompartmentCallbacks.ref().append(
Callback<JSWeakPointerCompartmentCallback>(callback, data));
}
void GCRuntime::removeWeakPointerCompartmentCallback(
JSWeakPointerCompartmentCallback callback) {
MOZ_ALWAYS_TRUE(
EraseCallback(updateWeakPointerCompartmentCallbacks.ref(), callback));
}
void GCRuntime::callWeakPointerCompartmentCallbacks(
JSTracer* trc, JS::Compartment* comp) const {
for (auto const& p : updateWeakPointerCompartmentCallbacks.ref()) {
p.op(trc, comp, p.data);
}
}
JS::GCSliceCallback GCRuntime::setSliceCallback(JS::GCSliceCallback callback) {
return stats().setSliceCallback(callback);
}
bool GCRuntime::addNurseryCollectionCallback(
JS::GCNurseryCollectionCallback callback, void* data) {
return nurseryCollectionCallbacks.ref().append(
Callback<JS::GCNurseryCollectionCallback>(callback, data));
}
void GCRuntime::removeNurseryCollectionCallback(
JS::GCNurseryCollectionCallback callback, void* data) {
MOZ_ALWAYS_TRUE(
EraseCallback(nurseryCollectionCallbacks.ref(), callback, data));
}
void GCRuntime::callNurseryCollectionCallbacks(JS::GCNurseryProgress progress,
JS::GCReason reason) {
for (auto const& p : nurseryCollectionCallbacks.ref()) {
p.op(rt->mainContextFromOwnThread(), progress, reason, p.data);
}
}
JS::DoCycleCollectionCallback GCRuntime::setDoCycleCollectionCallback(
JS::DoCycleCollectionCallback callback) {
const auto prior = gcDoCycleCollectionCallback.ref();
gcDoCycleCollectionCallback.ref() = {callback, nullptr};
return prior.op;
}
void GCRuntime::callDoCycleCollectionCallback(JSContext* cx) {
const auto& callback = gcDoCycleCollectionCallback.ref();
if (callback.op) {
callback.op(cx);
}
}
bool GCRuntime::addRoot(Value* vp, const char* name) {
/*
* Sometimes Firefox will hold weak references to objects and then convert
* them to strong references by calling AddRoot (e.g., via PreserveWrapper,
* or ModifyBusyCount in workers). We need a read barrier to cover these
* cases.
*/
MOZ_ASSERT(vp);
Value value = *vp;
if (value.isGCThing()) {
ValuePreWriteBarrier(value);
}
return rootsHash.ref().put(vp, name);
}
void GCRuntime::removeRoot(Value* vp) {
rootsHash.ref().remove(vp);
notifyRootsRemoved();
}
/* Compacting GC */
bool js::gc::IsCurrentlyAnimating(const TimeStamp& lastAnimationTime,
const TimeStamp& currentTime) {
// Assume that we're currently animating if js::NotifyAnimationActivity has
// been called in the last second.
static const auto oneSecond = TimeDuration::FromSeconds(1);
return !lastAnimationTime.IsNull() &&
currentTime < (lastAnimationTime + oneSecond);
}
static bool DiscardedCodeRecently(Zone* zone, const TimeStamp& currentTime) {
static const auto thirtySeconds = TimeDuration::FromSeconds(30);
return !zone->lastDiscardedCodeTime().IsNull() &&
currentTime < (zone->lastDiscardedCodeTime() + thirtySeconds);
}
bool GCRuntime::shouldCompact() {
// Compact on shrinking GC if enabled. Skip compacting in incremental GCs
// if we are currently animating, unless the user is inactive or we're
// responding to memory pressure.
if (!isShrinkingGC() || !isCompactingGCEnabled()) {
return false;
}
if (initialReason == JS::GCReason::USER_INACTIVE ||
initialReason == JS::GCReason::MEM_PRESSURE) {
return true;
}
return !isIncremental ||
!IsCurrentlyAnimating(rt->lastAnimationTime, TimeStamp::Now());
}
bool GCRuntime::isCompactingGCEnabled() const {
return compactingEnabled &&
rt->mainContextFromOwnThread()->compactingDisabledCount == 0;