/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*-
* vim: set ts=8 sts=2 et sw=2 tw=80:
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
/*
* [SMDOC] Garbage Collector
*
* This code implements an incremental mark-and-sweep garbage collector, with
* most sweeping carried out in the background on a parallel thread.
*
* Full vs. zone GC
* ----------------
*
* The collector can collect all zones at once, or a subset. These types of
* collection are referred to as a full GC and a zone GC respectively.
*
* It is possible for an incremental collection that started out as a full GC to
* become a zone GC if new zones are created during the course of the
* collection.
*
* Incremental collection
* ----------------------
*
* For a collection to be carried out incrementally the following conditions
* must be met:
* - the collection must be run by calling js::GCSlice() rather than js::GC()
* - the GC parameter JSGC_INCREMENTAL_GC_ENABLED must be true.
*
* The last condition is an engine-internal mechanism to ensure that incremental
* collection is not carried out without the correct barriers being implemented.
* For more information see 'Incremental marking' below.
*
* If the collection is not incremental, all foreground activity happens inside
* a single call to GC() or GCSlice(). However the collection is not complete
* until the background sweeping activity has finished.
*
* An incremental collection proceeds as a series of slices, interleaved with
* mutator activity, i.e. running JavaScript code. Slices are limited by a time
* budget. The slice finishes as soon as possible after the requested time has
* passed.
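*
* A minimal sketch of how an embedder might drive an incremental collection.
* This is illustrative pseudocode only: the real entry points and their
* signatures live in js/public/GCAPI.h, and the names used here are not the
* actual API.
*
*   startIncrementalGC(reason, budget);
*   while (incrementalGCInProgress()) {
*     runSomeJSCode();                    // mutator runs between slices
*     performIncrementalGCSlice(budget);  // returns soon after the budget
*   }                                     //   expires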
*
* Collector states
* ----------------
*
* The collector proceeds through the following states, the current state being
* held in JSRuntime::gcIncrementalState:
*
* - Prepare - unmarks GC things, discards JIT code and other setup
* - MarkRoots - marks the stack and other roots
* - Mark - incrementally marks reachable things
* - Sweep - sweeps zones in groups and continues marking unswept zones
* - Finalize - performs background finalization, concurrent with mutator
* - Compact - incrementally compacts by zone
* - Decommit - performs background decommit and chunk removal
*
* Roots are marked in the first MarkRoots slice; this is the start of the GC
* proper. The following states can take place over one or more slices.
*
* In other words an incremental collection proceeds like this:
*
* Slice 1:   Prepare:    Starts background task to unmark GC things
*
* ... JS code runs, background unmarking finishes ...
*
* Slice 2:   MarkRoots:  Roots are pushed onto the mark stack.
*            Mark:       The mark stack is processed by popping an element,
*                        marking it, and pushing its children.
*
* ... JS code runs ...
*
* Slice 3:   Mark:       More mark stack processing.
*
* ... JS code runs ...
*
* Slice n-1: Mark:       More mark stack processing.
*
* ... JS code runs ...
*
* Slice n:   Mark:       Mark stack is completely drained.
*            Sweep:      Select first group of zones to sweep and sweep them.
*
* ... JS code runs ...
*
* Slice n+1: Sweep:      Mark objects in unswept zones that were newly
*                        identified as alive (see below). Then sweep more zone
*                        sweep groups.
*
* ... JS code runs ...
*
* Slice n+2: Sweep:      Mark objects in unswept zones that were newly
*                        identified as alive. Then sweep more zones.
*
* ... JS code runs ...
*
* Slice m:   Sweep:      Sweeping is finished, and background sweeping
*                        started on the helper thread.
*
* ... JS code runs, remaining sweeping done on background thread ...
*
* When background sweeping finishes the GC is complete.
*
* Incremental marking
* -------------------
*
* Incremental collection requires close collaboration with the mutator (i.e.,
* JS code) to guarantee correctness.
*
* - During an incremental GC, if a memory location (except a root) is written
* to, then the value it previously held must be marked. Write barriers
* ensure this.
*
* - Any object that is allocated during incremental GC must start out marked.
*
* - Roots are marked in the first slice and hence don't need write barriers.
* Roots are things like the C stack and the VM stack.
*
* The problem that write barriers solve is that between slices the mutator can
* change the object graph. We must ensure that it cannot do this in such a way
* that makes us fail to mark a reachable object (marking an unreachable object
* is tolerable).
*
* We use a snapshot-at-the-beginning algorithm to do this. This means that we
* promise to mark at least everything that is reachable at the beginning of
* collection. To implement it we mark the old contents of every non-root memory
* location written to by the mutator while the collection is in progress, using
* write barriers. This is described in gc/Barrier.h.
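*
* As an illustration only (the real barriers live in gc/Barrier.h and are
* implemented by the barriered pointer wrapper types; the names below are
* made up), a pre-write barrier conceptually does the following:
*
*   void barrieredWrite(Cell** slot, Cell* newValue) {
*     if (isIncrementalMarkingActive() && *slot) {
*       markCell(*slot);  // mark the old contents (snapshot-at-the-beginning)
*     }
*     *slot = newValue;
*   }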
*
* Incremental sweeping
* --------------------
*
* Sweeping is difficult to do incrementally because object finalizers must be
* run at the start of sweeping, before any mutator code runs. The reason is
* that some objects use their finalizers to remove themselves from caches. If
* mutator code was allowed to run after the start of sweeping, it could observe
* the state of the cache and create a new reference to an object that was just
* about to be destroyed.
*
* Sweeping all finalizable objects in one go would introduce long pauses, so
* instead sweeping is broken up into groups of zones. Zones which are not yet
* being swept are still marked, so the issue above does not apply.
*
* The order of sweeping is restricted by cross compartment pointers - for
* example say that object |a| from zone A points to object |b| in zone B and
* neither object was marked when we transitioned to the Sweep phase. Imagine we
* sweep B first and then return to the mutator. It's possible that the mutator
* could cause |a| to become alive through a read barrier (perhaps it was a
* shape that was accessed via a shape table). Then we would need to mark |b|,
* which |a| points to, but |b| has already been swept.
*
* So if there is such a pointer then marking of zone B must not finish before
* marking of zone A. Pointers which form a cycle between zones therefore
* restrict those zones to being swept at the same time, and these are found
* using Tarjan's algorithm for finding the strongly connected components of a
* graph.
*
* GC things without finalizers, and things with finalizers that are able to run
* in the background, are swept on the background thread. This accounts for most
* of the sweeping work.
*
* Reset
* -----
*
* During incremental collection it is possible, although unlikely, for
* conditions to change such that incremental collection is no longer safe. In
* this case, the collection is 'reset' by resetIncrementalGC(). If we are in
* the mark state, this just stops marking, but if we have started sweeping
* already, we continue non-incrementally until we have swept the current sweep
* group. Following a reset, a new collection is started.
*
* Compacting GC
* -------------
*
* Compacting GC happens at the end of a major GC as part of the last slice.
* There are three parts:
*
* - Arenas are selected for compaction.
* - The contents of those arenas are moved to new arenas.
* - All references to moved things are updated.
*
* Collecting Atoms
* ----------------
*
* Atoms are collected differently from other GC things. They are contained in
* a special zone and things in other zones may have pointers to them that are
* not recorded in the cross compartment pointer map. Each zone holds a bitmap
* with the atoms it might be keeping alive, and atoms are only collected if
* they are not included in any zone's atom bitmap. See AtomMarking.cpp for how
* this bitmap is managed.
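*
* Conceptually (illustrative pseudocode only, not the AtomMarking API), the
* liveness check amounts to:
*
*   bool atomCanBeCollected(Atom* atom) {
*     for (Zone* zone : allZones()) {
*       if (zone->markedAtoms().contains(atom)) {
*         return false;  // some zone may still be keeping this atom alive
*       }
*     }
*     return true;
*   }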
*/
#include "gc/GC-inl.h"
#include "mozilla/DebugOnly.h"
#include "mozilla/MacroForEach.h"
#include "mozilla/MemoryReporting.h"
#include "mozilla/Range.h"
#include "mozilla/ScopeExit.h"
#include "mozilla/TextUtils.h"
#include "mozilla/TimeStamp.h"
#include <algorithm>
#include <initializer_list>
#include <iterator>
#include <stdlib.h>
#include <string.h>
#include <utility>
#if !defined(XP_WIN) && !defined(__wasi__)
# include <sys/mman.h>
# include <unistd.h>
#endif
#include "jstypes.h"
#include "builtin/FinalizationRegistryObject.h"
#include "builtin/WeakRefObject.h"
#include "debugger/DebugAPI.h"
#include "gc/ClearEdgesTracer.h"
#include "gc/FindSCCs.h"
#include "gc/FreeOp.h"
#include "gc/GCInternals.h"
#include "gc/GCLock.h"
#include "gc/GCProbes.h"
#include "gc/Memory.h"
#include "gc/ParallelWork.h"
#include "gc/Policy.h"
#include "gc/WeakMap.h"
#include "jit/BaselineJIT.h"
#include "jit/JitCode.h"
#include "jit/JitcodeMap.h"
#include "jit/JitRealm.h"
#include "jit/JitRuntime.h"
#include "jit/JitZone.h"
#include "jit/MacroAssembler.h" // js::jit::CodeAlignment
#include "js/HeapAPI.h" // JS::GCCellPtr
#include "js/Object.h" // JS::GetClass
#include "js/PropertyAndElement.h" // JS_DefineProperty
#include "js/SliceBudget.h"
#include "proxy/DeadObjectProxy.h"
#include "util/DifferentialTesting.h"
#include "util/Poison.h"
#include "util/Windows.h"
#include "vm/BigIntType.h"
#include "vm/GeckoProfiler.h"
#include "vm/GetterSetter.h"
#include "vm/HelperThreadState.h"
#include "vm/JSAtom.h"
#include "vm/JSContext.h"
#include "vm/JSObject.h"
#include "vm/JSScript.h"
#include "vm/Printer.h"
#include "vm/PropMap.h"
#include "vm/ProxyObject.h"
#include "vm/Realm.h"
#include "vm/Shape.h"
#include "vm/StringType.h"
#include "vm/SymbolType.h"
#include "vm/Time.h"
#include "vm/TraceLogging.h"
#include "vm/WrapperObject.h"
#include "wasm/TypedObject.h"
#include "gc/Heap-inl.h"
#include "gc/Marking-inl.h"
#include "gc/Nursery-inl.h"
#include "gc/PrivateIterators-inl.h"
#include "gc/Zone-inl.h"
#include "vm/GeckoProfiler-inl.h"
#include "vm/JSObject-inl.h"
#include "vm/JSScript-inl.h"
#include "vm/PropMap-inl.h"
#include "vm/Stack-inl.h"
#include "vm/StringType-inl.h"
using namespace js;
using namespace js::gc;
using mozilla::Maybe;
using mozilla::Nothing;
using mozilla::Some;
using mozilla::TimeDuration;
using mozilla::TimeStamp;
using JS::AutoGCRooter;
/* Increase the IGC marking slice time if we are in highFrequencyGC mode. */
static constexpr int IGC_MARK_SLICE_MULTIPLIER = 2;
const AllocKind gc::slotsToThingKind[] = {
// clang-format off
/* 0 */ AllocKind::OBJECT0, AllocKind::OBJECT2, AllocKind::OBJECT2, AllocKind::OBJECT4,
/* 4 */ AllocKind::OBJECT4, AllocKind::OBJECT8, AllocKind::OBJECT8, AllocKind::OBJECT8,
/* 8 */ AllocKind::OBJECT8, AllocKind::OBJECT12, AllocKind::OBJECT12, AllocKind::OBJECT12,
/* 12 */ AllocKind::OBJECT12, AllocKind::OBJECT16, AllocKind::OBJECT16, AllocKind::OBJECT16,
/* 16 */ AllocKind::OBJECT16
// clang-format on
};
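// For example, an object that needs 5 fixed slots maps to slotsToThingKind[5]
// == AllocKind::OBJECT8, i.e. the smallest object alloc kind with at least
// that many slots.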
// Check that reserved bits of a Cell are compatible with our typical allocators
// since most derived classes will store a pointer in the first word.
static const size_t MinFirstWordAlignment = 1u << CellFlagBitsReservedForGC;
static_assert(js::detail::LIFO_ALLOC_ALIGN >= MinFirstWordAlignment,
"CellFlagBitsReservedForGC should support LifoAlloc");
static_assert(CellAlignBytes >= MinFirstWordAlignment,
"CellFlagBitsReservedForGC should support gc::Cell");
static_assert(js::jit::CodeAlignment >= MinFirstWordAlignment,
"CellFlagBitsReservedForGC should support JIT code");
static_assert(js::gc::JSClassAlignBytes >= MinFirstWordAlignment,
"CellFlagBitsReservedForGC should support JSClass pointers");
static_assert(js::ScopeDataAlignBytes >= MinFirstWordAlignment,
"CellFlagBitsReservedForGC should support scope data pointers");
static_assert(std::size(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
"We have defined a slot count for each kind.");
#define CHECK_THING_SIZE(allocKind, traceKind, type, sizedType, bgFinal, \
nursery, compact) \
static_assert(sizeof(sizedType) >= SortedArenaList::MinThingSize, \
#sizedType " is smaller than SortedArenaList::MinThingSize!"); \
static_assert(sizeof(sizedType) >= sizeof(FreeSpan), \
#sizedType " is smaller than FreeSpan"); \
static_assert(sizeof(sizedType) % CellAlignBytes == 0, \
"Size of " #sizedType " is not a multiple of CellAlignBytes"); \
static_assert(sizeof(sizedType) >= MinCellSize, \
"Size of " #sizedType " is smaller than the minimum size");
FOR_EACH_ALLOCKIND(CHECK_THING_SIZE);
#undef CHECK_THING_SIZE
template <typename T>
struct ArenaLayout {
static constexpr size_t thingSize() { return sizeof(T); }
static constexpr size_t thingsPerArena() {
return (ArenaSize - ArenaHeaderSize) / thingSize();
}
static constexpr size_t firstThingOffset() {
return ArenaSize - thingSize() * thingsPerArena();
}
};
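// Worked example with illustrative numbers only (the real constants are
// defined in the GC headers): assuming ArenaSize = 4096 and ArenaHeaderSize =
// 32, a 24-byte thing gives
//   thingsPerArena()   = (4096 - 32) / 24 = 169
//   firstThingOffset() = 4096 - 169 * 24  = 40
// so the 40 bytes before the first thing hold the arena header plus padding.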
const uint8_t Arena::ThingSizes[] = {
#define EXPAND_THING_SIZE(_1, _2, _3, sizedType, _4, _5, _6) \
ArenaLayout<sizedType>::thingSize(),
FOR_EACH_ALLOCKIND(EXPAND_THING_SIZE)
#undef EXPAND_THING_SIZE
};
const uint8_t Arena::FirstThingOffsets[] = {
#define EXPAND_FIRST_THING_OFFSET(_1, _2, _3, sizedType, _4, _5, _6) \
ArenaLayout<sizedType>::firstThingOffset(),
FOR_EACH_ALLOCKIND(EXPAND_FIRST_THING_OFFSET)
#undef EXPAND_FIRST_THING_OFFSET
};
const uint8_t Arena::ThingsPerArena[] = {
#define EXPAND_THINGS_PER_ARENA(_1, _2, _3, sizedType, _4, _5, _6) \
ArenaLayout<sizedType>::thingsPerArena(),
FOR_EACH_ALLOCKIND(EXPAND_THINGS_PER_ARENA)
#undef EXPAND_THINGS_PER_ARENA
};
FreeSpan FreeLists::emptySentinel;
struct js::gc::FinalizePhase {
gcstats::PhaseKind statsPhase;
AllocKinds kinds;
};
/*
* Finalization order for objects swept incrementally on the main thread.
*/
static constexpr FinalizePhase ForegroundObjectFinalizePhase = {
gcstats::PhaseKind::SWEEP_OBJECT,
{AllocKind::OBJECT0, AllocKind::OBJECT2, AllocKind::OBJECT4,
AllocKind::OBJECT8, AllocKind::OBJECT12, AllocKind::OBJECT16}};
/*
* Finalization order for GC things swept incrementally on the main thread.
*/
static constexpr FinalizePhase ForegroundNonObjectFinalizePhase = {
gcstats::PhaseKind::SWEEP_SCRIPT, {AllocKind::SCRIPT, AllocKind::JITCODE}};
/*
* Finalization order for GC things swept on the background thread.
*/
static constexpr FinalizePhase BackgroundFinalizePhases[] = {
{gcstats::PhaseKind::SWEEP_OBJECT,
{AllocKind::FUNCTION, AllocKind::FUNCTION_EXTENDED,
AllocKind::OBJECT0_BACKGROUND, AllocKind::OBJECT2_BACKGROUND,
AllocKind::ARRAYBUFFER4, AllocKind::OBJECT4_BACKGROUND,
AllocKind::ARRAYBUFFER8, AllocKind::OBJECT8_BACKGROUND,
AllocKind::ARRAYBUFFER12, AllocKind::OBJECT12_BACKGROUND,
AllocKind::ARRAYBUFFER16, AllocKind::OBJECT16_BACKGROUND}},
{gcstats::PhaseKind::SWEEP_SCOPE,
{
AllocKind::SCOPE,
}},
{gcstats::PhaseKind::SWEEP_REGEXP_SHARED,
{
AllocKind::REGEXP_SHARED,
}},
{gcstats::PhaseKind::SWEEP_STRING,
{AllocKind::FAT_INLINE_STRING, AllocKind::STRING,
AllocKind::EXTERNAL_STRING, AllocKind::FAT_INLINE_ATOM, AllocKind::ATOM,
AllocKind::SYMBOL, AllocKind::BIGINT}},
{gcstats::PhaseKind::SWEEP_SHAPE,
{AllocKind::SHAPE, AllocKind::BASE_SHAPE, AllocKind::GETTER_SETTER,
AllocKind::COMPACT_PROP_MAP, AllocKind::NORMAL_PROP_MAP,
AllocKind::DICT_PROP_MAP}}};
void Arena::unmarkAll() {
MarkBitmapWord* arenaBits = chunk()->markBits.arenaBits(this);
for (size_t i = 0; i < ArenaBitmapWords; i++) {
arenaBits[i] = 0;
}
}
void Arena::unmarkPreMarkedFreeCells() {
for (ArenaFreeCellIter cell(this); !cell.done(); cell.next()) {
MOZ_ASSERT(cell->isMarkedBlack());
cell->unmark();
}
}
#ifdef DEBUG
void Arena::checkNoMarkedFreeCells() {
for (ArenaFreeCellIter cell(this); !cell.done(); cell.next()) {
MOZ_ASSERT(!cell->isMarkedAny());
}
}
void Arena::checkAllCellsMarkedBlack() {
for (ArenaCellIter cell(this); !cell.done(); cell.next()) {
MOZ_ASSERT(cell->isMarkedBlack());
}
}
#endif
#if defined(DEBUG) || defined(JS_GC_ZEAL)
void Arena::checkNoMarkedCells() {
for (ArenaCellIter cell(this); !cell.done(); cell.next()) {
MOZ_ASSERT(!cell->isMarkedAny());
}
}
#endif
/* static */
void Arena::staticAsserts() {
static_assert(size_t(AllocKind::LIMIT) <= 255,
"All AllocKinds and AllocKind::LIMIT must fit in a uint8_t.");
static_assert(std::size(ThingSizes) == AllocKindCount,
"We haven't defined all thing sizes.");
static_assert(std::size(FirstThingOffsets) == AllocKindCount,
"We haven't defined all offsets.");
static_assert(std::size(ThingsPerArena) == AllocKindCount,
"We haven't defined all counts.");
}
/* static */
inline void Arena::checkLookupTables() {
#ifdef DEBUG
for (size_t i = 0; i < AllocKindCount; i++) {
MOZ_ASSERT(
FirstThingOffsets[i] + ThingsPerArena[i] * ThingSizes[i] == ArenaSize,
"Inconsistent arena lookup table data");
}
#endif
}
template <typename T>
inline size_t Arena::finalize(JSFreeOp* fop, AllocKind thingKind,
size_t thingSize) {
/* Enforce requirements on size of T. */
MOZ_ASSERT(thingSize % CellAlignBytes == 0);
MOZ_ASSERT(thingSize >= MinCellSize);
MOZ_ASSERT(thingSize <= 255);
MOZ_ASSERT(allocated());
MOZ_ASSERT(thingKind == getAllocKind());
MOZ_ASSERT(thingSize == getThingSize());
MOZ_ASSERT(!onDelayedMarkingList_);
uint_fast16_t firstThing = firstThingOffset(thingKind);
uint_fast16_t firstThingOrSuccessorOfLastMarkedThing = firstThing;
uint_fast16_t lastThing = ArenaSize - thingSize;
FreeSpan newListHead;
FreeSpan* newListTail = &newListHead;
size_t nmarked = 0, nfinalized = 0;
for (ArenaCellIterUnderFinalize cell(this); !cell.done(); cell.next()) {
T* t = cell.as<T>();
if (t->asTenured().isMarkedAny()) {
uint_fast16_t thing = uintptr_t(t) & ArenaMask;
if (thing != firstThingOrSuccessorOfLastMarkedThing) {
// We just finished passing over one or more free things,
// so record a new FreeSpan.
newListTail->initBounds(firstThingOrSuccessorOfLastMarkedThing,
thing - thingSize, this);
newListTail = newListTail->nextSpanUnchecked(this);
}
firstThingOrSuccessorOfLastMarkedThing = thing + thingSize;
nmarked++;
} else {
t->finalize(fop);
AlwaysPoison(t, JS_SWEPT_TENURED_PATTERN, thingSize,
MemCheckKind::MakeUndefined);
gcprobes::TenuredFinalize(t);
nfinalized++;
}
}
if constexpr (std::is_same_v<T, JSObject>) {
if (isNewlyCreated) {
zone->pretenuring.updateCellCountsInNewlyCreatedArenas(
nmarked + nfinalized, nmarked);
}
}
isNewlyCreated = 0;
if (thingKind == AllocKind::STRING ||
thingKind == AllocKind::FAT_INLINE_STRING) {
zone->markedStrings += nmarked;
zone->finalizedStrings += nfinalized;
}
if (nmarked == 0) {
// Do nothing. The caller will update the arena appropriately.
MOZ_ASSERT(newListTail == &newListHead);
DebugOnlyPoison(data, JS_SWEPT_TENURED_PATTERN, sizeof(data),
MemCheckKind::MakeUndefined);
return nmarked;
}
MOZ_ASSERT(firstThingOrSuccessorOfLastMarkedThing != firstThing);
uint_fast16_t lastMarkedThing =
firstThingOrSuccessorOfLastMarkedThing - thingSize;
if (lastThing == lastMarkedThing) {
// If the last thing was marked, we will have already set the bounds of
// the final span, and we just need to terminate the list.
newListTail->initAsEmpty();
} else {
// Otherwise, end the list with a span that covers the final stretch of free
// things.
newListTail->initFinal(firstThingOrSuccessorOfLastMarkedThing, lastThing,
this);
}
firstFreeSpan = newListHead;
#ifdef DEBUG
size_t nfree = numFreeThings(thingSize);
MOZ_ASSERT(nfree + nmarked == thingsPerArena(thingKind));
#endif
return nmarked;
}
// Finalize arenas from the src list and insert them into the appropriate
// destination size bins. Arenas that end up completely unused are recycled
// rather than released here.
template <typename T>
static inline bool FinalizeTypedArenas(JSFreeOp* fop, Arena** src,
SortedArenaList& dest,
AllocKind thingKind,
SliceBudget& budget) {
AutoSetThreadIsFinalizing setThreadUse;
size_t thingSize = Arena::thingSize(thingKind);
size_t thingsPerArena = Arena::thingsPerArena(thingKind);
while (Arena* arena = *src) {
Arena* next = arena->next;
MOZ_ASSERT_IF(next, next->zone == arena->zone);
*src = next;
size_t nmarked = arena->finalize<T>(fop, thingKind, thingSize);
size_t nfree = thingsPerArena - nmarked;
if (nmarked) {
dest.insertAt(arena, nfree);
} else {
arena->chunk()->recycleArena(arena, dest, thingsPerArena);
}
budget.step(thingsPerArena);
if (budget.isOverBudget()) {
return false;
}
}
return true;
}
/*
* Finalize the list of arenas.
*/
static bool FinalizeArenas(JSFreeOp* fop, Arena** src, SortedArenaList& dest,
AllocKind thingKind, SliceBudget& budget) {
switch (thingKind) {
#define EXPAND_CASE(allocKind, traceKind, type, sizedType, bgFinal, nursery, \
compact) \
case AllocKind::allocKind: \
return FinalizeTypedArenas<type>(fop, src, dest, thingKind, budget);
FOR_EACH_ALLOCKIND(EXPAND_CASE)
#undef EXPAND_CASE
default:
MOZ_CRASH("Invalid alloc kind");
}
}
TenuredChunk* ChunkPool::pop() {
MOZ_ASSERT(bool(head_) == bool(count_));
if (!count_) {
return nullptr;
}
return remove(head_);
}
void ChunkPool::push(TenuredChunk* chunk) {
MOZ_ASSERT(!chunk->info.next);
MOZ_ASSERT(!chunk->info.prev);
chunk->info.next = head_;
if (head_) {
head_->info.prev = chunk;
}
head_ = chunk;
++count_;
}
TenuredChunk* ChunkPool::remove(TenuredChunk* chunk) {
MOZ_ASSERT(count_ > 0);
MOZ_ASSERT(contains(chunk));
if (head_ == chunk) {
head_ = chunk->info.next;
}
if (chunk->info.prev) {
chunk->info.prev->info.next = chunk->info.next;
}
if (chunk->info.next) {
chunk->info.next->info.prev = chunk->info.prev;
}
chunk->info.next = chunk->info.prev = nullptr;
--count_;
return chunk;
}
// We could keep the chunk pool sorted, but that's likely to be more expensive.
// This sort is O(n log n), whereas keeping the pool sorted would cost roughly
// O(m * n), where m is the number of insert/remove operations (likely higher
// than n).
void ChunkPool::sort() {
// Only sort if the list isn't already sorted.
if (!isSorted()) {
head_ = mergeSort(head(), count());
// Fixup prev pointers.
TenuredChunk* prev = nullptr;
for (TenuredChunk* cur = head_; cur; cur = cur->info.next) {
cur->info.prev = prev;
prev = cur;
}
}
MOZ_ASSERT(verify());
MOZ_ASSERT(isSorted());
}
TenuredChunk* ChunkPool::mergeSort(TenuredChunk* list, size_t count) {
MOZ_ASSERT(bool(list) == bool(count));
if (count < 2) {
return list;
}
size_t half = count / 2;
// Split the list in two.
TenuredChunk* front = list;
TenuredChunk* back;
{
TenuredChunk* cur = list;
for (size_t i = 0; i < half - 1; i++) {
MOZ_ASSERT(cur);
cur = cur->info.next;
}
back = cur->info.next;
cur->info.next = nullptr;
}
front = mergeSort(front, half);
back = mergeSort(back, count - half);
// Merge
list = nullptr;
TenuredChunk** cur = &list;
while (front || back) {
if (!front) {
*cur = back;
break;
}
if (!back) {
*cur = front;
break;
}
// Note that the sort is stable because of the <= here. Nothing currently
// depends on this, but it could.
if (front->info.numArenasFree <= back->info.numArenasFree) {
*cur = front;
front = front->info.next;
cur = &(*cur)->info.next;
} else {
*cur = back;
back = back->info.next;
cur = &(*cur)->info.next;
}
}
return list;
}
bool ChunkPool::isSorted() const {
uint32_t last = 1;
for (TenuredChunk* cursor = head_; cursor; cursor = cursor->info.next) {
if (cursor->info.numArenasFree < last) {
return false;
}
last = cursor->info.numArenasFree;
}
return true;
}
#ifdef DEBUG
bool ChunkPool::contains(TenuredChunk* chunk) const {
verify();
for (TenuredChunk* cursor = head_; cursor; cursor = cursor->info.next) {
if (cursor == chunk) {
return true;
}
}
return false;
}
bool ChunkPool::verify() const {
MOZ_ASSERT(bool(head_) == bool(count_));
uint32_t count = 0;
for (TenuredChunk* cursor = head_; cursor;
cursor = cursor->info.next, ++count) {
MOZ_ASSERT_IF(cursor->info.prev, cursor->info.prev->info.next == cursor);
MOZ_ASSERT_IF(cursor->info.next, cursor->info.next->info.prev == cursor);
}
MOZ_ASSERT(count_ == count);
return true;
}
void GCRuntime::verifyAllChunks() {
AutoLockGC lock(this);
fullChunks(lock).verifyChunks();
availableChunks(lock).verifyChunks();
emptyChunks(lock).verifyChunks();
}
void ChunkPool::verifyChunks() const {
for (TenuredChunk* chunk = head_; chunk; chunk = chunk->info.next) {
chunk->verify();
}
}
void TenuredChunk::verify() const {
size_t freeCount = 0;
size_t freeCommittedCount = 0;
for (size_t i = 0; i < ArenasPerChunk; ++i) {
if (decommittedPages[pageIndex(i)]) {
// Free but not committed.
freeCount++;
continue;
}
if (!arenas[i].allocated()) {
// Free and committed.
freeCount++;
freeCommittedCount++;
}
}
MOZ_ASSERT(freeCount == info.numArenasFree);
MOZ_ASSERT(freeCommittedCount == info.numArenasFreeCommitted);
size_t freeListCount = 0;
for (Arena* arena = info.freeArenasHead; arena; arena = arena->next) {
freeListCount++;
}
MOZ_ASSERT(freeListCount == info.numArenasFreeCommitted);
}
#endif
void ChunkPool::Iter::next() {
MOZ_ASSERT(!done());
current_ = current_->info.next;
}
inline bool GCRuntime::tooManyEmptyChunks(const AutoLockGC& lock) {
return emptyChunks(lock).count() > tunables.minEmptyChunkCount(lock);
}
ChunkPool GCRuntime::expireEmptyChunkPool(const AutoLockGC& lock) {
MOZ_ASSERT(emptyChunks(lock).verify());
MOZ_ASSERT(tunables.minEmptyChunkCount(lock) <=
tunables.maxEmptyChunkCount());
ChunkPool expired;
while (tooManyEmptyChunks(lock)) {
TenuredChunk* chunk = emptyChunks(lock).pop();
prepareToFreeChunk(chunk->info);
expired.push(chunk);
}
MOZ_ASSERT(expired.verify());
MOZ_ASSERT(emptyChunks(lock).verify());
MOZ_ASSERT(emptyChunks(lock).count() <= tunables.maxEmptyChunkCount());
MOZ_ASSERT(emptyChunks(lock).count() <= tunables.minEmptyChunkCount(lock));
return expired;
}
static void FreeChunkPool(ChunkPool& pool) {
for (ChunkPool::Iter iter(pool); !iter.done();) {
TenuredChunk* chunk = iter.get();
iter.next();
pool.remove(chunk);
MOZ_ASSERT(!chunk->info.numArenasFreeCommitted);
UnmapPages(static_cast<void*>(chunk), ChunkSize);
}
MOZ_ASSERT(pool.count() == 0);
}
void GCRuntime::freeEmptyChunks(const AutoLockGC& lock) {
FreeChunkPool(emptyChunks(lock));
}
inline void GCRuntime::prepareToFreeChunk(TenuredChunkInfo& info) {
MOZ_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted);
numArenasFreeCommitted -= info.numArenasFreeCommitted;
stats().count(gcstats::COUNT_DESTROY_CHUNK);
#ifdef DEBUG
/*
* Let FreeChunkPool detect a missing prepareToFreeChunk call before it
* frees the chunk.
*/
info.numArenasFreeCommitted = 0;
#endif
}
inline void GCRuntime::updateOnArenaFree() { ++numArenasFreeCommitted; }
bool TenuredChunk::isPageFree(size_t pageIndex) const {
if (decommittedPages[pageIndex]) {
return true;
}
size_t arenaIndex = pageIndex * ArenasPerPage;
for (size_t i = 0; i < ArenasPerPage; i++) {
if (arenas[arenaIndex + i].allocated()) {
return false;
}
}
return true;
}
bool TenuredChunk::isPageFree(const Arena* arena) const {
MOZ_ASSERT(arena);
// arena must come from the freeArenasHead list.
MOZ_ASSERT(!arena->allocated());
size_t count = 1;
size_t expectedPage = pageIndex(arena);
Arena* nextArena = arena->next;
while (nextArena && (pageIndex(nextArena) == expectedPage)) {
count++;
if (count == ArenasPerPage) {
break;
}
nextArena = nextArena->next;
}
return count == ArenasPerPage;
}
void TenuredChunk::addArenaToFreeList(GCRuntime* gc, Arena* arena) {
MOZ_ASSERT(!arena->allocated());
arena->next = info.freeArenasHead;
info.freeArenasHead = arena;
++info.numArenasFreeCommitted;
++info.numArenasFree;
gc->updateOnArenaFree();
}
void TenuredChunk::addArenasInPageToFreeList(GCRuntime* gc, size_t pageIndex) {
MOZ_ASSERT(isPageFree(pageIndex));
size_t arenaIndex = pageIndex * ArenasPerPage;
for (size_t i = 0; i < ArenasPerPage; i++) {
Arena* a = &arenas[arenaIndex + i];
MOZ_ASSERT(!a->allocated());
a->next = info.freeArenasHead;
info.freeArenasHead = a;
// These arenas are already free; we don't need to update numArenasFree.
++info.numArenasFreeCommitted;
gc->updateOnArenaFree();
}
}
void TenuredChunk::rebuildFreeArenasList() {
if (info.numArenasFreeCommitted == 0) {
MOZ_ASSERT(!info.freeArenasHead);
return;
}
mozilla::BitSet<ArenasPerChunk, uint32_t> freeArenas;
freeArenas.ResetAll();
Arena* arena = info.freeArenasHead;
while (arena) {
freeArenas[arenaIndex(arena->address())] = true;
arena = arena->next;
}
info.freeArenasHead = nullptr;
Arena** freeCursor = &info.freeArenasHead;
for (size_t i = 0; i < PagesPerChunk; i++) {
for (size_t j = 0; j < ArenasPerPage; j++) {
size_t arenaIndex = i * ArenasPerPage + j;
if (freeArenas[arenaIndex]) {
*freeCursor = &arenas[arenaIndex];
freeCursor = &arenas[arenaIndex].next;
}
}
}
*freeCursor = nullptr;
}
void TenuredChunk::decommitFreeArenas(GCRuntime* gc, const bool& cancel,
AutoLockGC& lock) {
MOZ_ASSERT(DecommitEnabled());
// We don't traverse all arenas in the chunk, to avoid touching arenas that
// have been mprotect'ed during compaction in debug builds. Instead, we
// traverse the freeArenasHead list.
Arena** freeCursor = &info.freeArenasHead;
while (*freeCursor && !cancel) {
if ((ArenasPerPage > 1) && !isPageFree(*freeCursor)) {
freeCursor = &(*freeCursor)->next;
continue;
}
// Find the next free arena after this page.
Arena* nextArena = *freeCursor;
for (size_t i = 0; i < ArenasPerPage; i++) {
nextArena = nextArena->next;
MOZ_ASSERT_IF(i != ArenasPerPage - 1, isPageFree(pageIndex(nextArena)));
}
size_t pIndex = pageIndex(*freeCursor);
// Remove the free arenas from the list.
*freeCursor = nextArena;
info.numArenasFreeCommitted -= ArenasPerPage;
// When we unlock below, another thread might acquire the lock and start
// allocating. It could then observe numArenasFree > 0 while there is no free
// arena available (because we set the decommit bit only after the
// MarkPagesUnusedSoft call). So we subtract from numArenasFree before
// unlocking and add it back once we reacquire the lock.
info.numArenasFree -= ArenasPerPage;
updateChunkListAfterAlloc(gc, lock);
bool ok = decommitOneFreePage(gc, pIndex, lock);
info.numArenasFree += ArenasPerPage;
updateChunkListAfterFree(gc, ArenasPerPage, lock);
if (!ok) {
break;
}
// While we were unlocked inside decommitOneFreePage, other threads doing
// allocations may have updated freeArenasHead.
// Because the free list is sorted, we check whether freeArenasHead has moved
// past the page we just decommitted; if so, we restart the cursor at the
// updated freeArenasHead, otherwise we continue with the next free arena in
// the list.
//
// If freeArenasHead is nullptr we treat it as the largest page index, so the
// cursor is reset to freeArenasHead in that case as well.
size_t latestIndex =
info.freeArenasHead ? pageIndex(info.freeArenasHead) : PagesPerChunk;
if (latestIndex > pIndex) {
freeCursor = &info.freeArenasHead;
}
}
}
void TenuredChunk::markArenasInPageDecommitted(size_t pageIndex) {
// The arenas within this page are already free, and numArenasFreeCommitted is
// subtracted in decommitFreeArenas.
decommittedPages[pageIndex] = true;
}
void TenuredChunk::recycleArena(Arena* arena, SortedArenaList& dest,
size_t thingsPerArena) {
arena->setAsFullyUnused();
dest.insertAt(arena, thingsPerArena);
}
void TenuredChunk::releaseArena(GCRuntime* gc, Arena* arena,
const AutoLockGC& lock) {
addArenaToFreeList(gc, arena);
updateChunkListAfterFree(gc, 1, lock);
}
bool TenuredChunk::decommitOneFreePage(GCRuntime* gc, size_t pageIndex,
AutoLockGC& lock) {
MOZ_ASSERT(DecommitEnabled());
#ifdef DEBUG
size_t index = pageIndex * ArenasPerPage;
for (size_t i = 0; i < ArenasPerPage; i++) {
MOZ_ASSERT(!arenas[index + i].allocated());
}
#endif
bool ok;
{
AutoUnlockGC unlock(lock);
ok = MarkPagesUnusedSoft(pageAddress(pageIndex), PageSize);
}
if (ok) {
markArenasInPageDecommitted(pageIndex);
} else {
addArenasInPageToFreeList(gc, pageIndex);
}
return ok;
}
void TenuredChunk::decommitFreeArenasWithoutUnlocking(const AutoLockGC& lock) {
MOZ_ASSERT(DecommitEnabled());
info.freeArenasHead = nullptr;
Arena** freeCursor = &info.freeArenasHead;
for (size_t i = 0; i < PagesPerChunk; i++) {
if (decommittedPages[i]) {
continue;
}
if (!isPageFree(i) || js::oom::ShouldFailWithOOM() ||
!MarkPagesUnusedSoft(pageAddress(i), SystemPageSize())) {
// Find the free arenas and add them to the freeArenasHead list.
for (size_t j = 0; j < ArenasPerPage; j++) {
size_t arenaIndex = i * ArenasPerPage + j;
if (!arenas[arenaIndex].allocated()) {
*freeCursor = &arenas[arenaIndex];
freeCursor = &arenas[arenaIndex].next;
}
}
continue;
}
decommittedPages[i] = true;
MOZ_ASSERT(info.numArenasFreeCommitted >= ArenasPerPage);
info.numArenasFreeCommitted -= ArenasPerPage;
}
*freeCursor = nullptr;
#ifdef DEBUG
verify();
#endif
}
void TenuredChunk::updateChunkListAfterAlloc(GCRuntime* gc,
const AutoLockGC& lock) {
if (MOZ_UNLIKELY(!hasAvailableArenas())) {
gc->availableChunks(lock).remove(this);
gc->fullChunks(lock).push(this);
}
}
void TenuredChunk::updateChunkListAfterFree(GCRuntime* gc, size_t numArenasFree,
const AutoLockGC& lock) {
if (info.numArenasFree == numArenasFree) {
gc->fullChunks(lock).remove(this);
gc->availableChunks(lock).push(this);
} else if (!unused()) {
MOZ_ASSERT(gc->availableChunks(lock).contains(this));
} else {
MOZ_ASSERT(unused());
gc->availableChunks(lock).remove(this);
decommitAllArenas();
MOZ_ASSERT(info.numArenasFreeCommitted == 0);
gc->recycleChunk(this, lock);
}
}
void GCRuntime::releaseArena(Arena* arena, const AutoLockGC& lock) {
MOZ_ASSERT(arena->allocated());
MOZ_ASSERT(!arena->onDelayedMarkingList());
arena->zone->gcHeapSize.removeGCArena();
arena->release(lock);
arena->chunk()->releaseArena(this, arena, lock);
}
GCRuntime::GCRuntime(JSRuntime* rt)
: rt(rt),
atomsZone(nullptr),
systemZone(nullptr),
heapState_(JS::HeapState::Idle),
stats_(this),
marker(rt),
barrierTracer(rt),
heapSize(nullptr),
helperThreadRatio(TuningDefaults::HelperThreadRatio),
maxHelperThreads(TuningDefaults::MaxHelperThreads),
helperThreadCount(1),
rootsHash(256),
nextCellUniqueId_(LargestTaggedNullCellPointer +
1), // Ensure disjoint from null tagged pointers.
numArenasFreeCommitted(0),
verifyPreData(nullptr),
lastGCStartTime_(ReallyNow()),
lastGCEndTime_(ReallyNow()),
incrementalGCEnabled(TuningDefaults::IncrementalGCEnabled),
perZoneGCEnabled(TuningDefaults::PerZoneGCEnabled),
numActiveZoneIters(0),
cleanUpEverything(false),
grayBufferState(GCRuntime::GrayBufferState::Unused),
grayBitsValid(false),
majorGCTriggerReason(JS::GCReason::NO_REASON),
fullGCForAtomsRequested_(false),
minorGCNumber(0),
majorGCNumber(0),
number(0),
sliceNumber(0),
isFull(false),
incrementalState(gc::State::NotActive),
initialState(gc::State::NotActive),
useZeal(false),
lastMarkSlice(false),
safeToYield(true),
markOnBackgroundThreadDuringSweeping(false),
sweepOnBackgroundThread(false),
requestSliceAfterBackgroundTask(false),
lifoBlocksToFree((size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
lifoBlocksToFreeAfterMinorGC(
(size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
sweepGroupIndex(0),
sweepGroups(nullptr),
currentSweepGroup(nullptr),
sweepZone(nullptr),
hasMarkedGrayRoots(false),
abortSweepAfterCurrentGroup(false),
sweepMarkResult(IncrementalProgress::NotFinished),
startedCompacting(false),
zonesCompacted(0),
#ifdef DEBUG
relocatedArenasToRelease(nullptr),
#endif
#ifdef JS_GC_ZEAL
markingValidator(nullptr),
#endif
defaultTimeBudgetMS_(TuningDefaults::DefaultTimeBudgetMS),
incrementalAllowed(true),
compactingEnabled(TuningDefaults::CompactingEnabled),
rootsRemoved(false),
#ifdef JS_GC_ZEAL
zealModeBits(0),
zealFrequency(0),
nextScheduled(0),
deterministicOnly(false),
zealSliceBudget(0),
selectedForMarking(rt),
#endif
fullCompartmentChecks(false),
gcCallbackDepth(0),
alwaysPreserveCode(false),
lowMemoryState(false),
lock(mutexid::GCLock),
allocTask(this, emptyChunks_.ref()),
unmarkTask(this),
markTask(this),
sweepTask(this),
freeTask(this),
decommitTask(this),
nursery_(this),
storeBuffer_(rt, nursery()) {
marker.setIncrementalGCEnabled(incrementalGCEnabled);
}
using CharRange = mozilla::Range<const char>;
using CharRangeVector = Vector<CharRange, 0, SystemAllocPolicy>;
static bool SplitStringBy(CharRange text, char delimiter,
CharRangeVector* result) {
auto start = text.begin();
for (auto ptr = start; ptr != text.end(); ptr++) {
if (*ptr == delimiter) {
if (!result->emplaceBack(start, ptr)) {
return false;
}
start = ptr + 1;
}
}
return result->emplaceBack(start, text.end());
}
static bool ParseTimeDuration(CharRange text, TimeDuration* durationOut) {
const char* str = text.begin().get();
char* end;
*durationOut = TimeDuration::FromMilliseconds(strtol(str, &end, 10));
return str != end && end == text.end().get();
}
static void PrintProfileHelpAndExit(const char* envName, const char* helpText) {
fprintf(stderr, "%s=N[,(main|all)]\n", envName);
fprintf(stderr, "%s", helpText);
exit(0);
}
void js::gc::ReadProfileEnv(const char* envName, const char* helpText,
bool* enableOut, bool* workersOut,
TimeDuration* thresholdOut) {
*enableOut = false;
*workersOut = false;
*thresholdOut = TimeDuration();
const char* env = getenv(envName);
if (!env) {
return;
}
if (strcmp(env, "help") == 0) {
PrintProfileHelpAndExit(envName, helpText);
}
CharRangeVector parts;
auto text = CharRange(env, strlen(env));
if (!SplitStringBy(text, ',', &parts)) {
MOZ_CRASH("OOM parsing environment variable");
}
if (parts.length() == 0 || parts.length() > 2) {
PrintProfileHelpAndExit(envName, helpText);
}
*enableOut = true;
if (!ParseTimeDuration(parts[0], thresholdOut)) {
PrintProfileHelpAndExit(envName, helpText);
}
if (parts.length() == 2) {
const char* threads = parts[1].begin().get();
if (strcmp(threads, "all") == 0) {
*workersOut = true;
} else if (strcmp(threads, "main") != 0) {
PrintProfileHelpAndExit(envName, helpText);
}
}
}
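// Example usage (values illustrative): if the environment variable named by
// envName is set to "10", profiling is enabled with a 10ms threshold for the
// main runtime only; "10,all" also enables it for worker runtimes, matching
// the format string printed by PrintProfileHelpAndExit above.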
bool js::gc::ShouldPrintProfile(JSRuntime* runtime, bool enable,
bool profileWorkers, TimeDuration threshold,
TimeDuration duration) {
return enable && (runtime->isMainRuntime() || profileWorkers) &&
duration >= threshold;
}
#ifdef JS_GC_ZEAL
void GCRuntime::getZealBits(uint32_t* zealBits, uint32_t* frequency,
uint32_t* scheduled) {
*zealBits = zealModeBits;
*frequency = zealFrequency;
*scheduled = nextScheduled;
}
const char gc::ZealModeHelpText[] =
" Specifies how zealous the garbage collector should be. Some of these "
"modes can\n"
" be set simultaneously, by passing multiple level options, e.g. \"2;4\" "
"will activate\n"
" both modes 2 and 4. Modes can be specified by name or number.\n"
" \n"
" Values:\n"
" 0: (None) Normal amount of collection (resets all modes)\n"
" 1: (RootsChange) Collect when roots are added or removed\n"
" 2: (Alloc) Collect after every N allocations (default: 100)\n"
" 4: (VerifierPre) Verify pre write barriers between instructions\n"
" 6: (YieldBeforeRootMarking) Incremental GC in two slices that yields "
"before root marking\n"
" 7: (GenerationalGC) Collect the nursery every N nursery allocations\n"
" 8: (YieldBeforeMarking) Incremental GC in two slices that yields "
"between\n"
" the root marking and marking phases\n"
" 9: (YieldBeforeSweeping) Incremental GC in two slices that yields "
"between\n"
" the marking and sweeping phases\n"
" 10: (IncrementalMultipleSlices) Incremental GC in many slices\n"
" 11: (IncrementalMarkingValidator) Verify incremental marking\n"
" 12: (ElementsBarrier) Use the individual element post-write barrier\n"
" regardless of elements size\n"
" 13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"
" 14: (Compact) Perform a shrinking collection every N allocations\n"
" 15: (CheckHeapAfterGC) Walk the heap to check its integrity after "
"every GC\n"
" 16: (CheckNursery) Check nursery integrity on minor GC\n"
" 17: (YieldBeforeSweepingAtoms) Incremental GC in two slices that "
"yields\n"
" before sweeping the atoms table\n"
" 18: (CheckGrayMarking) Check gray marking invariants after every GC\n"
" 19: (YieldBeforeSweepingCaches) Incremental GC in two slices that "
"yields\n"
" before sweeping weak caches\n"
" 21: (YieldBeforeSweepingObjects) Incremental GC in two slices that "
"yields\n"
" before sweeping foreground finalized objects\n"
" 22: (YieldBeforeSweepingNonObjects) Incremental GC in two slices that "
"yields\n"
" before sweeping non-object GC things\n"
" 23: (YieldBeforeSweepingPropMapTrees) Incremental GC in two slices "
"that "
"yields\n"
" before sweeping shape trees\n"
" 24: (CheckWeakMapMarking) Check weak map marking invariants after "
"every GC\n"
" 25: (YieldWhileGrayMarking) Incremental GC in two slices that yields\n"
" during gray marking\n";
// The set of zeal modes that control incremental slices. These modes are
// mutually exclusive.
static const mozilla::EnumSet<ZealMode> IncrementalSliceZealModes = {
ZealMode::YieldBeforeRootMarking,
ZealMode::YieldBeforeMarking,
ZealMode::YieldBeforeSweeping,
ZealMode::IncrementalMultipleSlices,
ZealMode::YieldBeforeSweepingAtoms,
ZealMode::YieldBeforeSweepingCaches,
ZealMode::YieldBeforeSweepingObjects,
ZealMode::YieldBeforeSweepingNonObjects,
ZealMode::YieldBeforeSweepingPropMapTrees};
void GCRuntime::setZeal(uint8_t zeal, uint32_t frequency) {
MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
if (verifyPreData) {
VerifyBarriers(rt, PreBarrierVerifier);
}
if (zeal == 0) {
if (hasZealMode(ZealMode::GenerationalGC)) {
evictNursery(JS::GCReason::DEBUG_GC);
nursery().leaveZealMode();
}
if (isIncrementalGCInProgress()) {
finishGC(JS::GCReason::DEBUG_GC);
}
}
ZealMode zealMode = ZealMode(zeal);
if (zealMode == ZealMode::GenerationalGC) {
evictNursery(JS::GCReason::DEBUG_GC);
nursery().enterZealMode();
}
// Some modes are mutually exclusive. If we're setting one of those, we
// first reset all of them.
if (IncrementalSliceZealModes.contains(zealMode)) {
for (auto mode : IncrementalSliceZealModes) {
clearZealMode(mode);
}
}
bool schedule = zealMode >= ZealMode::Alloc;
if (zeal != 0) {
zealModeBits |= 1 << unsigned(zeal);
} else {
zealModeBits = 0;
}
zealFrequency = frequency;
nextScheduled = schedule ? frequency : 0;
}
void GCRuntime::unsetZeal(uint8_t zeal) {
MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
ZealMode zealMode = ZealMode(zeal);
if (!hasZealMode(zealMode)) {
return;
}
if (verifyPreData) {
VerifyBarriers(rt, PreBarrierVerifier);
}
if (zealMode == ZealMode::GenerationalGC) {
evictNursery(JS::GCReason::DEBUG_GC);
nursery().leaveZealMode();
}
clearZealMode(zealMode);
if (zealModeBits == 0) {
if (isIncrementalGCInProgress()) {
finishGC(JS::GCReason::DEBUG_GC);
}
zealFrequency = 0;
nextScheduled = 0;
}
}
void GCRuntime::setNextScheduled(uint32_t count) { nextScheduled = count; }
static bool ParseZealModeName(CharRange text, uint32_t* modeOut) {
struct ModeInfo {
const char* name;
size_t length;
uint32_t value;
};
static const ModeInfo zealModes[] = {{"None", 0},
# define ZEAL_MODE(name, value) {# name, strlen(# name), value},
JS_FOR_EACH_ZEAL_MODE(ZEAL_MODE)
# undef ZEAL_MODE
};
for (auto mode : zealModes) {
if (text.length() == mode.length &&
memcmp(text.begin().get(), mode.name, mode.length) == 0) {
*modeOut = mode.value;
return true;
}
}
return false;
}
static bool ParseZealModeNumericParam(CharRange text, uint32_t* paramOut) {
if (text.length() == 0) {
return false;
}
for (auto c : text) {
if (!mozilla::IsAsciiDigit(c)) {
return false;
}
}
*paramOut = atoi(text.begin().get());
return true;
}
static bool PrintZealHelpAndFail() {
fprintf(stderr, "Format: JS_GC_ZEAL=level(;level)*[,N]\n");
fputs(ZealModeHelpText, stderr);
return false;
}
bool GCRuntime::parseAndSetZeal(const char* str) {
// Set the zeal mode from a string consisting of one or more mode specifiers
// separated by ';', optionally followed by a ',' and the trigger frequency.
// The mode specifiers can be a mode name or its number.
auto text = CharRange(str, strlen(str));
CharRangeVector parts;
if (!SplitStringBy(text, ',', &parts)) {
return false;
}
if (parts.length() == 0 || parts.length() > 2) {
return PrintZealHelpAndFail();
}
uint32_t frequency = JS_DEFAULT_ZEAL_FREQ;
if (parts.length() == 2 && !ParseZealModeNumericParam(parts[1], &frequency)) {
return PrintZealHelpAndFail();
}
CharRangeVector modes;
if (!SplitStringBy(parts[0], ';', &modes)) {
return false;
}
for (const auto& descr : modes) {
uint32_t mode;
if (!ParseZealModeName(descr, &mode) &&
!(ParseZealModeNumericParam(descr, &mode) &&
mode <= unsigned(ZealMode::Limit))) {
return PrintZealHelpAndFail();
}
setZeal(mode, frequency);
}
return true;
}
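// For example (illustrative call): parseAndSetZeal("YieldBeforeMarking;CheckHeapAfterGC,50")
// enables zeal modes 8 and 15 with a trigger frequency of 50, equivalent to
// running with JS_GC_ZEAL=8;15,50.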
const char* js::gc::AllocKindName(AllocKind kind) {
static const char* const names[] = {
# define EXPAND_THING_NAME(allocKind, _1, _2, _3, _4, _5, _6) # allocKind,
FOR_EACH_ALLOCKIND(EXPAND_THING_NAME)
# undef EXPAND_THING_NAME
};
static_assert(std::size(names) == AllocKindCount,
"names array should have an entry for every AllocKind");
size_t i = size_t(kind);
MOZ_ASSERT(i < std::size(names));
return names[i];
}
void js::gc::DumpArenaInfo() {
fprintf(stderr, "Arena header size: %zu\n\n", ArenaHeaderSize);
fprintf(stderr, "GC thing kinds:\n");
fprintf(stderr, "%25s %8s %8s %8s\n",
"AllocKind:", "Size:", "Count:",