/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*-
* vim: set ts=8 sts=2 et sw=2 tw=80:
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
/*
* [SMDOC] Garbage Collector
*
* This code implements an incremental mark-and-sweep garbage collector, with
* most sweeping carried out in the background on a parallel thread.
*
* Full vs. zone GC
* ----------------
*
* The collector can collect all zones at once, or a subset. These types of
* collection are referred to as a full GC and a zone GC respectively.
*
* It is possible for an incremental collection that started out as a full GC to
* become a zone GC if new zones are created during the course of the
* collection.
*
* Incremental collection
* ----------------------
*
* For a collection to be carried out incrementally the following conditions
* must be met:
* - the collection must be run by calling js::GCSlice() rather than js::GC()
* - the GC mode must have been set to JSGC_MODE_INCREMENTAL or
* JSGC_MODE_ZONE_INCREMENTAL with JS_SetGCParameter()
* - no thread may have an AutoKeepAtoms instance on the stack
*
* The last condition is an engine-internal mechanism to ensure that incremental
* collection is not carried out without the correct barriers being implemented.
* For more information see 'Incremental marking' below.
*
* If the collection is not incremental, all foreground activity happens inside
 * a single call to GC() or GCSlice(). However, the collection is not complete
* until the background sweeping activity has finished.
*
* An incremental collection proceeds as a series of slices, interleaved with
* mutator activity, i.e. running JavaScript code. Slices are limited by a time
* budget. The slice finishes as soon as possible after the requested time has
* passed.
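 *
 * For example, an embedder might enable and drive incremental collection
 * along these lines (a sketch only; the public slice-driving entry points and
 * their exact signatures are declared in js/GCAPI.h):
 *
 *   JS_SetGCParameter(cx, JSGC_MODE, JSGC_MODE_INCREMENTAL);
 *
 *   // Later, from the embedding's event loop when there is idle time
 *   // (parameters elided here):
 *   //   JS::StartIncrementalGC(cx, ...);    // runs the first slice
 *   //   JS::IncrementalGCSlice(cx, ...);    // runs subsequent slices
 *   //   JS::FinishIncrementalGC(cx, ...);   // finish synchronously if needed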
*
* Collector states
* ----------------
*
* The collector proceeds through the following states, the current state being
* held in JSRuntime::gcIncrementalState:
*
* - MarkRoots - marks the stack and other roots
* - Mark - incrementally marks reachable things
* - Sweep - sweeps zones in groups and continues marking unswept zones
* - Finalize - performs background finalization, concurrent with mutator
* - Compact - incrementally compacts by zone
* - Decommit - performs background decommit and chunk removal
*
* The MarkRoots activity always takes place in the first slice. The next two
* states can take place over one or more slices.
*
* In other words an incremental collection proceeds like this:
*
* Slice 1: MarkRoots: Roots pushed onto the mark stack.
* Mark: The mark stack is processed by popping an element,
* marking it, and pushing its children.
*
* ... JS code runs ...
*
* Slice 2: Mark: More mark stack processing.
*
* ... JS code runs ...
*
* Slice n-1: Mark: More mark stack processing.
*
* ... JS code runs ...
*
* Slice n: Mark: Mark stack is completely drained.
* Sweep: Select first group of zones to sweep and sweep them.
*
* ... JS code runs ...
*
* Slice n+1: Sweep: Mark objects in unswept zones that were newly
* identified as alive (see below). Then sweep more zone
* sweep groups.
*
* ... JS code runs ...
*
* Slice n+2: Sweep: Mark objects in unswept zones that were newly
* identified as alive. Then sweep more zones.
*
* ... JS code runs ...
*
* Slice m: Sweep: Sweeping is finished, and background sweeping
* started on the helper thread.
*
* ... JS code runs, remaining sweeping done on background thread ...
*
* When background sweeping finishes the GC is complete.
*
* Incremental marking
* -------------------
*
* Incremental collection requires close collaboration with the mutator (i.e.,
* JS code) to guarantee correctness.
*
* - During an incremental GC, if a memory location (except a root) is written
* to, then the value it previously held must be marked. Write barriers
* ensure this.
*
* - Any object that is allocated during incremental GC must start out marked.
*
* - Roots are marked in the first slice and hence don't need write barriers.
* Roots are things like the C stack and the VM stack.
*
* The problem that write barriers solve is that between slices the mutator can
* change the object graph. We must ensure that it cannot do this in such a way
* that makes us fail to mark a reachable object (marking an unreachable object
* is tolerable).
*
* We use a snapshot-at-the-beginning algorithm to do this. This means that we
* promise to mark at least everything that is reachable at the beginning of
* collection. To implement it we mark the old contents of every non-root memory
* location written to by the mutator while the collection is in progress, using
* write barriers. This is described in gc/Barrier.h.
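 *
 * As a minimal sketch (illustrative only; the real barrier templates live in
 * gc/Barrier.h, and InIncrementalGC/MarkOldValue below are hypothetical
 * helpers), a pre-barriered pointer behaves like this:
 *
 *   struct PreBarrieredObjectPtr {
 *     JSObject* ptr = nullptr;
 *     void set(JSObject* newValue) {
 *       if (ptr && InIncrementalGC(ptr)) {
 *         MarkOldValue(ptr);  // snapshot-at-the-beginning: mark the old value
 *       }
 *       ptr = newValue;
 *     }
 *   };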
*
* Incremental sweeping
* --------------------
*
* Sweeping is difficult to do incrementally because object finalizers must be
* run at the start of sweeping, before any mutator code runs. The reason is
* that some objects use their finalizers to remove themselves from caches. If
 * mutator code were allowed to run after the start of sweeping, it could observe
* the state of the cache and create a new reference to an object that was just
* about to be destroyed.
*
* Sweeping all finalizable objects in one go would introduce long pauses, so
 * instead sweeping is broken up into groups of zones. Zones that are not yet
 * being swept are still marked, so the issue above does not apply.
*
 * The order of sweeping is restricted by cross-compartment pointers. For
 * example, say that object |a| from zone A points to object |b| in zone B and
* neither object was marked when we transitioned to the Sweep phase. Imagine we
* sweep B first and then return to the mutator. It's possible that the mutator
* could cause |a| to become alive through a read barrier (perhaps it was a
* shape that was accessed via a shape table). Then we would need to mark |b|,
* which |a| points to, but |b| has already been swept.
*
* So if there is such a pointer then marking of zone B must not finish before
* marking of zone A. Pointers which form a cycle between zones therefore
* restrict those zones to being swept at the same time, and these are found
* using Tarjan's algorithm for finding the strongly connected components of a
* graph.
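 *
 * Illustrative only (the engine computes this with Tarjan's algorithm via
 * gc/FindSCCs.h and also orders the resulting groups; the function below is a
 * hypothetical sketch using std::vector): zones that can reach each other
 * through cross-zone pointers must end up in the same sweep group.
 *
 *   std::vector<int> SweepGroupOf(int n,
 *                                 std::vector<std::vector<bool>> edge) {
 *     for (int k = 0; k < n; k++)        // transitive closure of the edges
 *       for (int i = 0; i < n; i++)
 *         for (int j = 0; j < n; j++)
 *           if (edge[i][k] && edge[k][j]) edge[i][j] = true;
 *     std::vector<int> group(n, -1);
 *     int next = 0;
 *     for (int i = 0; i < n; i++) {
 *       if (group[i] >= 0) continue;     // already placed in an earlier group
 *       group[i] = next++;
 *       for (int j = i + 1; j < n; j++) {
 *         if (edge[i][j] && edge[j][i]) { // mutually reachable => same SCC
 *           group[j] = group[i];
 *         }
 *       }
 *     }
 *     return group;
 *   }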
*
* GC things without finalizers, and things with finalizers that are able to run
* in the background, are swept on the background thread. This accounts for most
* of the sweeping work.
*
* Reset
* -----
*
* During incremental collection it is possible, although unlikely, for
* conditions to change such that incremental collection is no longer safe. In
* this case, the collection is 'reset' by resetIncrementalGC(). If we are in
* the mark state, this just stops marking, but if we have started sweeping
* already, we continue non-incrementally until we have swept the current sweep
* group. Following a reset, a new collection is started.
*
* Compacting GC
* -------------
*
* Compacting GC happens at the end of a major GC as part of the last slice.
* There are three parts:
*
* - Arenas are selected for compaction.
* - The contents of those arenas are moved to new arenas.
* - All references to moved things are updated.
*
* Collecting Atoms
* ----------------
*
* Atoms are collected differently from other GC things. They are contained in
* a special zone and things in other zones may have pointers to them that are
* not recorded in the cross compartment pointer map. Each zone holds a bitmap
* with the atoms it might be keeping alive, and atoms are only collected if
* they are not included in any zone's atom bitmap. See AtomMarking.cpp for how
* this bitmap is managed.
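 *
 * Conceptually (a sketch only; AtomIsNeeded and the zone accessor below are
 * hypothetical, see AtomMarking.cpp for the real implementation):
 *
 *   bool AtomIsNeeded(TenuredCell* atom) {
 *     for (Zone* zone : AllCollectableZones()) {   // hypothetical iterator
 *       if (zone->atomMarkingBitmapContains(atom)) {
 *         return true;   // some zone may still be using this atom; keep it
 *       }
 *     }
 *     return false;      // no zone refers to the atom; it can be collected
 *   }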
*/
#include "gc/GC-inl.h"
#include "mozilla/ArrayUtils.h"
#include "mozilla/DebugOnly.h"
#include "mozilla/MacroForEach.h"
#include "mozilla/MemoryReporting.h"
#include "mozilla/Range.h"
#include "mozilla/ScopeExit.h"
#include "mozilla/TextUtils.h"
#include "mozilla/TimeStamp.h"
#include "mozilla/Unused.h"
#include <algorithm>
#include <initializer_list>
#include <string.h>
#include <utility>
#ifndef XP_WIN
# include <sys/mman.h>
# include <unistd.h>
#endif
#include "jsapi.h"
#include "jsfriendapi.h"
#include "jstypes.h"
#include "builtin/FinalizationRegistryObject.h"
#include "builtin/WeakRefObject.h"
#include "debugger/DebugAPI.h"
#include "gc/FindSCCs.h"
#include "gc/FreeOp.h"
#include "gc/GCInternals.h"
#include "gc/GCLock.h"
#include "gc/GCProbes.h"
#include "gc/Memory.h"
#include "gc/ParallelWork.h"
#include "gc/Policy.h"
#include "gc/WeakMap.h"
#include "jit/BaselineJIT.h"
#include "jit/JitCode.h"
#include "jit/JitcodeMap.h"
#include "jit/JitRealm.h"
#include "jit/MacroAssembler.h"
#include "js/SliceBudget.h"
#include "proxy/DeadObjectProxy.h"
#include "util/Poison.h"
#include "util/Windows.h"
#include "vm/BigIntType.h"
#include "vm/GeckoProfiler.h"
#include "vm/JSAtom.h"
#include "vm/JSContext.h"
#include "vm/JSObject.h"
#include "vm/JSScript.h"
#include "vm/Printer.h"
#include "vm/ProxyObject.h"
#include "vm/Realm.h"
#include "vm/Shape.h"
#include "vm/StringType.h"
#include "vm/SymbolType.h"
#include "vm/Time.h"
#include "vm/TraceLogging.h"
#include "vm/WrapperObject.h"
#include "gc/Heap-inl.h"
#include "gc/Marking-inl.h"
#include "gc/Nursery-inl.h"
#include "gc/PrivateIterators-inl.h"
#include "gc/Zone-inl.h"
#include "vm/GeckoProfiler-inl.h"
#include "vm/JSObject-inl.h"
#include "vm/JSScript-inl.h"
#include "vm/Stack-inl.h"
#include "vm/StringType-inl.h"
using namespace js;
using namespace js::gc;
using mozilla::ArrayLength;
using mozilla::Maybe;
using mozilla::Nothing;
using mozilla::Some;
using mozilla::TimeDuration;
using mozilla::TimeStamp;
using JS::AutoGCRooter;
/* Increase the IGC marking slice time if we are in highFrequencyGC mode. */
static constexpr int IGC_MARK_SLICE_MULTIPLIER = 2;
const AllocKind gc::slotsToThingKind[] = {
// clang-format off
/* 0 */ AllocKind::OBJECT0, AllocKind::OBJECT2, AllocKind::OBJECT2, AllocKind::OBJECT4,
/* 4 */ AllocKind::OBJECT4, AllocKind::OBJECT8, AllocKind::OBJECT8, AllocKind::OBJECT8,
/* 8 */ AllocKind::OBJECT8, AllocKind::OBJECT12, AllocKind::OBJECT12, AllocKind::OBJECT12,
/* 12 */ AllocKind::OBJECT12, AllocKind::OBJECT16, AllocKind::OBJECT16, AllocKind::OBJECT16,
/* 16 */ AllocKind::OBJECT16
// clang-format on
};
// Check that reserved bits of a Cell are compatible with our typical allocators
// since most derived classes will store a pointer in the first word.
static const size_t MinFirstWordAlignment = 1u << CellFlagBitsReservedForGC;
static_assert(js::detail::LIFO_ALLOC_ALIGN >= MinFirstWordAlignment,
"CellFlagBitsReservedForGC should support LifoAlloc");
static_assert(CellAlignBytes >= MinFirstWordAlignment,
"CellFlagBitsReservedForGC should support gc::Cell");
static_assert(js::jit::CodeAlignment >= MinFirstWordAlignment,
"CellFlagBitsReservedForGC should support JIT code");
static_assert(js::gc::JSClassAlignBytes >= MinFirstWordAlignment,
"CellFlagBitsReservedForGC should support JSClass pointers");
static_assert(js::ScopeDataAlignBytes >= MinFirstWordAlignment,
"CellFlagBitsReservedForGC should support scope data pointers");
static_assert(mozilla::ArrayLength(slotsToThingKind) ==
SLOTS_TO_THING_KIND_LIMIT,
"We have defined a slot count for each kind.");
#define CHECK_THING_SIZE(allocKind, traceKind, type, sizedType, bgFinal, \
nursery, compact) \
static_assert(sizeof(sizedType) >= SortedArenaList::MinThingSize, \
#sizedType " is smaller than SortedArenaList::MinThingSize!"); \
static_assert(sizeof(sizedType) >= sizeof(FreeSpan), \
#sizedType " is smaller than FreeSpan"); \
static_assert(sizeof(sizedType) % CellAlignBytes == 0, \
"Size of " #sizedType " is not a multiple of CellAlignBytes"); \
static_assert(sizeof(sizedType) >= MinCellSize, \
"Size of " #sizedType " is smaller than the minimum size");
FOR_EACH_ALLOCKIND(CHECK_THING_SIZE);
#undef CHECK_THING_SIZE
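// Describes the layout of GC things of a given size within an arena: an arena
// holds as many things as fit after the header, and any leftover space is left
// as padding between the header and the first thing. For example, for 32-byte
// things, thingsPerArena() is (ArenaSize - ArenaHeaderSize) / 32 and
// firstThingOffset() is ArenaSize - 32 * thingsPerArena().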
template <typename T>
struct ArenaLayout {
static constexpr size_t thingSize() { return sizeof(T); }
static constexpr size_t thingsPerArena() {
return (ArenaSize - ArenaHeaderSize) / thingSize();
}
static constexpr size_t firstThingOffset() {
return ArenaSize - thingSize() * thingsPerArena();
}
};
const uint8_t Arena::ThingSizes[] = {
#define EXPAND_THING_SIZE(_1, _2, _3, sizedType, _4, _5, _6) \
ArenaLayout<sizedType>::thingSize(),
FOR_EACH_ALLOCKIND(EXPAND_THING_SIZE)
#undef EXPAND_THING_SIZE
};
const uint8_t Arena::FirstThingOffsets[] = {
#define EXPAND_FIRST_THING_OFFSET(_1, _2, _3, sizedType, _4, _5, _6) \
ArenaLayout<sizedType>::firstThingOffset(),
FOR_EACH_ALLOCKIND(EXPAND_FIRST_THING_OFFSET)
#undef EXPAND_FIRST_THING_OFFSET
};
const uint8_t Arena::ThingsPerArena[] = {
#define EXPAND_THINGS_PER_ARENA(_1, _2, _3, sizedType, _4, _5, _6) \
ArenaLayout<sizedType>::thingsPerArena(),
FOR_EACH_ALLOCKIND(EXPAND_THINGS_PER_ARENA)
#undef EXPAND_THINGS_PER_ARENA
};
FreeSpan FreeLists::emptySentinel;
struct js::gc::FinalizePhase {
gcstats::PhaseKind statsPhase;
AllocKinds kinds;
};
/*
* Finalization order for objects swept incrementally on the main thread.
*/
static constexpr FinalizePhase ForegroundObjectFinalizePhase = {
gcstats::PhaseKind::SWEEP_OBJECT,
{AllocKind::OBJECT0, AllocKind::OBJECT2, AllocKind::OBJECT4,
AllocKind::OBJECT8, AllocKind::OBJECT12, AllocKind::OBJECT16}};
/*
* Finalization order for GC things swept incrementally on the main thread.
*/
static constexpr FinalizePhase ForegroundNonObjectFinalizePhase = {
gcstats::PhaseKind::SWEEP_SCRIPT, {AllocKind::SCRIPT, AllocKind::JITCODE}};
/*
* Finalization order for GC things swept on the background thread.
*/
static constexpr FinalizePhase BackgroundFinalizePhases[] = {
{gcstats::PhaseKind::SWEEP_OBJECT,
{AllocKind::FUNCTION, AllocKind::FUNCTION_EXTENDED,
AllocKind::OBJECT0_BACKGROUND, AllocKind::OBJECT2_BACKGROUND,
AllocKind::ARRAYBUFFER4, AllocKind::OBJECT4_BACKGROUND,
AllocKind::ARRAYBUFFER8, AllocKind::OBJECT8_BACKGROUND,
AllocKind::ARRAYBUFFER12, AllocKind::OBJECT12_BACKGROUND,
AllocKind::ARRAYBUFFER16, AllocKind::OBJECT16_BACKGROUND}},
{gcstats::PhaseKind::SWEEP_SCOPE,
{
AllocKind::SCOPE,
}},
{gcstats::PhaseKind::SWEEP_REGEXP_SHARED,
{
AllocKind::REGEXP_SHARED,
}},
{gcstats::PhaseKind::SWEEP_STRING,
{AllocKind::FAT_INLINE_STRING, AllocKind::STRING,
AllocKind::EXTERNAL_STRING, AllocKind::FAT_INLINE_ATOM, AllocKind::ATOM,
AllocKind::SYMBOL, AllocKind::BIGINT}},
{gcstats::PhaseKind::SWEEP_SHAPE,
{AllocKind::SHAPE, AllocKind::ACCESSOR_SHAPE, AllocKind::BASE_SHAPE,
AllocKind::OBJECT_GROUP}}};
void Arena::unmarkAll() {
uintptr_t* word = chunk()->bitmap.arenaBits(this);
memset(word, 0, ArenaBitmapWords * sizeof(uintptr_t));
}
void Arena::unmarkPreMarkedFreeCells() {
for (ArenaFreeCellIter iter(this); !iter.done(); iter.next()) {
TenuredCell* cell = iter.getCell();
MOZ_ASSERT(cell->isMarkedBlack());
cell->unmark();
}
}
#ifdef DEBUG
void Arena::checkNoMarkedFreeCells() {
for (ArenaFreeCellIter iter(this); !iter.done(); iter.next()) {
MOZ_ASSERT(!iter.getCell()->isMarkedAny());
}
}
#endif
/* static */
void Arena::staticAsserts() {
static_assert(size_t(AllocKind::LIMIT) <= 255,
"All AllocKinds and AllocKind::LIMIT must fit in a uint8_t.");
static_assert(mozilla::ArrayLength(ThingSizes) == AllocKindCount,
"We haven't defined all thing sizes.");
static_assert(mozilla::ArrayLength(FirstThingOffsets) == AllocKindCount,
"We haven't defined all offsets.");
static_assert(mozilla::ArrayLength(ThingsPerArena) == AllocKindCount,
"We haven't defined all counts.");
}
/* static */
inline void Arena::checkLookupTables() {
#ifdef DEBUG
for (size_t i = 0; i < AllocKindCount; i++) {
MOZ_ASSERT(
FirstThingOffsets[i] + ThingsPerArena[i] * ThingSizes[i] == ArenaSize,
"Inconsistent arena lookup table data");
}
#endif
}
template <typename T>
inline size_t Arena::finalize(JSFreeOp* fop, AllocKind thingKind,
size_t thingSize) {
/* Enforce requirements on size of T. */
MOZ_ASSERT(thingSize % CellAlignBytes == 0);
MOZ_ASSERT(thingSize >= MinCellSize);
MOZ_ASSERT(thingSize <= 255);
MOZ_ASSERT(allocated());
MOZ_ASSERT(thingKind == getAllocKind());
MOZ_ASSERT(thingSize == getThingSize());
MOZ_ASSERT(!onDelayedMarkingList_);
uint_fast16_t firstThing = firstThingOffset(thingKind);
uint_fast16_t firstThingOrSuccessorOfLastMarkedThing = firstThing;
uint_fast16_t lastThing = ArenaSize - thingSize;
FreeSpan newListHead;
FreeSpan* newListTail = &newListHead;
size_t nmarked = 0;
for (ArenaCellIterUnderFinalize cell(this); !cell.done(); cell.next()) {
T* t = cell.as<T>();
if (t->asTenured().isMarkedAny()) {
uint_fast16_t thing = uintptr_t(t) & ArenaMask;
if (thing != firstThingOrSuccessorOfLastMarkedThing) {
// We just finished passing over one or more free things,
// so record a new FreeSpan.
newListTail->initBounds(firstThingOrSuccessorOfLastMarkedThing,
thing - thingSize, this);
newListTail = newListTail->nextSpanUnchecked(this);
}
firstThingOrSuccessorOfLastMarkedThing = thing + thingSize;
nmarked++;
} else {
t->finalize(fop);
AlwaysPoison(t, JS_SWEPT_TENURED_PATTERN, thingSize,
MemCheckKind::MakeUndefined);
gcprobes::TenuredFinalize(t);
}
}
if (nmarked == 0) {
// Do nothing. The caller will update the arena appropriately.
MOZ_ASSERT(newListTail == &newListHead);
DebugOnlyPoison(data, JS_SWEPT_TENURED_PATTERN, sizeof(data),
MemCheckKind::MakeUndefined);
return nmarked;
}
MOZ_ASSERT(firstThingOrSuccessorOfLastMarkedThing != firstThing);
uint_fast16_t lastMarkedThing =
firstThingOrSuccessorOfLastMarkedThing - thingSize;
if (lastThing == lastMarkedThing) {
// If the last thing was marked, we will have already set the bounds of
// the final span, and we just need to terminate the list.
newListTail->initAsEmpty();
} else {
// Otherwise, end the list with a span that covers the final stretch of free
// things.
newListTail->initFinal(firstThingOrSuccessorOfLastMarkedThing, lastThing,
this);
}
firstFreeSpan = newListHead;
#ifdef DEBUG
size_t nfree = numFreeThings(thingSize);
MOZ_ASSERT(nfree + nmarked == thingsPerArena(thingKind));
#endif
return nmarked;
}
// Finalize arenas from the src list, inserting them into the appropriate
// destination size bins. Arenas with no surviving things are recycled as
// fully unused rather than released here.
template <typename T>
static inline bool FinalizeTypedArenas(JSFreeOp* fop, Arena** src,
SortedArenaList& dest,
AllocKind thingKind,
SliceBudget& budget) {
AutoSetThreadIsFinalizing setThreadUse;
size_t thingSize = Arena::thingSize(thingKind);
size_t thingsPerArena = Arena::thingsPerArena(thingKind);
while (Arena* arena = *src) {
Arena* next = arena->next;
MOZ_ASSERT_IF(next, next->zone == arena->zone);
*src = next;
size_t nmarked = arena->finalize<T>(fop, thingKind, thingSize);
size_t nfree = thingsPerArena - nmarked;
if (nmarked) {
dest.insertAt(arena, nfree);
} else {
arena->chunk()->recycleArena(arena, dest, thingsPerArena);
}
budget.step(thingsPerArena);
if (budget.isOverBudget()) {
return false;
}
}
return true;
}
/*
 * Finalize the list of arenas.
*/
static bool FinalizeArenas(JSFreeOp* fop, Arena** src, SortedArenaList& dest,
AllocKind thingKind, SliceBudget& budget) {
switch (thingKind) {
#define EXPAND_CASE(allocKind, traceKind, type, sizedType, bgFinal, nursery, \
compact) \
case AllocKind::allocKind: \
return FinalizeTypedArenas<type>(fop, src, dest, thingKind, budget);
FOR_EACH_ALLOCKIND(EXPAND_CASE)
#undef EXPAND_CASE
default:
MOZ_CRASH("Invalid alloc kind");
}
}
Chunk* ChunkPool::pop() {
MOZ_ASSERT(bool(head_) == bool(count_));
if (!count_) {
return nullptr;
}
return remove(head_);
}
void ChunkPool::push(Chunk* chunk) {
MOZ_ASSERT(!chunk->info.next);
MOZ_ASSERT(!chunk->info.prev);
chunk->info.next = head_;
if (head_) {
head_->info.prev = chunk;
}
head_ = chunk;
++count_;
}
Chunk* ChunkPool::remove(Chunk* chunk) {
MOZ_ASSERT(count_ > 0);
MOZ_ASSERT(contains(chunk));
if (head_ == chunk) {
head_ = chunk->info.next;
}
if (chunk->info.prev) {
chunk->info.prev->info.next = chunk->info.next;
}
if (chunk->info.next) {
chunk->info.next->info.prev = chunk->info.prev;
}
chunk->info.next = chunk->info.prev = nullptr;
--count_;
return chunk;
}
// We could keep the chunk pool sorted, but that's likely to be more expensive.
// This sort is O(n log n), whereas keeping the list sorted on every insertion
// and removal would cost O(m * n), with m the number of operations (likely
// greater than n).
void ChunkPool::sort() {
// Only sort if the list isn't already sorted.
if (!isSorted()) {
head_ = mergeSort(head(), count());
// Fixup prev pointers.
Chunk* prev = nullptr;
for (Chunk* cur = head_; cur; cur = cur->info.next) {
cur->info.prev = prev;
prev = cur;
}
}
MOZ_ASSERT(verify());
MOZ_ASSERT(isSorted());
}
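// Recursive merge sort over the singly-linked chunk list: split the list at
// its midpoint, sort each half, then merge in ascending order of
// numArenasFree. Ties take from |front| first, which keeps the sort stable.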
Chunk* ChunkPool::mergeSort(Chunk* list, size_t count) {
MOZ_ASSERT(bool(list) == bool(count));
if (count < 2) {
return list;
}
size_t half = count / 2;
  // Split.
Chunk* front = list;
Chunk* back;
{
Chunk* cur = list;
for (size_t i = 0; i < half - 1; i++) {
MOZ_ASSERT(cur);
cur = cur->info.next;
}
back = cur->info.next;
cur->info.next = nullptr;
}
front = mergeSort(front, half);
back = mergeSort(back, count - half);
// Merge
list = nullptr;
Chunk** cur = &list;
while (front || back) {
if (!front) {
*cur = back;
break;
}
if (!back) {
*cur = front;
break;
}
    // Note that the sort is stable due to the <= here. Nothing currently
    // depends on this, but something could in the future.
if (front->info.numArenasFree <= back->info.numArenasFree) {
*cur = front;
front = front->info.next;
cur = &(*cur)->info.next;
} else {
*cur = back;
back = back->info.next;
cur = &(*cur)->info.next;
}
}
return list;
}
bool ChunkPool::isSorted() const {
uint32_t last = 1;
for (Chunk* cursor = head_; cursor; cursor = cursor->info.next) {
if (cursor->info.numArenasFree < last) {
return false;
}
last = cursor->info.numArenasFree;
}
return true;
}
#ifdef DEBUG
bool ChunkPool::contains(Chunk* chunk) const {
verify();
for (Chunk* cursor = head_; cursor; cursor = cursor->info.next) {
if (cursor == chunk) {
return true;
}
}
return false;
}
bool ChunkPool::verify() const {
MOZ_ASSERT(bool(head_) == bool(count_));
uint32_t count = 0;
for (Chunk* cursor = head_; cursor; cursor = cursor->info.next, ++count) {
MOZ_ASSERT_IF(cursor->info.prev, cursor->info.prev->info.next == cursor);
MOZ_ASSERT_IF(cursor->info.next, cursor->info.next->info.prev == cursor);
}
MOZ_ASSERT(count_ == count);
return true;
}
#endif
void ChunkPool::Iter::next() {
MOZ_ASSERT(!done());
current_ = current_->info.next;
}
inline bool GCRuntime::tooManyEmptyChunks(const AutoLockGC& lock) {
return emptyChunks(lock).count() > tunables.minEmptyChunkCount(lock);
}
ChunkPool GCRuntime::expireEmptyChunkPool(const AutoLockGC& lock) {
MOZ_ASSERT(emptyChunks(lock).verify());
MOZ_ASSERT(tunables.minEmptyChunkCount(lock) <=
tunables.maxEmptyChunkCount());
ChunkPool expired;
while (tooManyEmptyChunks(lock)) {
Chunk* chunk = emptyChunks(lock).pop();
prepareToFreeChunk(chunk->info);
expired.push(chunk);
}
MOZ_ASSERT(expired.verify());
MOZ_ASSERT(emptyChunks(lock).verify());
MOZ_ASSERT(emptyChunks(lock).count() <= tunables.maxEmptyChunkCount());
MOZ_ASSERT(emptyChunks(lock).count() <= tunables.minEmptyChunkCount(lock));
return expired;
}
static void FreeChunkPool(ChunkPool& pool) {
for (ChunkPool::Iter iter(pool); !iter.done();) {
Chunk* chunk = iter.get();
iter.next();
pool.remove(chunk);
MOZ_ASSERT(!chunk->info.numArenasFreeCommitted);
UnmapPages(static_cast<void*>(chunk), ChunkSize);
}
MOZ_ASSERT(pool.count() == 0);
}
void GCRuntime::freeEmptyChunks(const AutoLockGC& lock) {
FreeChunkPool(emptyChunks(lock));
}
inline void GCRuntime::prepareToFreeChunk(ChunkInfo& info) {
MOZ_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted);
numArenasFreeCommitted -= info.numArenasFreeCommitted;
stats().count(gcstats::COUNT_DESTROY_CHUNK);
#ifdef DEBUG
/*
* Let FreeChunkPool detect a missing prepareToFreeChunk call before it
* frees chunk.
*/
info.numArenasFreeCommitted = 0;
#endif
}
inline void GCRuntime::updateOnArenaFree() { ++numArenasFreeCommitted; }
void Chunk::addArenaToFreeList(GCRuntime* gc, Arena* arena) {
MOZ_ASSERT(!arena->allocated());
arena->next = info.freeArenasHead;
info.freeArenasHead = arena;
++info.numArenasFreeCommitted;
++info.numArenasFree;
gc->updateOnArenaFree();
}
void Chunk::addArenaToDecommittedList(const Arena* arena) {
++info.numArenasFree;
decommittedArenas.set(Chunk::arenaIndex(arena->address()));
}
void Chunk::recycleArena(Arena* arena, SortedArenaList& dest,
size_t thingsPerArena) {
arena->setAsFullyUnused();
dest.insertAt(arena, thingsPerArena);
}
void Chunk::releaseArena(GCRuntime* gc, Arena* arena, const AutoLockGC& lock) {
addArenaToFreeList(gc, arena);
updateChunkListAfterFree(gc, lock);
}
bool Chunk::decommitOneFreeArena(GCRuntime* gc, AutoLockGC& lock) {
MOZ_ASSERT(info.numArenasFreeCommitted > 0);
Arena* arena = fetchNextFreeArena(gc);
updateChunkListAfterAlloc(gc, lock);
bool ok;
{
AutoUnlockGC unlock(lock);
ok = MarkPagesUnusedSoft(arena, ArenaSize);
}
if (ok) {
addArenaToDecommittedList(arena);
} else {
addArenaToFreeList(gc, arena);
}
updateChunkListAfterFree(gc, lock);
return ok;
}
void Chunk::decommitFreeArenasWithoutUnlocking(const AutoLockGC& lock) {
for (size_t i = 0; i < ArenasPerChunk; ++i) {
if (decommittedArenas.get(i) || arenas[i].allocated()) {
continue;
}
if (MarkPagesUnusedSoft(&arenas[i], ArenaSize)) {
info.numArenasFreeCommitted--;
decommittedArenas.set(i);
}
}
}
void Chunk::updateChunkListAfterAlloc(GCRuntime* gc, const AutoLockGC& lock) {
if (MOZ_UNLIKELY(!hasAvailableArenas())) {
gc->availableChunks(lock).remove(this);
gc->fullChunks(lock).push(this);
}
}
void Chunk::updateChunkListAfterFree(GCRuntime* gc, const AutoLockGC& lock) {
if (info.numArenasFree == 1) {
gc->fullChunks(lock).remove(this);
gc->availableChunks(lock).push(this);
} else if (!unused()) {
MOZ_ASSERT(gc->availableChunks(lock).contains(this));
} else {
MOZ_ASSERT(unused());
gc->availableChunks(lock).remove(this);
decommitAllArenas();
MOZ_ASSERT(info.numArenasFreeCommitted == 0);
gc->recycleChunk(this, lock);
}
}
void GCRuntime::releaseArena(Arena* arena, const AutoLockGC& lock) {
MOZ_ASSERT(arena->allocated());
MOZ_ASSERT(!arena->onDelayedMarkingList());
arena->zone->gcHeapSize.removeGCArena();
arena->release(lock);
arena->chunk()->releaseArena(this, arena, lock);
}
GCRuntime::GCRuntime(JSRuntime* rt)
: rt(rt),
systemZone(nullptr),
atomsZone(nullptr),
heapState_(JS::HeapState::Idle),
stats_(this),
marker(rt),
heapSize(nullptr),
rootsHash(256),
nextCellUniqueId_(LargestTaggedNullCellPointer +
1), // Ensure disjoint from null tagged pointers.
numArenasFreeCommitted(0),
verifyPreData(nullptr),
lastGCStartTime_(ReallyNow()),
lastGCEndTime_(ReallyNow()),
mode(TuningDefaults::Mode),
numActiveZoneIters(0),
cleanUpEverything(false),
grayBufferState(GCRuntime::GrayBufferState::Unused),
grayBitsValid(false),
majorGCTriggerReason(JS::GCReason::NO_REASON),
fullGCForAtomsRequested_(false),
minorGCNumber(0),
majorGCNumber(0),
number(0),
sliceNumber(0),
isFull(false),
incrementalState(gc::State::NotActive),
initialState(gc::State::NotActive),
#ifdef JS_GC_ZEAL
useZeal(false),
#endif
lastMarkSlice(false),
safeToYield(true),
markOnBackgroundThreadDuringSweeping(false),
sweepOnBackgroundThread(false),
requestSliceAfterBackgroundTask(false),
lifoBlocksToFree((size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
lifoBlocksToFreeAfterMinorGC(
(size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
sweepGroupIndex(0),
sweepGroups(nullptr),
currentSweepGroup(nullptr),
sweepZone(nullptr),
hasMarkedGrayRoots(false),
abortSweepAfterCurrentGroup(false),
sweepMarkTaskStarted(false),
sweepMarkResult(IncrementalProgress::NotFinished),
startedCompacting(false),
relocatedArenasToRelease(nullptr),
zonesCompacted(0),
#ifdef JS_GC_ZEAL
markingValidator(nullptr),
#endif
defaultTimeBudgetMS_(TuningDefaults::DefaultTimeBudgetMS),
incrementalAllowed(true),
compactingEnabled(TuningDefaults::CompactingEnabled),
rootsRemoved(false),
#ifdef JS_GC_ZEAL
zealModeBits(0),
zealFrequency(0),
nextScheduled(0),
deterministicOnly(false),
zealSliceBudget(0),
selectedForMarking(rt),
#endif
fullCompartmentChecks(false),
gcCallbackDepth(0),
alwaysPreserveCode(false),
lowMemoryState(false),
lock(mutexid::GCLock),
allocTask(this, emptyChunks_.ref()),
sweepTask(this),
freeTask(this),
decommitTask(this),
sweepMarkTask(this),
nursery_(this),
storeBuffer_(rt, nursery()) {
setGCMode(JSGC_MODE_GLOBAL);
}
#ifdef JS_GC_ZEAL
void GCRuntime::getZealBits(uint32_t* zealBits, uint32_t* frequency,
uint32_t* scheduled) {
*zealBits = zealModeBits;
*frequency = zealFrequency;
*scheduled = nextScheduled;
}
const char gc::ZealModeHelpText[] =
" Specifies how zealous the garbage collector should be. Some of these "
"modes can\n"
" be set simultaneously, by passing multiple level options, e.g. \"2;4\" "
"will activate\n"
" both modes 2 and 4. Modes can be specified by name or number.\n"
" \n"
" Values:\n"
" 0: (None) Normal amount of collection (resets all modes)\n"
" 1: (RootsChange) Collect when roots are added or removed\n"
" 2: (Alloc) Collect when every N allocations (default: 100)\n"
" 4: (VerifierPre) Verify pre write barriers between instructions\n"
" 7: (GenerationalGC) Collect the nursery every N nursery allocations\n"
" 8: (YieldBeforeMarking) Incremental GC in two slices that yields "
"between\n"
" the root marking and marking phases\n"
" 9: (YieldBeforeSweeping) Incremental GC in two slices that yields "
"between\n"
" the marking and sweeping phases\n"
" 10: (IncrementalMultipleSlices) Incremental GC in many slices\n"
" 11: (IncrementalMarkingValidator) Verify incremental marking\n"
" 12: (ElementsBarrier) Use the individual element post-write barrier\n"
" regardless of elements size\n"
" 13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"
" 14: (Compact) Perform a shrinking collection every N allocations\n"
" 15: (CheckHeapAfterGC) Walk the heap to check its integrity after "
"every GC\n"
" 16: (CheckNursery) Check nursery integrity on minor GC\n"
" 17: (YieldBeforeSweepingAtoms) Incremental GC in two slices that "
"yields\n"
" before sweeping the atoms table\n"
" 18: (CheckGrayMarking) Check gray marking invariants after every GC\n"
" 19: (YieldBeforeSweepingCaches) Incremental GC in two slices that "
"yields\n"
" before sweeping weak caches\n"
" 20: (YieldBeforeSweepingTypes) Incremental GC in two slices that "
"yields\n"
" before sweeping type information\n"
" 21: (YieldBeforeSweepingObjects) Incremental GC in two slices that "
"yields\n"
" before sweeping foreground finalized objects\n"
" 22: (YieldBeforeSweepingNonObjects) Incremental GC in two slices that "
"yields\n"
" before sweeping non-object GC things\n"
" 23: (YieldBeforeSweepingShapeTrees) Incremental GC in two slices that "
"yields\n"
" before sweeping shape trees\n"
" 24: (CheckWeakMapMarking) Check weak map marking invariants after "
"every GC\n"
" 25: (YieldWhileGrayMarking) Incremental GC in two slices that yields\n"
" during gray marking\n";
// The set of zeal modes that control incremental slices. These modes are
// mutually exclusive.
static const mozilla::EnumSet<ZealMode> IncrementalSliceZealModes = {
ZealMode::YieldBeforeMarking,
ZealMode::YieldBeforeSweeping,
ZealMode::IncrementalMultipleSlices,
ZealMode::YieldBeforeSweepingAtoms,
ZealMode::YieldBeforeSweepingCaches,
ZealMode::YieldBeforeSweepingTypes,
ZealMode::YieldBeforeSweepingObjects,
ZealMode::YieldBeforeSweepingNonObjects,
ZealMode::YieldBeforeSweepingShapeTrees};
void GCRuntime::setZeal(uint8_t zeal, uint32_t frequency) {
MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
if (verifyPreData) {
VerifyBarriers(rt, PreBarrierVerifier);
}
if (zeal == 0) {
if (hasZealMode(ZealMode::GenerationalGC)) {
evictNursery(JS::GCReason::DEBUG_GC);
nursery().leaveZealMode();
}
if (isIncrementalGCInProgress()) {
finishGC(JS::GCReason::DEBUG_GC);
}
}
ZealMode zealMode = ZealMode(zeal);
if (zealMode == ZealMode::GenerationalGC) {
evictNursery(JS::GCReason::DEBUG_GC);
nursery().enterZealMode();
}
// Some modes are mutually exclusive. If we're setting one of those, we
// first reset all of them.
if (IncrementalSliceZealModes.contains(zealMode)) {
for (auto mode : IncrementalSliceZealModes) {
clearZealMode(mode);
}
}
bool schedule = zealMode >= ZealMode::Alloc;
if (zeal != 0) {
zealModeBits |= 1 << unsigned(zeal);
} else {
zealModeBits = 0;
}
zealFrequency = frequency;
nextScheduled = schedule ? frequency : 0;
}
void GCRuntime::unsetZeal(uint8_t zeal) {
MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
ZealMode zealMode = ZealMode(zeal);
if (!hasZealMode(zealMode)) {
return;
}
if (verifyPreData) {
VerifyBarriers(rt, PreBarrierVerifier);
}
if (zealMode == ZealMode::GenerationalGC) {
evictNursery(JS::GCReason::DEBUG_GC);
nursery().leaveZealMode();
}
clearZealMode(zealMode);
if (zealModeBits == 0) {
if (isIncrementalGCInProgress()) {
finishGC(JS::GCReason::DEBUG_GC);
}
zealFrequency = 0;
nextScheduled = 0;
}
}
void GCRuntime::setNextScheduled(uint32_t count) { nextScheduled = count; }
using CharRange = mozilla::Range<const char>;
using CharRangeVector = Vector<CharRange, 0, SystemAllocPolicy>;
static bool ParseZealModeName(CharRange text, uint32_t* modeOut) {
struct ModeInfo {
const char* name;
size_t length;
uint32_t value;
};
  static const ModeInfo zealModes[] = {{"None", strlen("None"), 0},
# define ZEAL_MODE(name, value) {# name, strlen(# name), value},
JS_FOR_EACH_ZEAL_MODE(ZEAL_MODE)
# undef ZEAL_MODE
};
for (auto mode : zealModes) {
if (text.length() == mode.length &&
memcmp(text.begin().get(), mode.name, mode.length) == 0) {
*modeOut = mode.value;
return true;
}
}
return false;
}
static bool ParseZealModeNumericParam(CharRange text, uint32_t* paramOut) {
if (text.length() == 0) {
return false;
}
for (auto c : text) {
if (!mozilla::IsAsciiDigit(c)) {
return false;
}
}
*paramOut = atoi(text.begin().get());
return true;
}
static bool SplitStringBy(CharRange text, char delimiter,
CharRangeVector* result) {
auto start = text.begin();
for (auto ptr = start; ptr != text.end(); ptr++) {
if (*ptr == delimiter) {
if (!result->emplaceBack(start, ptr)) {
return false;
}
start = ptr + 1;
}
}
return result->emplaceBack(start, text.end());
}
static bool PrintZealHelpAndFail() {
fprintf(stderr, "Format: JS_GC_ZEAL=level(;level)*[,N]\n");
fputs(ZealModeHelpText, stderr);
return false;
}
bool GCRuntime::parseAndSetZeal(const char* str) {
// Set the zeal mode from a string consisting of one or more mode specifiers
// separated by ';', optionally followed by a ',' and the trigger frequency.
  // The mode specifiers can be a mode name or its number.
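  // For example, "2;14,100" enables modes 2 and 14 with a trigger frequency
  // of 100; modes may also be given by name, as listed in ZealModeHelpText.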
auto text = CharRange(str, strlen(str));
CharRangeVector parts;
if (!SplitStringBy(text, ',', &parts)) {
return false;
}
if (parts.length() == 0 || parts.length() > 2) {
return PrintZealHelpAndFail();
}
uint32_t frequency = JS_DEFAULT_ZEAL_FREQ;
if (parts.length() == 2 && !ParseZealModeNumericParam(parts[1], &frequency)) {
return PrintZealHelpAndFail();
}
CharRangeVector modes;
if (!SplitStringBy(parts[0], ';', &modes)) {
return false;
}
for (const auto& descr : modes) {
uint32_t mode;
if (!ParseZealModeName(descr, &mode) &&
!(ParseZealModeNumericParam(descr, &mode) &&
mode <= unsigned(ZealMode::Limit))) {
return PrintZealHelpAndFail();
}
setZeal(mode, frequency);
}
return true;
}
const char* js::gc::AllocKindName(AllocKind kind) {
static const char* const names[] = {
# define EXPAND_THING_NAME(allocKind, _1, _2, _3, _4, _5, _6) # allocKind,
FOR_EACH_ALLOCKIND(EXPAND_THING_NAME)
# undef EXPAND_THING_NAME
};
static_assert(ArrayLength(names) == AllocKindCount,
"names array should have an entry for every AllocKind");
size_t i = size_t(kind);
MOZ_ASSERT(i < ArrayLength(names));
return names[i];
}
void js::gc::DumpArenaInfo() {
fprintf(stderr, "Arena header size: %zu\n\n", ArenaHeaderSize);
fprintf(stderr, "GC thing kinds:\n");
fprintf(stderr, "%25s %8s %8s %8s\n",
"AllocKind:", "Size:", "Count:", "Padding:");
for (auto kind : AllAllocKinds()) {
fprintf(stderr, "%25s %8zu %8zu %8zu\n", AllocKindName(kind),
Arena::thingSize(kind), Arena::thingsPerArena(kind),
Arena::firstThingOffset(kind) - ArenaHeaderSize);
}
}
#endif // JS_GC_ZEAL
bool GCRuntime::init(uint32_t maxbytes) {
MOZ_ASSERT(SystemPageSize());
Arena::checkLookupTables();
{
AutoLockGCBgAlloc lock(this);
MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_BYTES, maxbytes, lock));
const char* size = getenv("JSGC_MARK_STACK_LIMIT");
if (size) {
setMarkStackLimit(atoi(size), lock);
}
if (!nursery().init(lock)) {
return false;
}
const char* pretenureThresholdStr = getenv("JSGC_PRETENURE_THRESHOLD");
if (pretenureThresholdStr && pretenureThresholdStr[0]) {
char* last;
long pretenureThreshold = strtol(pretenureThresholdStr, &last, 10);
if (last[0] || !tunables.setParameter(JSGC_PRETENURE_THRESHOLD,
pretenureThreshold, lock)) {
fprintf(stderr, "Invalid value for JSGC_PRETENURE_THRESHOLD: %s\n",
pretenureThresholdStr);
}
}
}
#ifdef JS_GC_ZEAL
const char* zealSpec = getenv("JS_GC_ZEAL");
if (zealSpec && zealSpec[0] && !parseAndSetZeal(zealSpec)) {
return false;
}
#endif
if (!marker.init(mode)) {
return false;
}
if (!initSweepActions()) {
return false;
}
gcprobes::Init(this);
return true;
}
void GCRuntime::freezeSelfHostingZone() {
MOZ_ASSERT(!selfHostingZoneFrozen);
MOZ_ASSERT(!isIncrementalGCInProgress());
for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
MOZ_ASSERT(!zone->isGCScheduled());
if (zone->isSelfHostingZone()) {
zone->scheduleGC();
}
}
gc(GC_SHRINK, JS::GCReason::INIT_SELF_HOSTING);
selfHostingZoneFrozen = true;
}
void GCRuntime::finish() {
MOZ_ASSERT(inPageLoadCount == 0);
// Wait for nursery background free to end and disable it to release memory.
if (nursery().isEnabled()) {
nursery().disable();
}
// Wait until the background finalization and allocation stops and the
// helper thread shuts down before we forcefully release any remaining GC
// memory.
sweepTask.join();
freeTask.join();
allocTask.cancelAndWait();
decommitTask.cancelAndWait();
#ifdef JS_GC_ZEAL
// Free memory associated with GC verification.
finishVerifier();
#endif
// Delete all remaining zones.
if (rt->gcInitialized) {
for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
AutoSetThreadIsSweeping threadIsSweeping(zone);
for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
js_delete(realm.get());
}
comp->realms().clear();
js_delete(comp.get());
}
zone->compartments().clear();
js_delete(zone.get());
}
}
zones().clear();
FreeChunkPool(fullChunks_.ref());
FreeChunkPool(availableChunks_.ref());
FreeChunkPool(emptyChunks_.ref());
gcprobes::Finish(this);
nursery().printTotalProfileTimes();
stats().printTotalProfileTimes();
}
bool GCRuntime::setParameter(JSGCParamKey key, uint32_t value) {
MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
waitBackgroundSweepEnd();
AutoLockGC lock(this);
return setParameter(key, value, lock);
}
bool GCRuntime::setParameter(JSGCParamKey key, uint32_t value,
AutoLockGC& lock) {
switch (key) {
case JSGC_SLICE_TIME_BUDGET_MS:
defaultTimeBudgetMS_ = value ? value : SliceBudget::UnlimitedTimeBudget;
break;
case JSGC_MARK_STACK_LIMIT:
if (value == 0) {
return false;
}
setMarkStackLimit(value, lock);
break;
case JSGC_MODE:
      if (value != JSGC_MODE_GLOBAL && value != JSGC_MODE_ZONE &&
          value != JSGC_MODE_INCREMENTAL &&
          value != JSGC_MODE_ZONE_INCREMENTAL) {
return false;
}
mode = JSGCMode(value);
break;
case JSGC_COMPACTING_ENABLED:
compactingEnabled = value != 0;
break;
case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
marker.incrementalWeakMapMarkingEnabled = value != 0;
break;
default:
if (!tunables.setParameter(key, value, lock)) {
return false;
}
for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
zone->updateGCStartThresholds(*this, GC_NORMAL, lock);
}
}
return true;
}
void GCRuntime::resetParameter(JSGCParamKey key) {
MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
waitBackgroundSweepEnd();
AutoLockGC lock(this);
resetParameter(key, lock);
}
void GCRuntime::resetParameter(JSGCParamKey key, AutoLockGC& lock) {
switch (key) {
case JSGC_SLICE_TIME_BUDGET_MS:
defaultTimeBudgetMS_ = TuningDefaults::DefaultTimeBudgetMS;
break;
case JSGC_MARK_STACK_LIMIT:
setMarkStackLimit(MarkStack::DefaultCapacity, lock);
break;
case JSGC_MODE:
mode = TuningDefaults::Mode;
break;
case JSGC_COMPACTING_ENABLED:
compactingEnabled = TuningDefaults::CompactingEnabled;
break;
case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
marker.incrementalWeakMapMarkingEnabled =
TuningDefaults::IncrementalWeakMapMarkingEnabled;
break;
default:
tunables.resetParameter(key, lock);
for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
zone->updateGCStartThresholds(*this, GC_NORMAL, lock);
}
}
}
uint32_t GCRuntime::getParameter(JSGCParamKey key) {
MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
AutoLockGC lock(this);
return getParameter(key, lock);
}
uint32_t GCRuntime::getParameter(JSGCParamKey key, const AutoLockGC& lock) {
switch (key) {
case JSGC_MAX_BYTES:
return uint32_t(tunables.gcMaxBytes());
case JSGC_MIN_NURSERY_BYTES:
MOZ_ASSERT(tunables.gcMinNurseryBytes() < UINT32_MAX);
return uint32_t(tunables.gcMinNurseryBytes());
case JSGC_MAX_NURSERY_BYTES:
MOZ_ASSERT(tunables.gcMaxNurseryBytes() < UINT32_MAX);
return uint32_t(tunables.gcMaxNurseryBytes());
case JSGC_BYTES:
return uint32_t(heapSize.bytes());
case JSGC_NURSERY_BYTES:
return nursery().capacity();
case JSGC_NUMBER:
return uint32_t(number);
case JSGC_MODE:
return uint32_t(mode);
case JSGC_UNUSED_CHUNKS:
return uint32_t(emptyChunks(lock).count());
case JSGC_TOTAL_CHUNKS:
return uint32_t(fullChunks(lock).count() + availableChunks(lock).count() +
emptyChunks(lock).count());
case JSGC_SLICE_TIME_BUDGET_MS:
if (defaultTimeBudgetMS_.ref() == SliceBudget::UnlimitedTimeBudget) {
return 0;
} else {
MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ >= 0);
MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ <= UINT32_MAX);
return uint32_t(defaultTimeBudgetMS_);
}
case JSGC_MARK_STACK_LIMIT:
return marker.maxCapacity();
case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
return tunables.highFrequencyThreshold().ToMilliseconds();
case JSGC_SMALL_HEAP_SIZE_MAX:
return tunables.smallHeapSizeMaxBytes() / 1024 / 1024;
case JSGC_LARGE_HEAP_SIZE_MIN:
return tunables.largeHeapSizeMinBytes() / 1024 / 1024;
case JSGC_HIGH_FREQUENCY_SMALL_HEAP_GROWTH:
return uint32_t(tunables.highFrequencySmallHeapGrowth() * 100);
case JSGC_HIGH_FREQUENCY_LARGE_HEAP_GROWTH:
return uint32_t(tunables.highFrequencyLargeHeapGrowth() * 100);
case JSGC_LOW_FREQUENCY_HEAP_GROWTH:
return uint32_t(tunables.lowFrequencyHeapGrowth() * 100);
case JSGC_ALLOCATION_THRESHOLD:
return tunables.gcZoneAllocThresholdBase() / 1024 / 1024;
case JSGC_SMALL_HEAP_INCREMENTAL_LIMIT:
return uint32_t(tunables.smallHeapIncrementalLimit() * 100);
case JSGC_LARGE_HEAP_INCREMENTAL_LIMIT:
return uint32_t(tunables.largeHeapIncrementalLimit() * 100);
case JSGC_MIN_EMPTY_CHUNK_COUNT:
return tunables.minEmptyChunkCount(lock);
case JSGC_MAX_EMPTY_CHUNK_COUNT:
return tunables.maxEmptyChunkCount();
case JSGC_COMPACTING_ENABLED:
return compactingEnabled;
case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
return marker.incrementalWeakMapMarkingEnabled;
case JSGC_NURSERY_FREE_THRESHOLD_FOR_IDLE_COLLECTION:
return tunables.nurseryFreeThresholdForIdleCollection();
case JSGC_NURSERY_FREE_THRESHOLD_FOR_IDLE_COLLECTION_PERCENT:
return uint32_t(tunables.nurseryFreeThresholdForIdleCollectionFraction() *
100.0f);
case JSGC_PRETENURE_THRESHOLD:
return uint32_t(tunables.pretenureThreshold() * 100);
case JSGC_PRETENURE_GROUP_THRESHOLD:
return tunables.pretenureGroupThreshold();
case JSGC_MIN_LAST_DITCH_GC_PERIOD:
return tunables.minLastDitchGCPeriod().ToSeconds();
case JSGC_ZONE_ALLOC_DELAY_KB:
return tunables.zoneAllocDelayBytes() / 1024;
case JSGC_MALLOC_THRESHOLD_BASE:
return tunables.mallocThresholdBase() / 1024 / 1024;
case JSGC_MALLOC_GROWTH_FACTOR:
return uint32_t(tunables.mallocGrowthFactor() * 100);
case JSGC_CHUNK_BYTES:
return ChunkSize;
default:
MOZ_CRASH("Unknown parameter key");
}
}
void GCRuntime::setMarkStackLimit(size_t limit, AutoLockGC& lock) {
MOZ_ASSERT(!JS::RuntimeHeapIsBusy());
AutoUnlockGC unlock(lock);
AutoStopVerifyingBarriers pauseVerification(rt, false);
marker.setMaxCapacity(limit);
}
bool GCRuntime::addBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
AssertHeapIsIdle();
return !!blackRootTracers.ref().append(
Callback<JSTraceDataOp>(traceOp, data));
}
void GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
// Can be called from finalizers
for (size_t i = 0; i < blackRootTracers.ref().length(); i++) {
Callback<JSTraceDataOp>* e = &blackRootTracers.ref()[i];
if (e->op == traceOp && e->data == data) {
blackRootTracers.ref().erase(e);
break;
}
}
}
void GCRuntime::setGrayRootsTracer(JSTraceDataOp traceOp, void* data) {
AssertHeapIsIdle();
grayRootTracer.ref() = {traceOp, data};
}
void GCRuntime::clearBlackAndGrayRootTracers() {
MOZ_ASSERT(rt->isBeingDestroyed());
blackRootTracers.ref().clear();
setGrayRootsTracer(nullptr, nullptr);
}
void GCRuntime::setGCCallback(JSGCCallback callback, void* data) {
gcCallback.ref() = {callback, data};
}
void GCRuntime::callGCCallback(JSGCStatus status, JS::GCReason reason) const {
const auto& callback = gcCallback.ref();
MOZ_ASSERT(callback.op);
callback.op(rt->mainContextFromOwnThread(), status, reason, callback.data);
}
void GCRuntime::setObjectsTenuredCallback(JSObjectsTenuredCallback callback,
void* data) {
tenuredCallback.ref() = {callback, data};
}
void GCRuntime::callObjectsTenuredCallback() {
JS::AutoSuppressGCAnalysis nogc;
const auto& callback = tenuredCallback.ref();
if (callback.op) {
callback.op(rt->mainContextFromOwnThread(), callback.data);
}
}
bool GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data) {
return finalizeCallbacks.ref().append(
Callback<JSFinalizeCallback>(callback, data));
}
template <typename F>
static void EraseCallback(CallbackVector<F>& vector, F callback) {
for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
if (p->op == callback) {
vector.erase(p);
return;
}
}
}
void GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback) {
EraseCallback(finalizeCallbacks.ref(), callback);
}
void GCRuntime::callFinalizeCallbacks(JSFreeOp* fop,
JSFinalizeStatus status) const {
for (auto& p : finalizeCallbacks.ref()) {
p.op(fop, status, p.data);
}
}
void GCRuntime::setHostCleanupFinalizationRegistryCallback(
JSHostCleanupFinalizationRegistryCallback callback, void* data) {
hostCleanupFinalizationRegistryCallback.ref() = {callback, data};
}
void GCRuntime::callHostCleanupFinalizationRegistryCallback(
FinalizationRegistryObject* registry) {
JS::AutoSuppressGCAnalysis nogc;
const auto& callback = hostCleanupFinalizationRegistryCallback.ref();
if (callback.op) {
callback.op(registry, callback.data);
}
}
bool GCRuntime::addWeakPointerZonesCallback(JSWeakPointerZonesCallback callback,
void* data) {
return updateWeakPointerZonesCallbacks.ref().append(
Callback<JSWeakPointerZonesCallback>(callback, data));
}
void GCRuntime::removeWeakPointerZonesCallback(
JSWeakPointerZonesCallback callback) {
EraseCallback(updateWeakPointerZonesCallbacks.ref(), callback);
}
void GCRuntime::callWeakPointerZonesCallbacks() const {
JSContext* cx = rt->mainContextFromOwnThread();
for (auto const& p : updateWeakPointerZonesCallbacks.ref()) {
p.op(cx, p.data);
}
}
bool GCRuntime::addWeakPointerCompartmentCallback(
JSWeakPointerCompartmentCallback callback, void* data) {
return updateWeakPointerCompartmentCallbacks.ref().append(
Callback<JSWeakPointerCompartmentCallback>(callback, data));
}
void GCRuntime::removeWeakPointerCompartmentCallback(
JSWeakPointerCompartmentCallback callback) {
EraseCallback(updateWeakPointerCompartmentCallbacks.ref(), callback);
}
void GCRuntime::callWeakPointerCompartmentCallbacks(
JS::Compartment* comp) const {
JSContext* cx = rt->mainContextFromOwnThread();
for (auto const& p : updateWeakPointerCompartmentCallbacks.ref()) {
p.op(cx, comp, p.data);
}
}
JS::GCSliceCallback GCRuntime::setSliceCallback(JS::GCSliceCallback callback) {
return stats().setSliceCallback(callback);
}
JS::GCNurseryCollectionCallback GCRuntime::setNurseryCollectionCallback(
JS::GCNurseryCollectionCallback callback) {
return stats().setNurseryCollectionCallback(callback);
}
JS::DoCycleCollectionCallback GCRuntime::setDoCycleCollectionCallback(
JS::DoCycleCollectionCallback callback) {
const auto prior = gcDoCycleCollectionCallback.ref();
gcDoCycleCollectionCallback.ref() = {callback, nullptr};
return prior.op;
}
void GCRuntime::callDoCycleCollectionCallback(JSContext* cx) {
const auto& callback = gcDoCycleCollectionCallback.ref();
if (callback.op) {
callback.op(cx);
}
}
bool GCRuntime::addRoot(Value* vp, const char* name) {
/*
* Sometimes Firefox will hold weak references to objects and then convert
* them to strong references by calling AddRoot (e.g., via PreserveWrapper,
* or ModifyBusyCount in workers). We need a read barrier to cover these
* cases.
*/
if (isIncrementalGCInProgress()) {
GCPtrValue::writeBarrierPre(*vp);
}
return rootsHash.ref().put(vp, name);
}
void GCRuntime::removeRoot(Value* vp) {
rootsHash.ref().remove(vp);
notifyRootsRemoved();
}
extern JS_FRIEND_API bool js::AddRawValueRoot(JSContext* cx, Value* vp,
const char* name) {
MOZ_ASSERT(vp);
MOZ_ASSERT(name);
bool ok = cx->runtime()->gc.addRoot(vp, name);
if (!ok) {
JS_ReportOutOfMemory(cx);
}
return ok;
}
extern JS_FRIEND_API void js::RemoveRawValueRoot(JSContext* cx, Value* vp) {
cx->runtime()->gc.removeRoot(vp);
}
/* Compacting GC */
bool js::gc::IsCurrentlyAnimating(const TimeStamp& lastAnimationTime,
const TimeStamp& currentTime) {
// Assume that we're currently animating if js::NotifyAnimationActivity has
// been called in the last second.
static const auto oneSecond = TimeDuration::FromSeconds(1);
return !lastAnimationTime.IsNull() &&
currentTime < (lastAnimationTime + oneSecond);
}
bool GCRuntime::shouldCompact() {
// Compact on shrinking GC if enabled. Skip compacting in incremental GCs
// if we are currently animating, unless the user is inactive or we're
// responding to memory pressure.
if (invocationKind != GC_SHRINK || !isCompactingGCEnabled()) {
return false;
}
if (initialReason == JS::GCReason::USER_INACTIVE ||
initialReason == JS::GCReason::MEM_PRESSURE) {
return true;
}
return !isIncremental ||
!IsCurrentlyAnimating(rt->lastAnimationTime, TimeStamp::Now());
}
bool GCRuntime::isCompactingGCEnabled() const {
return compactingEnabled &&
rt->mainContextFromOwnThread()->compactingDisabledCount == 0;
}
AutoDisableCompactingGC::AutoDisableCompactingGC(JSContext* cx) : cx(cx) {
++cx->compactingDisabledCount;
if (cx->runtime()->gc.isIncrementalGCInProgress() &&
cx->runtime()->gc.isCompactingGc()) {
FinishGC(cx);
}
}
AutoDisableCompactingGC::~AutoDisableCompactingGC() {
MOZ_ASSERT(cx->compactingDisabledCount > 0);
--cx->compactingDisabledCount;
}
bool GCRuntime::canRelocateZone(Zone* zone) const {
if (zone->isAtomsZone()) {
return false;
}
if (zone->isSelfHostingZone() && selfHostingZoneFrozen) {
return false;
}
return true;
}
Arena* ArenaList::removeRemainingArenas(Arena** arenap) {
// This is only ever called to remove arenas that are after the cursor, so
// we don't need to update it.
#ifdef DEBUG
for (Arena* arena = *arenap; arena; arena = arena->next) {
MOZ_ASSERT(cursorp_ != &arena->next);
}
#endif
Arena* remainingArenas = *arenap;
*arenap = nullptr;
check();
return remainingArenas;
}
static bool ShouldRelocateAllArenas(JS::GCReason reason) {
return reason == JS::GCReason::DEBUG_GC;
}
/*
* Choose which arenas to relocate all cells from. Return an arena cursor that
* can be passed to removeRemainingArenas().
*/
Arena** ArenaList::pickArenasToRelocate(size_t& arenaTotalOut,
size_t& relocTotalOut) {
// Relocate the greatest number of arenas such that the number of used cells
// in relocated arenas is less than or equal to the number of free cells in
// unrelocated arenas. In other words we only relocate cells we can move
  // into existing arenas, and we choose the least full arenas to relocate.
//
// This is made easier by the fact that the arena list has been sorted in
// descending order of number of used cells, so we will always relocate a
// tail of the arena list. All we need to do is find the point at which to
// start relocating.
check();
if (isCursorAtEnd()) {
return nullptr;
}
Arena** arenap = cursorp_; // Next arena to consider for relocation.
size_t previousFreeCells = 0; // Count of free cells before arenap.
size_t followingUsedCells = 0; // Count of used cells after arenap.
size_t fullArenaCount = 0; // Number of full arenas (not relocated).
size_t nonFullArenaCount =
0; // Number of non-full arenas (considered for relocation).
size_t arenaIndex = 0; // Index of the next arena to consider.
for (Arena* arena = head_; arena != *cursorp_; arena = arena->next) {
fullArenaCount++;
}
for (Arena* arena = *cursorp_; arena; arena = arena->next) {
followingUsedCells += arena->countUsedCells();
nonFullArenaCount++;
}
mozilla::DebugOnly<size_t> lastFreeCells(0);
size_t cellsPerArena = Arena::thingsPerArena((*arenap)->getAllocKind());
while (*arenap) {
Arena* arena = *arenap;
if (followingUsedCells <= previousFreeCells) {
break;
}
size_t freeCells = arena->countFreeCells();
size_t usedCells = cellsPerArena - freeCells;
followingUsedCells -= usedCells;
#ifdef DEBUG
MOZ_ASSERT(freeCells >= lastFreeCells);
lastFreeCells = freeCells;
#endif
previousFreeCells += freeCells;
arenap = &arena->next;
arenaIndex++;
}
size_t relocCount = nonFullArenaCount - arenaIndex;
MOZ_ASSERT(relocCount < nonFullArenaCount);
MOZ_ASSERT((relocCount == 0) == (!*arenap));
arenaTotalOut += fullArenaCount + nonFullArenaCount;
relocTotalOut += relocCount;
return arenap;
}
#ifdef DEBUG
inline bool PtrIsInRange(const void* ptr, const void* start, size_t length) {
return uintptr_t(ptr) - uintptr_t(start) < length;
}
#endif
static void RelocateCell(Zone* zone, TenuredCell* src, AllocKind thingKind,
size_t thingSize) {
JS::AutoSuppressGCAnalysis nogc(TlsContext.get());
// Allocate a new cell.
MOZ_ASSERT(zone == src->zone());
TenuredCell* dst = AllocateCellInGC(zone, thingKind);
// Copy source cell contents to destination.
memcpy(dst, src, thingSize);
// Move any uid attached to the object.
src->zone()->transferUniqueId(dst, src);
if (IsObjectAllocKind(thingKind)) {
JSObject* srcObj = static_cast<JSObject*>(static_cast<Cell*>(src));
JSObject* dstObj = static_cast<JSObject*>(static_cast<Cell*>(dst));
if (srcObj->isNative()) {
NativeObject* srcNative = &srcObj->as<NativeObject>();
NativeObject* dstNative = &dstObj->as<NativeObject>();