/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*-
 * vim: set ts=8 sts=2 et sw=2 tw=80:
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

/*
 * [SMDOC] Garbage Collector
 *
 * This code implements an incremental mark-and-sweep garbage collector, with
 * most sweeping carried out in the background on a parallel thread.
 *
 * Full vs. zone GC
 * ----------------
 *
 * The collector can collect all zones at once, or a subset. These types of
 * collection are referred to as a full GC and a zone GC respectively.
 *
 * It is possible for an incremental collection that started out as a full GC to
 * become a zone GC if new zones are created during the course of the
 * collection.
 *
 * Incremental collection
 * ----------------------
 *
 * For a collection to be carried out incrementally, the following conditions
 * must be met:
 *  - the collection must be run by calling js::GCSlice() rather than js::GC()
 *  - the GC mode must have been set to JSGC_MODE_INCREMENTAL or
 *    JSGC_MODE_ZONE_INCREMENTAL with JS_SetGCParameter()
 *  - no thread may have an AutoKeepAtoms instance on the stack
 *
 * The last condition is an engine-internal mechanism to ensure that incremental
 * collection is not carried out without the correct barriers being implemented.
 * For more information see 'Incremental marking' below.
 *
 * If the collection is not incremental, all foreground activity happens inside
 * a single call to GC() or GCSlice(). However, the collection is not complete
 * until the background sweeping activity has finished.
 *
 * An incremental collection proceeds as a series of slices, interleaved with
 * mutator activity, i.e. running JavaScript code. Slices are limited by a time
 * budget. The slice finishes as soon as possible after the requested time has
 * passed.
 *
 * Collector states
 * ----------------
 *
 * The collector proceeds through the following states, the current state being
 * held in JSRuntime::gcIncrementalState:
 *
 *  - MarkRoots - marks the stack and other roots
 *  - Mark      - incrementally marks reachable things
 *  - Sweep     - sweeps zones in groups and continues marking unswept zones
 *  - Finalize  - performs background finalization, concurrent with mutator
 *  - Compact   - incrementally compacts by zone
 *  - Decommit  - performs background decommit and chunk removal
 *
 * The MarkRoots activity always takes place in the first slice. The next two
 * states can take place over one or more slices.
 *
 * In other words an incremental collection proceeds like this:
 *
 * Slice 1:   MarkRoots:  Roots are pushed onto the mark stack.
 *            Mark:       The mark stack is processed by popping an element,
 *                        marking it, and pushing its children.
 *
 *          ... JS code runs ...
 *
 * Slice 2:   Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n-1: Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n:   Mark:       Mark stack is completely drained.
 *            Sweep:      Select first group of zones to sweep and sweep them.
 *
 *          ... JS code runs ...
 *
 * Slice n+1: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive (see below). Then sweep more zone
 *                        sweep groups.
 *
 *          ... JS code runs ...
 *
 * Slice n+2: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive. Then sweep more zones.
 *
 *          ... JS code runs ...
 *
 * Slice m:   Sweep:      Sweeping is finished, and background sweeping is
 *                        started on the helper thread.
 *
 *          ... JS code runs, remaining sweeping done on background thread ...
 *
 * When background sweeping finishes the GC is complete.
 *
 * Incremental marking
 * -------------------
 *
 * Incremental collection requires close collaboration with the mutator (i.e.,
 * JS code) to guarantee correctness.
 *
 *  - During an incremental GC, if a memory location (except a root) is written
 *    to, then the value it previously held must be marked. Write barriers
 *    ensure this.
 *
 *  - Any object that is allocated during incremental GC must start out marked.
 *
 *  - Roots are marked in the first slice and hence don't need write barriers.
 *    Roots are things like the C stack and the VM stack.
 *
 * The problem that write barriers solve is that between slices the mutator can
 * change the object graph. We must ensure that it cannot do this in such a way
 * that makes us fail to mark a reachable object (marking an unreachable object
 * is tolerable).
 *
 * We use a snapshot-at-the-beginning algorithm to do this. This means that we
 * promise to mark at least everything that is reachable at the beginning of
 * collection. To implement it we mark the old contents of every non-root memory
 * location written to by the mutator while the collection is in progress, using
 * write barriers. This is described in gc/Barrier.h.
 *
 * Incremental sweeping
 * --------------------
 *
 * Sweeping is difficult to do incrementally because object finalizers must be
 * run at the start of sweeping, before any mutator code runs. The reason is
 * that some objects use their finalizers to remove themselves from caches. If
 * mutator code were allowed to run after the start of sweeping, it could
 * observe the state of the cache and create a new reference to an object that
 * was just about to be destroyed.
 *
 * Sweeping all finalizable objects in one go would introduce long pauses, so
 * instead sweeping is broken up into groups of zones. Zones which are not yet
 * being swept are still marked, so the issue above does not apply.
 *
 * The order of sweeping is restricted by cross compartment pointers - for
 * example say that object |a| from zone A points to object |b| in zone B and
 * neither object was marked when we transitioned to the Sweep phase. Imagine we
 * sweep B first and then return to the mutator. It's possible that the mutator
 * could cause |a| to become alive through a read barrier (perhaps it was a
 * shape that was accessed via a shape table). Then we would need to mark |b|,
 * which |a| points to, but |b| has already been swept.
 *
 * So if there is such a pointer then marking of zone B must not finish before
 * marking of zone A. Pointers which form a cycle between zones therefore
 * restrict those zones to being swept at the same time, and these are found
 * using Tarjan's algorithm for finding the strongly connected components of a
 * graph.
 *
 * GC things without finalizers, and things with finalizers that are able to run
 * in the background, are swept on the background thread. This accounts for most
 * of the sweeping work.
 *
 * Reset
 * -----
 *
 * During incremental collection it is possible, although unlikely, for
 * conditions to change such that incremental collection is no longer safe. In
 * this case, the collection is 'reset' by resetIncrementalGC(). If we are in
 * the mark state, this just stops marking, but if we have started sweeping
 * already, we continue non-incrementally until we have swept the current sweep
 * group. Following a reset, a new collection is started.
 *
 * Compacting GC
 * -------------
 *
 * Compacting GC happens at the end of a major GC as part of the last slice.
 * There are three parts:
 *
 *  - Arenas are selected for compaction.
 *  - The contents of those arenas are moved to new arenas.
 *  - All references to moved things are updated.
 *
 * Collecting Atoms
 * ----------------
 *
 * Atoms are collected differently from other GC things. They are contained in
 * a special zone and things in other zones may have pointers to them that are
 * not recorded in the cross compartment pointer map. Each zone holds a bitmap
 * with the atoms it might be keeping alive, and atoms are only collected if
 * they are not included in any zone's atom bitmap. See AtomMarking.cpp for how
 * this bitmap is managed.
 */

#include "gc/GC-inl.h"

#include "mozilla/ArrayUtils.h"
#include "mozilla/DebugOnly.h"
#include "mozilla/MacroForEach.h"
#include "mozilla/MemoryReporting.h"
#include "mozilla/Range.h"
#include "mozilla/ScopeExit.h"
#include "mozilla/TextUtils.h"
#include "mozilla/TimeStamp.h"
#include "mozilla/TypeTraits.h"
#include "mozilla/Unused.h"

#include <algorithm>
#include <initializer_list>
#include <string.h>
#include <utility>
#ifndef XP_WIN
#  include <sys/mman.h>
#  include <unistd.h>
#endif

#include "jsapi.h"
#include "jsfriendapi.h"
#include "jstypes.h"

#include "builtin/FinalizationGroupObject.h"
#include "debugger/DebugAPI.h"
#include "gc/FindSCCs.h"
#include "gc/FreeOp.h"
#include "gc/GCInternals.h"
#include "gc/GCLock.h"
#include "gc/GCTrace.h"
#include "gc/Memory.h"
#include "gc/Policy.h"
#include "gc/WeakMap.h"
#include "jit/BaselineJIT.h"
#include "jit/IonCode.h"
#include "jit/JitcodeMap.h"
#include "jit/JitRealm.h"
#include "jit/MacroAssembler.h"
#include "js/SliceBudget.h"
#include "proxy/DeadObjectProxy.h"
#include "util/Poison.h"
#include "util/Windows.h"
#include "vm/BigIntType.h"
#include "vm/GeckoProfiler.h"
#include "vm/JSAtom.h"
#include "vm/JSContext.h"
#include "vm/JSObject.h"
#include "vm/JSScript.h"
#include "vm/Printer.h"
#include "vm/ProxyObject.h"
#include "vm/Realm.h"
#include "vm/Shape.h"
#include "vm/StringType.h"
#include "vm/SymbolType.h"
#include "vm/Time.h"
#include "vm/TraceLogging.h"
#include "vm/WrapperObject.h"

#include "gc/Heap-inl.h"
#include "gc/Marking-inl.h"
#include "gc/Nursery-inl.h"
#include "gc/PrivateIterators-inl.h"
#include "gc/Zone-inl.h"
#include "vm/GeckoProfiler-inl.h"
#include "vm/JSObject-inl.h"
#include "vm/JSScript-inl.h"
#include "vm/Stack-inl.h"
#include "vm/StringType-inl.h"

using namespace js;
using namespace js::gc;

using mozilla::ArrayLength;
using mozilla::Maybe;
using mozilla::Nothing;
using mozilla::Some;
using mozilla::TimeDuration;
using mozilla::TimeStamp;

using JS::AutoGCRooter;

/* Increase the IGC marking slice time if we are in highFrequencyGC mode. */
static constexpr int IGC_MARK_SLICE_MULTIPLIER = 2;

const AllocKind gc::slotsToThingKind[] = {
    // clang-format off
    /*  0 */ AllocKind::OBJECT0,  AllocKind::OBJECT2,  AllocKind::OBJECT2,  AllocKind::OBJECT4,
    /*  4 */ AllocKind::OBJECT4,  AllocKind::OBJECT8,  AllocKind::OBJECT8,  AllocKind::OBJECT8,
    /*  8 */ AllocKind::OBJECT8,  AllocKind::OBJECT12, AllocKind::OBJECT12, AllocKind::OBJECT12,
    /* 12 */ AllocKind::OBJECT12, AllocKind::OBJECT16, AllocKind::OBJECT16, AllocKind::OBJECT16,
    /* 16 */ AllocKind::OBJECT16
    // clang-format on
};

// Check that reserved bits of a Cell are compatible with our typical allocators
// since most derived classes will store a pointer in the first word.
static_assert(js::detail::LIFO_ALLOC_ALIGN > BitMask(Cell::ReservedBits),
              "Cell::ReservedBits should support LifoAlloc");
static_assert(CellAlignBytes > BitMask(Cell::ReservedBits),
              "Cell::ReservedBits should support gc::Cell");
static_assert(js::jit::CodeAlignment > BitMask(Cell::ReservedBits),
              "Cell::ReservedBits should support JIT code");

static_assert(mozilla::ArrayLength(slotsToThingKind) ==
                  SLOTS_TO_THING_KIND_LIMIT,
              "We have defined a slot count for each kind.");

#define CHECK_THING_SIZE(allocKind, traceKind, type, sizedType, bgFinal,       \
                         nursery, compact)                                     \
  static_assert(sizeof(sizedType) >= SortedArenaList::MinThingSize,            \
                #sizedType " is smaller than SortedArenaList::MinThingSize!"); \
  static_assert(sizeof(sizedType) >= sizeof(FreeSpan),                         \
                #sizedType " is smaller than FreeSpan");                       \
  static_assert(sizeof(sizedType) % CellAlignBytes == 0,                       \
                "Size of " #sizedType " is not a multiple of CellAlignBytes"); \
  static_assert(sizeof(sizedType) >= MinCellSize,                              \
                "Size of " #sizedType " is smaller than the minimum size");
FOR_EACH_ALLOCKIND(CHECK_THING_SIZE);
#undef CHECK_THING_SIZE

template <typename T>
struct ArenaLayout {
  static constexpr size_t thingSize() { return sizeof(T); }
  static constexpr size_t thingsPerArena() {
    return (ArenaSize - ArenaHeaderSize) / thingSize();
  }
  static constexpr size_t firstThingOffset() {
    return ArenaSize - thingSize() * thingsPerArena();
  }
};

const uint8_t Arena::ThingSizes[] = {
#define EXPAND_THING_SIZE(_1, _2, _3, sizedType, _4, _5, _6) \
  ArenaLayout<sizedType>::thingSize(),
    FOR_EACH_ALLOCKIND(EXPAND_THING_SIZE)
#undef EXPAND_THING_SIZE
};

const uint8_t Arena::FirstThingOffsets[] = {
#define EXPAND_FIRST_THING_OFFSET(_1, _2, _3, sizedType, _4, _5, _6) \
  ArenaLayout<sizedType>::firstThingOffset(),
    FOR_EACH_ALLOCKIND(EXPAND_FIRST_THING_OFFSET)
#undef EXPAND_FIRST_THING_OFFSET
};

const uint8_t Arena::ThingsPerArena[] = {
#define EXPAND_THINGS_PER_ARENA(_1, _2, _3, sizedType, _4, _5, _6) \
  ArenaLayout<sizedType>::thingsPerArena(),
    FOR_EACH_ALLOCKIND(EXPAND_THINGS_PER_ARENA)
#undef EXPAND_THINGS_PER_ARENA
};

FreeSpan FreeLists::emptySentinel;

struct js::gc::FinalizePhase {
  gcstats::PhaseKind statsPhase;
  AllocKinds kinds;
};

/*
 * Finalization order for objects swept incrementally on the main thread.
 */
static constexpr FinalizePhase ForegroundObjectFinalizePhase = {
    gcstats::PhaseKind::SWEEP_OBJECT,
    {AllocKind::OBJECT0, AllocKind::OBJECT2, AllocKind::OBJECT4,
     AllocKind::OBJECT8, AllocKind::OBJECT12, AllocKind::OBJECT16}};

/*
 * Finalization order for GC things swept incrementally on the main thread.
 */
static constexpr FinalizePhase ForegroundNonObjectFinalizePhase = {
    gcstats::PhaseKind::SWEEP_SCRIPT, {AllocKind::SCRIPT, AllocKind::JITCODE}};

/*
 * Finalization order for GC things swept on the background thread.
 */
static constexpr FinalizePhase BackgroundFinalizePhases[] = {
    {gcstats::PhaseKind::SWEEP_OBJECT,
     {AllocKind::FUNCTION, AllocKind::FUNCTION_EXTENDED,
      AllocKind::OBJECT0_BACKGROUND, AllocKind::OBJECT2_BACKGROUND,
      AllocKind::ARRAYBUFFER4, AllocKind::OBJECT4_BACKGROUND,
      AllocKind::ARRAYBUFFER8, AllocKind::OBJECT8_BACKGROUND,
      AllocKind::ARRAYBUFFER12, AllocKind::OBJECT12_BACKGROUND,
      AllocKind::ARRAYBUFFER16, AllocKind::OBJECT16_BACKGROUND}},
    {gcstats::PhaseKind::SWEEP_SCOPE,
     {
         AllocKind::SCOPE,
     }},
    {gcstats::PhaseKind::SWEEP_REGEXP_SHARED,
     {
         AllocKind::REGEXP_SHARED,
     }},
    {gcstats::PhaseKind::SWEEP_STRING,
     {AllocKind::FAT_INLINE_STRING, AllocKind::STRING,
      AllocKind::EXTERNAL_STRING, AllocKind::FAT_INLINE_ATOM, AllocKind::ATOM,
      AllocKind::SYMBOL, AllocKind::BIGINT}},
    {gcstats::PhaseKind::SWEEP_SHAPE,
     {AllocKind::SHAPE, AllocKind::ACCESSOR_SHAPE, AllocKind::BASE_SHAPE,
      AllocKind::OBJECT_GROUP}}};

void Arena::unmarkAll() {
  uintptr_t* word = chunk()->bitmap.arenaBits(this);
  memset(word, 0, ArenaBitmapWords * sizeof(uintptr_t));
}

void Arena::unmarkPreMarkedFreeCells() {
  for (ArenaFreeCellIter iter(this); !iter.done(); iter.next()) {
    TenuredCell* cell = iter.getCell();
    MOZ_ASSERT(cell->isMarkedBlack());
    cell->unmark();
  }
}

#ifdef DEBUG
void Arena::checkNoMarkedFreeCells() {
  for (ArenaFreeCellIter iter(this); !iter.done(); iter.next()) {
    MOZ_ASSERT(!iter.getCell()->isMarkedAny());
  }
}
#endif

/* static */
void Arena::staticAsserts() {
  static_assert(size_t(AllocKind::LIMIT) <= 255,
                "We must be able to fit the allockind into uint8_t.");
  static_assert(mozilla::ArrayLength(ThingSizes) == size_t(AllocKind::LIMIT),
                "We haven't defined all thing sizes.");
  static_assert(
      mozilla::ArrayLength(FirstThingOffsets) == size_t(AllocKind::LIMIT),
      "We haven't defined all offsets.");
  static_assert(
      mozilla::ArrayLength(ThingsPerArena) == size_t(AllocKind::LIMIT),
      "We haven't defined all counts.");
}

/* static */
inline void Arena::checkLookupTables() {
#ifdef DEBUG
  for (size_t i = 0; i < size_t(AllocKind::LIMIT); i++) {
    MOZ_ASSERT(
        FirstThingOffsets[i] + ThingsPerArena[i] * ThingSizes[i] == ArenaSize,
        "Inconsistent arena lookup table data");
  }
#endif
}

template <typename T>
inline size_t Arena::finalize(JSFreeOp* fop, AllocKind thingKind,
                              size_t thingSize) {
  /* Enforce requirements on size of T. */
  MOZ_ASSERT(thingSize % CellAlignBytes == 0);
  MOZ_ASSERT(thingSize >= MinCellSize);
  MOZ_ASSERT(thingSize <= 255);

  MOZ_ASSERT(allocated());
  MOZ_ASSERT(thingKind == getAllocKind());
  MOZ_ASSERT(thingSize == getThingSize());
  MOZ_ASSERT(!onDelayedMarkingList_);

  uint_fast16_t firstThing = firstThingOffset(thingKind);
  uint_fast16_t firstThingOrSuccessorOfLastMarkedThing = firstThing;
  uint_fast16_t lastThing = ArenaSize - thingSize;

  FreeSpan newListHead;
  FreeSpan* newListTail = &newListHead;
  size_t nmarked = 0;

  for (ArenaCellIterUnderFinalize i(this); !i.done(); i.next()) {
    T* t = i.get<T>();
    if (t->asTenured().isMarkedAny()) {
      uint_fast16_t thing = uintptr_t(t) & ArenaMask;
      if (thing != firstThingOrSuccessorOfLastMarkedThing) {
        // We just finished passing over one or more free things,
        // so record a new FreeSpan.
        newListTail->initBounds(firstThingOrSuccessorOfLastMarkedThing,
                                thing - thingSize, this);
        newListTail = newListTail->nextSpanUnchecked(this);
      }
      firstThingOrSuccessorOfLastMarkedThing = thing + thingSize;
      nmarked++;
    } else {
      t->finalize(fop);
      AlwaysPoison(t, JS_SWEPT_TENURED_PATTERN, thingSize,
                   MemCheckKind::MakeUndefined);
      gcTracer.traceTenuredFinalize(t);
    }
  }

  if (nmarked == 0) {
    // Do nothing. The caller will update the arena appropriately.
    MOZ_ASSERT(newListTail == &newListHead);
    DebugOnlyPoison(data, JS_SWEPT_TENURED_PATTERN, sizeof(data),
                    MemCheckKind::MakeUndefined);
    return nmarked;
  }

  MOZ_ASSERT(firstThingOrSuccessorOfLastMarkedThing != firstThing);
  uint_fast16_t lastMarkedThing =
      firstThingOrSuccessorOfLastMarkedThing - thingSize;
  if (lastThing == lastMarkedThing) {
    // If the last thing was marked, we will have already set the bounds of
    // the final span, and we just need to terminate the list.
    newListTail->initAsEmpty();
  } else {
    // Otherwise, end the list with a span that covers the final stretch of
    // free things.
    newListTail->initFinal(firstThingOrSuccessorOfLastMarkedThing, lastThing,
                           this);
  }

  firstFreeSpan = newListHead;
#ifdef DEBUG
  size_t nfree = numFreeThings(thingSize);
  MOZ_ASSERT(nfree + nmarked == thingsPerArena(thingKind));
#endif
  return nmarked;
}

// Finalize arenas from the src list, recycling fully unmarked arenas and
// inserting the rest into the appropriate destination size bins.
template <typename T>
static inline bool FinalizeTypedArenas(JSFreeOp* fop, Arena** src,
                                       SortedArenaList& dest,
                                       AllocKind thingKind,
                                       SliceBudget& budget) {
  AutoSetThreadIsFinalizing setThreadUse;

  size_t thingSize = Arena::thingSize(thingKind);
  size_t thingsPerArena = Arena::thingsPerArena(thingKind);

  while (Arena* arena = *src) {
    *src = arena->next;
    size_t nmarked = arena->finalize<T>(fop, thingKind, thingSize);
    size_t nfree = thingsPerArena - nmarked;

    if (nmarked) {
      dest.insertAt(arena, nfree);
    } else {
      arena->chunk()->recycleArena(arena, dest, thingsPerArena);
    }

    budget.step(thingsPerArena);
    if (budget.isOverBudget()) {
      return false;
    }
  }

  return true;
}

/*
 * Finalize the list of arenas.
 */
static bool FinalizeArenas(JSFreeOp* fop, Arena** src, SortedArenaList& dest,
                           AllocKind thingKind, SliceBudget& budget) {
  switch (thingKind) {
#define EXPAND_CASE(allocKind, traceKind, type, sizedType, bgFinal, nursery, \
                    compact)                                                 \
  case AllocKind::allocKind:                                                 \
    return FinalizeTypedArenas<type>(fop, src, dest, thingKind, budget);
    FOR_EACH_ALLOCKIND(EXPAND_CASE)
#undef EXPAND_CASE

    default:
      MOZ_CRASH("Invalid alloc kind");
  }
}

Chunk* ChunkPool::pop() {
  MOZ_ASSERT(bool(head_) == bool(count_));
  if (!count_) {
    return nullptr;
  }
  return remove(head_);
}

void ChunkPool::push(Chunk* chunk) {
  MOZ_ASSERT(!chunk->info.next);
  MOZ_ASSERT(!chunk->info.prev);

  chunk->info.next = head_;
  if (head_) {
    head_->info.prev = chunk;
  }
  head_ = chunk;
  ++count_;
}

Chunk* ChunkPool::remove(Chunk* chunk) {
  MOZ_ASSERT(count_ > 0);
  MOZ_ASSERT(contains(chunk));

  if (head_ == chunk) {
    head_ = chunk->info.next;
  }
  if (chunk->info.prev) {
    chunk->info.prev->info.next = chunk->info.next;
  }
  if (chunk->info.next) {
    chunk->info.next->info.prev = chunk->info.prev;
  }
  chunk->info.next = chunk->info.prev = nullptr;
  --count_;

  return chunk;
}

// We could keep the chunk pool sorted, but that's likely to be more expensive.
// This sort is n*log(n), whereas keeping the list sorted on every operation is
// likely to be m*n, with m being the number of operations (likely higher than
// n).
void ChunkPool::sort() {
  // Only sort if the list isn't already sorted.
  if (!isSorted()) {
    head_ = mergeSort(head(), count());

    // Fixup prev pointers.
    Chunk* prev = nullptr;
    for (Chunk* cur = head_; cur; cur = cur->info.next) {
      cur->info.prev = prev;
      prev = cur;
    }
  }

  MOZ_ASSERT(verify());
  MOZ_ASSERT(isSorted());
}

Chunk* ChunkPool::mergeSort(Chunk* list, size_t count) {
  MOZ_ASSERT(bool(list) == bool(count));

  if (count < 2) {
    return list;
  }

  size_t half = count / 2;

  // Split.
  Chunk* front = list;
  Chunk* back;
  {
    Chunk* cur = list;
    for (size_t i = 0; i < half - 1; i++) {
      MOZ_ASSERT(cur);
      cur = cur->info.next;
    }
    back = cur->info.next;
    cur->info.next = nullptr;
  }

  front = mergeSort(front, half);
  back = mergeSort(back, count - half);

  // Merge.
  list = nullptr;
  Chunk** cur = &list;
  while (front || back) {
    if (!front) {
      *cur = back;
      break;
    }
    if (!back) {
      *cur = front;
      break;
    }

    // Note that the sort is stable due to the <= here. Nothing depends on
    // this but it could.
    if (front->info.numArenasFree <= back->info.numArenasFree) {
      *cur = front;
      front = front->info.next;
      cur = &(*cur)->info.next;
    } else {
      *cur = back;
      back = back->info.next;
      cur = &(*cur)->info.next;
    }
  }

  return list;
}

bool ChunkPool::isSorted() const {
  uint32_t last = 1;
  for (Chunk* cursor = head_; cursor; cursor = cursor->info.next) {
    if (cursor->info.numArenasFree < last) {
      return false;
    }
    last = cursor->info.numArenasFree;
  }
  return true;
}

#ifdef DEBUG
bool ChunkPool::contains(Chunk* chunk) const {
  verify();
  for (Chunk* cursor = head_; cursor; cursor = cursor->info.next) {
    if (cursor == chunk) {
      return true;
    }
  }
  return false;
}

bool ChunkPool::verify() const {
  MOZ_ASSERT(bool(head_) == bool(count_));
  uint32_t count = 0;
  for (Chunk* cursor = head_; cursor; cursor = cursor->info.next, ++count) {
    MOZ_ASSERT_IF(cursor->info.prev, cursor->info.prev->info.next == cursor);
    MOZ_ASSERT_IF(cursor->info.next, cursor->info.next->info.prev == cursor);
  }
  MOZ_ASSERT(count_ == count);
  return true;
}
#endif

void ChunkPool::Iter::next() {
  MOZ_ASSERT(!done());
  current_ = current_->info.next;
}

ChunkPool GCRuntime::expireEmptyChunkPool(const AutoLockGC& lock) {
  MOZ_ASSERT(emptyChunks(lock).verify());
  MOZ_ASSERT(tunables.minEmptyChunkCount(lock) <=
             tunables.maxEmptyChunkCount());

  ChunkPool expired;
  while (emptyChunks(lock).count() > tunables.minEmptyChunkCount(lock)) {
    Chunk* chunk = emptyChunks(lock).pop();
    prepareToFreeChunk(chunk->info);
    expired.push(chunk);
  }

  MOZ_ASSERT(expired.verify());
  MOZ_ASSERT(emptyChunks(lock).verify());
  MOZ_ASSERT(emptyChunks(lock).count() <= tunables.maxEmptyChunkCount());
  MOZ_ASSERT(emptyChunks(lock).count() <= tunables.minEmptyChunkCount(lock));
  return expired;
}

static void FreeChunkPool(ChunkPool& pool) {
  for (ChunkPool::Iter iter(pool); !iter.done();) {
    Chunk* chunk = iter.get();
    iter.next();
    pool.remove(chunk);
    MOZ_ASSERT(!chunk->info.numArenasFreeCommitted);
    UnmapPages(static_cast<void*>(chunk), ChunkSize);
  }
  MOZ_ASSERT(pool.count() == 0);
}

void GCRuntime::freeEmptyChunks(const AutoLockGC& lock) {
  FreeChunkPool(emptyChunks(lock));
}

inline void GCRuntime::prepareToFreeChunk(ChunkInfo& info) {
  MOZ_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted);
  numArenasFreeCommitted -= info.numArenasFreeCommitted;
  stats().count(gcstats::COUNT_DESTROY_CHUNK);
#ifdef DEBUG
  /*
   * Let FreeChunkPool detect a missing prepareToFreeChunk call before it
   * frees the chunk.
   */
  info.numArenasFreeCommitted = 0;
#endif
}

inline void GCRuntime::updateOnArenaFree() { ++numArenasFreeCommitted; }

void Chunk::addArenaToFreeList(GCRuntime* gc, Arena* arena) {
  MOZ_ASSERT(!arena->allocated());
  arena->next = info.freeArenasHead;
  info.freeArenasHead = arena;
  ++info.numArenasFreeCommitted;
  ++info.numArenasFree;
  gc->updateOnArenaFree();
}

void Chunk::addArenaToDecommittedList(const Arena* arena) {
  ++info.numArenasFree;
  decommittedArenas.set(Chunk::arenaIndex(arena->address()));
}

void Chunk::recycleArena(Arena* arena, SortedArenaList& dest,
                         size_t thingsPerArena) {
  arena->setAsFullyUnused();
  dest.insertAt(arena, thingsPerArena);
}

void Chunk::releaseArena(GCRuntime* gc, Arena* arena, const AutoLockGC& lock) {
  addArenaToFreeList(gc, arena);
  updateChunkListAfterFree(gc, lock);
}

bool Chunk::decommitOneFreeArena(GCRuntime* gc, AutoLockGC& lock) {
  MOZ_ASSERT(info.numArenasFreeCommitted > 0);
  Arena* arena = fetchNextFreeArena(gc);
  updateChunkListAfterAlloc(gc, lock);

  bool ok;
  {
    AutoUnlockGC unlock(lock);
    ok = MarkPagesUnusedSoft(arena, ArenaSize);
  }

  if (ok) {
    addArenaToDecommittedList(arena);
  } else {
    addArenaToFreeList(gc, arena);
  }
  updateChunkListAfterFree(gc, lock);

  return ok;
}

void Chunk::decommitFreeArenasWithoutUnlocking(const AutoLockGC& lock) {
  for (size_t i = 0; i < ArenasPerChunk; ++i) {
    if (decommittedArenas.get(i) || arenas[i].allocated()) {
      continue;
    }

    if (MarkPagesUnusedSoft(&arenas[i], ArenaSize)) {
      info.numArenasFreeCommitted--;
      decommittedArenas.set(i);
    }
  }
}

void Chunk::updateChunkListAfterAlloc(GCRuntime* gc, const AutoLockGC& lock) {
  if (MOZ_UNLIKELY(!hasAvailableArenas())) {
    gc->availableChunks(lock).remove(this);
    gc->fullChunks(lock).push(this);
  }
}

void Chunk::updateChunkListAfterFree(GCRuntime* gc, const AutoLockGC& lock) {
  if (info.numArenasFree == 1) {
    gc->fullChunks(lock).remove(this);
    gc->availableChunks(lock).push(this);
  } else if (!unused()) {
    MOZ_ASSERT(gc->availableChunks(lock).contains(this));
  } else {
    MOZ_ASSERT(unused());
    gc->availableChunks(lock).remove(this);
    decommitAllArenas();
    MOZ_ASSERT(info.numArenasFreeCommitted == 0);
    gc->recycleChunk(this, lock);
  }
}

void GCRuntime::releaseArena(Arena* arena, const AutoLockGC& lock) {
  MOZ_ASSERT(arena->allocated());
  MOZ_ASSERT(!arena->onDelayedMarkingList());

  arena->zone->gcHeapSize.removeGCArena();
  arena->release(lock);
  arena->chunk()->releaseArena(this, arena, lock);
}

GCRuntime::GCRuntime(JSRuntime* rt)
    : rt(rt),
      systemZone(nullptr),
      atomsZone(nullptr),
      heapState_(JS::HeapState::Idle),
      stats_(this),
      marker(rt),
      heapSize(nullptr),
      rootsHash(256),
      nextCellUniqueId_(LargestTaggedNullCellPointer +
                        1),  // Ensure disjoint from null tagged pointers.
      numArenasFreeCommitted(0),
      verifyPreData(nullptr),
      lastGCStartTime_(ReallyNow()),
      lastGCEndTime_(ReallyNow()),
      mode(TuningDefaults::Mode),
      numActiveZoneIters(0),
      cleanUpEverything(false),
      grayBufferState(GCRuntime::GrayBufferState::Unused),
      grayBitsValid(false),
      majorGCTriggerReason(JS::GCReason::NO_REASON),
      fullGCForAtomsRequested_(false),
      minorGCNumber(0),
      majorGCNumber(0),
      number(0),
      sliceNumber(0),
      isFull(false),
      incrementalState(gc::State::NotActive),
      initialState(gc::State::NotActive),
#ifdef JS_GC_ZEAL
      useZeal(false),
#endif
      lastMarkSlice(false),
      safeToYield(true),
      sweepOnBackgroundThread(false),
      lifoBlocksToFree((size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
      lifoBlocksToFreeAfterMinorGC(
          (size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
      sweepGroupIndex(0),
      sweepGroups(nullptr),
      currentSweepGroup(nullptr),
      sweepZone(nullptr),
      hasMarkedGrayRoots(false),
      abortSweepAfterCurrentGroup(false),
      sweepMarkTaskStarted(false),
      sweepMarkResult(IncrementalProgress::NotFinished),
      startedCompacting(false),
      relocatedArenasToRelease(nullptr),
#ifdef JS_GC_ZEAL
      markingValidator(nullptr),
#endif
      defaultTimeBudgetMS_(TuningDefaults::DefaultTimeBudgetMS),
      incrementalAllowed(true),
      compactingEnabled(TuningDefaults::CompactingEnabled),
      rootsRemoved(false),
#ifdef JS_GC_ZEAL
      zealModeBits(0),
      zealFrequency(0),
      nextScheduled(0),
      deterministicOnly(false),
      incrementalLimit(0),
      selectedForMarking(rt),
#endif
      fullCompartmentChecks(false),
      gcCallbackDepth(0),
      alwaysPreserveCode(false),
      lowMemoryState(false),
      lock(mutexid::GCLock),
      allocTask(this, emptyChunks_.ref()),
      sweepTask(this),
      freeTask(this),
      decommitTask(this),
      sweepMarkTask(this),
      nursery_(this),
      storeBuffer_(rt, nursery()) {
  setGCMode(JSGC_MODE_GLOBAL);
}
930
931
#ifdef JS_GC_ZEAL
932
933
void GCRuntime::getZealBits(uint32_t* zealBits, uint32_t* frequency,
934
uint32_t* scheduled) {
935
*zealBits = zealModeBits;
936
*frequency = zealFrequency;
937
*scheduled = nextScheduled;
938
}
939
940
const char gc::ZealModeHelpText[] =
    " Specifies how zealous the garbage collector should be. Some of these "
    "modes can\n"
    " be set simultaneously, by passing multiple level options, e.g. \"2;4\" "
    "will activate\n"
    " both modes 2 and 4. Modes can be specified by name or number.\n"
    " \n"
    " Values:\n"
    " 0: (None) Normal amount of collection (resets all modes)\n"
    " 1: (RootsChange) Collect when roots are added or removed\n"
    " 2: (Alloc) Collect every N allocations (default: 100)\n"
    " 4: (VerifierPre) Verify pre-write barriers between instructions\n"
    " 7: (GenerationalGC) Collect the nursery every N nursery allocations\n"
    " 8: (YieldBeforeMarking) Incremental GC in two slices that yields "
    "between\n"
    " the root marking and marking phases\n"
    " 9: (YieldBeforeSweeping) Incremental GC in two slices that yields "
    "between\n"
    " the marking and sweeping phases\n"
    " 10: (IncrementalMultipleSlices) Incremental GC in many slices\n"
    " 11: (IncrementalMarkingValidator) Verify incremental marking\n"
    " 12: (ElementsBarrier) Use the individual element post-write barrier\n"
    " regardless of elements size\n"
    " 13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"
    " 14: (Compact) Perform a shrinking collection every N allocations\n"
    " 15: (CheckHeapAfterGC) Walk the heap to check its integrity after "
    "every GC\n"
    " 16: (CheckNursery) Check nursery integrity on minor GC\n"
    " 17: (YieldBeforeSweepingAtoms) Incremental GC in two slices that "
    "yields\n"
    " before sweeping the atoms table\n"
    " 18: (CheckGrayMarking) Check gray marking invariants after every GC\n"
    " 19: (YieldBeforeSweepingCaches) Incremental GC in two slices that "
    "yields\n"
    " before sweeping weak caches\n"
    " 20: (YieldBeforeSweepingTypes) Incremental GC in two slices that "
    "yields\n"
    " before sweeping type information\n"
    " 21: (YieldBeforeSweepingObjects) Incremental GC in two slices that "
    "yields\n"
    " before sweeping foreground finalized objects\n"
    " 22: (YieldBeforeSweepingNonObjects) Incremental GC in two slices that "
    "yields\n"
    " before sweeping non-object GC things\n"
    " 23: (YieldBeforeSweepingShapeTrees) Incremental GC in two slices that "
    "yields\n"
    " before sweeping shape trees\n"
    " 24: (CheckWeakMapMarking) Check weak map marking invariants after "
    "every GC\n"
    " 25: (YieldWhileGrayMarking) Incremental GC in two slices that yields\n"
    " during gray marking\n";

// The set of zeal modes that control incremental slices. These modes are
// mutually exclusive.
static const mozilla::EnumSet<ZealMode> IncrementalSliceZealModes = {
    ZealMode::YieldBeforeMarking,
    ZealMode::YieldBeforeSweeping,
    ZealMode::IncrementalMultipleSlices,
    ZealMode::YieldBeforeSweepingAtoms,
    ZealMode::YieldBeforeSweepingCaches,
    ZealMode::YieldBeforeSweepingTypes,
    ZealMode::YieldBeforeSweepingObjects,
    ZealMode::YieldBeforeSweepingNonObjects,
    ZealMode::YieldBeforeSweepingShapeTrees};

void GCRuntime::setZeal(uint8_t zeal, uint32_t frequency) {
  MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));

  if (verifyPreData) {
    VerifyBarriers(rt, PreBarrierVerifier);
  }

  if (zeal == 0) {
    if (hasZealMode(ZealMode::GenerationalGC)) {
      evictNursery(JS::GCReason::DEBUG_GC);
      nursery().leaveZealMode();
    }

    if (isIncrementalGCInProgress()) {
      finishGC(JS::GCReason::DEBUG_GC);
    }
  }

  ZealMode zealMode = ZealMode(zeal);
  if (zealMode == ZealMode::GenerationalGC) {
    evictNursery(JS::GCReason::DEBUG_GC);
    nursery().enterZealMode();
  }

  // Some modes are mutually exclusive. If we're setting one of those, we
  // first reset all of them.
  if (IncrementalSliceZealModes.contains(zealMode)) {
    for (auto mode : IncrementalSliceZealModes) {
      clearZealMode(mode);
    }
  }

  bool schedule = zealMode >= ZealMode::Alloc;
  if (zeal != 0) {
    zealModeBits |= 1 << unsigned(zeal);
  } else {
    zealModeBits = 0;
  }
  zealFrequency = frequency;
  nextScheduled = schedule ? frequency : 0;
}

void GCRuntime::unsetZeal(uint8_t zeal) {
  MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
  ZealMode zealMode = ZealMode(zeal);

  if (!hasZealMode(zealMode)) {
    return;
  }

  if (verifyPreData) {
    VerifyBarriers(rt, PreBarrierVerifier);
  }

  if (zealMode == ZealMode::GenerationalGC) {
    evictNursery(JS::GCReason::DEBUG_GC);
    nursery().leaveZealMode();
  }

  clearZealMode(zealMode);

  if (zealModeBits == 0) {
    if (isIncrementalGCInProgress()) {
      finishGC(JS::GCReason::DEBUG_GC);
    }

    zealFrequency = 0;
    nextScheduled = 0;
  }
}

void GCRuntime::setNextScheduled(uint32_t count) { nextScheduled = count; }

using CharRange = mozilla::Range<const char>;
using CharRangeVector = Vector<CharRange, 0, SystemAllocPolicy>;

static bool ParseZealModeName(CharRange text, uint32_t* modeOut) {
  struct ModeInfo {
    const char* name;
    size_t length;
    uint32_t value;
  };

  static const ModeInfo zealModes[] = {{"None", 0},
#  define ZEAL_MODE(name, value) {#name, strlen(#name), value},
                                       JS_FOR_EACH_ZEAL_MODE(ZEAL_MODE)
#  undef ZEAL_MODE
  };

  for (auto mode : zealModes) {
    if (text.length() == mode.length &&
        memcmp(text.begin().get(), mode.name, mode.length) == 0) {
      *modeOut = mode.value;
      return true;
    }
  }

  return false;
}

static bool ParseZealModeNumericParam(CharRange text, uint32_t* paramOut) {
  if (text.length() == 0) {
    return false;
  }

  for (auto c : text) {
    if (!mozilla::IsAsciiDigit(c)) {
      return false;
    }
  }

  *paramOut = atoi(text.begin().get());
  return true;
}

static bool SplitStringBy(CharRange text, char delimiter,
                          CharRangeVector* result) {
  auto start = text.begin();
  for (auto ptr = start; ptr != text.end(); ptr++) {
    if (*ptr == delimiter) {
      if (!result->emplaceBack(start, ptr)) {
        return false;
      }
      start = ptr + 1;
    }
  }

  return result->emplaceBack(start, text.end());
}

static bool PrintZealHelpAndFail() {
  fprintf(stderr, "Format: JS_GC_ZEAL=level(;level)*[,N]\n");
  fputs(ZealModeHelpText, stderr);
  return false;
}

bool GCRuntime::parseAndSetZeal(const char* str) {
  // Set the zeal mode from a string consisting of one or more mode specifiers
  // separated by ';', optionally followed by a ',' and the trigger frequency.
  // The mode specifiers can be a mode name or its number.

  auto text = CharRange(str, strlen(str));

  CharRangeVector parts;
  if (!SplitStringBy(text, ',', &parts)) {
    return false;
  }

  if (parts.length() == 0 || parts.length() > 2) {
    return PrintZealHelpAndFail();
  }

  uint32_t frequency = JS_DEFAULT_ZEAL_FREQ;
  if (parts.length() == 2 && !ParseZealModeNumericParam(parts[1], &frequency)) {
    return PrintZealHelpAndFail();
  }

  CharRangeVector modes;
  if (!SplitStringBy(parts[0], ';', &modes)) {
    return false;
  }

  for (const auto& descr : modes) {
    uint32_t mode;
    if (!ParseZealModeName(descr, &mode) &&
        !(ParseZealModeNumericParam(descr, &mode) &&
          mode <= unsigned(ZealMode::Limit))) {
      return PrintZealHelpAndFail();
    }

    setZeal(mode, frequency);
  }

  return true;
}

const char* js::gc::AllocKindName(AllocKind kind) {
  static const char* const names[] = {
#  define EXPAND_THING_NAME(allocKind, _1, _2, _3, _4, _5, _6) #allocKind,
      FOR_EACH_ALLOCKIND(EXPAND_THING_NAME)
#  undef EXPAND_THING_NAME
  };
  static_assert(ArrayLength(names) == size_t(AllocKind::LIMIT),
                "names array should have an entry for every AllocKind");

  size_t i = size_t(kind);
  MOZ_ASSERT(i < ArrayLength(names));
  return names[i];
}

void js::gc::DumpArenaInfo() {
  fprintf(stderr, "Arena header size: %zu\n\n", ArenaHeaderSize);

  fprintf(stderr, "GC thing kinds:\n");
  fprintf(stderr, "%25s %8s %8s %8s\n",
          "AllocKind:", "Size:", "Count:", "Padding:");
  for (auto kind : AllAllocKinds()) {
    fprintf(stderr, "%25s %8zu %8zu %8zu\n", AllocKindName(kind),
            Arena::thingSize(kind), Arena::thingsPerArena(kind),
            Arena::firstThingOffset(kind) - ArenaHeaderSize);
  }
}

#endif  // JS_GC_ZEAL

bool GCRuntime::init(uint32_t maxbytes) {
  MOZ_ASSERT(SystemPageSize());
  Arena::checkLookupTables();

  {
    AutoLockGCBgAlloc lock(this);

    MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_BYTES, maxbytes, lock));

    const char* size = getenv("JSGC_MARK_STACK_LIMIT");
    if (size) {
      setMarkStackLimit(atoi(size), lock);
    }

    if (!nursery().init(lock)) {
      return false;
    }

    const char* pretenureThresholdStr = getenv("JSGC_PRETENURE_THRESHOLD");
    if (pretenureThresholdStr && pretenureThresholdStr[0]) {
      char* last;
      long pretenureThreshold = strtol(pretenureThresholdStr, &last, 10);
      if (last[0] || !tunables.setParameter(JSGC_PRETENURE_THRESHOLD,
                                            pretenureThreshold, lock)) {
        fprintf(stderr, "Invalid value for JSGC_PRETENURE_THRESHOLD: %s\n",
                pretenureThresholdStr);
      }
    }
  }

#ifdef JS_GC_ZEAL
  const char* zealSpec = getenv("JS_GC_ZEAL");
  if (zealSpec && zealSpec[0] && !parseAndSetZeal(zealSpec)) {
    return false;
  }
#endif

  if (!gcTracer.initTrace(*this)) {
    return false;
  }

  if (!marker.init(mode)) {
    return false;
  }

  if (!initSweepActions()) {
    return false;
  }

  return true;
}

void GCRuntime::freezeSelfHostingZone() {
  MOZ_ASSERT(!selfHostingZoneFrozen);
  MOZ_ASSERT(!isIncrementalGCInProgress());

  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    MOZ_ASSERT(!zone->isGCScheduled());
    if (zone->isSelfHostingZone()) {
      zone->scheduleGC();
    }
  }

  gc(GC_SHRINK, JS::GCReason::INIT_SELF_HOSTING);
  selfHostingZoneFrozen = true;
}

void GCRuntime::finish() {
  MOZ_ASSERT(inPageLoadCount == 0);

  // Wait for nursery background free to end and disable it to release memory.
  if (nursery().isEnabled()) {
    nursery().disable();
  }

  // Wait until background finalization and allocation stop and the helper
  // thread shuts down before we forcefully release any remaining GC memory.
  sweepTask.join();
  freeTask.join();
  allocTask.cancelAndWait();
  decommitTask.cancelAndWait();

#ifdef JS_GC_ZEAL
  // Free memory associated with GC verification.
  finishVerifier();
#endif

  // Delete all remaining zones.
  if (rt->gcInitialized) {
    for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
      AutoSetThreadIsSweeping threadIsSweeping(zone);
      for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
        for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
          js_delete(realm.get());
        }
        comp->realms().clear();
        js_delete(comp.get());
      }
      zone->compartments().clear();
      js_delete(zone.get());
    }
  }

  zones().clear();

  FreeChunkPool(fullChunks_.ref());
  FreeChunkPool(availableChunks_.ref());
  FreeChunkPool(emptyChunks_.ref());

  gcTracer.finishTrace();

  nursery().printTotalProfileTimes();
  stats().printTotalProfileTimes();
}

bool GCRuntime::setParameter(JSGCParamKey key, uint32_t value) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  waitBackgroundSweepEnd();
  AutoLockGC lock(this);
  return setParameter(key, value, lock);
}

bool GCRuntime::setParameter(JSGCParamKey key, uint32_t value,
                             AutoLockGC& lock) {
  switch (key) {
    case JSGC_SLICE_TIME_BUDGET_MS:
      defaultTimeBudgetMS_ = value ? value : SliceBudget::UnlimitedTimeBudget;
      break;
    case JSGC_MARK_STACK_LIMIT:
      if (value == 0) {
        return false;
      }
      setMarkStackLimit(value, lock);
      break;
    case JSGC_MODE:
      // Validate the requested mode before applying it.
      if (value != JSGC_MODE_GLOBAL && value != JSGC_MODE_ZONE &&
          value != JSGC_MODE_INCREMENTAL &&
          value != JSGC_MODE_ZONE_INCREMENTAL) {
        return false;
      }
      mode = JSGCMode(value);
      break;
    case JSGC_COMPACTING_ENABLED:
      compactingEnabled = value != 0;
      break;
    default:
      if (!tunables.setParameter(key, value, lock)) {
        return false;
      }
      for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
        zone->updateGCThresholds(*this, GC_NORMAL, lock);
      }
  }

  return true;
}

void GCRuntime::resetParameter(JSGCParamKey key) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  waitBackgroundSweepEnd();
  AutoLockGC lock(this);
  resetParameter(key, lock);
}

void GCRuntime::resetParameter(JSGCParamKey key, AutoLockGC& lock) {
  switch (key) {
    case JSGC_SLICE_TIME_BUDGET_MS:
      defaultTimeBudgetMS_ = TuningDefaults::DefaultTimeBudgetMS;
      break;
    case JSGC_MARK_STACK_LIMIT:
      setMarkStackLimit(MarkStack::DefaultCapacity, lock);
      break;
    case JSGC_MODE:
      mode = TuningDefaults::Mode;
      break;
    case JSGC_COMPACTING_ENABLED:
      compactingEnabled = TuningDefaults::CompactingEnabled;
      break;
    default:
      tunables.resetParameter(key, lock);
      for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
        zone->updateGCThresholds(*this, GC_NORMAL, lock);
      }
  }
}

uint32_t GCRuntime::getParameter(JSGCParamKey key) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  AutoLockGC lock(this);
  return getParameter(key, lock);
}

uint32_t GCRuntime::getParameter(JSGCParamKey key, const AutoLockGC& lock) {
  switch (key) {
    case JSGC_MAX_BYTES:
      return uint32_t(tunables.gcMaxBytes());
    case JSGC_MIN_NURSERY_BYTES:
      MOZ_ASSERT(tunables.gcMinNurseryBytes() < UINT32_MAX);
      return uint32_t(tunables.gcMinNurseryBytes());
    case JSGC_MAX_NURSERY_BYTES:
      MOZ_ASSERT(tunables.gcMaxNurseryBytes() < UINT32_MAX);
      return uint32_t(tunables.gcMaxNurseryBytes());
    case JSGC_BYTES:
      return uint32_t(heapSize.bytes());
    case JSGC_NURSERY_BYTES:
      return nursery().capacity();
    case JSGC_NUMBER:
      return uint32_t(number);
    case JSGC_MODE:
      return uint32_t(mode);
    case JSGC_UNUSED_CHUNKS:
      return uint32_t(emptyChunks(lock).count());
    case JSGC_TOTAL_CHUNKS:
      return uint32_t(fullChunks(lock).count() + availableChunks(lock).count() +
                      emptyChunks(lock).count());
    case JSGC_SLICE_TIME_BUDGET_MS:
      if (defaultTimeBudgetMS_.ref() == SliceBudget::UnlimitedTimeBudget) {
        return 0;
      } else {
        MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ >= 0);
        MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ <= UINT32_MAX);
        return uint32_t(defaultTimeBudgetMS_);
      }
    case JSGC_MARK_STACK_LIMIT:
      return marker.maxCapacity();
    case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
      return tunables.highFrequencyThreshold().ToMilliseconds();
    case JSGC_HIGH_FREQUENCY_LOW_LIMIT:
      return tunables.highFrequencyLowLimitBytes() / 1024 / 1024;
    case JSGC_HIGH_FREQUENCY_HIGH_LIMIT:
      return tunables.highFrequencyHighLimitBytes() / 1024 / 1024;
    case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX:
      return uint32_t(tunables.highFrequencyHeapGrowthMax() * 100);
    case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN:
      return uint32_t(tunables.highFrequencyHeapGrowthMin() * 100);
    case JSGC_LOW_FREQUENCY_HEAP_GROWTH:
      return uint32_t(tunables.lowFrequencyHeapGrowth() * 100);
    case JSGC_DYNAMIC_HEAP_GROWTH:
      return tunables.isDynamicHeapGrowthEnabled();
    case JSGC_DYNAMIC_MARK_SLICE:
      return tunables.isDynamicMarkSliceEnabled();
    case JSGC_ALLOCATION_THRESHOLD:
      return tunables.gcZoneAllocThresholdBase() / 1024 / 1024;
    case JSGC_NON_INCREMENTAL_FACTOR:
      return uint32_t(tunables.nonIncrementalFactor() * 100);
    case JSGC_AVOID_INTERRUPT_FACTOR:
      return uint32_t(tunables.avoidInterruptFactor() * 100);
    case JSGC_MIN_EMPTY_CHUNK_COUNT:
      return tunables.minEmptyChunkCount(lock);
    case JSGC_MAX_EMPTY_CHUNK_COUNT:
      return tunables.maxEmptyChunkCount();
    case JSGC_COMPACTING_ENABLED:
      return compactingEnabled;
    case JSGC_NURSERY_FREE_THRESHOLD_FOR_IDLE_COLLECTION:
      return tunables.nurseryFreeThresholdForIdleCollection();
    case JSGC_NURSERY_FREE_THRESHOLD_FOR_IDLE_COLLECTION_PERCENT:
      return uint32_t(tunables.nurseryFreeThresholdForIdleCollectionFraction() *
                      100.0f);
    case JSGC_PRETENURE_THRESHOLD:
      return uint32_t(tunables.pretenureThreshold() * 100);
    case JSGC_PRETENURE_GROUP_THRESHOLD:
      return tunables.pretenureGroupThreshold();
    case JSGC_MIN_LAST_DITCH_GC_PERIOD:
      return tunables.minLastDitchGCPeriod().ToSeconds();
    case JSGC_ZONE_ALLOC_DELAY_KB:
      return tunables.zoneAllocDelayBytes() / 1024;
    case JSGC_MALLOC_THRESHOLD_BASE:
      return tunables.mallocThresholdBase() / 1024 / 1024;
    case JSGC_MALLOC_GROWTH_FACTOR:
      return uint32_t(tunables.mallocGrowthFactor() * 100);
    default:
      MOZ_CRASH("Unknown parameter key");
  }
}

void GCRuntime::setMarkStackLimit(size_t limit, AutoLockGC& lock) {
  MOZ_ASSERT(!JS::RuntimeHeapIsBusy());
  AutoUnlockGC unlock(lock);
  AutoStopVerifyingBarriers pauseVerification(rt, false);
  marker.setMaxCapacity(limit);
}

bool GCRuntime::addBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
  AssertHeapIsIdle();
  return !!blackRootTracers.ref().append(
      Callback<JSTraceDataOp>(traceOp, data));
}

void GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
  // Can be called from finalizers.
  for (size_t i = 0; i < blackRootTracers.ref().length(); i++) {
    Callback<JSTraceDataOp>* e = &blackRootTracers.ref()[i];
    if (e->op == traceOp && e->data == data) {
      blackRootTracers.ref().erase(e);
      break;
    }
  }
}

void GCRuntime::setGrayRootsTracer(JSTraceDataOp traceOp, void* data) {
  AssertHeapIsIdle();
  grayRootTracer.ref() = {traceOp, data};
}

void GCRuntime::clearBlackAndGrayRootTracers() {
  MOZ_ASSERT(rt->isBeingDestroyed());
  blackRootTracers.ref().clear();
  setGrayRootsTracer(nullptr, nullptr);
}

void GCRuntime::setGCCallback(JSGCCallback callback, void* data) {
  gcCallback.ref() = {callback, data};
}

void GCRuntime::callGCCallback(JSGCStatus status) const {
  const auto& callback = gcCallback.ref();
  MOZ_ASSERT(callback.op);
  callback.op(rt->mainContextFromOwnThread(), status, callback.data);
}

void GCRuntime::setObjectsTenuredCallback(JSObjectsTenuredCallback callback,
                                          void* data) {
  tenuredCallback.ref() = {callback, data};
}

void GCRuntime::callObjectsTenuredCallback() {
  JS::AutoSuppressGCAnalysis nogc;
  const auto& callback = tenuredCallback.ref();
  if (callback.op) {
    callback.op(rt->mainContextFromOwnThread(), callback.data);
  }
}

bool GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data) {
  return finalizeCallbacks.ref().append(
      Callback<JSFinalizeCallback>(callback, data));
}

template <typename F>
static void EraseCallback(CallbackVector<F>& vector, F callback) {
  for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
    if (p->op == callback) {
      vector.erase(p);
      return;
    }
  }
}

void GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback) {
  EraseCallback(finalizeCallbacks.ref(), callback);
}

void GCRuntime::callFinalizeCallbacks(JSFreeOp* fop,
                                      JSFinalizeStatus status) const {
  for (auto& p : finalizeCallbacks.ref()) {
    p.op(fop, status, p.data);
  }
}

void GCRuntime::setHostCleanupFinalizationGroupCallback(
    JSHostCleanupFinalizationGroupCallback callback, void* data) {
  hostCleanupFinalizationGroupCallback.ref() = {callback, data};
}

void GCRuntime::callHostCleanupFinalizationGroupCallback(
    FinalizationGroupObject* group) {
  JS::AutoSuppressGCAnalysis nogc;
  const auto& callback = hostCleanupFinalizationGroupCallback.ref();
  if (callback.op) {
    callback.op(group, callback.data);
  }
}

bool GCRuntime::addWeakPointerZonesCallback(JSWeakPointerZonesCallback callback,
                                            void* data) {
  return updateWeakPointerZonesCallbacks.ref().append(
      Callback<JSWeakPointerZonesCallback>(callback, data));
}

void GCRuntime::removeWeakPointerZonesCallback(
    JSWeakPointerZonesCallback callback) {
  EraseCallback(updateWeakPointerZonesCallbacks.ref(), callback);
}

void GCRuntime::callWeakPointerZonesCallbacks() const {
  JSContext* cx = rt->mainContextFromOwnThread();
  for (auto const& p : updateWeakPointerZonesCallbacks.ref()) {
    p.op(cx, p.data);
  }
}

bool GCRuntime::addWeakPointerCompartmentCallback(
    JSWeakPointerCompartmentCallback callback, void* data) {
  return updateWeakPointerCompartmentCallbacks.ref().append(
      Callback<JSWeakPointerCompartmentCallback>(callback, data));
}

void GCRuntime::removeWeakPointerCompartmentCallback(
    JSWeakPointerCompartmentCallback callback) {
  EraseCallback(updateWeakPointerCompartmentCallbacks.ref(), callback);
}

void GCRuntime::callWeakPointerCompartmentCallbacks(
    JS::Compartment* comp) const {
  JSContext* cx = rt->mainContextFromOwnThread();
  for (auto const& p : updateWeakPointerCompartmentCallbacks.ref()) {
    p.op(cx, comp, p.data);
  }
}

JS::GCSliceCallback GCRuntime::setSliceCallback(JS::GCSliceCallback callback) {
  return stats().setSliceCallback(callback);
}

JS::GCNurseryCollectionCallback GCRuntime::setNurseryCollectionCallback(
    JS::GCNurseryCollectionCallback callback) {
  return stats().setNurseryCollectionCallback(callback);
}

JS::DoCycleCollectionCallback GCRuntime::setDoCycleCollectionCallback(
    JS::DoCycleCollectionCallback callback) {
  const auto prior = gcDoCycleCollectionCallback.ref();
  gcDoCycleCollectionCallback.ref() = {callback, nullptr};
  return prior.op;
}

void GCRuntime::callDoCycleCollectionCallback(JSContext* cx) {
  const auto& callback = gcDoCycleCollectionCallback.ref();
  if (callback.op) {
    callback.op(cx);
  }
}

bool GCRuntime::addRoot(Value* vp, const char* name) {
  /*
   * Sometimes Firefox will hold weak references to objects and then convert
   * them to strong references by calling AddRoot (e.g., via PreserveWrapper,
   * or ModifyBusyCount in workers). We need a read barrier to cover these
   * cases.
   */
  if (isIncrementalGCInProgress()) {
    GCPtrValue::writeBarrierPre(*vp);
  }

  return rootsHash.ref().put(vp, name);
}

void GCRuntime::removeRoot(Value* vp) {
  rootsHash.ref().remove(vp);
  notifyRootsRemoved();
}

extern JS_FRIEND_API bool js::AddRawValueRoot(JSContext* cx, Value* vp,
                                              const char* name) {
  MOZ_ASSERT(vp);
  MOZ_ASSERT(name);
  bool ok = cx->runtime()->gc.addRoot(vp, name);
  if (!ok) {
    JS_ReportOutOfMemory(cx);
  }
  return ok;
}

extern JS_FRIEND_API void js::RemoveRawValueRoot(JSContext* cx, Value* vp) {
  cx->runtime()->gc.removeRoot(vp);
}

/* Compacting GC */

bool js::gc::IsCurrentlyAnimating(const TimeStamp& lastAnimationTime,
                                  const TimeStamp& currentTime) {
  // Assume that we're currently animating if js::NotifyAnimationActivity has
  // been called in the last second.
  static const auto oneSecond = TimeDuration::FromSeconds(1);
  return !lastAnimationTime.IsNull() &&
         currentTime < (lastAnimationTime + oneSecond);
}

bool GCRuntime::shouldCompact() {
  // Compact on shrinking GC if enabled. Skip compacting in incremental GCs
  // if we are currently animating, unless the user is inactive or we're
  // responding to memory pressure.

  if (invocationKind != GC_SHRINK || !isCompactingGCEnabled()) {
    return false;
  }

  if (initialReason == JS::GCReason::USER_INACTIVE ||
      initialReason == JS::GCReason::MEM_PRESSURE) {
    return true;
  }

  return !isIncremental ||
         !IsCurrentlyAnimating(rt->lastAnimationTime, TimeStamp::Now());
}

bool GCRuntime::isCompactingGCEnabled() const {
  return compactingEnabled &&
         rt->mainContextFromOwnThread()->compactingDisabledCount == 0 &&
         !mozilla::recordreplay::IsRecordingOrReplaying();
}

AutoDisableCompactingGC::AutoDisableCompactingGC(JSContext* cx) : cx(cx) {
  ++cx->compactingDisabledCount;
  if (cx->runtime()->gc.isIncrementalGCInProgress() &&
      cx->runtime()->gc.isCompactingGc()) {
    FinishGC(cx);
  }
}

AutoDisableCompactingGC::~AutoDisableCompactingGC() {
  MOZ_ASSERT(cx->compactingDisabledCount > 0);
  --cx->compactingDisabledCount;
}

bool GCRuntime::canRelocateZone(Zone* zone) const {
  if (zone->isAtomsZone()) {
    return false;
  }

  if (zone->isSelfHostingZone() && selfHostingZoneFrozen) {
    return false;
  }

  return true;
}

Arena* ArenaList::removeRemainingArenas(Arena** arenap) {
  // This is only ever called to remove arenas that are after the cursor, so
  // we don't need to update it.
#ifdef DEBUG
  for (Arena* arena = *arenap; arena; arena = arena->next) {
    MOZ_ASSERT(cursorp_ != &arena->next);
  }
#endif
  Arena* remainingArenas = *arenap;
  *arenap = nullptr;
  check();
  return remainingArenas;
}

static bool ShouldRelocateAllArenas(JS::GCReason reason) {
  return reason == JS::GCReason::DEBUG_GC;
}

/*
 * Choose which arenas to relocate all cells from. Return an arena cursor that
 * can be passed to removeRemainingArenas().
 */
Arena** ArenaList::pickArenasToRelocate(size_t& arenaTotalOut,
                                        size_t& relocTotalOut) {
  // Relocate the greatest number of arenas such that the number of used cells
  // in relocated arenas is less than or equal to the number of free cells in
  // unrelocated arenas. In other words we only relocate cells we can move
  // into existing arenas, and we choose the least full arenas to relocate.
  //
  // This is made easier by the fact that the arena list has been sorted in
  // descending order of number of used cells, so we will always relocate a
  // tail of the arena list. All we need to do is find the point at which to
  // start relocating.

  check();

  if (isCursorAtEnd()) {
    return nullptr;
  }

  Arena** arenap = cursorp_;      // Next arena to consider for relocation.
  size_t previousFreeCells = 0;   // Count of free cells before arenap.
  size_t followingUsedCells = 0;  // Count of used cells after arenap.
  size_t fullArenaCount = 0;      // Number of full arenas (not relocated).
  size_t nonFullArenaCount =
      0;                    // Number of non-full arenas (considered for relocation).
  size_t arenaIndex = 0;    // Index of the next arena to consider.

  for (Arena* arena = head_; arena != *cursorp_; arena = arena->next) {
    fullArenaCount++;
  }

  for (Arena* arena = *cursorp_; arena; arena = arena->next) {
    followingUsedCells += arena->countUsedCells();
    nonFullArenaCount++;
  }

  mozilla::DebugOnly<size_t> lastFreeCells(0);
  size_t cellsPerArena = Arena::thingsPerArena((*arenap)->getAllocKind());

  while (*arenap) {
    Arena* arena = *arenap;
    if (followingUsedCells <= previousFreeCells) {