/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*-
 * vim: set ts=8 sts=2 et sw=2 tw=80:
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

/*
 * [SMDOC] Garbage Collector
 *
 * This code implements an incremental mark-and-sweep garbage collector, with
 * most sweeping carried out in the background on a parallel thread.
 *
 * Full vs. zone GC
 * ----------------
 *
 * The collector can collect all zones at once, or a subset. These types of
 * collection are referred to as a full GC and a zone GC respectively.
 *
 * It is possible for an incremental collection that started out as a full GC
 * to become a zone GC if new zones are created during the course of the
 * collection.
 *
 * Incremental collection
 * ----------------------
 *
 * For a collection to be carried out incrementally the following conditions
 * must be met:
 *  - the collection must be run by calling js::GCSlice() rather than js::GC()
 *  - the GC mode must have been set to JSGC_MODE_INCREMENTAL or
 *    JSGC_MODE_ZONE_INCREMENTAL with JS_SetGCParameter()
 *  - no thread may have an AutoKeepAtoms instance on the stack
 *
 * The last condition is an engine-internal mechanism to ensure that
 * incremental collection is not carried out without the correct barriers
 * being implemented. For more information see 'Incremental marking' below.
 *
 * If the collection is not incremental, all foreground activity happens
 * inside a single call to GC() or GCSlice(). However the collection is not
 * complete until the background sweeping activity has finished.
 *
 * An incremental collection proceeds as a series of slices, interleaved with
 * mutator activity, i.e. running JavaScript code. Slices are limited by a
 * time budget. The slice finishes as soon as possible after the requested
 * time has passed.
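 *
 * As a rough illustration (not the engine's actual driver code), an embedder
 * that wants incremental collection drives it roughly like the sketch below.
 * JS_SetGCParameter, JSGC_MODE and JSGC_MODE_INCREMENTAL are real API; the
 * slice-driving helpers are hypothetical stand-ins, since the exact entry
 * points vary between releases:
 *
 *   // Assumption: cx is a JSContext* and the embedder owns the event loop.
 *   JS_SetGCParameter(cx, JSGC_MODE, JSGC_MODE_INCREMENTAL);
 *   while (collectionRequested) {
 *     // Run one slice with a ~10ms budget, then return to the event loop so
 *     // the mutator (JS code) can run between slices.
 *     RunOneGCSliceWithBudgetMS(cx, 10);  // hypothetical helper
 *     ProcessPendingEvents();             // hypothetical helper
 *   }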
 *
 * Collector states
 * ----------------
 *
 * The collector proceeds through the following states, the current state
 * being held in JSRuntime::gcIncrementalState:
 *
 *  - MarkRoots - marks the stack and other roots
 *  - Mark - incrementally marks reachable things
 *  - Sweep - sweeps zones in groups and continues marking unswept zones
 *  - Finalize - performs background finalization, concurrent with mutator
 *  - Compact - incrementally compacts by zone
 *  - Decommit - performs background decommit and chunk removal
 *
 * The MarkRoots activity always takes place in the first slice. The next two
 * states can take place over one or more slices.
 *
 * In other words an incremental collection proceeds like this:
 *
 * Slice 1:   MarkRoots:  Roots pushed onto the mark stack.
 *            Mark:       The mark stack is processed by popping an element,
 *                        marking it, and pushing its children.
 *
 *          ... JS code runs ...
 *
 * Slice 2:   Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n-1: Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n:   Mark:       Mark stack is completely drained.
 *            Sweep:      Select first group of zones to sweep and sweep them.
 *
 *          ... JS code runs ...
 *
 * Slice n+1: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive (see below). Then sweep more
 *                        zone sweep groups.
 *
 *          ... JS code runs ...
 *
 * Slice n+2: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive. Then sweep more zones.
 *
 *          ... JS code runs ...
 *
 * Slice m:   Sweep:      Sweeping is finished, and background sweeping
 *                        started on the helper thread.
 *
 *          ... JS code runs, remaining sweeping done on background thread ...
 *
 * When background sweeping finishes the GC is complete.
 *
 * Incremental marking
 * -------------------
 *
 * Incremental collection requires close collaboration with the mutator (i.e.,
 * JS code) to guarantee correctness.
 *
 *  - During an incremental GC, if a memory location (except a root) is
 *    written to, then the value it previously held must be marked. Write
 *    barriers ensure this.
 *
 *  - Any object that is allocated during incremental GC must start out
 *    marked.
 *
 *  - Roots are marked in the first slice and hence don't need write barriers.
 *    Roots are things like the C stack and the VM stack.
 *
 * The problem that write barriers solve is that between slices the mutator
 * can change the object graph. We must ensure that it cannot do this in such
 * a way that makes us fail to mark a reachable object (marking an unreachable
 * object is tolerable).
 *
 * We use a snapshot-at-the-beginning algorithm to do this. This means that we
 * promise to mark at least everything that is reachable at the beginning of
 * collection. To implement it we mark the old contents of every non-root
 * memory location written to by the mutator while the collection is in
 * progress, using write barriers. This is described in gc/Barrier.h.
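 *
 * As a minimal sketch of what such a pre-write barrier looks like (the real
 * implementation lives in gc/Barrier.h and is more involved; the class and
 * function names below are illustrative only):
 *
 *   // Assumption: MarkIfIncrementalGCInProgress stands in for the real
 *   // pre-barrier entry point; Cell is any GC-managed thing.
 *   void WriteBarrieredSlot::set(Cell* newValue) {
 *     // Snapshot-at-the-beginning: before overwriting, mark the value the
 *     // slot currently holds, so anything reachable at the start of the
 *     // collection still gets marked.
 *     if (Cell* oldValue = value_) {
 *       MarkIfIncrementalGCInProgress(oldValue);
 *     }
 *     value_ = newValue;
 *   }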
 *
 * Incremental sweeping
 * --------------------
 *
 * Sweeping is difficult to do incrementally because object finalizers must be
 * run at the start of sweeping, before any mutator code runs. The reason is
 * that some objects use their finalizers to remove themselves from caches. If
 * mutator code was allowed to run after the start of sweeping, it could
 * observe the state of the cache and create a new reference to an object that
 * was just about to be destroyed.
 *
 * Sweeping all finalizable objects in one go would introduce long pauses, so
 * instead sweeping is broken up into groups of zones. Zones which are not yet
 * being swept are still marked, so the issue above does not apply.
 *
 * The order of sweeping is restricted by cross compartment pointers - for
 * example say that object |a| from zone A points to object |b| in zone B and
 * neither object was marked when we transitioned to the Sweep phase. Imagine
 * we sweep B first and then return to the mutator. It's possible that the
 * mutator could cause |a| to become alive through a read barrier (perhaps it
 * was a shape that was accessed via a shape table). Then we would need to
 * mark |b|, which |a| points to, but |b| has already been swept.
 *
 * So if there is such a pointer then marking of zone B must not finish before
 * marking of zone A. Pointers which form a cycle between zones therefore
 * restrict those zones to being swept at the same time, and these are found
 * using Tarjan's algorithm for finding the strongly connected components of
 * a graph.
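 *
 * Concretely (a hedged sketch, not the real grouping code, which uses the
 * helpers in gc/FindSCCs.h): build a graph whose nodes are zones and whose
 * edges mean "this zone has an unmarked pointer into that zone". Every
 * strongly connected component of that graph must be swept as a single sweep
 * group, and groups are processed in an order compatible with the remaining
 * edges, so a zone is never swept while another zone can still revive
 * pointers into it:
 *
 *   // Assumption: FindSCCs, SweepGroup and the helpers here are illustrative
 *   // stand-ins for the engine's actual sweep-group machinery.
 *   for (SweepGroup& group : FindSCCs(zonesToSweep)) {
 *     markWeakReferencesInGroup(group);  // finish marking this group
 *     sweepGroup(group);                 // then sweep it as a unit
 *   }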
 *
 * GC things without finalizers, and things with finalizers that are able to
 * run in the background, are swept on the background thread. This accounts
 * for most of the sweeping work.
 *
 * Reset
 * -----
 *
 * During incremental collection it is possible, although unlikely, for
 * conditions to change such that incremental collection is no longer safe. In
 * this case, the collection is 'reset' by resetIncrementalGC(). If we are in
 * the mark state, this just stops marking, but if we have started sweeping
 * already, we continue non-incrementally until we have swept the current
 * sweep group. Following a reset, a new collection is started.
 *
 * Compacting GC
 * -------------
 *
 * Compacting GC happens at the end of a major GC as part of the last slice.
 * There are three parts:
 *
 *  - Arenas are selected for compaction.
 *  - The contents of those arenas are moved to new arenas.
 *  - All references to moved things are updated.
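 *
 * The key trick in the last two steps is forwarding: when a cell is moved,
 * its old location is overwritten with a small header holding the new
 * address, so later pointer updates can follow it. A hedged sketch (the
 * engine's real machinery is js::gc::RelocationOverlay together with the
 * IsForwarded()/Forwarded() helpers; the names below are illustrative):
 *
 *   // Assumption: ForwardingHeader stands in for RelocationOverlay.
 *   struct ForwardingHeader {
 *     uintptr_t magic;    // marks the old cell as "moved"
 *     Cell* newLocation;  // where the contents now live
 *   };
 *
 *   // Updating a reference during the final pass:
 *   if (IsForwarded(cell)) {
 *     cell = Forwarded(cell);  // follow the forwarding pointer
 *   }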
 *
 * Collecting Atoms
 * ----------------
 *
 * Atoms are collected differently from other GC things. They are contained in
 * a special zone and things in other zones may have pointers to them that are
 * not recorded in the cross compartment pointer map. Each zone holds a bitmap
 * with the atoms it might be keeping alive, and atoms are only collected if
 * they are not included in any zone's atom bitmap. See AtomMarking.cpp for
 * how this bitmap is managed.
 */

#include "gc/GC-inl.h"

#include "mozilla/ArrayUtils.h"
#include "mozilla/DebugOnly.h"
#include "mozilla/MacroForEach.h"
#include "mozilla/MemoryReporting.h"
#include "mozilla/Move.h"
#include "mozilla/Range.h"
#include "mozilla/ScopeExit.h"
#include "mozilla/TextUtils.h"
#include "mozilla/TimeStamp.h"
#include "mozilla/TypeTraits.h"
#include "mozilla/Unused.h"

#include <initializer_list>
#include <string.h>
#ifndef XP_WIN
#  include <sys/mman.h>
#  include <unistd.h>
#endif

#include "jsapi.h"
#include "jsfriendapi.h"
#include "jstypes.h"
#include "jsutil.h"

#include "debugger/DebugAPI.h"
#include "gc/FindSCCs.h"
#include "gc/FreeOp.h"
#include "gc/GCInternals.h"
#include "gc/GCLock.h"
#include "gc/GCTrace.h"
#include "gc/Memory.h"
#include "gc/Policy.h"
#include "gc/WeakMap.h"
#include "jit/BaselineJIT.h"
#include "jit/IonCode.h"
#include "jit/JitcodeMap.h"
#include "jit/JitRealm.h"
#include "jit/MacroAssembler.h"
#include "js/SliceBudget.h"
#include "proxy/DeadObjectProxy.h"
#include "util/Windows.h"
#include "vm/BigIntType.h"
#include "vm/GeckoProfiler.h"
#include "vm/JSAtom.h"
#include "vm/JSContext.h"
#include "vm/JSObject.h"
#include "vm/JSScript.h"
#include "vm/Printer.h"
#include "vm/ProxyObject.h"
#include "vm/Realm.h"
#include "vm/Shape.h"
#include "vm/StringType.h"
#include "vm/SymbolType.h"
#include "vm/Time.h"
#include "vm/TraceLogging.h"
#include "vm/WrapperObject.h"

#include "gc/Heap-inl.h"
#include "gc/Marking-inl.h"
#include "gc/Nursery-inl.h"
#include "gc/PrivateIterators-inl.h"
#include "gc/Zone-inl.h"
#include "vm/GeckoProfiler-inl.h"
#include "vm/JSObject-inl.h"
#include "vm/JSScript-inl.h"
#include "vm/Stack-inl.h"
#include "vm/StringType-inl.h"

using namespace js;
using namespace js::gc;

using mozilla::ArrayLength;
using mozilla::Maybe;
using mozilla::Nothing;
using mozilla::Some;
using mozilla::Swap;
using mozilla::TimeDuration;
using mozilla::TimeStamp;

using JS::AutoGCRooter;

/* Increase the IGC marking slice time if we are in highFrequencyGC mode. */
static constexpr int IGC_MARK_SLICE_MULTIPLIER = 2;

const AllocKind gc::slotsToThingKind[] = {
    // clang-format off
    /*  0 */ AllocKind::OBJECT0,  AllocKind::OBJECT2,  AllocKind::OBJECT2,  AllocKind::OBJECT4,
    /*  4 */ AllocKind::OBJECT4,  AllocKind::OBJECT8,  AllocKind::OBJECT8,  AllocKind::OBJECT8,
    /*  8 */ AllocKind::OBJECT8,  AllocKind::OBJECT12, AllocKind::OBJECT12, AllocKind::OBJECT12,
    /* 12 */ AllocKind::OBJECT12, AllocKind::OBJECT16, AllocKind::OBJECT16, AllocKind::OBJECT16,
    /* 16 */ AllocKind::OBJECT16
    // clang-format on
};
285
286
// Check that reserved bits of a Cell are compatible with our typical allocators
287
// since most derived classes will store a pointer in the first word.
288
static_assert(js::detail::LIFO_ALLOC_ALIGN > JS_BITMASK(Cell::ReservedBits),
289
"Cell::ReservedBits should support LifoAlloc");
290
static_assert(CellAlignBytes > JS_BITMASK(Cell::ReservedBits),
291
"Cell::ReservedBits should support gc::Cell");
292
static_assert(
293
sizeof(uintptr_t) > JS_BITMASK(Cell::ReservedBits),
294
"Cell::ReservedBits should support small malloc / aligned globals");
295
static_assert(js::jit::CodeAlignment > JS_BITMASK(Cell::ReservedBits),
296
"Cell::ReservedBits should support JIT code");
297
298
static_assert(mozilla::ArrayLength(slotsToThingKind) ==
299
SLOTS_TO_THING_KIND_LIMIT,
300
"We have defined a slot count for each kind.");
301
302
#define CHECK_THING_SIZE(allocKind, traceKind, type, sizedType, bgFinal, \
303
nursery, compact) \
304
static_assert(sizeof(sizedType) >= SortedArenaList::MinThingSize, \
305
#sizedType " is smaller than SortedArenaList::MinThingSize!"); \
306
static_assert(sizeof(sizedType) >= sizeof(FreeSpan), \
307
#sizedType " is smaller than FreeSpan"); \
308
static_assert(sizeof(sizedType) % CellAlignBytes == 0, \
309
"Size of " #sizedType " is not a multiple of CellAlignBytes"); \
310
static_assert(sizeof(sizedType) >= MinCellSize, \
311
"Size of " #sizedType " is smaller than the minimum size");
312
FOR_EACH_ALLOCKIND(CHECK_THING_SIZE);
313
#undef CHECK_THING_SIZE
314
315
const uint32_t Arena::ThingSizes[] = {
316
#define EXPAND_THING_SIZE(allocKind, traceKind, type, sizedType, bgFinal, \
317
nursery, compact) \
318
sizeof(sizedType),
319
FOR_EACH_ALLOCKIND(EXPAND_THING_SIZE)
320
#undef EXPAND_THING_SIZE
321
};
322
323
FreeSpan FreeLists::emptySentinel;
324
325
#undef CHECK_THING_SIZE_INNER
326
#undef CHECK_THING_SIZE
327
328
#define OFFSET(type) \
329
uint32_t(ArenaHeaderSize + (ArenaSize - ArenaHeaderSize) % sizeof(type))
330
331
const uint32_t Arena::FirstThingOffsets[] = {
332
#define EXPAND_FIRST_THING_OFFSET(allocKind, traceKind, type, sizedType, \
333
bgFinal, nursery, compact) \
334
OFFSET(sizedType),
335
FOR_EACH_ALLOCKIND(EXPAND_FIRST_THING_OFFSET)
336
#undef EXPAND_FIRST_THING_OFFSET
337
};
338
339
#undef OFFSET
340
341
#define COUNT(type) uint32_t((ArenaSize - ArenaHeaderSize) / sizeof(type))
342
343
const uint32_t Arena::ThingsPerArena[] = {
344
#define EXPAND_THINGS_PER_ARENA(allocKind, traceKind, type, sizedType, \
345
bgFinal, nursery, compact) \
346
COUNT(sizedType),
347
FOR_EACH_ALLOCKIND(EXPAND_THINGS_PER_ARENA)
348
#undef EXPAND_THINGS_PER_ARENA
349
};
350
351
#undef COUNT
352
353
struct js::gc::FinalizePhase {
354
gcstats::PhaseKind statsPhase;
355
AllocKinds kinds;
356
};
357
358
/*
359
* Finalization order for objects swept incrementally on the main thread.
360
*/
361
static constexpr FinalizePhase ForegroundObjectFinalizePhase = {
362
gcstats::PhaseKind::SWEEP_OBJECT,
363
{AllocKind::OBJECT0, AllocKind::OBJECT2, AllocKind::OBJECT4,
364
AllocKind::OBJECT8, AllocKind::OBJECT12, AllocKind::OBJECT16}};
365
366
/*
367
* Finalization order for GC things swept incrementally on the main thread.
368
*/
369
static constexpr FinalizePhase ForegroundNonObjectFinalizePhase = {
370
gcstats::PhaseKind::SWEEP_SCRIPT, {AllocKind::SCRIPT, AllocKind::JITCODE}};
371
372
/*
373
* Finalization order for GC things swept on the background thread.
374
*/
375
static constexpr FinalizePhase BackgroundFinalizePhases[] = {
376
{gcstats::PhaseKind::SWEEP_SCRIPT, {AllocKind::LAZY_SCRIPT}},
377
{gcstats::PhaseKind::SWEEP_OBJECT,
378
{AllocKind::FUNCTION, AllocKind::FUNCTION_EXTENDED,
379
AllocKind::OBJECT0_BACKGROUND, AllocKind::OBJECT2_BACKGROUND,
380
AllocKind::ARRAYBUFFER4, AllocKind::OBJECT4_BACKGROUND,
381
AllocKind::ARRAYBUFFER8, AllocKind::OBJECT8_BACKGROUND,
382
AllocKind::ARRAYBUFFER12, AllocKind::OBJECT12_BACKGROUND,
383
AllocKind::ARRAYBUFFER16, AllocKind::OBJECT16_BACKGROUND}},
384
{gcstats::PhaseKind::SWEEP_SCOPE,
385
{
386
AllocKind::SCOPE,
387
}},
388
{gcstats::PhaseKind::SWEEP_REGEXP_SHARED,
389
{
390
AllocKind::REGEXP_SHARED,
391
}},
392
{gcstats::PhaseKind::SWEEP_STRING,
393
{AllocKind::FAT_INLINE_STRING, AllocKind::STRING,
394
AllocKind::EXTERNAL_STRING, AllocKind::FAT_INLINE_ATOM, AllocKind::ATOM,
395
AllocKind::SYMBOL, AllocKind::BIGINT}},
396
{gcstats::PhaseKind::SWEEP_SHAPE,
397
{AllocKind::SHAPE, AllocKind::ACCESSOR_SHAPE, AllocKind::BASE_SHAPE,
398
AllocKind::OBJECT_GROUP}}};
399
400
void Arena::unmarkAll() {
401
uintptr_t* word = chunk()->bitmap.arenaBits(this);
402
memset(word, 0, ArenaBitmapWords * sizeof(uintptr_t));
403
}
404
405
void Arena::unmarkPreMarkedFreeCells() {
406
for (ArenaFreeCellIter iter(this); !iter.done(); iter.next()) {
407
TenuredCell* cell = iter.getCell();
408
MOZ_ASSERT(cell->isMarkedBlack());
409
cell->unmark();
410
}
411
}
412
413
#ifdef DEBUG
414
void Arena::checkNoMarkedFreeCells() {
415
for (ArenaFreeCellIter iter(this); !iter.done(); iter.next()) {
416
MOZ_ASSERT(!iter.getCell()->isMarkedAny());
417
}
418
}
419
#endif
420
421
/* static */
422
void Arena::staticAsserts() {
423
static_assert(size_t(AllocKind::LIMIT) <= 255,
424
"We must be able to fit the allockind into uint8_t.");
425
static_assert(mozilla::ArrayLength(ThingSizes) == size_t(AllocKind::LIMIT),
426
"We haven't defined all thing sizes.");
427
static_assert(
428
mozilla::ArrayLength(FirstThingOffsets) == size_t(AllocKind::LIMIT),
429
"We haven't defined all offsets.");
430
static_assert(
431
mozilla::ArrayLength(ThingsPerArena) == size_t(AllocKind::LIMIT),
432
"We haven't defined all counts.");
433
}
434
435
template <typename T>
436
inline size_t Arena::finalize(JSFreeOp* fop, AllocKind thingKind,
437
size_t thingSize) {
438
/* Enforce requirements on size of T. */
439
MOZ_ASSERT(thingSize % CellAlignBytes == 0);
440
MOZ_ASSERT(thingSize >= MinCellSize);
441
MOZ_ASSERT(thingSize <= 255);
442
443
MOZ_ASSERT(allocated());
444
MOZ_ASSERT(thingKind == getAllocKind());
445
MOZ_ASSERT(thingSize == getThingSize());
446
MOZ_ASSERT(!onDelayedMarkingList_);
447
448
uint_fast16_t firstThing = firstThingOffset(thingKind);
449
uint_fast16_t firstThingOrSuccessorOfLastMarkedThing = firstThing;
450
uint_fast16_t lastThing = ArenaSize - thingSize;
451
452
FreeSpan newListHead;
453
FreeSpan* newListTail = &newListHead;
454
size_t nmarked = 0;
455
456
for (ArenaCellIterUnderFinalize i(this); !i.done(); i.next()) {
457
T* t = i.get<T>();
458
if (t->asTenured().isMarkedAny()) {
459
uint_fast16_t thing = uintptr_t(t) & ArenaMask;
460
if (thing != firstThingOrSuccessorOfLastMarkedThing) {
461
// We just finished passing over one or more free things,
462
// so record a new FreeSpan.
463
newListTail->initBounds(firstThingOrSuccessorOfLastMarkedThing,
464
thing - thingSize, this);
465
newListTail = newListTail->nextSpanUnchecked(this);
466
}
467
firstThingOrSuccessorOfLastMarkedThing = thing + thingSize;
468
nmarked++;
469
} else {
470
t->finalize(fop);
471
AlwaysPoison(t, JS_SWEPT_TENURED_PATTERN, thingSize,
472
MemCheckKind::MakeUndefined);
473
gcTracer.traceTenuredFinalize(t);
474
}
475
}
476
477
if (nmarked == 0) {
478
// Do nothing. The caller will update the arena appropriately.
479
MOZ_ASSERT(newListTail == &newListHead);
480
DebugOnlyPoison(data, JS_SWEPT_TENURED_PATTERN, sizeof(data),
481
MemCheckKind::MakeUndefined);
482
return nmarked;
483
}
484
485
MOZ_ASSERT(firstThingOrSuccessorOfLastMarkedThing != firstThing);
486
uint_fast16_t lastMarkedThing =
487
firstThingOrSuccessorOfLastMarkedThing - thingSize;
488
if (lastThing == lastMarkedThing) {
489
// If the last thing was marked, we will have already set the bounds of
490
// the final span, and we just need to terminate the list.
491
newListTail->initAsEmpty();
492
} else {
493
// Otherwise, end the list with a span that covers the final stretch of free
494
// things.
495
newListTail->initFinal(firstThingOrSuccessorOfLastMarkedThing, lastThing,
496
this);
497
}
498
499
firstFreeSpan = newListHead;
500
#ifdef DEBUG
501
size_t nfree = numFreeThings(thingSize);
502
MOZ_ASSERT(nfree + nmarked == thingsPerArena(thingKind));
503
#endif
504
return nmarked;
505
}
506
507
// Finalize arenas from the src list and insert them into the appropriate
// destination size bins. Arenas that end up with no marked things are
// recycled as fully unused.
510
template <typename T>
511
static inline bool FinalizeTypedArenas(JSFreeOp* fop, Arena** src,
512
SortedArenaList& dest,
513
AllocKind thingKind,
514
SliceBudget& budget) {
515
// When operating in the foreground, take the lock at the top.
516
Maybe<AutoLockGC> maybeLock;
517
if (fop->onMainThread()) {
518
maybeLock.emplace(fop->runtime());
519
}
520
521
size_t thingSize = Arena::thingSize(thingKind);
522
size_t thingsPerArena = Arena::thingsPerArena(thingKind);
523
524
while (Arena* arena = *src) {
525
*src = arena->next;
526
size_t nmarked = arena->finalize<T>(fop, thingKind, thingSize);
527
size_t nfree = thingsPerArena - nmarked;
528
529
if (nmarked) {
530
dest.insertAt(arena, nfree);
531
} else {
532
arena->chunk()->recycleArena(arena, dest, thingsPerArena);
533
}
534
535
budget.step(thingsPerArena);
536
if (budget.isOverBudget()) {
537
return false;
538
}
539
}
540
541
return true;
542
}
543
544
/*
 * Finalize the list of arenas.
 */
547
static bool FinalizeArenas(JSFreeOp* fop, Arena** src, SortedArenaList& dest,
548
AllocKind thingKind, SliceBudget& budget) {
549
switch (thingKind) {
550
#define EXPAND_CASE(allocKind, traceKind, type, sizedType, bgFinal, nursery, \
551
compact) \
552
case AllocKind::allocKind: \
553
return FinalizeTypedArenas<type>(fop, src, dest, thingKind, budget);
554
FOR_EACH_ALLOCKIND(EXPAND_CASE)
555
#undef EXPAND_CASE
556
557
default:
558
MOZ_CRASH("Invalid alloc kind");
559
}
560
}
561
562
Chunk* ChunkPool::pop() {
563
MOZ_ASSERT(bool(head_) == bool(count_));
564
if (!count_) {
565
return nullptr;
566
}
567
return remove(head_);
568
}
569
570
void ChunkPool::push(Chunk* chunk) {
571
MOZ_ASSERT(!chunk->info.next);
572
MOZ_ASSERT(!chunk->info.prev);
573
574
chunk->info.next = head_;
575
if (head_) {
576
head_->info.prev = chunk;
577
}
578
head_ = chunk;
579
++count_;
580
}
581
582
Chunk* ChunkPool::remove(Chunk* chunk) {
583
MOZ_ASSERT(count_ > 0);
584
MOZ_ASSERT(contains(chunk));
585
586
if (head_ == chunk) {
587
head_ = chunk->info.next;
588
}
589
if (chunk->info.prev) {
590
chunk->info.prev->info.next = chunk->info.next;
591
}
592
if (chunk->info.next) {
593
chunk->info.next->info.prev = chunk->info.prev;
594
}
595
chunk->info.next = chunk->info.prev = nullptr;
596
--count_;
597
598
return chunk;
599
}
600
601
// We could keep the chunk pool sorted, but that's likely to be more expensive.
// This sort is O(n log n), whereas keeping the list sorted would likely cost
// O(m * n), with m being the number of operations (likely higher than n).
604
void ChunkPool::sort() {
605
// Only sort if the list isn't already sorted.
606
if (!isSorted()) {
607
head_ = mergeSort(head(), count());
608
609
// Fixup prev pointers.
610
Chunk* prev = nullptr;
611
for (Chunk* cur = head_; cur; cur = cur->info.next) {
612
cur->info.prev = prev;
613
prev = cur;
614
}
615
}
616
617
MOZ_ASSERT(verify());
618
MOZ_ASSERT(isSorted());
619
}
620
621
Chunk* ChunkPool::mergeSort(Chunk* list, size_t count) {
622
MOZ_ASSERT(bool(list) == bool(count));
623
624
if (count < 2) {
625
return list;
626
}
627
628
size_t half = count / 2;
629
630
// Split;
631
Chunk* front = list;
632
Chunk* back;
633
{
634
Chunk* cur = list;
635
for (size_t i = 0; i < half - 1; i++) {
636
MOZ_ASSERT(cur);
637
cur = cur->info.next;
638
}
639
back = cur->info.next;
640
cur->info.next = nullptr;
641
}
642
643
front = mergeSort(front, half);
644
back = mergeSort(back, count - half);
645
646
// Merge
647
list = nullptr;
648
Chunk** cur = &list;
649
while (front || back) {
650
if (!front) {
651
*cur = back;
652
break;
653
}
654
if (!back) {
655
*cur = front;
656
break;
657
}
658
659
// Note that the sort is stable due to the <= here. Nothing depends on
660
// this but it could.
661
if (front->info.numArenasFree <= back->info.numArenasFree) {
662
*cur = front;
663
front = front->info.next;
664
cur = &(*cur)->info.next;
665
} else {
666
*cur = back;
667
back = back->info.next;
668
cur = &(*cur)->info.next;
669
}
670
}
671
672
return list;
673
}
674
675
bool ChunkPool::isSorted() const {
676
uint32_t last = 1;
677
for (Chunk* cursor = head_; cursor; cursor = cursor->info.next) {
678
if (cursor->info.numArenasFree < last) {
679
return false;
680
}
681
last = cursor->info.numArenasFree;
682
}
683
return true;
684
}
685
686
#ifdef DEBUG
687
bool ChunkPool::contains(Chunk* chunk) const {
688
verify();
689
for (Chunk* cursor = head_; cursor; cursor = cursor->info.next) {
690
if (cursor == chunk) {
691
return true;
692
}
693
}
694
return false;
695
}
696
697
bool ChunkPool::verify() const {
698
MOZ_ASSERT(bool(head_) == bool(count_));
699
uint32_t count = 0;
700
for (Chunk* cursor = head_; cursor; cursor = cursor->info.next, ++count) {
701
MOZ_ASSERT_IF(cursor->info.prev, cursor->info.prev->info.next == cursor);
702
MOZ_ASSERT_IF(cursor->info.next, cursor->info.next->info.prev == cursor);
703
}
704
MOZ_ASSERT(count_ == count);
705
return true;
706
}
707
#endif
708
709
void ChunkPool::Iter::next() {
710
MOZ_ASSERT(!done());
711
current_ = current_->info.next;
712
}
713
714
ChunkPool GCRuntime::expireEmptyChunkPool(const AutoLockGC& lock) {
715
MOZ_ASSERT(emptyChunks(lock).verify());
716
MOZ_ASSERT(tunables.minEmptyChunkCount(lock) <=
717
tunables.maxEmptyChunkCount());
718
719
ChunkPool expired;
720
while (emptyChunks(lock).count() > tunables.minEmptyChunkCount(lock)) {
721
Chunk* chunk = emptyChunks(lock).pop();
722
prepareToFreeChunk(chunk->info);
723
expired.push(chunk);
724
}
725
726
MOZ_ASSERT(expired.verify());
727
MOZ_ASSERT(emptyChunks(lock).verify());
728
MOZ_ASSERT(emptyChunks(lock).count() <= tunables.maxEmptyChunkCount());
729
MOZ_ASSERT(emptyChunks(lock).count() <= tunables.minEmptyChunkCount(lock));
730
return expired;
731
}
732
733
static void FreeChunkPool(ChunkPool& pool) {
734
for (ChunkPool::Iter iter(pool); !iter.done();) {
735
Chunk* chunk = iter.get();
736
iter.next();
737
pool.remove(chunk);
738
MOZ_ASSERT(!chunk->info.numArenasFreeCommitted);
739
UnmapPages(static_cast<void*>(chunk), ChunkSize);
740
}
741
MOZ_ASSERT(pool.count() == 0);
742
}
743
744
void GCRuntime::freeEmptyChunks(const AutoLockGC& lock) {
745
FreeChunkPool(emptyChunks(lock));
746
}
747
748
inline void GCRuntime::prepareToFreeChunk(ChunkInfo& info) {
749
MOZ_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted);
750
numArenasFreeCommitted -= info.numArenasFreeCommitted;
751
stats().count(gcstats::COUNT_DESTROY_CHUNK);
752
#ifdef DEBUG
753
  /*
   * Let FreeChunkPool detect a missing prepareToFreeChunk call before it
   * frees the chunk.
   */
757
info.numArenasFreeCommitted = 0;
758
#endif
759
}
760
761
inline void GCRuntime::updateOnArenaFree() { ++numArenasFreeCommitted; }
762
763
void Chunk::addArenaToFreeList(JSRuntime* rt, Arena* arena) {
764
MOZ_ASSERT(!arena->allocated());
765
arena->next = info.freeArenasHead;
766
info.freeArenasHead = arena;
767
++info.numArenasFreeCommitted;
768
++info.numArenasFree;
769
rt->gc.updateOnArenaFree();
770
}
771
772
void Chunk::addArenaToDecommittedList(const Arena* arena) {
773
++info.numArenasFree;
774
decommittedArenas.set(Chunk::arenaIndex(arena->address()));
775
}
776
777
void Chunk::recycleArena(Arena* arena, SortedArenaList& dest,
778
size_t thingsPerArena) {
779
arena->setAsFullyUnused();
780
dest.insertAt(arena, thingsPerArena);
781
}
782
783
void Chunk::releaseArena(JSRuntime* rt, Arena* arena, const AutoLockGC& lock) {
784
addArenaToFreeList(rt, arena);
785
updateChunkListAfterFree(rt, lock);
786
}
787
788
bool Chunk::decommitOneFreeArena(JSRuntime* rt, AutoLockGC& lock) {
789
MOZ_ASSERT(info.numArenasFreeCommitted > 0);
790
Arena* arena = fetchNextFreeArena(rt);
791
updateChunkListAfterAlloc(rt, lock);
792
793
bool ok;
794
{
795
AutoUnlockGC unlock(lock);
796
ok = MarkPagesUnusedSoft(arena, ArenaSize);
797
}
798
799
if (ok) {
800
addArenaToDecommittedList(arena);
801
} else {
802
addArenaToFreeList(rt, arena);
803
}
804
updateChunkListAfterFree(rt, lock);
805
806
return ok;
807
}
808
809
void Chunk::decommitFreeArenasWithoutUnlocking(const AutoLockGC& lock) {
810
for (size_t i = 0; i < ArenasPerChunk; ++i) {
811
if (decommittedArenas.get(i) || arenas[i].allocated()) {
812
continue;
813
}
814
815
if (MarkPagesUnusedSoft(&arenas[i], ArenaSize)) {
816
info.numArenasFreeCommitted--;
817
decommittedArenas.set(i);
818
}
819
}
820
}
821
822
void Chunk::updateChunkListAfterAlloc(JSRuntime* rt, const AutoLockGC& lock) {
823
if (MOZ_UNLIKELY(!hasAvailableArenas())) {
824
rt->gc.availableChunks(lock).remove(this);
825
rt->gc.fullChunks(lock).push(this);
826
}
827
}
828
829
void Chunk::updateChunkListAfterFree(JSRuntime* rt, const AutoLockGC& lock) {
830
if (info.numArenasFree == 1) {
831
rt->gc.fullChunks(lock).remove(this);
832
rt->gc.availableChunks(lock).push(this);
833
} else if (!unused()) {
834
MOZ_ASSERT(rt->gc.availableChunks(lock).contains(this));
835
} else {
836
MOZ_ASSERT(unused());
837
rt->gc.availableChunks(lock).remove(this);
838
decommitAllArenas();
839
MOZ_ASSERT(info.numArenasFreeCommitted == 0);
840
rt->gc.recycleChunk(this, lock);
841
}
842
}
843
844
void GCRuntime::releaseArena(Arena* arena, const AutoLockGC& lock) {
845
MOZ_ASSERT(arena->allocated());
846
MOZ_ASSERT(!arena->onDelayedMarkingList());
847
848
arena->zone->gcHeapSize.removeGCArena();
849
arena->release(lock);
850
arena->chunk()->releaseArena(rt, arena, lock);
851
}
852
853
GCRuntime::GCRuntime(JSRuntime* rt)
854
: rt(rt),
855
systemZone(nullptr),
856
atomsZone(nullptr),
857
stats_(rt),
858
marker(rt),
859
heapSize(nullptr),
860
rootsHash(256),
861
nextCellUniqueId_(LargestTaggedNullCellPointer +
862
1), // Ensure disjoint from null tagged pointers.
863
numArenasFreeCommitted(0),
864
verifyPreData(nullptr),
865
chunkAllocationSinceLastGC(false),
866
lastGCStartTime_(ReallyNow()),
867
lastGCEndTime_(ReallyNow()),
868
mode(TuningDefaults::Mode),
869
numActiveZoneIters(0),
870
cleanUpEverything(false),
871
grayBufferState(GCRuntime::GrayBufferState::Unused),
872
grayBitsValid(false),
873
majorGCTriggerReason(JS::GCReason::NO_REASON),
874
fullGCForAtomsRequested_(false),
875
minorGCNumber(0),
876
majorGCNumber(0),
877
number(0),
878
sliceNumber(0),
879
isFull(false),
880
incrementalState(gc::State::NotActive),
881
initialState(gc::State::NotActive),
882
#ifdef JS_GC_ZEAL
883
useZeal(false),
884
#endif
885
lastMarkSlice(false),
886
safeToYield(true),
887
sweepOnBackgroundThread(false),
888
lifoBlocksToFree((size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
889
lifoBlocksToFreeAfterMinorGC(
890
(size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
891
sweepGroupIndex(0),
892
sweepGroups(nullptr),
893
currentSweepGroup(nullptr),
894
sweepZone(nullptr),
895
hasMarkedGrayRoots(false),
896
abortSweepAfterCurrentGroup(false),
897
startedCompacting(false),
898
relocatedArenasToRelease(nullptr),
899
#ifdef JS_GC_ZEAL
900
markingValidator(nullptr),
901
#endif
902
defaultTimeBudgetMS_(TuningDefaults::DefaultTimeBudgetMS),
903
incrementalAllowed(true),
904
compactingEnabled(TuningDefaults::CompactingEnabled),
905
rootsRemoved(false),
906
#ifdef JS_GC_ZEAL
907
zealModeBits(0),
908
zealFrequency(0),
909
nextScheduled(0),
910
deterministicOnly(false),
911
incrementalLimit(0),
912
#endif
913
fullCompartmentChecks(false),
914
gcCallbackDepth(0),
915
alwaysPreserveCode(false),
916
lowMemoryState(false),
917
lock(mutexid::GCLock),
918
allocTask(rt, emptyChunks_.ref()),
919
sweepTask(rt),
920
freeTask(rt),
921
decommitTask(rt),
922
nursery_(rt),
923
storeBuffer_(rt, nursery()) {
924
setGCMode(JSGC_MODE_GLOBAL);
925
}
926
927
#ifdef JS_GC_ZEAL
928
929
void GCRuntime::getZealBits(uint32_t* zealBits, uint32_t* frequency,
930
uint32_t* scheduled) {
931
*zealBits = zealModeBits;
932
*frequency = zealFrequency;
933
*scheduled = nextScheduled;
934
}
935
936
const char gc::ZealModeHelpText[] =
937
" Specifies how zealous the garbage collector should be. Some of these "
938
"modes can\n"
939
" be set simultaneously, by passing multiple level options, e.g. \"2;4\" "
940
"will activate\n"
941
" both modes 2 and 4. Modes can be specified by name or number.\n"
942
" \n"
943
" Values:\n"
944
" 0: (None) Normal amount of collection (resets all modes)\n"
945
" 1: (RootsChange) Collect when roots are added or removed\n"
946
" 2: (Alloc) Collect when every N allocations (default: 100)\n"
947
" 4: (VerifierPre) Verify pre write barriers between instructions\n"
948
" 7: (GenerationalGC) Collect the nursery every N nursery allocations\n"
949
" 8: (YieldBeforeMarking) Incremental GC in two slices that yields "
950
"between\n"
951
" the root marking and marking phases\n"
952
" 9: (YieldBeforeSweeping) Incremental GC in two slices that yields "
953
"between\n"
954
" the marking and sweeping phases\n"
955
" 10: (IncrementalMultipleSlices) Incremental GC in many slices\n"
956
" 11: (IncrementalMarkingValidator) Verify incremental marking\n"
957
" 12: (ElementsBarrier) Use the individual element post-write barrier\n"
958
" regardless of elements size\n"
959
" 13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"
960
" 14: (Compact) Perform a shrinking collection every N allocations\n"
961
" 15: (CheckHeapAfterGC) Walk the heap to check its integrity after "
962
"every GC\n"
963
" 16: (CheckNursery) Check nursery integrity on minor GC\n"
964
" 17: (YieldBeforeSweepingAtoms) Incremental GC in two slices that "
965
"yields\n"
966
" before sweeping the atoms table\n"
967
" 18: (CheckGrayMarking) Check gray marking invariants after every GC\n"
968
" 19: (YieldBeforeSweepingCaches) Incremental GC in two slices that "
969
"yields\n"
970
" before sweeping weak caches\n"
971
" 20: (YieldBeforeSweepingTypes) Incremental GC in two slices that "
972
"yields\n"
973
" before sweeping type information\n"
974
" 21: (YieldBeforeSweepingObjects) Incremental GC in two slices that "
975
"yields\n"
976
" before sweeping foreground finalized objects\n"
977
" 22: (YieldBeforeSweepingNonObjects) Incremental GC in two slices that "
978
"yields\n"
979
" before sweeping non-object GC things\n"
980
" 23: (YieldBeforeSweepingShapeTrees) Incremental GC in two slices that "
981
"yields\n"
982
" before sweeping shape trees\n"
983
" 24: (CheckWeakMapMarking) Check weak map marking invariants after "
984
"every GC\n"
985
" 25: (YieldWhileGrayMarking) Incremental GC in two slices that yields\n"
986
" during gray marking\n";
987
988
// The set of zeal modes that control incremental slices. These modes are
989
// mutually exclusive.
990
static const mozilla::EnumSet<ZealMode> IncrementalSliceZealModes = {
991
ZealMode::YieldBeforeMarking,
992
ZealMode::YieldBeforeSweeping,
993
ZealMode::IncrementalMultipleSlices,
994
ZealMode::YieldBeforeSweepingAtoms,
995
ZealMode::YieldBeforeSweepingCaches,
996
ZealMode::YieldBeforeSweepingTypes,
997
ZealMode::YieldBeforeSweepingObjects,
998
ZealMode::YieldBeforeSweepingNonObjects,
999
ZealMode::YieldBeforeSweepingShapeTrees};
1000
1001
void GCRuntime::setZeal(uint8_t zeal, uint32_t frequency) {
1002
MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
1003
1004
if (verifyPreData) {
1005
VerifyBarriers(rt, PreBarrierVerifier);
1006
}
1007
1008
if (zeal == 0) {
1009
if (hasZealMode(ZealMode::GenerationalGC)) {
1010
evictNursery(JS::GCReason::DEBUG_GC);
1011
nursery().leaveZealMode();
1012
}
1013
1014
if (isIncrementalGCInProgress()) {
1015
finishGC(JS::GCReason::DEBUG_GC);
1016
}
1017
}
1018
1019
ZealMode zealMode = ZealMode(zeal);
1020
if (zealMode == ZealMode::GenerationalGC) {
1021
nursery().enterZealMode();
1022
}
1023
1024
// Some modes are mutually exclusive. If we're setting one of those, we
1025
// first reset all of them.
1026
if (IncrementalSliceZealModes.contains(zealMode)) {
1027
for (auto mode : IncrementalSliceZealModes) {
1028
clearZealMode(mode);
1029
}
1030
}
1031
1032
bool schedule = zealMode >= ZealMode::Alloc;
1033
if (zeal != 0) {
1034
zealModeBits |= 1 << unsigned(zeal);
1035
} else {
1036
zealModeBits = 0;
1037
}
1038
zealFrequency = frequency;
1039
nextScheduled = schedule ? frequency : 0;
1040
}
1041
1042
void GCRuntime::unsetZeal(uint8_t zeal) {
1043
MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
1044
ZealMode zealMode = ZealMode(zeal);
1045
1046
if (!hasZealMode(zealMode)) {
1047
return;
1048
}
1049
1050
if (verifyPreData) {
1051
VerifyBarriers(rt, PreBarrierVerifier);
1052
}
1053
1054
if (zealMode == ZealMode::GenerationalGC) {
1055
evictNursery(JS::GCReason::DEBUG_GC);
1056
nursery().leaveZealMode();
1057
}
1058
1059
clearZealMode(zealMode);
1060
1061
if (zealModeBits == 0) {
1062
if (isIncrementalGCInProgress()) {
1063
finishGC(JS::GCReason::DEBUG_GC);
1064
}
1065
1066
zealFrequency = 0;
1067
nextScheduled = 0;
1068
}
1069
}
1070
1071
void GCRuntime::setNextScheduled(uint32_t count) { nextScheduled = count; }
1072
1073
using CharRange = mozilla::Range<const char>;
1074
using CharRangeVector = Vector<CharRange, 0, SystemAllocPolicy>;
1075
1076
static bool ParseZealModeName(CharRange text, uint32_t* modeOut) {
1077
struct ModeInfo {
1078
const char* name;
1079
size_t length;
1080
uint32_t value;
1081
};
1082
1083
static const ModeInfo zealModes[] = {{"None", 0},
1084
# define ZEAL_MODE(name, value) {# name, strlen(# name), value},
1085
JS_FOR_EACH_ZEAL_MODE(ZEAL_MODE)
1086
# undef ZEAL_MODE
1087
};
1088
1089
for (auto mode : zealModes) {
1090
if (text.length() == mode.length &&
1091
memcmp(text.begin().get(), mode.name, mode.length) == 0) {
1092
*modeOut = mode.value;
1093
return true;
1094
}
1095
}
1096
1097
return false;
1098
}
1099
1100
static bool ParseZealModeNumericParam(CharRange text, uint32_t* paramOut) {
1101
if (text.length() == 0) {
1102
return false;
1103
}
1104
1105
for (auto c : text) {
1106
if (!mozilla::IsAsciiDigit(c)) {
1107
return false;
1108
}
1109
}
1110
1111
*paramOut = atoi(text.begin().get());
1112
return true;
1113
}
1114
1115
static bool SplitStringBy(CharRange text, char delimiter,
1116
CharRangeVector* result) {
1117
auto start = text.begin();
1118
for (auto ptr = start; ptr != text.end(); ptr++) {
1119
if (*ptr == delimiter) {
1120
if (!result->emplaceBack(start, ptr)) {
1121
return false;
1122
}
1123
start = ptr + 1;
1124
}
1125
}
1126
1127
return result->emplaceBack(start, text.end());
1128
}
1129
1130
static bool PrintZealHelpAndFail() {
1131
fprintf(stderr, "Format: JS_GC_ZEAL=level(;level)*[,N]\n");
1132
fputs(ZealModeHelpText, stderr);
1133
return false;
1134
}
1135
1136
bool GCRuntime::parseAndSetZeal(const char* str) {
1137
// Set the zeal mode from a string consisting of one or more mode specifiers
1138
// separated by ';', optionally followed by a ',' and the trigger frequency.
1139
// The mode specifiers can be a mode name or its number.
1140
1141
auto text = CharRange(str, strlen(str));
1142
1143
CharRangeVector parts;
1144
if (!SplitStringBy(text, ',', &parts)) {
1145
return false;
1146
}
1147
1148
if (parts.length() == 0 || parts.length() > 2) {
1149
return PrintZealHelpAndFail();
1150
}
1151
1152
uint32_t frequency = JS_DEFAULT_ZEAL_FREQ;
1153
if (parts.length() == 2 && !ParseZealModeNumericParam(parts[1], &frequency)) {
1154
return PrintZealHelpAndFail();
1155
}
1156
1157
CharRangeVector modes;
1158
if (!SplitStringBy(parts[0], ';', &modes)) {
1159
return false;
1160
}
1161
1162
for (const auto& descr : modes) {
1163
uint32_t mode;
1164
if (!ParseZealModeName(descr, &mode) &&
1165
!(ParseZealModeNumericParam(descr, &mode) &&
1166
mode <= unsigned(ZealMode::Limit))) {
1167
return PrintZealHelpAndFail();
1168
}
1169
1170
setZeal(mode, frequency);
1171
}
1172
1173
return true;
1174
}
1175
1176
const char* js::gc::AllocKindName(AllocKind kind) {
1177
static const char* const names[] = {
1178
# define EXPAND_THING_NAME(allocKind, _1, _2, _3, _4, _5, _6) # allocKind,
1179
FOR_EACH_ALLOCKIND(EXPAND_THING_NAME)
1180
# undef EXPAND_THING_NAME
1181
};
1182
static_assert(ArrayLength(names) == size_t(AllocKind::LIMIT),
1183
"names array should have an entry for every AllocKind");
1184
1185
size_t i = size_t(kind);
1186
MOZ_ASSERT(i < ArrayLength(names));
1187
return names[i];
1188
}
1189
1190
void js::gc::DumpArenaInfo() {
1191
fprintf(stderr, "Arena header size: %zu\n\n", ArenaHeaderSize);
1192
1193
fprintf(stderr, "GC thing kinds:\n");
1194
fprintf(stderr, "%25s %8s %8s %8s\n",
1195
"AllocKind:", "Size:", "Count:", "Padding:");
1196
for (auto kind : AllAllocKinds()) {
1197
fprintf(stderr, "%25s %8zu %8zu %8zu\n", AllocKindName(kind),
1198
Arena::thingSize(kind), Arena::thingsPerArena(kind),
1199
Arena::firstThingOffset(kind) - ArenaHeaderSize);
1200
}
1201
}
1202
1203
#endif // JS_GC_ZEAL
1204
1205
bool GCRuntime::init(uint32_t maxbytes, uint32_t maxNurseryBytes) {
1206
MOZ_ASSERT(SystemPageSize());
1207
1208
{
1209
AutoLockGCBgAlloc lock(rt);
1210
1211
MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_BYTES, maxbytes, lock));
1212
MOZ_ALWAYS_TRUE(
1213
tunables.setParameter(JSGC_MAX_NURSERY_BYTES, maxNurseryBytes, lock));
1214
1215
const char* size = getenv("JSGC_MARK_STACK_LIMIT");
1216
if (size) {
1217
setMarkStackLimit(atoi(size), lock);
1218
}
1219
1220
if (!nursery().init(lock)) {
1221
return false;
1222
}
1223
1224
const char* pretenureThresholdStr = getenv("JSGC_PRETENURE_THRESHOLD");
1225
if (pretenureThresholdStr && pretenureThresholdStr[0]) {
1226
char* last;
1227
long pretenureThreshold = strtol(pretenureThresholdStr, &last, 10);
1228
if (last[0] || !tunables.setParameter(JSGC_PRETENURE_THRESHOLD,
1229
pretenureThreshold, lock)) {
1230
fprintf(stderr, "Invalid value for JSGC_PRETENURE_THRESHOLD: %s\n",
1231
pretenureThresholdStr);
1232
}
1233
}
1234
}
1235
1236
#ifdef JS_GC_ZEAL
1237
const char* zealSpec = getenv("JS_GC_ZEAL");
1238
if (zealSpec && zealSpec[0] && !parseAndSetZeal(zealSpec)) {
1239
return false;
1240
}
1241
#endif
1242
1243
if (!gcTracer.initTrace(*this)) {
1244
return false;
1245
}
1246
1247
if (!marker.init(mode)) {
1248
return false;
1249
}
1250
1251
if (!initSweepActions()) {
1252
return false;
1253
}
1254
1255
return true;
1256
}
1257
1258
void GCRuntime::finish() {
1259
// Wait for nursery background free to end and disable it to release memory.
1260
if (nursery().isEnabled()) {
1261
nursery().disable();
1262
}
1263
1264
// Wait until the background finalization and allocation stops and the
1265
// helper thread shuts down before we forcefully release any remaining GC
1266
// memory.
1267
sweepTask.join();
1268
freeTask.join();
1269
allocTask.cancelAndWait();
1270
decommitTask.cancelAndWait();
1271
1272
#ifdef JS_GC_ZEAL
1273
// Free memory associated with GC verification.
1274
finishVerifier();
1275
#endif
1276
1277
// Delete all remaining zones.
1278
if (rt->gcInitialized) {
1279
AutoSetThreadIsSweeping threadIsSweeping;
1280
for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
1281
for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
1282
for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
1283
js_delete(realm.get());
1284
}
1285
comp->realms().clear();
1286
js_delete(comp.get());
1287
}
1288
zone->compartments().clear();
1289
js_delete(zone.get());
1290
}
1291
}
1292
1293
zones().clear();
1294
1295
FreeChunkPool(fullChunks_.ref());
1296
FreeChunkPool(availableChunks_.ref());
1297
FreeChunkPool(emptyChunks_.ref());
1298
1299
gcTracer.finishTrace();
1300
1301
nursery().printTotalProfileTimes();
1302
stats().printTotalProfileTimes();
1303
}
1304
1305
bool GCRuntime::setParameter(JSGCParamKey key, uint32_t value,
1306
AutoLockGC& lock) {
1307
MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1308
1309
switch (key) {
1310
case JSGC_SLICE_TIME_BUDGET_MS:
1311
defaultTimeBudgetMS_ = value ? value : SliceBudget::UnlimitedTimeBudget;
1312
break;
1313
case JSGC_MARK_STACK_LIMIT:
1314
if (value == 0) {
1315
return false;
1316
}
1317
setMarkStackLimit(value, lock);
1318
break;
1319
case JSGC_MODE:
1320
if (mode != JSGC_MODE_GLOBAL && mode != JSGC_MODE_ZONE &&
1321
mode != JSGC_MODE_INCREMENTAL && mode != JSGC_MODE_ZONE_INCREMENTAL) {
1322
return false;
1323
}
1324
mode = JSGCMode(value);
1325
break;
1326
case JSGC_COMPACTING_ENABLED:
1327
compactingEnabled = value != 0;
1328
break;
1329
default:
1330
if (!tunables.setParameter(key, value, lock)) {
1331
return false;
1332
}
1333
for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
1334
zone->updateGCThresholds(*this, GC_NORMAL, lock);
1335
}
1336
}
1337
1338
return true;
1339
}
1340
1341
void GCRuntime::resetParameter(JSGCParamKey key, AutoLockGC& lock) {
1342
MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1343
1344
switch (key) {
1345
case JSGC_SLICE_TIME_BUDGET_MS:
1346
defaultTimeBudgetMS_ = TuningDefaults::DefaultTimeBudgetMS;
1347
break;
1348
case JSGC_MARK_STACK_LIMIT:
1349
setMarkStackLimit(MarkStack::DefaultCapacity, lock);
1350
break;
1351
case JSGC_MODE:
1352
mode = TuningDefaults::Mode;
1353
break;
1354
case JSGC_COMPACTING_ENABLED:
1355
compactingEnabled = TuningDefaults::CompactingEnabled;
1356
break;
1357
default:
1358
tunables.resetParameter(key, lock);
1359
for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
1360
zone->updateGCThresholds(*this, GC_NORMAL, lock);
1361
}
1362
}
1363
}
1364
1365
uint32_t GCRuntime::getParameter(JSGCParamKey key, const AutoLockGC& lock) {
1366
switch (key) {
1367
case JSGC_MAX_BYTES:
1368
return uint32_t(tunables.gcMaxBytes());
1369
case JSGC_MIN_NURSERY_BYTES:
1370
MOZ_ASSERT(tunables.gcMinNurseryBytes() < UINT32_MAX);
1371
return uint32_t(tunables.gcMinNurseryBytes());
1372
case JSGC_MAX_NURSERY_BYTES:
1373
MOZ_ASSERT(tunables.gcMaxNurseryBytes() < UINT32_MAX);
1374
return uint32_t(tunables.gcMaxNurseryBytes());
1375
case JSGC_BYTES:
1376
return uint32_t(heapSize.bytes());
1377
case JSGC_NURSERY_BYTES:
1378
return nursery().capacity();
1379
case JSGC_NUMBER:
1380
return uint32_t(number);
1381
case JSGC_MODE:
1382
return uint32_t(mode);
1383
case JSGC_UNUSED_CHUNKS:
1384
return uint32_t(emptyChunks(lock).count());
1385
case JSGC_TOTAL_CHUNKS:
1386
return uint32_t(fullChunks(lock).count() + availableChunks(lock).count() +
1387
emptyChunks(lock).count());
1388
case JSGC_SLICE_TIME_BUDGET_MS:
1389
if (defaultTimeBudgetMS_.ref() == SliceBudget::UnlimitedTimeBudget) {
1390
return 0;
1391
} else {
1392
MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ >= 0);
1393
MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ <= UINT32_MAX);
1394
return uint32_t(defaultTimeBudgetMS_);
1395
}
1396
case JSGC_MARK_STACK_LIMIT:
1397
return marker.maxCapacity();
1398
case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
1399
return tunables.highFrequencyThreshold().ToMilliseconds();
1400
case JSGC_HIGH_FREQUENCY_LOW_LIMIT:
1401
return tunables.highFrequencyLowLimitBytes() / 1024 / 1024;
1402
case JSGC_HIGH_FREQUENCY_HIGH_LIMIT:
1403
return tunables.highFrequencyHighLimitBytes() / 1024 / 1024;
1404
case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX:
1405
return uint32_t(tunables.highFrequencyHeapGrowthMax() * 100);
1406
case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN:
1407
return uint32_t(tunables.highFrequencyHeapGrowthMin() * 100);
1408
case JSGC_LOW_FREQUENCY_HEAP_GROWTH:
1409
return uint32_t(tunables.lowFrequencyHeapGrowth() * 100);
1410
case JSGC_DYNAMIC_HEAP_GROWTH:
1411
return tunables.isDynamicHeapGrowthEnabled();
1412
case JSGC_DYNAMIC_MARK_SLICE:
1413
return tunables.isDynamicMarkSliceEnabled();
1414
case JSGC_ALLOCATION_THRESHOLD:
1415
return tunables.gcZoneAllocThresholdBase() / 1024 / 1024;
1416
case JSGC_NON_INCREMENTAL_FACTOR:
1417
return uint32_t(tunables.nonIncrementalFactor() * 100);
1418
case JSGC_AVOID_INTERRUPT_FACTOR:
1419
return uint32_t(tunables.avoidInterruptFactor() * 100);
1420
case JSGC_MIN_EMPTY_CHUNK_COUNT:
1421
return tunables.minEmptyChunkCount(lock);
1422
case JSGC_MAX_EMPTY_CHUNK_COUNT:
1423
return tunables.maxEmptyChunkCount();
1424
case JSGC_COMPACTING_ENABLED:
1425
return compactingEnabled;
1426
case JSGC_NURSERY_FREE_THRESHOLD_FOR_IDLE_COLLECTION:
1427
return tunables.nurseryFreeThresholdForIdleCollection();
1428
case JSGC_NURSERY_FREE_THRESHOLD_FOR_IDLE_COLLECTION_PERCENT:
1429
return uint32_t(tunables.nurseryFreeThresholdForIdleCollectionFraction() *
1430
100.0f);
1431
case JSGC_PRETENURE_THRESHOLD:
1432
return uint32_t(tunables.pretenureThreshold() * 100);
1433
case JSGC_PRETENURE_GROUP_THRESHOLD:
1434
return tunables.pretenureGroupThreshold();
1435
case JSGC_MIN_LAST_DITCH_GC_PERIOD:
1436
return tunables.minLastDitchGCPeriod().ToSeconds();
1437
case JSGC_ZONE_ALLOC_DELAY_KB:
1438
return tunables.zoneAllocDelayBytes() / 1024;
1439
case JSGC_MALLOC_THRESHOLD_BASE:
1440
return tunables.mallocThresholdBase() / 1024 / 1024;
1441
case JSGC_MALLOC_GROWTH_FACTOR:
1442
return uint32_t(tunables.mallocGrowthFactor() * 100);
1443
default:
1444
MOZ_CRASH("Unknown parameter key");
1445
}
1446
}
1447
1448
void GCRuntime::setMarkStackLimit(size_t limit, AutoLockGC& lock) {
1449
MOZ_ASSERT(!JS::RuntimeHeapIsBusy());
1450
AutoUnlockGC unlock(lock);
1451
AutoStopVerifyingBarriers pauseVerification(rt, false);
1452
marker.setMaxCapacity(limit);
1453
}
1454
1455
bool GCRuntime::addBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
1456
AssertHeapIsIdle();
1457
return !!blackRootTracers.ref().append(
1458
Callback<JSTraceDataOp>(traceOp, data));
1459
}
1460
1461
void GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
1462
// Can be called from finalizers
1463
for (size_t i = 0; i < blackRootTracers.ref().length(); i++) {
1464
Callback<JSTraceDataOp>* e = &blackRootTracers.ref()[i];
1465
if (e->op == traceOp && e->data == data) {
1466
blackRootTracers.ref().erase(e);
1467
break;
1468
}
1469
}
1470
}
1471
1472
void GCRuntime::setGrayRootsTracer(JSTraceDataOp traceOp, void* data) {
1473
AssertHeapIsIdle();
1474
grayRootTracer.op = traceOp;
1475
grayRootTracer.data = data;
1476
}
1477
1478
void GCRuntime::clearBlackAndGrayRootTracers() {
1479
MOZ_ASSERT(rt->isBeingDestroyed());
1480
blackRootTracers.ref().clear();
1481
setGrayRootsTracer(nullptr, nullptr);
1482
}
1483
1484
void GCRuntime::setGCCallback(JSGCCallback callback, void* data) {
1485
gcCallback.op = callback;
1486
gcCallback.data = data;
1487
}
1488
1489
void GCRuntime::callGCCallback(JSGCStatus status) const {
1490
MOZ_ASSERT(gcCallback.op);
1491
gcCallback.op(rt->mainContextFromOwnThread(), status, gcCallback.data);
1492
}
1493
1494
void GCRuntime::setObjectsTenuredCallback(JSObjectsTenuredCallback callback,
1495
void* data) {
1496
tenuredCallback.op = callback;
1497
tenuredCallback.data = data;
1498
}
1499
1500
void GCRuntime::callObjectsTenuredCallback() {
1501
JS::AutoSuppressGCAnalysis nogc;
1502
if (tenuredCallback.op) {
1503
tenuredCallback.op(rt->mainContextFromOwnThread(), tenuredCallback.data);
1504
}
1505
}
1506
1507
bool GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data) {
1508
return finalizeCallbacks.ref().append(
1509
Callback<JSFinalizeCallback>(callback, data));
1510
}
1511
1512
void GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback) {
1513
for (Callback<JSFinalizeCallback>* p = finalizeCallbacks.ref().begin();
1514
p < finalizeCallbacks.ref().end(); p++) {
1515
if (p->op == callback) {
1516
finalizeCallbacks.ref().erase(p);
1517
break;
1518
}
1519
}
1520
}
1521
1522
void GCRuntime::callFinalizeCallbacks(JSFreeOp* fop,
1523
JSFinalizeStatus status) const {
1524
for (auto& p : finalizeCallbacks.ref()) {
1525
p.op(fop, status, p.data);
1526
}
1527
}
1528
1529
bool GCRuntime::addWeakPointerZonesCallback(JSWeakPointerZonesCallback callback,
1530
void* data) {
1531
return updateWeakPointerZonesCallbacks.ref().append(
1532
Callback<JSWeakPointerZonesCallback>(callback, data));
1533
}
1534
1535
void GCRuntime::removeWeakPointerZonesCallback(
1536
JSWeakPointerZonesCallback callback) {
1537
for (auto& p : updateWeakPointerZonesCallbacks.ref()) {
1538
if (p.op == callback) {
1539
updateWeakPointerZonesCallbacks.ref().erase(&p);
1540
break;
1541
}
1542
}
1543
}
1544
1545
void GCRuntime::callWeakPointerZonesCallbacks() const {
1546
JSContext* cx = rt->mainContextFromOwnThread();
1547
for (auto const& p : updateWeakPointerZonesCallbacks.ref()) {
1548
p.op(cx, p.data);
1549
}
1550
}
1551
1552
bool GCRuntime::addWeakPointerCompartmentCallback(
1553
JSWeakPointerCompartmentCallback callback, void* data) {
1554
return updateWeakPointerCompartmentCallbacks.ref().append(
1555
Callback<JSWeakPointerCompartmentCallback>(callback, data));
1556
}
1557
1558
void GCRuntime::removeWeakPointerCompartmentCallback(
1559
JSWeakPointerCompartmentCallback callback) {
1560
for (auto& p : updateWeakPointerCompartmentCallbacks.ref()) {
1561
if (p.op == callback) {
1562
updateWeakPointerCompartmentCallbacks.ref().erase(&p);
1563
break;
1564
}
1565
}
1566
}
1567
1568
void GCRuntime::callWeakPointerCompartmentCallbacks(
1569
JS::Compartment* comp) const {
1570
JSContext* cx = rt->mainContextFromOwnThread();
1571
for (auto const& p : updateWeakPointerCompartmentCallbacks.ref()) {
1572
p.op(cx, comp, p.data);
1573
}
1574
}
1575
1576
JS::GCSliceCallback GCRuntime::setSliceCallback(JS::GCSliceCallback callback) {
1577
return stats().setSliceCallback(callback);
1578
}
1579
1580
JS::GCNurseryCollectionCallback GCRuntime::setNurseryCollectionCallback(
1581
JS::GCNurseryCollectionCallback callback) {
1582
return stats().setNurseryCollectionCallback(callback);
1583
}
1584
1585
JS::DoCycleCollectionCallback GCRuntime::setDoCycleCollectionCallback(
1586
JS::DoCycleCollectionCallback callback) {
1587
auto prior = gcDoCycleCollectionCallback;
1588
gcDoCycleCollectionCallback =
1589
Callback<JS::DoCycleCollectionCallback>(callback, nullptr);
1590
return prior.op;
1591
}
1592
1593
void GCRuntime::callDoCycleCollectionCallback(JSContext* cx) {
1594
if (gcDoCycleCollectionCallback.op) {
1595
gcDoCycleCollectionCallback.op(cx);
1596
}
1597
}
1598
1599
bool GCRuntime::addRoot(Value* vp, const char* name) {
1600
/*
1601
* Sometimes Firefox will hold weak references to objects and then convert
1602
* them to strong references by calling AddRoot (e.g., via PreserveWrapper,
1603
* or ModifyBusyCount in workers). We need a read barrier to cover these
1604
* cases.
1605
*/
1606
if (isIncrementalGCInProgress()) {
1607
GCPtrValue::writeBarrierPre(*vp);
1608
}
1609
1610
return rootsHash.ref().put(vp, name);
1611
}
1612
1613
void GCRuntime::removeRoot(Value* vp) {
1614
rootsHash.ref().remove(vp);
1615
notifyRootsRemoved();
1616
}
1617
1618
extern JS_FRIEND_API bool js::AddRawValueRoot(JSContext* cx, Value* vp,
1619
const char* name) {
1620
MOZ_ASSERT(vp);
1621
MOZ_ASSERT(name);
1622
bool ok = cx->runtime()->gc.addRoot(vp, name);
1623
if (!ok) {
1624
JS_ReportOutOfMemory(cx);
1625
}
1626
return ok;
1627
}
1628
1629
extern JS_FRIEND_API void js::RemoveRawValueRoot(JSContext* cx, Value* vp) {
1630
cx->runtime()->gc.removeRoot(vp);
1631
}
1632
1633
/* Compacting GC */
1634
1635
bool js::gc::IsCurrentlyAnimating(const TimeStamp& lastAnimationTime,
1636
const TimeStamp& currentTime) {
1637
// Assume that we're currently animating if js::NotifyAnimationActivity has
1638
// been called in the last second.
1639
static const auto oneSecond = TimeDuration::FromSeconds(1);
1640
return !lastAnimationTime.IsNull() &&
1641
currentTime < (lastAnimationTime + oneSecond);
1642
}
1643
1644
bool GCRuntime::shouldCompact() {
1645
// Compact on shrinking GC if enabled. Skip compacting in incremental GCs
1646
// if we are currently animating, unless the user is inactive or we're
1647
// responding to memory pressure.
1648
1649
if (invocationKind != GC_SHRINK || !isCompactingGCEnabled()) {
1650
return false;
1651
}
1652
1653
if (initialReason == JS::GCReason::USER_INACTIVE ||
1654
initialReason == JS::GCReason::MEM_PRESSURE) {
1655
return true;
1656
}
1657
1658
return !isIncremental ||
1659
!IsCurrentlyAnimating(rt->lastAnimationTime, TimeStamp::Now());
1660
}
1661
1662
bool GCRuntime::isCompactingGCEnabled() const {
1663
return compactingEnabled &&
1664
rt->mainContextFromOwnThread()->compactingDisabledCount == 0 &&
1665
!mozilla::recordreplay::IsRecordingOrReplaying();
1666
}
1667
1668
AutoDisableCompactingGC::AutoDisableCompactingGC(JSContext* cx) : cx(cx) {
1669
++cx->compactingDisabledCount;
1670
if (cx->runtime()->gc.isIncrementalGCInProgress() &&
1671
cx->runtime()->gc.isCompactingGc()) {
1672
FinishGC(cx);
1673
}
1674
}
1675
1676
AutoDisableCompactingGC::~AutoDisableCompactingGC() {
1677
MOZ_ASSERT(cx->compactingDisabledCount > 0);
1678
--cx->compactingDisabledCount;
1679
}
1680
1681
static bool CanRelocateZone(Zone* zone) {
1682
return !zone->isAtomsZone() && !zone->isSelfHostingZone();
1683
}
1684
1685
Arena* ArenaList::removeRemainingArenas(Arena** arenap) {
1686
// This is only ever called to remove arenas that are after the cursor, so
1687
// we don't need to update it.
1688
#ifdef DEBUG
1689
for (Arena* arena = *arenap; arena; arena = arena->next) {
1690
MOZ_ASSERT(cursorp_ != &arena->next);
1691
}
1692
#endif
1693
Arena* remainingArenas = *arenap;
1694
*arenap = nullptr;
1695
check();
1696
return remainingArenas;
1697
}
1698
1699
static bool ShouldRelocateAllArenas(JS::GCReason reason) {
1700
return reason == JS::GCReason::DEBUG_GC;
1701
}
1702
1703
/*
1704
* Choose which arenas to relocate all cells from. Return an arena cursor that
1705
* can be passed to removeRemainingArenas().
1706
*/
1707
Arena** ArenaList::pickArenasToRelocate(size_t& arenaTotalOut,
1708
size_t& relocTotalOut) {
1709
// Relocate the greatest number of arenas such that the number of used cells
1710
// in relocated arenas is less than or equal to the number of free cells in
1711
// unrelocated arenas. In other words we only relocate cells we can move
1712
// into existing arenas, and we choose the least full arenas to relocate.
1713
//
1714
// This is made easier by the fact that the arena list has been sorted in
1715
// descending order of number of used cells, so we will always relocate a
1716
// tail of the arena list. All we need to do is find the point at which to
1717
// start relocating.
1718
1719
check();
1720
1721
if (isCursorAtEnd()) {
1722
return nullptr;
1723
}
1724
1725
Arena** arenap = cursorp_; // Next arena to consider for relocation.
1726
size_t previousFreeCells = 0; // Count of free cells before arenap.
1727
size_t followingUsedCells = 0; // Count of used cells after arenap.
1728
size_t fullArenaCount = 0; // Number of full arenas (not relocated).
1729
size_t nonFullArenaCount =
1730
0; // Number of non-full arenas (considered for relocation).
1731
size_t arenaIndex = 0; // Index of the next arena to consider.
1732
1733
for (Arena* arena = head_; arena != *cursorp_; arena = arena->next) {
1734
fullArenaCount++;
1735
}
1736
1737
for (Arena* arena = *cursorp_; arena; arena = arena->next) {
1738
followingUsedCells += arena->countUsedCells();
1739
nonFullArenaCount++;
1740
}
1741
1742
mozilla::DebugOnly<size_t> lastFreeCells(0);
1743
size_t cellsPerArena = Arena::thingsPerArena((*arenap)->getAllocKind());
1744
1745
while (*arenap) {
1746
Arena* arena = *arenap;
1747
if (followingUsedCells <= previousFreeCells) {
1748
break;
1749
}
1750
1751
size_t freeCells = arena->countFreeCells();
1752
size_t usedCells = cellsPerArena - freeCells;
1753
followingUsedCells -= usedCells;
1754
#ifdef DEBUG
1755
MOZ_ASSERT(freeCells >= lastFreeCells);
1756
lastFreeCells = freeCells;
1757
#endif
1758
previousFreeCells += freeCells;
1759
arenap = &arena->next;
1760
arenaIndex++;
1761
}
1762
1763
size_t relocCount = nonFullArenaCount - arenaIndex;
1764
MOZ_ASSERT(relocCount < nonFullArenaCount);
1765
MOZ_ASSERT((relocCount == 0) == (!*arenap));
1766
arenaTotalOut += fullArenaCount + nonFullArenaCount;
1767
relocTotalOut += relocCount;
1768
1769
return arenap;
1770
}
1771
1772
#ifdef DEBUG
1773
inline bool PtrIsInRange(const void* ptr, const void* start, size_t length) {
1774
return uintptr_t(ptr) - uintptr_t(start) < length;
1775
}
1776
#endif
1777
1778
static void RelocateCell(Zone* zone, TenuredCell* src, AllocKind thingKind,
1779
size_t thingSize) {
1780
JS::AutoSuppressGCAnalysis nogc(TlsContext.get());
1781
1782
// Allocate a new cell.
1783
MOZ_ASSERT(zone == src->zone());
1784
TenuredCell* dst = AllocateCellInGC(zone, thingKind);
1785
1786
// Copy source cell contents to destination.
1787
memcpy(dst, src, thingSize);
1788
1789
// Move any uid attached to the object.
1790
src->