mozilla-central/third_party/rust/wgpu-core/src/lock
mod.rs (1784 bytes)

Instrumented lock types.

This module defines a set of instrumented wrappers for the lock types used in `wgpu-core` ([`Mutex`], [`RwLock`], and [`SnatchLock`]) that help us understand and validate `wgpu-core` synchronization.

- The [`ranked`] module defines lock types that perform run-time checks to ensure that each thread acquires locks only in a specific order, to prevent deadlocks.
- The [`observing`] module defines lock types that record `wgpu-core`'s lock acquisition activity to disk, for later analysis by the `lock-analyzer` binary.
- The [`vanilla`] module defines lock types that are uninstrumented, no-overhead wrappers around the standard lock types.

If the `wgpu_validate_locks` config is set (for example, with `RUSTFLAGS='--cfg wgpu_validate_locks'`), `wgpu-core` uses the [`ranked`] module's locks. We hope to make this the default for debug builds soon.

If the `observe_locks` feature is enabled, `wgpu-core` uses the [`observing`] module's locks.

Otherwise, `wgpu-core` uses the [`vanilla`] module's locks.

[`Mutex`]: parking_lot::Mutex
[`RwLock`]: parking_lot::RwLock
[`SnatchLock`]: crate::snatch::SnatchLock
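To make the three implementations interchangeable, the selection can be done once at compile time with `cfg` attributes. The sketch below shows one way this could be wired up; the re-exported item names, and the precedence given to `wgpu_validate_locks` when both options are enabled, are assumptions for illustration, not necessarily what `wgpu-core` actually does.

```rust
// Sketch: pick one lock implementation at compile time, so the rest of the
// crate can use `crate::lock::Mutex` without caring which one it got.

pub mod rank;

#[cfg(wgpu_validate_locks)]
mod ranked;
#[cfg(wgpu_validate_locks)]
pub use ranked::{Mutex, RwLock};

// Assumption: validation takes precedence if both are requested.
#[cfg(all(feature = "observe_locks", not(wgpu_validate_locks)))]
mod observing;
#[cfg(all(feature = "observe_locks", not(wgpu_validate_locks)))]
pub use observing::{Mutex, RwLock};

#[cfg(not(any(wgpu_validate_locks, feature = "observe_locks")))]
mod vanilla;
#[cfg(not(any(wgpu_validate_locks, feature = "observe_locks")))]
pub use vanilla::{Mutex, RwLock};
```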
observing.rs (15437 bytes)

Lock types that observe lock acquisition order.

This module's [`Mutex`] type is instrumented to observe the nesting of `wgpu-core` lock acquisitions. Whenever `wgpu-core` acquires one lock while it is already holding another, we note that nesting pair. This tells us what the [`LockRank::followers`] set for each lock would need to include to accommodate `wgpu-core`'s observed behavior.

When `wgpu-core`'s `observe_locks` feature is enabled, if the `WGPU_CORE_LOCK_OBSERVE_DIR` environment variable is set to the path of an existing directory, then every thread that acquires a lock in `wgpu-core` will write its own log file to that directory. You can then run the `wgpu` workspace's `lock-analyzer` binary to read those files and summarize the results. The output from `lock-analyzer` has the same form as the lock ranks given in [`lock/rank.rs`].

If the `WGPU_CORE_LOCK_OBSERVE_DIR` environment variable is not set, then no instrumentation takes place, and the locks behave normally.

To make sure we capture all acquisitions regardless of when the program exits, each thread writes events directly to its log file as they occur. A `write` system call is generally just a copy from userspace into the kernel's buffer, so hopefully this approach will still have tolerable performance.

[`lock/rank.rs`]: ../../../src/wgpu_core/lock/rank.rs.html
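Giving each thread its own file keeps the writers contention-free. The following is a minimal sketch of that scheme, assuming a hypothetical `log_event` helper and a simple line-per-event format; the real module's file naming and event encoding differ.

```rust
use std::cell::RefCell;
use std::fs::File;
use std::io::Write;

thread_local! {
    // Each thread lazily opens its own log file, so threads never contend
    // for a shared writer. If the directory isn't set, LOG stays None and
    // the locks behave normally.
    static LOG: RefCell<Option<File>> = RefCell::new(
        std::env::var("WGPU_CORE_LOCK_OBSERVE_DIR").ok().map(|dir| {
            let path = format!("{dir}/{:?}.log", std::thread::current().id());
            File::create(path).expect("failed to create lock observation log")
        }),
    );
}

// Hypothetical helper: record one lock acquisition event.
fn log_event(event: &str) {
    LOG.with(|log| {
        if let Some(file) = log.borrow_mut().as_mut() {
            // Unbuffered: each event is handed to the kernel as it occurs,
            // so the log survives however the program exits.
            writeln!(file, "{event}").expect("failed to write lock event");
        }
    });
}
```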
rank.rs (5959 bytes)

Ranks for `wgpu-core` locks, restricting acquisition order. See [`LockRank`].
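As an illustration of the idea, a rank can be as small as one bit identifying the lock plus a bitmask of its permitted followers. The representation and constant names below are assumptions for illustration only, not rank.rs's actual definitions.

```rust
// Illustrative sketch of a lock rank: one identifying bit, plus the set of
// locks that may be acquired while this lock is the youngest one held.
#[derive(Clone, Copy)]
pub struct LockRank {
    /// The bit identifying this lock.
    pub bit: u64,
    /// Bits of all locks that may be acquired next.
    pub followers: u64,
}

// Hypothetical ranks: edges of the acquisition graph.
pub const DEVICE_TRACKERS: LockRank = LockRank {
    bit: 1 << 0,
    // Holding this lock, a thread may acquire the registry lock next.
    followers: 1 << 1,
};

pub const REGISTRY: LockRank = LockRank {
    bit: 1 << 1,
    // A leaf in the graph: no further locks may be acquired.
    followers: 0,
};
```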
ranked.rs (12389 bytes)

Lock types that enforce well-ranked lock acquisition order.

This module's [`Mutex`] and [`RwLock`] types are instrumented to check that `wgpu-core` acquires locks according to their rank, to prevent deadlocks. To use it, put `--cfg wgpu_validate_locks` in `RUSTFLAGS`.

The [`LockRank`] constants in the [`lock::rank`] module describe edges in a directed graph of lock acquisitions: each lock's rank says, if this is the most recently acquired lock that you are still holding, then these are the locks you are allowed to acquire next. As long as this graph doesn't have cycles, any number of threads can acquire locks along paths through the graph without deadlock:

- Assume that if a thread is holding a lock, then it will either release it, or block trying to acquire another one. No thread just sits on its locks forever for unrelated reasons. If it did, then that would be a source of deadlock "outside the system" that we can't do anything about.

- This module asserts that threads acquire and release locks in a stack-like order: a lock is dropped only when it is the *most recently acquired* lock *still held* - call this the "youngest" lock. This stack-like ordering isn't a Rust requirement; Rust lets you drop guards in any order you like. This is a restriction we impose.

- Consider the directed graph whose nodes are locks, and whose edges go from each lock to its permitted followers, the locks in its [`LockRank::followers`] set. The definition of the [`lock::rank`] module's [`LockRank`] constants ensures that this graph has no cycles, including trivial cycles from a node to itself.

- This module then asserts that each thread attempts to acquire a lock only if it is among its youngest lock's permitted followers. Thus, as a thread acquires locks, it must be traversing a path through the graph along its edges. (A sketch of this check follows the list.)

- Because there are no cycles in the graph, whenever one thread is blocked waiting to acquire a lock, that lock must be held by a different thread: if you were allowed to acquire a lock you already hold, that would be a cycle in the graph.

- Furthermore, because the graph has no cycles, as we work our way from each thread to the thread it is blocked waiting for, we must eventually reach an end point: there must be some thread that is able to acquire its next lock, or that is about to release a lock. Thus, the system as a whole is always able to make progress: it is free of deadlocks.

Note that this validation only monitors each thread's behavior in isolation: there's only thread-local state, nothing communicated between threads. So we don't detect deadlocks, per se, only the potential to cause deadlocks. This means that the validation is conservative, but more reproducible, since it's not dependent on any particular interleaving of execution.

[`lock::rank`]: crate::lock::rank
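The check itself needs nothing more than a thread-local stack of held ranks. Below is a sketch of the acquire and release assertions described above, reusing the illustrative `LockRank` representation from the rank.rs sketch; `check_acquire` and `check_release` are hypothetical helpers, not wgpu-core's API.

```rust
use std::cell::RefCell;

// As in the rank.rs sketch: one identifying bit plus a followers mask.
#[derive(Clone, Copy)]
struct LockRank {
    bit: u64,
    followers: u64,
}

thread_local! {
    // The ranks of the locks this thread currently holds, youngest last.
    static HELD: RefCell<Vec<LockRank>> = const { RefCell::new(Vec::new()) };
}

/// Run on every acquisition: the new lock must be a permitted follower of
/// the youngest lock still held.
fn check_acquire(new: LockRank) {
    HELD.with_borrow_mut(|held| {
        if let Some(youngest) = held.last() {
            assert!(
                youngest.followers & new.bit != 0,
                "lock acquired out of rank order"
            );
        }
        held.push(new);
    });
}

/// Run on every release: enforce the stack-like order by requiring that
/// only the youngest held lock is dropped.
fn check_release(rank: LockRank) {
    HELD.with_borrow_mut(|held| {
        let youngest = held.pop().expect("released a lock that was not held");
        assert_eq!(youngest.bit, rank.bit, "locks released out of order");
    });
}
```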
vanilla.rs (3673 bytes)

Plain, uninstrumented wrappers around [`parking_lot`] lock types. These definitions are used when no particular lock-instrumentation Cargo feature is selected.
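A sketch of the kind of wrapper this module provides follows, assuming (as the other modules suggest) that constructors accept a `LockRank` so that all three implementations share one call signature; the exact signatures in `wgpu-core` may differ.

```rust
use crate::lock::rank::LockRank;

// Sketch of a zero-overhead wrapper: the rank is accepted and ignored, so
// call sites look the same whether or not instrumentation is compiled in.
pub struct Mutex<T>(parking_lot::Mutex<T>);

impl<T> Mutex<T> {
    pub fn new(_rank: LockRank, value: T) -> Self {
        Mutex(parking_lot::Mutex::new(value))
    }

    pub fn lock(&self) -> parking_lot::MutexGuard<'_, T> {
        // Delegate directly: no logging, no rank checking, no extra state.
        self.0.lock()
    }
}
```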