Name Description Size
ADTSDecoder.cpp 1475
ADTSDecoder.h 914
ADTSDemuxer.cpp 25945
ADTSDemuxer.h 4832
AsyncLogger.h Implementation of an asynchronous lock-free logging system. 11011
AudibilityMonitor.h 3285
AudioBufferUtils.h The classes in this file provide an interface that uses frames as a unit. However, they store their offsets in samples (because that is handy for pointer arithmetic), and helper functions convert between the two units. 6632
AudioCaptureTrack.cpp 4795
AudioCaptureTrack.h See MediaTrackGraph::CreateAudioCaptureTrack. 1234
AudioChannelFormat.cpp 510
AudioChannelFormat.h This file provides utilities for upmixing and downmixing channels. The channel layouts, upmixing and downmixing are consistent with the Web Audio spec. Channel layouts for up to 6 channels: mono { M }; stereo { L, R }; { L, R, C }; quad { L, R, SL, SR }; { L, R, C, SL, SR }; 5.1 { L, R, C, LFE, SL, SR }. Only 1, 2, 4 and 6 channels are currently defined in Web Audio. 9112
AudioCompactor.cpp 2329
AudioCompactor.h 4607
AudioConfig.cpp AudioConfig::ChannelLayout 10797
AudioConfig.h 11147
AudioConverter.cpp Parts derived from MythTV AudioConvert Class Created by Jean-Yves Avenard. Copyright (C) Bubblestuff Pty Ltd 2013 Copyright (C) 2010 16867
AudioConverter.h 10010
AudioDeviceInfo.cpp 5273
AudioDeviceInfo.h 1925
AudioDriftCorrection.h ClockDrift calculates the divergence of the source clock from its nominal (provided) rate compared to the target clock, which is considered the master clock. In the case of different sampling rates, it is assumed that resampling will take place, so the returned correction is estimated after the resampling. That means resampling is taken into account in the calculations but does not itself appear in the correction; the correction must be applied on top of the resampling. It works by measuring the incoming frames, the outgoing frames, and the amount of buffered data, and estimates the correction needed. The correction logic has been designed with two things in mind. First, not to run out of frames, because that means the audio will glitch. Second, not to change the correction very often, because that changes the resampling ratio, and the resampler recreates its internal memory when the ratio changes, which has a performance impact. The pref `media.clockdrift.buffering` can be used to configure the desired internal buffering. It is currently 50 ms, but it can be increased if there are audio quality problems. 9180
AudioInputSource.cpp 7771
AudioInputSource.h 4775
AudioMixer.h This class mixes multiple streams of audio together to output a single audio stream. AudioMixer::Mix is to be called repeatedly with buffers that have the same length, sample rate, sample format and channel count. This class works with interleaved and planar buffers, but the buffers mixed must all be of the same type during a mixing cycle. When all the tracks have been mixed, calling FinishMixing will call back with a buffer containing the mixed audio data. This class is not thread-safe. 4573
AudioPacketizer.h This class takes arbitrary input data and returns packets of a specific size. In the process, it can convert audio samples from 16-bit integers to float (or vice versa). Input, output, and the length units in the public interface are interleaved frames. Output buffers can be allocated by this class and can simply be delete-d; this is because packets are intended to be sent off to non-Gecko code using normal pointer/length pairs. Alternatively, consumers can pass in a buffer into which the output is copied. The buffer needs to be large enough to store a packet's worth of audio. The implementation uses a circular buffer with absolute virtual indices. 6324
AudioRingBuffer.cpp RingBuffer is used to preallocate a buffer of a specific size in bytes and then use it for writing and reading values without any re-allocation or memory moving. Please note that the total byte size of the buffer modulo the size of the chosen type must be zero. The RingBuffer has been created with audio sample value types in mind, which are integer or float; however, it can be used with any trivial type. It is _not_ thread-safe! The constructor can be called on any thread, but the reads and writes must happen on the same thread, which can be different from the construction thread. 15772
AudioRingBuffer.h AudioRingBuffer works with the audio sample formats float and short. The implementation wraps the RingBuffer and is thus not thread-safe. Reads and writes must happen on the same thread, which may be different from the construction thread. The memory is pre-allocated in the constructor. The sample format has to be specified before the buffer can be used. 3127
AudioSampleFormat.h Audio formats supported in MediaTracks and media elements. Only one of these is supported by AudioStream, and that is determined at compile time (roughly, FLOAT32 on desktops, S16 on mobile). Media decoders produce that format only; queued AudioData always uses that format. 6622
AudioSegment.cpp 9543
AudioSegment.h This allows compilation of nsTArray<AudioSegment> and AutoTArray<AudioSegment> since without it, static analysis fails on the mChunks member being a non-memmovable AutoTArray. Note that AudioSegment(const AudioSegment&) is deleted, so this should never come into effect. 20266
AudioStream.cpp Keep a list of the frames sent to the audio engine in each DataCallback, along with the playback rate at that moment. Since the playback rate and the number of underrun frames can vary in each callback, we need to keep the whole history in order to calculate the playback position of the audio engine correctly. 24734
AudioStream.h @param aFrames The playback position in frames of the audio engine. @return The playback position in frames of the stream, adjusted by playback rate changes and underrun frames. 13582
AudioStreamTrack.cpp 2918
AudioStreamTrack.h 2225
AudioTrack.cpp 2277
AudioTrack.h 1588
AudioTrackList.cpp 1181
AudioTrackList.h 1111
BackgroundVideoDecodingPermissionObserver.cpp 5306
BackgroundVideoDecodingPermissionObserver.h 1439
BaseMediaResource.cpp 5525
BaseMediaResource.h Create a resource, reading data from the channel. Call on main thread only. The caller must follow up by calling resource->Open(). 5742
Benchmark.cpp 12668
Benchmark.h 3479
BitReader.cpp 4362
BitReader.h 1611
BitWriter.cpp 3147
BitWriter.h 1226
BufferMediaResource.h 2498
BufferReader.h 8678
ByteWriter.h 1480
CallbackThreadRegistry.cpp 3005
CallbackThreadRegistry.h 2047
CanvasCaptureMediaStream.cpp 6771
CanvasCaptureMediaStream.h The CanvasCaptureMediaStream is a MediaStream subclass that provides a video track containing frames from a canvas. The header contains an ASCII architectural overview (all on the main thread): the Canvas signals the OutputStreamDriver (a FrameCaptureListener) via SetFrameCapture() when FrameCaptureRequested is set; CanvasCaptureMediaStream issues RequestFrame(); the driver then calls SetImage() / AppendToTrack() into the MTG / SourceMediaTrack. 4522
ChannelMediaDecoder.cpp 19178
ChannelMediaDecoder.h MediaResourceCallback functions 6308
ChannelMediaResource.cpp 37987
ChannelMediaResource.h This class is responsible for managing the suspend count and reporting the suspend status of the channel. 10083
CloneableWithRangeMediaResource.cpp 6065
CloneableWithRangeMediaResource.h 3216
components.conf 3093
CrossGraphPort.cpp 6748
CrossGraphPort.h See MediaTrackGraph::CreateCrossGraphTransmitter() 3416
CubebInputStream.cpp 6197
CubebInputStream.h 2737
CubebUtils.cpp 28445
CubebUtils.h 3648
DecoderTraits.cpp 11149
DecoderTraits.h 2815
DeviceInputTrack.cpp 23875
DeviceInputTrack.h 11600
DOMMediaStream.cpp 15421
DOMMediaStream.h DOMMediaStream is the implementation of the js-exposed MediaStream interface. This is a thin main-thread class grouping MediaStreamTracks together. 8193
DriftCompensation.h DriftCompensator can be used to handle drift between audio and video tracks from the MediaTrackGraph. Drift can occur because audio is driven by a MediaTrackGraph running off an audio callback, and is thus progressed by the clock of one of the audio output devices on the user's machine. Video, on the other hand, is always expressed in wall-clock TimeStamps, i.e., it is progressed by the system clock. These clocks will, over time, drift apart. Do not use the DriftCompensator across multiple audio tracks, as it automatically records the start time of the first audio samples, and all samples for the same audio track on the same audio clock have to be processed to retain accuracy. DriftCompensator is designed to be used from two threads: the audio thread for notifications of audio samples, and the video thread for compensating the drift of video frames to match the audio clock. 4763
DynamicResampler.cpp 16683
DynamicResampler.h DynamicResampler allows updating the output sample rate and the number of channels on the fly. In addition, it maintains an internal buffer for the input data and allows pre-buffering. The Resample() method strives to provide the requested number of output frames by using the input data, including any pre-buffering. If this is not possible, it will not attempt to resample and will return failure. Input data buffering makes use of the AudioRingBuffer. The capacity of the buffer is 100 ms of float audio and it is pre-allocated in the constructor. No extra allocations take place when the input is appended, and, due to a special feature of AudioRingBuffer, no extra copies take place when the input data is fed to the resampler. The sample format must be set before using any method. If the provided sample format is short, the pre-allocated capacity of the input buffer becomes 200 ms of short audio. The DynamicResampler is not thread-safe, so all methods apart from the constructor must be called on the same thread. 15828
ExternalEngineStateMachine.cpp This class monitors the number of crashes of a remote engine process. If the crash count exceeds the defined threshold, `ShouldRecoverProcess()` will return false to indicate that we should not keep spawning that remote process, because it crashes too easily. We also have another mechanism in the media format reader (MFR) to detect the crash count of remote processes, but that only operates during the decoding process. The main reason to use this simple monitor instead of the MFR mechanism is that the MFR cannot detect every crash in the remote process, such as a crash during initialization of the remote engine or while setting up the CDM pipeline, both of which can happen prior to decoding. 42555
ExternalEngineStateMachine.h ExternalPlaybackEngine represents a media engine which is responsible for decoding and playback, which are not controlled by Gecko. 11461
FileBlockCache.cpp 18054
FileBlockCache.h 8113
FileMediaResource.cpp 6887
FileMediaResource.h 4975
ForwardedInputTrack.cpp 10344
ForwardedInputTrack.h See MediaTrackGraph::CreateForwardedInputTrack. 2580
FrameStatistics.h 6827
GetUserMediaRequest.cpp 4644
GetUserMediaRequest.h 3048
GraphDriver.cpp 46985
GraphDriver.h Assume we can run an iteration of the MediaTrackGraph loop in this much time or less. We try to run the control loop at this rate. 32409
GraphRunner.cpp 5305
GraphRunner.h Marks us as shut down and signals mThread, so that it runs until the end. 3988
IdpSandbox.sys.mjs This little class ensures that redirects maintain an https:// origin 8287
ImageToI420.cpp 5446
ImageToI420.h Converts aImage to an I420 image and writes it to the given buffers. 787
Intervals.h Interval defines an interval between two points. Unlike a traditional interval [A,B] where A <= x <= B, the upper boundary B is exclusive: A <= x < B (e.g. [A,B[ or [A,B), depending on where you're living). It provides basic interval arithmetic and fuzzy edges. The type T must provide a default constructor and the +, -, <, <= and == operators. 20870
MediaBlockCacheBase.h 3317
MediaCache.cpp static 105823
MediaCache.h 25908
MediaChannelStatistics.h This class is useful for estimating rates of data passing through some channel. The idea is that activity on the channel "starts" and "stops" over time. At certain times data passes through the channel (usually while the channel is active; data passing through an inactive channel is ignored). The GetRate() function computes an estimate of the "current rate" of the channel, which is some kind of average of the data passing through over the time the channel is active. All methods take "now" as a parameter so the user of this class can control the timeline used. 2967
MediaContainerType.cpp 1119
MediaContainerType.h 1796
MediaData.cpp 20783
MediaData.h 24442
MediaDataDemuxer.h 8270
MediaDecoder.cpp 57238
MediaDecoder.h 30143
MediaDecoderOwner.h 7842
MediaDecoderStateMachine.cpp 174391
MediaDecoderStateMachine.h Each media element for a media file has one thread called the "audio thread". The audio thread writes the decoded audio data to the audio hardware. This is done in a separate thread to ensure that the audio hardware gets a constant stream of data without interruption due to decoding or display. At some point AudioStream will be refactored to have a callback interface where it asks for data, and this thread will no longer be needed. The element/state machine also has a TaskQueue which runs in a SharedThreadPool that is shared with all other elements/decoders. The state machine dispatches tasks to this queue to call into the MediaDecoderReader to request decoded audio or video data. The Reader calls back with decoded samples when it has them available, and the state machine places the decoded samples into its queues for the consuming threads to pull from. The MediaDecoderReader can choose to decode asynchronously, or synchronously and return requested samples synchronously inside its Request*Data() functions via callback. Asynchronous decoding is preferred and should be used for any new readers. Synchronisation of state between the threads is done via a monitor owned by MediaDecoder. The lifetime of the audio thread is controlled by the state machine when it runs on the shared state machine thread. When playback needs to occur, the audio thread is created and an event dispatched to run it. The audio thread exits when audio playback is completed or no longer required. A/V synchronisation is handled by the state machine. It examines the audio playback time and compares it to the next frame in the queue of video frames. If it is time to play the video frame, it is displayed; otherwise the state machine is scheduled to run again at the time of the next frame. Frame skipping is done in the following ways: 1) The state machine will skip all frames in the video queue whose display time is less than the current audio time. This ensures the correct frame for the current time is always displayed. 2) The decode tasks will stop decoding interframes and read ahead to the next keyframe if they determine that decoding the remaining interframes will cause playback issues. This is detected when: a) the amount of audio data in the audio queue drops below a threshold at which audio may start to skip, or b) the video queue drops below a threshold at which frames would be decoded only to be dropped immediately by the decode thread. TODO: In future we should only do this when the Reader is decoding synchronously. When hardware-accelerated graphics is not available, YCbCr conversion is done on the decode task queue when video frames are decoded. The decode task queue pushes decoded audio and video frames into two separate queues, one for audio and one for video. These are kept separate to make it easy to constantly feed audio data to the audio hardware while allowing frame skipping of video data. These queues are threadsafe, and none of the decode, audio, or state machine threads should be able to monopolize them and cause starvation of the other threads. Both queues are bounded by a maximum size. When this size is reached, the decode tasks will no longer request video or audio, depending on which queue has reached the threshold. If both queues are full, no more decode tasks will be dispatched to the decode task queue, so other decoders have an opportunity to run. During playback the audio thread will be idle (via a Wait() on the monitor) if the audio queue is empty. Otherwise it constantly pops audio data off the queue and plays it with a blocking write to the audio hardware (via AudioStream). 22119
MediaDecoderStateMachineBase.cpp 7365
MediaDecoderStateMachineBase.h The state machine class. This manages the decoding and seeking in the MediaDecoderReader on the decode task queue, and A/V sync on the shared state machine thread, and controls the audio "push" thread. All internal state is synchronised via the decoder monitor. State changes are propagated by scheduling the state machine to run another cycle on the shared state machine thread. 10735
MediaDeviceInfo.cpp 1582
MediaDeviceInfo.h 1887
MediaDevices.cpp If requestedMediaTypes is the empty set, return a promise rejected with a TypeError. 32281
MediaDevices.h 4964
MediaEventSource.h A thread-safe tool to communicate "revocation" across threads. It is used to disconnect a listener from the event source to prevent future notifications from coming. Revoke() can be called on any thread; however, it is recommended to call it on the target thread to avoid race conditions. RevocableToken is not exposed to the client code directly; use MediaEventListener below to do the job. 18183
MediaFormatReader.cpp This class tracks shutdown promises to ensure all decoders are shut down completely before MFR continues the rest of the shutdown procedure. 125926
MediaFormatReader.h 31816
MediaInfo.cpp 2353
MediaInfo.h 22611
MediaManager.cpp Using WebRTC backend on Desktops (Mac, Windows, Linux), otherwise default 164794
MediaManager.h Device info that is independent of any Window. MediaDevices can be shared, unlike LocalMediaDevices. 14745
MediaMetadataManager.h 3471
MediaMIMETypes.cpp 8229
MediaMIMETypes.h 8916
MediaPlaybackDelayPolicy.cpp 5203
MediaPlaybackDelayPolicy.h We usually start an AudioChannelAgent when media starts and stop it when media stops. However, when we decide to delay media playback for an unvisited tab, we start the AudioChannelAgent even though media hasn't started, in order to register the agent with AudioChannelService so that the service can notify us when we are able to resume media playback. ResumeDelayedPlaybackAgent is therefore used to handle this special use case of AudioChannelAgent. - Use `GetResumePromise()` to acquire the resume-promise and then do the follow-up resume behavior when the promise is resolved. - Use `UpdateAudibleState()` to update the audible state only when the media info changes, as having an audio track or not is the only criterion for deciding whether we show the delayed-media-playback icon on the tab bar. 3056
MediaPromiseDefs.h 592
MediaQueue.h 8277
MediaRecorder.cpp MediaRecorderReporter measures memory being used by the Media Recorder. It is a singleton reporter and the single class object lives as long as at least one Recorder is registered. In MediaRecorder, the reporter is unregistered when it is destroyed. 69184
MediaRecorder.h The MediaRecorder accepts a MediaStream as input, passed from an application. When the MediaRecorder starts, a MediaEncoder is created and accepts the MediaStreamTracks in the MediaStream as its input source. For each track it creates a TrackEncoder. The MediaEncoder automatically encodes and muxes data from the tracks according to the given MIME type, then stores this data in a MutableBlobStorage object. When a timeslice is set and the MediaEncoder has stored enough data to fill the timeslice, it extracts a Blob from the storage and passes it to MediaRecorder. On RequestData() or Stop(), the MediaEncoder extracts the blob from the storage and returns it to MediaRecorder through a MozPromise. Thread model: when the recorder starts, it creates a worker thread (called the encoder thread) that does all the heavy lifting: encoding, time keeping, muxing. 6984
MediaResource.cpp 16457
MediaResource.h Provides a thread-safe, seek/read interface to resources loaded from a URI. Uses MediaCache to cache data received over Necko's async channel API, thus resolving the mismatch between clients that need efficient random access to the data and protocols that do not support efficient random access, such as HTTP. Instances of this class must be created on the main thread. Most methods must be called on the main thread only. Read, Seek and Tell must only be called on non-main threads. In the case of the Ogg Decoder they are called on the Decode thread for example. You must ensure that no threads are calling these methods once Close is called. Instances of this class are reference counted. Use nsRefPtr for managing the lifetime of instances of this class. The generic implementation of this class is ChannelMediaResource, which can handle any URI for which Necko supports AsyncOpen. The 'file:' protocol can be implemented efficiently with direct random access, so the FileMediaResource implementation class bypasses the cache. For cross-process blob URL, CloneableWithRangeMediaResource is used. MediaResource::Create automatically chooses the best implementation class. 13127
MediaResourceCallback.h A callback used by MediaResource (sub-classes like FileMediaResource, RtspMediaResource, and ChannelMediaResource) to notify various events. Currently this is implemented by MediaDecoder only. Since this class has no pure virtual function, it is convenient to write gtests for the readers without using a mock MediaResource when you don't care about the events notified by the MediaResource. 2311
MediaResult.h 2821
MediaSegment.h Track or graph rate in Hz. Maximum 1 << TRACK_RATE_MAX_BITS Hz. This maximum avoids overflow in conversions between track rates and conversions from seconds. 16410
MediaShutdownManager.cpp 6075
MediaShutdownManager.h 3532
MediaSpan.h 4577
MediaStatistics.h 3271
MediaStreamError.cpp 3929
MediaStreamError.h 3416
MediaStreamTrack.cpp MTGListener monitors state changes of the media flowing through the MediaTrackGraph. For changes to PrincipalHandle the following applies: When the main thread principal for a MediaStreamTrack changes, its principal will be set to the combination of the previous principal and the new one. As a PrincipalHandle change later happens on the MediaTrackGraph thread, we will be notified. If the latest principal on main thread matches the PrincipalHandle we just saw on MTG thread, we will set the track's principal to the new one. We know at this point that the old principal has been flushed out and data under it cannot leak to consumers. In case of multiple changes to the main thread state, the track's principal will be a combination of its old principal and all the new ones until the latest main thread principal matches the PrincipalHandle on the MTG thread. 20296
MediaStreamTrack.h Common interface through which a MediaStreamTrack can communicate with its producer on the main thread. Kept alive by a strong ref in all MediaStreamTracks (original and clones) sharing this source. 19998
MediaStreamWindowCapturer.cpp 2215
MediaStreamWindowCapturer.h Given a DOMMediaStream and a window id, this class will pipe the audio from all live audio tracks in the stream to the MediaTrackGraph's window capture mechanism. 1555
MediaTimer.cpp 5912
MediaTimer.h 5358
MediaTrack.cpp 1240
MediaTrack.h Base class of AudioTrack and VideoTrack. The AudioTrack and VideoTrack objects represent specific tracks of a media resource. Each track has an identifier, category, label, and language; even if a track is removed from its corresponding track list, those aspects do not change. When fetching the media resource, an audio/video track is created if the media resource is found to have an audio/video track. When the UA has learned that an audio/video track has ended, the track is removed from its corresponding track list. Although AudioTrack and VideoTrack are not EventTargets, TextTrack is, and TextTrack inherits from MediaTrack as well (or is going to). 2694
MediaTrackGraph.cpp A hash table containing the graph instances, one per Window ID, sample rate, and device ID combination. 148030
MediaTrackGraph.h MediaTrackGraph is a framework for synchronized audio/video processing and playback. It is designed to be used by other browser components such as HTML media elements, media capture APIs, real-time media streaming APIs, multitrack media APIs, and advanced audio APIs. The MediaTrackGraph uses a dedicated thread to process media --- the media graph thread. This ensures that we can process media through the graph without blocking on main-thread activity. The media graph is only modified on the media graph thread, to ensure graph changes can be processed without interfering with media processing. All interaction with the media graph thread is done with message passing. APIs that modify the graph or its properties are described as "control APIs". These APIs are asynchronous; they queue graph changes internally and those changes are processed all-at-once by the MediaTrackGraph. The MediaTrackGraph monitors the main thread event loop via nsIAppShell::RunInStableState to ensure that graph changes from a single event loop task are always processed all together. Control APIs should only be used on the main thread, currently; we may be able to relax that later. To allow precise synchronization of times in the control API, the MediaTrackGraph maintains a "media timeline". Control APIs that take or return times use that timeline. Those times never advance during an event loop task. This time is returned by MediaTrackGraph::GetCurrentTime(). Media decoding, audio processing and media playback use thread-safe APIs to the media graph to ensure they can continue while the main thread is blocked. When the graph is changed, we may need to throw out buffered data and reprocess it. This is triggered automatically by the MediaTrackGraph. 44007
MediaTrackGraphImpl.h A per-track update message passed from the media graph thread to the main thread. 38386
MediaTrackList.cpp 4838
MediaTrackList.h Base class of AudioTrackList and VideoTrackList. The AudioTrackList and VideoTrackList objects represent a dynamic list of zero or more audio and video tracks respectively. When a media element is to forget its media-resource-specific tracks, its audio track list and video track list will be emptied. 3555
MediaTrackListener.cpp 3628
MediaTrackListener.h This is a base class for media graph thread listener callbacks locked to specific tracks. Override methods to be notified of audio or video data or changes in track state. All notification methods are called from the media graph thread. Overriders of these methods are responsible for all synchronization. Beware! These methods are called without the media graph monitor held, so reentry into media graph methods is possible, although very much discouraged! You should do something non-blocking and non-reentrant (e.g. dispatch an event to some thread) and return. The listener is not allowed to add/remove any listeners from the parent track. If a listener is attached to a track that has already ended, we guarantee to call NotifyEnded. 7681
MemoryBlockCache.cpp 8173
MemoryBlockCache.h 3144
metrics.yaml 10406
MPSCQueue.h 5539
nsIAudioDeviceInfo.idl 1959
nsIDocumentActivity.h Use this macro when declaring classes that implement this interface. 1127
nsIMediaDevice.idl 731
nsIMediaManager.idl Returns an array of inner windows that have active captures. 1818
Pacer.h Pacer<T> takes a queue of Ts tied to timestamps, and emits PacedItemEvents for every T at its corresponding timestamp. The queue is ordered: enqueuing an item at time t drops all queued items with timestamps later than t. This is because of how video sources work (some send out frames in the future, some don't), and to allow swapping one source for another. It supports a duplication interval: if no new item is enqueued within the duplication interval since the last enqueued item, the last enqueued item is emitted again. 4807
PeerConnection.sys.mjs 58370
PeerConnectionIdp.sys.mjs Creates an IdP helper. @param win (object) the window we are working for @param timeout (int) the timeout in milliseconds 11137
PrincipalChangeObserver.h A PrincipalChangeObserver for any type, but originating from DOMMediaStream, then expanded to MediaStreamTrack. Used to learn about dynamic changes to an object's principal. Operations relating to these observers must be confined to the main thread. 904
PrincipalHandle.h The level of privacy of a principal as considered by RTCPeerConnection. 1945
QueueObject.cpp 947
QueueObject.h 845
ReaderProxy.cpp 8363
ReaderProxy.h A wrapper around MediaFormatReader to offset the timestamps of Audio/Video samples by the start time to ensure MDSM can always assume zero start time. It also adjusts the seek target passed to Seek() to ensure correct seek time is passed to the underlying reader. 3906
SeekJob.cpp 853
SeekJob.h 862
SeekTarget.h 2811
SelfRef.h 1015
SharedBuffer.h Base class for objects with a thread-safe refcount and a virtual destructor. 3707
TimeUnits.cpp 13162
TimeUnits.h 12197
Tracing.cpp 2715
Tracing.h 3524
UnderrunHandler.h 970
UnderrunHandlerLinux.cpp 2322
UnderrunHandlerNoop.cpp 460
VideoFrameContainer.cpp 8741
VideoFrameContainer.h This object is used in the decoder backend threads and the main thread to manage the "current video frame" state. This state includes timing data and an intrinsic size (see below). This has to be a thread-safe object since it's accessed by resource decoders and other off-main-thread components. So we can't put this state in the media element itself ... well, maybe we could, but it could be risky and/or confusing. 6025
VideoFrameConverter.h An active VideoFrameConverter actively converts queued video frames. While inactive, we keep track of the frame most recently queued for processing, so it can be immediately sent out once activated. 16635
VideoLimits.h 775
VideoOutput.h 11182
VideoPlaybackQuality.cpp 1373
VideoPlaybackQuality.h 1598
VideoSegment.cpp 3509
VideoSegment.h 5654
VideoStreamTrack.cpp 3003
VideoStreamTrack.h Whether this VideoStreamTrack's video frames will have an alpha channel. 1742
VideoTrack.cpp 2864
VideoTrack.h 2172
VideoTrackList.cpp 2796
VideoTrackList.h 1387
VideoUtils.cpp 32689
VideoUtils.h ReentrantMonitorConditionallyEnter Enters the supplied monitor only if the conditional value |aEnter| is true. E.g. Used to allow unmonitored read access on the decode thread, and monitored access on all other threads. 20659
VorbisUtils.h 867
WavDumper.h If MOZ_DUMP_AUDIO is set, this dumps the output of an audio stream to a file on disk, as 16-bit integers. The sandbox needs to be disabled for this to work. 4070
WebMSample.h 1731681
XiphExtradata.cpp 2931
XiphExtradata.h This converts a list of headers to the canonical form of extradata for Xiph codecs in non-Ogg containers. We use it to pass those headers from demuxer to decoder even when demuxing from an Ogg container. 1169