Name Description Size
ADTSDecoder.cpp 1540
ADTSDecoder.h 914
ADTSDemuxer.cpp 24563
ADTSDemuxer.h 4829
AsyncLogger.h Implementation of an asynchronous lock-free logging system. 9362
AudioBufferUtils.h The classes in this file provide an interface that uses frames as a unit. However, they store their offsets in samples (because it's handy for pointer operations) and provide functions to convert between the two units. 7369
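A minimal sketch of the frames/samples bookkeeping described for AudioBufferUtils.h (helper names hypothetical, not the header's actual API): for interleaved audio, one frame holds one sample per channel.

  #include <cstddef>

  // Hypothetical stand-ins for the conversions the header's classes do
  // internally: sampleCount = frameCount * channelCount.
  static size_t FramesToSamples(size_t aFrames, size_t aChannels) {
    return aFrames * aChannels;
  }
  static size_t SamplesToFrames(size_t aSamples, size_t aChannels) {
    return aSamples / aChannels;  // assumes a whole number of frames
  }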
AudioCaptureStream.cpp 5175
AudioCaptureStream.h See MediaStreamGraph::CreateAudioCaptureStream. 1245
AudioChannelFormat.cpp 510
AudioChannelFormat.h This file provides utilities for upmixing and downmixing channels. The channel layouts, upmixing and downmixing are consistent with the Web Audio spec. Channel layouts for up to 6 channels: mono { M } stereo { L, R } { L, R, C } quad { L, R, SL, SR } { L, R, C, SL, SR } 5.1 { L, R, C, LFE, SL, SR } Only 1, 2, 4 and 6 are currently defined in Web Audio. 9077
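As an illustration of the Web Audio downmixing rules mentioned for AudioChannelFormat.h, a sketch of a quad-to-stereo downmix (function name hypothetical; the spec folds each surround channel into the matching front channel at half gain):

  #include <cstddef>

  // outL = 0.5 * (L + SL), outR = 0.5 * (R + SR), per the Web Audio spec.
  void DownmixQuadToStereo(const float* aIn /* interleaved L,R,SL,SR */,
                           float* aOut /* interleaved L,R */,
                           size_t aFrames) {
    for (size_t i = 0; i < aFrames; ++i) {
      aOut[i * 2 + 0] = 0.5f * (aIn[i * 4 + 0] + aIn[i * 4 + 2]);
      aOut[i * 2 + 1] = 0.5f * (aIn[i * 4 + 1] + aIn[i * 4 + 3]);
    }
  }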
AudioCompactor.cpp 2329
AudioCompactor.h 4608
AudioConfig.cpp AudioConfig::ChannelLayout 10797
AudioConfig.h 9843
AudioConverter.cpp Parts derived from MythTV AudioConvert Class Created by Jean-Yves Avenard. Copyright (C) Bubblestuff Pty Ltd 2013 Copyright (C) foobum@gmail.com 2010 16557
AudioConverter.h 9927
AudioDeviceInfo.cpp 5273
AudioDeviceInfo.h 1925
AudioMixer.h This class mixes multiple streams of audio together to output a single audio stream. AudioMixer::Mix is to be called repeatedly with buffers that have the same length, sample rate, sample format and channel count. This class works with interleaved and planar buffers, but the buffers mixed must all be of the same type during a mixing cycle. When all the tracks have been mixed, calling FinishMixing will call back with a buffer containing the mixed audio data. This class is not thread safe. 4515
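The mixing cycle described for AudioMixer.h follows a simple accumulate-then-callback shape; a minimal sketch under those assumptions (hypothetical class, not the real AudioMixer interface):

  #include <cstddef>
  #include <vector>

  class SimpleMixer {
   public:
    // Called once per track per cycle, with equal-length buffers.
    void Mix(const float* aSamples, size_t aCount) {
      if (mMix.size() < aCount) mMix.resize(aCount, 0.f);
      for (size_t i = 0; i < aCount; ++i) mMix[i] += aSamples[i];
    }
    // Hands the finished mix to the consumer and resets for the next cycle.
    template <typename Callback>
    void FinishMixing(Callback&& aCallback) {
      aCallback(mMix.data(), mMix.size());
      mMix.clear();
    }
   private:
    std::vector<float> mMix;
  };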
AudioNotificationReceiver.cpp A list containing all clients subscribing to the device-changed notifications. 2662
AudioNotificationReceiver.h Architecture for sending/receiving default device-changed notifications. [ASCII diagram: in the chrome process, an AudioNotification feeds the AudioNotificationSender, which fans out via ContentParents over PContent IPC to each content process's ContentChild; there the AudioNotificationReceiver notifies its registered DeviceChangeListeners.] Steps: 1) Initialize the AudioNotificationSender when a ContentParent is created. 2) Create an AudioNotification to get the device-changed signal from the system. 3) Register the DeviceChangeListener with the AudioNotificationReceiver when it's created. 4) When the default device is changed, the AudioNotification gets the signal and 5) passes the message to the AudioNotificationSender. 6) The AudioNotificationSender sends the device-changed notification via PContent. 7) The ContentChild calls the AudioNotificationReceiver to 8) notify all registered audio streams to reconfigure their output devices. Notes: a) There is only one AudioNotificationSender and one AudioNotification in a chrome process. b) There is only one AudioNotificationReceiver, but possibly many DeviceChangeListeners, in a content process. c) There may be many ContentParents in a chrome process. d) There is only one ContentChild in a content process. e) All DeviceChangeListeners are registered in the AudioNotificationReceiver. f) All ContentParents are registered in the AudioNotificationSender. 4179
AudioNotificationSender.cpp A runnable task to notify the audio device-changed event. 6826
AudioNotificationSender.h 984
AudioPacketizer.h This class takes arbitrary input data and returns packets of a specific size. In the process, it can convert audio samples from 16-bit integers to float (or vice versa). Input and output, as well as length units in the public interface, are interleaved frames. Allocation of output buffers can be performed by this class. Buffers can simply be delete-d, because packets are intended to be sent off to non-Gecko code using normal pointer/length pairs. Alternatively, consumers can pass in a buffer into which the output is copied. The buffer needs to be large enough to store a packet's worth of audio. The implementation uses a circular buffer using absolute virtual indices. 6284
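The "absolute virtual indices" technique mentioned for AudioPacketizer.h is worth a sketch (hypothetical type): read/write indices grow monotonically and are only reduced modulo the capacity on access, so the fill level is a plain subtraction with no wrap-around ambiguity.

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  class VirtualIndexRing {
   public:
    explicit VirtualIndexRing(size_t aCapacity) : mBuf(aCapacity) {}
    size_t Available() const { return size_t(mWrite - mRead); }
    void Push(float aSample) { mBuf[mWrite++ % mBuf.size()] = aSample; }
    float Pop() { return mBuf[mRead++ % mBuf.size()]; }
   private:
    std::vector<float> mBuf;
    uint64_t mRead = 0;   // absolute count of samples ever read
    uint64_t mWrite = 0;  // absolute count of samples ever written
  };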
AudioSampleFormat.h Audio formats supported in MediaStreams and media elements. Only one of these is supported by AudioStream, and that is determined at compile time (roughly, FLOAT32 on desktops, S16 on mobile). Media decoders produce that format only; queued AudioData always uses that format. 6623
AudioSegment.cpp 7577
AudioSegment.h This allows compilation of nsTArray<AudioSegment> and AutoTArray<AudioSegment> since without it, static analysis fails on the mChunks member being a non-memmovable AutoTArray. Note that AudioSegment(const AudioSegment&) is deleted, so this should never come into effect. 16563
AudioStream.cpp Keep a list of frames sent to the audio engine in each DataCallback, along with the playback rate at the moment. Since the playback rate and the number of underrun frames can vary in each callback, we need to keep the whole history in order to calculate the playback position of the audio engine correctly. 23207
AudioStream.h @param aFrames The playback position in frames of the audio engine. @return The playback position in frames of the stream, adjusted by playback rate changes and underrun frames. 10988
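A hedged sketch of the history-based position tracking that AudioStream.cpp/.h describe (types hypothetical): each data callback appends the frame count and the playback rate in effect, and the stream-time position is the rate-weighted sum.

  #include <cstdint>
  #include <vector>

  struct RateChunk {
    int64_t mFrames;  // frames sent to the engine in one callback
    double mRate;     // playback rate at that moment
  };

  // Frames consumed at 2x playback advance the stream twice as fast, so
  // each chunk is scaled by its rate (underrun frames would be subtracted
  // before this in the real accounting).
  double StreamPositionInFrames(const std::vector<RateChunk>& aHistory) {
    double position = 0.0;
    for (const RateChunk& c : aHistory) {
      position += c.mFrames * c.mRate;
    }
    return position;
  }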
AudioStreamTrack.cpp 681
AudioStreamTrack.h 1453
AudioTrack.cpp 1854
AudioTrack.h 1115
AudioTrackList.cpp 1207
AudioTrackList.h 1137
AutoplayPolicy.cpp 9031
AutoplayPolicy.h AutoplayPolicy is used to manage autoplay logic for all kinds of media, including MediaElement, Web Audio and Web Speech. Autoplay can be disabled by setting the pref "media.autoplay.default" to anything but nsIAutoplay::Allowed. Once the user disables autoplay, media can only be played if one of the following conditions is true: 1) The owner document is activated by user gestures; we restrict user gestures to "mouse click", "keyboard press" and "touch". 2) The media content is muted, or the video has no audio content. 3) The document's origin has the "autoplay-media" permission. 2444
BackgroundVideoDecodingPermissionObserver.cpp 5496
BackgroundVideoDecodingPermissionObserver.h 1417
BaseMediaResource.cpp 5140
BaseMediaResource.h Create a resource, reading data from the channel. Call on main thread only. The caller must follow up by calling resource->Open(). 5541
benchmark 1
Benchmark.cpp 12232
Benchmark.h 3407
BitReader.cpp 3921
BitReader.h 1489
BitWriter.cpp 2883
BitWriter.h 1130
bridge 4
BufferMediaResource.h 2490
BufferReader.h 8105
ByteWriter.h 1480
CanvasCaptureMediaStream.cpp 6894
CanvasCaptureMediaStream.h The CanvasCaptureMediaStream is a MediaStream subclass that provides a video track containing frames from a canvas. [ASCII diagram: on the main thread, the Canvas signals FrameCaptureRequested?/SetFrameCapture to the OutputStreamDriver (a FrameCaptureListener); the CanvasCaptureMediaStream drives it via RequestFrame(); the driver calls SetImage()/AppendToTrack() on the MSG / SourceMediaStream.] 4822
ChannelMediaDecoder.cpp 18344
ChannelMediaDecoder.h MediaResourceCallback functions 5960
ChannelMediaResource.cpp 34640
ChannelMediaResource.h This class is responsible for managing the suspend count and report suspend status of channel. 9612
CloneableWithRangeMediaResource.cpp 5516
CloneableWithRangeMediaResource.h 3151
components.conf 2994
CubebUtils.cpp 23547
CubebUtils.h 2073
DecoderTraits.cpp 11606
DecoderTraits.h 2596
doctor 30
DOMMediaStream.cpp 36754
DOMMediaStream.h 25353
DriftCompensation.h DriftCompensator can be used to handle drift between audio and video tracks from the MediaStreamGraph. Drift can occur because audio is driven by a MediaStreamGraph running off an audio callback, and thus is progressed by the clock of one of the audio output devices on the user's machine. Video, on the other hand, is always expressed in wall-clock TimeStamps, i.e., it's progressed by the system clock. These clocks will, over time, drift apart. Do not use the DriftCompensator across multiple audio tracks, as it will automatically record the start time of the first audio samples, and all samples for the same audio track on the same audio clock will have to be processed to retain accuracy. DriftCompensator is designed to be used from two threads: the audio thread, for notifications of audio samples, and the video thread, for compensating drift of video frames to match the audio clock. 4768
eme 26
encoder 12
fake-cdm 8
FileBlockCache.cpp 17789
FileBlockCache.h 7903
FileMediaResource.cpp 6366
FileMediaResource.h 4665
flac 7
FrameStatistics.h 6142
GetUserMediaRequest.cpp 2827
GetUserMediaRequest.h 1881
gmp This directory contains code supporting Gecko Media Plugins (GMPs). The GMP API is not the same thing as the Media Plugin API (MPAPI). 93
gmp-plugin-openh264 3
GraphDriver.cpp 40022
GraphDriver.h Assume we can run an iteration of the MediaStreamGraph loop in this much time or less. We try to run the control loop at this rate. 22439
GraphRunner.cpp 3802
GraphRunner.h Marks us as shut down and signals mThread, so that it runs until the end. 2885
gtest 61
hls 7
IdpSandbox.jsm This little class ensures that redirects maintain an https:// origin 8430
imagecapture 5
ImageToI420.cpp 5353
ImageToI420.h Converts aImage to an I420 image and writes it to the given buffers. 765
Intervals.h Interval defines an interval between two points. Unlike a traditional interval [A,B] where A <= x <= B, the upper boundary B is exclusive: A <= x < B (i.e. [A,B[ or [A,B) depending on where you're living). It provides basic interval arithmetic and fuzzy edges. The type T must provide a default constructor and +, -, <, <= and == operators. 18873
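The exclusive upper bound described for Intervals.h is the usual half-open convention; a small sketch (hypothetical type, not the actual Interval template):

  // Contains x iff mStart <= x < mEnd.
  template <typename T>
  struct HalfOpenInterval {
    T mStart;
    T mEnd;
    bool Contains(const T& aX) const { return mStart <= aX && aX < mEnd; }
    bool Intersects(const HalfOpenInterval& aOther) const {
      return mStart < aOther.mEnd && aOther.mStart < mEnd;
    }
  };
  // With exclusive upper bounds, [0,10) and [10,20) touch without
  // overlapping, which makes adjacent buffered ranges easy to coalesce.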
ipc 45
MediaBlockCacheBase.h 3317
MediaCache.cpp 104000
MediaCache.h 25898
mediacapabilities 3
MediaChannelStatistics.h This class is useful for estimating rates of data passing through some channel. The idea is that activity on the channel "starts" and "stops" over time. At certain times data passes through the channel (usually while the channel is active; data passing through an inactive channel is ignored). The GetRate() function computes an estimate of the "current rate" of the channel, which is some kind of average of the data passing through over the time the channel is active. All methods take "now" as a parameter so the user of this class can control the timeline used. 2967
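A minimal sketch of the rate-estimation idea in MediaChannelStatistics.h (hypothetical class; the real one uses its own time types): accumulate bytes and active time, and report their ratio. Every method takes "now" so the caller controls the timeline, as the description notes.

  #include <cstdint>

  class RateEstimator {
   public:
    void Start(double aNow) { mActiveSince = aNow; mActive = true; }
    void Stop(double aNow) {
      if (mActive) { mActiveTime += aNow - mActiveSince; mActive = false; }
    }
    void AddBytes(uint64_t aBytes) { if (mActive) mBytes += aBytes; }
    double GetRate(double aNow) const {
      double t = mActiveTime + (mActive ? aNow - mActiveSince : 0.0);
      return t > 0.0 ? double(mBytes) / t : 0.0;
    }
   private:
    uint64_t mBytes = 0;
    double mActiveTime = 0.0;   // seconds the channel has been active
    double mActiveSince = 0.0;  // start of the current active period
    bool mActive = false;
  };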
MediaContainerType.cpp 1119
MediaContainerType.h 1796
MediaData.cpp 18462
MediaData.h 22569
MediaDataDemuxer.h 7713
MediaDecoder.cpp 47059
MediaDecoder.h 23133
MediaDecoderOwner.h Fires a timeupdate event. If aPeriodic is true, the event will only be fired if we've not fired a timeupdate event (for any reason) in the last 250ms, as required by the spec when the current time is periodically increasing during playback. 7819
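The 250ms rule for periodic timeupdate events described in MediaDecoderOwner.h amounts to a simple throttle; a sketch under that assumption (hypothetical helper, not the owner's actual implementation):

  #include <chrono>

  class TimeupdateThrottle {
   public:
    // Returns whether a timeupdate event should fire now. Non-periodic
    // events always fire; periodic ones are suppressed within 250ms of
    // the last event fired for any reason.
    bool ShouldFire(bool aPeriodic) {
      auto now = std::chrono::steady_clock::now();
      if (aPeriodic && now - mLast < std::chrono::milliseconds(250)) {
        return false;
      }
      mLast = now;
      return true;
    }
   private:
    std::chrono::steady_clock::time_point mLast{};
  };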
MediaDecoderStateMachine.cpp 135895
MediaDecoderStateMachine.h Each media element for a media file has one thread called the "audio thread". The audio thread writes the decoded audio data to the audio hardware. This is done in a separate thread to ensure that the audio hardware gets a constant stream of data without interruption due to decoding or display. At some point AudioStream will be refactored to have a callback interface where it asks for data, and this thread will no longer be needed.
The element/state machine also has a TaskQueue which runs in a SharedThreadPool that is shared with all other elements/decoders. The state machine dispatches tasks to this to call into the MediaDecoderReader to request decoded audio or video data. The Reader will call back with decoded samples when it has them available, and the state machine places the decoded samples into its queues for the consuming threads to pull from. The MediaDecoderReader can choose to decode asynchronously, or synchronously and return requested samples synchronously inside its Request*Data() functions via callback. Asynchronous decoding is preferred, and should be used for any new readers.
Synchronisation of state between the threads is done via a monitor owned by MediaDecoder. The lifetime of the audio thread is controlled by the state machine when it runs on the shared state machine thread. When playback needs to occur, the audio thread is created and an event dispatched to run it. The audio thread exits when audio playback is completed or no longer required.
A/V synchronisation is handled by the state machine. It examines the audio playback time and compares this to the next frame in the queue of video frames. If it is time to play the video frame, it is displayed; otherwise the state machine is scheduled to run again at the time of the next frame.
Frame skipping is done in the following ways: 1) The state machine will skip all frames in the video queue whose display time is less than the current audio time. This ensures the correct frame for the current time is always displayed. 2) The decode tasks will stop decoding interframes and read to the next keyframe if they determine that decoding the remaining interframes will cause playback issues. They detect this by: a) the amount of audio data in the audio queue dropping below a threshold whereby audio may start to skip, or b) the video queue dropping below a threshold where it would be decoding video data that won't be displayed due to the decode thread dropping frames immediately. TODO: In future we should only do this when the Reader is decoding synchronously.
When hardware-accelerated graphics is not available, YCbCr conversion is done on the decode task queue when video frames are decoded. The decode task queue pushes decoded audio and video frames into two separate queues: one for audio and one for video. These are kept separate to make it easy to constantly feed audio data to the audio hardware while allowing frame skipping of video data. These queues are threadsafe, and none of the decode, audio, or state machine threads should be able to monopolize them and cause starvation of the other threads. Both queues are bounded by a maximum size. When this size is reached, the decode tasks will no longer request video or audio, depending on the queue that has reached the threshold. If both queues are full, no more decode tasks will be dispatched to the decode task queue, so other decoders will have an opportunity to run.
During playback the audio thread will be idle (via a Wait() on the monitor) if the audio queue is empty. Otherwise it constantly pops audio data off the queue and plays it with a blocking write to the audio hardware (via AudioStream). 29285
MediaDeviceInfo.cpp 1608
MediaDeviceInfo.h 1824
MediaDevices.cpp 9241
MediaDevices.h 2638
MediaEventSource.h A thread-safe tool to communicate "revocation" across threads. It is used to disconnect a listener from the event source to prevent future notifications from arriving. Revoke() can be called on any thread; however, it is recommended to call it on the target thread to avoid race conditions. RevocableToken is not exposed to the client code directly; use MediaEventListener below to do the job. 13975
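The revocation pattern described for MediaEventSource.h boils down to an atomic flag that the notifying side checks; a minimal sketch (simplified; the real RevocableToken is refcounted and integrated with the listener machinery):

  #include <atomic>

  class SimpleRevocableToken {
   public:
    // Safe to call from any thread; afterwards the event source's
    // notify path sees IsRevoked() and skips the listener.
    void Revoke() { mRevoked.store(true, std::memory_order_release); }
    bool IsRevoked() const {
      return mRevoked.load(std::memory_order_acquire);
    }
   private:
    std::atomic<bool> mRevoked{false};
  };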
MediaFormatReader.cpp This helper class is used to report telemetry on the time taken to recover a decoder from a GPU crash. It uses the MediaDecoderOwnerID to identify which video we're dealing with. It uses the MediaDataDecoderID to make sure that the old MediaDataDecoder has been deleted and that we have recovered. It reports two recovery times: one measured from when the GPU crashed (that is, when VideoDecoderChild::ActorDestroy() is called) and the other measured from when the MFR is notified with the NS_ERROR_DOM_MEDIA_NEED_NEW_DECODER error. 110591
MediaFormatReader.h 27549
MediaInfo.cpp 2399
MediaInfo.h 13980
MediaManager.cpp We use the WebRTC backend on desktop (Mac, Windows, Linux), otherwise the default. 174575
MediaManager.h 12729
MediaMetadataManager.h 3496
MediaMIMETypes.cpp 9708
MediaMIMETypes.h 9412
MediaPromiseDefs.h 592
MediaQueue.h 5836
MediaRecorder.cpp 58653
MediaRecorder.h Implementation of https://dvcs.w3.org/hg/dap/raw-file/default/media-stream-capture/MediaRecorder.html The MediaRecorder accepts a MediaStream as input source, passed from the UA. When recording starts, a MediaEncoder is created to accept the MediaStream as its input source. The encoder gets the raw data as track data changes, encodes it into the selected MIME type, and then stores the encoded data in a MutableBlobStorage object. The encoded data is extracted on every timeslice passed to the Start function call, or via the RequestData function. Thread model: when the recorder starts, it creates a "Media Encoder" thread that reads data from the MediaEncoder object and stores buffers in the MutableBlobStorage object, as well as extracting the encoded data and creating blobs on every timeslice passed to the start function, or when RequestData is called by the UA. 7740
MediaResource.cpp 16919
MediaResource.h Provides a thread-safe, seek/read interface to resources loaded from a URI. Uses MediaCache to cache data received over Necko's async channel API, thus resolving the mismatch between clients that need efficient random access to the data and protocols that do not support efficient random access, such as HTTP. Instances of this class must be created on the main thread. Most methods must be called on the main thread only. Read, Seek and Tell must only be called on non-main threads; in the case of the Ogg decoder, for example, they are called on the decode thread. You must ensure that no threads are calling these methods once Close is called. Instances of this class are reference counted. Use nsRefPtr for managing the lifetime of instances of this class. The generic implementation of this class is ChannelMediaResource, which can handle any URI for which Necko supports AsyncOpen. The 'file:' protocol can be implemented efficiently with direct random access, so the FileMediaResource implementation class bypasses the cache. For cross-process blob URLs, CloneableWithRangeMediaResource is used. MediaResource::Create automatically chooses the best implementation class. 13236
MediaResourceCallback.h A callback used by MediaResource (sub-classes like FileMediaResource, RtspMediaResource, and ChannelMediaResource) to notify various events. Currently this is implemented by MediaDecoder only. Since this class has no pure virtual function, it is convenient to write gtests for the readers without using a mock MediaResource when you don't care about the events notified by the MediaResource. 2303
MediaResult.h 2692
MediaSegment.h Track or graph rate in Hz. Maximum 1 << TRACK_RATE_MAX_BITS Hz. This maximum avoids overflow in conversions between track rates and conversions from seconds. 16529
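To see why MediaSegment.h caps the track rate, consider a rate conversion, which multiplies a tick count by one rate before dividing by the other; bounding the rate keeps the intermediate product inside int64_t. A sketch (the actual TRACK_RATE_MAX_BITS value is assumed here):

  #include <cstdint>

  constexpr int64_t TRACK_RATE_MAX = int64_t(1) << 20;  // assumed bound

  // aTicks * aToRate must not overflow; with rates below TRACK_RATE_MAX
  // this holds for any realistic media duration.
  int64_t ConvertTicks(int64_t aTicks, int64_t aFromRate, int64_t aToRate) {
    return aTicks * aToRate / aFromRate;
  }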
MediaShutdownManager.cpp 5164
MediaShutdownManager.h 3571
mediasink 12
mediasource 26
MediaStatistics.h 3271
MediaStreamError.cpp 3024
MediaStreamError.h 3244
MediaStreamGraph.cpp A hash table containing the graph instances, one per document. The key is a hash of nsPIDOMWindowInner, see `WindowToHash`. 142816
MediaStreamGraph.h MediaStreamGraph is a framework for synchronized audio/video processing and playback. It is designed to be used by other browser components such as HTML media elements, media capture APIs, real-time media streaming APIs, multitrack media APIs, and advanced audio APIs. The MediaStreamGraph uses a dedicated thread to process media --- the media graph thread. This ensures that we can process media through the graph without blocking on main-thread activity. The media graph is only modified on the media graph thread, to ensure graph changes can be processed without interfering with media processing. All interaction with the media graph thread is done with message passing. APIs that modify the graph or its properties are described as "control APIs". These APIs are asynchronous; they queue graph changes internally and those changes are processed all-at-once by the MediaStreamGraph. The MediaStreamGraph monitors the main thread event loop via nsIAppShell::RunInStableState to ensure that graph changes from a single event loop task are always processed all together. Control APIs should only be used on the main thread, currently; we may be able to relax that later. To allow precise synchronization of times in the control API, the MediaStreamGraph maintains a "media timeline". Control APIs that take or return times use that timeline. Those times never advance during an event loop task. This time is returned by MediaStreamGraph::GetCurrentTime(). Media decoding, audio processing and media playback use thread-safe APIs to the media graph to ensure they can continue while the main thread is blocked. When the graph is changed, we may need to throw out buffered data and reprocess it. This is triggered automatically by the MediaStreamGraph. 50328
MediaStreamGraphImpl.h A per-stream update message passed from the media graph thread to the main thread. 34446
MediaStreamListener.cpp 3750
MediaStreamListener.h This is a base class for media graph thread listener callbacks locked to specific tracks. Override methods to be notified of audio or video data or changes in track state. All notification methods are called from the media graph thread. Overriders of these methods are responsible for all synchronization. Beware! These methods are called without the media graph monitor held, so reentry into media graph methods is possible, although very much discouraged! You should do something non-blocking and non-reentrant (e.g. dispatch an event to some thread) and return. The listener is not allowed to add/remove any listeners from the parent stream. If a listener is attached to a track that has already ended, we guarantee to call NotifyEnded. 8002
MediaStreamTrack.cpp MSGListener monitors state changes of the media flowing through the MediaStreamGraph. For changes to PrincipalHandle the following applies: When the main thread principal for a MediaStreamTrack changes, its principal will be set to the combination of the previous principal and the new one. As a PrincipalHandle change later happens on the MediaStreamGraph thread, we will be notified. If the latest principal on main thread matches the PrincipalHandle we just saw on MSG thread, we will set the track's principal to the new one. We know at this point that the old principal has been flushed out and data under it cannot leak to consumers. In case of multiple changes to the main thread state, the track's principal will be a combination of its old principal and all the new ones until the latest main thread principal matches the PrincipalHandle on the MSG thread. 20922
MediaStreamTrack.h Common interface through which a MediaStreamTrack can communicate with its producer on the main thread. Kept alive by a strong ref in all MediaStreamTracks (original and clones) sharing this source. 19044
MediaStreamTypes.h Describes how a track should be disabled. ENABLED Not disabled. SILENCE_BLACK Audio data is turned into silence, video frames are made black. SILENCE_FREEZE Audio data is turned into silence, video freezes at last frame. 1353
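The three modes listed for MediaStreamTypes.h map naturally onto an enum; a sketch mirroring the description (the header's actual declaration may differ):

  enum class DisabledTrackMode {
    ENABLED,         // not disabled
    SILENCE_BLACK,   // audio becomes silence, video frames become black
    SILENCE_FREEZE,  // audio becomes silence, video freezes at last frame
  };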
MediaTimer.cpp 6316
MediaTimer.h 5397
MediaTrack.cpp 1238
MediaTrack.h Base class of AudioTrack and VideoTrack. The AudioTrack and VideoTrack objects represent specific tracks of a media resource. Each track has aspects of an identifier, category, label, and language, even if a track is removed from its corresponding track list, those aspects do not change. When fetching the media resource, an audio/video track is created if the media resource is found to have an audio/video track. When the UA has learned that an audio/video track has ended, this audio/video track will be removed from its corresponding track list. Although AudioTrack and VideoTrack are not EventTargets, TextTrack is, and TextTrack inherits from MediaTrack as well (or is going to). 2672
MediaTrackList.cpp 4649
MediaTrackList.h Base class of AudioTrackList and VideoTrackList. The AudioTrackList and VideoTrackList objects represent a dynamic list of zero or more audio and video tracks respectively. When a media element is to forget its media-resource-specific tracks, its audio track list and video track list will be emptied. 3371
MemoryBlockCache.cpp 12888
MemoryBlockCache.h 3128
moz.build 8751
mp3 7
mp4 25
nsIAudioDeviceInfo.idl 1959
nsIAutoplay.idl Possible values for the "media.autoplay.default" preference. 549
nsIDocumentActivity.h Use this macro when declaring classes that implement this interface. 1127
nsIDOMNavigatorUserMedia.idl 630
nsIMediaManager.idl Returns an array of inner windows that have active captures. 1640
ogg 13
PeerConnection.jsm 78968
PeerConnectionIdp.jsm 11271
platforms 19
PrincipalChangeObserver.h A PrincipalChangeObserver for any type; it originated in DOMMediaStream and was then expanded to MediaStreamTrack. Used to learn about dynamic changes to an object's principal. Operations relating to these observers must be confined to the main thread. 930
QueueObject.cpp 939
QueueObject.h 845
ReaderProxy.cpp 7551
ReaderProxy.h A wrapper around MediaFormatReader to offset the timestamps of Audio/Video samples by the start time to ensure MDSM can always assume zero start time. It also adjusts the seek target passed to Seek() to ensure correct seek time is passed to the underlying reader. 3712
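The timestamp shifting described for ReaderProxy.h goes both ways; a hedged sketch (helper names hypothetical): sample times are shifted down by the start time on the way out to the state machine, and seek targets shifted back up on the way into the reader.

  #include <cstdint>

  int64_t ToZeroBasedTimeUs(int64_t aSampleTimeUs, int64_t aStartTimeUs) {
    return aSampleTimeUs - aStartTimeUs;  // what MDSM sees
  }
  int64_t ToReaderSeekTargetUs(int64_t aSeekTimeUs, int64_t aStartTimeUs) {
    return aSeekTimeUs + aStartTimeUs;  // what the reader seeks to
  }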
RTCStatsReport.jsm 847
SeekJob.cpp 853
SeekJob.h 862
SeekTarget.h 2359
SelfRef.h 1015
SharedBuffer.h Base class for objects with a thread-safe refcount and a virtual destructor. 3001
StreamTracks.cpp 2671
StreamTracks.h This object contains the decoded data for a stream's tracks. A StreamTracks can be appended to. Logically a StreamTracks only gets longer, but we also have the ability to "forget" data before a certain time that we know won't be used again. (We prune a whole number of seconds internally.) StreamTracks objects should only be used from one thread at a time. A StreamTracks has a set of tracks that can be of arbitrary types --- the data for each track is a MediaSegment. The set of tracks can vary over the timeline of the StreamTracks. 9466
systemservices 40
test 963
tests 2
TextTrack.cpp 11427
TextTrack.h 4583
TextTrackCue.cpp Save a reference to our creating document so we don't have to keep getting it from our window. 7229
TextTrackCue.h 8867
TextTrackCueList.cpp 3440
TextTrackCueList.h 2394
TextTrackList.cpp 6186
TextTrackList.h 2566
TextTrackRegion.cpp 1963
TextTrackRegion.h WebIDL Methods. 3621
ThreadPoolCOMListener.cpp 814
ThreadPoolCOMListener.h 908
TimeUnits.h 7970
Tracing.cpp 3031
Tracing.h TRACE is for use in the real-time audio rendering thread. It would be better to always pass in the thread id; however, the thread an audio callback runs on can change when the underlying audio device changes, and it also seems to be called from a thread pool in a round-robin fashion when audio remoting is activated, making the traces unreadable. The thread on which AudioCallbackDriver::DataCallback runs is set to always be thread 0, and the budget is set to always be thread 1. This allows displaying those elements in two separate lanes. The other threads have "normal" tids. Hashing makes it possible to get a string representation that is unique and guaranteed to be portable. 5571
TrackID.h Unique ID for a track within a StreamTracks. Tracks from different StreamTracks objects may have the same ID; this matters when appending StreamTracks objects, since tracks with the same ID are matched. Only IDs greater than 0 are allowed. 864
TrackUnionStream.cpp 17550
TrackUnionStream.h See MediaStreamGraph::CreateTrackUnionStream. 4040
VideoFrameContainer.cpp 8448
VideoFrameContainer.h This object is used in the decoder backend threads and the main thread to manage the "current video frame" state. This state includes timing data and an intrinsic size (see below). This has to be a thread-safe object since it's accessed by resource decoders and other off-main-thread components. So we can't put this state in the media element itself ... well, maybe we could, but it could be risky and/or confusing. 5728
VideoFrameConverter.h 13209
VideoLimits.h 775
VideoPlaybackQuality.cpp 1649
VideoPlaybackQuality.h 1743
VideoSegment.cpp 3439
VideoSegment.h 5716
VideoStreamTrack.cpp 8927
VideoStreamTrack.h 1584
VideoTrack.cpp 2882
VideoTrack.h 2198
VideoTrackList.cpp 2822
VideoTrackList.h 1413
VideoUtils.cpp 24463
VideoUtils.h ReentrantMonitorConditionallyEnter Enters the supplied monitor only if the conditional value |aEnter| is true. E.g. Used to allow unmonitored read access on the decode thread, and monitored access on all other threads. 19309
VorbisUtils.h 867
wave 5
webaudio 102
webm 12
WebMSample.h 1731354
webrtc 33
webspeech 3
webvtt 9
WebVTTListener.cpp 5566
WebVTTListener.h Loads the WebVTTListener. Must call this in order for the listener to be ready to parse data that is passed to it. 1954
XiphExtradata.cpp 2931
XiphExtradata.h This converts a list of headers to the canonical form of extradata for Xiph codecs in non-Ogg containers. We use it to pass those headers from demuxer to decoder even when demuxing from an Ogg container. 1169