ADTSDecoder.cpp |
|
1440 |
ADTSDecoder.h |
|
914 |
ADTSDemuxer.cpp |
|
19827 |
ADTSDemuxer.h |
|
4779 |
AsyncLogger.h |
Implementation of an asynchronous lock-free logging system. |
11019 |
AudibilityMonitor.h |
|
3287 |
AudioBufferUtils.h |
The classes in this file provide an interface that uses frames as a unit.
However, they store their offsets in samples (because it's handy for pointer
operations). Helper functions are provided to convert between the two units.
|
7187 |
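The frames-vs-samples distinction above is simple enough to state in code. A minimal sketch (helper names are illustrative, not necessarily the exact AudioBufferUtils.h API): one frame holds one sample per channel, so the two units differ by a factor of the channel count.

```cpp
#include <cassert>
#include <cstddef>

// One frame = one sample per channel, so converting between the two units
// is a multiplication or division by the channel count.
constexpr size_t FramesToSamples(size_t aChannels, size_t aFrames) {
  return aFrames * aChannels;
}

constexpr size_t SamplesToFrames(size_t aChannels, size_t aSamples) {
  // Assumes aSamples is a whole number of frames.
  return aSamples / aChannels;
}
```

For example, at 2 channels, 480 frames (10 ms at 48 kHz) corresponds to 960 samples.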
AudioCaptureTrack.cpp |
|
3355 |
AudioCaptureTrack.h |
See MediaTrackGraph::CreateAudioCaptureTrack.
|
988 |
AudioChannelFormat.cpp |
|
510 |
AudioChannelFormat.h |
This file provides utilities for upmixing and downmixing channels.
The channel layouts, upmixing and downmixing are consistent with the
Web Audio spec.
Channel layouts for up to 6 channels:
mono { M }
stereo { L, R }
{ L, R, C }
quad { L, R, SL, SR }
{ L, R, C, SL, SR }
5.1 { L, R, C, LFE, SL, SR }
Only 1, 2, 4 and 6 are currently defined in Web Audio.
|
9110 |
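As an example of the Web Audio-consistent mixing rules mentioned above, here is a sketch of one of them: the spec's stereo-to-mono down-mix, M = 0.5 * (L + R). The function name is illustrative, not the actual AudioChannelFormat API.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Down-mix stereo { L, R } to mono { M } using the Web Audio rule
// M = 0.5 * (L + R). Inputs are planar per-channel sample buffers.
std::vector<float> DownMixStereoToMono(const std::vector<float>& aLeft,
                                       const std::vector<float>& aRight) {
  std::vector<float> mono(aLeft.size());
  for (size_t i = 0; i < aLeft.size(); ++i) {
    mono[i] = 0.5f * (aLeft[i] + aRight[i]);
  }
  return mono;
}
```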
AudioCompactor.cpp |
|
2329 |
AudioCompactor.h |
|
4607 |
AudioConfig.cpp |
|
12384 |
AudioConfig.h |
|
11135 |
AudioConverter.cpp |
Parts derived from MythTV AudioConvert Class
Created by Jean-Yves Avenard.
Copyright (C) Bubblestuff Pty Ltd 2013
Copyright (C) foobum@gmail.com 2010
|
16816 |
AudioConverter.h |
|
10002 |
AudioDeviceInfo.cpp |
readonly attribute DOMString name; |
5273 |
AudioDeviceInfo.h |
|
1981 |
AudioInputSource.cpp |
|
11354 |
AudioInputSource.h |
|
5767 |
AudioMixer.h |
This class mixes multiple streams of audio together to output a single audio
stream.
AudioMixer::Mix is to be called repeatedly with buffers that have the same
length, sample rate, sample format and channel count. This class works with
planar buffers.
When all the tracks have been mixed, calling MixedChunk() will provide
a buffer containing the mixed audio data.
This class is not thread safe.
|
3637 |
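The mixing contract described above (equal-length, same-format planar buffers, combined sample by sample) can be sketched as follows; this illustrates the technique only, not the AudioMixer API itself.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Planar layout: one contiguous run of samples per channel.
using PlanarBuffer = std::vector<std::vector<float>>;  // [channel][frame]

// Mix several tracks of identical length/channel count by per-sample
// summation, producing a single output buffer.
PlanarBuffer MixPlanar(const std::vector<PlanarBuffer>& aTracks) {
  PlanarBuffer mixed = aTracks.front();  // start from the first track
  for (size_t t = 1; t < aTracks.size(); ++t) {
    for (size_t c = 0; c < mixed.size(); ++c) {
      for (size_t i = 0; i < mixed[c].size(); ++i) {
        mixed[c][i] += aTracks[t][c][i];
      }
    }
  }
  return mixed;
}
```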
AudioPacketizer.h |
This class takes arbitrary input data, and returns packets of a specific
size. In the process, it can convert audio samples from 16bit integers to
float (or vice-versa).
Input and output, as well as length units in the public interface are
interleaved frames.
Allocation of output buffers can be performed by this class. Buffers can
simply be delete-d. This is because packets are intended to be sent off to
non-Gecko code using normal pointer/length pairs.
Alternatively, consumers can pass in a buffer into which the output is copied.
The buffer needs to be large enough to store a packet's worth of audio.
The implementation uses a circular buffer using absolute virtual indices.
|
6566 |
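A toy version of the packetization idea above: arbitrary-length input in, fixed-size packets out, counted in interleaved frames. The real AudioPacketizer also does int16/float conversion and uses a ring buffer with absolute virtual indices; this sketch uses a plain vector for clarity.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Accumulates arbitrary-length writes and emits fixed-size packets.
// A packet holds aPacketFrames * aChannels interleaved samples.
class SimplePacketizer {
 public:
  SimplePacketizer(size_t aPacketFrames, size_t aChannels)
      : mChannels(aChannels), mPacketSamples(aPacketFrames * aChannels) {}

  void Input(const float* aData, size_t aFrames) {
    mBuffer.insert(mBuffer.end(), aData, aData + aFrames * mChannels);
  }

  // Returns true and fills aOut when a whole packet is available.
  bool Output(std::vector<float>& aOut) {
    if (mBuffer.size() < mPacketSamples) {
      return false;
    }
    aOut.assign(mBuffer.begin(), mBuffer.begin() + mPacketSamples);
    mBuffer.erase(mBuffer.begin(), mBuffer.begin() + mPacketSamples);
    return true;
  }

 private:
  size_t mChannels;
  size_t mPacketSamples;
  std::vector<float> mBuffer;
};
```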
AudioRingBuffer.cpp |
RingBuffer is used to preallocate a buffer of a specific size in bytes and
then to use it for writing and reading values without requiring re-allocation
or memory moving. Note that re-allocations can happen if the length of the
buffer is explicitly set to something larger than is already allocated.
Also note that the total byte size of the buffer modulo the size of the
chosen type must be zero. The RingBuffer has been created with audio sample
value types in mind, which are integer or float. However, it can be used with
any trivial type. It is _not_ thread-safe! The constructor can be called on
any thread, but the reads and writes must happen on the same thread, which
can be different from the construction thread.
|
20409 |
AudioRingBuffer.h |
AudioRingBuffer works with the audio sample formats float or short. The
implementation wraps the RingBuffer, thus it is not thread-safe. Reads
and writes must happen on the same thread, which may be different from the
construction thread. The memory is pre-allocated in the constructor, but may
also be re-allocated on the fly should a larger length be needed. The sample
format has to be specified before the buffer can be used.
|
3775 |
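The "ring buffer with absolute virtual indices" technique that AudioPacketizer.h and the AudioRingBuffer files refer to can be sketched like this: the read and write indices grow monotonically, their difference is the buffered length, and physical positions are the indices modulo the capacity. This is a self-contained single-thread illustration, not the Gecko class.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Fixed-capacity ring buffer: storage is preallocated, and reads/writes
// only advance two absolute "virtual" indices.
class FloatRingBuffer {
 public:
  explicit FloatRingBuffer(size_t aCapacity)
      : mStorage(aCapacity), mRead(0), mWrite(0) {}

  size_t Available() const { return mWrite - mRead; }

  // Writes up to aCount samples; returns how many were actually written.
  size_t Write(const float* aData, size_t aCount) {
    size_t writable = mStorage.size() - Available();
    size_t n = aCount < writable ? aCount : writable;
    for (size_t i = 0; i < n; ++i) {
      mStorage[(mWrite + i) % mStorage.size()] = aData[i];
    }
    mWrite += n;
    return n;
  }

  // Reads up to aCount samples; returns how many were actually read.
  size_t Read(float* aOut, size_t aCount) {
    size_t n = aCount < Available() ? aCount : Available();
    for (size_t i = 0; i < n; ++i) {
      aOut[i] = mStorage[(mRead + i) % mStorage.size()];
    }
    mRead += n;
    return n;
  }

 private:
  std::vector<float> mStorage;
  size_t mRead;   // absolute index of the next sample to read
  size_t mWrite;  // absolute index of the next free slot
};
```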
AudioSampleFormat.h |
Audio formats supported in MediaTracks and media elements.
Only one of these is supported by AudioStream, and that is determined
at compile time (roughly, FLOAT32 on desktops, S16 on mobile). Media decoders
produce that format only; queued AudioData always uses that format.
|
10386 |
AudioSegment.cpp |
|
10638 |
AudioSegment.h |
This allows compilation of nsTArray<AudioSegment> and
AutoTArray<AudioSegment> since without it, static analysis fails on the
mChunks member being a non-memmovable AutoTArray.
Note that AudioSegment(const AudioSegment&) is deleted, so this should
never come into effect.
|
17984 |
AudioStream.cpp |
Keep a list of frames sent to the audio engine in each DataCallback along
with the playback rate at the moment. Since the playback rate and number of
underrun frames can vary in each callback, we need to keep the whole history
in order to calculate the playback position of the audio engine correctly.
|
25273 |
AudioStream.h |
@param aFrames The playback position in frames of the audio engine.
@return The playback position in frames of the stream,
adjusted by playback rate changes and underrun frames.
|
13331 |
AudioStreamTrack.cpp |
|
1350 |
AudioStreamTrack.h |
|
1810 |
AudioTrack.cpp |
|
2277 |
AudioTrack.h |
|
1588 |
AudioTrackList.cpp |
|
1181 |
AudioTrackList.h |
|
1111 |
autoplay |
|
|
BackgroundVideoDecodingPermissionObserver.cpp |
|
5211 |
BackgroundVideoDecodingPermissionObserver.h |
|
1439 |
BaseMediaResource.cpp |
|
5525 |
BaseMediaResource.h |
Create a resource, reading data from the channel. Call on main thread only.
The caller must follow up by calling resource->Open().
|
5742 |
benchmark |
|
|
Benchmark.cpp |
|
12665 |
Benchmark.h |
|
3479 |
BitReader.cpp |
|
4353 |
BitReader.h |
|
1611 |
BitWriter.cpp |
|
3294 |
BitWriter.h |
|
1554 |
bridge |
|
|
BufferMediaResource.h |
|
2498 |
BufferReader.h |
|
9205 |
ByteWriter.h |
|
1480 |
CallbackThreadRegistry.cpp |
|
3032 |
CallbackThreadRegistry.h |
|
2047 |
CanvasCaptureMediaStream.cpp |
|
6720 |
CanvasCaptureMediaStream.h |
The CanvasCaptureMediaStream is a MediaStream subclass that provides a video
track containing frames from a canvas. See an architectural overview below.
----------------------------------------------------------------------------
=== Main Thread ===                    __________________________
                                      |                          |
                                      | CanvasCaptureMediaStream |
                                      |__________________________|
                                                   |
                                                   | RequestFrame()
                                                   v
                                        ________________________
 ________    FrameCaptureRequested?    |                        |
|        | --------------------------> |   OutputStreamDriver   |
| Canvas |      SetFrameCapture()      | (FrameCaptureListener) |
|________| --------------------------> |________________________|
                                                   |
                                                   | SetImage() -
                                                   |   AppendToTrack()
                                                   |
                                                   v
                                       __________________________
                                      |                          |
                                      |  MTG / SourceMediaTrack  |
                                      |__________________________|
----------------------------------------------------------------------------
|
4522 |
ChannelMediaDecoder.cpp |
|
19357 |
ChannelMediaDecoder.h |
|
6308 |
ChannelMediaResource.cpp |
|
38173 |
ChannelMediaResource.h |
This class is responsible for managing the suspend count and reporting the
suspend status of the channel.
|
10185 |
CloneableWithRangeMediaResource.cpp |
|
6065 |
CloneableWithRangeMediaResource.h |
|
3216 |
components.conf |
|
3093 |
CrossGraphPort.cpp |
|
5311 |
CrossGraphPort.h |
CrossGraphTransmitter and CrossGraphPort are currently unused, but intended
for connecting MediaTracks of different MediaTrackGraphs with different
sample rates or clock sources for bug 1674892.
Create with MediaTrackGraph::CreateCrossGraphTransmitter()
|
3303 |
CubebInputStream.cpp |
|
6577 |
CubebInputStream.h |
|
3045 |
CubebUtils.cpp |
|
33537 |
CubebUtils.h |
|
4678 |
DecoderTraits.cpp |
|
9804 |
DecoderTraits.h |
|
2436 |
DeviceInputTrack.cpp |
|
25264 |
DeviceInputTrack.h |
|
12844 |
doctor |
|
|
DOMMediaStream.cpp |
|
16083 |
DOMMediaStream.h |
DOMMediaStream is the implementation of the js-exposed MediaStream interface.
This is a thin main-thread class grouping MediaStreamTracks together.
|
8331 |
DriftCompensation.h |
DriftCompensator can be used to handle drift between audio and video tracks
from the MediaTrackGraph.
Drift can occur because audio is driven by a MediaTrackGraph running off an
audio callback, thus it's progressed by the clock of one of the audio output
devices on the user's machine. Video on the other hand is always expressed in
wall-clock TimeStamps, i.e., it's progressed by the system clock. These
clocks will, over time, drift apart.
Do not use the DriftCompensator across multiple audio tracks, as it will
automatically record the start time of the first audio samples, and all
samples for the same audio track on the same audio clock will have to be
processed to retain accuracy.
DriftCompensator is designed to be used from two threads:
- The audio thread for notifications of audio samples.
- The video thread for compensating drift of video frames to match the audio
clock.
|
4763 |
driftcontrol |
|
|
eme |
|
|
encoder |
|
|
EncoderTraits.cpp |
|
647 |
EncoderTraits.h |
|
762 |
ExternalEngineStateMachine.cpp |
This class monitors the number of crashes that have happened for a remote
engine process. If the number of crashes of the remote process exceeds the
defined threshold, then `ShouldRecoverProcess()` will return false to
indicate that we should not keep spawning that remote process because it's
too easy to crash.
In addition, we also have another mechanism in the media format reader
(MFR) to detect the number of crashes of remote processes, but that would
only happen during the decoding process. The main reason to choose this
simple monitor, instead of the mechanism in the MFR, is that that mechanism
can't detect every crash happening in the remote process, such as a crash
while initializing the remote engine, or while setting up the CDM
pipeline, which can happen prior to decoding.
|
51165 |
ExternalEngineStateMachine.h |
ExternalPlaybackEngine represents a media engine which is responsible for
decoding and playback, which are not controlled by Gecko.
|
13078 |
fake-cdm |
|
|
FileBlockCache.cpp |
|
18054 |
FileBlockCache.h |
|
8113 |
FileMediaResource.cpp |
|
6887 |
FileMediaResource.h |
|
4975 |
flac |
|
|
ForwardedInputTrack.cpp |
|
10340 |
ForwardedInputTrack.h |
See MediaTrackGraph::CreateForwardedInputTrack.
|
2580 |
FrameStatistics.h |
|
6827 |
fuzz |
|
|
GetUserMediaRequest.cpp |
|
4644 |
GetUserMediaRequest.h |
|
3048 |
gmp |
|
|
gmp-plugin-openh264 |
|
|
GraphDriver.cpp |
|
56770 |
GraphDriver.h |
Assume we can run an iteration of the MediaTrackGraph loop in this much time
or less.
We try to run the control loop at this rate.
|
33043 |
GraphRunner.cpp |
|
5440 |
GraphRunner.h |
Marks us as shut down and signals mThread, so that it runs until the end.
|
4088 |
gtest |
|
|
hls |
|
|
IdpSandbox.sys.mjs |
This little class ensures that redirects maintain an https:// origin |
8268 |
imagecapture |
|
|
ImageConversion.cpp |
|
17468 |
ImageConversion.h |
Gets a SourceSurface from given image.
|
2008 |
Intervals.h |
Interval defines an interval between two points. Unlike a traditional
interval [A,B] where A <= x <= B, the upper boundary B is exclusive: A <= x <
B (e.g. [A,B[ or [A,B) depending on where you live). It provides basic
interval arithmetic and fuzzy edges. The type T must provide a default
constructor and the +, -, <, <= and == operators.
|
21032 |
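A minimal sketch of the half-open interval described above, with intersection as an example of the "basic interval arithmetic" (fuzzy edges omitted; the type and member names are illustrative, not the actual Interval<T> API):

```cpp
#include <algorithm>
#include <cassert>

// Half-open interval [mStart, mEnd): the upper boundary is exclusive.
template <typename T>
struct HalfOpenInterval {
  T mStart;
  T mEnd;  // exclusive

  bool Contains(const T& aX) const { return mStart <= aX && aX < mEnd; }

  HalfOpenInterval Intersection(const HalfOpenInterval& aOther) const {
    T start = std::max(mStart, aOther.mStart);
    T end = std::min(mEnd, aOther.mEnd);
    if (end < start) {
      end = start;  // disjoint inputs produce an empty interval
    }
    return {start, end};
  }
};
```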
ipc |
|
|
MediaBlockCacheBase.h |
|
3317 |
MediaCache.cpp |
|
105823 |
MediaCache.h |
|
25908 |
mediacapabilities |
|
|
MediaChannelStatistics.h |
This class is useful for estimating rates of data passing through
some channel. The idea is that activity on the channel "starts"
and "stops" over time. At certain times data passes through the
channel (usually while the channel is active; data passing through
an inactive channel is ignored). The GetRate() function computes
an estimate of the "current rate" of the channel, which is some
kind of average of the data passing through over the time the
channel is active.
All methods take "now" as a parameter so the user of this class can
control the timeline used.
|
2967 |
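The estimator described above can be sketched as follows: accumulate bytes and the total time the channel was active, and let GetRate() divide one by the other, with "now" supplied by the caller (here as seconds). Class and method names are illustrative, not the MediaChannelStatistics API.

```cpp
#include <cassert>

// Rate estimator: bytes per second of *active* channel time, with the
// timeline controlled entirely by the caller via the aNow parameters.
class ChannelStatistics {
 public:
  void Start(double aNow) {
    mActiveSince = aNow;
    mIsActive = true;
  }
  void Stop(double aNow) {
    mActiveTime += aNow - mActiveSince;
    mIsActive = false;
  }
  void AddBytes(long aBytes) { mBytes += aBytes; }

  double GetRate(double aNow) const {
    double active = mActiveTime + (mIsActive ? aNow - mActiveSince : 0.0);
    return active > 0.0 ? mBytes / active : 0.0;
  }

 private:
  long mBytes = 0;
  double mActiveTime = 0.0;   // total seconds the channel has been active
  double mActiveSince = 0.0;  // timestamp of the last Start()
  bool mIsActive = false;
};
```

Note that bytes arriving while the channel is inactive are simply not added, matching the "data passing through an inactive channel is ignored" behavior.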
MediaContainerType.cpp |
|
1119 |
MediaContainerType.h |
|
1796 |
mediacontrol |
|
|
MediaData.cpp |
|
21630 |
MediaData.h |
|
26721 |
MediaDataDemuxer.h |
|
8270 |
MediaDecoder.cpp |
|
59958 |
MediaDecoder.h |
|
30653 |
MediaDecoderOwner.h |
|
7775 |
MediaDecoderStateMachine.cpp |
|
176585 |
MediaDecoderStateMachine.h |
Each media element for a media file has one thread called the "audio thread".
The audio thread writes the decoded audio data to the audio
hardware. This is done in a separate thread to ensure that the
audio hardware gets a constant stream of data without
interruption due to decoding or display. At some point
AudioStream will be refactored to have a callback interface
where it asks for data and this thread will no longer be
needed.
The element/state machine also has a TaskQueue which runs in a
SharedThreadPool that is shared with all other elements/decoders. The state
machine dispatches tasks to this to call into the MediaDecoderReader to
request decoded audio or video data. The Reader will call back with decoded
samples when it has them available, and the state machine places the decoded
samples into its queues for the consuming threads to pull from.
The MediaDecoderReader can choose to decode asynchronously, or synchronously
and return requested samples synchronously inside its Request*Data()
functions via callback. Asynchronous decoding is preferred, and should be
used for any new readers.
Synchronisation of state between the threads is done via a monitor owned
by MediaDecoder.
The lifetime of the audio thread is controlled by the state machine when
it runs on the shared state machine thread. When playback needs to occur
the audio thread is created and an event dispatched to run it. The audio
thread exits when audio playback is completed or no longer required.
A/V synchronisation is handled by the state machine. It examines the audio
playback time and compares this to the next frame in the queue of video
frames. If it is time to play the video frame it is then displayed, otherwise
it schedules the state machine to run again at the time of the next frame.
Frame skipping is done in the following ways:
1) The state machine will skip all frames in the video queue whose
display time is less than the current audio time. This ensures
the correct frame for the current time is always displayed.
2) The decode tasks will stop decoding interframes and read to the
next keyframe if it determines that decoding the remaining
interframes will cause playback issues. It detects this by:
a) If the amount of audio data in the audio queue drops
below a threshold whereby audio may start to skip.
b) If the video queue drops below a threshold where it
will be decoding video data that won't be displayed due
to the decode thread dropping the frame immediately.
TODO: In future we should only do this when the Reader is decoding
synchronously.
When hardware accelerated graphics is not available, YCbCr conversion
is done on the decode task queue when video frames are decoded.
The decode task queue pushes decoded audio and video frames into two
separate queues - one for audio and one for video. These are kept
separate to make it easy to constantly feed audio data to the audio
hardware while allowing frame skipping of video data. These queues are
threadsafe, and neither the decode, audio, or state machine should
be able to monopolize them, and cause starvation of the other threads.
Both queues are bounded by a maximum size. When this size is reached
the decode tasks will no longer request video or audio depending on the
queue that has reached the threshold. If both queues are full, no more
decode tasks will be dispatched to the decode task queue, so other
decoders will have an opportunity to run.
During playback the audio thread will be idle (via a Wait() on the
monitor) if the audio queue is empty. Otherwise it constantly pops
audio data off the queue and plays it with a blocking write to the audio
hardware (via AudioStream).
|
21393 |
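The bounded-queue backpressure described in the comment (stop requesting decoded data once a queue reaches its maximum size) can be illustrated with a small sketch. The real audio/video queues are also thread-safe; locking is omitted here for brevity, and the names are illustrative.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Bounded FIFO: when full, the producer (decode task dispatch) should stop
// requesting more of this kind of data until the consumer drains it.
template <typename Sample>
class BoundedSampleQueue {
 public:
  explicit BoundedSampleQueue(size_t aMaxSize) : mMaxSize(aMaxSize) {}

  bool NeedsMoreData() const { return mSamples.size() < mMaxSize; }

  bool Push(const Sample& aSample) {
    if (!NeedsMoreData()) {
      return false;  // full: caller should stop dispatching decode tasks
    }
    mSamples.push_back(aSample);
    return true;
  }

  bool Pop(Sample& aOut) {
    if (mSamples.empty()) {
      return false;
    }
    aOut = mSamples.front();
    mSamples.pop_front();
    return true;
  }

 private:
  size_t mMaxSize;
  std::deque<Sample> mSamples;
};
```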
MediaDecoderStateMachineBase.cpp |
|
7719 |
MediaDecoderStateMachineBase.h |
The state machine class. This manages the decoding and seeking in the
MediaDecoderReader on the decode task queue, and A/V sync on the shared
state machine thread, and controls the audio "push" thread.
All internal state is synchronised via the decoder monitor. State changes
are propagated by scheduling the state machine to run another cycle on the
shared state machine thread.
|
11537 |
MediaDeviceInfo.cpp |
|
1582 |
MediaDeviceInfo.h |
|
1887 |
MediaDevices.cpp |
If requestedMediaTypes is the empty set, return a promise rejected with a
TypeError. |
33558 |
MediaDevices.h |
|
4965 |
MediaEventSource.h |
A thread-safe tool to communicate "revocation" across threads. It is used to
disconnect a listener from the event source to prevent future notifications
from coming. Revoke() can be called on any thread. However, it is recommended
to be called on the target thread to avoid race conditions.
RevocableToken is not exposed to the client code directly.
Use MediaEventListener below to do the job.
|
18183 |
MediaFormatReader.cpp |
This class tracks shutdown promises to ensure all decoders are shut down
completely before MFR continues the rest of the shutdown procedure.
|
128970 |
MediaFormatReader.h |
|
33054 |
MediaInfo.cpp |
|
3684 |
MediaInfo.h |
|
26083 |
MediaManager.cpp |
Using WebRTC backend on Desktops (Mac, Windows, Linux), otherwise default |
182378 |
MediaManager.h |
Device info that is independent of any Window.
MediaDevices can be shared, unlike LocalMediaDevices.
|
16702 |
MediaMetadataManager.h |
|
3471 |
MediaMIMETypes.cpp |
|
8229 |
MediaMIMETypes.h |
|
8916 |
MediaPlaybackDelayPolicy.cpp |
|
5203 |
MediaPlaybackDelayPolicy.h |
We usually start the AudioChannelAgent when media starts and stop it when
media stops. However, when we decide to delay media playback for an unvisited
tab, we start the AudioChannelAgent even though media hasn't started, in
order to register the agent with AudioChannelService, so that the service can
notify us when we are able to resume media playback. Therefore,
ResumeDelayedPlaybackAgent is used to handle this special use case of
AudioChannelAgent.
- Use `GetResumePromise()` to acquire the resume-promise and then perform the
follow-up resume behavior when the promise is resolved.
- Use `UpdateAudibleState()` to update the audible state only when the media
info changes, as whether the media has an audio track is the only thing that
decides whether we show the delayed media playback icon on the tab bar.
|
3056 |
MediaPromiseDefs.h |
|
592 |
MediaQueue.h |
|
11089 |
MediaRecorder.cpp |
MediaRecorderReporter measures memory being used by the Media Recorder.
It is a singleton reporter and the single class object lives as long as at
least one Recorder is registered. In MediaRecorder, the reporter is
unregistered when it is destroyed.
|
69323 |
MediaRecorder.h |
Implementation of
https://w3c.github.io/mediacapture-record/MediaRecorder.html
The MediaRecorder accepts a MediaStream as input passed from an application.
When the MediaRecorder starts, a MediaEncoder will be created and accepts the
MediaStreamTracks in the MediaStream as input source. For each track it
creates a TrackEncoder.
The MediaEncoder automatically encodes and muxes data from the tracks by the
given MIME type, then it stores this data into a MutableBlobStorage object.
When a timeslice is set and the MediaEncoder has stored enough data to fill
the timeslice, it extracts a Blob from the storage and passes it to
MediaRecorder. On RequestData() or Stop(), the MediaEncoder extracts the blob
from the storage and returns it to MediaRecorder through a MozPromise.
Thread model: When the recorder starts, it creates a worker thread (called
the encoder thread) that does all the heavy lifting - encoding, time keeping,
muxing.
|
6984 |
MediaResource.cpp |
|
16457 |
MediaResource.h |
Provides a thread-safe, seek/read interface to resources
loaded from a URI. Uses MediaCache to cache data received over
Necko's async channel API, thus resolving the mismatch between clients
that need efficient random access to the data and protocols that do not
support efficient random access, such as HTTP.
Instances of this class must be created on the main thread.
Most methods must be called on the main thread only. Read, Seek and
Tell must only be called on non-main threads. In the case of the Ogg
Decoder they are called on the Decode thread for example. You must
ensure that no threads are calling these methods once Close is called.
Instances of this class are reference counted. Use nsRefPtr for
managing the lifetime of instances of this class.
The generic implementation of this class is ChannelMediaResource, which can
handle any URI for which Necko supports AsyncOpen.
The 'file:' protocol can be implemented efficiently with direct random
access, so the FileMediaResource implementation class bypasses the cache.
For cross-process blob URL, CloneableWithRangeMediaResource is used.
MediaResource::Create automatically chooses the best implementation class.
|
13127 |
MediaResourceCallback.h |
A callback used by MediaResource (sub-classes like FileMediaResource,
RtspMediaResource, and ChannelMediaResource) to notify various events.
Currently this is implemented by MediaDecoder only.
Since this class has no pure virtual function, it is convenient to write
gtests for the readers without using a mock MediaResource when you don't
care about the events notified by the MediaResource.
|
2311 |
MediaResult.cpp |
|
2087 |
MediaResult.h |
|
2982 |
MediaSegment.h |
Track or graph rate in Hz. Maximum 1 << TRACK_RATE_MAX_BITS Hz. This
maximum avoids overflow in conversions between track rates and conversions
from seconds.
|
16410 |
mediasession |
|
|
MediaShutdownManager.cpp |
|
6075 |
MediaShutdownManager.h |
|
3532 |
mediasink |
|
|
mediasource |
|
|
MediaSpan.h |
|
4577 |
MediaStatistics.h |
|
3271 |
MediaStreamError.cpp |
|
3929 |
MediaStreamError.h |
|
3416 |
MediaStreamTrack.cpp |
MTGListener monitors state changes of the media flowing through the
MediaTrackGraph.
For changes to PrincipalHandle the following applies:
When the main thread principal for a MediaStreamTrack changes, its principal
will be set to the combination of the previous principal and the new one.
As a PrincipalHandle change later happens on the MediaTrackGraph thread, we
will be notified. If the latest principal on main thread matches the
PrincipalHandle we just saw on MTG thread, we will set the track's principal
to the new one.
We know at this point that the old principal has been flushed out and data
under it cannot leak to consumers.
In case of multiple changes to the main thread state, the track's principal
will be a combination of its old principal and all the new ones until the
latest main thread principal matches the PrincipalHandle on the MTG thread.
|
20268 |
MediaStreamTrack.h |
Common interface through which a MediaStreamTrack can communicate with its
producer on the main thread.
Kept alive by a strong ref in all MediaStreamTracks (original and clones)
sharing this source.
|
20629 |
MediaStreamWindowCapturer.cpp |
|
2594 |
MediaStreamWindowCapturer.h |
Given a DOMMediaStream and a window id, this class will pipe the audio from
all live audio tracks in the stream to the MediaTrackGraph's window capture
mechanism.
|
1729 |
MediaTimer.cpp |
|
6563 |
MediaTimer.h |
We use a callback function, rather than a callback method, to ensure that
the nsITimer does not artificially keep the refcount of the MediaTimer above
zero. When the MediaTimer is destroyed, it safely cancels the nsITimer so
that we never fire against a dangling closure.
|
5456 |
MediaTrack.cpp |
|
1240 |
MediaTrack.h |
Base class of AudioTrack and VideoTrack. The AudioTrack and VideoTrack
objects represent specific tracks of a media resource. Each track has aspects
of an identifier, category, label, and language; even if a track is removed
from its corresponding track list, those aspects do not change.
When fetching the media resource, an audio/video track is created if the
media resource is found to have an audio/video track. When the UA has learned
that an audio/video track has ended, this audio/video track will be removed
from its corresponding track list.
Although AudioTrack and VideoTrack are not EventTargets, TextTrack is, and
TextTrack inherits from MediaTrack as well (or is going to).
|
2694 |
MediaTrackGraph.cpp |
A hash table containing the graph instances, one per Window ID,
sample rate, and device ID combination.
|
157482 |
MediaTrackGraph.h |
MediaTrackGraph is a framework for synchronized audio/video processing
and playback. It is designed to be used by other browser components such as
HTML media elements, media capture APIs, real-time media streaming APIs,
multitrack media APIs, and advanced audio APIs.
The MediaTrackGraph uses a dedicated thread to process media --- the media
graph thread. This ensures that we can process media through the graph
without blocking on main-thread activity. The media graph is only modified
on the media graph thread, to ensure graph changes can be processed without
interfering with media processing. All interaction with the media graph
thread is done with message passing.
APIs that modify the graph or its properties are described as "control APIs".
These APIs are asynchronous; they queue graph changes internally and
those changes are processed all-at-once by the MediaTrackGraph. The
MediaTrackGraph monitors the main thread event loop via
nsIAppShell::RunInStableState to ensure that graph changes from a single
event loop task are always processed all together. Control APIs should only
be used on the main thread, currently; we may be able to relax that later.
To allow precise synchronization of times in the control API, the
MediaTrackGraph maintains a "media timeline". Control APIs that take or
return times use that timeline. Those times never advance during
an event loop task. This time is returned by
MediaTrackGraph::GetCurrentTime().
Media decoding, audio processing and media playback use thread-safe APIs to
the media graph to ensure they can continue while the main thread is blocked.
When the graph is changed, we may need to throw out buffered data and
reprocess it. This is triggered automatically by the MediaTrackGraph.
|
51516 |
MediaTrackGraphImpl.h |
A per-track update message passed from the media graph thread to the
main thread.
|
43490 |
MediaTrackList.cpp |
|
4847 |
MediaTrackList.h |
Base class of AudioTrackList and VideoTrackList. The AudioTrackList and
VideoTrackList objects represent a dynamic list of zero or more audio and
video tracks respectively.
When a media element is to forget its media-resource-specific tracks, its
audio track list and video track list will be emptied.
|
3555 |
MediaTrackListener.cpp |
|
3504 |
MediaTrackListener.h |
This is a base class for media graph thread listener callbacks locked to
specific tracks. Override methods to be notified of audio or video data or
changes in track state.
All notification methods are called from the media graph thread. Overriders
of these methods are responsible for all synchronization. Beware!
These methods are called without the media graph monitor held, so
reentry into media graph methods is possible, although very much discouraged!
You should do something non-blocking and non-reentrant (e.g. dispatch an
event to some thread) and return.
The listener is not allowed to add/remove any listeners from the parent
track.
If a listener is attached to a track that has already ended, we guarantee
to call NotifyEnded.
|
7681 |
MemoryBlockCache.cpp |
|
8173 |
MemoryBlockCache.h |
|
3144 |
metrics.yaml |
|
11123 |
moz.build |
|
10517 |
mp3 |
|
|
mp4 |
|
|
MPSCQueue.h |
|
5539 |
nsIAudioDeviceInfo.idl |
|
1959 |
nsIDocumentActivity.h |
Use this macro when declaring classes that implement this interface. |
1127 |
nsIMediaDevice.idl |
|
731 |
nsIMediaManager.idl |
return an array of inner windows that have active captures |
1618 |
ogg |
|
|
Pacer.h |
Pacer<T> takes a queue of Ts tied to timestamps, and emits PacedItemEvents
for every T at its corresponding timestamp.
The queue is ordered. Enqueuing an item at time t will drop all items at
times later than t. This is because of how video sources work (some send out
frames in the future, some don't), and to allow swapping one source for
another.
It supports a duplication interval. If no new item is enqueued within the
duplication interval since the last enqueued item, the last enqueued item
is emitted again.
|
7514 |
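The ordering rule above (enqueuing at time t drops all queued items with later timestamps) can be sketched as follows; the timer-driven emission and duplication-interval machinery of the real Pacer are omitted, and the names are illustrative.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Time-ordered queue: enqueuing at time t first drops every queued item
// whose timestamp is later than t, so a source that emitted frames "in the
// future" can be replaced cleanly by another source.
template <typename T>
class PacerQueue {
 public:
  void Enqueue(const T& aItem, double aTime) {
    while (!mQueue.empty() && mQueue.back().mTime > aTime) {
      mQueue.pop_back();  // drop items scheduled after the new one
    }
    mQueue.push_back({aItem, aTime});
  }

  size_t Size() const { return mQueue.size(); }
  double FrontTime() const { return mQueue.front().mTime; }

 private:
  struct Entry {
    T mItem;
    double mTime;
  };
  std::vector<Entry> mQueue;
};
```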
PeerConnection.sys.mjs |
|
59718 |
PeerConnectionIdp.sys.mjs |
Creates an IdP helper.
@param win (object) the window we are working for
@param timeout (int) the timeout in milliseconds
|
11113 |
platforms |
|
|
PrincipalChangeObserver.h |
A PrincipalChangeObserver for any type, but originating from DOMMediaStream,
then expanded to MediaStreamTrack.
Used to learn about dynamic changes to an object's principal.
Operations relating to these observers must be confined to the main thread.
|
904 |
PrincipalHandle.h |
The level of privacy of a principal as considered by RTCPeerConnection.
|
1945 |
QueueObject.cpp |
|
947 |
QueueObject.h |
|
845 |
ReaderProxy.cpp |
|
8390 |
ReaderProxy.h |
A wrapper around MediaFormatReader to offset the timestamps of Audio/Video
samples by the start time to ensure MDSM can always assume zero start time.
It also adjusts the seek target passed to Seek() to ensure correct seek time
is passed to the underlying reader.
|
3966 |
SeekJob.cpp |
|
855 |
SeekJob.h |
|
864 |
SeekTarget.h |
|
2574 |
SelfRef.h |
|
1015 |
SharedBuffer.h |
Base class for objects with a thread-safe refcount and a virtual
destructor.
|
3707 |
systemservices |
|
|
test |
|
|
tests |
|
|
TimedPacketizer.h |
This class wraps an AudioPacketizer and provides packets of audio with
timestamps.
|
2421 |
TimeUnits.cpp |
|
13677 |
TimeUnits.h |
|
13508 |
tools |
|
|
Tracing.cpp |
|
2790 |
Tracing.h |
|
3891 |
UnderrunHandler.h |
|
970 |
UnderrunHandlerLinux.cpp |
|
2322 |
UnderrunHandlerNoop.cpp |
|
460 |
utils |
|
|
VideoFrameContainer.cpp |
|
8562 |
VideoFrameContainer.h |
This object is used in the decoder backend threads and the main thread
to manage the "current video frame" state. This state includes timing data
and an intrinsic size (see below).
This has to be a thread-safe object since it's accessed by resource decoders
and other off-main-thread components. So we can't put this state in the media
element itself ... well, maybe we could, but it could be risky and/or
confusing.
|
5980 |
VideoFrameConverter.h |
An active VideoFrameConverter actively converts queued video frames.
While inactive, we keep track of the frame most recently queued for
processing, so it can be immediately sent out once activated.
|
17731 |
VideoLimits.h |
|
775 |
VideoOutput.h |
|
11165 |
VideoPlaybackQuality.cpp |
|
1373 |
VideoPlaybackQuality.h |
|
1598 |
VideoSegment.cpp |
|
6316 |
VideoSegment.h |
|
7083 |
VideoStreamTrack.cpp |
|
2611 |
VideoStreamTrack.h |
Whether this VideoStreamTrack's video frames will have an alpha channel.
|
1742 |
VideoTrack.cpp |
|
2864 |
VideoTrack.h |
|
2172 |
VideoTrackList.cpp |
|
2796 |
VideoTrackList.h |
|
1387 |
VideoUtils.cpp |
|
40436 |
VideoUtils.h |
ReentrantMonitorConditionallyEnter
Enters the supplied monitor only if the conditional value |aEnter| is true.
E.g. Used to allow unmonitored read access on the decode thread,
and monitored access on all other threads.
|
20873 |
WavDumper.h |
If MOZ_DUMP_AUDIO is set, this dumps a file to disk containing the output of
an audio stream, as 16-bit integers.
The sandbox needs to be disabled for this to work.
|
4361 |
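For reference, the on-disk format such a dumper produces is a standard 44-byte RIFF/WAVE header followed by interleaved little-endian 16-bit PCM samples. A self-contained sketch of the header layout (not the WavDumper code itself):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Append a 32-bit / 16-bit value in little-endian byte order.
static void PushU32(std::vector<uint8_t>& aOut, uint32_t aV) {
  for (int i = 0; i < 4; ++i) aOut.push_back((aV >> (8 * i)) & 0xff);
}
static void PushU16(std::vector<uint8_t>& aOut, uint16_t aV) {
  aOut.push_back(aV & 0xff);
  aOut.push_back(aV >> 8);
}

// Build the 44-byte header for a 16-bit PCM WAV file.
std::vector<uint8_t> WavHeader(uint16_t aChannels, uint32_t aRate,
                               uint32_t aDataBytes) {
  const uint16_t bytesPerSample = 2;  // 16-bit integers, as in the comment
  std::vector<uint8_t> h;
  h.insert(h.end(), {'R', 'I', 'F', 'F'});
  PushU32(h, 36 + aDataBytes);  // remaining chunk size
  h.insert(h.end(), {'W', 'A', 'V', 'E', 'f', 'm', 't', ' '});
  PushU32(h, 16);  // fmt chunk size
  PushU16(h, 1);   // audio format: PCM
  PushU16(h, aChannels);
  PushU32(h, aRate);
  PushU32(h, aRate * aChannels * bytesPerSample);        // byte rate
  PushU16(h, uint16_t(aChannels * bytesPerSample));      // block align
  PushU16(h, 16);  // bits per sample
  h.insert(h.end(), {'d', 'a', 't', 'a'});
  PushU32(h, aDataBytes);
  return h;
}
```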
wave |
|
|
webaudio |
|
|
webcodecs |
|
|
webm |
|
|
WebMSample.h |
|
1731681 |
webrtc |
|
|
webspeech |
|
|
webvtt |
|
|
XiphExtradata.cpp |
|
2931 |
XiphExtradata.h |
This converts a list of headers to the canonical form of extradata for Xiph
codecs in non-Ogg containers. We use it to pass those headers from demuxer
to decoder even when demuxing from an Ogg container. |
1169 |