gstaudio.c.types

C types for gstaudio1 library

Types

Different possible reasons for discontinuities. This enum is useful for the custom slave method.

NoDiscont = 0: No discontinuity occurred
NewCaps = 1: New caps are set, causing renegotiation
Flush = 2: Samples have been flushed
SyncLatency = 3: Sink was synchronized to the estimated latency (occurs during initialization)
Alignment = 4: Aligning buffers failed because the timestamps are too discontinuous
DeviceFailure = 5: Audio output device experienced and recovered from an error, but introduced latency in the process (see also [gstaudio.audio_base_sink.AudioBaseSink.reportDeviceFailure])

Different possible clock slaving algorithms used when the internal audio clock is not selected as the pipeline master clock.

Resample = 0: Resample to match the master clock
Skew = 1: Adjust playout pointer when master clock drifts too much
None = 2: No adjustment is done
Custom = 3: Use custom clock slaving algorithm (Since: 1.6)

Different possible clock slaving algorithms when the internal audio clock was not selected as the pipeline clock.

Resample = 0: Resample to match the master clock
ReTimestamp = 1: Retimestamp output buffers with master clock time
Skew = 2: Adjust capture pointer when master clock drifts too much
None = 3: No adjustment is done

Mode in which the CD audio source operates. Influences timestamping, EOS handling and seeking.

Normal = 0: each single track is a stream
Continuous = 1: the entire disc is a single stream

Flags passed to [gstaudio.audio_channel_mixer.AudioChannelMixer.new_]

None = 0: no flag
NonInterleavedIn = 1: input channels are not interleaved
NonInterleavedOut = 2: output channels are not interleaved
UnpositionedIn = 4: input channels are explicitly unpositioned
UnpositionedOut = 8: output channels are explicitly unpositioned

Audio channel positions.

These are the channels defined in SMPTE 2036-2-2008 Table 1 for 22.2 audio systems, with the Surround and Wide channels from DTS Coherent Acoustics (v.1.3.1) and 10.2 and 7.1 layouts. In the caps the actual channel layout is expressed with a channel count and a channel mask, which describes the existing channels. The positions in the bit mask correspond to the enum values. For negotiation it is allowed to have more bits set in the channel mask than the number of channels, to specify the allowed channel positions, but this is not allowed in negotiated caps. In any situation other than the one mentioned below, it is not allowed to have fewer bits set in the channel mask than the number of channels.

@GST_AUDIO_CHANNEL_POSITION_MONO can only be used with a single mono channel that has no direction information and would be mixed into all directional channels. This is expressed in caps by having a single channel and no channel mask.

@GST_AUDIO_CHANNEL_POSITION_NONE can only be used if all channels have this position. This is expressed in caps by having a channel mask with no bits set.

As another special case it is allowed to have two channels without a channel mask. This implicitly means that this is a stereo stream with a front left and front right channel.

None = -3: used for position-less channels, e.g. from a sound card that records 1024 channels; mutually exclusive with any other channel position
Mono = -2: Mono without direction; can only be used with 1 channel
Invalid = -1: invalid position
FrontLeft = 0: Front left
FrontRight = 1: Front right
FrontCenter = 2: Front center
Lfe1 = 3: Low-frequency effects 1 (subwoofer)
RearLeft = 4: Rear left
RearRight = 5: Rear right
FrontLeftOfCenter = 6: Front left of center
FrontRightOfCenter = 7: Front right of center
RearCenter = 8: Rear center
Lfe2 = 9: Low-frequency effects 2 (subwoofer)
SideLeft = 10: Side left
SideRight = 11: Side right
TopFrontLeft = 12: Top front left
TopFrontRight = 13: Top front right
TopFrontCenter = 14: Top front center
TopCenter = 15: Top center
TopRearLeft = 16: Top rear left
TopRearRight = 17: Top rear right
TopSideLeft = 18: Top side left
TopSideRight = 19: Top side right
TopRearCenter = 20: Top rear center
BottomFrontCenter = 21: Bottom front center
BottomFrontLeft = 22: Bottom front left
BottomFrontRight = 23: Bottom front right
WideLeft = 24: Wide left (between front left and side left)
WideRight = 25: Wide right (between front right and side right)
SurroundLeft = 26: Surround left (between rear left and side left)
SurroundRight = 27: Surround right (between rear right and side right)

Extra flags passed to [gstaudio.audio_converter.AudioConverter.new_] and [gstaudio.audio_converter.AudioConverter.samples].

None = 0: no flag
InWritable = 1: the input sample arrays are writable and can be used as temporary storage during conversion
VariableRate = 2: allow arbitrary rate updates with [gstaudio.audio_converter.AudioConverter.updateConfig]

Set of available dithering methods.

None = 0: No dithering
Rpdf = 1: Rectangular dithering
Tpdf = 2: Triangular dithering (default)
TpdfHf = 3: High frequency triangular dithering
enum GstAudioFlags : uint

Extra audio flags

None = 0: no valid flag
Unpositioned = 1: the position array explicitly contains unpositioned channels

Enum value describing the most common audio formats.

Unknown = 0: unknown or unset audio format
Encoded = 1: encoded audio format
S8 = 2: 8 bits in 8 bits, signed
U8 = 3: 8 bits in 8 bits, unsigned
S16le = 4: 16 bits in 16 bits, signed, little endian
S16be = 5: 16 bits in 16 bits, signed, big endian
U16le = 6: 16 bits in 16 bits, unsigned, little endian
U16be = 7: 16 bits in 16 bits, unsigned, big endian
S2432le = 8: 24 bits in 32 bits, signed, little endian
S2432be = 9: 24 bits in 32 bits, signed, big endian
U2432le = 10: 24 bits in 32 bits, unsigned, little endian
U2432be = 11: 24 bits in 32 bits, unsigned, big endian
S32le = 12: 32 bits in 32 bits, signed, little endian
S32be = 13: 32 bits in 32 bits, signed, big endian
U32le = 14: 32 bits in 32 bits, unsigned, little endian
U32be = 15: 32 bits in 32 bits, unsigned, big endian
S24le = 16: 24 bits in 24 bits, signed, little endian
S24be = 17: 24 bits in 24 bits, signed, big endian
U24le = 18: 24 bits in 24 bits, unsigned, little endian
U24be = 19: 24 bits in 24 bits, unsigned, big endian
S20le = 20: 20 bits in 24 bits, signed, little endian
S20be = 21: 20 bits in 24 bits, signed, big endian
U20le = 22: 20 bits in 24 bits, unsigned, little endian
U20be = 23: 20 bits in 24 bits, unsigned, big endian
S18le = 24: 18 bits in 24 bits, signed, little endian
S18be = 25: 18 bits in 24 bits, signed, big endian
U18le = 26: 18 bits in 24 bits, unsigned, little endian
U18be = 27: 18 bits in 24 bits, unsigned, big endian
F32le = 28: 32-bit floating point samples, little endian
F32be = 29: 32-bit floating point samples, big endian
F64le = 30: 64-bit floating point samples, little endian
F64be = 31: 64-bit floating point samples, big endian
S16 = 4: 16 bits in 16 bits, signed, native endianness
U16 = 6: 16 bits in 16 bits, unsigned, native endianness
S2432 = 8: 24 bits in 32 bits, signed, native endianness
U2432 = 10: 24 bits in 32 bits, unsigned, native endianness
S32 = 12: 32 bits in 32 bits, signed, native endianness
U32 = 14: 32 bits in 32 bits, unsigned, native endianness
S24 = 16: 24 bits in 24 bits, signed, native endianness
U24 = 18: 24 bits in 24 bits, unsigned, native endianness
S20 = 20: 20 bits in 24 bits, signed, native endianness
U20 = 22: 20 bits in 24 bits, unsigned, native endianness
S18 = 24: 18 bits in 24 bits, signed, native endianness
U18 = 26: 18 bits in 24 bits, unsigned, native endianness
F32 = 28: 32-bit floating point samples, native endianness
F64 = 30: 64-bit floating point samples, native endianness

The different audio flags that a format info can have.

Integer = 1: integer samples
Float = 2: float samples
Signed = 4: signed samples
Complex = 16: complex layout
Unpack = 32: the format can be used in #GstAudioFormatUnpack and #GstAudioFormatPack functions

Layout of the audio samples for the different channels.

Interleaved = 0: interleaved audio
NonInterleaved = 1: non-interleaved audio

Set of available noise shaping methods

None = 0: No noise shaping (default)
ErrorFeedback = 1: Error feedback
Simple = 2: Simple 2-pole noise shaping
Medium = 3: Medium 5-pole noise shaping
High = 4: High 8-pole noise shaping

The different flags that can be used when packing and unpacking.

None = 0: No flag
TruncateRange = 1: When the source has a smaller depth than the target format, set the least significant bits of the target to 0. This is likely slightly faster but less accurate. When this flag is not specified, the most significant bits of the source are duplicated in the least significant bits of the target.

Extra flags that can be passed to [gstaudio.audio_quantize.AudioQuantize.new_]

None = 0: no flags
NonInterleaved = 1: samples are non-interleaved

The different filter interpolation methods.

None = 0: no interpolation
Linear = 1: linear interpolation of the filter coefficients
Cubic = 2: cubic interpolation of the filter coefficients

Select how the filter tables should be set up.

Interpolated = 0: Use interpolated filter tables. This uses less memory but more CPU and is slightly less accurate, but it allows for more efficient variable rate resampling with [gstaudio.audio_resampler.AudioResampler.update].
Full = 1: Use the full filter table. This uses more memory but less CPU.
Auto = 2: Automatically choose between interpolated and full filter tables.

Different resampler flags.

None = 0: no flags
NonInterleavedIn = 1: input samples are non-interleaved; an array of blocks of samples, one for each channel, should be passed to the resample function
NonInterleavedOut = 2: output samples are non-interleaved; an array of blocks of samples, one for each channel, should be passed to the resample function
VariableRate = 4: optimize for dynamic updates of the sample rates with [gstaudio.audio_resampler.AudioResampler.update]. This will select an interpolating filter when #GST_AUDIO_RESAMPLER_FILTER_MODE_AUTO is configured.

Different subsampling and upsampling methods

Nearest = 0: Duplicates samples when upsampling and drops them when downsampling
Linear = 1: Uses linear interpolation to reconstruct missing samples and averaging to downsample
Cubic = 2: Uses cubic interpolation
BlackmanNuttall = 3: Uses Blackman-Nuttall windowed sinc interpolation
Kaiser = 4: Uses Kaiser windowed sinc interpolation

The format of the samples in the ringbuffer.

Raw = 0: samples in linear or float format
MuLaw = 1: samples in mu-law
ALaw = 2: samples in A-law
ImaAdpcm = 3: samples in IMA ADPCM
Mpeg = 4: samples in MPEG audio (but not AAC) format
Gsm = 5: samples in GSM format
Iec958 = 6: samples in IEC958 frames (e.g. AC3)
Ac3 = 7: samples in AC3 format
Eac3 = 8: samples in EAC3 format
Dts = 9: samples in DTS format
Mpeg2Aac = 10: samples in MPEG-2 AAC ADTS format
Mpeg4Aac = 11: samples in MPEG-4 AAC ADTS format
Mpeg2AacRaw = 12: samples in MPEG-2 AAC raw format (Since: 1.12)
Mpeg4AacRaw = 13: samples in MPEG-4 AAC raw format (Since: 1.12)
Flac = 14: samples in FLAC format (Since: 1.12)
Dsd = 15: samples in DSD format (Since: 1.24)

The state of the ringbuffer.

Stopped = 0: The ringbuffer is stopped
Paused = 1: The ringbuffer is paused
Started = 2: The ringbuffer is started
Error = 3: The ringbuffer has encountered an error after it has been started, e.g. because the device was disconnected (Since: 1.2)

Enum value describing how DSD bits are grouped.

DsdFormatUnknown = 0: unknown / invalid DSD format
DsdFormatU8 = 1: 8 DSD bits in 1 byte
DsdFormatU16le = 2: 16 DSD bits in 2 bytes, little endian order
DsdFormatU16be = 3: 16 DSD bits in 2 bytes, big endian order
DsdFormatU32le = 4: 32 DSD bits in 4 bytes, little endian order
DsdFormatU32be = 5: 32 DSD bits in 4 bytes, big endian order
NumDsdFormats = 6: number of valid DSD formats
DsdFormatU16 = 2: 16 DSD bits in 2 bytes, native endianness
DsdFormatU32 = 4: 32 DSD bits in 4 bytes, native endianness

Different representations of a stream volume. [gstaudio.stream_volume.StreamVolume.convertVolume] allows converting between the different representations.

Formulas to convert from a linear to a cubic or dB volume are cbrt(val) and 20 * log10(val).

Linear = 0: Linear scale factor, 1.0 = 100%
Cubic = 1: Cubic volume scale
Db = 2: Logarithmic volume scale (dB, amplitude not power)

Subclasses must use (a subclass of) #GstAudioAggregatorPad for both their source and sink pads, [gst.element_class.ElementClass.addStaticPadTemplateWithGtype] is a convenient helper.

#GstAudioAggregator can perform conversion on the data arriving on its sink pads, based on the format expected downstream: in order to enable that behaviour, the GType of the sink pads must either be a (subclass of) #GstAudioAggregatorConvertPad to use the default #GstAudioConverter implementation, or a subclass of #GstAudioAggregatorPad implementing #GstAudioAggregatorPadClass.convert_buffer.

To allow for the output caps to change, the mechanism is the same as above, with the GType of the source pad.

See #GstAudioMixer for an example.

When conversion is enabled, #GstAudioAggregator will accept any type of raw audio caps and perform conversion on the data arriving on its sink pads, with whatever downstream expects as the target format.

In case downstream caps are not fully fixated, it will use the first configured sink pad to finish fixating its source pad caps.

A notable exception for now is the sample rate: sink pads must have the same sample rate as either the downstream requirement, or the first configured pad, or a combination of both (when downstream specifies a range or a set of acceptable rates).

The #GstAggregator::samples-selected signal is provided with some additional information about the output buffer:

  • "offset" G_TYPE_UINT64: Offset in samples since segment start for the position that is next to be filled in the output buffer.

  • "frames" G_TYPE_UINT: Number of frames per output buffer.

In addition the [gstbase.aggregator.Aggregator.peekNextSample] function returns additional information in the info #GstStructure of the returned sample:

  • "output-offset" G_TYPE_UINT64: Sample offset in the output segment, relative to the output segment's start, where the current position of this input buffer would be placed.

  • "position" G_TYPE_UINT: current position in the input buffer in samples
  • "size" G_TYPE_UINT: size of the input buffer in samples
Fields
GstCaps * currentCaps: The caps set by the subclass
void *[4] GstReserved
Fields
GstAggregatorClass parentClass
GstBuffer * function(GstAudioAggregator * aagg, uint numFrames) createOutputBuffer: Create a new output buffer containing num_frames frames.
gboolean function(GstAudioAggregator * aagg, GstAudioAggregatorPad * pad, GstBuffer * inbuf, uint inOffset, GstBuffer * outbuf, uint outOffset, uint numFrames) aggregateOneBuffer: Aggregates one input buffer into the output buffer. The in_offset and out_offset are in "frames", which is the size of a sample times the number of channels. Returns TRUE if any non-silence was added to the buffer.
void *[20] GstReserved

An implementation of GstPad that can be used with #GstAudioAggregator.

See #GstAudioAggregator for more details.

Fields
void *[4] GstReserved
Fields
void *[4] GstReserved

The default implementation of GstPad used with #GstAudioAggregator

Fields
GstAudioInfo info: The audio info for this pad, set from the incoming caps
void *[4] GstReserved
Fields
GstBuffer * function(GstAudioAggregatorPad * pad, GstAudioInfo * inInfo, GstAudioInfo * outInfo, GstBuffer * buffer) convertBuffer: Convert a buffer from one format to another.
void function(GstAudioAggregatorPad * pad) updateConversionInfo: Called when either the input or output formats have changed.
void *[20] GstReserved

This is the base class for audio sinks. Subclasses need to implement the ::create_ringbuffer vmethod. This base class will then take care of writing samples to the ringbuffer, synchronisation, clipping and flushing.

Fields
GstBaseSink element
GstAudioRingBuffer * ringbuffer
ulong bufferTime
ulong latencyTime
ulong nextSample
GstClock * providedClock
gboolean eosRendering
void *[4] GstReserved

#GstAudioBaseSink class. Override the vmethod to implement functionality.

Fields
GstBaseSinkClass parentClass: the parent class.
GstAudioRingBuffer * function(GstAudioBaseSink * sink) createRingbuffer: create and return a #GstAudioRingBuffer to write to.
GstBuffer * function(GstAudioBaseSink * sink, GstBuffer * buffer) payload: payload data in a format suitable to write to the sink. If no payloading is required, returns a reffed copy of the original buffer; otherwise returns the payloaded buffer with all other metadata copied.
void *[4] GstReserved

This is the base class for audio sources. Subclasses need to implement the ::create_ringbuffer vmethod. This base class will then take care of reading samples from the ringbuffer, synchronisation and flushing.

Fields
GstPushSrc element
GstAudioRingBuffer * ringbuffer
GstClockTime bufferTime
GstClockTime latencyTime
ulong nextSample
GstClock * clock
void *[4] GstReserved

#GstAudioBaseSrc class. Override the vmethod to implement functionality.

Fields
GstPushSrcClass parentClass: the parent class.
GstAudioRingBuffer * function(GstAudioBaseSrc * src) createRingbuffer: create and return a #GstAudioRingBuffer to read from.
void *[4] GstReserved

A structure containing the result of an audio buffer map operation, which is executed with [gstaudio.audio_buffer.AudioBuffer.map]. For non-interleaved (planar) buffers, the beginning of each channel in the buffer has its own pointer in the @planes array. For interleaved buffers, the @planes array only contains one item, which is the pointer to the beginning of the buffer, and @n_planes equals 1.

The different channels in @planes are always in the GStreamer channel order.

Fields
GstAudioInfo info: a #GstAudioInfo describing the audio properties of this buffer
size_t nSamples: the size of the buffer in samples
int nPlanes: the number of planes available
void ** planes: an array of @n_planes pointers pointing to the start of each plane in the mapped buffer
GstBuffer * buffer: the mapped buffer
GstMapInfo * mapInfos
void *[8] privPlanesArr
GstMapInfo[8] privMapInfosArr
void *[4] GstReserved

Provides a base class for CD digital audio (CDDA) sources, which handles things like seeking, querying, discid calculation, tags, and buffer timestamping.

Using GstAudioCdSrc-based elements in applications

GstAudioCdSrc registers two #GstFormats of its own, namely the "track" format and the "sector" format. Applications will usually only find the "track" format interesting. You can retrieve that #GstFormat for use in seek events or queries with gst_format_get_by_nick("track").

In order to query the number of tracks, for example, an application would set the CDDA source element to READY or PAUSED state and then query the number of tracks via [gst.element.Element.queryDuration] using the track format acquired above. Applications can query the currently playing track in the same way.

Alternatively, applications may retrieve the currently playing track and the total number of tracks from the taglist that will be posted on the bus whenever the CD is opened or the currently playing track changes. The taglist will contain GST_TAG_TRACK_NUMBER and GST_TAG_TRACK_COUNT tags.

Applications playing back CD audio using playbin and cdda://n URIs should issue a seek command in track format to change between tracks, rather than setting a new cdda://n+1 URI on playbin (as setting a new URI on playbin involves closing and re-opening the CD device, which is much much slower).

Tags and meta-information

CDDA sources will automatically emit a number of tags, details about which can be found in the libgsttag documentation. Those tags are: #GST_TAG_CDDA_CDDB_DISCID, #GST_TAG_CDDA_CDDB_DISCID_FULL, #GST_TAG_CDDA_MUSICBRAINZ_DISCID, #GST_TAG_CDDA_MUSICBRAINZ_DISCID_FULL, among others.

Tracks and Table of Contents (TOC)

Applications will be informed of the available tracks via a TOC message on the pipeline's #GstBus. The #GstToc will contain a #GstTocEntry for each track, with information about each track. The duration for each track can be retrieved via the #GST_TAG_DURATION tag from each entry's tag list, or calculated via [gst.toc_entry.TocEntry.getStartStopTimes]. The track entries in the TOC will be sorted by track number.

Fields
GstPushSrc pushsrc
GstTagList * tags
uint[2] GstReserved1
void *[2] GstReserved2

Audio CD source base class.

Fields
GstPushSrcClass pushsrcClass: the parent class
gboolean function(GstAudioCdSrc * src, const(char) * device) open: opening the device
void function(GstAudioCdSrc * src) close: closing the device
GstBuffer * function(GstAudioCdSrc * src, int sector) readSector: reading a sector
void *[20] GstReserved

CD track abstraction to communicate TOC entries to the base class.

This structure is only for use by subclasses in connection with [gstaudio.audio_cd_src.AudioCdSrc.addTrack].

Applications will be informed of the available tracks via a TOC message on the pipeline's #GstBus instead.

Fields
gboolean isAudio: Whether this is an audio track
uint num: Track number in TOC (usually starts from 1, but not always)
uint start: The first sector of this track (LBA)
uint end: The last sector of this track (LBA)
GstTagList * tags: Track-specific tags (e.g. from CD-Text information), or NULL
uint[2] GstReserved1
void *[2] GstReserved2

Extra buffer metadata describing how much audio has to be clipped from the start or end of a buffer. This is used for compressed formats, where the first frame usually has some additional samples due to encoder and decoder delays, and the last frame usually has some additional samples to be able to fill the complete last frame.

This is used to ensure that the decoded data ends up with the same number of samples, and that multiple decoded streams can be gaplessly concatenated.

Note

If clipping of the start is done by adjusting the segment, this meta has to be dropped from buffers, as otherwise clipping could happen twice.

Fields
GstMeta meta: parent #GstMeta
GstFormat format: GstFormat of @start and @stop, GST_FORMAT_DEFAULT is samples
ulong start: Amount of audio to clip from the start of the buffer
ulong end: Amount of audio to clip from the end of the buffer

#GstAudioClock makes it easy for elements to implement a #GstClock; they simply need to provide a function that returns the current clock time.

This object is internally used to implement the clock in #GstAudioBaseSink.

Fields
void * userData
GDestroyNotify destroyNotify
GstClockTime lastTime
GstClockTimeDiff timeOffset
void *[4] GstReserved
Fields
void *[4] GstReserved

This object is used to convert audio samples from one format to another. The object can perform conversion of:

  • audio format with optional dithering and noise shaping
  • audio samplerate
  • audio channels and channel layout

This base class is for audio decoders turning encoded data into raw audio samples.

GstAudioDecoder and subclass should cooperate as follows.

Configuration

  • Initially, GstAudioDecoder calls @start when the decoder element is activated, which allows the subclass to perform any global setup. Base class (context) parameters can already be set according to subclass capabilities (or possibly upon receiving more information in subsequent @set_format).

  • GstAudioDecoder calls @set_format to inform the subclass of the format of input audio data that it is about to receive. While unlikely, it might be called more than once, if changing input parameters require reconfiguration.

  • GstAudioDecoder calls @stop at end of all processing.

As of configuration stage, and throughout processing, GstAudioDecoder provides various (context) parameters, e.g. describing the format of output audio data (valid when output caps have been set) or current parsing state. Conversely, subclass can and should configure context to inform base class of its expectation w.r.t. buffer handling.

Data processing

  • Base class gathers input data, and optionally allows subclass to parse this into subsequently manageable (as defined by subclass) chunks. Such chunks are subsequently referred to as 'frames', though they may or may not correspond to 1 (or more) audio format frames.

  • Input frame is provided to subclass' @handle_frame.
  • If codec processing results in decoded data, subclass should call @gst_audio_decoder_finish_frame to have decoded data pushed downstream.

  • Just prior to actually pushing a buffer downstream, it is passed to @pre_push. Subclass should either use this callback to arrange for additional downstream pushing or otherwise ensure such custom pushing occurs after at least a method call has finished since setting src pad caps.

  • During the parsing process GstAudioDecoderClass will handle both srcpad and sinkpad events. Sink events will be passed to subclass if the @event callback has been provided.

Shutdown phase

  • GstAudioDecoder class calls @stop to inform the subclass that data parsing will be stopped.

Subclass is responsible for providing pad template caps for source and sink pads. The pads need to be named "sink" and "src". It also needs to set the fixed caps on srcpad, when the format is ensured. This is typically when base class calls subclass' @set_format function, though it might be delayed until calling @gst_audio_decoder_finish_frame.

In summary, above process should have subclass concentrating on codec data processing while leaving other matters to base class, such as most notably timestamp handling. While it may exert more control in this area (see e.g. @pre_push), it is very much not recommended.

In particular, base class will try to arrange for perfect output timestamps as much as possible while tracking upstream timestamps. To this end, if deviation between the next ideal expected perfect timestamp and upstream exceeds #GstAudioDecoder:tolerance, then resync to upstream occurs (which would happen always if the tolerance mechanism is disabled).

In non-live pipelines, the base class can also (configurably) arrange for output buffer aggregation, which may help to reduce large(r) numbers of small(er) buffers being pushed and processed downstream. Note that this feature is only available if the buffer layout is interleaved. For planar buffers, the decoder implementation is fully responsible for the output buffer size.

On the other hand, it should be noted that baseclass only provides limited seeking support (upon explicit subclass request), as full-fledged support should rather be left to upstream demuxer, parser or alike. This simple approach caters for seeking and duration reporting using estimated input bitrates.

Things that subclass need to take care of:

  • Provide pad templates
  • Set source pad caps when appropriate
  • Set user-configurable properties to sane defaults for the format and implementing codec at hand, and convey some subclass capabilities and expectations in context.

  • Accept data in @handle_frame and provide decoded results to @gst_audio_decoder_finish_frame. If it is prepared to perform PLC, it should also accept NULL data in @handle_frame and provide data for the indicated duration.

Fields
GstElement element
GstPad * sinkpad
GstPad * srcpad
GRecMutex streamLock
GstSegment inputSegment
GstSegment outputSegment
void *[20] GstReserved

Subclasses can override any of the available virtual methods or not, as needed. At minimum @handle_frame (and likely @set_format) needs to be overridden.

Fields
GstElementClass elementClass: The parent class structure
gboolean function(GstAudioDecoder * dec) start: Optional. Called when the element starts processing. Allows opening external resources.
gboolean function(GstAudioDecoder * dec) stop: Optional. Called when the element stops processing. Allows closing external resources.
gboolean function(GstAudioDecoder * dec, GstCaps * caps) setFormat: Notifies subclass of incoming data format (caps).
GstFlowReturn function(GstAudioDecoder * dec, GstAdapter * adapter, int * offset, int * length) parse: Optional. Allows chopping incoming data into manageable units (frames) for subsequent decoding. This division is at subclass discretion and may or may not correspond to 1 (or more) frames as defined by the audio format.
GstFlowReturn function(GstAudioDecoder * dec, GstBuffer * buffer) handleFrame: Provides input data (or NULL to clear any remaining data) to subclass. Input data ref management is performed by the base class; subclass should not care or intervene, and input data is only valid until the next call into the base class.
void function(GstAudioDecoder * dec, gboolean hard) flush: Optional. Instructs subclass to clear any codec caches and discard any pending samples and not yet returned decoded data. @hard indicates whether a FLUSH is being processed, or otherwise a DISCONT (or conceptually similar discontinuity).
GstFlowReturn function(GstAudioDecoder * dec, GstBuffer * * buffer) prePush: Optional. Called just prior to pushing (encoded data) buffer downstream. Subclass has full discretionary access to buffer, and a not OK flow return will abort downstream pushing.
gboolean function(GstAudioDecoder * dec, GstEvent * event) sinkEvent: Optional. Event handler on the sink pad. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstAudioDecoder * dec, GstEvent * event) srcEvent: Optional. Event handler on the src pad. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstAudioDecoder * dec) open: Optional. Called when the element changes to GST_STATE_READY. Allows opening external resources.
gboolean function(GstAudioDecoder * dec) close: Optional. Called when the element changes to GST_STATE_NULL. Allows closing external resources.
gboolean function(GstAudioDecoder * dec) negotiate: Optional. Negotiate with downstream and configure buffer pools, etc. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstAudioDecoder * dec, GstQuery * query) decideAllocation: Optional. Setup the allocation parameters for allocating output buffers. The passed in query contains the result of the downstream allocation query. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstAudioDecoder * dec, GstQuery * query) proposeAllocation: Optional. Propose buffer allocation parameters for upstream elements. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstAudioDecoder * dec, GstQuery * query) sinkQuery: Optional. Query handler on the sink pad. This function should return TRUE if the query could be performed. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstAudioDecoder * dec, GstQuery * query) srcQuery: Optional. Query handler on the source pad. This function should return TRUE if the query could be performed. Subclasses should chain up to the parent implementation to invoke the default handler.
GstCaps * function(GstAudioDecoder * dec, GstCaps * filter) getcaps: Optional. Allows for a custom sink getcaps implementation. If not implemented, the default returns gst_audio_decoder_proxy_getcaps applied to sink template caps.
gboolean function(GstAudioDecoder * enc, GstBuffer * outbuf, GstMeta * meta, GstBuffer * inbuf) transformMeta: Optional. Transform the metadata on the input buffer to the output buffer. By default this method copies all meta without tags and meta with only the "audio" tag. Subclasses can implement this method and return TRUE if the metadata is to be copied.
void *[16] GstReserved

Extra buffer metadata describing audio downmixing matrix. This metadata is attached to audio buffers and contains a matrix to downmix the buffer number of channels to @channels.

@matrix is a two-dimensional array of @to_channels times @from_channels coefficients, i.e. the i-th output channel is constructed by multiplying the input channels with the coefficients in @matrix[i] and taking the sum of the results.

Fields
GstMeta meta: parent #GstMeta
GstAudioChannelPosition * fromPosition: the channel positions of the source
GstAudioChannelPosition * toPosition: the channel positions of the destination
int fromChannels: the number of channels of the source
int toChannels: the number of channels of the destination
float ** matrix: the matrix coefficients

This base class is for audio encoders turning raw audio samples into encoded audio data.

GstAudioEncoder and subclass should cooperate as follows.

Configuration

  • Initially, GstAudioEncoder calls @start when the encoder element is activated, which allows the subclass to perform any global setup.
  • GstAudioEncoder calls @set_format to inform the subclass of the format of input audio data that it is about to receive. The subclass should set up for encoding and configure various base class parameters appropriately, notably those directing desired input data handling. While unlikely, it might be called more than once, if changing input parameters requires reconfiguration.
  • GstAudioEncoder calls @stop at the end of all processing.

As of the configuration stage, and throughout processing, GstAudioEncoder maintains various parameters that provide required context, e.g. describing the format of input audio data. Conversely, the subclass can and should configure these context parameters to inform the base class of its expectations w.r.t. buffer handling.

Data processing

  • The base class gathers input sample data (as directed by the context's frame_samples and frame_max) and provides this to the subclass' @handle_frame.
  • If codec processing results in encoded data, the subclass should call [gstaudio.audio_encoder.AudioEncoder.finishFrame] to have encoded data pushed downstream. Alternatively, it might also call [gstaudio.audio_encoder.AudioEncoder.finishFrame] (with a NULL buffer and some number of dropped samples) to indicate dropped (non-encoded) samples.
  • Just prior to actually pushing a buffer downstream, it is passed to @pre_push.
  • During the parsing process, GstAudioEncoderClass will handle both srcpad and sinkpad events. Sink events will be passed to the subclass if an @event callback has been provided.

Shutdown phase

  • The GstAudioEncoder class calls @stop to inform the subclass that data parsing will be stopped.

Subclass is responsible for providing pad template caps for source and sink pads. The pads need to be named "sink" and "src". It also needs to set the fixed caps on srcpad, when the format is ensured. This is typically when base class calls subclass' @set_format function, though it might be delayed until calling @gst_audio_encoder_finish_frame.

In summary, the above process should have the subclass concentrating on codec data processing while leaving other matters to the base class, most notably timestamp handling. While the subclass may exert more control in this area (see e.g. @pre_push), this is very much not recommended.

In particular, the base class will either favor tracking upstream timestamps (at the possible expense of jitter) or aim to arrange for a perfect stream of output timestamps, depending on #GstAudioEncoder:perfect-timestamp. However, in the latter case, the input may not be so perfect or ideal, which is handled as follows. An input timestamp is compared with the expected timestamp as dictated by the input sample stream, and if the deviation is less than #GstAudioEncoder:tolerance, the deviation is discarded. Otherwise, it is considered a discontinuity and the subsequent output timestamp is resynced to the new position after performing the configured discontinuity processing. In the non-perfect-timestamp case, an upstream variation exceeding tolerance only leads to marking DISCONT on subsequent outgoing buffers (while timestamps are adjusted to upstream regardless of variation). While DISCONT is also marked in the perfect-timestamp case, this one optionally (see #GstAudioEncoder:hard-resync) performs some additional steps, such as clipping of (early) input samples or draining all currently remaining input data, depending on the direction of the discontinuity.

If perfect timestamps are arranged, it is also possible to request baseclass (usually set by subclass) to provide additional buffer metadata (in OFFSET and OFFSET_END) fields according to granule defined semantics currently needed by oggmux. Specifically, OFFSET is set to granulepos (= sample count including buffer) and OFFSET_END to corresponding timestamp (as determined by same sample count and sample rate).

Things that the subclass needs to take care of:

  • Provide pad templates.
  • Set source pad caps when appropriate.
  • Inform the base class of buffer processing needs using the context's frame_samples and frame_bytes.
  • Set user-configurable properties to sane defaults for the format and implementing codec at hand, e.g. those controlling timestamp behaviour and discontinuity processing.
  • Accept data in @handle_frame and provide encoded results to [gstaudio.audio_encoder.AudioEncoder.finishFrame].

Fields
GstElement element
GstPad * sinkpad
GstPad * srcpad
GRecMutex streamLock
GstSegment inputSegment
GstSegment outputSegment
void *[20] GstReserved

Subclasses can override any of the available virtual methods or not, as needed. At minimum, @set_format and @handle_frame need to be overridden.

Fields
GstElementClass elementClassThe parent class structure
gboolean function(GstAudioEncoder * enc) startOptional. Called when the element starts processing. Allows opening external resources.
gboolean function(GstAudioEncoder * enc) stopOptional. Called when the element stops processing. Allows closing external resources.
gboolean function(GstAudioEncoder * enc, GstAudioInfo * info) setFormatNotifies subclass of incoming data format. GstAudioInfo contains the format according to provided caps.
GstFlowReturn function(GstAudioEncoder * enc, GstBuffer * buffer) handleFrameProvides input samples (or NULL to clear any remaining data) according to directions as configured by the subclass using the API. Input data ref management is performed by base class, subclass sho...
void function(GstAudioEncoder * enc) flushOptional. Instructs subclass to clear any codec caches and discard any pending samples and not yet returned encoded data.
GstFlowReturn function(GstAudioEncoder * enc, GstBuffer * * buffer) prePushOptional. Called just prior to pushing (encoded data) buffer downstream. Subclass has full discretionary access to buffer, and a non-OK flow return will abort downstream pushing.
gboolean function(GstAudioEncoder * enc, GstEvent * event) sinkEventOptional. Event handler on the sink pad. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstAudioEncoder * enc, GstEvent * event) srcEventOptional. Event handler on the src pad. Subclasses should chain up to the parent implementation to invoke the default handler.
GstCaps * function(GstAudioEncoder * enc, GstCaps * filter) getcapsOptional. Allows for a custom sink getcaps implementation (e.g. for multichannel input specification). If not implemented, default returns gst_audio_encoder_proxy_getcaps applied to sink template caps.
gboolean function(GstAudioEncoder * enc) openOptional. Called when the element changes to GST_STATE_READY. Allows opening external resources.
gboolean function(GstAudioEncoder * enc) closeOptional. Called when the element changes to GST_STATE_NULL. Allows closing external resources.
gboolean function(GstAudioEncoder * enc) negotiateOptional. Negotiate with downstream and configure buffer pools, etc. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstAudioEncoder * enc, GstQuery * query) decideAllocationOptional. Setup the allocation parameters for allocating output buffers. The passed in query contains the result of the downstream allocation query. Subclasses should chain up to the parent impleme...
gboolean function(GstAudioEncoder * enc, GstQuery * query) proposeAllocationOptional. Propose buffer allocation parameters for upstream elements. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstAudioEncoder * enc, GstBuffer * outbuf, GstMeta * meta, GstBuffer * inbuf) transformMetaOptional. Transform the metadata on the input buffer to the output buffer. By default this method copies all meta without tags and meta with only the "audio" tag. Subclasses can implement this meth...
gboolean function(GstAudioEncoder * encoder, GstQuery * query) sinkQueryOptional. Query handler on the sink pad. This function should return TRUE if the query could be performed. Subclasses should chain up to the parent implementation to invoke the default handler. Sin...
gboolean function(GstAudioEncoder * encoder, GstQuery * query) srcQueryOptional. Query handler on the source pad. This function should return TRUE if the query could be performed. Subclasses should chain up to the parent implementation to invoke the default handler. S...
void *[17] GstReserved

#GstAudioFilter is a #GstBaseTransform-derived base class for simple audio filters, i.e. those that output the same format that they get as input.

#GstAudioFilter will parse the input format for you (with error checking) before calling your setup function. Also, elements deriving from #GstAudioFilter may use [gstaudio.audio_filter_class.AudioFilterClass.addPadTemplates] from their class_init function to easily configure the set of caps/formats that the element is able to handle.

Derived classes should override the #GstAudioFilterClass.setup() and #GstBaseTransformClass.transform_ip() and/or #GstBaseTransformClass.transform() virtual functions in their class_init function.

Fields
GstBaseTransform basetransform
void *[4] GstReserved

In addition to the @setup virtual function, you should also override the GstBaseTransform::transform and/or GstBaseTransform::transform_ip virtual function.

Fields
GstBaseTransformClass basetransformclassparent class
gboolean function(GstAudioFilter * filter, const(GstAudioInfo) * info) setupvirtual function called whenever the format changes
void *[4] GstReserved

Information for an audio format.

Fields
GstAudioFormat format#GstAudioFormat
const(char) * namestring representation of the format
const(char) * descriptionuser readable description of the format
GstAudioFormatFlags flags#GstAudioFormatFlags
int endiannessthe endianness
int widthamount of bits used for one sample
int depthamount of valid bits in @width
ubyte[8] silence@width/8 bytes with 1 silent sample
GstAudioFormat unpackFormatthe format of the unpacked samples
GstAudioFormatUnpack unpackFuncfunction to unpack samples
GstAudioFormatPack packFuncfunction to pack samples
void *[4] GstReserved

Information describing audio properties. This information can be filled in from GstCaps with [gstaudio.audio_info.AudioInfo.fromCaps].

Use the provided macros to access the info in this structure.

Fields
const(GstAudioFormatInfo) * finfothe format info of the audio
GstAudioFlags flagsadditional audio flags
GstAudioLayout layoutaudio layout
int ratethe audio sample rate
int channelsthe number of channels
int bpfthe number of bytes for one frame, this is the size of one sample * @channels
GstAudioChannelPosition[64] positionthe positions for each channel
void *[4] GstReserved

Meta containing Audio Level Indication: https://tools.ietf.org/html/rfc6464

Fields
GstMeta metaparent #GstMeta
ubyte levelthe level in -dBov, from 0 to 127 (127 is silence).
gboolean voiceActivitywhether the buffer contains voice activity

#GstAudioDownmixMeta defines an audio downmix matrix to be sent along with audio buffers. The functions in this module help to create and attach the meta as well as to extract it.

Fields
GstMeta metaparent #GstMeta
GstAudioInfo infothe audio properties of the buffer
size_t samplesthe number of valid samples in the buffer
size_t * offsetsthe offsets (in bytes) where each channel plane starts in the buffer or null if the buffer has interleaved layout; if not null, this is guaranteed to be an array of @info.channels elements
size_t[8] privOffsetsArr
void *[4] GstReserved

#GstAudioResampler is a structure which holds the information required to perform various kinds of resampling filtering.

This object is the base class for audio ringbuffers used by the base audio source and sink classes.

The ringbuffer abstracts a circular buffer of data. One reader and one writer can operate on the data from different threads in a lockfree manner. The base class is sufficiently flexible to be used as an abstraction for DMA-based ringbuffers as well as pure software implementations.

Fields
GstObject object
GCond condused to signal start/stop/pause/resume actions
gboolean openboolean indicating that the ringbuffer is open
gboolean acquiredboolean indicating that the ringbuffer is acquired
ubyte * memorydata in the ringbuffer
size_t sizesize of data in the ringbuffer
GstClockTime * timestamps
GstAudioRingBufferSpec specformat and layout of the ringbuffer data
int samplesPerSegnumber of samples in one segment
ubyte * emptySegpointer to memory holding one segment of silence samples
int statestate of the buffer
int segdonereadpointer in the ringbuffer
int segbasesegment corresponding to segment 0 (unused)
int waitingis a reader or writer waiting for a free segment
void * cbData
gboolean needReorder
int[64] channelReorderMap
gboolean flushing
int mayStart
gboolean active
GDestroyNotify cbDataNotify
void *[3] GstReserved

The vmethods that subclasses can override to implement the ringbuffer.

Fields
GstObjectClass parentClassparent class
gboolean function(GstAudioRingBuffer * buf) openDeviceopen the device, don't set any params or allocate anything
gboolean function(GstAudioRingBuffer * buf, GstAudioRingBufferSpec * spec) acquireallocate the resources for the ringbuffer using the given spec
gboolean function(GstAudioRingBuffer * buf) releasefree resources of the ringbuffer
gboolean function(GstAudioRingBuffer * buf) closeDeviceclose the device
gboolean function(GstAudioRingBuffer * buf) startstart processing of samples
gboolean function(GstAudioRingBuffer * buf) pausepause processing of samples
gboolean function(GstAudioRingBuffer * buf) resumeresume processing of samples after pause
gboolean function(GstAudioRingBuffer * buf) stopstop processing of samples
uint function(GstAudioRingBuffer * buf) delayget number of frames queued in device
gboolean function(GstAudioRingBuffer * buf, gboolean active) activateactivate the thread that starts pulling and monitoring the consumed segments in the device.
uint function(GstAudioRingBuffer * buf, ulong * sample, ubyte * data, int inSamples, int outSamples, int * accum) commitwrite samples into the ringbuffer
void function(GstAudioRingBuffer * buf) clearAllOptional. Clear the entire ringbuffer. Subclasses should chain up to the parent implementation to invoke the default handler.
void *[4] GstReserved

The structure containing the format specification of the ringbuffer.

When @type is GST_AUDIO_RING_BUFFER_FORMAT_TYPE_DSD, the @dsd_format is valid (otherwise it is unused). Also, when DSD is the sample type, only the rate, channels, position, and bpf fields in @info are populated.

Fields
GstCaps * capsThe caps that generated the Spec.
GstAudioRingBufferFormatType typethe sample type
GstAudioInfo infothe #GstAudioInfo
ulong latencyTimethe latency in microseconds
ulong bufferTimethe total buffer size in microseconds
int segsizethe size of one segment in bytes
int segtotalthe total number of segments
int seglatencynumber of segments queued in the lower level device, defaults to segtotal
ABIType ABI

This is the simplest base class for audio sinks, requiring subclasses to implement only a small set of functions:

  • open() :Open the device.
  • prepare() :Configure the device with the specified format.
  • write() :Write samples to the device.
  • reset() :Unblock writes and flush the device.
  • delay() :Get the number of samples written but not yet played by the device.
  • unprepare() :Undo operations done by prepare.
  • close() :Close the device.

All scheduling of samples and timestamps is done in this base class together with #GstAudioBaseSink using a default implementation of a #GstAudioRingBuffer that uses threads.

Fields
GThread * thread
void *[4] GstReserved
Fields
GstAudioBaseSinkClass parentClassthe parent class structure.
gboolean function(GstAudioSink * sink) openOpen the device. No configuration needs to be done at this point. This function is also used to check if the device is available.
gboolean function(GstAudioSink * sink, GstAudioRingBufferSpec * spec) preparePrepare the device to operate with the specified parameters.
gboolean function(GstAudioSink * sink) unprepareUndo operations done in prepare.
gboolean function(GstAudioSink * sink) closeClose the device.
int function(GstAudioSink * sink, void * data, uint length) writeWrite data to the device. This vmethod is allowed to block until all the data is written. If such is the case then it is expected that pause, stop and reset will unblock the write when called.
uint function(GstAudioSink * sink) delayReturn how many frames are still in the device. Participates in computing the time for audio clocks and drives the synchronisation.
void function(GstAudioSink * sink) resetReturns as quickly as possible from a write and flush any pending samples from the device. This vmethod is deprecated. Please provide pause and stop instead.
void function(GstAudioSink * sink) pausePause the device and unblock write as fast as possible. For backwards compatibility, the audio sink will fall back to calling reset if this vmethod is not provided. Since: 1.18
void function(GstAudioSink * sink) resumeResume the device. Since: 1.18
void function(GstAudioSink * sink) stopStop the device and unblock write as fast as possible. Pending samples are flushed from the device. For backwards compatibility, the audio sink will fall back to calling reset if this vmethod is not pro...
GstAudioSinkClassExtension * extensionclass extension structure. Since: 1.18
Fields
void function(GstAudioSink * sink) clearAll

This is the simplest base class for audio sources, requiring subclasses to implement only a small set of functions:

  • open() :Open the device.
  • prepare() :Configure the device with the specified format.
  • read() :Read samples from the device.
  • reset() :Unblock reads and flush the device.
  • delay() :Get the number of samples in the device but not yet read.
  • unprepare() :Undo operations done by prepare.
  • close() :Close the device.

All scheduling of samples and timestamps is done in this base class together with #GstAudioBaseSrc using a default implementation of a #GstAudioRingBuffer that uses threads.

Fields
GThread * thread
void *[4] GstReserved

#GstAudioSrc class. Override the vmethods to implement functionality.

Fields
GstAudioBaseSrcClass parentClassthe parent class.
gboolean function(GstAudioSrc * src) openopen the device with the specified caps
gboolean function(GstAudioSrc * src, GstAudioRingBufferSpec * spec) prepareconfigure device with format
gboolean function(GstAudioSrc * src) unprepareundo the configuration
gboolean function(GstAudioSrc * src) closeclose the device
uint function(GstAudioSrc * src, void * data, uint length, GstClockTime * timestamp) readread samples from the audio device
uint function(GstAudioSrc * src) delaythe number of frames queued in the device
void function(GstAudioSrc * src) resetunblock a read to the device and reset.
void *[4] GstReserved

#GstAudioStreamAlign provides a helper object that helps tracking audio stream alignment and discontinuities, and detects discontinuities if possible.

See [gstaudio.audio_stream_align.AudioStreamAlign.new_] for a description of its parameters and [gstaudio.audio_stream_align.AudioStreamAlign.process] for the details of the processing.

Information describing DSD audio properties.

In DSD, the "sample format" is the bit. Unlike PCM, there are no further "sample formats" in DSD. However, in software, DSD bits are grouped into bytes (since dealing with individual bits is impractical), and these bytes in turn are grouped into words. This becomes relevant when interleaving channels and transmitting DSD data through audio APIs. The different types of grouping DSD bytes are referred to as the "DSD grouping format" or just "DSD format". #GstDsdFormat has a list of valid ways of grouping DSD bytes into words.

DSD rates are equivalent to PCM sample rates, except that they specify how many DSD bytes are consumed per second. This refers to the bytes per second _per channel_; the rate does not change when the number of channels changes. (Strictly speaking, it would be more correct to measure the bits per second, since the bit is the DSD "sample format", but it is more practical to use bytes.) In DSD, bit rates are always an integer multiple of the CD audio rate (44100) or the DAT rate (48000). DSD64-44x is 44100 * 64 = 2822400 bits per second, or 352800 bytes per second (the latter would be used in this info structure). DSD64-48x is 48000 * 64 = 3072000 bits per second, or 384000 bytes per second. #GST_DSD_MAKE_DSD_RATE_44x can be used for specifying DSD-44x rates, and #GST_DSD_MAKE_DSD_RATE_48x can be used for specifying DSD-48x ones. Also, since DSD-48x is less well known, when the multiplier is given without the 44x/48x specifier, 44x is typically implied.

It is important to know that in DSD, different format widths correspond to different playtimes. That is, a word with 32 DSD bits covers two times as much playtime as a word with 16 DSD bits. This is in contrast to PCM, where one word (= one PCM sample) always covers a time period of 1/samplerate, no matter how many bits a PCM sample is made of. For this reason, DSD and PCM widths and strides cannot be used the same way.

Multiple channels are arranged in DSD data either interleaved or non-interleaved. This is similar to PCM. Interleaved layouts rotate between channels and words. First, word 0 of channel 0 is present. Then word 0 of channel 1 follows. Then word 0 of channel 2, etc. until all channels are through, then comes word 1 of channel 0, etc.

Non-interleaved data is planar. First, all words of channel 0 are present, then all words of channel 1, etc. Unlike interleaved data, non-interleaved data can be sparse, that is, there can be space in between the planes. The @positions array specifies the plane offsets.

In uncommon cases, the DSD bits in the data bytes can be stored in reverse order. For example, normally, in DSDU8, the first byte contains DSD bits 0 to 7, and the most significant bit of that byte is DSD bit 0. If this order is reversed, then bit 7 is the first one instead. In that case, @reversed_bytes is set to TRUE.

Use the provided macros to access the info in this structure.

Fields
GstDsdFormat formatDSD grouping format
int rateDSD rate
int channelsnumber of channels (must be at least 1)
GstAudioLayout layoutaudio layout
gboolean reversedBytestrue if the DSD bits in the data bytes are reversed, that is, the least significant bit comes first
GstAudioChannelPosition[64] positionspositions for each channel
void *[4] GstReserved

Buffer metadata describing planar DSD contents in the buffer. This is not needed for interleaved DSD data, and is required for non-interleaved (= planar) data.

The different channels in @offsets are always in the GStreamer channel order. Zero-copy channel reordering can be implemented by swapping the values in @offsets.

It is not allowed for channels to overlap in memory, i.e. for each i in [0, channels), the range [@offsets[i], @offsets[i] + @num_bytes_per_channel) must not overlap with any other such range.

It is, however, allowed to leave parts of the buffer memory unused, by using @offsets and @num_bytes_per_channel in such a way that leaves gaps in it. This is used to implement zero-copy clipping in non-interleaved buffers.

Obviously, due to the above, it is not safe to infer the number of valid bytes from the size of the buffer. You should always use the @num_bytes_per_channel variable of this metadata.

Fields
GstMeta metaparent #GstMeta
int numChannelsnumber of channels in the DSD data
size_t numBytesPerChannelthe number of valid bytes per channel in the buffer
size_t * offsetsthe offsets (in bytes) where each channel plane starts in the buffer
size_t[8] privOffsetsArr
void *[4] GstReserved

This interface is implemented by elements that provide a stream volume. Examples of such elements are #volume and #playbin.

Applications can use this interface to get or set the current stream volume. For this, the "volume" #GObject property can be used, or the helper functions [gstaudio.stream_volume.StreamVolume.setVolume] and [gstaudio.stream_volume.StreamVolume.getVolume]. This volume is always a linear factor, i.e. 0.0 is muted and 1.0 is 100%. For showing the volume in a GUI it might make sense to convert it to a different format by using [gstaudio.stream_volume.StreamVolume.convertVolume]. Volume sliders should usually use a cubic volume.

Separate from the volume the stream can also be muted by the "mute" #GObject property or [gstaudio.stream_volume.StreamVolume.setMute] and [gstaudio.stream_volume.StreamVolume.getMute].

Elements that provide some kind of stream volume should implement the "volume" and "mute" #GObject properties and handle setting and getting of them properly. The volume property is defined to be a linear volume factor.

alias GstAudioBaseSinkCustomSlavingCallback = void function(GstAudioBaseSink * sink, GstClockTime etime, GstClockTime itime, GstClockTimeDiff * requestedSkew, GstAudioBaseSinkDiscontReason discontReason, void * userData)
alias GstAudioClockGetTimeFunc = GstClockTime function(GstClock * clock, void * userData)
alias GstAudioFormatPack = void function(const(GstAudioFormatInfo) * info, GstAudioPackFlags flags, const(void) * src, void * data, int length)
alias GstAudioFormatUnpack = void function(const(GstAudioFormatInfo) * info, GstAudioPackFlags flags, void * dest, const(void) * data, int length)
alias GstAudioRingBufferCallback = void function(GstAudioRingBuffer * rbuf, ubyte * data, uint len, void * userData)