gstvideo.c.types

C types for gstvideo1 library

struct GstAncillaryMeta GstColorBalance GstColorBalanceChannel GstColorBalanceChannelClass GstColorBalanceInterface GstNavigation GstNavigationInterface GstVideoAFDMeta GstVideoAffineTransformationMeta GstVideoAggregator GstVideoAggregatorClass GstVideoAggregatorConvertPad GstVideoAggregatorConvertPadClass GstVideoAggregatorConvertPadPrivate GstVideoAggregatorPad GstVideoAggregatorPadClass GstVideoAggregatorPadPrivate GstVideoAggregatorParallelConvertPad GstVideoAggregatorParallelConvertPadClass GstVideoAggregatorPrivate GstVideoAlignment GstVideoAncillary GstVideoBarMeta GstVideoBufferPool GstVideoBufferPoolClass GstVideoBufferPoolPrivate GstVideoCaptionMeta GstVideoChromaResample GstVideoCodecAlphaMeta GstVideoCodecFrame GstVideoCodecState GstVideoColorimetry GstVideoColorPrimariesInfo GstVideoContentLightLevel GstVideoConverter GstVideoCropMeta GstVideoDecoder GstVideoDecoderClass GstVideoDecoderPrivate GstVideoDirection GstVideoDirectionInterface GstVideoDither GstVideoEncoder GstVideoEncoderClass GstVideoEncoderPrivate GstVideoFilter GstVideoFilterClass GstVideoFormatInfo GstVideoFrame GstVideoGLTextureUploadMeta GstVideoInfo GstVideoInfoDmaDrm GstVideoMasteringDisplayInfo GstVideoMasteringDisplayInfoCoordinates GstVideoMeta GstVideoMetaTransform GstVideoMultiviewFlagsSet GstVideoOrientation GstVideoOrientationInterface GstVideoOverlay GstVideoOverlayComposition GstVideoOverlayCompositionMeta GstVideoOverlayInterface GstVideoOverlayRectangle GstVideoRectangle GstVideoRegionOfInterestMeta GstVideoResampler GstVideoScaler GstVideoSEIUserDataUnregisteredMeta GstVideoSink GstVideoSinkClass GstVideoSinkPrivate GstVideoTileInfo GstVideoTimeCode GstVideoTimeCodeConfig GstVideoTimeCodeInterval GstVideoTimeCodeMeta GstVideoVBIEncoder GstVideoVBIParser

Types (135)

Location of a @GstAncillaryMeta.

Progressive = 0: Progressive or no field specified (default)
InterlacedFirst = 16: Interlaced first field
InterlacedSecond = 17: Interlaced second field

An enumeration indicating whether an element implements color balancing operations in software or in dedicated hardware. In general, dedicated hardware implementations (such as those provided by xvimagesink) are preferred.

Hardware = 0: Color balance is implemented with dedicated hardware.
Software = 1: Color balance is implemented via software processing.

A set of commands that may be issued to an element providing the #GstNavigation interface. The available commands can be queried via the [gstvideo.navigation.Navigation.queryNewCommands] query.

For convenience in handling DVD navigation, the MENU commands are aliased as:

GST_NAVIGATION_COMMAND_DVD_MENU = @GST_NAVIGATION_COMMAND_MENU1
GST_NAVIGATION_COMMAND_DVD_TITLE_MENU = @GST_NAVIGATION_COMMAND_MENU2
GST_NAVIGATION_COMMAND_DVD_ROOT_MENU = @GST_NAVIGATION_COMMAND_MENU3
GST_NAVIGATION_COMMAND_DVD_SUBPICTURE_MENU = @GST_NAVIGATION_COMMAND_MENU4
GST_NAVIGATION_COMMAND_DVD_AUDIO_MENU = @GST_NAVIGATION_COMMAND_MENU5
GST_NAVIGATION_COMMAND_DVD_ANGLE_MENU = @GST_NAVIGATION_COMMAND_MENU6
GST_NAVIGATION_COMMAND_DVD_CHAPTER_MENU = @GST_NAVIGATION_COMMAND_MENU7

Invalid = 0: An invalid command entry
Menu1 = 1: Execute navigation menu command 1. For DVD, this enters the DVD root menu, or exits back to the title from the menu.
Menu2 = 2: Execute navigation menu command 2. For DVD, this jumps to the DVD title menu.
Menu3 = 3: Execute navigation menu command 3. For DVD, this jumps into the DVD root menu.
Menu4 = 4: Execute navigation menu command 4. For DVD, this jumps to the Subpicture menu.
Menu5 = 5: Execute navigation menu command 5. For DVD, this jumps to the audio menu.
Menu6 = 6: Execute navigation menu command 6. For DVD, this jumps to the angles menu.
Menu7 = 7: Execute navigation menu command 7. For DVD, this jumps to the chapter menu.
Left = 20: Select the next button to the left in a menu, if such a button exists.
Right = 21: Select the next button to the right in a menu, if such a button exists.
Up = 22: Select the button above the current one in a menu, if such a button exists.
Down = 23: Select the button below the current one in a menu, if such a button exists.
Activate = 24: Activate (click) the currently selected button in a menu, if such a button exists.
PrevAngle = 30: Switch to the previous angle in a multiangle feature.
NextAngle = 31: Switch to the next angle in a multiangle feature.

Enum values for the various events that an element implementing the GstNavigation interface might send up the pipeline. Touch events have been inspired by the libinput API, and have the same meaning here.

Invalid = 0: Returned from [gstvideo.navigation.Navigation.eventGetType] when the passed event is not a navigation event.
KeyPress = 1: A key press event. Use [gstvideo.navigation.Navigation.eventParseKeyEvent] to extract the details from the event.
KeyRelease = 2: A key release event. Use [gstvideo.navigation.Navigation.eventParseKeyEvent] to extract the details from the event.
MouseButtonPress = 3: A mouse button press event. Use [gstvideo.navigation.Navigation.eventParseMouseButtonEvent] to extract the details from the event.
MouseButtonRelease = 4: A mouse button release event. Use [gstvideo.navigation.Navigation.eventParseMouseButtonEvent] to extract the details from the event.
MouseMove = 5: A mouse movement event. Use [gstvideo.navigation.Navigation.eventParseMouseMoveEvent] to extract the details from the event.
Command = 6: A navigation command event. Use [gstvideo.navigation.Navigation.eventParseCommand] to extract the details from the event.
MouseScroll = 7: A mouse scroll event. Use [gstvideo.navigation.Navigation.eventParseMouseScrollEvent] to extract the details from the event.
TouchDown = 8: An event describing a new touch point, which will be assigned an identifier that is unique to it for the duration of its movement on the screen. Use [gstvideo.navigation.Navigation.eventParseTouchEvent] to extract the details from the event.
TouchMotion = 9: An event describing the movement of an active touch point across the screen. Use [gstvideo.navigation.Navigation.eventParseTouchEvent] to extract the details from the event.
TouchUp = 10: An event describing a removed touch point. After this event, its identifier may be reused for any new touch points. Use [gstvideo.navigation.Navigation.eventParseTouchUpEvent] to extract the details from the event.
TouchFrame = 11: An event signaling the end of a sequence of simultaneous touch events.
TouchCancel = 12: An event cancelling all currently active touch points.

A set of notifications that may be received on the bus when navigation related status changes.

Invalid = 0: Returned from [gstvideo.navigation.Navigation.messageGetType] when the passed message is not a navigation message.
MouseOver = 1: Sent when the mouse moves over or leaves a clickable region of the output, such as a DVD menu button.
CommandsChanged = 2: Sent when the set of available commands changes and should be re-queried by interested applications.
AnglesChanged = 3: Sent when display angles in a multi-angle feature (such as a multiangle DVD) change - either angles have appeared or disappeared.
Event = 4: Sent when a navigation event was not handled by any element in the pipeline (Since: 1.6)

Flags to indicate the state of modifier keys and mouse buttons in events.

Typical modifier keys are Shift, Control, Meta, Super, Hyper, Alt, Compose, Apple, CapsLock or ShiftLock.

None = 0
ShiftMask = 1: the Shift key.
LockMask = 2
ControlMask = 4: the Control key.
Mod1Mask = 8: the third modifier key
Mod2Mask = 16: the fourth modifier key
Mod3Mask = 32: the fifth modifier key
Mod4Mask = 64: the sixth modifier key
Mod5Mask = 128: the seventh modifier key
Button1Mask = 256: the first mouse button (usually the left button).
Button2Mask = 512: the second mouse button (usually the right button).
Button3Mask = 1024: the third mouse button (usually the mouse wheel button or middle button).
Button4Mask = 2048: the fourth mouse button (typically the "Back" button).
Button5Mask = 4096: the fifth mouse button (typically the "Forward" button).
SuperMask = 67108864: the Super modifier
HyperMask = 134217728: the Hyper modifier
MetaMask = 268435456: the Meta modifier
Mask = 469770239: A mask covering all entries in #GdkModifierType.

Types of navigation interface queries.

Invalid = 0: invalid query
Commands = 1: command query
Angles = 2: viewing angle query

Enumeration of the different standards that may apply to AFD data:

0) ETSI/DVB:

https://www.etsi.org/deliver/etsi_ts/101100_101199/101154/02.01.01_60/ts_101154v020101p.pdf

1) ATSC A/53:

https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf

2) SMPTE ST2016-1:

DvbEtsi = 0: AFD value is from DVB/ETSI standard
AtscA53 = 1: AFD value is from ATSC A/53 standard
SmpteSt20161 = 2: AFD value is from SMPTE ST2016-1 standard

Enumeration of the various values for Active Format Description (AFD)

AFD should be included in video user data whenever the rectangular picture area containing useful information does not extend to the full height or width of the coded frame. AFD data may also be included in user data when the rectangular picture area containing useful information extends to the full height and width of the coded frame.

For details, see Table 6.14 Active Format in:

ATSC Digital Television Standard: Part 4 – MPEG-2 Video System Characteristics

https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf

and Active Format Description in Complete list of AFD codes

https://en.wikipedia.org/wiki/Active_Format_Description#Complete_list_of_AFD_codes

and SMPTE ST2016-1

Notes:

0) AFD 0 is undefined for ATSC and SMPTE ST2016-1, indicating that AFD data is not available: if Bar Data is not present, AFD '0000' indicates that exact information is not available and the active image should be assumed to be the same as the coded frame. AFD '0000' accompanied by Bar Data signals that the active image's aspect ratio is narrower than 16:9, but is not 4:3 or 14:9. As the exact aspect ratio cannot be conveyed by AFD alone, wherever possible AFD '0000' should be accompanied by Bar Data to define the exact vertical or horizontal extent of the active image.
1) AFD 0 is reserved for DVB/ETSI.
2) Values 1, 5, 6, 7, and 12 are reserved for both ATSC and DVB/ETSI.
3) Values 2 and 3 are not recommended for ATSC, but are valid for DVB/ETSI.

Unavailable = 0: Unavailable (see note 0 below).
_169TopAligned = 2: For 4:3 coded frame, letterbox 16:9 image, at top of the coded frame. For 16:9 coded frame, full frame 16:9 image, the same as the coded frame.
_149TopAligned = 3: For 4:3 coded frame, letterbox 14:9 image, at top of the coded frame. For 16:9 coded frame, pillarbox 14:9 image, horizontally centered in the coded frame.
GreaterThan169 = 4: For 4:3 coded frame, letterbox image with an aspect ratio greater than 16:9, vertically centered in the coded frame. For 16:9 coded frame, letterbox image with an aspect ratio greater than 16:9.
_43Full169Full = 8: For 4:3 coded frame, full frame 4:3 image, the same as the coded frame. For 16:9 coded frame, full frame 16:9 image, the same as the coded frame.
_43Full43Pillar = 9: For 4:3 coded frame, full frame 4:3 image, the same as the coded frame. For 16:9 coded frame, pillarbox 4:3 image, horizontally centered in the coded frame.
_169Letter169Full = 10: For 4:3 coded frame, letterbox 16:9 image, vertically centered in the coded frame with all image areas protected. For 16:9 coded frame, full frame 16:9 image, with all image areas protected.
_149Letter149Pillar = 11: For 4:3 coded frame, letterbox 14:9 image, vertically centered in the coded frame. For 16:9 coded frame, pillarbox 14:9 image, horizontally centered in the coded frame.
_43Full149Center = 13: For 4:3 coded frame, full frame 4:3 image, with alternative 14:9 center. For 16:9 coded frame, pillarbox 4:3 image, with alternative 14:9 center.
_169Letter149Center = 14: For 4:3 coded frame, letterbox 16:9 image, with alternative 14:9 center. For 16:9 coded frame, full frame 16:9 image, with alternative 14:9 center.
_169Letter43Center = 15: For 4:3 coded frame, letterbox 16:9 image, with alternative 4:3 center. For 16:9 coded frame, full frame 16:9 image, with alternative 4:3 center.

Different alpha modes.

Copy = 0: When input and output have alpha, it will be copied. When the input has no alpha, alpha will be set to #GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE
Set = 1: set all alpha to #GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE
Mult = 2: multiply all alpha with #GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE. When the input format has no alpha but the output format has, the alpha value will be set to #GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE

Values for the Data Identifier (DID) of ancillary data, as used with #GstAncillaryMeta.
Undefined = 0
Deletion = 128
Hanc3gAudioDataFirst = 160
Hanc3gAudioDataLast = 167
HancHdtvAudioDataFirst = 224
HancHdtvAudioDataLast = 231
HancSdtvAudioData1First = 236
HancSdtvAudioData1Last = 239
CameraPosition = 240
HancErrorDetection = 244
HancSdtvAudioData2First = 248
HancSdtvAudioData2Last = 255

Some known types of Ancillary Data identifiers.

S334Eia708 = 24833: CEA 708 Ancillary data according to SMPTE 334
S334Eia608 = 24834: CEA 608 Ancillary data according to SMPTE 334
S20163AfdBar = 16645: AFD/Bar Ancillary data according to SMPTE 2016-3 (Since: 1.18)

Additional video buffer flags. These flags can potentially be used on any buffers carrying closed caption data, or video data - even encoded data.

Note that these are only valid for #GstCaps of type: video/... and caption/... They can conflict with other extended buffer flags.

Interlaced = 1048576: If the #GstBuffer is interlaced. In mixed interlace-mode, this flag specifies whether the frame is interlaced or progressive.
Tff = 2097152: If the #GstBuffer is interlaced, then the first field in the video frame is the top field. If unset, the bottom field is first.
Rff = 4194304: If the #GstBuffer is interlaced, then the first field (as defined by the [gstvideo.types.VideoBufferFlags.Tff] flag setting) is repeated.
Onefield = 8388608: If the #GstBuffer is interlaced, then only the first field (as defined by the [gstvideo.types.VideoBufferFlags.Tff] flag setting) is to be displayed (Since: 1.16).
MultipleView = 16777216: The #GstBuffer contains one or more specific views, such as left or right eye view. This flag is set on any buffer that contains non-mono content - even for streams that contain only a single view...
FirstInBundle = 33554432: When conveying stereo/multiview content with frame-by-frame methods, this flag marks the first buffer in a bundle of frames that belong together.
TopField = 10485760: The video frame has the top field only. This is the same as GST_VIDEO_BUFFER_FLAG_TFF | GST_VIDEO_BUFFER_FLAG_ONEFIELD (Since: 1.16). Use GST_VIDEO_BUFFER_IS_TOP_FIELD() to check for this flag.
BottomField = 8388608: The video frame has the bottom field only. This is the same as GST_VIDEO_BUFFER_FLAG_ONEFIELD (GST_VIDEO_BUFFER_FLAG_TFF flag unset) (Since: 1.16). Use GST_VIDEO_BUFFER_IS_BOTTOM_FIELD() to check for this flag.
Marker = 512: The #GstBuffer contains the end of a video field or frame boundary such as the last subframe or packet (Since: 1.18).
Last = 268435456: Offset to define more flags

The various known types of Closed Caption (CC).

Unknown = 0: Unknown type of CC
Cea608Raw = 1: CEA-608 as byte pairs. Note that this format is not recommended since it does not specify to which field the caption comes from and therefore assumes it comes from the first field (and that there i...
Cea608S3341a = 2: CEA-608 as byte triplets as defined in SMPTE S334-1 Annex A. The second and third byte of the byte triplet is the raw CEA608 data, the first byte is a bitfield: The top/7th bit is 0 for the second ...
Cea708Raw = 3: CEA-708 as cc_data byte triplets. They can also contain 608-in-708 and the first byte of each triplet has to be inspected for detecting the type.
Cea708Cdp = 4: CEA-708 (and optionally CEA-608) in a CDP (Caption Distribution Packet) defined by SMPTE S-334-2. Contains the whole CDP (starting with 0x9669).

Extra flags that influence the result from [gstvideo.video_chroma_resample.VideoChromaResample.new_].

None = 0: no flags
Interlaced = 1: the input is interlaced

Different subsampling and upsampling methods

Nearest = 0: Duplicates the chroma samples when upsampling and drops when subsampling
Linear = 1: Uses linear interpolation to reconstruct missing chroma and averaging to subsample

Different chroma downsampling and upsampling modes

Full = 0: do full chroma up and down sampling
UpsampleOnly = 1: only perform chroma upsampling
DownsampleOnly = 2: only perform chroma downsampling
None = 3: disable chroma resampling

Various Chroma sitings.

Unknown = 0: unknown cositing
None = 1: no cositing
HCosited = 2: chroma is horizontally cosited
VCosited = 4: chroma is vertically cosited
AltLine = 8: chroma samples are sited on alternate lines
Cosited = 6: chroma samples cosited with luma samples
Jpeg = 1: jpeg style cositing, also for mpeg1 and mjpeg
Mpeg2 = 2: mpeg2 style cositing
Dv = 14: DV style cositing

Flags for #GstVideoCodecFrame

DecodeOnly = 1: is the frame only meant to be decoded
SyncPoint = 2: is the frame a synchronization point (keyframe)
ForceKeyframe = 4: should the output frame be made a keyframe
ForceKeyframeHeaders = 8: should the encoder output stream headers
Corrupted = 16: The buffer data is corrupted.

The color matrix is used to convert between Y'PbPr and non-linear RGB (R'G'B')

Unknown = 0: unknown matrix
Rgb = 1: identity matrix. Order of coefficients is actually GBR, also IEC 61966-2-1 (sRGB)
Fcc = 2: FCC Title 47 Code of Federal Regulations 73.682 (a)(20)
Bt709 = 3: ITU-R BT.709 color matrix, also ITU-R BT1361 / IEC 61966-2-4 xvYCC709 / SMPTE RP177 Annex B
Bt601 = 4: ITU-R BT.601 color matrix, also SMPTE170M / ITU-R BT1358 525 / ITU-R BT1700 NTSC
Smpte240m = 5: SMPTE 240M color matrix
Bt2020 = 6: ITU-R BT.2020 color matrix. Since: 1.6

The color primaries define how to transform linear RGB values to and from the CIE XYZ colorspace.

Unknown = 0: unknown color primaries
Bt709 = 1: BT709 primaries, also ITU-R BT1361 / IEC 61966-2-4 / SMPTE RP177 Annex B
Bt470m = 2: BT470M primaries, also FCC Title 47 Code of Federal Regulations 73.682 (a)(20)
Bt470bg = 3: BT470BG primaries, also ITU-R BT601-6 625 / ITU-R BT1358 625 / ITU-R BT1700 625 PAL & SECAM
Smpte170m = 4: SMPTE170M primaries, also ITU-R BT601-6 525 / ITU-R BT1358 525 / ITU-R BT1700 NTSC
Smpte240m = 5: SMPTE240M primaries
Film = 6: Generic film (colour filters using Illuminant C)
Bt2020 = 7: ITU-R BT2020 primaries. Since: 1.6
Adobergb = 8: Adobe RGB primaries. Since: 1.8
Smptest428 = 9: SMPTE ST 428 primaries (CIE 1931 XYZ). Since: 1.16
Smpterp431 = 10: SMPTE RP 431 primaries (ST 431-2 (2011) / DCI P3). Since: 1.16
Smpteeg432 = 11: SMPTE EG 432 primaries (ST 432-1 (2010) / P3 D65). Since: 1.16
Ebu3213 = 12: EBU 3213 primaries (JEDEC P22 phosphors). Since: 1.16

Possible color range values. These constants are defined for 8 bit color values and can be scaled for other bit depths.

Unknown = 0: unknown range
_0255 = 1: [0..255] for 8 bit components
_16235 = 2: [16..235] for 8 bit components. Chroma has [16..240] range.

Flags to be used in combination with [gstvideo.video_decoder.VideoDecoder.requestSyncPoint]. See the function documentation for more details.

DiscardInput = 1: discard all following input until the next sync point.
CorruptOutput = 2: discard all following output until the next sync point.

Extra flags that influence the result from [gstvideo.video_dither.VideoDither.new_].

None = 0: no flags
Interlaced = 1: the input is interlaced
Quantize = 2: quantize values in addition to adding dither.

Different dithering methods to use.

None = 0: no dithering
Verterr = 1: propagate rounding errors downwards
FloydSteinberg = 2: Dither with Floyd-Steinberg error diffusion
SierraLite = 3: Dither with Sierra Lite error diffusion
Bayer = 4: ordered dither using a Bayer pattern

Field order of interlaced content. This is only valid for interlace-mode=interleaved and not interlace-mode=mixed. In the case of mixed or GST_VIDEO_FIELD_ORDER_UNKNOWN, the field order is signalled via buffer flags.

Unknown = 0: unknown field order for interlaced content. The actual field order is signalled via buffer flags.
TopFieldFirst = 1: top field is first
BottomFieldFirst = 2: bottom field is first

enum GstVideoFlags : uint

Extra video flags

None = 0: no flags
VariableFps = 1: a variable fps is selected, fps_n and fps_d denote the maximum fps of the video
PremultipliedAlpha = 2: Each color has been scaled by the alpha value.

Enum value describing the most common video formats.

See the GStreamer raw video format design document for details about the layout and packing of these formats in memory.

Unknown = 0: Unknown or unset video format id
Encoded = 1: Encoded video format. Only ever use that in caps for special video formats in combination with non-system memory GstCapsFeatures where it does not make sense to specify a real video format.
I420 = 2: planar 4:2:0 YUV
Yv12 = 3: planar 4:2:0 YVU (like I420 but UV planes swapped)
Yuy2 = 4: packed 4:2:2 YUV (Y0-U0-Y1-V0 Y2-U2-Y3-V2 Y4 ...)
Uyvy = 5: packed 4:2:2 YUV (U0-Y0-V0-Y1 U2-Y2-V2-Y3 U4 ...)
Ayuv = 6: packed 4:4:4 YUV with alpha channel (A0-Y0-U0-V0 ...)
Rgbx = 7: sparse rgb packed into 32 bit, space last
Bgrx = 8: sparse reverse rgb packed into 32 bit, space last
Xrgb = 9: sparse rgb packed into 32 bit, space first
Xbgr = 10: sparse reverse rgb packed into 32 bit, space first
Rgba = 11: rgb with alpha channel last
Bgra = 12: reverse rgb with alpha channel last
Argb = 13: rgb with alpha channel first
Abgr = 14: reverse rgb with alpha channel first
Rgb = 15: RGB packed into 24 bits without padding (`R-G-B-R-G-B`)
Bgr = 16: reverse RGB packed into 24 bits without padding (`B-G-R-B-G-R`)
Y41b = 17: planar 4:1:1 YUV
Y42b = 18: planar 4:2:2 YUV
Yvyu = 19: packed 4:2:2 YUV (Y0-V0-Y1-U0 Y2-V2-Y3-U2 Y4 ...)
Y444 = 20: planar 4:4:4 YUV
V210 = 21: packed 4:2:2 10-bit YUV, complex format
V216 = 22: packed 4:2:2 16-bit YUV, Y0-U0-Y1-V1 order
Nv12 = 23: planar 4:2:0 YUV with interleaved UV plane
Nv21 = 24: planar 4:2:0 YUV with interleaved VU plane
Gray8 = 25: 8-bit grayscale
Gray16Be = 26: 16-bit grayscale, most significant byte first
Gray16Le = 27: 16-bit grayscale, least significant byte first
V308 = 28: packed 4:4:4 YUV (Y-U-V ...)
Rgb16 = 29: rgb 5-6-5 bits per component
Bgr16 = 30: reverse rgb 5-6-5 bits per component
Rgb15 = 31: rgb 5-5-5 bits per component
Bgr15 = 32: reverse rgb 5-5-5 bits per component
Uyvp = 33: packed 10-bit 4:2:2 YUV (U0-Y0-V0-Y1 U2-Y2-V2-Y3 U4 ...)
A420 = 34: planar 4:4:2:0 AYUV
Rgb8p = 35: 8-bit paletted RGB
Yuv9 = 36: planar 4:1:0 YUV
Yvu9 = 37: planar 4:1:0 YUV (like YUV9 but UV planes swapped)
Iyu1 = 38: packed 4:1:1 YUV (Cb-Y0-Y1-Cr-Y2-Y3 ...)
Argb64 = 39: rgb with alpha channel first, 16 bits (native endianness) per channel
Ayuv64 = 40: packed 4:4:4 YUV with alpha channel, 16 bits (native endianness) per channel (A0-Y0-U0-V0 ...)
R210 = 41: packed 4:4:4 RGB, 10 bits per channel
I42010be = 42: planar 4:2:0 YUV, 10 bits per channel
I42010le = 43: planar 4:2:0 YUV, 10 bits per channel
I42210be = 44: planar 4:2:2 YUV, 10 bits per channel
I42210le = 45: planar 4:2:2 YUV, 10 bits per channel
Y44410be = 46: planar 4:4:4 YUV, 10 bits per channel (Since: 1.2)
Y44410le = 47: planar 4:4:4 YUV, 10 bits per channel (Since: 1.2)
Gbr = 48: planar 4:4:4 RGB, 8 bits per channel (Since: 1.2)
Gbr10be = 49: planar 4:4:4 RGB, 10 bits per channel (Since: 1.2)
Gbr10le = 50: planar 4:4:4 RGB, 10 bits per channel (Since: 1.2)
Nv16 = 51: planar 4:2:2 YUV with interleaved UV plane (Since: 1.2)
Nv24 = 52: planar 4:4:4 YUV with interleaved UV plane (Since: 1.2)
Nv1264z32 = 53: NV12 with 64x32 tiling in zigzag pattern (Since: 1.4)
A42010be = 54: planar 4:4:2:0 YUV, 10 bits per channel (Since: 1.6)
A42010le = 55: planar 4:4:2:0 YUV, 10 bits per channel (Since: 1.6)
A42210be = 56: planar 4:4:2:2 YUV, 10 bits per channel (Since: 1.6)
A42210le = 57: planar 4:4:2:2 YUV, 10 bits per channel (Since: 1.6)
A44410be = 58: planar 4:4:4:4 YUV, 10 bits per channel (Since: 1.6)
A44410le = 59: planar 4:4:4:4 YUV, 10 bits per channel (Since: 1.6)
Nv61 = 60: planar 4:2:2 YUV with interleaved VU plane (Since: 1.6)
P01010be = 61: planar 4:2:0 YUV with interleaved UV plane, 10 bits per channel (Since: 1.10)
P01010le = 62: planar 4:2:0 YUV with interleaved UV plane, 10 bits per channel (Since: 1.10)
Iyu2 = 63: packed 4:4:4 YUV (U-Y-V ...) (Since: 1.10)
Vyuy = 64: packed 4:2:2 YUV (V0-Y0-U0-Y1 V2-Y2-U2-Y3 V4 ...)
Gbra = 65: planar 4:4:4:4 ARGB, 8 bits per channel (Since: 1.12)
Gbra10be = 66: planar 4:4:4:4 ARGB, 10 bits per channel (Since: 1.12)
Gbra10le = 67: planar 4:4:4:4 ARGB, 10 bits per channel (Since: 1.12)
Gbr12be = 68: planar 4:4:4 RGB, 12 bits per channel (Since: 1.12)
Gbr12le = 69: planar 4:4:4 RGB, 12 bits per channel (Since: 1.12)
Gbra12be = 70: planar 4:4:4:4 ARGB, 12 bits per channel (Since: 1.12)
Gbra12le = 71: planar 4:4:4:4 ARGB, 12 bits per channel (Since: 1.12)
I42012be = 72: planar 4:2:0 YUV, 12 bits per channel (Since: 1.12)
I42012le = 73: planar 4:2:0 YUV, 12 bits per channel (Since: 1.12)
I42212be = 74: planar 4:2:2 YUV, 12 bits per channel (Since: 1.12)
I42212le = 75: planar 4:2:2 YUV, 12 bits per channel (Since: 1.12)
Y44412be = 76: planar 4:4:4 YUV, 12 bits per channel (Since: 1.12)
Y44412le = 77: planar 4:4:4 YUV, 12 bits per channel (Since: 1.12)
Gray10Le32 = 78: 10-bit grayscale, packed into 32bit words (2 bits padding) (Since: 1.14)
Nv1210le32 = 79: 10-bit variant of @GST_VIDEO_FORMAT_NV12, packed into 32bit words (MSB 2 bits padding) (Since: 1.14)
Nv1610le32 = 80: 10-bit variant of @GST_VIDEO_FORMAT_NV16, packed into 32bit words (MSB 2 bits padding) (Since: 1.14)
Nv1210le40 = 81: Fully packed variant of NV12_10LE32 (Since: 1.16)
Y210 = 82: packed 4:2:2 YUV, 10 bits per channel (Since: 1.16)
Y410 = 83: packed 4:4:4 YUV, 10 bits per channel (A-V-Y-U ...) (Since: 1.16)
Vuya = 84: packed 4:4:4 YUV with alpha channel (V0-U0-Y0-A0 ...) (Since: 1.16)
Bgr10a2Le = 85: packed 4:4:4 RGB with alpha channel (B-G-R-A), 10 bits for R/G/B channel and MSB 2 bits for alpha channel (Since: 1.16)
Rgb10a2Le = 86: packed 4:4:4 RGB with alpha channel (R-G-B-A), 10 bits for R/G/B channel and MSB 2 bits for alpha channel (Since: 1.18)
Y44416be = 87: planar 4:4:4 YUV, 16 bits per channel (Since: 1.18)
Y44416le = 88: planar 4:4:4 YUV, 16 bits per channel (Since: 1.18)
P016Be = 89: planar 4:2:0 YUV with interleaved UV plane, 16 bits per channel (Since: 1.18)
P016Le = 90: planar 4:2:0 YUV with interleaved UV plane, 16 bits per channel (Since: 1.18)
P012Be = 91: planar 4:2:0 YUV with interleaved UV plane, 12 bits per channel (Since: 1.18)
P012Le = 92: planar 4:2:0 YUV with interleaved UV plane, 12 bits per channel (Since: 1.18)
Y212Be = 93: packed 4:2:2 YUV, 12 bits per channel (Y-U-Y-V) (Since: 1.18)
Y212Le = 94: packed 4:2:2 YUV, 12 bits per channel (Y-U-Y-V) (Since: 1.18)
Y412Be = 95: packed 4:4:4:4 YUV, 12 bits per channel (U-Y-V-A ...) (Since: 1.18)
Y412Le = 96: packed 4:4:4:4 YUV, 12 bits per channel (U-Y-V-A ...) (Since: 1.18)
Nv124l4 = 97: NV12 with 4x4 tiles in linear order.
Nv1232l32 = 98: NV12 with 32x32 tiles in linear order.
Rgbp = 99: Planar 4:4:4 RGB, R-G-B order
Bgrp = 100: Planar 4:4:4 RGB, B-G-R order
Av12 = 101: Planar 4:2:0 YUV with interleaved UV plane with alpha as 3rd plane.
Argb64Le = 102: RGB with alpha channel first, 16 bits (little endian) per channel.
Argb64Be = 103: RGB with alpha channel first, 16 bits (big endian) per channel.
Rgba64Le = 104: RGB with alpha channel last, 16 bits (little endian) per channel.
Rgba64Be = 105: RGB with alpha channel last, 16 bits (big endian) per channel.
Bgra64Le = 106: Reverse RGB with alpha channel last, 16 bits (little endian) per channel.
Bgra64Be = 107: Reverse RGB with alpha channel last, 16 bits (big endian) per channel.
Abgr64Le = 108: Reverse RGB with alpha channel first, 16 bits (little endian) per channel.
Abgr64Be = 109: Reverse RGB with alpha channel first, 16 bits (big endian) per channel.
Nv1216l32s = 110: NV12 with 16x32 Y tiles and 16x16 UV tiles.
Nv128l128 = 111: NV12 with 8x128 tiles in linear order.
Nv1210be8l128 = 112: NV12 10-bit big endian with 8x128 tiles in linear order.
Nv1210le404l4 = 113: @GST_VIDEO_FORMAT_NV12_10LE40 with 4x4 pixel tiles (5 bytes per tile row). This format is produced by Verisilicon/Hantro decoders.
DmaDrm = 114: @GST_VIDEO_FORMAT_DMA_DRM represents the DMA DRM special format. It's only used with memory:DMABuf #GstCapsFeatures, where an extra parameter (drm-format) is required to define the image format and its ...
Mt2110t = 115: Mediatek 10-bit NV12 little endian with 16x32 tiles in linear order, tile 2 bits.
Mt2110r = 116: Mediatek 10-bit NV12 little endian with 16x32 tiles in linear order, raster 2 bits.
A422 = 117: planar 4:4:2:2 YUV, 8 bits per channel
A444 = 118: planar 4:4:4:4 YUV, 8 bits per channel
A44412le = 119: planar 4:4:4:4 YUV, 12 bits per channel
A44412be = 120: planar 4:4:4:4 YUV, 12 bits per channel
A42212le = 121: planar 4:4:2:2 YUV, 12 bits per channel
A42212be = 122: planar 4:4:2:2 YUV, 12 bits per channel
A42012le = 123: planar 4:4:2:0 YUV, 12 bits per channel
A42012be = 124: planar 4:4:2:0 YUV, 12 bits per channel
A44416le = 125: planar 4:4:4:4 YUV, 16 bits per channel
A44416be = 126: planar 4:4:4:4 YUV, 16 bits per channel
A42216le = 127: planar 4:4:2:2 YUV, 16 bits per channel
A42216be = 128: planar 4:4:2:2 YUV, 16 bits per channel
A42016le = 129: planar 4:4:2:0 YUV, 16 bits per channel
A42016be = 130: planar 4:4:2:0 YUV, 16 bits per channel
Gbr16le = 131: planar 4:4:4 RGB, 16 bits per channel
Gbr16be = 132: planar 4:4:4 RGB, 16 bits per channel
Rbga = 133: packed RGB with alpha, 8 bits per channel

The different video flags that a format info can have.

Yuv = 1: The video format is YUV, components are numbered 0=Y, 1=U, 2=V.
Rgb = 2: The video format is RGB, components are numbered 0=R, 1=G, 2=B.
Gray = 4: The video is gray, there is one gray component with index 0.
Alpha = 8: The video format has an alpha component with index 3.
Le = 16: The video format has data stored in little endianness.
Palette = 32: The video format has a palette. The palette is stored in the second plane and indexes are stored in the first plane.
Complex = 64: The video format has a complex layout that can't be described with the usual information in the #GstVideoFormatInfo.
Unpack = 128: This format can be used in a #GstVideoFormatUnpack and #GstVideoFormatPack function.
Tiled = 256: The format is tiled, there is tiling information in the last plane.
Subtiles = 512: The tile size varies per plane according to the subsampling.

Extra video frame flags

None = 0: no flags
Interlaced = 1: The video frame is interlaced. In mixed interlace-mode, this flag specifies whether the frame is interlaced or progressive.
Tff = 2: The video frame has the top field first
Rff = 4: The video frame has the repeat flag
Onefield = 8: The video frame has one field
MultipleView = 16: The video contains one or more non-mono views
FirstInBundle = 32: The video frame is the first in a set of corresponding views provided as sequential frames.
TopField = 10: The video frame has the top field only. This is the same as GST_VIDEO_FRAME_FLAG_TFF | GST_VIDEO_FRAME_FLAG_ONEFIELD (Since: 1.16).
BottomField = 8: The video frame has the bottom field only. This is the same as GST_VIDEO_FRAME_FLAG_ONEFIELD (GST_VIDEO_FRAME_FLAG_TFF flag unset) (Since: 1.16).

Additional mapping flags for [gstvideo.video_frame.VideoFrame.map].

NoRef = 65536: Don't take another reference of the buffer and store it in the GstVideoFrame. This makes sure that the buffer stays writable while the frame is mapped, but requires that the buffer reference stays ...
Last = 16777216: Offset to define more flags

The orientation of the GL texture.

NormalYNormal = 0: Top line first in memory, left row first
NormalYFlip = 1: Bottom line first in memory, left row first
FlipYNormal = 2: Top line first in memory, right row first
FlipYFlip = 3: Bottom line first in memory, right row first

The GL texture type.

Luminance = 0: Luminance texture, GL_LUMINANCE
LuminanceAlpha = 1: Luminance-alpha texture, GL_LUMINANCE_ALPHA
Rgb16 = 2: RGB 565 texture, GL_RGB
Rgb = 3: RGB texture, GL_RGB
Rgba = 4: RGBA texture, GL_RGBA
R = 5: R texture, GL_RED_EXT
Rg = 6: RG texture, GL_RG_EXT

Different gamma conversion modes

None = 0: disable gamma handling
Remap = 1: convert between input and output gamma

The possible values of the #GstVideoInterlaceMode describing the interlace mode of the stream.

Progressive = 0: all frames are progressive
Interleaved = 1: 2 fields are interleaved in one video frame. Extra buffer flags describe the field order.
Mixed = 2: frames contain both interlaced and progressive video, the buffer flags describe the frame and fields.
Fields = 3: 2 fields are stored in one buffer, use the frame ID to get access to the required field. For multiview (the 'views' property > 1) the fields of view N can be found at frame ID (N * 2) and (N * 2) + 1...
Alternate = 4: 1 field is stored in one buffer, @GST_VIDEO_BUFFER_FLAG_TF or @GST_VIDEO_BUFFER_FLAG_BF indicates if the buffer is carrying the top or bottom field, respectively. The top and bottom buffers must alternate ...

Different color matrix conversion modes

Full = 0do conversion between color matrices
InputOnly = 1use the input color matrix to convert to and from R'G'B'
OutputOnly = 2use the output color matrix to convert to and from R'G'B'
None = 3disable color matrix conversion.

GstVideoMultiviewFlags are used to indicate extra properties of a stereo/multiview stream beyond the frame layout and buffer mapping that is conveyed in the #GstVideoMultiviewMode.

None = 0No flags
RightViewFirst = 1For stereo streams, the normal arrangement of left and right views is reversed.
LeftFlipped = 2The left view is vertically mirrored.
LeftFlopped = 4The left view is horizontally mirrored.
RightFlipped = 8The right view is vertically mirrored.
RightFlopped = 16The right view is horizontally mirrored.
HalfAspect = 16384For frame-packed multiview modes, indicates that the individual views have been encoded with half the true width or height and should be scaled back up for display. This flag is used for overriding...
MixedMono = 32768The video stream contains both mono and multiview portions, signalled on each buffer by the absence or presence of the @GST_VIDEO_BUFFER_FLAG_MULTIPLE_VIEW buffer flag.

#GstVideoMultiviewFramePacking represents the subset of #GstVideoMultiviewMode values that can be applied to any video frame without needing extra metadata. It can be used by elements that provide a property to override the multiview interpretation of a video stream when the video doesn't contain any markers.

This enum is used (for example) on playbin, to re-interpret a played video stream as a stereoscopic video. The individual enum values are equivalent to and have the same value as the matching #GstVideoMultiviewMode.

None = - 1A special value indicating no frame packing info.
Mono = 0All frames are monoscopic.
Left = 1All frames represent a left-eye view.
Right = 2All frames represent a right-eye view.
SideBySide = 3Left and right eye views are provided in the left and right half of the frame respectively.
SideBySideQuincunx = 4Left and right eye views are provided in the left and right half of the frame, but have been sampled using quincunx method, with half-pixel offset between the 2 views.
ColumnInterleaved = 5Alternating vertical columns of pixels represent the left and right eye view respectively.
RowInterleaved = 6Alternating horizontal rows of pixels represent the left and right eye view respectively.
TopBottom = 7The top half of the frame contains the left eye, and the bottom half the right eye.
Checkerboard = 8Pixels are arranged with alternating pixels representing left and right eye views in a checkerboard fashion.

All possible stereoscopic 3D and multiview representations. In conjunction with #GstVideoMultiviewFlags, describes how multiview content is being transported in the stream.

None = - 1A special value indicating no multiview information. Used in GstVideoInfo and other places to indicate that no specific multiview handling has been requested or provided. This value is never carrie...
Mono = 0All frames are monoscopic.
Left = 1All frames represent a left-eye view.
Right = 2All frames represent a right-eye view.
SideBySide = 3Left and right eye views are provided in the left and right half of the frame respectively.
SideBySideQuincunx = 4Left and right eye views are provided in the left and right half of the frame, but have been sampled using quincunx method, with half-pixel offset between the 2 views.
ColumnInterleaved = 5Alternating vertical columns of pixels represent the left and right eye view respectively.
RowInterleaved = 6Alternating horizontal rows of pixels represent the left and right eye view respectively.
TopBottom = 7The top half of the frame contains the left eye, and the bottom half the right eye.
Checkerboard = 8Pixels are arranged with alternating pixels representing left and right eye views in a checkerboard fashion.
FrameByFrame = 32Left and right eye views are provided in separate frames alternately.
MultiviewFrameByFrame = 33Multiple independent views are provided in separate frames in sequence. This method only applies to raw video buffers at the moment. Specific view identification is via the `GstVideoMultiviewMeta` ...
Separated = 34Multiple views are provided as separate #GstMemory framebuffers attached to each #GstBuffer, described by the `GstVideoMultiviewMeta` and #GstVideoMeta(s)

The different video orientation methods.

Identity = 0Identity (no rotation)
_90r = 1Rotate clockwise 90 degrees
_180 = 2Rotate 180 degrees
_90l = 3Rotate counter-clockwise 90 degrees
Horiz = 4Flip horizontally
Vert = 5Flip vertically
UlLr = 6Flip across upper left/lower right diagonal
UrLl = 7Flip across upper right/lower left diagonal
Auto = 8Select flip method based on image-orientation tag
Custom = 9Current status depends on plugin internal setup

Overlay format flags.

None = 0no flags
PremultipliedAlpha = 1RGB are premultiplied by A/255.
GlobalAlpha = 2a global-alpha value != 1 is set.

The different flags that can be used when packing and unpacking.

None = 0No flag
TruncateRange = 1When the source has a smaller depth than the target format, set the least significant bits of the target to 0. This is likely slightly faster but less accurate. When this flag is not specified, the...
Interlaced = 2The source is interlaced. The unpacked format will be interlaced as well with each line containing information from alternating fields. (Since: 1.2)

Different primaries conversion modes

None = 0disable conversion between primaries
MergeOnly = 1do conversion between primaries only when it can be merged with color matrix conversion.
Fast = 2fast conversion between primaries

Different resampler flags.

None = 0no flags
HalfTaps = 1when no taps are given, half the number of calculated taps. This can be used when making scalers for the different fields of an interlaced picture. Since: 1.10

Different subsampling and upsampling methods

Nearest = 0Duplicates the samples when upsampling and drops when downsampling
Linear = 1Uses linear interpolation to reconstruct missing samples and averaging to downsample
Cubic = 2Uses cubic interpolation
Sinc = 3Uses sinc interpolation
Lanczos = 4Uses lanczos interpolation

Different scale flags.

None = 0no flags
Interlaced = 1Set up a scaler for interlaced content

Enum value describing the available tiling modes.

Unknown = 0Unknown or unset tile mode
Zflipz2x2 = 65536Every four adjacent blocks (two horizontally and two vertically) are grouped together and are located in memory in Z or flipped Z order. In case of odd rows, the last row of blocks is arranged in l...
Linear = 131072Tiles are in row order.

Enum value describing the most common tiling types.

Indexed = 0Tiles are indexed. Use gst_video_tile_get_index() to retrieve the tile at the requested coordinates.

Flags related to the time code information. For drop frame, only 30000/1001 and 60000/1001 frame rates are supported.

None = 0No flags
DropFrame = 1Whether we have drop frame rate
Interlaced = 2Whether we have interlaced video

The video transfer function defines the formula for converting between non-linear RGB (R'G'B') and linear RGB.

Unknown = 0unknown transfer function
Gamma10 = 1linear RGB, gamma 1.0 curve
Gamma18 = 2Gamma 1.8 curve
Gamma20 = 3Gamma 2.0 curve
Gamma22 = 4Gamma 2.2 curve
Bt709 = 5Gamma 2.2 curve with a linear segment in the lower range, also ITU-R BT470M / ITU-R BT1700 625 PAL & SECAM / ITU-R BT1361
Smpte240m = 6Gamma 2.2 curve with a linear segment in the lower range
Srgb = 7Gamma 2.4 curve with a linear segment in the lower range. IEC 61966-2-1 (sRGB or sYCC)
Gamma28 = 8Gamma 2.8 curve, also ITU-R BT470BG
Log100 = 9Logarithmic transfer characteristic 100:1 range
Log316 = 10Logarithmic transfer characteristic 316.22777:1 range (100 * sqrt(10) : 1)
Bt202012 = 11Gamma 2.2 curve with a linear segment in the lower range. Used for BT.2020 with 12 bits per component. Since: 1.6
Adobergb = 12Gamma 2.19921875. Since: 1.8
Bt202010 = 13Rec. ITU-R BT.2020-2 with 10 bits per component. (Functionally the same as GST_VIDEO_TRANSFER_BT709 and GST_VIDEO_TRANSFER_BT601.) Since: 1.18
Smpte2084 = 14SMPTE ST 2084 for 10, 12, 14, and 16-bit systems. Known as perceptual quantization (PQ). Since: 1.18
AribStdB67 = 15Association of Radio Industries and Businesses (ARIB) STD-B67 and Rec. ITU-R BT.2100-1 hybrid log-gamma (HLG) system. Since: 1.18
Bt601 = 16also known as SMPTE170M / ITU-R BT1358 525 or 625 / ITU-R BT1700 NTSC

Return values for #GstVideoVBIParser

Done = 0No line was provided, or no more Ancillary data was found.
Ok = 1A #GstVideoAncillary was found.
Error = 2An error occurred

#GstMeta for carrying SMPTE-291M Ancillary data. Note that all the ADF fields (@DID to @checksum) are 10-bit values with parity/non-parity high-bits set.

Fields
GstMeta metaParent #GstMeta
GstAncillaryMetaField fieldThe field where the ancillary data is located
gboolean cNotYChannelWhich channel (luminance or chrominance) the ancillary data is located in. 0 if content is SD or stored in the luminance channel (default). 1 if HD and stored in the chrominance channel.
ushort lineThe line on which the ancillary data is located (max 11-bit). There are two special values: 0x7ff if no line is specified (default), 0x7fe to specify the ancillary data is on any valid line before a...
ushort offsetThe location of the ancillary data packet in an SDI raster relative to the start of active video (max 12-bit). A value of 0 means the ADF of the ancillary packet starts immediately following SAV. Th...
ushort DIDData Identifier
ushort SDIDBlockNumberSecondary Data identification (if type 2) or Data block number (if type 1)
ushort dataCountThe amount of user data
ushort * dataThe User data
ushort checksumThe checksum of the ADF

This interface is implemented by elements which can perform some color balance operation on video frames they process. For example, modifying the brightness, contrast, hue or saturation.

Example elements are 'xvimagesink' and 'colorbalance'

The #GstColorBalanceChannel object represents a parameter for modifying the color balance implemented by an element providing the #GstColorBalance interface. For example, Hue or Saturation.

Fields
GObject parent
char * labelA string containing a descriptive name for this channel
int minValueThe minimum valid value for this channel.
int maxValueThe maximum valid value for this channel.
void *[4] GstReserved

Color-balance channel class.

Fields
GObjectClass parentthe parent class
void function(GstColorBalanceChannel * channel, int value) valueChangeddefault handler for value changed notification
void *[4] GstReserved

Color-balance interface.

Fields
GTypeInterface ifacethe parent interface
const(GList) * function(GstColorBalance * balance) listChannelslist handled channels
void function(GstColorBalance * balance, GstColorBalanceChannel * channel, int value) setValueset a channel value
int function(GstColorBalance * balance, GstColorBalanceChannel * channel) getValueget a channel value
GstColorBalanceType function(GstColorBalance * balance) getBalanceTypeimplementation type
void function(GstColorBalance * balance, GstColorBalanceChannel * channel, int value) valueChangeddefault handler for value changed notification
void *[4] GstReserved

The Navigation interface is used for creating and injecting navigation related events such as mouse button presses, cursor motion and key presses. The associated library also provides methods for parsing received events, and for sending and receiving navigation related bus events. One main usecase is DVD menu navigation.

The main parts of the API are:

  • The GstNavigation interface, implemented by elements which provide an

application with the ability to create and inject navigation events into the pipeline.

  • GstNavigation event handling API. GstNavigation events are created in

response to calls on a GstNavigation interface implementation, and sent in the pipeline. Upstream elements can use the navigation event API functions to parse the contents of received messages.

  • GstNavigation message handling API. GstNavigation messages may be sent on

the message bus to inform applications of navigation related changes in the pipeline, such as the mouse moving over a clickable region, or the set of available angles changing.

The GstNavigation message functions provide functions for creating and parsing custom bus messages for signaling GstNavigation changes.

Navigation interface.

Fields
GTypeInterface ifacethe parent interface
void function(GstNavigation * navigation, GstStructure * structure) sendEventsending a navigation event
void function(GstNavigation * navigation, GstEvent * event) sendEventSimplesending a navigation event (Since: 1.22)

Active Format Description (AFD)

For details, see Table 6.14 Active Format in:

ATSC Digital Television Standard: Part 4 – MPEG-2 Video System Characteristics

https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf

and Active Format Description in Complete list of AFD codes

https://en.wikipedia.org/wiki/Active_Format_Description#Complete_list_of_AFD_codes

and SMPTE ST2016-1

Fields
GstMeta metaparent #GstMeta
ubyte field0 for progressive or field 1, 1 for field 2
GstVideoAFDSpec spec#GstVideoAFDSpec that applies to @afd
GstVideoAFDValue afd#GstVideoAFDValue AFD value

Extra buffer metadata for performing an affine transformation using a 4x4 matrix. The transformation matrix can be composed with [gstvideo.video_affine_transformation_meta.VideoAffineTransformationMeta.applyMatrix].

The vertices operated on are all in the range 0 to 1, not in Normalized Device Coordinates (-1 to +1). Transforming points in this space are assumed to have an origin at (0.5, 0.5, 0.5) in a left-handed coordinate system with the x-axis moving horizontally (positive values to the right), the y-axis moving vertically (positive values up the screen) and the z-axis perpendicular to the screen (positive values into the screen).

Fields
GstMeta metaparent #GstMeta
float[16] matrixthe column-major 4x4 transformation matrix

VideoAggregator can accept AYUV, ARGB and BGRA video streams. For each of the requested sink pads it will compare the incoming geometry and framerate to define the output parameters. Output video frames will have the geometry of the biggest incoming video stream and the framerate of the fastest incoming one.

VideoAggregator will do colorspace conversion.

The Z-order for each input stream can be configured on the #GstVideoAggregatorPad.

Fields
GstAggregator aggregator
GstVideoInfo infoThe #GstVideoInfo representing the currently set srcpad caps.
void *[20] GstReserved
Fields
GstAggregatorClass parentClass
GstCaps * function(GstVideoAggregator * videoaggregator, GstCaps * caps) updateCapsOptional. Lets subclasses update the #GstCaps representing the src pad caps before usage. Return null to indicate failure.
GstFlowReturn function(GstVideoAggregator * videoaggregator, GstBuffer * outbuffer) aggregateFramesLets subclasses aggregate frames that are ready. Subclasses should iterate the GstElement.sinkpads and use the already mapped #GstVideoFrame from [gstvideo.videoaggregatorpad.VideoAggregatorPad.get...
GstFlowReturn function(GstVideoAggregator * videoaggregator, GstBuffer * * outbuffer) createOutputBufferOptional. Lets subclasses provide a #GstBuffer to be used as @outbuffer of the #aggregate_frames vmethod.
void function(GstVideoAggregator * vagg, GstCaps * downstreamCaps, GstVideoInfo * bestInfo, gboolean * atLeastOneAlpha) findBestFormatOptional. Lets subclasses decide of the best common format to use.
void *[20] GstReserved

An implementation of GstPad that can be used with #GstVideoAggregator.

See #GstVideoAggregator for more details.

Fields
void *[4] GstReserved
Fields
void function(GstVideoAggregatorConvertPad * pad, GstVideoAggregator * agg, GstVideoInfo * conversionInfo) createConversionInfo
void *[4] GstReserved
Fields
GstVideoInfo infoThe #GstVideoInfo currently set on the pad
void *[4] GstReserved
Fields
void function(GstVideoAggregatorPad * pad) updateConversionInfoCalled when either the input or output formats have changed.
gboolean function(GstVideoAggregatorPad * pad, GstVideoAggregator * videoaggregator, GstBuffer * buffer, GstVideoFrame * preparedFrame) prepareFramePrepares the frame from the pad buffer and sets it on prepared_frame. Implementations should always return TRUE. Returning FALSE will cease iteration over subsequent pads.
void function(GstVideoAggregatorPad * pad, GstVideoAggregator * videoaggregator, GstVideoFrame * preparedFrame) cleanFrameclean the frame previously prepared in prepare_frame
void function(GstVideoAggregatorPad * pad, GstVideoAggregator * videoaggregator, GstBuffer * buffer, GstVideoFrame * preparedFrame) prepareFrameStart
void function(GstVideoAggregatorPad * pad, GstVideoAggregator * videoaggregator, GstVideoFrame * preparedFrame) prepareFrameFinish
void *[18] GstReserved

An implementation of GstPad that can be used with #GstVideoAggregator.

See #GstVideoAggregator for more details.

Fields

Extra alignment parameters for the memory of video buffers. This structure is usually used to configure the bufferpool if it supports the #GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT.

Fields
uint paddingTopextra pixels on the top
uint paddingBottomextra pixels on the bottom
uint paddingLeftextra pixels on the left side
uint paddingRightextra pixels on the right side
uint[4] strideAlignarray with extra alignment requirements for the strides

Video Ancillary data, according to SMPTE-291M specification.

Note that the contents of the data are always stored as 8-bit data (i.e. do not contain the parity check bits).

Fields
ubyte DIDThe Data Identifier
ubyte SDIDBlockNumberThe Secondary Data Identifier (if type 2) or the Data Block Number (if type 1)
ubyte dataCountThe amount of data (in bytes) in @data (max 255 bytes)
ubyte * dataThe user data content of the Ancillary packet. Does not contain the ADF, DID, SDID nor CS.
void *[4] GstReserved

Bar data should be included in video user data whenever the rectangular picture area containing useful information does not extend to the full height or width of the coded frame and AFD alone is insufficient to describe the extent of the image.

Note

either vertical or horizontal bars are specified, but not both.

For more details, see:

https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf

and SMPTE ST2016-1

Fields
GstMeta metaparent #GstMeta
ubyte field0 for progressive or field 1, 1 for field 2
gboolean isLetterboxif true then bar data specifies letterbox, otherwise pillarbox
uint barData1If @is_letterbox is true, then the value specifies the last line of a horizontal letterbox bar area at top of reconstructed frame. Otherwise, it specifies the last horizontal luminance sample of a ...
uint barData2If @is_letterbox is true, then the value specifies the first line of a horizontal letterbox bar area at bottom of reconstructed frame. Otherwise, it specifies the first horizontal luminance sample ...

Extra buffer metadata providing Closed Caption.

Fields
GstMeta metaparent #GstMeta
GstVideoCaptionType captionTypeThe type of Closed Caption contained in the meta.
ubyte * dataThe Closed Caption data.
size_t sizeThe size in bytes of @data

This meta is primarily for internal use in GStreamer elements to support VP8/VP9 transparent video stored into WebM or Matroska containers, or transparent static AV1 images. Nothing prevents you from using this meta for custom purposes, but it generally can't be used to easily add support for alpha channels to CODECs or formats that don't support that out of the box.

Fields
GstMeta metaparent #GstMeta
GstBuffer * bufferthe encoded alpha frame

A #GstVideoCodecFrame represents a video frame both in raw and encoded form.

Fields
int refCount
uint flags
uint systemFrameNumberUnique identifier for the frame. Use this if you need to get hold of the frame later (like when data is being decoded). Typical usage in decoders is to set this on the opaque value provided to the ...
uint decodeFrameNumber
uint presentationFrameNumber
GstClockTime dtsDecoding timestamp
GstClockTime ptsPresentation timestamp
GstClockTime durationDuration of the frame
int distanceFromSyncDistance in frames from the last synchronization point.
GstBuffer * inputBufferthe input #GstBuffer that created this frame. The buffer is owned by the frame and references to the frame instead of the buffer should be kept.
GstBuffer * outputBufferthe output #GstBuffer. Implementations should set this either directly, or by using the [gstvideo.videodecoder.VideoDecoder.allocateOutputFrame] or [gstvideo.videodecoder.VideoDecoder.allocateOutpu...
GstClockTime deadlineRunning time when the frame will be used.
GList * events
void * userData
GDestroyNotify userDataDestroyNotify
AbidataType abidata

Structure representing the state of an incoming or outgoing video stream for encoders and decoders.

Decoders and encoders will receive such a state through their respective @set_format vmethods.

Decoders and encoders can set the downstream state, by using the [gstvideo.video_decoder.VideoDecoder.setOutputState] or [gstvideo.video_encoder.VideoEncoder.setOutputState] methods.

Fields
int refCount
GstVideoInfo infoThe #GstVideoInfo describing the stream
GstCaps * capsThe #GstCaps used in the caps negotiation of the pad.
GstBuffer * codecDataa #GstBuffer corresponding to the 'codec_data' field of a stream, or NULL.
GstCaps * allocationCapsThe #GstCaps for allocation query and pool negotiation. Since: 1.10
GstVideoMasteringDisplayInfo * masteringDisplayInfoMastering display color volume information (HDR metadata) for the stream.
GstVideoContentLightLevel * contentLightLevelContent light level information for the stream.
void *[17] padding

Structure describing the chromaticity coordinates of an RGB system. These values can be used to construct a matrix to transform RGB to and from the XYZ colorspace.

Fields
GstVideoColorPrimaries primariesa #GstVideoColorPrimaries
double Wxreference white x coordinate
double Wyreference white y coordinate
double Rxred x coordinate
double Ryred y coordinate
double Gxgreen x coordinate
double Gygreen y coordinate
double Bxblue x coordinate
double Byblue y coordinate

Structure describing the color info.

Fields
GstVideoColorRange rangethe color range. This is the valid range for the samples. It is used to convert the samples to Y'PbPr values.
GstVideoColorMatrix matrixthe color matrix. Used to convert between Y'PbPr and non-linear RGB (R'G'B')
GstVideoTransferFunction transferthe transfer function. used to convert between R'G'B' and RGB
GstVideoColorPrimaries primariescolor primaries. used to convert between R'G'B' and CIE XYZ

Content light level information specified in CEA-861.3, Appendix A.

Fields
ushort maxContentLightLevelthe maximum content light level (abbreviated to MaxCLL) in candelas per square meter (cd/m^2 and nit)
ushort maxFrameAverageLightLevelthe maximum frame average light level (abbreviated to MaxFLL) in candelas per square meter (cd/m^2 and nit)
void *[4] GstReserved

Extra buffer metadata describing image cropping.

Fields
GstMeta metaparent #GstMeta
uint xthe horizontal offset
uint ythe vertical offset
uint widththe cropped width
uint heightthe cropped height

This base class is for video decoders turning encoded data into raw video frames.

The GstVideoDecoder base class and derived subclasses should cooperate as follows:

Configuration

  • Initially, GstVideoDecoder calls @start when the decoder element

is activated, which allows the subclass to perform any global setup.

  • GstVideoDecoder calls @set_format to inform the subclass of caps

describing input video data that it is about to receive, including possibly configuration data. While unlikely, it might be called more than once if changing input parameters requires reconfiguration.

  • Incoming data buffers are processed as needed, described in Data

Processing below.

  • GstVideoDecoder calls @stop at end of all processing.

Data processing

  • The base class gathers input data, and optionally allows subclass

to parse this into subsequently manageable chunks, typically corresponding to and referred to as 'frames'.

  • Each input frame is provided in turn to the subclass' @handle_frame

callback.

  • When the subclass enables the subframe mode with [gstvideo.video_decoder.VideoDecoder.setSubframeMode],

the base class will provide to the subclass the same input frame with different input buffers to the subclass @handle_frame callback. During this call, the subclass needs to take ownership of the input_buffer as @GstVideoCodecFrame.input_buffer will have been changed before the next subframe buffer is received. The subclass will call [gstvideo.video_decoder.VideoDecoder.haveLastSubframe] when a new input frame can be created by the base class. Every subframe will share the same @GstVideoCodecFrame.output_buffer to write the decoding result. The subclass is responsible to protect its access.

  • If codec processing results in decoded data, the subclass should call

@gst_video_decoder_finish_frame to have decoded data pushed downstream. In subframe mode the subclass should call @gst_video_decoder_finish_subframe until the last subframe where it should call @gst_video_decoder_finish_frame. The subclass can detect the last subframe using GST_VIDEO_BUFFER_FLAG_MARKER on buffers or using its own logic to collect the subframes. In case of decoding failure, the subclass must call @gst_video_decoder_drop_frame or @gst_video_decoder_drop_subframe, to allow the base class to do timestamp and offset tracking, and possibly to requeue the frame for a later attempt in the case of reverse playback.

Shutdown phase

  • The GstVideoDecoder class calls @stop to inform the subclass that data

parsing will be stopped.

Additional Notes

  • Seeking/Flushing
  • When the pipeline is seeked or otherwise flushed, the subclass is

informed via a call to its @reset callback, with the hard parameter set to true. This indicates the subclass should drop any internal data queues and timestamps and prepare for a fresh set of buffers to arrive for parsing and decoding.

  • End Of Stream
  • At end-of-stream, the subclass @parse function may be called some final

times with the at_eos parameter set to true, indicating that the element should not expect any more data to be arriving, and it should parse any remaining frames and call [gstvideo.video_decoder.VideoDecoder.haveFrame] if possible.

The subclass is responsible for providing pad template caps for source and sink pads. The pads need to be named "sink" and "src". It also needs to provide information about the output caps, when they are known. This may be when the base class calls the subclass' @set_format function, though it might be during decoding, before calling @gst_video_decoder_finish_frame. This is done via @gst_video_decoder_set_output_state

The subclass is also responsible for providing (presentation) timestamps (likely based on corresponding input ones). If that is not applicable or possible, the base class provides limited framerate based interpolation.

Similarly, the base class provides some limited (legacy) seeking support if specifically requested by the subclass, as full-fledged support should rather be left to upstream demuxer, parser or alike. This simple approach caters for seeking and duration reporting using estimated input bitrates. To enable it, a subclass should call @gst_video_decoder_set_estimate_rate to enable handling of incoming byte-streams.

The base class provides some support for reverse playback, in particular in case incoming data is not packetized or upstream does not provide fragments on keyframe boundaries. However, the subclass should then be prepared for the parsing and frame processing stage to occur separately (in normal forward processing, the latter immediately follows the former). The subclass also needs to ensure the parsing stage properly marks keyframes, unless it knows the upstream elements will do so properly for incoming data.

The bare minimum that a functional subclass needs to implement is:

  • Provide pad templates
  • Inform the base class of output caps via

@gst_video_decoder_set_output_state

  • Parse input data, if it is not considered packetized from upstream

Data will be provided to @parse which should invoke @gst_video_decoder_add_to_frame and @gst_video_decoder_have_frame to separate the data belonging to each video frame.

  • Accept data in @handle_frame and provide decoded results to

@gst_video_decoder_finish_frame, or call @gst_video_decoder_drop_frame.

Fields
GstElement element
GstPad * sinkpad
GstPad * srcpad
GRecMutex streamLock
GstSegment inputSegment
GstSegment outputSegment
void *[20] padding

Subclasses can override any of the available virtual methods or not, as needed. At minimum @handle_frame needs to be overridden, and likely @set_format as well. If non-packetized input is supported or expected, @parse needs to be overridden as well.

Fields
GstElementClass elementClass
gboolean function(GstVideoDecoder * decoder) openOptional. Called when the element changes to GST_STATE_READY. Allows opening external resources.
gboolean function(GstVideoDecoder * decoder) closeOptional. Called when the element changes to GST_STATE_NULL. Allows closing external resources.
gboolean function(GstVideoDecoder * decoder) startOptional. Called when the element starts processing. Allows opening external resources.
gboolean function(GstVideoDecoder * decoder) stopOptional. Called when the element stops processing. Allows closing external resources.
GstFlowReturn function(GstVideoDecoder * decoder, GstVideoCodecFrame * frame, GstAdapter * adapter, gboolean atEos) parseRequired for non-packetized input. Allows chopping incoming data into manageable units (frames) for subsequent decoding.
gboolean function(GstVideoDecoder * decoder, GstVideoCodecState * state) setFormatNotifies subclass of incoming data format (caps).
gboolean function(GstVideoDecoder * decoder, gboolean hard) resetOptional. Allows subclass (decoder) to perform post-seek semantics reset. Deprecated.
GstFlowReturn function(GstVideoDecoder * decoder) finishOptional. Called to request subclass to dispatch any pending remaining data at EOS. Sub-classes can refuse to decode new data after.
GstFlowReturn function(GstVideoDecoder * decoder, GstVideoCodecFrame * frame) handleFrameProvides input data frame to subclass. In subframe mode, the subclass needs to take ownership of @GstVideoCodecFrame.input_buffer as it will be modified by the base class on the next subframe buffe...
gboolean function(GstVideoDecoder * decoder, GstEvent * event) sinkEventOptional. Event handler on the sink pad. This function should return TRUE if the event was handled and should be discarded (i.e. not unref'ed). Subclasses should chain up to the parent implementati...
gboolean function(GstVideoDecoder * decoder, GstEvent * event) srcEventOptional. Event handler on the source pad. This function should return TRUE if the event was handled and should be discarded (i.e. not unref'ed). Subclasses should chain up to the parent implementa...
gboolean function(GstVideoDecoder * decoder) negotiateOptional. Negotiate with downstream and configure buffer pools, etc. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstVideoDecoder * decoder, GstQuery * query) decideAllocationOptional. Setup the allocation parameters for allocating output buffers. The passed in query contains the result of the downstream allocation query. Subclasses should chain up to the parent impleme...
gboolean function(GstVideoDecoder * decoder, GstQuery * query) proposeAllocationOptional. Propose buffer allocation parameters for upstream elements. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstVideoDecoder * decoder) flushOptional. Flush all remaining data from the decoder without pushing it downstream. Since: 1.2
gboolean function(GstVideoDecoder * decoder, GstQuery * query) sinkQueryOptional. Query handler on the sink pad. This function should return TRUE if the query could be performed. Subclasses should chain up to the parent implementation to invoke the default handler. Since: 1.4
gboolean function(GstVideoDecoder * decoder, GstQuery * query) srcQueryOptional. Query handler on the source pad. This function should return TRUE if the query could be performed. Subclasses should chain up to the parent implementation to invoke the default handler. Since: 1.4
GstCaps * function(GstVideoDecoder * decoder, GstCaps * filter) getcapsOptional. Allows for a custom sink getcaps implementation. If not implemented, default returns gst_video_decoder_proxy_getcaps applied to sink template caps.
GstFlowReturn function(GstVideoDecoder * decoder) drainOptional. Called to request subclass to decode any data it can at this point, but that more data may arrive after. (e.g. at segment end). Sub-classes should be prepared to handle new data afterwards.
gboolean function(GstVideoDecoder * decoder, GstVideoCodecFrame * frame, GstMeta * meta) transformMetaOptional. Transform the metadata on the input buffer to the output buffer. By default this method copies all meta without tags and meta with only the "video" tag. Subclasses can implement this method and return TRUE if the metadata is to be copied.
gboolean function(GstVideoDecoder * decoder, GstClockTime timestamp, GstClockTime duration) handleMissingData
void *[13] padding

The interface allows unified access to control flipping and rotation operations of video-sources or operators.

#GstVideoDirectionInterface interface.

Fields
GTypeInterface ifaceparent interface type.

GstVideoDither provides implementations of several dithering algorithms that can be applied to lines of video pixels to quantize and dither them.

This base class is for video encoders turning raw video into encoded video data.

GstVideoEncoder and subclass should cooperate as follows.

Configuration

  • Initially, GstVideoEncoder calls @start when the encoder element is activated, which allows the subclass to perform any global setup.

  • GstVideoEncoder calls @set_format to inform the subclass of the format of input video data that it is about to receive. The subclass should set up for encoding and configure the base class as appropriate (e.g. latency). While unlikely, it might be called more than once if changing input parameters requires reconfiguration. The base class will ensure that processing of the current configuration is finished.

  • GstVideoEncoder calls @stop at the end of all processing.

Data processing

  • The base class collects input data and metadata into a frame and hands this to the subclass' @handle_frame.

  • If codec processing results in encoded data, the subclass should call @gst_video_encoder_finish_frame to have the encoded data pushed downstream.

  • If implemented, the base class calls the subclass' @pre_push just prior to pushing, to allow subclasses to modify some metadata on the buffer. If it returns GST_FLOW_OK, the buffer is pushed downstream.

  • GstVideoEncoderClass will handle both srcpad and sinkpad events. Sink events will be passed to the subclass if the @event callback has been provided.

Shutdown phase

  • The GstVideoEncoder class calls @stop to inform the subclass that data parsing will be stopped.

Subclass is responsible for providing pad template caps for source and sink pads. The pads need to be named "sink" and "src". It should also be able to provide fixed src pad caps in @getcaps by the time it calls @gst_video_encoder_finish_frame.

Things that the subclass needs to take care of:

  • Provide pad templates
  • Provide source pad caps before pushing the first buffer
  • Accept data in @handle_frame and provide encoded results to @gst_video_encoder_finish_frame.

The #GstVideoEncoder:qos property will enable the Quality-of-Service features of the encoder which gather statistics about the real-time performance of the downstream elements. If enabled, subclasses can use [gstvideo.video_encoder.VideoEncoder.getMaxEncodeTime] to check if input frames are already late and drop them right away to give a chance to the pipeline to catch up.

Fields
GstElement element
GstPad * sinkpad
GstPad * srcpad
GRecMutex streamLock
GstSegment inputSegment
GstSegment outputSegment
void *[20] padding

Subclasses can override any of the available virtual methods or not, as needed. At minimum @handle_frame needs to be overridden, and @set_format and @get_caps are likely needed as well.

Fields
GstElementClass elementClass
gboolean function(GstVideoEncoder * encoder) openOptional. Called when the element changes to GST_STATE_READY. Allows opening external resources.
gboolean function(GstVideoEncoder * encoder) closeOptional. Called when the element changes to GST_STATE_NULL. Allows closing external resources.
gboolean function(GstVideoEncoder * encoder) startOptional. Called when the element starts processing. Allows opening external resources.
gboolean function(GstVideoEncoder * encoder) stopOptional. Called when the element stops processing. Allows closing external resources.
gboolean function(GstVideoEncoder * encoder, GstVideoCodecState * state) setFormatOptional. Notifies subclass of incoming data format. GstVideoCodecState fields have already been set according to provided caps.
GstFlowReturn function(GstVideoEncoder * encoder, GstVideoCodecFrame * frame) handleFrameProvides input frame to subclass.
gboolean function(GstVideoEncoder * encoder, gboolean hard) resetOptional. Allows subclass (encoder) to perform post-seek semantics reset. Deprecated.
GstFlowReturn function(GstVideoEncoder * encoder) finishOptional. Called to request subclass to dispatch any pending remaining data (e.g. at EOS).
GstFlowReturn function(GstVideoEncoder * encoder, GstVideoCodecFrame * frame) prePushOptional. Allows subclass to push frame downstream in whatever shape or form it deems appropriate. If not provided, provided encoded frame data is simply pushed downstream.
GstCaps * function(GstVideoEncoder * enc, GstCaps * filter) getcapsOptional. Allows for a custom sink getcaps implementation (e.g. for multichannel input specification). If not implemented, default returns gst_video_encoder_proxy_getcaps applied to sink template caps.
gboolean function(GstVideoEncoder * encoder, GstEvent * event) sinkEventOptional. Event handler on the sink pad. This function should return TRUE if the event was handled and should be discarded (i.e. not unref'ed). Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstVideoEncoder * encoder, GstEvent * event) srcEventOptional. Event handler on the source pad. This function should return TRUE if the event was handled and should be discarded (i.e. not unref'ed). Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstVideoEncoder * encoder) negotiateOptional. Negotiate with downstream and configure buffer pools, etc. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstVideoEncoder * encoder, GstQuery * query) decideAllocationOptional. Setup the allocation parameters for allocating output buffers. The passed in query contains the result of the downstream allocation query. Subclasses should chain up to the parent impleme...
gboolean function(GstVideoEncoder * encoder, GstQuery * query) proposeAllocationOptional. Propose buffer allocation parameters for upstream elements. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstVideoEncoder * encoder) flushOptional. Flush all remaining data from the encoder without pushing it downstream. Since: 1.2
gboolean function(GstVideoEncoder * encoder, GstQuery * query) sinkQueryOptional. Query handler on the sink pad. This function should return TRUE if the query could be performed. Subclasses should chain up to the parent implementation to invoke the default handler. Since: 1.4
gboolean function(GstVideoEncoder * encoder, GstQuery * query) srcQueryOptional. Query handler on the source pad. This function should return TRUE if the query could be performed. Subclasses should chain up to the parent implementation to invoke the default handler. Since: 1.4
gboolean function(GstVideoEncoder * encoder, GstVideoCodecFrame * frame, GstMeta * meta) transformMetaOptional. Transform the metadata on the input buffer to the output buffer. By default this method copies all meta without tags and meta with only the "video" tag. Subclasses can implement this method and return TRUE if the metadata is to be copied.
void *[16] GstReserved

Provides useful functions and a base class for video filters.

The videofilter will by default enable QoS on the parent GstBaseTransform to implement frame dropping.

Fields
gboolean negotiated
GstVideoInfo outInfo
void *[4] GstReserved

The video filter class structure.

Fields
GstBaseTransformClass parentClassthe parent class structure
gboolean function(GstVideoFilter * filter, GstCaps * incaps, GstVideoInfo * inInfo, GstCaps * outcaps, GstVideoInfo * outInfo) setInfofunction to be called with the negotiated caps and video infos
GstFlowReturn function(GstVideoFilter * filter, GstVideoFrame * inframe, GstVideoFrame * outframe) transformFrametransform a video frame
GstFlowReturn function(GstVideoFilter * trans, GstVideoFrame * frame) transformFrameIptransform a video frame in place
void *[4] GstReserved

Information for a video format.

Fields
GstVideoFormat format#GstVideoFormat
const(char) * namestring representation of the format
const(char) * descriptiona human readable description of the format
GstVideoFormatFlags flags#GstVideoFormatFlags
uint bitsThe number of bits used to pack data items. This can be less than 8 when multiple pixels are stored in a byte. For values > 8, multiple bytes should be read according to the endianness flag before applying the shift and mask.
uint nComponentsthe number of components in the video format.
uint[4] shiftthe number of bits to shift away to get the component data
uint[4] depththe depth in bits for each component
int[4] pixelStridethe pixel stride of each component. This is the amount of bytes to the pixel immediately to the right. When bits < 8, the stride is expressed in bits. For 24-bit RGB, this would be 3 bytes, for example, while it would be 4 bytes for RGBx or ARGB.
uint nPlanesthe number of planes for this format. The number of planes can be less than the amount of components when multiple components are packed into one plane.
uint[4] planethe plane number where a component can be found
uint[4] poffsetthe offset in the plane where the first pixel of the components can be found.
uint[4] wSubsubsampling factor of the width for the component. Use GST_VIDEO_SUB_SCALE to scale a width.
uint[4] hSubsubsampling factor of the height for the component. Use GST_VIDEO_SUB_SCALE to scale a height.
GstVideoFormat unpackFormatthe format of the unpacked pixels. This format must have the #GST_VIDEO_FORMAT_FLAG_UNPACK flag set.
GstVideoFormatUnpack unpackFuncan unpack function for this format
int packLinesthe amount of lines that will be packed
GstVideoFormatPack packFunca pack function for this format
GstVideoTileMode tileModeThe tiling mode
uint tileWsThe width of a tile, in bytes, represented as a shift. DEPRECATED, use tile_info[] array instead.
uint tileHsThe height of a tile, in bytes, represented as a shift. DEPRECATED, use tile_info[] array instead.
GstVideoTileInfo[4] tileInfoInformation about the tiles for each of the planes.

A video frame obtained from [gstvideo.video_frame.VideoFrame.map]

Fields
GstVideoInfo infothe #GstVideoInfo
GstVideoFrameFlags flags#GstVideoFrameFlags for the frame
GstBuffer * bufferthe mapped buffer
void * metapointer to metadata if any
int idid of the mapped frame. The id can, for example, be used to identify the frame in case of multiview video.
void *[4] datapointers to the plane data
GstMapInfo[4] mapmappings of the planes
void *[4] GstReserved

Extra buffer metadata for uploading a buffer to an OpenGL texture ID. The caller of [gstvideo.video_gltexture_upload_meta.VideoGLTextureUploadMeta.upload] must have OpenGL set up and call this from a thread where it is valid to upload something to an OpenGL texture.

Fields
GstMeta metaparent #GstMeta
GstVideoGLTextureOrientation textureOrientationOrientation of the textures
uint nTexturesNumber of textures that are generated
GstVideoGLTextureType[4] textureTypeType of each texture
GstBuffer * buffer
void * userData
GBoxedCopyFunc userDataCopy
GBoxedFreeFunc userDataFree

Information describing image properties. This information can be filled in from GstCaps with [gstvideo.video_info.VideoInfo.fromCaps]. The information is also used to store the specific video info when mapping a video frame with [gstvideo.video_frame.VideoFrame.map].

Use the provided macros to access the info in this structure.

Fields
const(GstVideoFormatInfo) * finfothe format info of the video
GstVideoInterlaceMode interlaceModethe interlace mode
GstVideoFlags flagsadditional video flags
int widththe width of the video
int heightthe height of the video
size_t sizethe default size of one frame
int viewsthe number of views for multiview video
GstVideoChromaSite chromaSitea #GstVideoChromaSite.
GstVideoColorimetry colorimetrythe colorimetry info
int parNthe pixel-aspect-ratio numerator
int parDthe pixel-aspect-ratio denominator
int fpsNthe framerate numerator
int fpsDthe framerate denominator
size_t[4] offsetoffsets of the planes
int[4] stridestrides of the planes
ABIType ABI

Information describing a DMABuf image properties. It wraps #GstVideoInfo and adds DRM information such as drm-fourcc and drm-modifier, required for negotiation and mapping.

Fields
GstVideoInfo vinfothe associated #GstVideoInfo
uint drmFourccthe fourcc defined by drm
ulong drmModifierthe drm modifier
uint[20] GstReserved

Mastering display color volume information defined by SMPTE ST 2086 (a.k.a static HDR metadata).

Fields
GstVideoMasteringDisplayInfoCoordinates[3] displayPrimariesthe xy coordinates of the primaries in the CIE 1931 color space. Index 0 contains red, 1 green and 2 blue. Each value is normalized to 50000 (i.e. expressed in units of 0.00002)
GstVideoMasteringDisplayInfoCoordinates whitePointthe xy coordinates of the white point in the CIE 1931 color space. Each value is normalized to 50000 (i.e. expressed in units of 0.00002)
uint maxDisplayMasteringLuminancethe maximum display luminance in units of 0.0001 candelas per square metre (cd/m^2, i.e. nits)
uint minDisplayMasteringLuminancethe minimum display luminance in units of 0.0001 candelas per square metre (cd/m^2, i.e. nits)
void *[4] GstReserved

Used to represent display_primaries and white_point of #GstVideoMasteringDisplayInfo struct. See #GstVideoMasteringDisplayInfo

Fields
ushort xthe x coordinate of the CIE 1931 color space in units of 0.00002.
ushort ythe y coordinate of the CIE 1931 color space in units of 0.00002.

Extra buffer metadata describing image properties

This meta can also be used by downstream elements to specify their buffer layout requirements for upstream. Upstream should try to fit those requirements, if possible, in order to prevent buffer copies.

This is done by passing a custom #GstStructure to [gst.query.Query.addAllocationMeta] when handling the ALLOCATION query. This structure should be named 'video-meta' and can have the following fields:

  • padding-top (uint): extra pixels on the top
  • padding-bottom (uint): extra pixels on the bottom
  • padding-left (uint): extra pixels on the left side
  • padding-right (uint): extra pixels on the right side

The padding fields have the same semantic as #GstVideoMeta.alignment and so represent the paddings requested on produced video buffers.

Since 1.24 it can be serialized using [gst.meta.Meta.serialize] and [gst.meta.Meta.deserialize].

Fields
GstMeta metaparent #GstMeta
GstBuffer * bufferthe buffer this metadata belongs to
GstVideoFrameFlags flagsadditional video flags
GstVideoFormat formatthe video format
int ididentifier of the frame
uint widththe video width
uint heightthe video height
uint nPlanesthe number of planes in the image
size_t[4] offsetarray of offsets for the planes. This field might not always be valid, it is used by the default implementation of @map.
int[4] stridearray of strides for the planes. This field might not always be valid, it is used by the default implementation of @map.
gboolean function(GstVideoMeta * meta, uint plane, GstMapInfo * info, void * * data, int * stride, GstMapFlags flags) mapmap the memory of a plane
gboolean function(GstVideoMeta * meta, uint plane, GstMapInfo * info) unmapunmap the memory of a plane
GstVideoAlignment alignmentthe paddings and alignment constraints of the video buffer. It is up to the caller of `[gstvideo.global.bufferAddVideoMetaFull]` to set it using [gstvideo.video_meta.VideoMeta.setAlignment]; if not set, it defaults to no padding and no alignment. Since: 1.18

Extra data passed to a video transform #GstMetaTransformFunction such as: "gst-video-scale".

Fields
GstVideoInfo * inInfothe input #GstVideoInfo
GstVideoInfo * outInfothe output #GstVideoInfo

See #GstVideoMultiviewFlags.

The interface allows unified access to control flipping and autocenter operation of video-sources or operators.

#GstVideoOrientationInterface interface.

Fields
GTypeInterface ifaceparent interface type.
gboolean function(GstVideoOrientation * videoOrientation, gboolean * flip) getHflipvirtual method to get horizontal flipping state
gboolean function(GstVideoOrientation * videoOrientation, gboolean * flip) getVflipvirtual method to get vertical flipping state
gboolean function(GstVideoOrientation * videoOrientation, int * center) getHcentervirtual method to get horizontal centering state
gboolean function(GstVideoOrientation * videoOrientation, int * center) getVcentervirtual method to get vertical centering state
gboolean function(GstVideoOrientation * videoOrientation, gboolean flip) setHflipvirtual method to set horizontal flipping state
gboolean function(GstVideoOrientation * videoOrientation, gboolean flip) setVflipvirtual method to set vertical flipping state
gboolean function(GstVideoOrientation * videoOrientation, int center) setHcentervirtual method to set horizontal centering state
gboolean function(GstVideoOrientation * videoOrientation, int center) setVcentervirtual method to set vertical centering state

The #GstVideoOverlay interface is used for 2 main purposes:

  • To get a grab on the Window where the video sink element is going to render. This is achieved by either being informed about the Window identifier that the video sink element generated, or by forcing the video sink element to use a specific Window identifier for rendering.

  • To force a redrawing of the latest video frame the video sink element displayed on the Window. Indeed if the #GstPipeline is in #GST_STATE_PAUSED state, moving the Window around will damage its content. Application developers will want to handle the Expose events themselves and force the video sink element to refresh the Window's content.

Using the Window created by the video sink is probably the simplest scenario, in some cases, though, it might not be flexible enough for application developers if they need to catch events such as mouse moves and button clicks.

Setting a specific Window identifier on the video sink element is the most flexible solution but it has some issues. Indeed the application needs to set its Window identifier at the right time to avoid internal Window creation from the video sink element. To solve this issue a #GstMessage is posted on the bus to inform the application that it should set the Window identifier immediately. Here is an example on how to do that correctly:

static GstBusSyncReply
create_window (GstBus * bus, GstMessage * message, GstPipeline * pipeline)
{
// ignore anything but 'prepare-window-handle' element messages
if (!gst_is_video_overlay_prepare_window_handle_message (message))
  return GST_BUS_PASS;

// 'disp', 'root' and 'win' are X11 Display, root Window and Window
// variables managed elsewhere by the application
win = XCreateSimpleWindow (disp, root, 0, 0, 320, 240, 0, 0, 0);

XSetWindowBackgroundPixmap (disp, win, None);

XMapRaised (disp, win);

XSync (disp, FALSE);

gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (GST_MESSAGE_SRC (message)),
    win);

gst_message_unref (message);

return GST_BUS_DROP;
}
...
int
main (int argc, char **argv)
{
...
bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
gst_bus_set_sync_handler (bus, (GstBusSyncHandler) create_window, pipeline,
       NULL);
...
}

Two basic usage scenarios

There are two basic usage scenarios: in the simplest case, the application uses #playbin or #playsink or knows exactly what particular element is used for video output, which is usually the case when the application creates the videosink to use (e.g. #xvimagesink, #ximagesink, etc.) itself; in this case, the application can just create the videosink element, create and realize the window to render the video on and then call [gstvideo.video_overlay.VideoOverlay.setWindowHandle] directly with the XID or native window handle, before starting up the pipeline. As #playbin and #playsink implement the video overlay interface and proxy it transparently to the actual video sink even if it is created later, this case also applies when using these elements.

In the other and more common case, the application does not know in advance what GStreamer video sink element will be used for video output. This is usually the case when an element such as #autovideosink is used. In this case, the video sink element itself is created asynchronously from a GStreamer streaming thread some time after the pipeline has been started up. When that happens, however, the video sink will need to know right then whether to render onto an already existing application window or whether to create its own window. This is when it posts a prepare-window-handle message, and that is also why this message needs to be handled in a sync bus handler which will be called from the streaming thread directly (because the video sink will need an answer right then).

As response to the prepare-window-handle element message in the bus sync handler, the application may use [gstvideo.video_overlay.VideoOverlay.setWindowHandle] to tell the video sink to render onto an existing window surface. At this point the application should already have obtained the window handle / XID, so it just needs to set it. It is generally not advisable to call any GUI toolkit functions or window system functions from the streaming thread in which the prepare-window-handle message is handled, because most GUI toolkits and windowing systems are not thread-safe at all and a lot of care would be required to co-ordinate the toolkit and window system calls of the different threads (Gtk+ users please note: prior to Gtk+ 2.18 GDK_WINDOW_XID was just a simple structure access, so generally fine to do within the bus sync handler; this macro was changed to a function call in Gtk+ 2.18 and later, which is likely to cause problems when called from a sync handler; see below for a better approach without GDK_WINDOW_XID used in the callback).

GstVideoOverlay and Gtk+

#include <gst/video/videooverlay.h>
#include <gtk/gtk.h>
#ifdef GDK_WINDOWING_X11
#include <gdk/gdkx.h>  // for GDK_WINDOW_XID
#endif
#ifdef GDK_WINDOWING_WIN32
#include <gdk/gdkwin32.h>  // for GDK_WINDOW_HWND
#endif
...
static guintptr video_window_handle = 0;
...
static GstBusSyncReply
bus_sync_handler (GstBus * bus, GstMessage * message, gpointer user_data)
{
// ignore anything but 'prepare-window-handle' element messages
if (!gst_is_video_overlay_prepare_window_handle_message (message))
  return GST_BUS_PASS;

if (video_window_handle != 0) {
  GstVideoOverlay *overlay;

  // GST_MESSAGE_SRC (message) will be the video sink element
  overlay = GST_VIDEO_OVERLAY (GST_MESSAGE_SRC (message));
  gst_video_overlay_set_window_handle (overlay, video_window_handle);
} else {
  g_warning ("Should have obtained video_window_handle by now!");
}

gst_message_unref (message);
return GST_BUS_DROP;
}
...
static void
video_widget_realize_cb (GtkWidget * widget, gpointer data)
{
#if GTK_CHECK_VERSION(2,18,0)
 // Tell Gtk+/Gdk to create a native window for this widget instead of
 // drawing onto the parent widget.
 // This is here just for pedagogical purposes, GDK_WINDOW_XID will call
 // it as well in newer Gtk versions
 if (!gdk_window_ensure_native (widget->window))
   g_error ("Couldn't create native window needed for GstVideoOverlay!");
#endif

#ifdef GDK_WINDOWING_X11
 {
   gulong xid = GDK_WINDOW_XID (gtk_widget_get_window (video_window));
   video_window_handle = xid;
 }
#endif
#ifdef GDK_WINDOWING_WIN32
 {
   HWND wnd = GDK_WINDOW_HWND (gtk_widget_get_window (video_window));
   video_window_handle = (guintptr) wnd;
 }
#endif
}
...
int
main (int argc, char **argv)
{
 GtkWidget *video_window;
 GtkWidget *app_window;
 ...
 app_window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
 ...
 video_window = gtk_drawing_area_new ();
 g_signal_connect (video_window, "realize",
     G_CALLBACK (video_widget_realize_cb), NULL);
 gtk_widget_set_double_buffered (video_window, FALSE);
 ...
 // usually the video_window will not be directly embedded into the
 // application window like this, but there will be many other widgets
 // and the video window will be embedded in one of them instead
 gtk_container_add (GTK_CONTAINER (app_window), video_window);
 ...
 // show the GUI
 gtk_widget_show_all (app_window);

 // realize window now so that the video window gets created and we can
 // obtain its XID/HWND before the pipeline is started up and the videosink
 // asks for the XID/HWND of the window to render onto
 gtk_widget_realize (video_window);

 // we should have the XID/HWND now
 g_assert (video_window_handle != 0);
 ...
 // set up sync handler for setting the xid once the pipeline is started
 bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
 gst_bus_set_sync_handler (bus, (GstBusSyncHandler) bus_sync_handler, NULL,
     NULL);
 gst_object_unref (bus);
 ...
 gst_element_set_state (pipeline, GST_STATE_PLAYING);
 ...
}

GstVideoOverlay and Qt

#include <glib.h>
#include <gst/gst.h>
#include <gst/video/videooverlay.h>

#include <QApplication>
#include <QTimer>
#include <QWidget>

int main(int argc, char *argv[])
{
 if (!g_thread_supported ())
   g_thread_init (NULL);

 gst_init (&argc, &argv);
 QApplication app(argc, argv);
 app.connect(&app, SIGNAL(lastWindowClosed()), &app, SLOT(quit ()));

 // prepare the pipeline

 GstElement *pipeline = gst_pipeline_new ("xvoverlay");
 GstElement *src = gst_element_factory_make ("videotestsrc", NULL);
 GstElement *sink = gst_element_factory_make ("xvimagesink", NULL);
 gst_bin_add_many (GST_BIN (pipeline), src, sink, NULL);
 gst_element_link (src, sink);

 // prepare the ui

 QWidget window;
 window.resize(320, 240);
 window.show();

 WId xwinid = window.winId();
 gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (sink), xwinid);

 // run the pipeline

 GstStateChangeReturn sret = gst_element_set_state (pipeline,
     GST_STATE_PLAYING);
 if (sret == GST_STATE_CHANGE_FAILURE) {
   gst_element_set_state (pipeline, GST_STATE_NULL);
   gst_object_unref (pipeline);
   // Exit application
   QTimer::singleShot(0, QApplication::activeWindow(), SLOT(quit()));
 }

 int ret = app.exec();

 window.hide();
 gst_element_set_state (pipeline, GST_STATE_NULL);
 gst_object_unref (pipeline);

 return ret;
}

Functions to create and handle overlay compositions on video buffers.

An overlay composition describes one or more overlay rectangles to be blended on top of a video buffer.

This API serves two main purposes:

  • it can be used to attach overlay information (subtitles or logos) to non-raw video buffers such as GL/VAAPI/VDPAU surfaces. The actual blending of the overlay can then be done by e.g. the video sink that processes these non-raw buffers.

  • it can also be used to blend overlay rectangles on top of raw video buffers, thus consolidating blending functionality for raw video in one place.

Together, this allows existing overlay elements to easily handle raw and non-raw video as input without major changes (once the overlays have been put into a #GstVideoOverlayComposition object anyway) - for raw video the overlay can just use the blending function to blend the data on top of the video, and for surface buffers it can just attach them to the buffer and let the sink render the overlays.

Extra buffer metadata describing image overlay data.

Fields
GstMeta metaparent #GstMeta
GstVideoOverlayComposition * overlaythe attached #GstVideoOverlayComposition

#GstVideoOverlay interface

Fields
GTypeInterface ifaceparent interface type.
void function(GstVideoOverlay * overlay) exposevirtual method to handle expose events
void function(GstVideoOverlay * overlay, gboolean handleEvents) handleEventsvirtual method to handle events
void function(GstVideoOverlay * overlay, int x, int y, int width, int height) setRenderRectanglevirtual method to set the render rectangle
void function(GstVideoOverlay * overlay, size_t handle) setWindowHandlevirtual method to configure the window handle

An opaque video overlay rectangle object. A rectangle contains a single overlay rectangle which can be added to a composition.

Helper structure representing a rectangular area.

Fields
int xX coordinate of rectangle's top-left point
int yY coordinate of rectangle's top-left point
int wwidth of the rectangle
int hheight of the rectangle

Extra buffer metadata describing an image region of interest

Fields
GstMeta metaparent #GstMeta
GQuark roiTypeGQuark describing the semantics of the ROI (e.g. a face, a pedestrian)
int ididentifier of this particular ROI
int parentIdidentifier of its parent ROI, used e.g. for ROI hierarchisation.
uint xx component of upper-left corner
uint yy component of upper-left corner
uint wbounding box width
uint hbounding box height
GList * paramslist of #GstStructure containing element-specific params for downstream, see [gstvideo.videoregionofinterestmeta.VideoRegionOfInterestMeta.addParam]. (Since: 1.14)

#GstVideoResampler is a structure which holds the information required to perform various kinds of resampling filtering.

Fields
int inSizethe input size
int outSizethe output size
uint maxTapsthe maximum number of taps
uint nPhasesthe number of phases
uint * offsetarray with the source offset for each output element
uint * phasearray with the phase to use for each output element
uint * nTapsarray with new number of taps for each phase
double * tapsthe taps for all phases
void *[4] GstReserved

H.264/H.265 metadata from SEI User Data Unregistered messages

Fields
GstMeta metaparent #GstMeta
ubyte[16] uuidUser Data Unregistered UUID
ubyte * dataUnparsed data buffer
size_t sizeSize of the data buffer

#GstVideoScaler is a utility object for rescaling and resampling video frames using various interpolation / sampling methods.

Provides useful functions and a base class for video sinks.

GstVideoSink will configure the default base sink to drop frames that arrive later than 20ms as this is considered the default threshold for observing out-of-sync frames.

Fields
GstBaseSink element
int widthvideo width (derived class needs to set this)
int heightvideo height (derived class needs to set this)
void *[4] GstReserved

The video sink class structure. Derived classes should override the @show_frame virtual function.

Fields
GstBaseSinkClass parentClassthe parent class structure
GstFlowReturn function(GstVideoSink * videoSink, GstBuffer * buf) showFramerender a video frame. Maps to #GstBaseSinkClass.render() and #GstBaseSinkClass.preroll() vfuncs. Rendering during preroll will be suppressed if the #GstVideoSink:show-preroll-frame property is set ...
gboolean function(GstVideoSink * videoSink, GstCaps * caps, const(GstVideoInfo) * info) setInfo
void *[3] GstReserved

Description of a tile. This structure allows describing arbitrary tile dimensions and sizes.

Fields
uint width: The width in pixels of a tile. This value can be zero if the number of pixels per line is not an integer value.
uint height: The height in pixels of a tile.
uint stride: The stride (in bytes) of a tile line. Regardless of whether the tile has sub-tiles, this stride multiplied by the height should equal #GstVideoTileInfo.size. This value is used to translate into line...
uint size: The size in bytes of a tile. This value must be divisible by #GstVideoTileInfo.stride.
uint[4] padding

@field_count must be 0 for progressive video and 1 or 2 for interlaced.

A representation of a SMPTE time code.

@hours must be positive and less than 24; it will wrap around otherwise. @minutes and @seconds must be positive and less than 60. @frames must be less than or equal to @config.fps_n / @config.fps_d. These values are NOT automatically normalized.

Fields
GstVideoTimeCodeConfig config: the corresponding #GstVideoTimeCodeConfig
uint hours: the hours field of #GstVideoTimeCode
uint minutes: the minutes field of #GstVideoTimeCode
uint seconds: the seconds field of #GstVideoTimeCode
uint frames: the frames field of #GstVideoTimeCode
uint fieldCount: interlaced video field count

Supported frame rates: 30000/1001, 60000/1001 (both with and without drop frame), and integer frame rates e.g. 25/1, 30/1, 50/1, 60/1.

The configuration of the time code.

Fields
uint fpsN: Numerator of the frame rate
uint fpsD: Denominator of the frame rate
GstVideoTimeCodeFlags flags: the corresponding #GstVideoTimeCodeFlags
GDateTime * latestDailyJam: The latest daily jam information, if present, or NULL

A representation of a difference between two #GstVideoTimeCode instances. Will not necessarily correspond to a real timecode (e.g. 00:00:10;00).

Fields
uint hours: the hours field of #GstVideoTimeCodeInterval
uint minutes: the minutes field of #GstVideoTimeCodeInterval
uint seconds: the seconds field of #GstVideoTimeCodeInterval
uint frames: the frames field of #GstVideoTimeCodeInterval

Extra buffer metadata describing the GstVideoTimeCode of the frame.

Each frame is assumed to have its own timecode, i.e. they are not automatically incremented/interpolated.

Fields
GstMeta meta: parent #GstMeta
GstVideoTimeCode tc: the GstVideoTimeCode to attach

An encoder for writing ancillary data to the Vertical Blanking Interval lines of component signals.

A parser for detecting and extracting @GstVideoAncillary data from Vertical Blanking Interval lines of component signals.

alias GstVideoAffineTransformationGetMatrix = gboolean function(GstVideoAffineTransformationMeta * meta, float * matrix)
alias GstVideoConvertSampleCallback = void function(GstSample * sample, GError * error, void * userData)
alias GstVideoFormatPack = void function(const(GstVideoFormatInfo) * info, GstVideoPackFlags flags, void * src, int sstride, void * * data, const(int) * stride, GstVideoChromaSite chromaSite, int y, int width)
alias GstVideoFormatUnpack = void function(const(GstVideoFormatInfo) * info, GstVideoPackFlags flags, void * dest, const(void *) * data, const(int) * stride, int x, int y, int width)
alias GstVideoGLTextureUpload = gboolean function(GstVideoGLTextureUploadMeta * meta, uint * textureId)