For more details, see:
https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf
and SMPTE ST2016-1
C types for gstvideo1 library
Location of a @GstAncillaryMeta.
An enumeration indicating whether an element implements color balancing operations in software or in dedicated hardware. In general, dedicated hardware implementations (such as those provided by xvimagesink) are preferred.
A set of commands that may be issued to an element providing the #GstNavigation interface. The available commands can be queried via the [gstvideo.navigation.Navigation.queryNewCommands] query.
For convenience in handling DVD navigation, the MENU commands are aliased as:

GST_NAVIGATION_COMMAND_DVD_MENU            = @GST_NAVIGATION_COMMAND_MENU1
GST_NAVIGATION_COMMAND_DVD_TITLE_MENU      = @GST_NAVIGATION_COMMAND_MENU2
GST_NAVIGATION_COMMAND_DVD_ROOT_MENU       = @GST_NAVIGATION_COMMAND_MENU3
GST_NAVIGATION_COMMAND_DVD_SUBPICTURE_MENU = @GST_NAVIGATION_COMMAND_MENU4
GST_NAVIGATION_COMMAND_DVD_AUDIO_MENU      = @GST_NAVIGATION_COMMAND_MENU5
GST_NAVIGATION_COMMAND_DVD_ANGLE_MENU      = @GST_NAVIGATION_COMMAND_MENU6
GST_NAVIGATION_COMMAND_DVD_CHAPTER_MENU    = @GST_NAVIGATION_COMMAND_MENU7
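As a usage sketch, a command can be sent from application code with gst_navigation_send_command(); nav_element here is a hypothetical element that implements the interface (e.g. a DVD source):

// Jump to the DVD root menu on an element implementing GstNavigation.
gst_navigation_send_command (GST_NAVIGATION (nav_element),
    GST_NAVIGATION_COMMAND_DVD_ROOT_MENU);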
Enum values for the various events that an element implementing the GstNavigation interface might send up the pipeline. Touch events have been inspired by the libinput API, and have the same meaning here.
A set of notifications that may be received on the bus when navigation related status changes.
Flags to indicate the state of modifier keys and mouse buttons in events.
Typical modifier keys are Shift, Control, Meta, Super, Hyper, Alt, Compose, Apple, CapsLock or ShiftLock.
Types of navigation interface queries.
Enumeration of the different standards that may apply to AFD data:
0) ETSI/DVB: https://www.etsi.org/deliver/etsi_ts/101100_101199/101154/02.01.01_60/ts_101154v020101p.pdf
1) ATSC A/53: https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf
2) SMPTE ST2016-1
Enumeration of the various values for Active Format Description (AFD)
AFD should be included in video user data whenever the rectangular picture area containing useful information does not extend to the full height or width of the coded frame. AFD data may also be included in user data when the rectangular picture area containing useful information extends to the full height and width of the coded frame.
For details, see Table 6.14 Active Format in:
ATSC Digital Television Standard: Part 4 – MPEG-2 Video System Characteristics
https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf
and Active Format Description in the Complete list of AFD codes:
https://en.wikipedia.org/wiki/Active_Format_Description#Complete_list_of_AFD_codes
and SMPTE ST2016-1
Notes:
1) AFD 0 is undefined for ATSC and SMPTE ST2016-1, indicating that AFD data is not available: if Bar Data is not present, AFD '0000' indicates that exact information is not available and the active image should be assumed to be the same as the coded frame. AFD '0000' accompanied by Bar Data signals that the active image's aspect ratio is narrower than 16:9, but is not 4:3 or 14:9. As the exact aspect ratio cannot be conveyed by AFD alone, wherever possible, AFD '0000' should be accompanied by Bar Data to define the exact vertical or horizontal extent of the active image.
2) AFD 0 is reserved for DVB/ETSI.
3) Values 1, 5, 6, 7, and 12 are reserved for both ATSC and DVB/ETSI.
4) Values 2 and 3 are not recommended for ATSC, but are valid for DVB/ETSI.
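As an illustrative sketch, an AFD value can be attached to a buffer with gst_buffer_add_video_afd_meta(); the buffer variable and the chosen enum value are assumptions for the example:

// Tag a writable buffer with an ATSC AFD value: a 16:9 active image
// coded full-frame in a 16:9 frame.
gst_buffer_add_video_afd_meta (buffer, 0 /* field */,
    GST_VIDEO_AFD_SPEC_ATSC_A53, GST_VIDEO_AFD_16_9_LETTER_16_9_FULL);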
Different alpha modes.
Some known types of Ancillary Data identifiers.
Additional video buffer flags. These flags can potentially be used on any buffers carrying closed caption data, or video data - even encoded data.
Note that these are only valid for #GstCaps of type: video/... and caption/... They can conflict with other extended buffer flags.
The various known types of Closed Caption (CC).
Extra flags that influence the result from [gstvideo.video_chroma_resample.VideoChromaResample.new_].
Different subsampling and upsampling methods
Different chroma downsampling and upsampling modes
Various Chroma sitings.
Flags for #GstVideoCodecFrame
The color matrix is used to convert between Y'PbPr and non-linear RGB (R'G'B')
The color primaries define how to transform linear RGB values to and from the CIE XYZ colorspace.
Possible color range values. These constants are defined for 8 bit color values and can be scaled for other bit depths.
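For example, a sketch of scaling the 8-bit studio-swing luma limits (16..235) to a higher bit depth by shifting:

// Scale the 8-bit studio-swing luma range to 10 bits:
// 16 << 2 = 64, 235 << 2 = 940, matching the usual 10-bit limits.
guint depth = 10;
guint luma_min = 16 << (depth - 8);
guint luma_max = 235 << (depth - 8);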
Flags to be used in combination with [gstvideo.video_decoder.VideoDecoder.requestSyncPoint]. See the function documentation for more details.
Extra flags that influence the result from [gstvideo.video_dither.VideoDither.new_].
Different dithering methods to use.
Field order of interlaced content. This is only valid for interlace-mode=interleaved and not interlace-mode=mixed. In the case of mixed or GST_VIDEO_FIELD_ORDER_UNKNOWN, the field order is signalled via buffer flags.
Extra video flags
Enum values describing the most common video formats.
See the GStreamer raw video format design document for details about the layout and packing of these formats in memory.
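As a small usage sketch (the format name is chosen arbitrarily), the format info for a given format can be looked up and inspected:

// Look up the format info for NV12 and print its plane count.
GstVideoFormat fmt = gst_video_format_from_string ("NV12");
const GstVideoFormatInfo *finfo = gst_video_format_get_info (fmt);
g_print ("%s has %u planes\n", gst_video_format_to_string (fmt),
    GST_VIDEO_FORMAT_INFO_N_PLANES (finfo));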
The different video flags that a format info can have.
Extra video frame flags
Additional mapping flags for [gstvideo.video_frame.VideoFrame.map].
The orientation of the GL texture.
The GL texture type.
The possible values of the #GstVideoInterlaceMode describing the interlace mode of the stream.
Different color matrix conversion modes
GstVideoMultiviewFlags are used to indicate extra properties of a stereo/multiview stream beyond the frame layout and buffer mapping that is conveyed in the #GstVideoMultiviewMode.
#GstVideoMultiviewFramePacking represents the subset of #GstVideoMultiviewMode values that can be applied to any video frame without needing extra metadata. It can be used by elements that provide a property to override the multiview interpretation of a video stream when the video doesn't contain any markers.
This enum is used (for example) on playbin, to re-interpret a played video stream as a stereoscopic video. The individual enum values are equivalent to and have the same value as the matching #GstVideoMultiviewMode.
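A sketch of such an override on playbin (the playbin pointer is assumed to have been created elsewhere):

// Re-interpret a played 2D stream as left/right side-by-side stereo.
g_object_set (playbin, "video-multiview-mode",
    GST_VIDEO_MULTIVIEW_FRAME_PACKING_SIDE_BY_SIDE, NULL);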
All possible stereoscopic 3D and multiview representations. In conjunction with #GstVideoMultiviewFlags, describes how multiview content is being transported in the stream.
The different video orientation methods.
Overlay format flags.
The different flags that can be used when packing and unpacking.
Different primaries conversion modes
Different resampler flags.
Different subsampling and upsampling methods
Different scale flags.
Enum values describing the available tiling modes.
Enum values describing the most common tiling types.
Flags related to the time code information. For drop frame, only 30000/1001 and 60000/1001 frame rates are supported.
The video transfer function defines the formula for converting between non-linear RGB (R'G'B') and linear RGB
Return values for #GstVideoVBIParser
#GstMeta for carrying SMPTE-291M Ancillary data. Note that all the ADF fields (@DID to @checksum) are 10bit values with parity/non-parity high-bits set.
GstMeta meta: parent #GstMeta
GstAncillaryMetaField field: The field where the ancillary data is located
gboolean cNotYChannel: Which channel (luminance or chrominance) the ancillary data is located in. 0 if content is SD or stored in the luminance channel (default). 1 if HD and stored in the chrominance channel.
ushort line: The line on which the ancillary data is located (max 11bit). There are two special values: 0x7ff if no line is specified (default), 0x7fe to specify the ancillary data is on any valid line before a...
ushort offset: The location of the ancillary data packet in a SDI raster relative to the start of active video (max 12bits). A value of 0 means the ADF of the ancillary packet starts immediately following SAV. Th...
ushort DID: Data Identifier
ushort SDIDBlockNumber: Secondary Data Identification (if type 2) or Data Block Number (if type 1)
ushort dataCount: The amount of user data
ushort * data: The User data
ushort checksum: The checksum of the ADF

This interface is implemented by elements which can perform some color balance operation on video frames they process, for example modifying the brightness, contrast, hue or saturation.
Example elements are 'xvimagesink' and 'colorbalance'
The #GstColorBalanceChannel object represents a parameter for modifying the color balance implemented by an element providing the #GstColorBalance interface. For example, Hue or Saturation.
GObject parent
char * label: A string containing a descriptive name for this channel
int minValue: The minimum valid value for this channel.
int maxValue: The maximum valid value for this channel.
void *[4] GstReserved

Color-balance channel class.
GObjectClass parent: the parent class
void function(GstColorBalanceChannel * channel, int value) valueChanged: default handler for value changed notification
void *[4] GstReserved

Color-balance interface.
GTypeInterface iface: the parent interface
const(GList) * function(GstColorBalance * balance) listChannels: list handled channels
void function(GstColorBalance * balance, GstColorBalanceChannel * channel, int value) setValue: set a channel value
int function(GstColorBalance * balance, GstColorBalanceChannel * channel) getValue: get a channel value
GstColorBalanceType function(GstColorBalance * balance) getBalanceType: implementation type
void function(GstColorBalance * balance, GstColorBalanceChannel * channel, int value) valueChanged: default handler for value changed notification
void *[4] GstReserved

The Navigation interface is used for creating and injecting navigation related events such as mouse button presses, cursor motion and key presses. The associated library also provides methods for parsing received events, and for sending and receiving navigation related bus events. One main use case is DVD menu navigation.
The main parts of the API are:
* The GstNavigation interface, implemented by elements which provide an application with the ability to create and inject navigation events into the pipeline.
* The GstNavigation event handling API: GstNavigation events are created in response to calls on a GstNavigation interface implementation, and sent in the pipeline. Upstream elements can use the navigation event API functions to parse the contents of received messages.
* The GstNavigation message handling API: GstNavigation messages may be sent on the message bus to inform applications of navigation related changes in the pipeline, such as the mouse moving over a clickable region, or the set of available angles changing.
The GstNavigation message functions allow creating and parsing custom bus messages for signaling GstNavigation changes.
Navigation interface.
GTypeInterface iface: the parent interface
void function(GstNavigation * navigation, GstStructure * structure) sendEvent: sending a navigation event
void function(GstNavigation * navigation, GstEvent * event) sendEventSimple: sending a navigation event (Since: 1.22)

Active Format Description (AFD)
For details, see Table 6.14 Active Format in:
ATSC Digital Television Standard: Part 4 – MPEG-2 Video System Characteristics
https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf
and Active Format Description in the Complete list of AFD codes:
https://en.wikipedia.org/wiki/Active_Format_Description#Complete_list_of_AFD_codes
and SMPTE ST2016-1
GstMeta meta: parent #GstMeta
ubyte field: 0 for progressive or field 1 and 1 for field 2
GstVideoAFDSpec spec: #GstVideoAFDSpec that applies to @afd
GstVideoAFDValue afd: #GstVideoAFDValue AFD value

Extra buffer metadata for performing an affine transformation using a 4x4 matrix. The transformation matrix can be composed with [gstvideo.video_affine_transformation_meta.VideoAffineTransformationMeta.applyMatrix].
The vertices operated on are all in the range 0 to 1, not in Normalized Device Coordinates (-1 to +1). Transformed points are assumed to have an origin at (0.5, 0.5, 0.5) in a left-handed coordinate system with the x-axis moving horizontally (positive values to the right), the y-axis moving vertically (positive values up the screen) and the z-axis perpendicular to the screen (positive values into the screen).
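A sketch of attaching the meta and composing a flip into it; "buffer" is an assumption, and the matrix is laid out in OpenGL-style column-major order (also an assumption):

// Compose a vertical flip (y' = 1 - y) into the buffer's meta.
GstVideoAffineTransformationMeta *trans_meta =
    gst_buffer_add_video_affine_transformation_meta (buffer);
const gfloat flip_matrix[16] = {
  1.0f,  0.0f, 0.0f, 0.0f,
  0.0f, -1.0f, 0.0f, 0.0f,
  0.0f,  0.0f, 1.0f, 0.0f,
  0.0f,  1.0f, 0.0f, 1.0f,   /* translation column: y offset of 1 */
};
gst_video_affine_transformation_meta_apply_matrix (trans_meta, flip_matrix);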
VideoAggregator can accept AYUV, ARGB and BGRA video streams. For each of the requested sink pads it will compare the incoming geometry and framerate to define the output parameters. The output video frames will have the geometry of the largest incoming video stream and the framerate of the fastest incoming one.
VideoAggregator will do colorspace conversion.
The Z-order of each input stream can be configured on the #GstVideoAggregatorPad.
GstAggregator aggregator
GstVideoInfo info: The #GstVideoInfo representing the currently set srcpad caps.
GstVideoAggregatorPrivate * priv
void *[20] GstReserved

GstAggregatorClass parentClass
GstCaps * function(GstVideoAggregator * videoaggregator, GstCaps * caps) updateCaps: Optional. Lets subclasses update the #GstCaps representing the src pad caps before usage. Return null to indicate failure.
GstFlowReturn function(GstVideoAggregator * videoaggregator, GstBuffer * outbuffer) aggregateFrames: Lets subclasses aggregate frames that are ready. Subclasses should iterate the GstElement.sinkpads and use the already mapped #GstVideoFrame from [gstvideo.videoaggregatorpad.VideoAggregatorPad.get...
GstFlowReturn function(GstVideoAggregator * videoaggregator, GstBuffer * * outbuffer) createOutputBuffer: Optional. Lets subclasses provide a #GstBuffer to be used as @outbuffer of the #aggregate_frames vmethod.
void function(GstVideoAggregator * vagg, GstCaps * downstreamCaps, GstVideoInfo * bestInfo, gboolean * atLeastOneAlpha) findBestFormat: Optional. Lets subclasses decide on the best common format to use.
void *[20] GstReserved

An implementation of GstPad that can be used with #GstVideoAggregator.
See #GstVideoAggregator for more details.
GstVideoAggregatorPadClass parentClass
void function(GstVideoAggregatorConvertPad * pad, GstVideoAggregator * agg, GstVideoInfo * conversionInfo) createConversionInfo
void *[4] GstReserved

GstAggregatorPad parent
GstVideoInfo info: The #GstVideoInfo currently set on the pad
GstVideoAggregatorPadPrivate * priv
void *[4] GstReserved

GstAggregatorPadClass parentClass
void function(GstVideoAggregatorPad * pad) updateConversionInfo: Called when either the input or output formats have changed.
gboolean function(GstVideoAggregatorPad * pad, GstVideoAggregator * videoaggregator, GstBuffer * buffer, GstVideoFrame * preparedFrame) prepareFrame: Prepare the frame from the pad buffer and set it in prepared_frame. Implementations should always return TRUE. Returning FALSE will cease iteration over subsequent pads.
void function(GstVideoAggregatorPad * pad, GstVideoAggregator * videoaggregator, GstVideoFrame * preparedFrame) cleanFrame: Clean the frame previously prepared in prepare_frame.
void function(GstVideoAggregatorPad * pad, GstVideoAggregator * videoaggregator, GstBuffer * buffer, GstVideoFrame * preparedFrame) prepareFrameStart
void function(GstVideoAggregatorPad * pad, GstVideoAggregator * videoaggregator, GstVideoFrame * preparedFrame) prepareFrameFinish
void *[18] GstReserved

An implementation of GstPad that can be used with #GstVideoAggregator.
See #GstVideoAggregator for more details.
GstVideoAggregatorConvertPad parentInstance

Extra alignment parameters for the memory of video buffers. This structure is usually used to configure the bufferpool if it supports the #GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT.
uint paddingTop: extra pixels on the top
uint paddingBottom: extra pixels on the bottom
uint paddingLeft: extra pixels on the left side
uint paddingRight: extra pixels on the right side
uint[4] strideAlign: array with extra alignment requirements for the strides

Video Ancillary data, according to SMPTE-291M specification.
Note that the contents of the data are always stored as 8bit data (i.e. do not contain the parity check bits).
ubyte DID: The Data Identifier
ubyte SDIDBlockNumber: The Secondary Data Identifier (if type 2) or the Data Block Number (if type 1)
ubyte dataCount: The amount of data (in bytes) in @data (max 255 bytes)
ubyte * data: The user data content of the Ancillary packet. Does not contain the ADF, DID, SDID nor CS.
void *[4] GstReserved

Bar data should be included in video user data whenever the rectangular picture area containing useful information does not extend to the full height or width of the coded frame and AFD alone is insufficient to describe the extent of the image.
For more details, see:
https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf
and SMPTE ST2016-1
GstMeta meta: parent #GstMeta
ubyte field: 0 for progressive or field 1 and 1 for field 2
gboolean isLetterbox: if true then bar data specifies letterbox, otherwise pillarbox
uint barData1: If @is_letterbox is true, then the value specifies the last line of a horizontal letterbox bar area at top of reconstructed frame. Otherwise, it specifies the last horizontal luminance sample of a ...
uint barData2: If @is_letterbox is true, then the value specifies the first line of a horizontal letterbox bar area at bottom of reconstructed frame. Otherwise, it specifies the first horizontal luminance sample ...

GstBufferPoolClass parentClass

Extra buffer metadata providing Closed Caption.
GstMeta meta: parent #GstMeta
GstVideoCaptionType captionType: The type of Closed Caption contained in the meta.
ubyte * data: The Closed Caption data.
size_t size: The size in bytes of @data

This meta is primarily for internal use in GStreamer elements to support VP8/VP9 transparent video stored into WebM or Matroska containers, or transparent static AV1 images. Nothing prevents you from using this meta for custom purposes, but it generally can't easily be used to add support for alpha channels to CODECs or formats that don't support that out of the box.
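Returning to the closed caption meta above, a sketch of attaching raw CEA-608 byte pairs to a frame ("buffer" and the particular control-code pair are assumptions):

// Attach a two-byte CEA-608 pair to a writable video buffer.
guint8 cc_data[2] = { 0x94, 0x2c };
gst_buffer_add_video_caption_meta (buffer,
    GST_VIDEO_CAPTION_TYPE_CEA608_RAW, cc_data, sizeof (cc_data));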
A #GstVideoCodecFrame represents a video frame both in raw and encoded form.
int refCount
uint flags
uint systemFrameNumber: Unique identifier for the frame. Use this if you need to get hold of the frame later (like when data is being decoded). Typical usage in decoders is to set this on the opaque value provided to the ...
uint decodeFrameNumber
uint presentationFrameNumber
GstClockTime dts: Decoding timestamp
GstClockTime pts: Presentation timestamp
GstClockTime duration: Duration of the frame
int distanceFromSync: Distance in frames from the last synchronization point.
GstBuffer * inputBuffer: the input #GstBuffer that created this frame. The buffer is owned by the frame and references to the frame instead of the buffer should be kept.
GstBuffer * outputBuffer: the output #GstBuffer. Implementations should set this either directly, or by using the [gstvideo.videodecoder.VideoDecoder.allocateOutputFrame] or [gstvideo.videodecoder.VideoDecoder.allocateOutpu...
GstClockTime deadline: Running time when the frame will be used.
GList * events
void * userData
GDestroyNotify userDataDestroyNotify
AbidataType abidata

Structure representing the state of an incoming or outgoing video stream for encoders and decoders.
Decoders and encoders will receive such a state through their respective @set_format vmethods.
Decoders and encoders can set the downstream state, by using the [gstvideo.video_decoder.VideoDecoder.setOutputState] or [gstvideo.video_encoder.VideoEncoder.setOutputState] methods.
int refCount
GstVideoInfo info: The #GstVideoInfo describing the stream
GstCaps * caps: The #GstCaps used in the caps negotiation of the pad.
GstBuffer * codecData: a #GstBuffer corresponding to the 'codec_data' field of a stream, or NULL.
GstCaps * allocationCaps: The #GstCaps for allocation query and pool negotiation. Since: 1.10
GstVideoMasteringDisplayInfo * masteringDisplayInfo: Mastering display color volume information (HDR metadata) for the stream.
GstVideoContentLightLevel * contentLightLevel: Content light level information for the stream.
void *[17] padding

Structure describing the chromaticity coordinates of an RGB system. These values can be used to construct a matrix to transform RGB to and from the XYZ colorspace.
GstVideoColorPrimaries primaries: a #GstVideoColorPrimaries
double Wx: reference white x coordinate
double Wy: reference white y coordinate
double Rx: red x coordinate
double Ry: red y coordinate
double Gx: green x coordinate
double Gy: green y coordinate
double Bx: blue x coordinate
double By: blue y coordinate

Structure describing the color info.
GstVideoColorRange range: the color range. This is the valid range for the samples. It is used to convert the samples to Y'PbPr values.
GstVideoColorMatrix matrix: the color matrix. Used to convert between Y'PbPr and non-linear RGB (R'G'B')
GstVideoTransferFunction transfer: the transfer function. Used to convert between R'G'B' and RGB
GstVideoColorPrimaries primaries: color primaries. Used to convert between R'G'B' and CIE XYZ

Content light level information specified in CEA-861.3, Appendix A.
ushort maxContentLightLevel: the maximum content light level (abbreviated to MaxCLL) in candelas per square meter (cd/m^2 and nit)
ushort maxFrameAverageLightLevel: the maximum frame average light level (abbreviated to MaxFALL) in candelas per square meter (cd/m^2 and nit)
void *[4] GstReserved

Extra buffer metadata describing image cropping.
GstMeta meta: parent #GstMeta
uint x: the horizontal offset
uint y: the vertical offset
uint width: the cropped width
uint height: the cropped height

This base class is for video decoders turning encoded data into raw video frames.
The GstVideoDecoder base class and derived subclasses should cooperate as follows:
* Initially, GstVideoDecoder calls @start when the decoder element is activated, which allows the subclass to perform any global setup.
* GstVideoDecoder calls @set_format to inform the subclass of caps describing input video data that it is about to receive, including possibly configuration data. While unlikely, it might be called more than once, if changing input parameters require reconfiguration.
* Incoming data buffers are processed as needed, as described in Data Processing below.
* GstVideoDecoder calls @stop at the end of all processing.
* The base class gathers input data, and optionally allows the subclass to parse this into subsequently manageable chunks, typically corresponding to and referred to as 'frames'.
* Each input frame is then provided to the subclass' @handle_frame callback.
* When the subclass enables subframe mode, the base class will provide to the subclass the same input frame with different input buffers to the subclass @handle_frame callback. During this call, the subclass needs to take ownership of the input_buffer as @GstVideoCodecFrame.input_buffer will have been changed before the next subframe buffer is received. The subclass will call [gstvideo.video_decoder.VideoDecoder.haveLastSubframe] when a new input frame can be created by the base class. Every subframe will share the same @GstVideoCodecFrame.output_buffer to write the decoding result. The subclass is responsible for protecting its access.
* As decoded pictures become available, the subclass should call @gst_video_decoder_finish_frame to have decoded data pushed downstream. In subframe mode the subclass should call @gst_video_decoder_finish_subframe until the last subframe, where it should call @gst_video_decoder_finish_frame. The subclass can detect the last subframe using GST_VIDEO_BUFFER_FLAG_MARKER on buffers or using its own logic to collect the subframes. In case of decoding failure, the subclass must call @gst_video_decoder_drop_frame or @gst_video_decoder_drop_subframe, to allow the base class to do timestamp and offset tracking, and possibly to requeue the frame for a later attempt in the case of reverse playback.
* The GstVideoDecoder class calls @stop to inform the subclass that data parsing will be stopped.
* When the pipeline is seeked or otherwise flushed, the subclass is informed via a call to its @reset callback, with the hard parameter set to true. This indicates the subclass should drop any internal data queues and timestamps and prepare for a fresh set of buffers to arrive for parsing and decoding.
* At end-of-stream, the subclass @parse function may be called some final times with the at_eos parameter set to true, indicating that the element should not expect any more data to be arriving, and it should parse any remaining frames and call [gstvideo.video_decoder.VideoDecoder.haveFrame] if possible.
The subclass is responsible for providing pad template caps for source and sink pads. The pads need to be named "sink" and "src". It also needs to provide information about the output caps, when they are known. This may be when the base class calls the subclass' @set_format function, though it might be during decoding, before calling @gst_video_decoder_finish_frame. This is done via @gst_video_decoder_set_output_state.
The subclass is also responsible for providing (presentation) timestamps (likely based on corresponding input ones). If that is not applicable or possible, the base class provides limited framerate based interpolation.
Similarly, the base class provides some limited (legacy) seeking support if specifically requested by the subclass, as full-fledged support should rather be left to upstream demuxer, parser or alike. This simple approach caters for seeking and duration reporting using estimated input bitrates. To enable it, a subclass should call @gst_video_decoder_set_estimate_rate to enable handling of incoming byte-streams.
The base class provides some support for reverse playback, in particular in case incoming data is not packetized or upstream does not provide fragments on keyframe boundaries. However, the subclass should then be prepared for the parsing and frame processing stage to occur separately (in normal forward processing, the latter immediately follows the former). The subclass also needs to ensure the parsing stage properly marks keyframes, unless it knows the upstream elements will do so properly for incoming data.
The bare minimum that a functional subclass needs to implement is:
* Provide pad templates.
* Inform the base class of output caps via @gst_video_decoder_set_output_state.
* Parse input data, if it is not considered packetized from upstream. Data will be provided to @parse, which should invoke @gst_video_decoder_add_to_frame and @gst_video_decoder_have_frame to separate the data belonging to each video frame.
* Accept data in @handle_frame and provide decoded results to @gst_video_decoder_finish_frame, or call @gst_video_decoder_drop_frame.
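A minimal sketch of a @handle_frame implementation for a packetized decoder; my_decode is a hypothetical routine that fills the output buffer:

static GstFlowReturn
my_handle_frame (GstVideoDecoder * decoder, GstVideoCodecFrame * frame)
{
  GstFlowReturn ret;

  // let the base class allocate frame->output_buffer from the
  // negotiated output state
  ret = gst_video_decoder_allocate_output_frame (decoder, frame);
  if (ret != GST_FLOW_OK)
    return gst_video_decoder_drop_frame (decoder, frame);

  if (!my_decode (frame->input_buffer, frame->output_buffer)) {
    // dropping keeps the base class timestamp tracking consistent
    return gst_video_decoder_drop_frame (decoder, frame);
  }

  return gst_video_decoder_finish_frame (decoder, frame);
}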
GstElement element
GstPad * sinkpad
GstPad * srcpad
GRecMutex streamLock
GstSegment inputSegment
GstSegment outputSegment
GstVideoDecoderPrivate * priv
void *[20] padding

Subclasses can override any of the available virtual methods or not, as needed. At minimum @handle_frame needs to be overridden, and @set_format likely as well. If non-packetized input is supported or expected, @parse needs to be overridden as well.
GstElementClass elementClass
gboolean function(GstVideoDecoder * decoder) open: Optional. Called when the element changes to GST_STATE_READY. Allows opening external resources.
gboolean function(GstVideoDecoder * decoder) close: Optional. Called when the element changes to GST_STATE_NULL. Allows closing external resources.
gboolean function(GstVideoDecoder * decoder) start: Optional. Called when the element starts processing. Allows opening external resources.
gboolean function(GstVideoDecoder * decoder) stop: Optional. Called when the element stops processing. Allows closing external resources.
GstFlowReturn function(GstVideoDecoder * decoder, GstVideoCodecFrame * frame, GstAdapter * adapter, gboolean atEos) parse: Required for non-packetized input. Allows chopping incoming data into manageable units (frames) for subsequent decoding.
gboolean function(GstVideoDecoder * decoder, GstVideoCodecState * state) setFormat: Notifies subclass of incoming data format (caps).
gboolean function(GstVideoDecoder * decoder, gboolean hard) reset: Optional. Allows subclass (decoder) to perform post-seek semantics reset. Deprecated.
GstFlowReturn function(GstVideoDecoder * decoder) finish: Optional. Called to request subclass to dispatch any pending remaining data at EOS. Sub-classes can refuse to decode new data after.
GstFlowReturn function(GstVideoDecoder * decoder, GstVideoCodecFrame * frame) handleFrame: Provides input data frame to subclass. In subframe mode, the subclass needs to take ownership of @GstVideoCodecFrame.input_buffer as it will be modified by the base class on the next subframe buffe...
gboolean function(GstVideoDecoder * decoder, GstEvent * event) sinkEvent: Optional. Event handler on the sink pad. This function should return TRUE if the event was handled and should be discarded (i.e. not unref'ed). Subclasses should chain up to the parent implementati...
gboolean function(GstVideoDecoder * decoder, GstEvent * event) srcEvent: Optional. Event handler on the source pad. This function should return TRUE if the event was handled and should be discarded (i.e. not unref'ed). Subclasses should chain up to the parent implementa...
gboolean function(GstVideoDecoder * decoder) negotiate: Optional. Negotiate with downstream and configure buffer pools, etc. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstVideoDecoder * decoder, GstQuery * query) decideAllocation: Optional. Setup the allocation parameters for allocating output buffers. The passed in query contains the result of the downstream allocation query. Subclasses should chain up to the parent impleme...
gboolean function(GstVideoDecoder * decoder, GstQuery * query) proposeAllocation: Optional. Propose buffer allocation parameters for upstream elements. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstVideoDecoder * decoder) flush: Optional. Flush all remaining data from the decoder without pushing it downstream. Since: 1.2
gboolean function(GstVideoDecoder * decoder, GstQuery * query) sinkQuery: Optional. Query handler on the sink pad. This function should return TRUE if the query could be performed. Subclasses should chain up to the parent implementation to invoke the default handler. Sin...
gboolean function(GstVideoDecoder * decoder, GstQuery * query) srcQuery: Optional. Query handler on the source pad. This function should return TRUE if the query could be performed. Subclasses should chain up to the parent implementation to invoke the default handler. S...
GstCaps * function(GstVideoDecoder * decoder, GstCaps * filter) getcaps: Optional. Allows for a custom sink getcaps implementation. If not implemented, the default returns gst_video_decoder_proxy_getcaps applied to sink template caps.
GstFlowReturn function(GstVideoDecoder * decoder) drain: Optional. Called to request subclass to decode any data it can at this point, but that more data may arrive after (e.g. at segment end). Sub-classes should be prepared to handle new data afterward...
gboolean function(GstVideoDecoder * decoder, GstVideoCodecFrame * frame, GstMeta * meta) transformMeta: Optional. Transform the metadata on the input buffer to the output buffer. By default this method copies all meta without tags and meta with only the "video" tag. Subclasses can implement this m...
gboolean function(GstVideoDecoder * decoder, GstClockTime timestamp, GstClockTime duration) handleMissingData
void *[13] padding

The interface allows unified access to control flipping and rotation operations of video sources or operators.
#GstVideoDirectionInterface interface.
GTypeInterface iface: parent interface type.

GstVideoDither provides implementations of several dithering algorithms that can be applied to lines of video pixels to quantize and dither them.
This base class is for video encoders turning raw video into encoded video data.
GstVideoEncoder and its subclass should cooperate as follows.
* Initially, GstVideoEncoder calls @start when the encoder element is activated, which allows the subclass to perform any global setup.
* GstVideoEncoder calls @set_format to inform the subclass of the format of input video data that it is about to receive. The subclass should set up for encoding and configure the base class as appropriate (e.g. latency). While unlikely, it might be called more than once, if changing input parameters require reconfiguration. The base class will ensure that processing of the current configuration is finished.
* GstVideoEncoder calls @stop at the end of all processing.
* The base class collects input data and metadata into a frame and hands this to the subclass' @handle_frame.
* If codec processing results in encoded data, the subclass should call @gst_video_encoder_finish_frame to have encoded data pushed downstream.
* If implemented, the base class calls the subclass @pre_push just prior to pushing to allow subclasses to modify some metadata on the buffer. If it returns GST_FLOW_OK, the buffer is pushed downstream.
* GstVideoEncoderClass will handle both srcpad and sinkpad events. Sink events will be passed to the subclass if the @event callback has been provided.
* The GstVideoEncoder class calls @stop to inform the subclass that data parsing will be stopped.
Subclass is responsible for providing pad template caps for source and sink pads. The pads need to be named "sink" and "src". It should also be able to provide fixed src pad caps in @getcaps by the time it calls @gst_video_encoder_finish_frame.
Things that the subclass needs to take care of:

* Provide pad templates.
* Provide source pad caps before pushing the first buffer.
* Accept data in @handle_frame and provide encoded results to @gst_video_encoder_finish_frame.
The #GstVideoEncoder:qos property will enable the Quality-of-Service features of the encoder which gather statistics about the real-time performance of the downstream elements. If enabled, subclasses can use [gstvideo.video_encoder.VideoEncoder.getMaxEncodeTime] to check if input frames are already late and drop them right away to give a chance to the pipeline to catch up.
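A sketch of such a check at the start of a @handle_frame implementation (encoder and frame are the vmethod arguments):

// Drop frames that are already past their deadline when qos is enabled.
GstClockTimeDiff deadline =
    gst_video_encoder_get_max_encode_time (encoder, frame);
if (deadline < 0) {
  GST_INFO_OBJECT (encoder, "frame too late, dropping");
  // finishing a frame without setting an output buffer drops it
  return gst_video_encoder_finish_frame (encoder, frame);
}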
GstElement element
GstPad * sinkpad
GstPad * srcpad
GRecMutex streamLock
GstSegment inputSegment
GstSegment outputSegment
GstVideoEncoderPrivate * priv
void *[20] padding

Subclasses can override any of the available virtual methods or not, as needed. At minimum @handle_frame needs to be overridden, and @set_format and @get_caps are likely needed as well.
GstElementClass elementClass
gboolean function(GstVideoEncoder * encoder) open: Optional. Called when the element changes to GST_STATE_READY. Allows opening external resources.
gboolean function(GstVideoEncoder * encoder) close: Optional. Called when the element changes to GST_STATE_NULL. Allows closing external resources.
gboolean function(GstVideoEncoder * encoder) start: Optional. Called when the element starts processing. Allows opening external resources.
gboolean function(GstVideoEncoder * encoder) stop: Optional. Called when the element stops processing. Allows closing external resources.
gboolean function(GstVideoEncoder * encoder, GstVideoCodecState * state) setFormat: Optional. Notifies subclass of incoming data format. GstVideoCodecState fields have already been set according to provided caps.
GstFlowReturn function(GstVideoEncoder * encoder, GstVideoCodecFrame * frame) handleFrame: Provides input frame to subclass.
gboolean function(GstVideoEncoder * encoder, gboolean hard) reset: Optional. Allows subclass (encoder) to perform post-seek semantics reset. Deprecated.
GstFlowReturn function(GstVideoEncoder * encoder) finish: Optional. Called to request subclass to dispatch any pending remaining data (e.g. at EOS).
GstFlowReturn function(GstVideoEncoder * encoder, GstVideoCodecFrame * frame) prePush: Optional. Allows subclass to push frame downstream in whatever shape or form it deems appropriate. If not provided, provided encoded frame data is simply pushed downstream.
GstCaps * function(GstVideoEncoder * enc, GstCaps * filter) getcaps: Optional. Allows for a custom sink getcaps implementation (e.g. for multichannel input specification). If not implemented, the default returns gst_video_encoder_proxy_getcaps applied to sink template caps.
gboolean function(GstVideoEncoder * encoder, GstEvent * event) sinkEvent: Optional. Event handler on the sink pad. This function should return TRUE if the event was handled and should be discarded (i.e. not unref'ed). Subclasses should chain up to the parent implementati...
gboolean function(GstVideoEncoder * encoder, GstEvent * event) srcEvent: Optional. Event handler on the source pad. This function should return TRUE if the event was handled and should be discarded (i.e. not unref'ed). Subclasses should chain up to the parent implementa...
gboolean function(GstVideoEncoder * encoder) negotiate: Optional. Negotiate with downstream and configure buffer pools, etc. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstVideoEncoder * encoder, GstQuery * query) decideAllocation: Optional. Setup the allocation parameters for allocating output buffers. The passed in query contains the result of the downstream allocation query. Subclasses should chain up to the parent impleme...
gboolean function(GstVideoEncoder * encoder, GstQuery * query) proposeAllocation: Optional. Propose buffer allocation parameters for upstream elements. Subclasses should chain up to the parent implementation to invoke the default handler.
gboolean function(GstVideoEncoder * encoder) flush: Optional. Flush all remaining data from the encoder without pushing it downstream. Since: 1.2
gboolean function(GstVideoEncoder * encoder, GstQuery * query) sinkQuery: Optional. Query handler on the sink pad. This function should return TRUE if the query could be performed. Subclasses should chain up to the parent implementation to invoke the default handler. Sin...
gboolean function(GstVideoEncoder * encoder, GstQuery * query) srcQuery: Optional. Query handler on the source pad. This function should return TRUE if the query could be performed. Subclasses should chain up to the parent implementation to invoke the default handler. S...
gboolean function(GstVideoEncoder * encoder, GstVideoCodecFrame * frame, GstMeta * meta) transformMeta: Optional. Transform the metadata on the input buffer to the output buffer. By default this method copies all meta without tags and meta with only the "video" tag. Subclasses can implement this m...
void *[16] GstReserved

Provides useful functions and a base class for video filters.
The videofilter will by default enable QoS on the parent GstBaseTransform to implement frame dropping.
GstBaseTransform element
gboolean negotiated
GstVideoInfo inInfo
GstVideoInfo outInfo
void *[4] GstReserved

The video filter class structure.
GstBaseTransformClass parentClass: the parent class structure
gboolean function(GstVideoFilter * filter, GstCaps * incaps, GstVideoInfo * inInfo, GstCaps * outcaps, GstVideoInfo * outInfo) setInfo: function to be called with the negotiated caps and video infos
GstFlowReturn function(GstVideoFilter * filter, GstVideoFrame * inframe, GstVideoFrame * outframe) transformFrame: transform a video frame
GstFlowReturn function(GstVideoFilter * trans, GstVideoFrame * frame) transformFrameIp: transform a video frame in place
void *[4] GstReserved

Information for a video format.
GstVideoFormat format: #GstVideoFormat
const(char) * name: string representation of the format
const(char) * description: human readable description of the format
GstVideoFormatFlags flags: #GstVideoFormatFlags
uint bits: The number of bits used to pack data items. This can be less than 8 when multiple pixels are stored in a byte. For values > 8, multiple bytes should be read according to the endianness flag before a...
uint nComponents: the number of components in the video format.
uint[4] shift: the number of bits to shift away to get the component data
uint[4] depth: the depth in bits for each component
int[4] pixelStride: the pixel stride of each component. This is the amount of bytes to the pixel immediately to the right. When bits < 8, the stride is expressed in bits. For 24-bit RGB, this would be 3 bytes, for exa...
uint nPlanes: the number of planes for this format. The number of planes can be less than the amount of components when multiple components are packed into one plane.
uint[4] plane: the plane number where a component can be found
uint[4] poffset: the offset in the plane where the first pixel of the components can be found.
uint[4] wSub: subsampling factor of the width for the component. Use GST_VIDEO_SUB_SCALE to scale a width.
uint[4] hSub: subsampling factor of the height for the component. Use GST_VIDEO_SUB_SCALE to scale a height.
GstVideoFormat unpackFormat: the format of the unpacked pixels. This format must have the #GST_VIDEO_FORMAT_FLAG_UNPACK flag set.
GstVideoFormatUnpack unpackFunc: an unpack function for this format
int packLines: the amount of lines that will be packed
GstVideoFormatPack packFunc: a pack function for this format
GstVideoTileMode tileMode: The tiling mode
uint tileWs: The width of a tile, in bytes, represented as a shift. DEPRECATED, use tile_info[] array instead.
uint tileHs: The height of a tile, in bytes, represented as a shift. DEPRECATED, use tile_info[] array instead.
GstVideoTileInfo[4] tileInfo: Information about the tiles for each of the planes.

A video frame obtained from [gstvideo.video_frame.VideoFrame.map]
GstVideoInfo info: the #GstVideoInfo
GstVideoFrameFlags flags: #GstVideoFrameFlags for the frame
GstBuffer * buffer: the mapped buffer
void * meta: pointer to metadata if any
int id: id of the mapped frame. The id can for example be used to identify the frame in case of multiview video.
void *[4] data: pointers to the plane data
GstMapInfo[4] map: mappings of the planes
void *[4] GstReserved

Extra buffer metadata for uploading a buffer to an OpenGL texture ID. The caller of [gstvideo.video_gltexture_upload_meta.VideoGLTextureUploadMeta.upload] must have OpenGL set up and call this from a thread where it is valid to upload something to an OpenGL texture.
GstMeta meta: parent #GstMeta
GstVideoGLTextureOrientation textureOrientation: Orientation of the textures
uint nTextures: Number of textures that are generated
GstVideoGLTextureType[4] textureType: Type of each texture
GstBuffer * buffer
GstVideoGLTextureUpload upload
void * userData
GBoxedCopyFunc userDataCopy
GBoxedFreeFunc userDataFree

Information describing image properties. This information can be filled in from GstCaps with [gstvideo.video_info.VideoInfo.fromCaps]. The information is also used to store the specific video info when mapping a video frame with [gstvideo.video_frame.VideoFrame.map].
Use the provided macros to access the info in this structure.
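For example, a sketch of filling the structure from caps and reading it back through the accessor macros ("caps" is assumed to hold fixed video/x-raw caps):

GstVideoInfo info;
if (gst_video_info_from_caps (&info, caps)) {
  g_print ("%dx%d, %u planes, frame size %" G_GSIZE_FORMAT "\n",
      GST_VIDEO_INFO_WIDTH (&info), GST_VIDEO_INFO_HEIGHT (&info),
      GST_VIDEO_INFO_N_PLANES (&info), GST_VIDEO_INFO_SIZE (&info));
}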
const(GstVideoFormatInfo) * finfo: the format info of the video
GstVideoInterlaceMode interlaceMode: the interlace mode
GstVideoFlags flags: additional video flags
int width: the width of the video
int height: the height of the video
size_t size: the default size of one frame
int views: the number of views for multiview video
GstVideoChromaSite chromaSite: a #GstVideoChromaSite
GstVideoColorimetry colorimetry: the colorimetry info
int parN: the pixel-aspect-ratio numerator
int parD: the pixel-aspect-ratio denominator
int fpsN: the framerate numerator
int fpsD: the framerate denominator
size_t[4] offset: offsets of the planes
int[4] stride: strides of the planes
ABIType ABI

Information describing DMABuf image properties. It wraps #GstVideoInfo and adds DRM information such as drm-fourcc and drm-modifier, required for negotiation and mapping.
GstVideoInfo vinfo: the associated #GstVideoInfo
uint drmFourcc: the fourcc defined by drm
ulong drmModifier: the drm modifier
uint[20] GstReserved

Mastering display color volume information defined by SMPTE ST 2086 (a.k.a. static HDR metadata).
GstVideoMasteringDisplayInfoCoordinates[3] displayPrimaries: the xy coordinates of primaries in the CIE 1931 color space. Index 0 contains red, 1 is for green and 2 is for blue. Each value is normalized to 50000 (meaning that it is in units of 0.00002)
GstVideoMasteringDisplayInfoCoordinates whitePoint: the xy coordinates of the white point in the CIE 1931 color space. Each value is normalized to 50000 (meaning that it is in units of 0.00002)
uint maxDisplayMasteringLuminance: the maximum value of display luminance in units of 0.0001 candelas per square metre (cd/m^2 and nit)
uint minDisplayMasteringLuminance: the minimum value of display luminance in units of 0.0001 candelas per square metre (cd/m^2 and nit)
void *[4] GstReserved

Used to represent display_primaries and white_point of the #GstVideoMasteringDisplayInfo struct. See #GstVideoMasteringDisplayInfo.
ushort x: the x coordinate of CIE 1931 color space in units of 0.00002.
ushort y: the y coordinate of CIE 1931 color space in units of 0.00002.

Extra buffer metadata describing image properties
This meta can also be used by downstream elements to specify their buffer layout requirements for upstream. Upstream should try to fit those requirements, if possible, in order to prevent buffer copies.
This is done by passing a custom #GstStructure to [gst.query.Query.addAllocationMeta] when handling the ALLOCATION query. This structure should be named 'video-meta' and can have the following fields:
The padding fields have the same semantic as #GstVideoMeta.alignment and so represent the paddings requested on produced video buffers.
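A sketch of a downstream element advertising such requirements while answering the ALLOCATION query; the padding field names follow the description above and should be treated as assumptions:

// Request 16 pixels of top/bottom padding via the 'video-meta' params.
GstStructure *params = gst_structure_new ("video-meta",
    "padding-top", G_TYPE_UINT, 16,
    "padding-bottom", G_TYPE_UINT, 16,
    NULL);
gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, params);
gst_structure_free (params);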
Since 1.24 it can be serialized using [gst.meta.Meta.serialize] and [gst.meta.Meta.deserialize].
GstMeta meta: parent #GstMeta
GstBuffer * buffer: the buffer this metadata belongs to
GstVideoFrameFlags flags: additional video flags
GstVideoFormat format: the video format
int id: identifier of the frame
uint width: the video width
uint height: the video height
uint nPlanes: the number of planes in the image
size_t[4] offset: array of offsets for the planes. This field might not always be valid, it is used by the default implementation of @map.
int[4] stride: array of strides for the planes. This field might not always be valid, it is used by the default implementation of @map.
gboolean function(GstVideoMeta * meta, uint plane, GstMapInfo * info, void * * data, int * stride, GstMapFlags flags) map: map the memory of a plane
gboolean function(GstVideoMeta * meta, uint plane, GstMapInfo * info) unmap: unmap the memory of a plane
GstVideoAlignment alignment: the paddings and alignment constraints of the video buffer. It is up to the caller of `[gstvideo.global.bufferAddVideoMetaFull]` to set it using [gstvideo.video_meta.VideoMeta.setAlignment], if the...

Extra data passed to a video transform #GstMetaTransformFunction such as: "gst-video-scale".
See #GstVideoMultiviewFlags.
The interface allows unified access to control flipping and autocenter operation of video sources or operators.
#GstVideoOrientationInterface interface.
GTypeInterface iface: parent interface type.
gboolean function(GstVideoOrientation * videoOrientation, gboolean * flip) getHflip: virtual method to get horizontal flipping state
gboolean function(GstVideoOrientation * videoOrientation, gboolean * flip) getVflip: virtual method to get vertical flipping state
gboolean function(GstVideoOrientation * videoOrientation, int * center) getHcenter: virtual method to get horizontal centering state
gboolean function(GstVideoOrientation * videoOrientation, int * center) getVcenter: virtual method to get vertical centering state
gboolean function(GstVideoOrientation * videoOrientation, gboolean flip) setHflip: virtual method to set horizontal flipping state
gboolean function(GstVideoOrientation * videoOrientation, gboolean flip) setVflip: virtual method to set vertical flipping state
gboolean function(GstVideoOrientation * videoOrientation, int center) setHcenter: virtual method to set horizontal centering state
gboolean function(GstVideoOrientation * videoOrientation, int center) setVcenter: virtual method to set vertical centering state

The #GstVideoOverlay interface is used for two main purposes:
* To get a grab on the Window where the video sink element is going to render. This is achieved by either being informed about the Window identifier that the video sink element generated, or by forcing the video sink element to use a specific Window identifier for rendering.
* To force a redrawing of the latest video frame the video sink element displayed on the Window. Indeed if the #GstPipeline is in #GST_STATE_PAUSED state, moving the Window around will damage its content. Application developers will want to handle the Expose events themselves and force the video sink element to refresh the Window's content.
Using the Window created by the video sink is probably the simplest scenario, in some cases, though, it might not be flexible enough for application developers if they need to catch events such as mouse moves and button clicks.
Setting a specific Window identifier on the video sink element is the most flexible solution but it has some issues. Indeed the application needs to set its Window identifier at the right time to avoid internal Window creation from the video sink element. To solve this issue a #GstMessage is posted on the bus to inform the application that it should set the Window identifier immediately. Here is an example on how to do that correctly:
static GstBusSyncReply
create_window (GstBus * bus, GstMessage * message, GstPipeline * pipeline)
{
// ignore anything but 'prepare-window-handle' element messages
if (!gst_is_video_overlay_prepare_window_handle_message (message))
return GST_BUS_PASS;
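// disp, root and win are assumed to be X11 globals set up elsewhere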
win = XCreateSimpleWindow (disp, root, 0, 0, 320, 240, 0, 0, 0);
XSetWindowBackgroundPixmap (disp, win, None);
XMapRaised (disp, win);
XSync (disp, FALSE);
gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (GST_MESSAGE_SRC (message)),
win);
gst_message_unref (message);
return GST_BUS_DROP;
}
...
int
main (int argc, char **argv)
{
...
bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
gst_bus_set_sync_handler (bus, (GstBusSyncHandler) create_window, pipeline,
NULL);
...
}

There are two basic usage scenarios: in the simplest case, the application uses #playbin or #playsink or knows exactly what particular element is used for video output, which is usually the case when the application creates the videosink to use (e.g. #xvimagesink, #ximagesink, etc.) itself; in this case, the application can just create the videosink element, create and realize the window to render the video on and then call [gstvideo.video_overlay.VideoOverlay.setWindowHandle] directly with the XID or native window handle, before starting up the pipeline. As #playbin and #playsink implement the video overlay interface and proxy it transparently to the actual video sink even if it is created later, this case also applies when using these elements.
In the other and more common case, the application does not know in advance what GStreamer video sink element will be used for video output. This is usually the case when an element such as #autovideosink is used. In this case, the video sink element itself is created asynchronously from a GStreamer streaming thread some time after the pipeline has been started up. When that happens, however, the video sink will need to know right then whether to render onto an already existing application window or whether to create its own window. This is when it posts a prepare-window-handle message, and that is also why this message needs to be handled in a sync bus handler which will be called from the streaming thread directly (because the video sink will need an answer right then).
As response to the prepare-window-handle element message in the bus sync handler, the application may use [gstvideo.video_overlay.VideoOverlay.setWindowHandle] to tell the video sink to render onto an existing window surface. At this point the application should already have obtained the window handle / XID, so it just needs to set it. It is generally not advisable to call any GUI toolkit functions or window system functions from the streaming thread in which the prepare-window-handle message is handled, because most GUI toolkits and windowing systems are not thread-safe at all and a lot of care would be required to co-ordinate the toolkit and window system calls of the different threads (Gtk+ users please note: prior to Gtk+ 2.18 GDK_WINDOW_XID was just a simple structure access, so generally fine to do within the bus sync handler; this macro was changed to a function call in Gtk+ 2.18 and later, which is likely to cause problems when called from a sync handler; see below for a better approach without GDK_WINDOW_XID used in the callback).
#include <gst/video/videooverlay.h>
#include <gtk/gtk.h>
#ifdef GDK_WINDOWING_X11
#include <gdk/gdkx.h> // for GDK_WINDOW_XID
#endif
#ifdef GDK_WINDOWING_WIN32
#include <gdk/gdkwin32.h> // for GDK_WINDOW_HWND
#endif
...
static guintptr video_window_handle = 0;
...
static GstBusSyncReply
bus_sync_handler (GstBus * bus, GstMessage * message, gpointer user_data)
{
// ignore anything but 'prepare-window-handle' element messages
if (!gst_is_video_overlay_prepare_window_handle_message (message))
return GST_BUS_PASS;
if (video_window_handle != 0) {
GstVideoOverlay *overlay;
// GST_MESSAGE_SRC (message) will be the video sink element
overlay = GST_VIDEO_OVERLAY (GST_MESSAGE_SRC (message));
gst_video_overlay_set_window_handle (overlay, video_window_handle);
} else {
g_warning ("Should have obtained video_window_handle by now!");
}
gst_message_unref (message);
return GST_BUS_DROP;
}
...
static void
video_widget_realize_cb (GtkWidget * widget, gpointer data)
{
#if GTK_CHECK_VERSION(2,18,0)
// Tell Gtk+/Gdk to create a native window for this widget instead of
// drawing onto the parent widget.
// This is here just for pedagogical purposes, GDK_WINDOW_XID will call
// it as well in newer Gtk versions
if (!gdk_window_ensure_native (widget->window))
g_error ("Couldn't create native window needed for GstVideoOverlay!");
#endif
#ifdef GDK_WINDOWING_X11
{
gulong xid = GDK_WINDOW_XID (gtk_widget_get_window (widget));
video_window_handle = xid;
}
#endif
#ifdef GDK_WINDOWING_WIN32
{
HWND wnd = GDK_WINDOW_HWND (gtk_widget_get_window (widget));
video_window_handle = (guintptr) wnd;
}
#endif
}
...
int
main (int argc, char **argv)
{
GtkWidget *video_window;
GtkWidget *app_window;
...
app_window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
...
video_window = gtk_drawing_area_new ();
g_signal_connect (video_window, "realize",
G_CALLBACK (video_widget_realize_cb), NULL);
gtk_widget_set_double_buffered (video_window, FALSE);
...
// usually the video_window will not be directly embedded into the
// application window like this, but there will be many other widgets
// and the video window will be embedded in one of them instead
gtk_container_add (GTK_CONTAINER (app_window), video_window);
...
// show the GUI
gtk_widget_show_all (app_window);
// realize window now so that the video window gets created and we can
// obtain its XID/HWND before the pipeline is started up and the videosink
// asks for the XID/HWND of the window to render onto
gtk_widget_realize (video_window);
// we should have the XID/HWND now
g_assert (video_window_handle != 0);
...
// set up sync handler for setting the xid once the pipeline is started
bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
gst_bus_set_sync_handler (bus, (GstBusSyncHandler) bus_sync_handler, NULL,
NULL);
gst_object_unref (bus);
...
gst_element_set_state (pipeline, GST_STATE_PLAYING);
...
}

#include <glib.h>
#include <gst/gst.h>
#include <gst/video/videooverlay.h>
#include <QApplication>
#include <QTimer>
#include <QWidget>
int main(int argc, char *argv[])
{
if (!g_thread_supported ())
g_thread_init (NULL);
gst_init (&argc, &argv);
QApplication app(argc, argv);
app.connect(&app, SIGNAL(lastWindowClosed()), &app, SLOT(quit ()));
// prepare the pipeline
GstElement *pipeline = gst_pipeline_new ("xvoverlay");
GstElement *src = gst_element_factory_make ("videotestsrc", NULL);
GstElement *sink = gst_element_factory_make ("xvimagesink", NULL);
gst_bin_add_many (GST_BIN (pipeline), src, sink, NULL);
gst_element_link (src, sink);
// prepare the ui
QWidget window;
window.resize(320, 240);
window.show();
WId xwinid = window.winId();
gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (sink), xwinid);
// run the pipeline
GstStateChangeReturn sret = gst_element_set_state (pipeline,
GST_STATE_PLAYING);
if (sret == GST_STATE_CHANGE_FAILURE) {
gst_element_set_state (pipeline, GST_STATE_NULL);
gst_object_unref (pipeline);
// Exit application
QTimer::singleShot(0, QApplication::activeWindow(), SLOT(quit()));
}
int ret = app.exec();
window.hide();
gst_element_set_state (pipeline, GST_STATE_NULL);
gst_object_unref (pipeline);
return ret;
}

Functions to create and handle overlay compositions on video buffers.
An overlay composition describes one or more overlay rectangles to be blended on top of a video buffer.
This API serves two main purposes:
* It can be used to attach overlay information (subtitles or logos) to non-raw video buffers such as GL/VAAPI/VDPAU surfaces. The actual blending of the overlay can then be done by e.g. the video sink that processes these non-raw buffers.
* It can also be used to blend overlay rectangles on top of raw video buffers, thus consolidating blending functionality for raw video in one place.
Together, this allows existing overlay elements to easily handle raw and non-raw video as input without major changes (once the overlays have been put into a #GstVideoOverlayComposition object anyway) - for raw video the overlay can just use the blending function to blend the data on top of the video, and for surface buffers it can just attach them to the buffer and let the sink render the overlays.
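A sketch of building and attaching a composition; "pixels" and "video_buffer" are assumptions, and the pixel buffer must carry a #GstVideoMeta describing its format:

GstVideoOverlayRectangle *rect;
GstVideoOverlayComposition *comp;

// Render an assumed 160x32 ARGB logo buffer at position (10, 10).
rect = gst_video_overlay_rectangle_new_raw (pixels, 10, 10, 160, 32,
    GST_VIDEO_OVERLAY_FORMAT_FLAG_NONE);
comp = gst_video_overlay_composition_new (rect);
gst_video_overlay_rectangle_unref (rect);

// Attach to a video buffer; a raw-video element could instead call
// gst_video_overlay_composition_blend () on a mapped frame.
gst_buffer_add_video_overlay_composition_meta (video_buffer, comp);
gst_video_overlay_composition_unref (comp);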
Extra buffer metadata describing image overlay data.
GstMeta meta: parent #GstMeta
GstVideoOverlayComposition * overlay: the attached #GstVideoOverlayComposition

#GstVideoOverlay interface
GTypeInterface iface: parent interface type.
void function(GstVideoOverlay * overlay) expose: virtual method to handle expose events
void function(GstVideoOverlay * overlay, gboolean handleEvents) handleEvents: virtual method to handle events
void function(GstVideoOverlay * overlay, int x, int y, int width, int height) setRenderRectangle: virtual method to set the render rectangle
void function(GstVideoOverlay * overlay, size_t handle) setWindowHandle: virtual method to configure the window handle

An opaque video overlay rectangle object. A rectangle contains a single overlay rectangle which can be added to a composition.
Helper structure representing a rectangular area.
int x: X coordinate of rectangle's top-left point
int y: Y coordinate of rectangle's top-left point
int w: width of the rectangle
int h: height of the rectangle

Extra buffer metadata describing an image region of interest
GstMeta meta: parent #GstMeta
GQuark roiType: GQuark describing the semantic of the ROI (e.g. a face, a pedestrian)
int id: identifier of this particular ROI
int parentId: identifier of its parent ROI, used e.g. for ROI hierarchisation.
uint x: x component of upper-left corner
uint y: y component of upper-left corner
uint w: bounding box width
uint h: bounding box height
GList * params: list of #GstStructure containing element-specific params for downstream, see [gstvideo.videoregionofinterestmeta.VideoRegionOfInterestMeta.addParam]. (Since: 1.14)

#GstVideoResampler is a structure which holds the information required to perform various kinds of resampling filtering.
int inSize: the input size
int outSize: the output size
uint maxTaps: the maximum number of taps
uint nPhases: the number of phases
uint * offset: array with the source offset for each output element
uint * phase: array with the phase to use for each output element
uint * nTaps: array with the new number of taps for each phase
double * taps: the taps for all phases
void *[4] GstReserved

H.264 and H.265 metadata from SEI User Data Unregistered messages
GstMeta meta: parent #GstMeta
ubyte[16] uuid: User Data Unregistered UUID
ubyte * data: Unparsed data buffer
size_t size: Size of the data buffer

#GstVideoScaler is a utility object for rescaling and resampling video frames using various interpolation / sampling methods.
Provides useful functions and a base class for video sinks.
GstVideoSink will configure the default base sink to drop frames that arrive later than 20ms as this is considered the default threshold for observing out-of-sync frames.
GstBaseSink element
int width: video width (derived class needs to set this)
int height: video height (derived class needs to set this)
GstVideoSinkPrivate * priv
void *[4] GstReserved

The video sink class structure. Derived classes should override the @show_frame virtual function.
GstBaseSinkClass parentClass: the parent class structure
GstFlowReturn function(GstVideoSink * videoSink, GstBuffer * buf) showFrame: render a video frame. Maps to #GstBaseSinkClass.render() and #GstBaseSinkClass.preroll() vfuncs. Rendering during preroll will be suppressed if the #GstVideoSink:show-preroll-frame property is set ...
gboolean function(GstVideoSink * videoSink, GstCaps * caps, const(GstVideoInfo) * info) setInfo
void *[3] GstReserved

Description of a tile. This structure allows describing arbitrary tile dimensions and sizes.
uint width: The width in pixels of a tile. This value can be zero if the number of pixels per line is not an integer value.
uint height
uint stride: The stride (in bytes) of a tile line. Regardless of whether the tile has sub-tiles, this stride multiplied by the height should be equal to #GstVideoTileInfo.size. This value is used to translate into line...
uint size: The size in bytes of a tile. This value must be divisible by #GstVideoTileInfo.stride.
uint[4] padding

@field_count must be 0 for progressive video and 1 or 2 for interlaced.
A representation of a SMPTE time code.
@hours must be positive and less than 24. Will wrap around otherwise. @minutes and @seconds must be positive and less than 60. @frames must be less than or equal to @config.fps_n / @config.fps_d. These values are NOT automatically normalized.
GstVideoTimeCodeConfig config: the corresponding #GstVideoTimeCodeConfig
uint hours: the hours field of #GstVideoTimeCode
uint minutes: the minutes field of #GstVideoTimeCode
uint seconds: the seconds field of #GstVideoTimeCode
uint frames: the frames field of #GstVideoTimeCode
uint fieldCount: Interlaced video field count

Supported frame rates: 30000/1001, 60000/1001 (both with and without drop frame), and integer frame rates e.g. 25/1, 30/1, 50/1, 60/1.
The configuration of the time code.
uint fpsN: Numerator of the frame rate
uint fpsD: Denominator of the frame rate
GstVideoTimeCodeFlags flags: the corresponding #GstVideoTimeCodeFlags
GDateTime * latestDailyJam: The latest daily jam information, if present, or NULL

A representation of a difference between two #GstVideoTimeCode instances. Will not necessarily correspond to a real timecode (e.g. 00:00:10;00).
uint hours: the hours field of #GstVideoTimeCodeInterval
uint minutes: the minutes field of #GstVideoTimeCodeInterval
uint seconds: the seconds field of #GstVideoTimeCodeInterval
uint frames: the frames field of #GstVideoTimeCodeInterval

Extra buffer metadata describing the GstVideoTimeCode of the frame.
Each frame is assumed to have its own timecode, i.e. they are not automatically incremented/interpolated.
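A sketch of stamping a frame with a drop-frame timecode ("buffer" is assumed writable; note how incrementing across a minute boundary skips frame numbers 0 and 1):

GstVideoTimeCode *tc = gst_video_time_code_new (30000, 1001, NULL,
    GST_VIDEO_TIME_CODE_FLAGS_DROP_FRAME, 0, 0, 59, 29, 0);
gst_video_time_code_increment_frame (tc);  /* now 00:01:00;02 */
gst_buffer_add_video_time_code_meta (buffer, tc);
gst_video_time_code_free (tc);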
An encoder for writing ancillary data to the Vertical Blanking Interval lines of component signals.
A parser for detecting and extracting @GstVideoAncillary data from Vertical Blanking Interval lines of component signals.
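A short sketch of the parser side ("line_data" and "width" are assumptions, and the line data must be in the parser's pixel format):

GstVideoVBIParser *parser;
GstVideoAncillary anc;

parser = gst_video_vbi_parser_new (GST_VIDEO_FORMAT_v210, width);
gst_video_vbi_parser_add_line (parser, line_data);
while (gst_video_vbi_parser_get_ancillary (parser, &anc) ==
    GST_VIDEO_VBI_PARSER_RESULT_OK) {
  g_print ("DID 0x%02x SDID 0x%02x (%u bytes)\n",
      anc.DID, anc.SDID_block_number, anc.data_count);
}
gst_video_vbi_parser_free (parser);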