FFmpeg

Modules
    Frame parsing

Macros
    #define AV_INPUT_BUFFER_PADDING_SIZE 64

Enumerations
    enum AVDiscard { AVDISCARD_NONE = -16, AVDISCARD_DEFAULT = 0, AVDISCARD_NONREF = 8, AVDISCARD_BIDIR = 16, AVDISCARD_NONINTRA = 24, AVDISCARD_NONKEY = 32, AVDISCARD_ALL = 48 }

Functions
    int avcodec_default_get_buffer2(AVCodecContext *s, AVFrame *frame, int flags)
        The default callback for AVCodecContext.get_buffer2().
    int avcodec_default_get_encode_buffer(AVCodecContext *s, AVPacket *pkt, int flags)
        The default callback for AVCodecContext.get_encode_buffer().
    void avcodec_align_dimensions(AVCodecContext *s, int *width, int *height)
        Modify width and height values so that they will result in a memory buffer that is acceptable for the codec if you do not use any horizontal padding.
    void avcodec_align_dimensions2(AVCodecContext *s, int *width, int *height, int linesize_align[AV_NUM_DATA_POINTERS])
        Modify width and height values so that they will result in a memory buffer that is acceptable for the codec if you also ensure that all line sizes are a multiple of the respective linesize_align[i].
    int avcodec_enum_to_chroma_pos(int *xpos, int *ypos, enum AVChromaLocation pos)
        Converts AVChromaLocation to swscale x/y chroma position.
    enum AVChromaLocation avcodec_chroma_pos_to_enum(int xpos, int ypos)
        Converts swscale x/y chroma position to AVChromaLocation.
    attribute_deprecated int avcodec_decode_audio4(AVCodecContext *avctx, AVFrame *frame, int *got_frame_ptr, const AVPacket *avpkt)
        Decode the audio frame of size avpkt->size from avpkt->data into frame.
    attribute_deprecated int avcodec_decode_video2(AVCodecContext *avctx, AVFrame *picture, int *got_picture_ptr, const AVPacket *avpkt)
        Decode the video frame of size avpkt->size from avpkt->data into picture.
    int avcodec_decode_subtitle2(AVCodecContext *avctx, AVSubtitle *sub, int *got_sub_ptr, AVPacket *avpkt)
        Decode a subtitle message.
    int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt)
        Supply raw packet data as input to a decoder.
    int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame)
        Return decoded output data from a decoder.
    int avcodec_send_frame(AVCodecContext *avctx, const AVFrame *frame)
        Supply a raw video or audio frame to the encoder.
    int avcodec_receive_packet(AVCodecContext *avctx, AVPacket *avpkt)
        Read encoded data from the encoder.
    int avcodec_get_hw_frames_parameters(AVCodecContext *avctx, AVBufferRef *device_ref, enum AVPixelFormat hw_pix_fmt, AVBufferRef **out_frames_ref)
        Create and return an AVHWFramesContext with values adequate for hardware decoding.
#define AV_INPUT_BUFFER_PADDING_SIZE 64
Required number of additionally allocated bytes at the end of the input bitstream for decoding. This is mainly needed because some optimized bitstream readers read 32 or 64 bit at once and could read over the end.
Note: If the first 23 bits of the additional bytes are not 0, then damaged MPEG bitstreams could cause overread and segfault.
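The padding requirement is easiest to satisfy when the caller allocates the input buffer itself. Below is a minimal sketch (the helper name alloc_padded_input is hypothetical, not part of the API) that over-allocates and zeroes the trailing bytes before the data is handed to a parser or decoder:

#include <stdint.h>
#include <string.h>
#include <libavutil/mem.h>
#include <libavcodec/avcodec.h>

static uint8_t *alloc_padded_input(const uint8_t *src, size_t size)
{
    /* Over-allocate by AV_INPUT_BUFFER_PADDING_SIZE bytes ... */
    uint8_t *buf = av_malloc(size + AV_INPUT_BUFFER_PADDING_SIZE);
    if (!buf)
        return NULL;
    memcpy(buf, src, size);
    /* ... and zero the padding so optimized bitstream readers cannot pick up
     * stale data when they read past the end of the stream. */
    memset(buf + size, 0, AV_INPUT_BUFFER_PADDING_SIZE);
    return buf;
}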
enum AVDiscard
int avcodec_default_get_buffer2(AVCodecContext *s, AVFrame *frame, int flags)
The default callback for AVCodecContext.get_buffer2().
It is made public so it can be called by custom get_buffer2() implementations for decoders without AV_CODEC_CAP_DR1 set.
Definition at line 1695 of file decode.c.
Referenced by alloc_frame_buffer(), ff_decode_preinit(), get_buffer(), init_context_defaults(), and submit_packet().
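As a sketch of how the default is typically reused (the callback name my_get_buffer2 and its bookkeeping are assumptions, not part of the API), a custom get_buffer2() can do its own per-frame work and then delegate the actual allocation:

#include <libavcodec/avcodec.h>

static int my_get_buffer2(AVCodecContext *s, AVFrame *frame, int flags)
{
    /* Per-frame bookkeeping could go here, e.g. stashing timing data in
     * frame->opaque before the buffers are allocated. */

    /* Delegate the allocation itself to libavcodec's default implementation;
     * for decoders without AV_CODEC_CAP_DR1 this default must be used. */
    return avcodec_default_get_buffer2(s, frame, flags);
}

/* Installed on the codec context before avcodec_open2():
 *     avctx->get_buffer2 = my_get_buffer2;
 */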
int avcodec_default_get_encode_buffer(AVCodecContext *s, AVPacket *pkt, int flags)
The default callback for AVCodecContext.get_encode_buffer().
It is made public so it can be called by custom get_encode_buffer() implementations for encoders without AV_CODEC_CAP_DR1 set.
Definition at line 59 of file encode.c.
Referenced by init_context_defaults().
void avcodec_align_dimensions(AVCodecContext *s, int *width, int *height)
Modify width and height values so that they will result in a memory buffer that is acceptable for the codec if you do not use any horizontal padding.
void avcodec_align_dimensions2(AVCodecContext *s, int *width, int *height, int linesize_align[AV_NUM_DATA_POINTERS])
Modify width and height values so that they will result in a memory buffer that is acceptable for the codec if you also ensure that all line sizes are a multiple of the respective linesize_align[i].
May only be used if a codec with AV_CODEC_CAP_DR1 has been opened.
Definition at line 134 of file utils.c.
Referenced by avcodec_align_dimensions(), and update_frame_pool().
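A minimal sketch of typical usage (the helper name alloc_aligned_video_buffer is hypothetical), assuming a decoder with AV_CODEC_CAP_DR1 has already been opened in avctx: pad the dimensions, take the largest per-plane alignment the codec asked for, and let av_frame_get_buffer() allocate the planes.

#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

static int alloc_aligned_video_buffer(AVCodecContext *avctx, AVFrame *frame)
{
    int w = avctx->width;
    int h = avctx->height;
    int linesize_align[AV_NUM_DATA_POINTERS];
    int align = 0;

    /* Pad width/height so the codec can safely write past the visible area
     * and query the per-plane linesize alignment it expects. */
    avcodec_align_dimensions2(avctx, &w, &h, linesize_align);
    for (int i = 0; i < AV_NUM_DATA_POINTERS; i++)
        if (linesize_align[i] > align)
            align = linesize_align[i];

    frame->format = avctx->pix_fmt;
    frame->width  = w;
    frame->height = h;
    return av_frame_get_buffer(frame, align);
}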
int avcodec_enum_to_chroma_pos(int *xpos, int *ypos, enum AVChromaLocation pos)
Converts AVChromaLocation to swscale x/y chroma position.
The positions represent the chroma (0,0) position in a coordinate system with luma(0,0) representing the origin and luma(1,1) representing (256,256).
Parameters
    xpos  horizontal chroma sample position
    ypos  vertical chroma sample position
Definition at line 350 of file utils.c.
Referenced by avcodec_chroma_pos_to_enum(), and mkv_write_video_color().
enum AVChromaLocation avcodec_chroma_pos_to_enum(int xpos, int ypos)
Converts swscale x/y chroma position to AVChromaLocation.
The positions represent the chroma (0,0) position in a coordinate system with luma(0,0) representing the origin and luma(1,1) representing (256,256).
Parameters
    xpos  horizontal chroma sample position
    ypos  vertical chroma sample position
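A short round-trip sketch, assuming the usual 0-on-success return convention of avcodec_enum_to_chroma_pos(); the concrete positions printed are whatever the current enum maps to and are not asserted here:

#include <stdio.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    int xpos, ypos;

    /* Enum -> swscale chroma position (1/256 luma units). */
    if (avcodec_enum_to_chroma_pos(&xpos, &ypos, AVCHROMA_LOC_LEFT) == 0)
        printf("AVCHROMA_LOC_LEFT -> x=%d y=%d\n", xpos, ypos);

    /* Converting back should recover the original enum value. */
    if (avcodec_chroma_pos_to_enum(xpos, ypos) == AVCHROMA_LOC_LEFT)
        printf("round trip ok\n");
    return 0;
}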
attribute_deprecated int avcodec_decode_audio4(AVCodecContext *avctx, AVFrame *frame, int *got_frame_ptr, const AVPacket *avpkt)
Decode the audio frame of size avpkt->size from avpkt->data into frame.
Some decoders may support multiple frames in a single AVPacket. Such decoders would then just decode the first frame and the return value would be less than the packet size. In this case, avcodec_decode_audio4 has to be called again with an AVPacket containing the remaining data in order to decode the second frame, etc... Even if no frames are returned, the packet needs to be fed to the decoder with remaining data until it is completely consumed or an error occurs.
Some decoders (those marked with AV_CODEC_CAP_DELAY) have a delay between input and output. This means that for some packets they will not immediately produce decoded output and need to be flushed at the end of decoding to get all the decoded data. Flushing is done by calling this function with packets with avpkt->data set to NULL and avpkt->size set to 0 until it stops returning samples. It is safe to flush even those decoders that are not marked with AV_CODEC_CAP_DELAY, then no samples will be returned.
Parameters
    avctx                the codec context
    [out] frame          The AVFrame in which to store decoded audio samples. The decoder will allocate a buffer for the decoded frame by calling the AVCodecContext.get_buffer2() callback. When AVCodecContext.refcounted_frames is set to 1, the frame is reference counted and the returned reference belongs to the caller. The caller must release the frame using av_frame_unref() when the frame is no longer needed. The caller may safely write to the frame if av_frame_is_writable() returns 1. When AVCodecContext.refcounted_frames is set to 0, the returned reference belongs to the decoder and is valid only until the next call to this function or until closing or flushing the decoder. The caller may not write to it.
    [out] got_frame_ptr  Zero if no frame could be decoded, otherwise it is non-zero. Note that this field being set to zero does not mean that an error has occurred. For decoders with AV_CODEC_CAP_DELAY set, no given decode call is guaranteed to produce a frame.
    [in] avpkt           The input AVPacket containing the input buffer. At least avpkt->data and avpkt->size should be set. Some decoders might also require additional fields to be set.
attribute_deprecated int avcodec_decode_video2(AVCodecContext *avctx, AVFrame *picture, int *got_picture_ptr, const AVPacket *avpkt)
Decode the video frame of size avpkt->size from avpkt->data into picture.
Some decoders may support multiple frames in a single AVPacket; such decoders would then just decode the first frame.
Parameters
    avctx                     the codec context
    [out] picture             The AVFrame in which the decoded video frame will be stored. Use av_frame_alloc() to get an AVFrame. The codec will allocate memory for the actual bitmap by calling the AVCodecContext.get_buffer2() callback. When AVCodecContext.refcounted_frames is set to 1, the frame is reference counted and the returned reference belongs to the caller. The caller must release the frame using av_frame_unref() when the frame is no longer needed. The caller may safely write to the frame if av_frame_is_writable() returns 1. When AVCodecContext.refcounted_frames is set to 0, the returned reference belongs to the decoder and is valid only until the next call to this function or until closing or flushing the decoder. The caller may not write to it.
    [in] avpkt                The input AVPacket containing the input buffer. You can create such a packet with av_init_packet() and by then setting data and size; some decoders might in addition need other fields like flags&AV_PKT_FLAG_KEY. All decoders are designed to use the least fields possible.
    [in,out] got_picture_ptr  Zero if no frame could be decompressed, otherwise it is nonzero.
int avcodec_decode_subtitle2(AVCodecContext *avctx, AVSubtitle *sub, int *got_sub_ptr, AVPacket *avpkt)
Decode a subtitle message.
Return a negative value on error, otherwise return the number of bytes used. If no subtitle could be decompressed, got_sub_ptr is zero. Otherwise, the subtitle is stored in *sub. Note that AV_CODEC_CAP_DR1 is not available for subtitle codecs. This is for simplicity, because the performance difference is expected to be negligible and reusing a get_buffer written for video codecs would probably perform badly due to a potentially very different allocation pattern.
Some decoders (those marked with AV_CODEC_CAP_DELAY) have a delay between input and output. This means that for some packets they will not immediately produce decoded output and need to be flushed at the end of decoding to get all the decoded data. Flushing is done by calling this function with packets with avpkt->data set to NULL and avpkt->size set to 0 until it stops returning subtitles. It is safe to flush even those decoders that are not marked with AV_CODEC_CAP_DELAY, then no subtitles will be returned.
Parameters
    avctx                 the codec context
    [out] sub             The preallocated AVSubtitle in which the decoded subtitle will be stored; must be freed with avsubtitle_free if *got_sub_ptr is set.
    [in,out] got_sub_ptr  Zero if no subtitle could be decompressed, otherwise it is nonzero.
    [in] avpkt            The input AVPacket containing the input buffer.
Definition at line 1034 of file decode.c.
Referenced by decoder_decode_frame(), process_frame(), subtitle_handler(), transcode_subtitles(), try_decode_frame(), and wrap().
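A minimal sketch of handling one demuxed subtitle packet (the helper name decode_subtitle_packet is hypothetical; error handling is reduced to the essentials):

#include <libavcodec/avcodec.h>

static int decode_subtitle_packet(AVCodecContext *avctx, AVPacket *pkt)
{
    AVSubtitle sub;
    int got_sub = 0;
    int ret = avcodec_decode_subtitle2(avctx, &sub, &got_sub, pkt);
    if (ret < 0)
        return ret;              /* decoding error */
    if (got_sub) {
        /* ... render or store the subtitle here ... */
        avsubtitle_free(&sub);   /* required once *got_sub_ptr was set */
    }
    return ret;                  /* number of bytes consumed */
}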
int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt)
Supply raw packet data as input to a decoder.
Internally, this call will copy relevant AVCodecContext fields, which can influence decoding per-packet, and apply them when the packet is actually decoded. (For example AVCodecContext.skip_frame, which might direct the decoder to drop the frame contained by the packet sent with this function.)
Parameters
    avctx       codec context
    [in] avpkt  The input AVPacket. Usually, this will be a single video frame, or several complete audio frames. Ownership of the packet remains with the caller, and the decoder will not write to the packet. The decoder may create a reference to the packet data (or copy it if the packet is not reference-counted). Unlike with older APIs, the packet is always fully consumed, and if it contains multiple frames (e.g. some audio codecs), will require you to call avcodec_receive_frame() multiple times afterwards before you can send a new packet. It can be NULL (or an AVPacket with data set to NULL and size set to 0); in this case, it is considered a flush packet, which signals the end of the stream. Sending the first flush packet will return success. Subsequent ones are unnecessary and will return AVERROR_EOF. If the decoder still has frames buffered, it will return them after sending a flush packet.
Definition at line 589 of file decode.c.
Referenced by compat_decode(), compute_crc_of_packets(), cri_decode_frame(), dec_enc(), decode(), decode_audio_frame(), decode_packet(), decode_write(), decoder_decode_frame(), dng_decode_jpeg(), ff_load_image(), imm5_decode_frame(), LLVMFuzzerTestOneInput(), main(), movie_decode_packet(), process_frame(), run_test(), tdsc_decode_jpeg_tile(), try_decode_frame(), video_decode(), video_decode_example(), and wrap().
int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame)
Return decoded output data from a decoder.
Parameters
    avctx  codec context
    frame  This will be set to a reference-counted video or audio frame (depending on the decoder type) allocated by the decoder. Note that the function will always call av_frame_unref(frame) before doing anything else.
Definition at line 652 of file decode.c.
Referenced by audio_video_handler(), compat_decode(), compute_crc_of_packets(), cri_decode_frame(), dec_enc(), decode(), decode_audio_frame(), decode_packet(), decode_read(), decode_write(), decoder_decode_frame(), dng_decode_jpeg(), ff_load_image(), imm5_decode_frame(), main(), movie_push_frame(), process_frame(), run_test(), tdsc_decode_jpeg_tile(), try_decode_frame(), video_decode(), video_decode_example(), and wrap().
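Taken together with avcodec_send_packet() above, a typical decoupled decoding loop looks roughly like the following sketch (the helper name decode_and_drain is hypothetical; a complete implementation would also handle AVERROR(EAGAIN) from the send call by receiving frames first):

#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

static int decode_and_drain(AVCodecContext *dec, const AVPacket *pkt, AVFrame *frame)
{
    /* pkt == NULL (or data == NULL, size == 0) enters flush mode. */
    int ret = avcodec_send_packet(dec, pkt);
    if (ret < 0)
        return ret;

    /* One packet may yield zero, one, or several frames; drain them all. */
    for (;;) {
        ret = avcodec_receive_frame(dec, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;            /* need more input / fully flushed */
        if (ret < 0)
            return ret;          /* a real decoding error */

        /* ... consume frame->data / frame->nb_samples here ... */
        av_frame_unref(frame);
    }
}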
int avcodec_send_frame(AVCodecContext *avctx, const AVFrame *frame)
Supply a raw video or audio frame to the encoder.
Use avcodec_receive_packet() to retrieve buffered output packets.
Parameters
    avctx       codec context
    [in] frame  AVFrame containing the raw audio or video frame to be encoded. Ownership of the frame remains with the caller, and the encoder will not write to the frame. The encoder may create a reference to the frame data (or copy it if the frame is not reference-counted). It can be NULL, in which case it is considered a flush packet. This signals the end of the stream. If the encoder still has packets buffered, it will return them after this call. Once flushing mode has been entered, additional flush packets are ignored, and sending frames will return AVERROR_EOF.
For audio: If AV_CODEC_CAP_VARIABLE_FRAME_SIZE is set, then each frame can have any number of samples. If it is not set, frame->nb_samples must be equal to avctx->frame_size for all frames except the last. The final frame may be smaller than avctx->frame_size.
Definition at line 364 of file encode.c.
Referenced by compat_encode(), do_audio_out(), do_video_out(), encode(), encode_audio_frame(), encode_frame(), encode_write(), encode_write_frame(), flush_encoders(), run_test(), wrap(), and write_frame().
int avcodec_receive_packet(AVCodecContext *avctx, AVPacket *avpkt)
Read encoded data from the encoder.
Parameters
    avctx  codec context
    avpkt  This will be set to a reference-counted packet allocated by the encoder. Note that the function will always call av_packet_unref(avpkt) before doing anything else.
Definition at line 395 of file encode.c.
Referenced by compat_encode(), do_audio_out(), do_video_out(), encode(), encode_audio_frame(), encode_frame(), encode_write(), encode_write_frame(), flush_encoders(), run_test(), wrap(), and write_frame().
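The encoding direction mirrors the decoding loop shown earlier; a minimal sketch (the helper name encode_and_drain is hypothetical), where passing frame == NULL enters flush mode and drains the encoder's remaining packets:

#include <libavcodec/avcodec.h>

static int encode_and_drain(AVCodecContext *enc, const AVFrame *frame, AVPacket *pkt)
{
    int ret = avcodec_send_frame(enc, frame);
    if (ret < 0)
        return ret;

    for (;;) {
        ret = avcodec_receive_packet(enc, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;            /* need more input / fully flushed */
        if (ret < 0)
            return ret;          /* a real encoding error */

        /* ... write pkt->data / pkt->size to the output here ... */
        av_packet_unref(pkt);
    }
}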
int avcodec_get_hw_frames_parameters(AVCodecContext *avctx, AVBufferRef *device_ref, enum AVPixelFormat hw_pix_fmt, AVBufferRef **out_frames_ref)
Create and return an AVHWFramesContext with values adequate for hardware decoding.
This is meant to be called from the get_format callback, and is a helper for preparing an AVHWFramesContext for AVCodecContext.hw_frames_ctx. This API is for decoding with certain hardware acceleration modes/APIs only.
The returned AVHWFramesContext is not initialized. The caller must do this with av_hwframe_ctx_init().
Calling this function is not a requirement, but makes it simpler to avoid codec or hardware API specific details when manually allocating frames.
As an alternative, an API user can set AVCodecContext.hw_device_ctx, which sets up AVCodecContext.hw_frames_ctx fully automatically and makes it unnecessary to call this function or to care about AVHWFramesContext initialization at all.
There are a number of requirements for calling this function:
The function will set at least the following fields on AVHWFramesContext (potentially more, depending on hwaccel API):
Essentially, out_frames_ref returns the same as av_hwframe_ctx_alloc(), but with basic frame parameters set.
The function is stateless, and does not change the AVCodecContext or the device_ref AVHWDeviceContext.
Parameters
    avctx           The context which is currently calling get_format, and which implicitly contains all state needed for filling the returned AVHWFramesContext properly.
    device_ref      A reference to the AVHWDeviceContext describing the device which will be used by the hardware decoder.
    hw_pix_fmt      The hwaccel format you are going to return from get_format.
    out_frames_ref  On success, set to a reference to an uninitialized AVHWFramesContext, created from the given device_ref. Fields will be set to values required for decoding. Not changed if an error is returned.
Definition at line 1228 of file decode.c.
Referenced by ff_decode_get_hw_frames_ctx(), and nvdec_init_hwframes().
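A minimal get_format sketch using this helper. Everything outside the libavcodec/libavutil calls is an assumption: the hwaccel format is hard-coded to AV_PIX_FMT_VAAPI for illustration, and the device reference is assumed to have been stashed in avctx->opaque by the caller.

#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

#define HW_PIX_FMT AV_PIX_FMT_VAAPI   /* assumption: the hwaccel format in use */

static enum AVPixelFormat get_hw_format(AVCodecContext *avctx,
                                        const enum AVPixelFormat *pix_fmts)
{
    AVBufferRef *hw_device_ref = avctx->opaque;  /* assumption: set by the caller */
    AVBufferRef *frames_ref = NULL;

    for (const enum AVPixelFormat *p = pix_fmts; *p != AV_PIX_FMT_NONE; p++) {
        if (*p != HW_PIX_FMT)
            continue;

        if (avcodec_get_hw_frames_parameters(avctx, hw_device_ref,
                                             HW_PIX_FMT, &frames_ref) < 0)
            break;                               /* fall back to software decoding */

        /* The returned context is uninitialized; finish it before use. */
        if (av_hwframe_ctx_init(frames_ref) < 0) {
            av_buffer_unref(&frames_ref);
            break;
        }
        avctx->hw_frames_ctx = frames_ref;
        return HW_PIX_FMT;
    }
    return avcodec_default_get_format(avctx, pix_fmts);
}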