
Media (91)
-
Corona Radiata
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Lights in the Sky
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Head Down
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Echoplex
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Discipline
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Letting You
26 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (59)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
Support for all media types
10 April 2011
Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others); audio (MP3, Ogg, Wav and others); video (Avi, MP4, Ogv, mpg, mov, wmv and others); or textual content, code and other formats (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used as a fallback.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (8197)
-
ffmpeg with multiple live-stream inputs adds async delay after filter
12 January 2021, by Godmar
I am struggling to use ffmpeg for remote control of an autonomous truck.



There are 3 video streams from cameras on the local network, described with .sdp files like this one (MJPEG over RTP, correct me if I'm wrong):

m=video 50910 RTP/AVP 26
c=IN IP4 192.168.1.91



I want to make a single video stream out of the three pictures combined, using this:



ffmpeg -hide_banner -protocol_whitelist "rtp,file,udp" -i "cam1.sdp" \
-protocol_whitelist "rtp,file,udp" -i "cam2.sdp" \
-protocol_whitelist "rtp,file,udp" -i "cam3.sdp" \
-filter_complex "\
nullsrc=size=1800x600 [back]; \
[back][b]overlay=1000[tmp1]; \
[tmp1][c]overlay=600[tmp2]; \
[tmp2][a]overlay" \
-vcodec libx264 \
-crf 25 -maxrate 4M -bufsize 8M -r 30 -preset ultrafast -tune zerolatency \
-f mpegts udp://localhost:1234




When I launch this, ffmpeg starts emitting errors about RTP packets being lost. In the output the fps of every camera is unstable, which is unacceptable.
I am able to launch ffplay or mplayer on the three cameras simultaneously, and I can also produce such a combined stream using a pre-recorded video file as input. So it seems that ffmpeg just can't read three UDP streams fast enough.
The cameras are streaming 800x600 MJPEG at 30 fps and 10 Mbit/s; those are the lowest settings I can get away with, and the cameras can do much more.



So I tried to do something about the size of the UDP buffer. There is a way to set buffer_size and fifo_size for a UDP stream, but no such option exists for a stream described by an .sdp file. I did find a way to open the stream with an rtp://-style URL, but it doesn't seem to pass the arguments after '?' down to the UDP layer.
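For reference, a minimal sketch of the option syntax in question, with a hypothetical address and sizes; ffmpeg's udp protocol accepts buffer_size and fifo_size as URL query parameters, whereas an .sdp input exposes no equivalent knob:

ffmpeg -hide_banner \
    -i "udp://192.168.1.91:50910?buffer_size=10000000&fifo_size=1000000" \
    -f null -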


My next idea was to launch multiple ffmpeg instances: receive the streams separately, process them, and re-stream them to another instance that would consume any kind of stream, stitch the pictures together and send the result out. That would actually be a good setup, since I need to filter the streams individually (crop, lens-correct, rotate), and a single large -filter_complex in one ffmpeg instance might not handle all the streams. And I'm going to have 3 more of them.
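A minimal sketch of that relay layout, assuming one re-streamer per camera feeding a local MPEG-TS port and a separate stitcher instance; the ports, the intermediate codec and the crop values are placeholders:

# per-camera re-streamer (repeat for cam2.sdp -> 1242 and cam3.sdp -> 1243)
ffmpeg -protocol_whitelist "rtp,file,udp" -i "cam1.sdp" \
    -vf "crop=780:580:10:10" -c:v mpeg2video -q:v 5 \
    -f mpegts udp://localhost:1241

# stitcher consuming the three local streams
ffmpeg -i udp://localhost:1241 -i udp://localhost:1242 -i udp://localhost:1243 \
    -filter_complex "nullsrc=size=1800x600[back]; \
    [back][1:v]overlay=1000[tmp1]; \
    [tmp1][2:v]overlay=600[tmp2]; \
    [tmp2][0:v]overlay" \
    -c:v libx264 -crf 25 -preset ultrafast -tune zerolatency \
    -f mpegts udp://localhost:1234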



I tried to implement this setup using 3 FIFO pipes or 3 udp://localhost:124x internal streams. Neither approach solved my problem, but the separate ffmpeg instances do seem able to receive the three streams simultaneously.
I was able to open the re-streamed feeds through the pipes and through UDP with mplayer or ffplay. They are completely synced and live.
The stitching still fails miserably.
The pipes gave me delays of a few seconds per camera, and after stitching the streams were choppy and out of sync.
The udp:// variant gave me a smooth video stream as a result, but one camera has a 5-second delay, and the others have 15 and 25.


This smells like a buffering problem. Changing fifo_size and buffer_size doesn't seem to have much influence.
I tried adding a local-time timestamp in the re-streamer instances; that is how I found the 5, 15 and 25 second delays.
I tried adding a frame timestamp in the stitcher instance; those come out completely synced. So setpts=PTS-STARTPTS doesn't help either.


So the buffering happens somewhere between the udp:// socket and the -filter_complex input. How do I get rid of it? What do you think of my workaround? Am I doing this completely wrong?
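For completeness, a hedged sketch (not verified on this setup) of reducing input-side probing and buffering with ffmpeg's per-input -fflags nobuffer, -probesize and -analyzeduration options; the numeric values are guesses:

ffmpeg -hide_banner \
    -fflags nobuffer -probesize 100000 -analyzeduration 0 \
    -protocol_whitelist "rtp,file,udp" -i "cam1.sdp" \
    -fflags nobuffer -probesize 100000 -analyzeduration 0 \
    -protocol_whitelist "rtp,file,udp" -i "cam2.sdp" \
    -fflags nobuffer -probesize 100000 -analyzeduration 0 \
    -protocol_whitelist "rtp,file,udp" -i "cam3.sdp" \
    -filter_complex "nullsrc=size=1800x600[back]; \
    [back][1:v]overlay=1000[tmp1]; \
    [tmp1][2:v]overlay=600[tmp2]; \
    [tmp2][0:v]overlay" \
    -c:v libx264 -crf 25 -maxrate 4M -bufsize 8M -r 30 -preset ultrafast -tune zerolatency \
    -f mpegts udp://localhost:1234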


-
ERROR "Tag [3][0][0][0] incompatible with output codec id '86016' (mp4a)" while writing headers for output mp4 file from a UDP stream
8 May 2023, by lokit khemka
I have a running UDP stream that I am simulating using the following FFmpeg command:


ffmpeg -stream_loop -1 -re -i ./small_bunny_1080p_60fps.mp4 -v 0 -vcodec mpeg4 -f mpegts udp://127.0.0.1:23000



The video file is obtained from this GitHub repository: https://github.com/leandromoreira/ffmpeg-libav-tutorial


I keep getting an error response when I call the function avformat_write_header. The output format is mp4, the output video codec is av1, and the output audio codec is the same as the input audio codec.

I tried to create a "minimal reproducible example"; it is probably still not completely minimal, but it reproduces the exact error.


#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/timestamp.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdarg.h>
#include <inttypes.h>

typedef struct StreamingContext{
 AVFormatContext* avfc;
 const AVCodec *video_avc;
 const AVCodec *audio_avc;
 AVStream *video_avs;
 AVStream *audio_avs;
 AVCodecContext *video_avcc;
 AVCodecContext *audio_avcc;
 int video_index;
 int audio_index;
 char* filename;
 struct SwsContext *sws_ctx;
}StreamingContext;


typedef struct StreamingParams{
 char copy_video;
 char copy_audio;
 char *output_extension;
 char *muxer_opt_key;
 char *muxer_opt_value;
 char *video_codec;
 char *audio_codec;
 char *codec_priv_key;
 char *codec_priv_value;
}StreamingParams;

void logging(const char *fmt, ...)
{
 va_list args;
 fprintf(stderr, "LOG: ");
 va_start(args, fmt);
 vfprintf(stderr, fmt, args);
 va_end(args);
 fprintf(stderr, "\n");
}

int fill_stream_info(AVStream *avs, const AVCodec **avc, AVCodecContext **avcc)
{
 *avc = avcodec_find_decoder(avs->codecpar->codec_id);
 *avcc = avcodec_alloc_context3(*avc);
 if (avcodec_parameters_to_context(*avcc, avs->codecpar) < 0)
 {
 logging("Failed to fill Codec Context.");
 return -1;
 }
 avcodec_open2(*avcc, *avc, NULL);
 return 0;
}

int open_media(const char *in_filename, AVFormatContext **avfc)
{
 *avfc = avformat_alloc_context();
 if (avformat_open_input(avfc, in_filename, NULL, NULL) != 0)
 {
 logging("Failed to open input file %s", in_filename);
 return -1;
 }

 if (avformat_find_stream_info(*avfc, NULL) < 0)
 {
 logging("Failed to get Stream Info.");
 return -1;
 }
 return 0;
}

int prepare_decoder(StreamingContext *sc)
{
 for (int i = 0; i < (int)sc->avfc->nb_streams; i++)
 {
 if (sc->avfc->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
 {
 sc->video_avs = sc->avfc->streams[i];
 sc->video_index = i;

 if (fill_stream_info(sc->video_avs, &sc->video_avc, &sc->video_avcc))
 {
 return -1;
 }
 }
 else if (sc->avfc->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
 {
 sc->audio_avs = sc->avfc->streams[i];
 sc->audio_index = i;

 if (fill_stream_info(sc->audio_avs, &sc->audio_avc, &sc->audio_avcc))
 {
 return -1;
 }
 }
 else
 {
 logging("Skipping Streams other than Audio and Video.");
 }
 }
 return 0;
}

int prepare_video_encoder(StreamingContext *encoder_sc, AVCodecContext *decoder_ctx, AVRational input_framerate,
 StreamingParams sp, int scaled_frame_width, int scaled_frame_height)
{
 encoder_sc->video_avs = avformat_new_stream(encoder_sc->avfc, NULL);
 encoder_sc->video_avc = avcodec_find_encoder_by_name(sp.video_codec);
 if (!encoder_sc->video_avc)
 {
 logging("Cannot find the Codec.");
 return -1;
 }

 encoder_sc->video_avcc = avcodec_alloc_context3(encoder_sc->video_avc);
 if (!encoder_sc->video_avcc)
 {
 logging("Could not allocate memory for Codec Context.");
 return -1;
 }

 av_opt_set(encoder_sc->video_avcc->priv_data, "preset", "fast", 0);
 if (sp.codec_priv_key && sp.codec_priv_value)
 av_opt_set(encoder_sc->video_avcc->priv_data, sp.codec_priv_key, sp.codec_priv_value, 0);

 encoder_sc->video_avcc->height = scaled_frame_height;
 encoder_sc->video_avcc->width = scaled_frame_width;
 encoder_sc->video_avcc->sample_aspect_ratio = decoder_ctx->sample_aspect_ratio;

 if (encoder_sc->video_avc->pix_fmts)
 encoder_sc->video_avcc->pix_fmt = encoder_sc->video_avc->pix_fmts[0];
 else
 encoder_sc->video_avcc->pix_fmt = decoder_ctx->pix_fmt;

 encoder_sc->video_avcc->bit_rate = 2 * 1000 * 1000;

 encoder_sc->video_avcc->time_base = av_inv_q(input_framerate);
 encoder_sc->video_avs->time_base = encoder_sc->video_avcc->time_base;

 

 if (avcodec_open2(encoder_sc->video_avcc, encoder_sc->video_avc, NULL) < 0)
 {
 logging("Could not open the Codec.");
 return -1;
 }
 avcodec_parameters_from_context(encoder_sc->video_avs->codecpar, encoder_sc->video_avcc);
 return 0;
}


int prepare_copy(AVFormatContext *avfc, AVStream **avs, AVCodecParameters *decoder_par)
{
 *avs = avformat_new_stream(avfc, NULL);
 avcodec_parameters_copy((*avs)->codecpar, decoder_par);
 return 0;
}

int encode_video(StreamingContext *decoder, StreamingContext *encoder, AVFrame *input_frame)
{
 if (input_frame)
 input_frame->pict_type = AV_PICTURE_TYPE_NONE;

 AVPacket *output_packet = av_packet_alloc();


 int response = avcodec_send_frame(encoder->video_avcc, input_frame);

 while (response >= 0)
 {
 response = avcodec_receive_packet(encoder->video_avcc, output_packet);
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF)
 {
 break;
 }

 output_packet->stream_index = decoder->video_index;
 output_packet->duration = encoder->video_avs->time_base.den / encoder->video_avs->time_base.num / decoder->video_avs->avg_frame_rate.num * decoder->video_avs->avg_frame_rate.den;

 av_packet_rescale_ts(output_packet, decoder->video_avs->time_base, encoder->video_avs->time_base);
 response = av_interleaved_write_frame(encoder->avfc, output_packet);
 }

 av_packet_unref(output_packet);
 av_packet_free(&output_packet);

 return 0;
}

int remux(AVPacket **pkt, AVFormatContext **avfc, AVRational decoder_tb, AVRational encoder_tb)
{
 av_packet_rescale_ts(*pkt, decoder_tb, encoder_tb);
 if (av_interleaved_write_frame(*avfc, *pkt) < 0)
 {
 logging("Error while copying Stream Packet.");
 return -1;
 }
 return 0;
}

int transcode_video(StreamingContext *decoder, StreamingContext *encoder, AVPacket *input_packet, AVFrame *input_frame)
{
 int response = avcodec_send_packet(decoder->video_avcc, input_packet);
 while (response >= 0)
 {
 response = avcodec_receive_frame(decoder->video_avcc, input_frame);
 
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF)
 {
 break;
 }
 if (response >= 0)
 {
 if (encode_video(decoder, encoder, input_frame))
 return -1;
 }

 av_frame_unref(input_frame);
 }
 return 0;
}

int main(int argc, char *argv[])
{
 const int scaled_frame_width = 854;
 const int scaled_frame_height = 480;
 StreamingParams sp = {0};
 sp.copy_audio = 1;
 sp.copy_video = 0;
 sp.video_codec = "libsvtav1";
 
 StreamingContext *decoder = (StreamingContext *)calloc(1, sizeof(StreamingContext));
 decoder->filename = "udp://127.0.0.1:23000";

 StreamingContext *encoder = (StreamingContext *)calloc(1, sizeof(StreamingContext));
 encoder->filename = "small_bunny_9.mp4";
 
 if (sp.output_extension)
 {
 strcat(encoder->filename, sp.output_extension);
 }

 open_media(decoder->filename, &decoder->avfc);
 prepare_decoder(decoder);


 avformat_alloc_output_context2(&encoder->avfc, NULL, "mp4", encoder->filename);
 AVRational input_framerate = av_guess_frame_rate(decoder->avfc, decoder->video_avs, NULL);
 prepare_video_encoder(encoder, decoder->video_avcc, input_framerate, sp, scaled_frame_width, scaled_frame_height);

 prepare_copy(encoder->avfc, &encoder->audio_avs, decoder->audio_avs->codecpar);
 

 if (encoder->avfc->oformat->flags & AVFMT_GLOBALHEADER)
 encoder->avfc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

 if (!(encoder->avfc->oformat->flags & AVFMT_NOFILE))
 {
 if (avio_open(&encoder->avfc->pb, encoder->filename, AVIO_FLAG_WRITE) < 0)
 {
 logging("could not open the output file");
 return -1;
 }
 }

 
 if (avformat_write_header(encoder->avfc, NULL) < 0)
 {
 logging("an error occurred when opening output file");
 return -1;
 }

 AVFrame *input_frame = av_frame_alloc();
 AVPacket *input_packet = av_packet_alloc();

 while (1)
 {
 int ret = av_read_frame(decoder->avfc, input_packet);
 if(ret<0)
 break;
 if (decoder->avfc->streams[input_packet->stream_index]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
 {
 if (transcode_video(decoder, encoder, input_packet, input_frame))
 return -1;
 av_packet_unref(input_packet);

 }
 else if (decoder->avfc->streams[input_packet->stream_index]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
 {
 
 if (remux(&input_packet, &encoder->avfc, decoder->audio_avs->time_base, encoder->audio_avs->time_base))
 return -1;
 }
 else
 {
 logging("Ignoring all nonvideo or audio packets.");
 }
 }

 if (encode_video(decoder, encoder, NULL))
 return -1;
 

 av_write_trailer(encoder->avfc);


 if (input_frame != NULL)
 {
 av_frame_free(&input_frame);
 input_frame = NULL;
 }

 if (input_packet != NULL)
 {
 av_packet_free(&input_packet);
 input_packet = NULL;
 }

 avformat_close_input(&decoder->avfc);

 avformat_free_context(decoder->avfc);
 decoder->avfc = NULL;
 avformat_free_context(encoder->avfc);
 encoder->avfc = NULL;

 avcodec_free_context(&decoder->video_avcc);
 decoder->video_avcc = NULL;
 avcodec_free_context(&decoder->audio_avcc);
 decoder->audio_avcc = NULL;

 free(decoder);
 decoder = NULL;
 free(encoder);
 encoder = NULL;

 return 0;
}
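For context on this particular error: the "Tag [3][0][0][0] incompatible with output codec id '86016' (mp4a)" message typically means the codec_tag copied from the MPEG-TS input is not valid inside an MP4 container. A hedged, minimal sketch of how the prepare_copy function above could clear the tag and let the mp4 muxer pick a container-appropriate one (this mirrors FFmpeg's own remuxing example; it is not a verified fix for this exact program):

int prepare_copy(AVFormatContext *avfc, AVStream **avs, AVCodecParameters *decoder_par)
{
    *avs = avformat_new_stream(avfc, NULL);
    if (!*avs)
        return -1;
    if (avcodec_parameters_copy((*avs)->codecpar, decoder_par) < 0)
        return -1;
    /* The MPEG-TS codec tag is meaningless to the mp4 muxer; clearing it
     * lets avformat_write_header() choose a tag that fits the container. */
    (*avs)->codecpar->codec_tag = 0;
    return 0;
}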



-
ERROR "Application provided invalid, non monotonically increasing dts to muxer in stream 1 : 6874 >= 6874" while writing encoded output to an mp4 file
11 mai 2023, par lokit khemkaI have a running RTSP stream, streaming video on a loop using the following FFMPEG command :


ffmpeg -re -stream_loop -1 -i ./ffmpeg_c_test/small_bunny_1080p_60fps.mp4 -ac 2 -f rtsp -rtsp_transport tcp rtsp://localhost:8554/mystream



The video file is obtained from this GitHub repository: https://github.com/leandromoreira/ffmpeg-libav-tutorial


I keep getting an error response when I call the function av_interleaved_write_frame from the function remux in the attached program. The output format is mp4, the output video codec is av1, and the output audio codec is the same as the input audio codec. The error comes from the audio stream.

I tried to create a "minimal reproducible example"; it is probably still not completely minimal, but it reproduces the exact error.


#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/timestamp.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdarg.h>
#include <inttypes.h>

typedef struct StreamingContext{
 AVFormatContext* avfc;
 const AVCodec *video_avc;
 const AVCodec *audio_avc;
 AVStream *video_avs;
 AVStream *audio_avs;
 AVCodecContext *video_avcc;
 AVCodecContext *audio_avcc;
 int video_index;
 int audio_index;
 char* filename;
 struct SwsContext *sws_ctx;
}StreamingContext;


typedef struct StreamingParams{
 char copy_video;
 char copy_audio;
 char *output_extension;
 char *muxer_opt_key;
 char *muxer_opt_value;
 char *video_codec;
 char *audio_codec;
 char *codec_priv_key;
 char *codec_priv_value;
}StreamingParams;

void logging(const char *fmt, ...)
{
 va_list args;
 fprintf(stderr, "LOG: ");
 va_start(args, fmt);
 vfprintf(stderr, fmt, args);
 va_end(args);
 fprintf(stderr, "\n");
}

int fill_stream_info(AVStream *avs, const AVCodec **avc, AVCodecContext **avcc)
{
 *avc = avcodec_find_decoder(avs->codecpar->codec_id);
 *avcc = avcodec_alloc_context3(*avc);
 if (avcodec_parameters_to_context(*avcc, avs->codecpar) < 0)
 {
 logging("Failed to fill Codec Context.");
 return -1;
 }
 avcodec_open2(*avcc, *avc, NULL);
 return 0;
}

int open_media(const char *in_filename, AVFormatContext **avfc)
{
 *avfc = avformat_alloc_context();
 if (avformat_open_input(avfc, in_filename, NULL, NULL) != 0)
 {
 logging("Failed to open input file %s", in_filename);
 return -1;
 }

 if (avformat_find_stream_info(*avfc, NULL) < 0)
 {
 logging("Failed to get Stream Info.");
 return -1;
 }
 return 0;
}

int prepare_decoder(StreamingContext *sc)
{
 for (int i = 0; i < (int)sc->avfc->nb_streams; i++)
 {
 if (sc->avfc->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
 {
 sc->video_avs = sc->avfc->streams[i];
 sc->video_index = i;

 if (fill_stream_info(sc->video_avs, &sc->video_avc, &sc->video_avcc))
 {
 return -1;
 }
 }
 else if (sc->avfc->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
 {
 sc->audio_avs = sc->avfc->streams[i];
 sc->audio_index = i;

 if (fill_stream_info(sc->audio_avs, &sc->audio_avc, &sc->audio_avcc))
 {
 return -1;
 }
 }
 else
 {
 logging("Skipping Streams other than Audio and Video.");
 }
 }
 return 0;
}

int prepare_video_encoder(StreamingContext *encoder_sc, AVCodecContext *decoder_ctx, AVRational input_framerate,
 StreamingParams sp, int scaled_frame_width, int scaled_frame_height)
{
 encoder_sc->video_avs = avformat_new_stream(encoder_sc->avfc, NULL);
 encoder_sc->video_avc = avcodec_find_encoder_by_name(sp.video_codec);
 if (!encoder_sc->video_avc)
 {
 logging("Cannot find the Codec.");
 return -1;
 }

 encoder_sc->video_avcc = avcodec_alloc_context3(encoder_sc->video_avc);
 if (!encoder_sc->video_avcc)
 {
 logging("Could not allocate memory for Codec Context.");
 return -1;
 }

 av_opt_set(encoder_sc->video_avcc->priv_data, "preset", "fast", 0);
 if (sp.codec_priv_key && sp.codec_priv_value)
 av_opt_set(encoder_sc->video_avcc->priv_data, sp.codec_priv_key, sp.codec_priv_value, 0);

 encoder_sc->video_avcc->height = scaled_frame_height;
 encoder_sc->video_avcc->width = scaled_frame_width;
 encoder_sc->video_avcc->sample_aspect_ratio = decoder_ctx->sample_aspect_ratio;

 if (encoder_sc->video_avc->pix_fmts)
 encoder_sc->video_avcc->pix_fmt = encoder_sc->video_avc->pix_fmts[0];
 else
 encoder_sc->video_avcc->pix_fmt = decoder_ctx->pix_fmt;

 encoder_sc->video_avcc->bit_rate = 2 * 1000 * 1000;

 encoder_sc->video_avcc->time_base = av_inv_q(input_framerate);
 encoder_sc->video_avs->time_base = encoder_sc->video_avcc->time_base;

 

 if (avcodec_open2(encoder_sc->video_avcc, encoder_sc->video_avc, NULL) < 0)
 {
 logging("Could not open the Codec.");
 return -1;
 }
 avcodec_parameters_from_context(encoder_sc->video_avs->codecpar, encoder_sc->video_avcc);
 return 0;
}


int prepare_copy(AVFormatContext *avfc, AVStream **avs, AVCodecParameters *decoder_par)
{
 *avs = avformat_new_stream(avfc, NULL);
 avcodec_parameters_copy((*avs)->codecpar, decoder_par);
 return 0;
}

int encode_video(StreamingContext *decoder, StreamingContext *encoder, AVFrame *input_frame)
{
 if (input_frame)
 input_frame->pict_type = AV_PICTURE_TYPE_NONE;

 AVPacket *output_packet = av_packet_alloc();


 int response = avcodec_send_frame(encoder->video_avcc, input_frame);

 while (response >= 0)
 {
 response = avcodec_receive_packet(encoder->video_avcc, output_packet);
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF)
 {
 break;
 }

 output_packet->stream_index = decoder->video_index;
 output_packet->duration = encoder->video_avs->time_base.den / encoder->video_avs->time_base.num;

 av_packet_rescale_ts(output_packet, decoder->video_avs->time_base, encoder->video_avs->time_base);
 response = av_interleaved_write_frame(encoder->avfc, output_packet);
 }

 av_packet_unref(output_packet);
 av_packet_free(&output_packet);

 return 0;
}

int remux(AVPacket **pkt, AVFormatContext **avfc, AVRational decoder_tb, AVRational encoder_tb)
{
 (*pkt)->duration = av_rescale_q((*pkt)->duration, decoder_tb, encoder_tb);
 (*pkt)->pos = -1;
 av_packet_rescale_ts(*pkt, decoder_tb, encoder_tb);
 if (av_interleaved_write_frame(*avfc, *pkt) < 0)
 {
 logging("Error while copying Stream Packet.");
 return -1;
 }
 return 0;
}

int transcode_video(StreamingContext *decoder, StreamingContext *encoder, AVPacket *input_packet, AVFrame *input_frame)
{
 int response = avcodec_send_packet(decoder->video_avcc, input_packet);
 while (response >= 0)
 {
 response = avcodec_receive_frame(decoder->video_avcc, input_frame);
 
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF)
 {
 break;
 }
 if (response >= 0)
 {
 if (encode_video(decoder, encoder, input_frame))
 return -1;
 }

 av_frame_unref(input_frame);
 }
 return 0;
}

int main(int argc, char *argv[])
{
 const int scaled_frame_width = 854;
 const int scaled_frame_height = 480;
 StreamingParams sp = {0};
 sp.copy_audio = 1;
 sp.copy_video = 0;
 sp.video_codec = "libsvtav1";
 
 StreamingContext *decoder = (StreamingContext *)calloc(1, sizeof(StreamingContext));
 decoder->filename = "rtsp://localhost:8554/mystream";

 StreamingContext *encoder = (StreamingContext *)calloc(1, sizeof(StreamingContext));
 encoder->filename = "small_bunny_9.mp4";
 
 if (sp.output_extension)
 {
 strcat(encoder->filename, sp.output_extension);
 }

 open_media(decoder->filename, &decoder->avfc);
 prepare_decoder(decoder);


 avformat_alloc_output_context2(&encoder->avfc, NULL, "mp4", encoder->filename);
 AVRational input_framerate = av_guess_frame_rate(decoder->avfc, decoder->video_avs, NULL);
 prepare_video_encoder(encoder, decoder->video_avcc, input_framerate, sp, scaled_frame_width, scaled_frame_height);

 prepare_copy(encoder->avfc, &encoder->audio_avs, decoder->audio_avs->codecpar);
 

 if (encoder->avfc->oformat->flags & AVFMT_GLOBALHEADER)
 encoder->avfc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

 if (!(encoder->avfc->oformat->flags & AVFMT_NOFILE))
 {
 if (avio_open(&encoder->avfc->pb, encoder->filename, AVIO_FLAG_WRITE) < 0)
 {
 logging("could not open the output file");
 return -1;
 }
 }

 
 if (avformat_write_header(encoder->avfc, NULL) < 0)
 {
 logging("an error occurred when opening output file");
 return -1;
 }

 AVFrame *input_frame = av_frame_alloc();
 AVPacket *input_packet = av_packet_alloc();

 while (1)
 {
 int ret = av_read_frame(decoder->avfc, input_packet);
 if(ret<0)
 break;
 if (decoder->avfc->streams[input_packet->stream_index]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
 {
 if (transcode_video(decoder, encoder, input_packet, input_frame))
 return -1;
 av_packet_unref(input_packet);

 }
 else if (decoder->avfc->streams[input_packet->stream_index]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
 {
 
 if (remux(&input_packet, &encoder->avfc, decoder->audio_avs->time_base, encoder->audio_avs->time_base))
 return -1;
 }
 else
 {
 logging("Ignoring all nonvideo or audio packets.");
 }
 }

 if (encode_video(decoder, encoder, NULL))
 return -1;
 

 av_write_trailer(encoder->avfc);


 if (input_frame != NULL)
 {
 av_frame_free(&input_frame);
 input_frame = NULL;
 }

 if (input_packet != NULL)
 {
 av_packet_free(&input_packet);
 input_packet = NULL;
 }

 avformat_close_input(&decoder->avfc);

 avformat_free_context(decoder->avfc);
 decoder->avfc = NULL;
 avformat_free_context(encoder->avfc);
 encoder->avfc = NULL;

 avcodec_free_context(&decoder->video_avcc);
 decoder->video_avcc = NULL;
 avcodec_free_context(&decoder->audio_avcc);
 decoder->audio_avcc = NULL;

 free(decoder);
 decoder = NULL;
 free(encoder);
 encoder = NULL;

 return 0;
}
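A hedged sketch (not a verified fix) of one common way to guard the audio stream-copy path against the "non monotonically increasing dts" error: remember the last DTS actually handed to the muxer and nudge any packet that does not advance. This would replace the remux helper above; the single last_dts variable assumes only the audio stream goes through this path, as in the program shown:

int remux(AVPacket **pkt, AVFormatContext **avfc, AVRational decoder_tb, AVRational encoder_tb)
{
    /* last DTS written for the (single) copied stream */
    static int64_t last_dts = AV_NOPTS_VALUE;

    (*pkt)->duration = av_rescale_q((*pkt)->duration, decoder_tb, encoder_tb);
    (*pkt)->pos = -1;
    av_packet_rescale_ts(*pkt, decoder_tb, encoder_tb);

    /* A looping input can repeat or stall timestamps; force them to advance. */
    if (last_dts != AV_NOPTS_VALUE && (*pkt)->dts != AV_NOPTS_VALUE && (*pkt)->dts <= last_dts)
    {
        int64_t shift = last_dts + 1 - (*pkt)->dts;
        (*pkt)->dts += shift;
        if ((*pkt)->pts != AV_NOPTS_VALUE)
            (*pkt)->pts += shift;
    }
    if ((*pkt)->dts != AV_NOPTS_VALUE)
        last_dts = (*pkt)->dts;

    if (av_interleaved_write_frame(*avfc, *pkt) < 0)
    {
        logging("Error while copying Stream Packet.");
        return -1;
    }
    return 0;
}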