Other articles (69)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8315)

  • Bad audio output in stereo mode - FFMPEG PortAudio C++

    26 August 2014, by Spamdark

    I’m here again. This time I am working with audio; I had some memory-leak issues before, but those are now solved. The new problem is that when I configure PortAudio for stereo output (channels = 2), the audio comes out in poor quality.

    It only sounds good in mono, and there is almost no solution on Google (or I am a bad ’googler’). Here is the code:

    Thread that plays the audio:

    int16_t* audioBuffer=(int16_t*)av_malloc(FRAME_SZ_AV);


    int sz = MEDIA->DecodeAudioFrame(audioBuffer,0);

    if(sz==1)
       Pa_WriteStream(MEDIA->output_stream,audioBuffer,MEDIA->_audio_ccontext->frame_size);

    if(sz!=1)
       MessageBox(0,"error","error",MB_OK);

    ZeroMemory(audioBuffer,FRAME_SZ_AV);
    av_freep(&audioBuffer);

    DecodeAudioFrame function

    int WbMedia::DecodeAudioFrame(int16_t *audio_buf, int buf_size){
    int return_status=0;
    AVPacket t_pack;

    while(!audio_packets.empty()){
       // Get new packet
       WaitForSingleObject(Queue_Audio_Mutex,INFINITE);
       t_pack = audio_packets.front();
       audio_packets.pop();
       ReleaseMutex(Queue_Audio_Mutex);

       int obt_size = AVCODEC_MAX_AUDIO_FRAME_SIZE;
       int consm = avcodec_decode_audio3(_audio_ccontext,audio_buf,&obt_size,&t_pack);
       if(consm > 0 && obt_size > 0){
           return_status=1;
           break;
       }
       return_status=-1;
       break;
    }

    av_free_packet(&t_pack);

    return return_status;
    }

    PortAudio Settings

    output_params.device = Pa_GetDefaultOutputDevice(); //choosen_device.dev_index;
    output_params.sampleFormat=paInt16;
    output_params.channelCount=channel_count;
    output_params.suggestedLatency=choosen_device.dev_inf->defaultLowOutputLatency;
    output_params.hostApiSpecificStreamInfo=NULL;

    // Start with PA opening
    PaError pa_opening_err = Pa_OpenStream(&output_stream,
       NULL,
       &output_params,
       sample_fr,
       _audio_ccontext->frame_size,
       paNoFlag,
       NULL,
       NULL
    );

    Why does the audio output in bad quality in stereo but not in mono? How can I fix it?
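
    Independent of the code above, here is a minimal sketch of a blocking stereo playback loop with PortAudio. It is not a diagnosis of the problem, only an illustration of the frame-count semantics that often trip up stereo code: the count passed to Pa_OpenStream and Pa_WriteStream is in frames (one sample per channel), so an interleaved int16 stereo buffer holds frames * 2 samples. The fillStereoTone helper is a hypothetical stand-in for a decoder:

    #include <portaudio.h>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Hypothetical stand-in for a decoder: fills `frames` interleaved stereo
    // frames (L,R,L,R,...) with a 440 Hz tone so the sketch is self-contained.
    static void fillStereoTone(std::vector<int16_t>& buf, int frames, double& phase) {
        for (int i = 0; i < frames; ++i) {
            int16_t s = static_cast<int16_t>(8000 * std::sin(phase));
            buf[2 * i]     = s;   // left channel
            buf[2 * i + 1] = s;   // right channel
            phase += 2.0 * 3.14159265358979 * 440.0 / 44100.0;
        }
    }

    int main() {
        const int sampleRate = 44100;
        const int channels = 2;
        const int framesPerBuffer = 1024;        // frames, not samples

        if (Pa_Initialize() != paNoError) return 1;

        PaStreamParameters out = {};
        out.device = Pa_GetDefaultOutputDevice();
        out.channelCount = channels;
        out.sampleFormat = paInt16;              // interleaved 16-bit samples
        out.suggestedLatency = Pa_GetDeviceInfo(out.device)->defaultLowOutputLatency;
        out.hostApiSpecificStreamInfo = NULL;

        PaStream* stream = NULL;
        if (Pa_OpenStream(&stream, NULL, &out, sampleRate,
                          framesPerBuffer, paNoFlag, NULL, NULL) != paNoError)
            return 1;
        Pa_StartStream(stream);

        // One frame = one sample per channel, so the buffer holds
        // framesPerBuffer * channels int16_t values.
        std::vector<int16_t> buffer(framesPerBuffer * channels);
        double phase = 0.0;
        for (int n = 0; n < 200; ++n) {          // roughly 4.6 seconds of audio
            fillStereoTone(buffer, framesPerBuffer, phase);
            // The third argument counts frames, not samples or bytes.
            Pa_WriteStream(stream, buffer.data(), framesPerBuffer);
        }

        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        return 0;
    }

    If a decoder hands back a total sample count instead, it has to be divided by the channel count before calling Pa_WriteStream.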

  • FFmpeg: attaching two mp3 files to each other, length and metadata not correct

    12 March 2017, by pocek

    Is there any way to join two mp3, ogg, or mp4 files to each other using ffmpeg?

    I mean, for example, I have 2 mp3 files; the output I need is that the first one plays and, after it finishes, the second file plays.

    So the duration of the output would be the duration of the first file plus the second.

    I’ve tried this command, but the output is not correct:

    ffmpeg -i "concat:file1.mp3|file2.mp3" -acodec copy output.mp3

    The metadata of the output file is not updated, and it plays only the first file!

  • ffmpeg C api cutting a video when packet dts is greater than pts

    10 March 2017, by TastyCatFood

    Corrupted videos

    While trying to cut a section out of one of my videos with the ffmpeg C API, using the code posted here: How to cut video with FFmpeg C API, ffmpeg spat out the log below:

    D/logger: Loop count:9   out: pts:0 pts_time:0 dts:2002 dts_time:0.0333667 duration:2002 duration_time:0.0333667 stream_index:1
    D/trim_video: Error muxing packet Invalid argument

    ffmpeg considers an instruction to decompress a frame after presenting it to be nonsense, which is, well... reasonable but stringent.

    My VLC player finds the video all right and plays it, of course.

    Note:

    The code immediately below is C++, written to be compiled with g++, as I’m developing for Android. For C code, scroll down further.

    My solution (g++):

    extern "C" {
    #include "libavformat/avformat.h"
    #include "libavutil/mathematics.h"
    #include "libavutil/timestamp.h"



    static void log_packet(
           const AVFormatContext *fmt_ctx,
           const AVPacket *pkt, const char *tag,
           long count=0)
    {

       printf("loop count %d pts:%f dts:%f duration:%f stream_index:%d\n",
              count,
              static_cast<double>(pkt->pts),
              static_cast<double>(pkt->dts),
              static_cast<double>(pkt->duration),
              pkt->stream_index);
       return;
    }

    int trimVideo(
           const char* in_filename,
           const char* out_filename,
           double cutFrom,
           double cutUpTo)
    {
       AVOutputFormat *ofmt = NULL;
       AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
       AVPacket pkt;
       int ret, i;
       //jboolean  copy = true;
       //const char *in_filename = env->GetStringUTFChars(jstring_in_filename,&copy);
       //const char *out_filename = env->GetStringUTFChars(jstring_out_filename,&copy);
       long loopCount = 0;

       av_register_all();

       // Cutting may change the pts and dts of the resulting video;
       // if frames in head position are removed.
       // In the case like that, src stream's copy start pts
       // need to be recorded and is used to compute the new pts value.
       // e.g.
       //    new_pts = current_pts - trim_start_position_pts;

       // nb-streams is the number of elements in AVFormatContext.streams.
       // Initial pts value must be recorded for each stream.

       //May be malloc and memset should be replaced with [].
       int64_t *dts_start_from = NULL;
       int64_t *pts_start_from = NULL;

       if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
           printf( "Could not open input file '%s'", in_filename);
           goto end;
       }

       if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
           printf("Failed to retrieve input stream information");
           goto end;
       }

       av_dump_format(ifmt_ctx, 0, in_filename, 0);

       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
       if (!ofmt_ctx) {
           printf( "Could not create output context\n");
           ret = AVERROR_UNKNOWN;
           goto end;
       }

       ofmt = ofmt_ctx->oformat;

       //preparing streams
       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           AVStream *in_stream = ifmt_ctx->streams[i];
           AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
           if (!out_stream) {
               printf( "Failed allocating output stream\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }

           ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
           if (ret < 0) {
               printf( "Failed to copy context from input to output stream codec context\n");
               goto end;
           }
           out_stream->codec->codec_tag = 0;
           if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
               out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }
       av_dump_format(ofmt_ctx, 0, out_filename, 1);

       if (!(ofmt->flags & AVFMT_NOFILE)) {
           ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
           if (ret < 0) {
               printf( "Could not open output file '%s'", out_filename);
               goto end;
           }
       }
       //preparing the header
       ret = avformat_write_header(ofmt_ctx, NULL);
       if (ret < 0) {
           printf( "Error occurred when opening output file\n");
           goto end;
       }

       // av_seek_frame translates AV_TIME_BASE into an appropriate time base.
       ret = av_seek_frame(ifmt_ctx, -1, cutFrom*AV_TIME_BASE, AVSEEK_FLAG_ANY);
       if (ret < 0) {
           printf( "Error seek\n");
           goto end;
       }
       dts_start_from = static_cast<int64_t*>(
               malloc(sizeof(int64_t) * ifmt_ctx->nb_streams));
       memset(dts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
       pts_start_from = static_cast<int64_t*>(
               malloc(sizeof(int64_t) * ifmt_ctx->nb_streams));
       memset(pts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);

       //writing
       while (1) {
           AVStream *in_stream, *out_stream;
           //reading frame into pkt
           ret = av_read_frame(ifmt_ctx, &pkt);
           if (ret < 0)
               break;
           in_stream  = ifmt_ctx->streams[pkt.stream_index];
           out_stream = ofmt_ctx->streams[pkt.stream_index];

           //if end reached
           if (av_q2d(in_stream->time_base) * pkt.pts > cutUpTo) {
               av_packet_unref(&pkt);
               break;
           }


           // Recording the initial pts value for each stream
           // Recording dts does not do the trick because AVPacket.dts values
           // in some video files are larger than corresponding pts values
           // and ffmpeg does not like it.
           if (dts_start_from[pkt.stream_index] == 0) {
               dts_start_from[pkt.stream_index] = pkt.pts;
               printf("dts_initial_value: %f for stream index: %d \n",
                       static_cast<double>(dts_start_from[pkt.stream_index]),
                                   pkt.stream_index

               );
           }
           if (pts_start_from[pkt.stream_index] == 0) {
               pts_start_from[pkt.stream_index] = pkt.pts;
               printf( "pts_initial_value:  %f for stream index %d\n",
                       static_cast<double>(pts_start_from[pkt.stream_index]),
                                   pkt.stream_index);
           }

           log_packet(ifmt_ctx, &pkt, "in",loopCount);

           /* Computes pts etc
            *      av_rescale_q_rnd etc are countering changes in time_base between
            *      out_stream and in_stream, so regardless of time_base values for
            *      in and out streams, the rate at which frames are refreshed remains
            *      the same.
            *
                   pkt.pts = pkt.pts * (in_stream->time_base/ out_stream->time_base)
                   As `time_base == 1/frame_rate`, the above is an equivalent of

                   (out_stream_frame_rate/in_stream_frame_rate)*pkt.pts where
                   frame_rate is the number of frames to be displayed per second.

                   AV_ROUND_PASS_MINMAX may set pts or dts to AV_NOPTS_VALUE
            * */


           pkt.pts =
                   av_rescale_q_rnd(
                   pkt.pts - pts_start_from[pkt.stream_index],
                   static_cast<AVRational>(in_stream->time_base),
                   static_cast<AVRational>(out_stream->time_base),
                   static_cast<AVRounding>(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
           pkt.dts =
                   av_rescale_q_rnd(
                   pkt.dts - dts_start_from[pkt.stream_index],
                   static_cast<AVRational>(in_stream->time_base),
                   static_cast<AVRational>(out_stream->time_base),
                   static_cast<AVRounding>(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));

           if(pkt.dts>pkt.pts) pkt.dts = pkt.pts -1;
           if(pkt.dts < 0) pkt.dts = 0;
           if(pkt.pts < 0) pkt.pts = 0;

           pkt.duration = av_rescale_q(
                   pkt.duration,
                   in_stream->time_base,
                   out_stream->time_base);
           pkt.pos = -1;
           log_packet(ofmt_ctx, &pkt, "out",loopCount);

           // Writes to the file after buffering packets enough to generate a frame
           // and probably sorting packets in dts order.
           ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
    //        ret = av_write_frame(ofmt_ctx, &pkt);
           if (ret < 0) {
               printf( "Error muxing packet %d \n", ret);
               //continue;
               break;
           }
           av_packet_unref(&pkt);
           ++loopCount;
       }

       //Writing end code?
       av_write_trailer(ofmt_ctx);

       end:
       avformat_close_input(&ifmt_ctx);

       if(dts_start_from)free(dts_start_from);
       if(pts_start_from)free(pts_start_from);

       /* close output */
       if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
           avio_closep(&ofmt_ctx->pb);
       avformat_free_context(ofmt_ctx);

       if (ret < 0 && ret != AVERROR_EOF) {
           //printf( "Error occurred: %s\n", av_err2str(ret));
           return 1;
       }

       return 0;
    }

    }

    C-compatible version (the console says g++, but I’m sure this is C code):

    #include "libavformat/avformat.h"
    #include "libavutil/mathematics.h"
    #include "libavutil/timestamp.h"



    static void log_packet(
           const AVFormatContext *fmt_ctx,
           const AVPacket *pkt, const char *tag,
           long count)
    {

       printf("loop count %d pts:%f dts:%f duration:%f stream_index:%d\n",
              count,
              (double)pkt->pts,
              (double)pkt->dts,
              (double)pkt->duration,
              pkt->stream_index);
       return;
    }

    int trimVideo(
           const char* in_filename,
           const char* out_filename,
           double cutFrom,
           double cutUpTo)
    {
       AVOutputFormat *ofmt = NULL;
       AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
       AVPacket pkt;
       int ret, i;
       //jboolean  copy = true;
       //const char *in_filename = env->GetStringUTFChars(jstring_in_filename,&copy);
       //const char *out_filename = env->GetStringUTFChars(jstring_out_filename,&copy);
       long loopCount = 0;

       av_register_all();

       // Cutting may change the pts and dts of the resulting video;
       // if frames in head position are removed.
       // In the case like that, src stream's copy start pts
       // need to be recorded and is used to compute the new pts value.
       // e.g.
       //    new_pts = current_pts - trim_start_position_pts;

       // nb-streams is the number of elements in AVFormatContext.streams.
       // Initial pts value must be recorded for each stream.

       //May be malloc and memset should be replaced with [].
       int64_t *dts_start_from = NULL;
       int64_t *pts_start_from = NULL;

       if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
           printf( "Could not open input file '%s'", in_filename);
           goto end;
       }

       if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
           printf("Failed to retrieve input stream information");
           goto end;
       }

       av_dump_format(ifmt_ctx, 0, in_filename, 0);

       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
       if (!ofmt_ctx) {
           printf( "Could not create output context\n");
           ret = AVERROR_UNKNOWN;
           goto end;
       }

       ofmt = ofmt_ctx->oformat;

       //preparing streams
       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           AVStream *in_stream = ifmt_ctx->streams[i];
           AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
           if (!out_stream) {
               printf( "Failed allocating output stream\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }

           ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
           if (ret < 0) {
               printf( "Failed to copy context from input to output stream codec context\n");
               goto end;
           }
           out_stream->codec->codec_tag = 0;
           if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
               out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }
       av_dump_format(ofmt_ctx, 0, out_filename, 1);

       if (!(ofmt->flags & AVFMT_NOFILE)) {
           ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
           if (ret < 0) {
               printf( "Could not open output file '%s'", out_filename);
               goto end;
           }
       }
       //preparing the header
       ret = avformat_write_header(ofmt_ctx, NULL);
       if (ret < 0) {
           printf( "Error occurred when opening output file\n");
           goto end;
       }

       // av_seek_frame translates AV_TIME_BASE into an appropriate time base.
       ret = av_seek_frame(ifmt_ctx, -1, cutFrom*AV_TIME_BASE, AVSEEK_FLAG_ANY);
       if (ret < 0) {
           printf( "Error seek\n");
           goto end;
       }
       dts_start_from = (int64_t*)
               malloc(sizeof(int64_t) * ifmt_ctx->nb_streams);
       memset(dts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
       pts_start_from = (int64_t*)
               malloc(sizeof(int64_t) * ifmt_ctx->nb_streams);
       memset(pts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);

       //writing
       while (1) {
           AVStream *in_stream, *out_stream;
           //reading frame into pkt
           ret = av_read_frame(ifmt_ctx, &pkt);
           if (ret < 0)
               break;
           in_stream  = ifmt_ctx->streams[pkt.stream_index];
           out_stream = ofmt_ctx->streams[pkt.stream_index];

           //if end reached
           if (av_q2d(in_stream->time_base) * pkt.pts > cutUpTo) {
               av_packet_unref(&pkt);
               break;
           }


           // Recording the initial pts value for each stream
           // Recording dts does not do the trick because AVPacket.dts values
           // in some video files are larger than corresponding pts values
           // and ffmpeg does not like it.
           if (dts_start_from[pkt.stream_index] == 0) {
               dts_start_from[pkt.stream_index] = pkt.pts;
               printf("dts_initial_value: %f for stream index: %d \n",
                       (double)dts_start_from[pkt.stream_index],
                                   pkt.stream_index

               );
           }
           if (pts_start_from[pkt.stream_index] == 0) {
               pts_start_from[pkt.stream_index] = pkt.pts;
               printf( "pts_initial_value:  %f for stream index %d\n",
                       (double)pts_start_from[pkt.stream_index],
                                   pkt.stream_index);
           }

           log_packet(ifmt_ctx, &pkt, "in",loopCount);

           /* Computes pts etc
            *      av_rescale_q_rnd etc are countering changes in time_base between
            *      out_stream and in_stream, so regardless of time_base values for
            *      in and out streams, the rate at which frames are refreshed remains
            *      the same.
            *
                   pkt.pts = pkt.pts * (in_stream->time_base/ out_stream->time_base)
                   As `time_base == 1/frame_rate`, the above is an equivalent of

                   (out_stream_frame_rate/in_stream_frame_rate)*pkt.pts where
                   frame_rate is the number of frames to be displayed per second.

                   AV_ROUND_PASS_MINMAX may set pts or dts to AV_NOPTS_VALUE
            * */


           pkt.pts =
                   av_rescale_q_rnd(
                   pkt.pts - pts_start_from[pkt.stream_index],
                   in_stream->time_base,
                   out_stream->time_base,
                   (AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
           pkt.dts =
                   av_rescale_q_rnd(
                   pkt.dts - dts_start_from[pkt.stream_index],
                   in_stream->time_base,
                   out_stream->time_base,
                   AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);

           if(pkt.dts>pkt.pts) pkt.dts = pkt.pts -1;
           if(pkt.dts < 0) pkt.dts = 0;
           if(pkt.pts < 0) pkt.pts = 0;

           pkt.duration = av_rescale_q(
                   pkt.duration,
                   in_stream->time_base,
                   out_stream->time_base);
           pkt.pos = -1;
           log_packet(ofmt_ctx, &pkt, "out",loopCount);

           // Writes to the file after buffering packets enough to generate a frame
           // and probably sorting packets in dts order.
           ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
    //        ret = av_write_frame(ofmt_ctx, &pkt);
           if (ret < 0) {
               printf( "Error muxing packet %d \n", ret);
               //continue;
               break;
           }
           av_packet_unref(&pkt);
           ++loopCount;
       }

       //Writing end code?
       av_write_trailer(ofmt_ctx);

       end:
       avformat_close_input(&ifmt_ctx);

       if(dts_start_from)free(dts_start_from);
       if(pts_start_from)free(pts_start_from);

       /* close output */
       if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
           avio_closep(&ofmt_ctx->pb);
       avformat_free_context(ofmt_ctx);

       if (ret < 0 && ret != AVERROR_EOF) {
           //printf( "Error occurred: %s\n", av_err2str(ret));
           return 1;
       }

       return 0;
    }

    What is the problem

    My code does not produce the error, because I’m doing new_dts = current_dts - initial_pts_for_current_stream. It works, but now the dts values are not properly computed.

    How do I recalculate dts properly?
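
    One possible direction, sketched here rather than taken from the post: subtract a single per-stream offset (the smaller of the first packet’s dts and pts) from both timestamps before rescaling. Because both values shift by the same amount, the original dts-to-pts spacing is preserved and no pkt.dts = pkt.pts - 1 clamp is needed. The rebase_timestamps helper and StreamOffset struct below are illustrative names, not part of the code above:

    extern "C" {
    #include "libavformat/avformat.h"
    #include "libavutil/mathematics.h"
    }

    // One rebasing offset per stream, initialised from the first packet seen.
    struct StreamOffset {
        int64_t offset = AV_NOPTS_VALUE;   // subtracted from both pts and dts
    };

    // Shift pkt so the cut segment starts near zero, then rescale to the
    // output time base, keeping the dts <= pts relation of the input intact.
    static void rebase_timestamps(AVPacket* pkt,
                                  AVRational in_tb,
                                  AVRational out_tb,
                                  StreamOffset* st)
    {
        if (st->offset == AV_NOPTS_VALUE) {
            // Pick the smallest timestamp of the first packet as the offset,
            // so subtracting it can never push dts above pts.
            int64_t first = (pkt->dts != AV_NOPTS_VALUE) ? pkt->dts : pkt->pts;
            if (pkt->pts != AV_NOPTS_VALUE && pkt->pts < first)
                first = pkt->pts;
            st->offset = first;
        }

        if (pkt->pts != AV_NOPTS_VALUE)
            pkt->pts = av_rescale_q_rnd(pkt->pts - st->offset, in_tb, out_tb,
                    static_cast<AVRounding>(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        if (pkt->dts != AV_NOPTS_VALUE)
            pkt->dts = av_rescale_q_rnd(pkt->dts - st->offset, in_tb, out_tb,
                    static_cast<AVRounding>(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));

        pkt->duration = av_rescale_q(pkt->duration, in_tb, out_tb);
        pkt->pos = -1;
    }

    In the writing loop above, one StreamOffset per stream index (for example an array of size ifmt_ctx->nb_streams) would replace the dts_start_from/pts_start_from pair, and rebase_timestamps(&pkt, in_stream->time_base, out_stream->time_base, &offsets[pkt.stream_index]) would replace the two av_rescale_q_rnd calls and the manual clamps.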

    P.S

    Since Olaf seems to have a very strong opinion, I’m posting the build console message for my main.c.
    I don’t really know C or C++, but GNU GCC seems to be calling gcc for compiling and g++ for linking.
    Well, the extension of my main file is now .c and the compiler being called is gcc, so that should at least mean I have code written in the C language...

    ------------- Build: Debug in videoTrimmer (compiler: GNU GCC Compiler)---------------

    gcc -Wall -fexceptions -std=c99 -g -I/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include -I/usr/include -I/usr/local/include -c /home/d/CodeBlockWorkplace/videoTrimmer/main.c -o obj/Debug/main.o
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c: In function ‘log_packet’:
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:15:12: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘long int’ [-Wformat=]
        printf("loop count %d pts:%f dts:%f duration:%f stream_index:%d\n",
               ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c: In function ‘trimVideo’:
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:79:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
            AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
            ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘avcodec_copy_context’ is deprecated [-Wdeprecated-declarations]
            ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
            ^
    In file included from /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:319:0,
                    from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavcodec/avcodec.h:4286:5: note: declared here
    int avcodec_copy_context(AVCodecContext *dest, const AVCodecContext *src);
        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
            ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
            ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
            ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
            ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:91:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
            out_stream->codec->codec_tag = 0;
            ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:93:13: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
                out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
                ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    g++ -L/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib -L/usr/lib -L/usr/local/lib -o bin/Debug/videoTrimmer obj/Debug/main.o   ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavformat.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavcodec.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavutil.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libswresample.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libswscale.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavfilter.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libpostproc.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavdevice.a -lX11 -lvdpau -lva -lva-drm -lva-x11 -ldl -lpthread -lz -llzma -lx264
    Output file is bin/Debug/videoTrimmer with size 77.24 MB
    Process terminated with status 0 (0 minute(s), 16 second(s))
    0 error(s), 8 warning(s) (0 minute(s), 16 second(s))