Other articles (95)

  • Updating from version 0.1 to 0.2

    24 June 2013, by

    Explanation of the notable changes involved in upgrading MediaSPIP from version 0.1 to version 0.2. What's new?
    Regarding software dependencies: use of the latest versions of FFmpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for retrieving metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favor of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Personalizing by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP via the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news creation form.
    News creation form: for a document of the news type, the default fields are: publication date (customize the publication date) (...)

On other sites (8049)

  • How to compensate frame rate underrun while muxing video to an MP4 container with libav

    17 September 2021, by Nuno Santos

    I have a process that generates video frames in real time. I'm muxing the generated stream of video frames into a video file (x264 codec in an MP4 container).

    I'm using ffmpeg-libav and basing my code on the muxing.c example. The problem with the example is that it isn't a real-world scenario: frames are generated in a while loop for a given stream duration, never missing a frame.

    In my program, frames are supposed to be generated at FPS frames per second, but depending on hardware capacity it may produce fewer. When I initialize the video stream context, I declare that the frame rate is FPS:

    AVRational r = { 1, FPS };
    ost->st->time_base = r;

    This declares that the video has a frame rate of FPS, but if fewer frames are produced, playback runs fast, because the file is still played back as if it contained all of the declared frames per second. For example, 300 frames stamped 0, 1, 2, ... at a declared 30 fps always play back in 10 seconds, even if generating them took 20 seconds.

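    For reference, in the muxing.c pattern the same value is also installed on the encoder context (ost->enc in muxing.c's OutputStream), so that the later rescaling of packet timestamps from codec to stream time base lines up. A minimal sketch continuing the snippet above:

    /* muxing.c convention: stream and encoder share the 1/FPS time base */
    ost->st->time_base = r;
    ost->enc->time_base = ost->st->time_base;
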
    After googling a lot about this topic, I understand that the key to fixing this is to manipulate pts and dts, but I still haven't found a solution that works.

    There are two key functions when writing video frames in the muxing.c example, routines that I'm also using in my program:

    AVFrame* get_video_frame(int timestamp, OutputStream *ost, const QImage &image)
{
    /* when we pass a frame to the encoder, it may keep a reference to it
     * internally; make sure we do not overwrite it here */
    if (av_frame_make_writable(ost->frame) < 0)
        exit(1);

    av_image_fill_arrays(ost->tmp_frame->data, ost->tmp_frame->linesize, image.bits(), AV_PIX_FMT_RGBA, ost->frame->width, ost->frame->height, 8);
    libyuv::ABGRToI420(ost->tmp_frame->data[0], ost->tmp_frame->linesize[0], ost->frame->data[0], ost->frame->linesize[0], ost->frame->data[1], ost->frame->linesize[1], ost->frame->data[2], ost->frame->linesize[2], ost->tmp_frame->width, -ost->tmp_frame->height);

    #if 1 // this is my attempt to rescale pts, but crashes with pts < dts
    ost->frame->pts = av_rescale_q(timestamp, AVRational{1, 1000}, ost->st->time_base);
    #else
    ost->frame->pts = ost->next_pts++;
    #endif

    return ost->frame;
}

    In the original code, the pts is simply an incrementing integer for each frame. What I'm trying to do is pass a timestamp in milliseconds since the beginning of the recording so that I can rescale the pts. When I rescale the pts, the program crashes, complaining that pts is lower than dts.

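    A sketch of that rescaling with a monotonicity guard (an assumption, not muxing.c code: next_pts is reused here to remember the last pts handed to the encoder, and the encoder context is assumed to share the 1/FPS time base declared above):

    // map the ms timestamp onto the declared time base, then force it to
    // be strictly increasing: encoders and muxers reject a pts that
    // repeats or goes backwards
    int64_t pts = av_rescale_q(timestamp, AVRational{1, 1000}, ost->st->time_base);
    if (pts <= ost->next_pts)
        pts = ost->next_pts + 1;
    ost->next_pts = pts;
    ost->frame->pts = pts;
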
    From what I've been reading, the pts/dts manipulation is supposed to be done at the packet level, so I have also tried to manipulate things in the write_frame routine, without success.

    int write_frame(AVFormatContext *fmt_ctx, AVCodecContext *c, AVStream *st, AVFrame *frame)
{
    int ret;

    // send the frame to the encoder
    ret = avcodec_send_frame(c, frame);

    if (ret<0)
    {
        fprintf(stderr, "Error sending a frame to the encoder\n");
        exit(1);
    }

    while (ret >= 0)
    {
        AVPacket pkt = { 0 };

        ret = avcodec_receive_packet(c, &pkt);

        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        {
            break;
        }
        else if (ret<0)
        {
            //fprintf(stderr, "Error encoding a frame: %s\n", av_err2str(ret));
            exit(1);
        }

        /* rescale output packet timestamp values from codec to stream timebase */
        av_packet_rescale_ts(&pkt, c->time_base, st->time_base);
        pkt.stream_index = st->index;

        /* Write the compressed frame to the media file. */
        //log_packet(fmt_ctx, &pkt);
        ret = av_interleaved_write_frame(fmt_ctx, &pkt);
        av_packet_unref(&pkt);

        if (ret < 0)
        {
            //fprintf(stderr, "Error while writing output packet: %s\n", av_err2str(ret));
            exit(1);
        }
    }

    return ret == AVERROR_EOF ? 1 : 0;
}

    How should I manipulate dts and pts so that the video plays back correctly even when fewer frames are produced than declared at stream initialization? Where should I do that manipulation? In get_video_frame? In write_frame? In both?

    Am I heading in the right direction? What am I missing?

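    One direction that is often suggested for the packet-level side is simply to keep dts non-decreasing per stream before av_interleaved_write_frame. A sketch, assuming a hypothetical int64_t last_dts kept alongside the output stream (not part of muxing.c), placed after the av_packet_rescale_ts call in the receive loop:

    /* hypothetical guard: muxers require monotonically increasing dts,
     * and pts must never precede dts */
    if (pkt.dts != AV_NOPTS_VALUE && pkt.dts <= last_dts)
        pkt.dts = last_dts + 1;
    if (pkt.pts != AV_NOPTS_VALUE && pkt.pts < pkt.dts)
        pkt.pts = pkt.dts;
    last_dts = pkt.dts;
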
  • Transcoding WAV audio to AAC in an MP4 container using FFmpeg C API

    7 November 2022, by vstrom coder

    I'm trying to compose source video and audio into a final MP4 video. I have a problem with the WAV audio: after decoding and filtering, I'm getting an error from the output encoder: [aac @ 0x145e04c40] more samples than frame size

    I initially used the following filter graph (minimal reproducible example):

        abuffer -> aformat -> abuffersink

    At this point I was getting the error mentioned above.

    Then I tried to insert an aresample filter into the graph:

        abuffer -> aresample -> aformat -> abuffersink

    But I'm still getting the same error. This attempt was based on the fact that the ffmpeg CLI inserts this filter when converting WAV to MP4:

    Command:

        ffmpeg -i source.wav output.mp4 -loglevel debug

    The output contains:

        [graph_0_in_0_0 @ 0x138f06200] Setting 'time_base' to value '1/44100'
    [graph_0_in_0_0 @ 0x138f06200] Setting 'sample_rate' to value '44100'
    [graph_0_in_0_0 @ 0x138f06200] Setting 'sample_fmt' to value 's16'
    [graph_0_in_0_0 @ 0x138f06200] Setting 'channel_layout' to value 'mono'
    [graph_0_in_0_0 @ 0x138f06200] tb:1/44100 samplefmt:s16 samplerate:44100 chlayout:mono
    [format_out_0_0 @ 0x138f06620] Setting 'sample_fmts' to value 'fltp'
    [format_out_0_0 @ 0x138f06620] Setting 'sample_rates' to value '96000|88200|64000|48000|44100|32000|24000|22050|16000|12000|11025|8000|7350'
    [format_out_0_0 @ 0x138f06620] auto-inserting filter 'auto_aresample_0' between the filter 'Parsed_anull_0' and the filter 'format_out_0_0'
    [AVFilterGraph @ 0x138f060f0] query_formats: 4 queried, 6 merged, 3 already done, 0 delayed
    [auto_aresample_0 @ 0x138f06c30] [SWR @ 0x120098000] Using s16p internally between filters
    [auto_aresample_0 @ 0x138f06c30] ch:1 chl:mono fmt:s16 r:44100Hz -> ch:1 chl:mono fmt:fltp r:44100Hz
    Output #0, mp4, to 'output.mp4':
      Metadata:
        encoder         : Lavf59.27.100
      Stream #0:0, 0, 1/44100: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, delay 1024, 69 kb/s
        Metadata:
      encoder         : Lavc59.37.100 aac

    I'm trying to figure out whether I should use the SWR library directly, as exemplified in the transcode_aac example.

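    For context, the AAC encoder consumes fixed-size frames (1024 samples per channel for AAC-LC), and the error above appears when a filtered frame delivers more samples than that. One commonly used approach, sketched here with placeholder names (buffersink_ctx and enc_ctx stand for the question's own sink and encoder contexts), is to ask the buffer sink to emit frames of exactly the encoder's frame size; this mirrors what the ffmpeg CLI does internally:

    /* after configuring the graph and opening the AAC encoder, cap the
     * sink's output frame size so each pulled frame fits one encoder frame */
    if (!(enc_ctx->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE))
        av_buffersink_set_frame_size(buffersink_ctx, enc_ctx->frame_size);

    With that in place, the graph (with or without the explicit aresample) hands the encoder 1024-sample frames, which should avoid having to drive libswresample by hand for this particular error.
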
  • How can I reduce the latency when remuxing streams from one container format to another using ffmpeg

    19 October 2017, by Alan Xie

    I'm using ffmpeg to remux streams from one container format to another (from H.264 to FLV) with the following code:

    #include <libavutil/timestamp.h>
    #include <libavformat/avformat.h>
    static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)
    {
       AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
       printf("%s: pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
              tag,
              av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
              av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
              av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
              pkt->stream_index);
    }
    int main(int argc, char **argv)
    {
       AVOutputFormat *ofmt = NULL;
       AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
       AVPacket pkt;
       const char *in_filename, *out_filename;
       int ret, i;
       int stream_index = 0;
       int *stream_mapping = NULL;
       int stream_mapping_size = 0;
       if (argc < 3) {
           printf("usage: %s input output\n"
                  "API example program to remux a media file with libavformat and libavcodec.\n"
                  "The output format is guessed according to the file extension.\n"
                  "\n", argv[0]);
           return 1;
       }
       in_filename  = argv[1];
       out_filename = argv[2];
       av_register_all();
       if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
           fprintf(stderr, "Could not open input file '%s'", in_filename);
           goto end;
       }
       if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
           fprintf(stderr, "Failed to retrieve input stream information");
           goto end;
       }
       av_dump_format(ifmt_ctx, 0, in_filename, 0);
       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
       if (!ofmt_ctx) {
           fprintf(stderr, "Could not create output context\n");
           ret = AVERROR_UNKNOWN;
           goto end;
       }
       stream_mapping_size = ifmt_ctx->nb_streams;
       stream_mapping = av_mallocz_array(stream_mapping_size, sizeof(*stream_mapping));
       if (!stream_mapping) {
           ret = AVERROR(ENOMEM);
           goto end;
       }
       ofmt = ofmt_ctx->oformat;
       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           AVStream *out_stream;
           AVStream *in_stream = ifmt_ctx->streams[i];
           AVCodecParameters *in_codecpar = in_stream->codecpar;
           if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
               in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
               in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
               stream_mapping[i] = -1;
               continue;
           }
           stream_mapping[i] = stream_index++;
           out_stream = avformat_new_stream(ofmt_ctx, NULL);
           if (!out_stream) {
               fprintf(stderr, "Failed allocating output stream\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }
           ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
           if (ret < 0) {
               fprintf(stderr, "Failed to copy codec parameters\n");
               goto end;
           }
           out_stream->codecpar->codec_tag = 0;
       }
       av_dump_format(ofmt_ctx, 0, out_filename, 1);
       if (!(ofmt->flags & AVFMT_NOFILE)) {
           ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
           if (ret < 0) {
               fprintf(stderr, "Could not open output file '%s'", out_filename);
               goto end;
           }
       }
       ret = avformat_write_header(ofmt_ctx, NULL);
       if (ret < 0) {
           fprintf(stderr, "Error occurred when opening output file\n");
           goto end;
       }
       while (1) {
           AVStream *in_stream, *out_stream;
           ret = av_read_frame(ifmt_ctx, &pkt);
           if (ret < 0)
               break;
           in_stream  = ifmt_ctx->streams[pkt.stream_index];
           if (pkt.stream_index >= stream_mapping_size ||
               stream_mapping[pkt.stream_index] < 0) {
               av_packet_unref(&pkt);
               continue;
           }
           pkt.stream_index = stream_mapping[pkt.stream_index];
           out_stream = ofmt_ctx->streams[pkt.stream_index];
           log_packet(ifmt_ctx, &pkt, "in");
           /* copy packet */
           pkt.pts = getPts();
           pkt.dts = pkt.pts;
           pkt.duration = getDuration();
           pkt.pos = -1;
           log_packet(ofmt_ctx, &pkt, "out");
           ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
           if (ret < 0) {
               fprintf(stderr, "Error muxing packet\n");
               break;
           }
           av_packet_unref(&pkt);
       }
       av_write_trailer(ofmt_ctx);
    end:
       avformat_close_input(&ifmt_ctx);
       /* close output */
       if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
           avio_closep(&ofmt_ctx->pb);
       avformat_free_context(ofmt_ctx);
       av_freep(&stream_mapping);
       if (ret < 0 && ret != AVERROR_EOF) {
           fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
           return 1;
       }
       return 0;
    }

    But I found that it takes several seconds before the first AVPacket is written to the destination.
    My question is: how can I reduce that latency, i.e. make the output file start being written as soon as the main function is called?

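    As a note on where that delay usually comes from: avformat_open_input and avformat_find_stream_info buffer enough input to identify the streams, which for a raw H.264 feed can take seconds. A hedged sketch of the usual mitigations, dropped in where the input is opened (the option values are illustrative, not tuned):

    /* shrink input probing so the first packet is available sooner;
     * too small and stream detection may fail, so treat as a starting point */
    AVDictionary *opts = NULL;
    av_dict_set(&opts, "probesize", "32768", 0);        /* bytes to probe       */
    av_dict_set(&opts, "analyzeduration", "500000", 0); /* microseconds to scan */
    ret = avformat_open_input(&ifmt_ctx, in_filename, 0, &opts);
    av_dict_free(&opts);

    /* optionally ask the muxer to flush each packet instead of buffering */
    ofmt_ctx->flags |= AVFMT_FLAG_FLUSH_PACKETS;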