Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • ffmpeg: Audio/Video Fade in/out

    31 May 2017, by dazzafact

    I have this working script with audio fading. How can I also add a fade-in and fade-out for the video? It always gives me this error:

    "Option filter:v (set stream filtergraph) cannot be applied to input url ./mp3/conv/1.m4a -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to."

    This works with audio fading:

     ffmpeg  -ss 00:00:00 -t 90 -i "concat:intermediate0.ts|concat:intermediate1.ts"  
    -i "./mp3/conv/1.m4a" -af "afade=t=out:st=84:d=6"  -map 0:v:0 -map 1:a:0 
    video/out515.mp4 -y
    

    This doesn't work with audio + video fading:

    ffmpeg  -ss 00:00:00 -t 90 -i  "concat:intermediate0.ts|intermediate1.ts" 
    -filter:v 'fade=in:0:30,fade=out:250:30' -i "./mp3/conv/1.m4a" 
    -af "afade=t=out:st=84:d=6" -map 0:v:0 -map 1:a:0  video/out515.mp4 -y
    
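    The error message says it directly: -filter:v is an output option, but in the failing command it sits between the two inputs, so ffmpeg tries to apply it to ./mp3/conv/1.m4a. A sketch of the likely fix, assuming the same file names and timings as above, with the video filter moved after the last -i:

```shell
# Output options (-filter:v, -af, -map) placed after all inputs.
ffmpeg -ss 00:00:00 -t 90 -i "concat:intermediate0.ts|intermediate1.ts" \
  -i "./mp3/conv/1.m4a" \
  -filter:v "fade=in:0:30,fade=out:250:30" \
  -af "afade=t=out:st=84:d=6" \
  -map 0:v:0 -map 1:a:0 video/out515.mp4 -y
```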
  • Using videoshow (npm module) with ffmpeg to convert audio+image into video

    31 May 2017, by delesslin

    I'm trying to use the videoshow utility to combine a short audio clip with an image on my ubuntu system. I installed ffmpeg globally while in the root directory using:

    sudo apt-get install ffmpeg
    

    I then installed videoshow inside the project folder using:

    sudo npm install videoshow
    

    The project folder contains 3 files plus the node_modules folder: an image (wolf.jpg), an audio clip (wolf.mp3), and a js file (audio.js). I derived audio.js from an example script on the videoshow github page. Here is my script:

    var videoshow = require('videoshow')
    
    var images = [
      "wolf.jpg"
    ]
    
    var videoOptions = {
      fps: 25,
      loop: 5, // seconds
      transition: true,
      transitionDuration: 1, // seconds
      videoBitrate: 1024,
      videoCodec: 'libx264',
      size: '640x?',
      audioBitrate: '128k',
      audioChannels: 2,
      format: 'mp4',
      pixelFormat: 'yuv420p'
    }
    
    videoshow(images, videoOptions)
      .audio('wolf.mp3')
      .save('wolf.mp4')
      .on('start', function (command) {
        console.log('ffmpeg process started:', command)
      })
      .on('error', function (err, stdout, stderr) {
        console.error('Error:', err)
        console.error('ffmpeg stderr:', stderr)
      })
      .on('end', function (output) {
        console.error('Video created in:', output)
      })
    

    In the terminal, inside the project folder I then call:

    node audio.js
    

    The terminal is silent for a moment followed by:

        ffmpeg process started: ffmpeg -i /tmp/videoshow-db63732f-7376-4663-a7bc-c061091e579a -y -filter_complex concat=n=1:v=1:a=0 wolf.mp4
    ffmpeg process started: ffmpeg -i /tmp/videoshow-1f8851b4-c297-4070-a249-3624970dbb85 -i wolf.mp3 -y -b:a 128k -ac 2 -r 25 -b:v 1024k -vcodec libx264 -filter:v scale=w=640:h=trunc(ow/a/2)*2 -f mp4 -map 0:0 -map 1:0 -t 5 -af afade=t=in:ss=0:st=0:d=3 -af afade=t=out:st=2:d=3 -pix_fmt yuv420p wolf.mp4
    Error: [Error: ffmpeg exited with code 1: ]
    ffmpeg stderr: undefined
    

    I'm not sure why this isn't working, but any/all assistance would be deeply appreciated...
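    Since the error handler only reports stderr as undefined here, one way to surface the real failure is to copy the second logged command into a shell and run it by hand, quoting the filter expression so the shell does not eat the parentheses. Note that two separate -af options override each other (only the last one takes effect), so the two fades are combined into one chain here; the /tmp path is the one from the log and will differ on each run:

```shell
ffmpeg -i /tmp/videoshow-1f8851b4-c297-4070-a249-3624970dbb85 -i wolf.mp3 -y \
  -b:a 128k -ac 2 -r 25 -b:v 1024k -vcodec libx264 \
  -filter:v "scale=w=640:h=trunc(ow/a/2)*2" -f mp4 \
  -map 0:0 -map 1:0 -t 5 \
  -af "afade=t=in:st=0:d=3,afade=t=out:st=2:d=3" \
  -pix_fmt yuv420p wolf.mp4
```

    Running it directly prints ffmpeg's own diagnostics instead of the swallowed stderr.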

    Hawu'h (thanks), Roo

  • how to make video module with nodejs like facebook did with photos

    31 May 2017, by aharit

    Just wondering how to start? The needs:

    With around 10 photos, being able to produce a small video of 5-10 seconds, with animations for example (transitions?). I want to reproduce the Facebook video process, if anybody knows about that. Which technical stack is best: modules (ffmpeg, an ffmpeg wrapper), Python, Node.js?

    Thx
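    As a starting point, plain ffmpeg (which both the Python and Node.js wrappers ultimately shell out to) can already turn a handful of photos into a short clip. A minimal sketch, assuming the photos are named photo01.jpg through photo10.jpg:

```shell
# Show each photo for 1 second; encode a 10-second H.264 clip.
ffmpeg -framerate 1 -i photo%02d.jpg \
  -c:v libx264 -r 25 -pix_fmt yuv420p slideshow.mp4
```

    Crossfade transitions between stills need a filter graph on top of this, which is where wrapper modules like videoshow earn their keep.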

  • FFMPEG trim by remux doesn't write a keyframe

    31 May 2017, by DweebsUnited

    I am using the FFMPEG libraries to trim video files. I do this all as a remux, with no encoding or decoding.

    Trimming currently works correctly with audio, but the trimmed video data appears as a solid color, with small squares of pixels changing. I believe this is because I am not catching/writing a keyframe. It is my understanding that av_seek_frame will seek to a keyframe, which does not seem to be the case.

    If need be, can I decode and then reencode just the first video frame I read after seeking? This will probably be more code than reencoding every frame, but speed is the primary issue here, not complexity.

    Thank you for any help. Also, I apologize if I am misunderstanding something to do with video files; I'm still new to this.

    Example output frame: (screenshot omitted)

    Code, adapted from the remux example provided with ffmpeg:

    const char *out_filename = "aaa.mp4";
    
    FILE *fp = fdopen(fd, "r");
    fseek(fp, 0, SEEK_SET);
    
    if ( fp ) {
    
        // Build an ffmpeg file
        char path[512];
        sprintf(path, "pipe:%d", fileno(fp));
    
        // Turn on verbosity
        av_log_set_level( AV_LOG_DEBUG );
        av_log_set_callback( avLogCallback );
    
    
        av_register_all();
        avcodec_register_all();
    
    
        AVOutputFormat *ofmt = NULL;
        AVFormatContext *ifmt_ctx = avformat_alloc_context(), *ofmt_ctx = NULL;
        AVPacket pkt;
        int ret, i;
    
    
        if ((ret = avformat_open_input(&ifmt_ctx, path, av_find_input_format("mp4"), NULL)) < 0) {
            LOG("Could not open input file '%s'", path);
            goto end;
        }
    
        if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
            LOG("Failed to retrieve input stream information", "");
            goto end;
        }
    
    
        avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
        if (!ofmt_ctx) {
            LOG("Could not create output context\n");
            ret = AVERROR_UNKNOWN;
            goto end;
        }
    
        ofmt = ofmt_ctx->oformat;
    
    
        for (i = 0; i < ifmt_ctx->nb_streams; i++) {
            AVStream *in_stream = ifmt_ctx->streams[i];
            AVStream *out_stream = avformat_new_stream(ofmt_ctx, NULL);
    
            if (!out_stream) {
                LOG("Failed allocating output stream\n");
                goto end;
            }
    
            ret = avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);
            if (ret < 0) {
                LOG("Failed to copy context from input to output stream codec context\n");
                goto end;
            }
            out_stream->codecpar->codec_tag = 0;
        }
    
        if (!(ofmt->flags & AVFMT_NOFILE)) {
            ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
            if (ret < 0) {
                LOG("Could not open output file '%s'", out_filename);
                goto end;
            }
        }
    
        ret = avformat_write_header(ofmt_ctx, NULL);
        if (ret < 0) {
            LOG("Error occurred when writing headers\n");
            goto end;
        }
    
        ret = av_seek_frame(ifmt_ctx, -1, from_seconds * AV_TIME_BASE, AVSEEK_FLAG_ANY);
        if (ret < 0) {
            LOG("Error seek\n");
            goto end;
        }
    
        int64_t *dts_start_from;
        int64_t *pts_start_from;
        dts_start_from = (int64_t *) malloc(sizeof(int64_t) * ifmt_ctx->nb_streams);
        memset(dts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
        pts_start_from = (int64_t *) malloc(sizeof(int64_t) * ifmt_ctx->nb_streams);
        memset(pts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
    
        while (1) {
            AVStream *in_stream, *out_stream;
    
            ret = av_read_frame(ifmt_ctx, &pkt);
            LOG("while %d", ret);
            LOG("Packet size: %d", pkt.size);
            LOG("Packet stream: %d", pkt.stream_index);
            if (ret < 0)
                break;
    
            in_stream = ifmt_ctx->streams[pkt.stream_index];
            out_stream = ofmt_ctx->streams[pkt.stream_index];
    
            if (av_q2d(in_stream->time_base) * pkt.pts > end_seconds) {
                av_packet_unref(&pkt);
                break;
            }
    
            if (dts_start_from[pkt.stream_index] == 0) {
                dts_start_from[pkt.stream_index] = pkt.dts;
                printf("dts_start_from: %s\n", av_ts_make_string((char[AV_TS_MAX_STRING_SIZE]){0},dts_start_from[pkt.stream_index]));
            }
            if (pts_start_from[pkt.stream_index] == 0) {
                pts_start_from[pkt.stream_index] = pkt.pts;
                printf("pts_start_from: %s\n", av_ts_make_string((char[AV_TS_MAX_STRING_SIZE]){0},pts_start_from[pkt.stream_index]));
            }
    
            /* copy packet */
            pkt.pts = ::av_rescale_q_rnd(pkt.pts - pts_start_from[pkt.stream_index], in_stream->time_base, out_stream->time_base, (AVRounding) (AV_ROUND_NEAR_INF |
                                                                                                                                                AV_ROUND_PASS_MINMAX));
            pkt.dts = ::av_rescale_q_rnd(pkt.dts - dts_start_from[pkt.stream_index], in_stream->time_base, out_stream->time_base, (AVRounding) (AV_ROUND_NEAR_INF |
                                                                                                                                                AV_ROUND_PASS_MINMAX));
            if (pkt.pts < 0) {
                pkt.pts = 0;
            }
            if (pkt.dts < 0) {
                pkt.dts = 0;
            }
            pkt.duration = (int) av_rescale_q((int64_t) pkt.duration, in_stream->time_base, out_stream->time_base);
            pkt.pos = -1;
            printf("\n");
    
            ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
            if (ret < 0) {
                LOG("Error muxing packet\n");
                break;
            }
            av_packet_unref(&pkt);
        }
        free(dts_start_from);
        free(pts_start_from);
    
        av_write_trailer(ofmt_ctx);
    
        end:
        LOG("END");
    
        avformat_close_input(&ifmt_ctx);
    
        // Close output
        if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
            avio_closep(&ofmt_ctx->pb);
        avformat_free_context(ofmt_ctx);
    
        if (ret < 0 && ret != AVERROR_EOF) {
            LOG("-- Error occurred: %s\n", av_err2str(ret));
            return 1;
        }
    }
    
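    One thing worth noting in the seek call above: AVSEEK_FLAG_ANY allows the demuxer to land on non-keyframes, whereas AVSEEK_FLAG_BACKWARD seeks to the keyframe at or before the requested time, which matches the "solid color until the next keyframe" symptom. The command-line tool shows the same keyframe-limited behavior when trimming by stream copy; a sketch, with file names and times assumed:

```shell
# -ss before -i seeks by keyframe; -c copy remuxes without re-encoding.
ffmpeg -ss 10 -i input.mp4 -t 30 -c copy -avoid_negative_ts make_zero aaa.mp4
```

    The trade-off is that the cut point snaps to the previous keyframe rather than the exact requested second, unless the first GOP is re-encoded.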
  • Audio broadcasting using ffmpeg and rtmp protocol

    31 May 2017, by Sachin Jose

    I have an application to broadcast video (YouTube). I am trying to implement a separate audio-broadcasting feature. How do we broadcast audio using ffmpeg over the RTMP protocol? I need help with the ffmpeg command-line arguments.

    I have something like:

    ffmpeg -i input.mp3 -re -acodec libmp3lame -ab 64k -ac 1 -ar 44100 -g 75 -qscale 21 -f flv rtmplink and key
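    For an audio-only RTMP stream, a cleaned-up sketch (the server URL and stream key below are placeholders; note that -re must come before -i to pace reading at real time, and that -g and -qscale are video options with no effect when no video stream is sent):

```shell
ffmpeg -re -i input.mp3 -vn \
  -acodec libmp3lame -b:a 64k -ac 1 -ar 44100 \
  -f flv "rtmp://example.com/live/streamkey"
```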