Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • ffmpeg mass add intro to mp4 videos

    2 February 2016, by F3rnand

    I found this script on a forum, where a user suggested using it to prepend an intro to my mp4 files in bulk:

    for %a in ("*.mp4") do mencoder -ovc copy -oac copy c:\intro.mp4 "%a" -o "c:\temp\%%a"
    

    When I run this command in the ffmpeg prompt on my Windows PC, nothing happens. My files are in the locations shown above.

    Can anybody explain how to fix this? I would also like to know how to append an outro to all my mp4 files in the same way.

    Thanks

    As requested by mulvya, here is the ffprobe output for the intro; it is the same for all videos, as they are generated by the same software:

    c:\ffmpeg>ffprobe intro.mp4
    ffprobe version N-78197-g5893e87 Copyright (c) 2007-2016 the FFmpeg developers
      built with gcc 5.2.0 (GCC)
      configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
      libavutil      55. 16.101 / 55. 16.101
      libavcodec     57. 22.102 / 57. 22.102
      libavformat    57. 23.101 / 57. 23.101
      libavdevice    57.  0.101 / 57.  0.101
      libavfilter     6. 27.100 /  6. 27.100
      libswscale      4.  0.100 /  4.  0.100
      libswresample   2.  0.101 /  2.  0.101
      libpostproc    54.  0.100 / 54.  0.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'intro.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf55.30.100
      Duration: 00:00:03.00, start: 0.000000, bitrate: 216 kb/s
        Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720, 212 kb/s, 24 fps, 24 tbr, 12288 tbn, 48 tbc (default)
        Metadata:
          handler_name    : VideoHandler
    

    I tried the commands suggested by VC.One:

    ffmpeg -i c:\intro.mp4 -i c:\mecabo2.mp4 -i c:\outro.mp4 -filter_complex concat=n=3:v=1:a=1 -f MP4 -y output.mp4
    for %a in ("*.mp4") do ffmpeg -i c:\intro.mp4 -i "%a" -i c:\outro.mp4 -filter_complex concat=n=3:v=1:a=1 -f MP4 -y "c:\output\%%a"
    

    but both return the same error:

    Cannot find a matching stream for unlabeled input pad 3 on filter Parsed_concat_0
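
    The ffprobe output above points at the cause: intro.mp4 has only a video stream, so with a=1 the concat filter has no audio pad to match for that input (ffmpeg numbers the pads internally, hence "unlabeled input pad 3"). A minimal sketch, assuming the main clip is mecabo2.mp4 and that dropping audio entirely is acceptable; it only prints the command (dry run) rather than executing it:

```shell
# Dry-run sketch: label every concat pad explicitly and set a=0, since
# the intro (and presumably the outro) carries no audio stream.
# File names are the ones from the question; adjust as needed.
filter='[0:v][1:v][2:v]concat=n=3:v=1:a=0[v]'
cmd="ffmpeg -i intro.mp4 -i mecabo2.mp4 -i outro.mp4 -filter_complex \"$filter\" -map \"[v]\" -y output.mp4"
echo "$cmd"
```

    If audio must be kept, the silent clips would instead need an audio track added first (for example from the anullsrc lavfi source), so that every input offers both a video and an audio stream to concat.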
    
  • Will Adult Script Pro work with Google App Engine as server? [on hold]

    1 February 2016, by niko nørre

    I want to use Google App Engine as the server for my website and app, but a programmer needs to set the script up for me. The script is called Adult Script Pro, and it needs FFmpeg and more to work.

    Does Google App Engine provide me with SSH root access?

    The requirements are the following:

    Apache, Lighttpd or Nginx Server (with rewrite support)

    MySQL 5.x

    PHP >= 5.2.x (mod_php or CGI/FastCGI)

    GD2 Support

    MySQL Support

    CURL Support

    SimpleXML Support

    FTP Support

    PCRE with UTF8/Unicode Properties

    PHP CLI >= 5.2.x (see above for support)

    FFmpeg >= 0.11.5 (with support for lame, x264, theora, vpx, xvid, faac, faad2, amr, webm, jpeg, png, gif and freetype)

    Will the site work with Google App Engine as the server?

    Please help. Thanks, Nikolaj

  • How to capture images from an HTTP webcam livestream

    1 February 2016, by mcExchange

    I need to capture images from a webcam livestream that is provided here

    Has anyone an idea how to do it?

    I found this answer, but it doesn't help me. When I enter:

    ffmpeg -i rtmp://85.126.233.214/heidelberg-live/stream1.flv -r 1 -f image2 -vcodec mjpeg captured%d.jpg
    

    It returns

    [rtmp @ 0xc45f40] Cannot open connection tcp://85.126.233.214:1935
    rtmp://85.126.233.214/heidelberg-live/stream1.flv: Connection timed out
    Conversion failed!
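
    The "Connection timed out" message is a transport failure rather than a command-syntax problem: either port 1935 is blocked, the server is down, or the stream path has changed. A sketch of a two-step approach (probe the endpoint first, then grab one frame per second); the URL is the one from the question and may no longer be valid, and -rtmp_live live marks the stream as live for ffmpeg's native RTMP client. The script only prints the commands (dry run):

```shell
# Dry run: print the probe and capture commands rather than running
# them, since the RTMP endpoint may be unreachable.
url='rtmp://85.126.233.214/heidelberg-live/stream1.flv'
probe="ffprobe -rtmp_live live \"$url\""
grab="ffmpeg -rtmp_live live -i \"$url\" -r 1 -f image2 captured%d.jpg"
printf '%s\n%s\n' "$probe" "$grab"
```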
    

    This is what the livestream part of the website looks like:

    
    
  • How to make ffmpeg hwaccel work? [on hold]

    1 February 2016, by user2170324

    The video plays, but I can't get hardware-accelerated output.

    In ff_find_hwaccel, it finds an h264 hwaccel with AV_PIX_FMT_VDPAU, while the video has PIX_FMT_YUV420P.

    I'm trying to decode the video on the GPU.

    I don't know what I'm missing. Can someone help?

    extern "C"{
    // The include paths below were reconstructed: the paste had lost the
    // angle brackets and the library directory prefixes.
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #include <libavutil/time.h>
    #include <libavutil/avstring.h>
    #include <libavcodec/vdpau.h>
    #include <vdpau/vdpau_x11.h>
    }
    
    #include <SDL2/SDL.h>   // SDL2: SDL_CreateWindow/SDL_CreateRenderer are SDL2 APIs
    // The next three includes were blank in the paste; stdio.h (printf, FILE)
    // and stdint.h (uint8_t) are needed by the code, stdlib.h is a guess.
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    
    
    AVHWAccel *ff_find_hwaccel(AVCodecID codec_id,  PixelFormat  pix_fmt){
        AVHWAccel *hwaccel=NULL;
        while((hwaccel= av_hwaccel_next(hwaccel))){
            if (   hwaccel->id      == codec_id
                && hwaccel->pix_fmt == pix_fmt)
                return hwaccel;
        }
        return NULL;
    }
    
    int main(){
        AVFormatContext *pFormatCtx;
        int             i, videoindex;
        AVCodecContext  *pCodecCtx;
        AVCodec         *pCodec;
        AVFrame *pFrame,*pFrameYUV;
        uint8_t *out_buffer;
        AVPacket *packet;
        int y_size;
        int ret, got_picture;
        struct SwsContext *img_convert_ctx;
    
        char filepath[]="video.mp4";
        //SDL---------------------------
        int screen_w=0,screen_h=0;
        SDL_Window *screen; 
        SDL_Renderer* sdlRenderer;
        SDL_Texture* sdlTexture;
        SDL_Rect sdlRect;
    
        FILE *fp_yuv;
    
        av_register_all();
        avformat_network_init();
        pFormatCtx = avformat_alloc_context();
    
        if(avformat_open_input(&pFormatCtx,filepath,NULL,NULL)!=0){
            printf("Couldn't open input stream.\n");
            return -1;
        }
        if(avformat_find_stream_info(pFormatCtx,NULL)<0){
            printf("Couldn't find stream information.\n");
            return -1;
        }
        videoindex=-1;
        for(i = 0; i < pFormatCtx->nb_streams; i++)
            if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO){
                videoindex=i;
                break;
            }
        if(videoindex==-1){
            printf("Didn't find a video stream.\n");
            return -1;
        }
    
        pCodecCtx=pFormatCtx->streams[videoindex]->codec;
        pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
    
    
        if(pCodec==NULL){
            printf("Codec not found.\n");
            return -1;
        }
        if(avcodec_open2(pCodecCtx, pCodec,NULL)<0){
            printf("Could not open codec.\n");
            return -1;
        }
    
    
        pCodecCtx->hwaccel = ff_find_hwaccel(pCodecCtx->codec->id, PIX_FMT_YUV420P);
    
        //pCodecCtx->pix_fmt=AV_PIX_FMT_VDPAU;
    
        pFrame=av_frame_alloc();
        pFrameYUV=av_frame_alloc();
        out_buffer=(uint8_t *)av_malloc(avpicture_get_size(pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height));
        avpicture_fill((AVPicture *)pFrameYUV, out_buffer, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);
        packet=(AVPacket *)av_malloc(sizeof(AVPacket));
        //Output Info-----------------------------
        printf("--------------- File Information ----------------\n");
        av_dump_format(pFormatCtx,0,filepath,0);
        printf("-------------------------------------------------\n");
        img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, 
            pCodecCtx->width, pCodecCtx->height,pCodecCtx->pix_fmt, SWS_BICUBIC, NULL, NULL, NULL); 
    
    
        if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {  
            printf( "Could not initialize SDL - %s\n", SDL_GetError()); 
            return -1;
        } 
    
        screen_w = pCodecCtx->width;
        screen_h = pCodecCtx->height;
        screen = SDL_CreateWindow("Simplest ffmpeg player's Window", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
            screen_w, screen_h,
            SDL_WINDOW_OPENGL);
    
        if(!screen) {  
            printf("SDL: could not create window - exiting:%s\n",SDL_GetError());  
            return -1;
        }
    
        sdlRenderer = SDL_CreateRenderer(screen, -1, 0);  
    
        sdlTexture = SDL_CreateTexture(sdlRenderer, SDL_PIXELFORMAT_IYUV, SDL_TEXTUREACCESS_STREAMING,pCodecCtx->width,pCodecCtx->height);  
    
        sdlRect.x=0;
        sdlRect.y=0;
        sdlRect.w=screen_w;
        sdlRect.h=screen_h;
    
    
    
    
        //SDL End----------------------
        while(av_read_frame(pFormatCtx, packet)>=0){
            if(packet->stream_index==videoindex){
                ret = avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet);
                if(ret < 0){
                    printf("Decode Error.\n");
                    return -1;
                }
                if(got_picture){
                    sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, 
                        pFrameYUV->data, pFrameYUV->linesize);
    
    
                    SDL_UpdateTexture( sdlTexture, NULL, pFrameYUV->data[0], pFrameYUV->linesize[0] );  
    
    
                    SDL_RenderClear( sdlRenderer );  
                    SDL_RenderCopy( sdlRenderer, sdlTexture,  NULL, &sdlRect);  
                    SDL_RenderPresent( sdlRenderer );  
                    SDL_Delay(40);
                }
            }
            av_free_packet(packet);
        }
        while (1) {
            ret = avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet);
            if (ret < 0)
                break;
            if (!got_picture)
                break;
            sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, 
                pFrameYUV->data, pFrameYUV->linesize);
    
            SDL_UpdateTexture( sdlTexture, &sdlRect, pFrameYUV->data[0], pFrameYUV->linesize[0] );  
            SDL_RenderClear( sdlRenderer );  
            SDL_RenderCopy( sdlRenderer, sdlTexture,  NULL, &sdlRect);  
            SDL_RenderPresent( sdlRenderer );  
            SDL_Delay(40);
        }
    
        sws_freeContext(img_convert_ctx);
        SDL_Quit();
        av_frame_free(&pFrameYUV);
        av_frame_free(&pFrame);
        avcodec_close(pCodecCtx);
        avformat_close_input(&pFormatCtx);
    
        return 0;
    }
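
    Independent of the player code, it may be worth confirming that the ffmpeg build and the machine expose VDPAU at all. A small sketch; it assumes only that ffmpeg may or may not be on PATH:

```shell
# Report whether this ffmpeg build lists vdpau among its hwaccels.
if command -v ffmpeg >/dev/null 2>&1; then
    if ffmpeg -hide_banner -hwaccels 2>/dev/null | grep -qi vdpau; then
        status=present
    else
        status=absent
    fi
else
    status=unknown   # ffmpeg not on PATH
fi
echo "vdpau hwaccel: $status"
```

    When the hwaccel is absent from the build, no amount of work in the decoding loop will enable it; the machine's VDPAU driver can be checked separately with the vdpauinfo tool.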
    
  • Maintaining timecode using -ss trim tool

    1 February 2016, by Alex Noble

    I'm currently using this FFmpeg command (via "run shell script" in Automator) on QuickTime ProRes files to strip off the first six channels of audio, pass the remaining audio and video through, and trim the first 6.5 seconds off the beginning of the video:

    for f in "$@"
    do
    /usr/local/bin/ffmpeg -ss 6.5 -i "$f" -c:v copy -map 0:0 -c:a copy -map 0:7  "${f%.*}_ST.mov"
    done
    

    When I use this script, it successfully trims the file but then moves the original timecode up to the new beginning of the clip. So if 00:59:48:00 was my timecode at the beginning of the original clip, it's now also the starting timecode of the beginning of my trimmed clip.

    My question is how can I trim 6.5 seconds off the beginning while also trimming that same amount of time off my timecode as well?

    So instead of my trimmed clip (let's say 23.98 fps) starting at 00:59:48:00, it would start at 00:59:54:12 since 6.5 seconds (roughly 156 frames) have been trimmed.
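
    ffmpeg carries the source timecode through on a stream copy, but it can be overridden with the -timecode output option. A sketch of the arithmetic, assuming non-drop-frame 23.976 timecode (which counts 24 frames per second on the frame counter), matching the question's numbers; input.mov and output_ST.mov are placeholder names:

```shell
# Shift a non-drop-frame timecode by the trimmed duration:
# 6.5 s on a 24-frame counter is 156 frames.
old_tc='00:59:48:00'
new_tc=$(echo "$old_tc" | awk -F: -v add=156 '{
    t = ($1 * 3600 + $2 * 60 + $3) * 24 + $4 + add
    printf "%02d:%02d:%02d:%02d", t / 86400, (t % 86400) / 1440, (t % 1440) / 24, t % 24
}')
echo "$new_tc"   # 00:59:54:12
cmd="ffmpeg -ss 6.5 -i input.mov -c:v copy -map 0:0 -c:a copy -map 0:7 -timecode $new_tc output_ST.mov"
```

    Folding this into the Automator loop, the shifted value would be computed once per file and passed as -timecode alongside the existing copy options.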