Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • pyinstaller and moviepy, ffmpeg works from terminal but not from finder

    12 November 2014, by Todd

    I am packaging python using pyinstaller 2.1 on OSX Mavericks. I have done this successfully in the past, but this is my first package that uses moviepy and ffmpeg. I use the following import:

    from moviepy.video.io import ffmpeg_reader
    

    Without this line in the code everything works fine, and I can launch the final package from its icon in Finder. With the moviepy import, it works if I launch it from the terminal like this:

    open ./myapp.app
    

    but it will not open if I click its icon in Finder (it opens briefly and crashes). I assume this has something to do with paths or environment variables that are set in the terminal but are not passed to my packaged app. I have tried various hidden imports in pyinstaller for moviepy and its dependencies, but nothing seems to work, and --debug mode hasn't provided much information to track it down. Any other ideas?

    Thanks!
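    One frequent cause: apps launched from Finder inherit launchd's minimal environment rather than the shell's, so PATH may not contain the directory holding the ffmpeg binary. A minimal sketch of a workaround, run before importing moviepy; the directory list is an assumption, adjust it to wherever ffmpeg actually lives:

    ```python
    import os

    # Finder-launched .app bundles do not read ~/.bash_profile, so PATH is
    # often missing the directory that holds ffmpeg. These locations are
    # assumptions -- adjust to your machine.
    FFMPEG_DIRS = ["/usr/local/bin", "/opt/local/bin"]

    def ensure_ffmpeg_on_path():
        """Prepend likely ffmpeg locations to PATH before moviepy is imported."""
        path = os.environ.get("PATH", "")
        extra = [d for d in FFMPEG_DIRS if d not in path]
        if extra:
            os.environ["PATH"] = os.pathsep.join(extra + [path])

    ensure_ffmpeg_on_path()
    # Only now import moviepy, so its ffmpeg lookup sees the updated PATH:
    # from moviepy.video.io import ffmpeg_reader
    ```

    Calling this at the very top of the entry script (before any moviepy import) keeps the fix inside the bundle, so it works the same from Finder and from the terminal.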

  • About ffmpeg: some puzzles about using a user-defined filter

    12 November 2014, by Tian Gao

    Hi all,

    I am just getting started with ffmpeg.

    I downloaded the source and want to write my own filter. The filter will be fairly complex, so I need a helper class, defined in C++, split across a .h file and a .cpp file; the filter file then uses that class. I know user-defined filters must follow a strict format, but the build keeps failing with:

    error: unknown type name 'class'
    

    If I add guards so that the class is only seen when compiling as C++, like this:

    #ifndef GY_FILTER_BLEND_H_
    #define GY_FILTER_BLEND_H_
    
    #include 
    #ifdef __cplusplus
    
    class testgy
    {
    public:
        int xx;
        testgy();
        ~testgy();
        void initial();
    
    };
    #endif
    
    #endif /* GY_FILTER_BLEND_H_ */
    

    then this error is shown instead:

    libavfilter/vf_gpu_scroll_left_right.c:64:5: error: unknown type name 'testgy'
    

    In vf_gpu_scroll_left_right.c, the relevant lines are:

    #include "gy_filter_blend.h"
    
    AVFILTER_DEFINE_CLASS(tp_scroll_left_right);
    static av_cold int init(AVFilterContext *ctx)
    {
    
        TpScrollLeftDownContext *ctx_ptr = ctx->priv;
    
        testgy testGY;
        testGY.initial();
    
        __android_log_print(4 ,"gyy_1112" ,"testGY   = %d", testGY.xx);
        __android_log_print(4 ,"gyy_1112" ,"str   = %s", ctx_ptr->synthesisCmd0);
    
        tt  = gpu_filter_new(FILTER_TYPE_SCROLL_LEFT);
    
        gl_program_object_id_scroll_horizontal = -1;
        video_width = -1;
        video_height =-1;
    
        return 0;
    }
    

    So the question is: how do I use a class defined in C++ from a C file? Has anyone done this before?

    Thanks a lot!
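    The root cause is that a C compiler cannot parse `class` at all, and the `#ifdef __cplusplus` guard above simply hides the class from C, which is why `testgy` then becomes an unknown type in the .c filter file. The usual pattern is to keep the class out of the header entirely and expose it through a small `extern "C"` wrapper around an opaque handle. The sketch below (wrapper names are ours, not from the question; the `initial()` body is a placeholder) shows the header part and the C++ implementation part in one file for brevity:

    ```cpp
    #include <cassert>

    // ---- gy_filter_blend.h (sketch): this is ALL the C filter file sees.
    // Only an opaque struct and plain C functions -- no 'class' keyword.
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct testgy_handle testgy_handle;   /* opaque to C */
    testgy_handle *testgy_create(void);
    void testgy_initial(testgy_handle *h);
    int  testgy_get_xx(const testgy_handle *h);
    void testgy_destroy(testgy_handle *h);

    #ifdef __cplusplus
    }
    #endif

    // ---- gy_filter_blend.cpp (sketch): compiled with g++, hidden from C.
    class testgy {
    public:
        int xx = 0;
        void initial() { xx = 42; }   // placeholder initialisation
    };

    struct testgy_handle { testgy impl; };

    extern "C" testgy_handle *testgy_create(void) { return new testgy_handle(); }
    extern "C" void testgy_initial(testgy_handle *h) { h->impl.initial(); }
    extern "C" int  testgy_get_xx(const testgy_handle *h) { return h->impl.xx; }
    extern "C" void testgy_destroy(testgy_handle *h) { delete h; }
    ```

    In vf_gpu_scroll_left_right.c (still compiled as C) the init function would then call `testgy_create()` / `testgy_initial()` instead of instantiating the class directly, and the object file from the .cpp is linked in with the C++ runtime (e.g. `-lstdc++`).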

  • stream audio and video data separately

    12 November 2014, by Nuwan

    I'm developing a C# video streaming application. At the moment I am able to capture video frames using OpenCV, encode them using ffmpeg, capture and encode audio using ffmpeg as well, and save the data into a queue.

    The problem I am facing is that when I send both streams simultaneously, one of them is lost: I stream by sending a video packet first and then an audio packet, but the player identifies the video stream first, starts to play it, and never plays the audio packets.

    Can anyone suggest a method for doing this? It would be greatly appreciated.
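    A common remedy is to mux both streams into a single container and interleave packets by timestamp, rather than sending each stream back-to-back; this is what ffmpeg's own `av_interleaved_write_frame` does on the C side. A minimal language-agnostic sketch of the interleaving step (shown in Python for brevity; the packet shape and names are illustrative, not from the question's code):

    ```python
    import heapq

    # Each packet is (timestamp_ms, payload). Both input queues are assumed
    # to be ordered by timestamp, as encoder output normally is.
    def interleave(video_packets, audio_packets):
        """Merge two timestamp-ordered packet queues into one send order."""
        return list(heapq.merge(video_packets, audio_packets, key=lambda p: p[0]))

    video = [(0, "V"), (40, "V"), (80, "V")]        # ~25 fps video
    audio = [(0, "A"), (23, "A"), (46, "A"), (69, "A")]  # ~23 ms audio frames
    send_order = interleave(video, audio)
    ```

    Sending in this merged order means the player receives audio and video packets whose timestamps advance together, so it can open and play both streams instead of latching onto whichever arrived first.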

  • ffmpeg rtmp connection params

    12 November 2014, by Samson

    I connect to FMS 3.5 and publish a live stream using the following Flex code:

    //NetConnection

    nc.connect("rtmp://[HOST]/live", "10021237", "evq600qquf9u6cep69ln5nt651");
    

    //NetStream

    ns.publish("mp4:10021237.mp4", "live");
    

    Now I want to publish a video file using ffmpeg, but I am failing to pass the connection parameters. How can I achieve that? This is what I tried:

    ffmpeg -re -i 1412322898.mp4 -vcodec libx264 -f flv "rtmp://[HOST]/live/mp4:10021237.mp4" -rtmp_conn "S:10021237 S:evq600qquf9u6cep69ln5nt651"

    ffmpeg version:

    ffmpeg version 2.1.4 Copyright (c) 2000-2014 the FFmpeg developers
    built on Sep 8 2014 23:58:05 with gcc 4.7 (Ubuntu/Linaro 4.7.2-2ubuntu1)
    configuration: --enable-gpl --enable-libx264 --enable-libfaac --enable-nonfree
    libavutil      52. 48.101 / 52. 48.101
    libavcodec     55. 39.101 / 55. 39.101
    libavformat    55. 19.104 / 55. 19.104
    libavdevice    55.  5.100 / 55.  5.100
    libavfilter     3. 90.100 /  3. 90.100
    libswscale      2.  5.101 /  2.  5.101
    libswresample   0. 17.104 /  0. 17.104
    libpostproc    52.  3.100 / 52.  3.100
    Hyper fast Audio and Video encoder
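    One likely problem: on ffmpeg's command line, options apply to the file that follows them, so `-rtmp_conn` placed after the output URL is never applied to it. Moving the option in front of the output URL should pass the extra AMF connect arguments (the session values are copied from the question; whether the server expects them in exactly this order is an assumption):

    ```shell
    # -rtmp_conn must come BEFORE the output URL it applies to:
    ffmpeg -re -i 1412322898.mp4 -vcodec libx264 -f flv \
        -rtmp_conn "S:10021237 S:evq600qquf9u6cep69ln5nt651" \
        "rtmp://[HOST]/live/mp4:10021237.mp4"
    ```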

  • How to use IContainer with frame-by-frame data?

    12 November 2014, by AnilJ

    We have implemented a video client where it captures N number of pictures (from web cam), encodes them and packs them as a block. This block is delivered to the decoder, which then initializes a new IContainer object each time with a buffer containing this received block.

    Although this design works, it inherently introduces a delay in stream delivery: since we capture N frames to build a block, the delay is N frame intervals. We are also unsure of the cost (in time) of creating and initializing an IContainer object each time.

    To improve on this, we are thinking of sending frame-by-frame encoded data to the receiver. However, creating a new IContainer object per frame won't work, since it cannot open/initialize with a P or B frame; it always requires an I-frame to initialize, otherwise open fails.

    Now my question is: how do we use the IContainer APIs for this requirement? We do not want to initialize/open an IContainer object each time. Is it possible to reuse the same IContainer object while we keep feeding it the received frames one by one, in sequence? That way we avoid both source-side and receiver-side buffering (we will, however, still need a de-jitter buffer).

    We are using these APIs: http://www.xuggle.com/public/documentation/java/api/com/xuggle/xuggler/IContainer.html
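    One direction worth exploring: keep a single IContainer open for the lifetime of the session and feed it a continuous byte stream, so only the first packets (containing the I-frame and headers) are needed at open time and every later frame is just more input. Independent of any Xuggler specifics, the plumbing for that is a blocking byte buffer that the network thread fills frame by frame and the demuxer thread reads from; the sketch below shows only that buffer (the class name and wiring into a custom protocol handler are our assumptions, not Xuggler API):

    ```java
    import java.util.ArrayDeque;

    // A minimal blocking byte buffer: the network thread appends each received
    // frame's bytes; the demuxer thread reads, blocking until data arrives.
    // This is plumbing only -- connecting it to the container's input is a
    // separate, library-specific step not shown here.
    class PacketFeed {
        private final ArrayDeque<Byte> buf = new ArrayDeque<>();
        private boolean closed = false;

        public synchronized void push(byte[] data) {
            for (byte b : data) buf.add(b);
            notifyAll();                      // wake a blocked reader
        }

        public synchronized void close() { closed = true; notifyAll(); }

        /** Blocks until at least one byte is available; returns -1 at end of stream. */
        public synchronized int read(byte[] out) {
            while (buf.isEmpty() && !closed) {
                try { wait(); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); return -1; }
            }
            if (buf.isEmpty()) return -1;     // closed and drained
            int n = 0;
            while (n < out.length && !buf.isEmpty()) out[n++] = buf.poll();
            return n;
        }
    }
    ```

    With something like this in place, the receiver opens the container once against the feed (after the first I-frame block has been pushed) and then simply keeps pushing each arriving frame, letting the demuxer pull packets at its own pace.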

    /anil.