Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • AviSynth script with subtitles errors

    29 April 2017, by Corpuscular

    Win7

    FFmpeg version: 20170223-dcd3418 win32 shared

    AVISynth version: 2.6

    I call ffmpeg from a Visual Studio 2015 C# Forms application, using process.StartInfo.Arguments to pass arguments and read an avs script. This works fine.

    The avs script:

    LoadPlugin("C:\Program Files (x86)\AviSynth 2.5\plugins\VSFilter.dll")
    
    a=ImageSource("01.png").trim(1,24)
    b=ImageSource("02.png").trim(1,36)
    c=ImageSource("03.png").trim(1,40)
    d=ImageSource("04.png").trim(1,72)
    e=ImageSource("05.png").trim(1,36)
    f=ImageSource("06.png").trim(1,40)
    video =  a + b + c + d + e + f
    
    return video
    

    I'd like to add subtitles from the avs script, but it is not working. Adding a subtitle call immediately before the "return video" line:

    subtitle("day 111", align=2, first_frame=0, halo_color=$00FF00FF, text_color=$0000FF00, size=36, last_frame=100)
    

    results in the error: [avisynth @ 003cdf20] Script error: Invalid arguments to function "subtitle"

    Using video.subtitle results in:

    video.subtitle("day 111", align=2, first_frame=0, halo_color=$00FF00FF, text_color=$0000FF00, size=36, last_frame=100)
    

    No error, script completes but no subtitle on output video.

    Using subtitle(clip) results in:

    subtitle(video, "day 111", align=2, first_frame=0, halo_color=$00FF00FF, text_color=$0000FF00, size=36, last_frame=100)
    

    The script exits abnormally but there is no error message.

    Any guidance would be greatly appreciated. Please let me know if I can clarify anything.
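
    A likely cause (a guess, not confirmed in the post): AviSynth filters such as Subtitle return a new clip rather than modifying their source in place. So `video.subtitle(...)` on its own line computes a subtitled clip and silently discards it, while a bare `subtitle(...)` fails because no implicit `last` clip has been set at that point in the script. Assigning the result back before returning should work:

    ```avisynth
    video = a + b + c + d + e + f
    video = video.Subtitle("day 111", align=2, first_frame=0, last_frame=100, size=36, text_color=$0000FF00, halo_color=$00FF00FF)
    return video
    ```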

  • mkv file progressively out of sync

    29 April 2017, by Piruzzolo

    I have a bunch of mkv files, with FLAC as the audio codec and FFV1 as the video one.

    The files were created using an EasyCap acquisition dongle from a VCR analog source. Specifically, I used VLC's "open acquisition device" prompt and selected PAL. Then, I converted the files (audio PCM, video raw YUV) to (FLAC, FFV1) using

    ffmpeg.exe -i input.avi -acodec flac -vcodec ffv1 -level 3 -threads 4 -coder 1 -context 1 -g 1 -slices 24 -slicecrc 1 output.mkv
    

    Now, the files are progressively out of sync. It may be due to the fact that while (maybe) the video has a constant frame rate, the FLAC track does not. So, is there a way to sync the video track to the audio, or something similar? Can FFmpeg do this? Thanks

    EDIT

    Following Mulvya's hint, I plotted the difference in sync at various times; the first column shows the seconds elapsed, the second shows the difference in seconds. The plot seems to behave linearly, with a constant slope of 0.0078. NOTE: measurements taken by hand, with a stopwatch.

    [plot: sync difference in seconds vs. elapsed time]

    EDIT 2

    Playing around with VirtualDub, I found that changing the frame rate from the original 24.889 fps to 25 (Video -> Frame rate... -> Change frame rate to) and using the track converted to wav definitely does work. Two problems, though: VirtualDub crashes when importing the original FFV1/FLAC mkv file, so I had to convert the video to H264 to try it out; also, I find it difficult to use an external encoder to save VirtualDub's output.

    So, could I avoid using VirtualDub, and simply use ffmpeg for it? Here's the exported vdscript:

        VirtualDub.audio.SetSource("E:\\Cassette\\Filmini\\masters\\Cassetta 4_track2_ita.wav", "");
        VirtualDub.audio.SetMode(0);
        VirtualDub.audio.SetInterleave(1,500,1,0,0);
        VirtualDub.audio.SetClipMode(1,1);
        VirtualDub.audio.SetEditMode(1);
        VirtualDub.audio.SetConversion(0,0,0,0,0);
        VirtualDub.audio.SetVolume();
        VirtualDub.audio.SetCompression();
        VirtualDub.audio.EnableFilterGraph(0);
        VirtualDub.video.SetInputFormat(0);
        VirtualDub.video.SetOutputFormat(7);
        VirtualDub.video.SetMode(3);
        VirtualDub.video.SetSmartRendering(0);
        VirtualDub.video.SetPreserveEmptyFrames(0);
        VirtualDub.video.SetFrameRate2(25,1,1);
        VirtualDub.video.SetIVTC(0, 0, 0, 0);
        VirtualDub.video.SetCompression();
        VirtualDub.video.filters.Clear();
        VirtualDub.audio.filters.Clear();
    

    The first line imports the wav-converted audio track. Can I set up an equivalent pipeline in ffmpeg (possibly using FLAC, not wav)? SetFrameRate2 is maybe the key here.
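
    If the drift really is a constant frame-rate mismatch (24.889 fps vs. 25 fps), ffmpeg alone can apply the same retiming VirtualDub does: rescale the video timestamps with the setpts filter and leave the FLAC track untouched. A sketch in Python that only builds the command line (file names are assumptions; setpts forces a video re-encode, so FFV1 is re-encoded while the audio is stream-copied):

    ```python
    import subprocess

    def build_retime_cmd(src, dst, src_fps=24.889, dst_fps=25.0):
        """Build an ffmpeg command that stretches video timestamps from
        src_fps to dst_fps while copying the FLAC audio unchanged."""
        factor = src_fps / dst_fps            # < 1: video plays slightly faster
        return [
            "ffmpeg", "-i", src,
            "-vf", f"setpts=PTS*{factor:.6f}",  # rescale presentation timestamps
            "-r", str(dst_fps),                 # declare the new frame rate
            "-c:v", "ffv1", "-level", "3",      # re-encode video (filtering requires it)
            "-c:a", "copy",                     # leave the FLAC track as-is
            dst,
        ]

    cmd = build_retime_cmd("input.mkv", "output_25fps.mkv")
    # subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
    ```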

  • Installed ffmpeg, added to path, still can't save animation from Jupyter Notebook

    29 April 2017, by dredre_420

    I'm trying to simulate a two-body orbit system in Jupyter Notebook (Python). Since the animation can't be displayed inline, I tried installing ffmpeg and adding it to the system path using the steps outlined here: http://adaptivesamples.com/how-to-install-ffmpeg-on-windows/.

    However, when I try to save my animation using anim.save('orbit.mp4', fps=15, extra_args=['-vcodec', 'libx264']), I still get the error message: ValueError: Cannot save animation: no writers are available. Please install mencoder or ffmpeg to save animations.

    Not sure what else to try at this point; I'm a very inexperienced programmer.
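
    A quick way to check whether the PATH change actually reached the Python process, plus a fallback that points matplotlib directly at the binary (the .exe path below is hypothetical):

    ```python
    import shutil

    # Check whether the ffmpeg binary is visible on PATH from this Python
    # process (restarting the notebook/shell is often needed after editing
    # the Windows PATH):
    ffmpeg_path = shutil.which("ffmpeg")
    print("ffmpeg found at:", ffmpeg_path)  # None means matplotlib can't see it either

    # If the binary exists but matplotlib still reports "no writers are
    # available", point matplotlib at it explicitly:
    # import matplotlib
    # matplotlib.rcParams['animation.ffmpeg_path'] = r'C:\ffmpeg\bin\ffmpeg.exe'
    ```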

  • How to stream live video from DJI Professional 3 camera ?

    29 April 2017, by raullalves

    I have to get the live video stream from the DJI Phantom 3 camera into my C++ application, in order to do Computer Vision processing in OpenCV.

    First I tried sending the raw H264 data through a UDP socket, inside this callback:

    mReceivedVideoDataCallBack = new CameraReceivedVideoDataCallback() {

        @Override
        public void onResult(byte[] videoBuffer, int size) {
            // Here, I call a method from a class I created that sends the buffer through UDP
            if (gravar_trigger) controleVideo.enviarFrame(videoBuffer, size);

            if (mCodecManager != null) mCodecManager.sendDataToDecoder(videoBuffer, size);
        }
    };
    

    That communication works well. However, I haven't been able to decode that UDP H264 data in my C++ desktop application. I have tested with the FFmpeg libraries, but couldn't manage to allocate an AVPacket with my UDP data in order to decode it using avcodec_send_packet and avcodec_receive_frame. I also had problems with AVCodecContext, since my UDP communication wasn't a stream like RTSP, where the context could get information about its source. Therefore, I had to change how I was trying to solve the problem.

    Then, I found libstreaming, which can be used to stream the Android camera to a Wowza server, creating something like an RTSP stream connection whose data could then be obtained easily in my final C++ application using OpenCV's VideoCapture. However, libstreaming uses its own SurfaceView. In other words, I would have to link the libstreaming SurfaceView with the DJI drone's video surface. I'm really new to Android, so I have no clue how to do that.

    To sum up, is that the correct approach? Does someone have a better idea? Thanks in advance.
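
    For the UDP route, the desktop side first has to collect the datagrams before anything can be handed to avcodec_send_packet. A minimal stdlib sketch of that receiving end, in Python for brevity (the port is an assumption; each payload would correspond to one videoBuffer from the callback above):

    ```python
    import socket

    MAX_DGRAM = 65507  # largest payload a single UDP datagram can carry

    def open_receiver(host="0.0.0.0", port=5000):
        """Bind a UDP socket for the raw H264 bytes sent by the Android
        callback (host and port are assumptions, not from the post)."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((host, port))
        return sock

    def next_chunk(sock):
        """Block until one videoBuffer-sized datagram arrives and return
        its payload; this is what would be wrapped into an AVPacket on
        the C++ side before avcodec_send_packet."""
        data, _addr = sock.recvfrom(MAX_DGRAM)
        return data
    ```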

  • Linux - Generate a video from pictures and text [on hold]

    29 April 2017, by lluisu

    I'm developing a server with AWS Elastic Beanstalk. The project is programmed in Django (Python), and I have to generate a video from an array of user images. If possible, I have to overlay text on the images too.

    I took a look at ffmpeg and at AWS Lambda functions, but I can't tell whether I can draw text on the images. It seems there's a library that lets Lambda execute ffmpeg: https://github.com/binoculars/aws-lambda-ffmpeg/
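
    One way to sketch this with plain ffmpeg (testable locally, before involving Lambda): the drawtext filter can paint text onto the frames while an image sequence is turned into a video. A hedged Python sketch that only builds the command line (file names, the numbering pattern, and the filter options are assumptions):

    ```python
    import subprocess

    def build_slideshow_cmd(pattern, out, text, fps=1):
        """Build an ffmpeg command that turns numbered images into a video
        and draws `text` on every frame with the drawtext filter."""
        return [
            "ffmpeg",
            "-framerate", str(fps),
            "-i", pattern,                     # e.g. "img%03d.png"
            "-vf", f"drawtext=text='{text}':x=10:y=10:fontsize=24:fontcolor=white",
            "-c:v", "libx264",
            "-pix_fmt", "yuv420p",             # broad player compatibility
            out,
        ]

    cmd = build_slideshow_cmd("img%03d.png", "out.mp4", "day 111")
    # subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
    ```

    From Django, the same command list can be passed to subprocess.run, or the equivalent arguments given to the aws-lambda-ffmpeg project linked above.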