Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

The articles published on the site

  • DVD Conversion with ffmpeg-python

    10 December 2019, by Colin Bitterfield

    I am trying to programmatically convert DVD VOB files into an MP4, then use scene detection to process the scenes.

    Does anyone know how to take a list of files and concat them using FFmpeg?

    This code isn't working, but I am sure it is close. I am not sure whether to use a concat filter or output_merge.

    import glob
    
    import ffmpeg
    
    # collect the VOB files to join, in playback order (pattern and output name are illustrative)
    vob_files = sorted(glob.glob("VIDEO_TS/*.VOB"))
    vob_out = "movie.mp4"
    
    vob_stream = [ffmpeg.input(vob) for vob in vob_files]
    
    (
        ffmpeg
        .concat(*vob_stream)  # the concat filter takes the input streams as *args
        .output(vob_out)
        .overwrite_output()
        .run(capture_stdout=True, capture_stderr=True)
    )
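
    An alternative worth sketching, since the question asks generally how to concatenate a list of files with FFmpeg: the plain CLI's concat demuxer, driven from Python. This is a hedged sketch, not a tested pipeline — the file names, the output path, and the helper `build_concat_command` are illustrative assumptions, and running the command requires ffmpeg on the PATH.

    ```python
    import subprocess
    import tempfile


    def build_concat_command(vob_files, out_path):
        """Build an ffmpeg concat-demuxer invocation for a list of VOB files."""
        # the concat demuxer reads a text file listing one input file per line
        list_file = tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False)
        for vob in vob_files:
            list_file.write(f"file '{vob}'\n")
        list_file.close()
        return [
            "ffmpeg", "-y",
            "-f", "concat",
            "-safe", "0",          # allow absolute paths in the list file
            "-i", list_file.name,
            out_path,
        ]


    cmd = build_concat_command(["VTS_01_1.VOB", "VTS_01_2.VOB"], "movie.mp4")
    # subprocess.run(cmd, check=True)  # needs ffmpeg on PATH
    ```

    Unlike the concat filter, the concat demuxer does not re-encode by default, which tends to suit same-codec VOB segments from one disc.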
    
  • Batch add outtro to many mp3 files

    10 December 2019, by niico

    I often need to take, say, 20 mp3 files and add another file to the end of them all.

    It's an outtro for a podcast.

    I'd like to automate this. I've found many articles on, say, joining many mp3s together - but not on adding an intro or outtro to many files one after another.

    I'm on Windows 10.

    Any idea how I can do this (e.g. with a batch script)?
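
    A hedged sketch of one way to automate this — a Python loop instead of a batch file, building one ffmpeg command per episode via the concat protocol. The directory names, file names, and the helper `build_outro_commands` are illustrative assumptions; actually running the commands requires ffmpeg on the PATH, and the byte-level concat protocol assumes the outro shares the episodes' codec settings.

    ```python
    import pathlib
    import subprocess


    def build_outro_commands(mp3_dir, outro, out_dir):
        """One ffmpeg command per episode, appending the outro via the concat protocol."""
        commands = []
        for mp3 in sorted(pathlib.Path(mp3_dir).glob("*.mp3")):
            out_path = pathlib.Path(out_dir) / mp3.name
            commands.append([
                "ffmpeg", "-y",
                # concat protocol: byte-level join, OK when both mp3s share codec settings
                "-i", f"concat:{mp3}|{outro}",
                "-c", "copy",
                str(out_path),
            ])
        return commands


    commands = build_outro_commands("episodes", "outro.mp3", "with_outro")
    # for cmd in commands:
    #     subprocess.run(cmd, check=True)  # needs ffmpeg on PATH
    ```

    The same loop could be written as a Windows batch `for %%f in (*.mp3)` construct, but driving ffmpeg from Python keeps the quoting simpler.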

  • running `shell_exec` async

    10 December 2019, by Jay Wadhwa

    I'm using ffmpeg to convert any video file into mp4 format and I'm running it via shell_exec. It works fine:

    shell_exec("ffmpeg -y -i $currentFile -crf 23 -preset faster $newFilePath >/dev/null 2>/dev/null &");
    

    Generally, appending >/dev/null 2>/dev/null & sends all output to /dev/null and runs the command in the background. In my case, though, it doesn't: PHP still waits for ffmpeg to finish before completing script execution.

    How do I make this run asynchronously?
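
    For comparison, the fire-and-forget pattern the question is after, sketched in Python rather than PHP: a process launcher that returns immediately while the child keeps running. The `sleep` command is an illustrative stand-in for the real ffmpeg invocation; this is not the PHP fix itself.

    ```python
    import subprocess
    import sys

    # illustrative stand-in for the real command, which would look like
    # ["ffmpeg", "-y", "-i", current_file, "-crf", "23", "-preset", "faster", new_file_path]
    cmd = [sys.executable, "-c", "import time; time.sleep(0.5)"]

    # Popen returns immediately; the child process keeps running in the background
    proc = subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    print("launched without waiting")
    ```

    The key point carried over to any language: the parent must neither read the child's output streams nor wait on its exit status, which is why the output has to be discarded or redirected.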

  • Can't stream via RTSP with ffmpeg in C++

    10 December 2019, by Bletsu

    I am trying to send a stream I am reading on an IP address via rtsp to another IP address also via rtsp. In other words, I want to duplicate the stream I am reading.

    I already used ffmpeg to save the stream in MP4, so I figured I could use it for this purpose as well.

    Here is a sample of code:

    m_outputURL = "rtsp://192.168.8.225:8554";
    int ret = 0;
    avformat_alloc_output_context2(&m_outFormatContext, nullptr, "rtsp", &m_outputUrlFormatB[0]);

    if (!m_outFormatContext) {
        qDebug()<<"Could not create video storage stream as no output context could be assigned based on filename";
        SSW_endStreaming();
        return;
    }
    else
    {
        // only touch the context once we know allocation succeeded
        m_outFormatContext->flags |= AVFMT_FLAG_GENPTS;
        m_outputFmt = m_outFormatContext->oformat;
    }
    
        //Create output AVStream according to input AVStream
        AVStream *inputStream=m_inFormatContext->streams[m_videoStreamId];
    
        //find encoder from input stream
        AVCodec* codec = avcodec_find_encoder(inputStream->codecpar->codec_id);
    
        if (!(codec)) {
            qDebug()<<"Could not find codec from input stream";
            SSW_endStreaming();
            return ;
        }
        else
        {
            /* create video out stream */
            m_videoStream = avformat_new_stream(m_outFormatContext, codec);
        }
    
        if (!m_videoStream)
        {
            printf( "Failed allocating output stream\n");
            SSW_endStreaming();
        }
        else
        {
            /* copying input video context to output video context */
            ret = avcodec_parameters_copy(m_videoStream->codecpar, inputStream->codecpar);
        }
    
        if (ret < 0) {
            qDebug()<<"Unable to copy input video context to output video context";
            SSW_endStreaming();
            return;
        }
        else
        {
            m_videoStream->codecpar->codec_tag = 0;
        }
    
        /* open the output file, if needed */
        if (!(m_outputFmt->flags & AVFMT_NOFILE))
        {
            ret = avio_open(&m_outFormatContext->pb, &m_outputUrlFormatB[0], AVIO_FLAG_WRITE);
            if (ret < 0) {
                qDebug()<<"Could not open output file";
                SSW_endStreaming();
                return;
            }
            else
            {
                /* Write the stream header, if any. */
                ret = avformat_write_header(m_outFormatContext, nullptr);
            }
        }
    

    Every time, either m_outputFmt->flags & AVFMT_NOFILE returns true, or I get an error from avio_open.

    Is there any way to make this work with rtsp? Or do you think I can achieve streaming with an easier method?

    Thank you
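
    On the "easier method" part of the question: restreaming an RTSP input to an RTSP output can be done with a single ffmpeg CLI invocation, sketched here from Python. The input URL below is made up (the question only gives the output address), and running the command requires ffmpeg on the PATH.

    ```python
    import subprocess


    def build_restream_command(input_url, output_url):
        """ffmpeg invocation that copies an RTSP input straight back out over RTSP."""
        return [
            "ffmpeg",
            "-i", input_url,
            "-c", "copy",    # duplicate the stream without re-encoding
            "-f", "rtsp",    # force the RTSP muxer for the output URL
            output_url,
        ]


    cmd = build_restream_command("rtsp://192.168.8.224:8554/in",  # input URL is assumed
                                 "rtsp://192.168.8.225:8554")
    # subprocess.run(cmd, check=True)  # needs ffmpeg on PATH
    ```

    As for the C API symptom: to my understanding, AVFMT_NOFILE being set is expected for network muxers such as rtsp — the usual pattern, as in FFmpeg's muxing example, is to call avio_open only when that flag is absent and to call avformat_write_header unconditionally, rather than only inside the avio_open branch.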

  • What is the process of converting an image stream into an H.264 live video stream?

    10 December 2019, by LacombeJ

    How does live streaming, using any protocol/codec, work from end-to-end?

    I have been searching Google, YouTube, the FFmpeg documentation, the OBS source code, and Stack Overflow, but I still cannot understand how live video streaming works. I am trying to capture desktop screenshots and convert them into a live video stream that is H.264 encoded.

    What I know how to do:

    1. Capture screenshot images using Graphics.CopyFromScreen with C# on some loop
    2. Encode the bits and save images as JPEG files
    3. Send JPEG image in base64 one-at-a-time and write it on a named pipe server
    4. Read image buffer from named pipe on a nodejs server
    5. Send base64 jpeg image over socket to client to display on a web page, each frame

    What I want to be able to do:

    1. Encode images (in chunks, I assume) into some H.264 format for live streaming with one of the protocols (RTMP, RTSP, HLS, DASH)
    2. Push the encoded video chunks onto a server (such as an RTMP server), continuously (I assume every 1-2 seconds?)
    3. Access server from a client to stream and display live video

    I've tried using FFmpeg to continuously send .mp4 files onto an RTMP server, but this doesn't seem to work, as it closes the connection after each video. I have also looked into ffmpeg concat lists, but these just combine videos; to my understanding they can't append videos to a running live stream and probably weren't made for that.

    So my best lead is from this Stack Overflow answer, which suggests:

    1. Encode in an FLV container, setting the duration to be arbitrarily long (according to the answer, YouTube used this method)
    2. Encode the stream into RTMP, using ffmpeg or other open-source RTMP muxers
    3. Convert the stream into HLS

    How is this encoding and converting done? Can this all be done with ffmpeg commands?
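
    Much of this can indeed be done with a single long-lived ffmpeg process: raw frames are piped into stdin, libx264 encodes them, and the FLV-wrapped result is pushed to an RTMP server. A hedged sketch follows — the resolution, frame rate, RTMP URL, and the helper `build_stream_command` are illustrative assumptions, and running it requires ffmpeg with libx264 on the PATH plus a reachable RTMP server.

    ```python
    import subprocess


    def build_stream_command(width, height, fps, rtmp_url):
        """ffmpeg invocation that reads raw frames from stdin and pushes H.264 over RTMP."""
        return [
            "ffmpeg",
            "-f", "rawvideo",        # stdin carries raw pixel data, not a container
            "-pix_fmt", "bgr24",
            "-s", f"{width}x{height}",
            "-r", str(fps),
            "-i", "-",               # read the frames from stdin
            "-c:v", "libx264",
            "-preset", "veryfast",
            "-tune", "zerolatency",  # favour latency over compression
            "-f", "flv",             # RTMP carries an FLV container
            rtmp_url,
        ]


    cmd = build_stream_command(1280, 720, 30, "rtmp://localhost/live/test")
    # proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    # proc.stdin.write(frame_bytes)  # one width*height*3-byte frame per write
    ```

    The design point this illustrates: the connection stays open because it is one continuous encode, not a series of finished .mp4 files — which is why the per-file approach described above kept closing the connection. Converting the resulting RTMP feed to HLS is then a separate repackaging step on the server side.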