Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • ffmpeg parameters to cut video files with Full Re-Encoding

    24 September 2019, by How to

    The following commands are used to trim multiple video files (with different audio or video codecs, containers, etc.). The goal is to cut a video file with ffmpeg so that the resulting output (cut) file contains the same video and audio codecs as the original, using the FULL RE-ENCODING method. My question is whether there is any error in the current parameters.

    Example of the ffmpeg command used:

    ffmpeg -i Sample.vob -ss 00:00:10.000 -strict -2 -t 00:00:10.000 -map 0:v:0 -map 0:a? -acodec copy -map 0:s? -scodec copy SSSS.vob
    

    or

    ffmpeg -i Sample.mp4 -ss 00:00:10.000 -strict -2 -t 00:00:10.000 -map 0:v:0 -map 0:a? -acodec copy -map 0:s? -scodec copy SSSS.mp4
    

    or

    ffmpeg -i Sample.mkv -ss 00:00:10.000 -strict -2 -t 00:00:10.000 -map 0:v:0 -map 0:a? -acodec copy -map 0:s? -scodec copy SSSS.mkv
    

    and others...
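Note that the commands above use `-acodec copy`/`-scodec copy`, which is stream copy rather than full re-encoding. For comparison, a genuinely full re-encode trim would replace the copy codecs with encoders; the `libx264`/`aac` choices below are assumptions, not from the question, and the command is built as a string here only so the flags are easy to inspect:

```shell
# Hypothetical full re-encode trim; libx264/aac are assumed encoder choices.
in="Sample.mp4"
out="SSSS.mp4"
cmd="ffmpeg -ss 00:00:10.000 -i $in -t 00:00:10.000 -map 0:v:0 -map 0:a? -map 0:s? -c:v libx264 -c:a aac -c:s copy $out"
echo "$cmd"
```

When everything is re-encoded, `-ss` can be placed before `-i` for faster seeking without the keyframe-accuracy problems that input seeking causes with stream copy.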

  • NodeJS: create a storybook like YouTube videos?

    24 September 2019, by Ben Beri

    I need to create something like this:

    [image: thumbnail grid preview generated from a video]

    This image was generated using this package: https://www.npmjs.com/package/ffmpeg-generate-video-preview

    However, it's not really suitable, because my storybook has to be limited to a 10x10 grid of rows/columns, and after each grid fills up I have to create the next 10x10 one. And I can't really make it generate rows/columns automatically, because it just doesn't know the maximum number of columns to generate based on the frames.

    How can I do something like this, perhaps using ffmpeg?
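ffmpeg's `tile` video filter lays frames out on a fixed grid, and with an image2-style output pattern it emits a new sheet every time 100 sampled frames (10x10) accumulate, which matches the per-page constraint above. A minimal sketch; the input name, the one-frame-per-second sampling, and the thumbnail width are assumptions:

```shell
# Hypothetical: sample 1 fps, pack every 100 thumbnails into a 10x10 sheet,
# writing sheet_001.png, sheet_002.png, ... until the input runs out.
in="input.mp4"
cmd="ffmpeg -i $in -vf fps=1,scale=160:-1,tile=10x10 -vsync vfr sheet_%03d.png"
echo "$cmd"
```

Because the grid is fixed at 10x10, there is no need to compute a column count from the frame total; a final partially filled sheet is padded by the filter.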

  • Unable to combine videos with ffmpeg on firebase functions

    24 September 2019, by 西田龍

    I tried to combine videos on Firebase Functions using FFmpeg, but I could not because of the error below.

    I expect the videos to be combined successfully.

    At first the target directory did not exist, so I tried to create it using mkdirp-promise, but that made no difference; I got the same error as before.

    Error: ENOENT: no such file or directory, stat '/tmp/combined.mp4'
    

    Here is the code.

    // Return a promise that settles when ffmpeg finishes; without this the
    // caller's `await` resolves immediately, before combined.mp4 exists,
    // which is what produces the ENOENT error above.
    const combineVideos = (tempFilePath, firstFilePath, secondFilePath, targetTempFilePath) =>
        new Promise((resolve, reject) => {
            ffmpeg(tempFilePath)
                .setFfmpegPath(ffmpeg_static.path)
                .addInput(firstFilePath)
                .addInput(secondFilePath)
                .on('end', function() {
                    console.log('files have been merged successfully');
                    resolve();
                })
                .on('error', function(err) {
                    console.log('an error happened: ' + err.message);
                    reject(err);
                })
                .mergeToFile(targetTempFilePath, os.tmpdir());
        });
    
    exports.editVids = functions.storage.object().onFinalize(async (object) => {
        await (async () => {
            const fileBucket = object.bucket; // The Storage bucket that contains the file.
            const filePath = object.name; // File path in the bucket.
    
            // Get the file name.
            const fileName = path.basename(filePath);
            // Only process once the final video ('last.mp4') has been uploaded.
            if(fileName==='last.mp4'){
                // Download file from bucket.
    
                const bucket = gcs.bucket(fileBucket);
                const tempFilePath = path.join(os.tmpdir(), fileName);
                const firstFilePath = path.join(os.tmpdir(), 'first.mp4');
                const secondFilePath = path.join(os.tmpdir(), 'second.mp4');
    
            // The combined video will be uploaded as 'combined.mp4'.
                const targetTempFileName = 'combined.mp4'
                const targetTempFilePath = path.join(os.tmpdir(), targetTempFileName);
                const targetStorageFilePath = path.join(path.dirname(filePath), targetTempFileName);
    
                await mkdirp(path.dirname(tempFilePath))
    
                await bucket.file(filePath).download({destination: tempFilePath});
                await bucket.file(filePath.replace('last.mp4','first.mp4')).download({destination: firstFilePath});
                await bucket.file(filePath.replace('last.mp4','second.mp4')).download({destination: secondFilePath});
    
                console.log('Video downloaded locally to', firstFilePath);
                console.log('Video downloaded locally to', secondFilePath);
                console.log('Video downloaded locally to', tempFilePath);
            // Combine the videos using FFmpeg.
                await combineVideos(tempFilePath, firstFilePath, secondFilePath, targetTempFilePath)
                console.log('Output video created at', targetTempFilePath);
    
            // Upload the combined video.
                await bucket.upload(targetTempFilePath, {destination: targetStorageFilePath});
                console.log('Output video uploaded to', targetStorageFilePath);
    
            // Once the video has been uploaded, delete the local files to free up disk space.
                fs.unlinkSync(tempFilePath);
                fs.unlinkSync(firstFilePath);
                fs.unlinkSync(secondFilePath);
                fs.unlinkSync(targetTempFilePath);
                return console.log('Temporary files removed.', targetTempFilePath);
            }
        })();
    });
    
  • How to generate valid live DASH for YouTube?

    24 September 2019, by Matt Hensley

    I am attempting to implement YouTube live video ingestion via DASH as documented at: https://developers.google.com/youtube/v3/live/guides/encoding-with-dash

    To start, I am exercising the YouTube API manually and running ffmpeg to verify required video parameters before implementing in my app.

    Created a new livestream with liveStreams.insert and these values for the cdn field:

    "cdn": {
        "frameRate": "variable",
        "ingestionType": "dash",
        "resolution": "variable"
    }
    

    Created a broadcast via liveBroadcasts.insert, then used liveBroadcasts.bind to bind the stream to the broadcast.

    Then I grabbed the ingestionInfo from the stream and ran this ffmpeg command, copying in the ingestionAddress with the streamName:

    ffmpeg -stream_loop -1 -re -i mov_bbb.mp4 \
        -loglevel warning \
        -r 30 \
        -g 60 \
        -keyint_min 60 \
        -force_key_frames "expr:eq(mod(n,60),0)" \
        -quality realtime \
        -map v:0 \
        -c:v libx264 \
        -b:v:0 800k \
        -map a:0 \
        -c:a aac \
        -b:a 128k \
        -strict -2 \
        -f dash \
        -streaming 1 \
        -seg_duration 2 \
        -use_timeline 0 \
        -use_template 1 \
        -window_size 5 \
        -extra_window_size 10 \
        -index_correction 1 \
        -adaptation_sets "id=0,streams=v id=1,streams=a" \
        -dash_segment_type mp4 \
        -method PUT \
        -http_persistent 1 \
        -init_seg_name 'dash_upload?cid='"${streamName}"'&copy=0&file=init$RepresentationID$.mp4' \
        -media_seg_name 'dash_upload?cid='"${streamName}"'&copy=0&file=media$RepresentationID$$Number$.mp4' \
        "https://a.upload.youtube.com/dash_upload?cid=${streamName}&copy=0&file=dash.mpd"
    

    It appears that all the playlist updates and video segments upload fine to YouTube; ffmpeg does not report any errors. However, the liveStream status always shows noData, and the YouTube Live Control Room doesn't show the stream as receiving data.

    The DASH output, when written to files, plays back fine in this test player. The playlist output doesn't exactly match the samples, but it does have the required tags per the "MPD Contents" section of the documentation.

    Are my ffmpeg arguments incorrect, or does YouTube have additional playlist format requirements that are not documented?
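One way to separate ffmpeg-side problems from YouTube-side ones, as hinted at above, is to point the same DASH muxer at local files and inspect the resulting manifest and segments. A sketch with a reduced flag set (an assumption; only the output target is meant to change), built as a string so the flags are easy to check:

```shell
# Hypothetical local dry run: write the manifest and segments to ./dash_out
# so they can be inspected and played in a local DASH player.
mkdir -p dash_out
cmd="ffmpeg -re -i mov_bbb.mp4 -c:v libx264 -c:a aac -f dash -seg_duration 2 -use_timeline 0 -use_template 1 dash_out/dash.mpd"
echo "$cmd"
```

Comparing the locally generated MPD against YouTube's documented "MPD Contents" requirements can narrow down whether the manifest itself is the problem.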

  • Is there an ffmpeg option to specify the size of macroblocks in H.264 compression?

    24 September 2019, by MEHDI SAOUDI

    I am setting up a system that extracts motion vectors from videos after compressing them in H.264 using FFmpeg.

    Is there an FFmpeg H.264 compression option that allows modifying the size of the macroblocks, in order to minimize the number of lines in the extracted motion-vector file?

    In other words, if I change the macroblock partitioning from 4x4 to 8x8, will I minimize the size of the file?

    My goal is to minimize the size of the motion-vector file. Thank you.
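For the extraction step itself, ffmpeg can export the decoder's motion vectors with the `+export_mvs` flag and visualize them with the `codecview` filter. A minimal sketch; the file names are assumptions, and the command is built as a string for inspection:

```shell
# Hypothetical: decode with exported motion vectors and overlay them on the video
# (pf/bf/bb select forward/backward motion vectors of P- and B-frames).
cmd="ffmpeg -flags2 +export_mvs -i compressed.mp4 -vf codecview=mv=pf+bf+bb mv_overlay.mp4"
echo "$cmd"
```

The number of exported vectors depends on how the encoder partitioned each frame, so the partition sizes chosen at encode time do affect how many lines end up in the extracted file.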