Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • FFmpegKitFlutter, Error: MissingPluginException(No implementation found for method ffmpegSession on channel flutter.arthenica.com/ffmpeg_kit)

    12 February, by 7Solomon

    I have this simple Flutter function to compress audio files:

    import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
    import 'package:ffmpeg_kit_flutter/return_code.dart';
    
    
    Future compressAudio(String inputPath) async {
        try {
          //     ...
          final command = '-y -i "$inputPath" ' // Input file
              '-af "loudnorm=I=-16:TP=-1.5:LRA=11,' // Loudness normalization
              'acompressor=threshold=${_config['threshold_db']}:'
              'ratio=${_config['ratio']}:'
              'attack=${_config['attack']}:'
              'release=${_config['release']},'
              'highpass=f=20,lowpass=f=20000" ' // Audio filters
              '-ar 44100 ' // Sample rate
              '-b:a 128k ' // Bitrate
              '-codec:a libmp3lame ' // MP3 encoder
              '-q:a 2 ' // Quality setting for LAME (0-9, lower is better)
              '-map_metadata 0 ' // Copy metadata
              '"$outputPath"'; // Output file
    
          // Execute FFmpeg command
          final session = await FFmpegKit.execute(command);
          final returnCode = await session.getReturnCode();
          final logs = await session.getLogs();
          print('FFmpeg logs: $logs');
    
          if (ReturnCode.isSuccess(returnCode)) {
            return outputFileName;
          } else {
            final logs = await session.getLogs();
            throw Exception(
                'FFmpeg process failed with code $returnCode\nLogs: $logs');
          }
        } catch (e, stackTrace) {
          print('Error: $e');
          print('Stack trace: $stackTrace');
          throw Exception('Failed to compress audio: $e\nStack trace: $stackTrace');
        }
      }
    

    And I get this error: MissingPluginException(No implementation found for method ffmpegSession on channel flutter.arthenica.com/ffmpeg_kit)

    This is the stack trace:

    flutter: Error: MissingPluginException(No implementation found for method ffmpegSession on channel flutter.arthenica.com/ffmpeg_kit)
    flutter: Stack trace: #0      MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:332:7)
    
    #1      AbstractSession.createFFmpegSession (package:ffmpeg_kit_flutter/abstract_session.dart:71:11)
    
    #2      FFmpegSession.create (package:ffmpeg_kit_flutter/ffmpeg_session.dart:40:21)
    
    #3      FFmpegKit.executeWithArguments (package:ffmpeg_kit_flutter/ffmpeg_kit.dart:44:9)
    
    #4      FileProcessor.compressAudio (package:predigt_upload_fl/file.dart:182:23)
    
    #5      _DetailPageState._handleSubmit (package:predigt_upload_fl/GUIs/LiveStreamDetailPage.dart:334:30)
    
    
    
    ══╡ EXCEPTION CAUGHT BY SERVICES LIBRARY ╞══════════════════════════════════════════════════════════
    flutter.arthenica.com/ffmpeg_kit_event)
    
    When the exception was thrown, this was the stack:
    #0      MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:332:7)
    
    #1      EventChannel.receiveBroadcastStream.<anonymous closure> (package:flutter/src/services/platform_channel.dart:676:9)
    
    

    And I think this comes from abstract class FFmpegKitPlatform extends PlatformInterface inside ffmpeg_kit_flutter_platform_interface.dart, because there are a lot of functions in there that are not implemented.

    And it is not just one function that fails:

        // ...
        final logLevel = await _getLogLevel();
        print('logLevel: $logLevel');
        if (logLevel != null) {
          FFmpegKitConfig.setLogLevel(logLevel);
        }
        final version = FFmpegKitFactory.getVersion();
        final platform = await FFmpegKitConfig.getPlatform();
        final arch = await ArchDetect.getArch();
        final packageName = await Packages.getPackageName();
        await FFmpegKitConfig.enableRedirection();
        final isLTSPostfix = (await FFmpegKitConfig.isLTSBuild()) ? "-lts" : "";
        // ...
    
    

    All of these functions, and more in FFmpegKitInitializer, are not implemented. So I'm pretty sure I'm missing something else here, not just an outdated version.
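
    For what it's worth, a minimal way to check whether the plugin's platform channel is reachable at all, independent of the compression command, is to call one of the simpler FFmpegKitConfig methods and see whether it throws the same MissingPluginException. This is only a diagnostic sketch; the function name is made up for illustration:

    import 'package:ffmpeg_kit_flutter/ffmpeg_kit_config.dart';

    // Diagnostic only: if even this simple call throws MissingPluginException,
    // the native side of ffmpeg_kit_flutter is not registered in the build
    // (rather than something being wrong with the compression command itself).
    Future<void> checkFfmpegKitChannel() async {
      try {
        final version = await FFmpegKitConfig.getFFmpegVersion();
        print('Native FFmpeg version: $version');
      } catch (e) {
        print('Platform channel not reachable: $e');
      }
    }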

    These are my Dependencies:

    dependencies:
      flutter:
        sdk: flutter
      path_provider: ^2.0.15
      just_audio: ^0.9.34
      file_picker: ^5.3.1
      path: ^1.8.3
      id3_codec: ^1.0.3
      ftpconnect: ^2.0.5
      http: ^1.1.0
      shared_preferences: ^2.2.0
      html: ^0.15.5
      youtube_explode_dart: ^2.3.9
      intl: ^0.19.0
      ffmpeg_kit_flutter: ^6.0.3
    

    I'm pretty new to Flutter development, so I'm not quite sure how to go about this problem, because every other FFmpeg wrapper also has some problems that I couldn't fix.

    If you need any other information, feel free to ask, because I also don't know what someone would need in order to fix the problem.

  • ffmpeg sequence starting at frame 1001

    11 February, by Kernowkid

    I have an image sequence starting at 1001 that I'd like to convert to an mp4 using ffmpeg. Here's what I'm putting in the cmd shell:

    ffmpeg -i plates_sh01_%04d.jpeg start_number 1001 -s 1920x1080 -vcodec libx264 -crf 25 -b:v 4M -pix_fmt yuv420p plates_sh01_%04d.mp4
    

    This works for image sequences starting at frame numbers below 999. For any sequence starting at 1000 or above I get this error:

    Could find no file with path 'plates_sh01_%04d.jpeg' and index in the range 0-4 plates_sh01_%04d.jpeg_%04d.jpeg: No such file or directory

    I can't find any solutions to this apart from renumbering the image sequence.
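
    One detail that may matter here: -start_number is an input option of the image2 demuxer, so it needs a leading dash and has to come before -i in order to apply to the input sequence. A hedged sketch of the reordered command, keeping the other options from the question and using a plain output name (the %04d pattern is only meaningful for image sequences):

    ffmpeg -start_number 1001 -i plates_sh01_%04d.jpeg -s 1920x1080 -vcodec libx264 -crf 25 -b:v 4M -pix_fmt yuv420p plates_sh01.mp4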

    Any help is greatly appreciated.

  • After upgrading ffmpeg, my clip-building code no longer works

    11 February, by Tchoune

    I have a problem after upgrading ffmpeg from 4.2.2 to 5.2.2: my code no longer works. When I upload a video from my React Native application, I get a file corruption error from my Python FFmpeg agent. The flow is: the app sends the video to Laravel, which stores it on the MinIO storage, and the video is available there; then an HTTP request is made with the MinIO key to download the mp4 locally, and the video is corrupted, on MinIO too... I have the impression that it's an error while downloading the video locally that corrupts it, but I have no idea how to debug this problem. If I upload the video directly from my web interface I don't have this problem. The only difference is processClipSynchronously, which is set to True on mobile and False on web.

    Laravel agent sending to the Python microservice:

     // Store uploaded video file
            $videoFilePath = $this->storeVideoFile($learningGoal, $videoFile);
    
            // Add video to storyboard
            $agentResponse =  Http::post($this->agentUrl . 'learning-goals/' . $learningGoal->id . '/storyboards/' . $storyboardId .  '/chapters/' . $chapterId . '/videos',
                [
                    'clip' => $videoFilePath,
                    'processClipSynchronously' => $processClipSynchronously
                ]);
    

    Python video agent:

    @app.route('/learning-goals/<learning_goal_id>/storyboards/<storyboard_id>/chapters/<chapter_id>/videos',
               methods=['post'])
    def post_storyboard_videos(learning_goal_id, storyboard_id, chapter_id):
        storyboard = get_storyboard(learning_goal_id, storyboard_id)
        chapter, position = get_chapter(storyboard, chapter_id)
    
        if 'clip' in request.get_json():
            chapter['clip'] = request.get_json()['clip']
            if 'duration' in storyboard:
                del chapter['duration']
            if 'thumbnail' in storyboard:
                del chapter['thumbnail']
            if 'ncAudioPreviewPath' in chapter:
                del chapter['ncAudioPreviewPath']
            if 'trim_start' in chapter:
                del chapter['trim_start']
            if 'trim_end' in chapter:
                del chapter['trim_end']
            if 'perform_nc' in chapter:
                del chapter['perform_nc']
        else:
            abort(400)
    
        new_storyboard = create_new_version_storyboard(storyboard)
    
        if 'processClipSynchronously' in request.get_json() and request.get_json()['processClipSynchronously']:
            treat_clip(new_storyboard, chapter) #Mobile trigger here
        else:
            thread = StoppableThread(target=treat_clip, args=(new_storyboard, chapter))
            thread.daemon = True
            thread.start()
    
        chapter, position = get_chapter(new_storyboard, chapter_id)
    
        return json.loads(dumps(chapter))
    
    def treat_clip(storyboard, chapter):
        logging.info(
            'start treating clip (' + chapter['clip'] + ') for learning goal : ' + str(storyboard['learningGoalId']))
        file = app.config['VOLUME_PATH'] + chapter['clip']
        os.makedirs(dirname(file), exist_ok=True)
        temp_files_to_remove = []
    
        if not os.path.exists(file):
            # Download file from S3 storage.
            s3.download_file(chapter['clip'], file)
            # Clean the file at the end (it's already in S3).
            temp_files_to_remove.append(file)
        else:
            logging.warn(f'Not downloading {chapter["clip"]} from S3 as it already exists on the filesystem')
    
        resolution_width, resolution_height = get_resolution(file)
        is_rotated_video = is_rotated(file)
        sample_aspect_ratio = get_sample_aspect_ratio(file)
        frame_rate = get_frame_rate(file)
        if not file.endswith(
                '.mp4') or resolution_width != 1920 or resolution_height != 1080 or is_rotated_video or sample_aspect_ratio != '1:1' or frame_rate > 60:
            chapter['clip'] = format_video(chapter['clip'], resolution_width, resolution_height, frame_rate,
                                           is_rotated_video, str(storyboard['learningGoalId']), 1920, 1080)
            file = app.config['VOLUME_PATH'] + chapter['clip']
    
            # Update file to S3 storage
            s3.upload_file(file, chapter['clip'])
    
            # Clean the new file at the end.
            temp_files_to_remove.append(file)
    
        clip = VideoFileClip(file)
        chapter['duration'] = float(clip.duration)
        thumbnail_relative_path = create_video_thumbnail(storyboard, clip, 0)
        ....
    

    It's VideoFileClip from moviepy that generates the error: moov atom not found. I think S3 does not have time to download the file completely and it ends up corrupted, but I don't know how to test or fix that.
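
    One way to test the suspicion that the file is only partially downloaded before VideoFileClip opens it is to compare the size of the local file with the size of the object stored in S3/MinIO right after the download. The s3 helper used in treat_clip is project-specific, so the sketch below uses boto3 directly, and the bucket and key arguments are placeholders:

    import os
    import boto3

    def verify_download(bucket, key, local_path):
        """Raise if the local copy is smaller than the object stored in S3/MinIO.

        A truncated download is a classic cause of moviepy/ffmpeg reporting
        'moov atom not found', because the mp4 index often sits at the end
        of the file.
        """
        client = boto3.client('s3')  # endpoint/credentials as configured for MinIO
        remote_size = client.head_object(Bucket=bucket, Key=key)['ContentLength']
        local_size = os.path.getsize(local_path)
        if local_size != remote_size:
            raise IOError(f'Incomplete download: {local_size} of {remote_size} bytes for {key}')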

    thanks in advance

  • Split a video in two and burn subtitles into each output video

    11 February, by Kairei

    I want to split a single input video "input.mp4" into two separate videos "out1.mp4" and "out2.mp4." I also want to burn hard subtitles into each of the output files. The subtitles come from two pre-existing subtitle files "subtitles1.ass" and "subtitles2.ass." I tried just adding -vf "ass=subtitles1.ass" and -vf "ass=subtitles2.ass" before each of the output files. Subtitles from subtitles1.ass were added to out1.mp4 but out2.mp4 had no subtitles. I spent hours reading docs and trying things and realized I probably need a complex filter and mapping, so I came up with this:

    ffmpeg.exe -i "input.mp4" -filter_complex "[0:v]split=2[in1][in2];[in1]ass=subtitles1.ass[out1];[in2]ass=subtitles2.ass[out2]" -map "[out1]" -map 0:a -ss 0:00:00.00 -to 0:01:00.00 "C:\out1.mp4" -map "[out2]" -map 0:a -ss 0:01:00.00 -to 0:02:00.00 "C:\out2.mp4"

    ... which I think means: "Take the input file, split it into two 'input pads', send input pad 1 through the subtitle filter with parameter subtitles1.ass and send input pad 2 through the subtitle filter with parameter subtitles2.ass. The two then come out to output pads out1 and out2. I then map out1 (which has the video with burned-in subtitles) and also map the audio from the input file, and send the first hour of the video to out1.mp4. I do the same thing for output pad out2 and try to get the second hour of video with subtitles from subtitles2.ass."

    I do get out1.mp4 with the first hour of video and audio and properly burned in subtitles. Unfortunately, out2.mp4 has the correct second hour of video and audio but no subtitles. Am I missing something to get subtitles2.ass burned into out2.mp4?
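
    A hedged guess, in case it applies: if subtitles2.ass is timed relative to the start of the second segment rather than to the original file, the ass filter in the graph above would burn those events onto frames in the first minute, which the output-side -ss 0:01:00 then discards; that would match out2.mp4 coming out without subtitles. Under that assumption, one thing to try is seeking on the input in a separate command, so the filter sees timestamps that start near zero. A sketch using the example times from the question:

    ffmpeg -ss 0:01:00.00 -i "input.mp4" -t 60 -vf "ass=subtitles2.ass" -c:a copy "out2.mp4"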

  • FFMPEG: webm to mp4 quality loss

    11 February, by turboLoop

    When trying to convert a .webm video (a two-color animation) to a .mp4 video using ffmpeg (3.4.2 on Mac), the result is somewhat blurry. I researched this topic and tried different approaches to solve it. Here is the most promising command:

    ffmpeg -i video.webm -qscale 1 video.mp4
    

    However, the quality loss is still tremendous; see the difference below.

    webm (screenshot)

    mp4 (screenshot)

    The resolution of the two videos is the same; however, the size dropped from 24.3 MB (.webm) to 1.5 MB (.mp4) after conversion.

    Update

    Here is the log of the conversion.

    ffmpeg version 3.4.2 Copyright (c) 2000-2018 the FFmpeg developers
      built with Apple LLVM version 9.0.0 (clang-900.0.39.2)
      configuration: --prefix=/usr/local/Cellar/ffmpeg/3.4.2 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --disable-jack --enable-gpl --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --enable-videotoolbox --disable-lzma
      libavutil      55. 78.100 / 55. 78.100
      libavcodec     57.107.100 / 57.107.100
      libavformat    57. 83.100 / 57. 83.100
      libavdevice    57. 10.100 / 57. 10.100
      libavfilter     6.107.100 /  6.107.100
      libavresample   3.  7.  0 /  3.  7.  0
      libswscale      4.  8.100 /  4.  8.100
      libswresample   2.  9.100 /  2.  9.100
      libpostproc    54.  7.100 / 54.  7.100
    Input #0, matroska,webm, from 'video.webm':
      Metadata:
        encoder         : whammy
      Duration: 00:00:05.02, start: 0.000000, bitrate: 38755 kb/s
        Stream #0:0: Video: vp8, yuv420p(progressive), 1920x1080, SAR 1:1 DAR 16:9, 60 fps, 60 tbr, 1k tbn, 1k tbc (default)
    Please use -q:a or -q:v, -qscale is ambiguous
    Stream mapping:
      Stream #0:0 -> #0:0 (vp8 (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    [libx264 @ 0x7f8625800c00] -qscale is ignored, -crf is recommended.
    [libx264 @ 0x7f8625800c00] using SAR=1/1
    [libx264 @ 0x7f8625800c00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 0x7f8625800c00] profile High, level 4.2
    [libx264 @ 0x7f8625800c00] 264 - core 152 r2854 e9a5903 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'video.mp4':
      Metadata:
        encoder         : Lavf57.83.100
        Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 60 fps, 15360 tbn, 60 tbc (default)
        Metadata:
          encoder         : Lavc57.107.100 libx264
        Side data:
          cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    frame=  301 fps= 45 q=-1.0 Lsize=    1417kB time=00:00:04.96 bitrate=2336.4kbits/s speed=0.735x    
    video:1412kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.309675%
    [libx264 @ 0x7f8625800c00] frame I:2     Avg QP:13.08  size:  8842
    [libx264 @ 0x7f8625800c00] frame P:75    Avg QP:24.29  size:  6785
    [libx264 @ 0x7f8625800c00] frame B:224   Avg QP:26.38  size:  4102
    [libx264 @ 0x7f8625800c00] consecutive B-frames:  0.7%  0.0%  1.0% 98.3%
    [libx264 @ 0x7f8625800c00] mb I  I16..4: 68.1% 28.7%  3.2%
    [libx264 @ 0x7f8625800c00] mb P  I16..4:  0.1%  2.2%  0.4%  P16..4:  6.5%  4.0%  1.4%  0.0%  0.0%    skip:85.4%
    [libx264 @ 0x7f8625800c00] mb B  I16..4:  0.0%  0.2%  0.0%  B16..8:  8.8%  3.0%  0.3%  direct: 0.3%  skip:87.3%  L0:52.1% L1:47.5% BI: 0.4%
    [libx264 @ 0x7f8625800c00] 8x8 transform intra:57.7% inter:67.8%
    [libx264 @ 0x7f8625800c00] coded y,uvDC,uvAC intra: 25.7% 8.7% 0.9% inter: 3.9% 0.4% 0.0%
    [libx264 @ 0x7f8625800c00] i16 v,h,dc,p: 95%  2%  3%  0%
    [libx264 @ 0x7f8625800c00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 17%  5% 48%  5%  7%  6%  5%  4%  3%
    [libx264 @ 0x7f8625800c00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 14% 31%  6%  7%  7%  6%  5%  4%
    [libx264 @ 0x7f8625800c00] i8c dc,h,v,p: 88%  6%  6%  0%
    [libx264 @ 0x7f8625800c00] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0x7f8625800c00] ref P L0: 55.3%  5.5% 24.8% 14.5%
    [libx264 @ 0x7f8625800c00] ref B L0: 75.6% 16.7%  7.7%
    [libx264 @ 0x7f8625800c00] ref B L1: 93.9%  6.1%
    [libx264 @ 0x7f8625800c00] kb/s:2304.86
    

    Any idea on how to overcome this quality loss?
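
    A note grounded in the log above: libx264 reports "-qscale is ignored, -crf is recommended" and falls back to its default of crf=23, so the -qscale 1 request never takes effect, which is consistent with the large drop in file size. A sketch of an explicit constant-quality encode that could be tried instead (lower CRF means higher quality and a larger file; values around 17-18 are commonly regarded as visually near-lossless):

    ffmpeg -i video.webm -c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p video.mp4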