Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • adb screenrecord displays only a screenshot, it does not stream the screen [closed]

    28 April, by hexols

    I have an Android TV, and I want to stream its screen to my Ubuntu PC. I used this command:

    adb shell screenrecord --output-format=h264 - | ffplay -
    

    After waiting for a while, it displays a screenshot of the TV, but I want to display a live stream of the Android TV. I also tried the following command, but got the same result:

    adb exec-out screenrecord --bit-rate=16m --output-format=h264 --size 800x600 - | ffplay -framerate 60 -framedrop -bufsize 16M -
    

    How can I achieve this with this command? Or is there another way to do it with VLC/GStreamer/FFmpeg, other than scrcpy/vysor?
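
    A minimal sketch of the kind of low-latency invocation I might try next (the ffplay flags here are assumptions, not something I have confirmed on this TV):

    adb exec-out screenrecord --output-format=h264 - | ffplay -fflags nobuffer -flags low_delay -probesize 32 -framedrop -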

  • How to Convert 16:9 Video to 9:16 Ratio While Ensuring Speaker Presence in Frame? [closed]

    28 April, by shreesha

    I have tried many times to figure out the problem with detecting the face, and the result is also not smooth enough compared to other tools out there.

    So basically I am using Python and YOLO in this project, but I want to detect the person who is talking and treat them as the ROI (region of interest).

    Here is the code:

    from ultralytics import YOLO
    from ultralytics.engine.results import Results
    from moviepy.editor import VideoFileClip, concatenate_videoclips
    from moviepy.video.fx.crop import crop
    
    # Load the YOLOv8 model
    model = YOLO("yolov8n.pt")
    
    # Load the input video
    clip = VideoFileClip("short_test.mp4")
    
    tacked_clips = []
    
    for frame_no, frame in enumerate(clip.iter_frames()):
        # Process the frame
        results: list[Results] = model(frame)
    
        # Get the bounding box of the main object
        if results[0].boxes:
            objects = results[0].boxes
            main_obj = max(
                objects, key=lambda x: x.conf
            )  # Pick the detection with the highest confidence as the main object
    
            x1, y1, x2, y2 = [int(val) for val in main_obj.xyxy[0].tolist()]
    
            # Calculate the crop region based on the object's position and the target aspect ratio
            w, h = clip.size
            new_w = int(h * 9 / 16)
            new_h = h
    
            # Use the centre of the bounding box (not its width/height) as the crop centre
            x_center = (x1 + x2) // 2
            y_center = (y1 + y2) // 2
    
            # Adjust x_center and y_center if they would cause the crop region to exceed the bounds
            if x_center + (new_w / 2) > w:
                x_center -= x_center + (new_w / 2) - w
            elif x_center - (new_w / 2) < 0:
                x_center += abs(x_center - (new_w / 2))
    
            if y_center + (new_h / 2) > h:
                y_center -= y_center + (new_h / 2) - h
            elif y_center - (new_h / 2) < 0:
                y_center += abs(y_center - (new_h / 2))
    
            # Create a subclip for the current frame
            start_time = frame_no / clip.fps
            end_time = (frame_no + 1) / clip.fps
            subclip = clip.subclip(start_time, end_time)
    
            # Apply cropping using MoviePy
            cropped_clip = crop(
                subclip, x_center=x_center, y_center=y_center, width=new_w, height=new_h
            )
    
            tacked_clips.append(cropped_clip)
    
    reframed_clip = concatenate_videoclips(tacked_clips, method="compose")
    reframed_clip.write_videofile("output_video.mp4")
    

    So basically I want to fix the face/ROI detection so that it detects the face, keeps the face and body in the frame, and makes sure that the person who is speaking is brought into the frame.

  • Convert a sequence of exr files to mp4 using moviepy

    28 April, by 0xbadf00d

    I have a sequence of EXR files which I want to convert into a video using moviepy. Since the colors in the EXRs need to be converted (otherwise the video appears almost black), I need to specify a color transfer characteristic. When I run ffmpeg directly using

    ffmpeg -y -apply_trc iec61966_2_1 -i input_%d.exr -vcodec mpeg4 output.mp4

    everything works perfectly fine. However, if I read the EXRs using clip = ImageSequenceClip("folder_to_my_exrs/", fps = 24) and try to write the video using .write_videofile("output.mp4", codec = "mpeg4", ffmpeg_params = ["-apply_trc", "iec61966_2_1"]), I receive the error

    b'Codec AVOption apply_trc (color transfer characteristics to apply to EXR linear input) specified for output file #0 (output.mp4) is not an encoding option.\r\n'

    I don't really understand this. What can I do?
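
    One workaround sketch I am considering (the file name patterns here are assumptions): since -apply_trc is an input option of the EXR decoder, it could be applied in a separate ffmpeg pass that converts the EXRs to PNG, and the PNGs then fed to ImageSequenceClip:

    ffmpeg -apply_trc iec61966_2_1 -i input_%d.exr frame_%04d.png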

  • Converting DAV to MP4 and OGG

    28 avril, par mackowiakp

    I want to prepare a web page containing footage from security camera recorders. Each recorder delivers video files in DAV format, so each clip is converted to MP4 by a script using the following syntax:

    ffmpeg -y -i movie.dav -vcodec libx264 -crf 24 movie.mp4
    

    So I included the following entry in my HTML5 code:

      
    

    It works correctly with Chrome but not with Firefox. For it to work properly in FF, it is necessary to add a link to an OGG file, so the correct HTML5 syntax for both browsers should look like this:

     
    

    Can anybody help me with the correct ffmpeg syntax to create the OGG file?
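
    A minimal sketch of what I would try, based on the libtheora/libvorbis support listed in the build configuration below (the quality value is an assumption):

    ffmpeg -y -i movie.dav -c:v libtheora -q:v 7 -c:a libvorbis movie.ogg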

    Console output after using the -movflags +faststart option:

    [maciek@piotr MMM]$ ../ffmpeg-2.4.2-64bit-static/ffmpeg -movflags +faststart -y -i   04.24.23-04.24.38\[M\]\[@0\]\[0\].dav -vcodec libx264 -crf 24 10.mp4
    ffmpeg version 2.4.2-   http://johnvansickle.com/ffmpeg/    Copyright (c) 2000-2014 the FFmpeg developers
      built on Oct  9 2014 07:24:56 with gcc 4.8 (Debian 4.8.3-11)
      configuration: --enable-gpl --enable-version3 --disable-shared --disable-debug --enable-runtime-cpudetect --enable-libmp3lame --enable-libx264 --enable-libx265 --enable-libwebp --enable-libspeex --enable-libvorbis --enable-libvpx --enable-libfreetype --enable-fontconfig --enable-libxvid --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-gray --enable-libopenjpeg --enable-libopus --disable-ffserver --enable-libass --enable-gnutls --cc=gcc-4.8
      libavutil      54.  7.100 / 54.  7.100
      libavcodec     56.  1.100 / 56.  1.100
      libavformat    56.  4.101 / 56.  4.101
      libavdevice    56.  0.100 / 56.  0.100
      libavfilter     5.  1.100 /  5.  1.100
      libswscale      3.  0.100 /  3.  0.100
      libswresample   1.  1.100 /  1.  1.100
      libpostproc    53.  0.100 / 53.  0.100
    Option movflags not found.
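
    As far as I understand, -movflags is an option of the MP4/MOV muxer, so it belongs among the output options after -i; a sketch of the reordered command (same file names as above, not yet verified):

    ../ffmpeg-2.4.2-64bit-static/ffmpeg -y -i 04.24.23-04.24.38\[M\]\[@0\]\[0\].dav -vcodec libx264 -crf 24 -movflags +faststart 10.mp4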
    
  • How to run ffmpeg with hardware encoding when using -filter_complex to hardcode subtitles? [closed]

    28 April, by user2006141

    I'm converting MKV files and hardcoding subtitles into MP4 format. I have over 100 files and want to speed the process up by enabling hardware encoding. I am able to hardware encode without hardcoding the subtitles via -filter_complex, but as soon as I apply the filter it errors out.

    Here is my command line that works perfectly fine.

    ffmpeg -i input.mkv -filter_complex "[0:v:0]subtitles='input.mkv':si=0[v]" -map "[v]" -map 0:a:1 -c:a copy -map_chapters -1 "output.mp4"

    Here is my command line that works with hardware encoding, without -filter_complex:

    ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mkv -c:v hevc_nvenc -map 0:a:1 -c:a copy -map_chapters -1 "output.mp4"

    What I need to do is enable hardware encoding with -filter_complex, so I tried this command:

    ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mkv -filter_complex "[0:v:0]subtitles='input.mkv':si=0[v]" -map "[v]" -c:v hevc_nvenc -map 0:a:1 -c:a copy -map_chapters -1 "output.mp4"

    I get this error

    Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scale_0'
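
    As far as I can tell, the subtitles filter runs in software, so frames kept in GPU memory by -hwaccel_output_format cuda cannot be fed into it directly. A sketch of the variant I plan to try next (unverified): drop -hwaccel_output_format cuda so the decoded frames land in system memory, run the subtitles filter there, and still encode with hevc_nvenc:

    ffmpeg -hwaccel cuda -i input.mkv -filter_complex "[0:v:0]subtitles='input.mkv':si=0[v]" -map "[v]" -map 0:a:1 -c:v hevc_nvenc -c:a copy -map_chapters -1 "output.mp4"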