
Media (91)

Other articles (25)

  • List of compatible distributions

    26 April 2011

    The table below is the list of Linux distributions compatible with the automated installation script of MediaSPIP.
    Distribution name | Version name         | Version number
    Debian            | Squeeze              | 6.x.x
    Debian            | Wheezy               | 7.x.x
    Debian            | Jessie               | 8.x.x
    Ubuntu            | The Precise Pangolin | 12.04 LTS
    Ubuntu            | The Trusty Tahr      | 14.04
    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • Selection of projects using MediaSPIP

    2 May 2011

    The examples below are representative of specific uses of MediaSPIP for particular projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops activities around hospitality, internet access points, training, and the running of innovative projects in the field of information and communication technologies, as well as website hosting. It plays a unique and prominent role in the Brest (France) area and, at the national level, among the half-dozen associations of this kind. Its members (...)

  • Selection of projects using MediaSPIP

    29 April 2011

    The examples below are representative of specific uses of MediaSPIP for certain projects.
    Do you think you have built a "remarkable" site with MediaSPIP? Let us know about it here.
    MediaSPIP farm @ Infini
    The Association Infini develops activities around hospitality, internet access points, training, and the running of innovative projects in the field of information and communication technologies, as well as website hosting. In this area it plays a unique role (...)

On other sites (2661)

  • What is the least CPU-intensive format to pass high-resolution frames from ffmpeg to openCV? [closed]

    3 October 2024, by Doctico

    I'm developing an application to process a high-resolution (2560x1440) RTSP stream from an IP camera using OpenCV.

    


    What I've Tried

    1. OpenCV's VideoCapture:
      • Performance was poor, even with CAP_PROP_FFMPEG.

    2. FFmpeg with MJPEG (see the sketch after this list):
      • Decoded the stream as MJPEG and created OpenCV Mats from the image2pipe JPEG buffer.
      • Resulted in lower CPU usage for OpenCV but higher for FFmpeg.

    3. Current approach:
      • Output raw video in YUV420p format from FFmpeg.
      • Construct OpenCV Mats from each frame buffer.
      • Achieves low FFmpeg CPU usage and moderately high OpenCV CPU usage.
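    For reference, a rough sketch of what the MJPEG variant in point 2 might look like (illustrative only; the command options and the processFrame callback are placeholders, not code from the original post):

import subprocess
import cv2
import numpy as np

def stream_mjpeg(rtsp_url):
    # Re-encode the stream to MJPEG and pipe the JPEG frames out
    cmd = [
        'ffmpeg',
        '-i', rtsp_url,
        '-an', '-sn',
        '-c:v', 'mjpeg',
        '-q:v', '3',            # JPEG quality (lower value = higher quality)
        '-f', 'image2pipe',
        '-'
    ]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    buf = b''
    while True:
        chunk = proc.stdout.read(4096)
        if not chunk:
            break
        buf += chunk
        # A JPEG frame starts with FF D8 and ends with FF D9
        start = buf.find(b'\xff\xd8')
        end = buf.find(b'\xff\xd9', start + 2)
        if start != -1 and end != -1:
            jpg = buf[start:end + 2]
            buf = buf[end + 2:]
            frame = cv2.imdecode(np.frombuffer(jpg, np.uint8), cv2.IMREAD_COLOR)
            if frame is not None:
                processFrame(frame)  # placeholder, as in the original post

    proc.terminate()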

    Current Implementation

    


import subprocess
import cv2
import numpy as np

def stream_rtsp(rtsp_url):
    # FFmpeg command to stream RTSP and output raw frames to a pipe
    ffmpeg_command = [
        'ffmpeg',
        '-hwaccel', 'auto',
        '-i', rtsp_url,
        '-pix_fmt', 'yuv420p',  # Use YUV420p format
        '-vcodec', 'rawvideo',
        '-an',  # Disable audio
        '-sn',  # Disable subtitles
        '-f', 'rawvideo',
        '-'  # Output to pipe
    ]

    # Start FFmpeg process
    process = subprocess.Popen(ffmpeg_command, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    # Frame dimensions
    width, height = 2560, 1440
    frame_size = width * height * 3 // 2  # YUV420p uses 1.5 bytes per pixel

    while True:
        # Read one raw frame from FFmpeg output; a short read means the stream ended
        raw_frame = process.stdout.read(frame_size)
        if len(raw_frame) < frame_size:
            break

        # Reinterpret the YUV420p buffer as a Mat and convert to BGR for OpenCV processing
        yuv = np.frombuffer(raw_frame, np.uint8).reshape((height * 3 // 2, width))
        frame = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_I420)

        processFrame(frame)  # user-defined processing (not shown in the post)

    # Clean up
    process.terminate()
    cv2.destroyAllWindows()
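
    A minimal usage sketch (the camera URL and the processFrame body below are illustrative placeholders, not part of the original post):

def processFrame(frame):
    # Placeholder processing: convert to grayscale and print the mean intensity
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(gray.mean())

stream_rtsp('rtsp://192.0.2.10:554/stream1')  # hypothetical camera URL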


    


    Question

    


    Are there any other ways to improve performance when processing high-resolution frames from an RTSP stream?

    


  • Output resolution substantially below input resolution [closed]

    29 August 2024, by bobford

    I have written an Android app to overlay a watermark PNG using FFmpeg 6.0; the overlay itself works fine on 1K and 4K videos. In both cases, however, the output quality deteriorates substantially, consistent with the reduction in file size, even though the original width and height in pixels are retained.

    


    The ffmpeg command is:

    


    String[] array = new String[] {"-i ", inputFile, " -i ", watermarkFile, " -filter_complex ", overlayPosition, " -codec:a copy ", outputFile};
String delimiter = "";
String command = String.join(delimiter, array);


    


    I would like to retain the original quality, or get as close to it as possible, even with the larger file size.
It would seem there are default parameters that I am unaware of, and I have absolutely no idea how to find them, even after extensive searching. Thank you for your help!
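
    For context, when no quality options are given, FFmpeg falls back to its encoder defaults (for H.264 output this is typically libx264 with CRF 23 and preset medium), which is a plausible cause of the quality drop. A sketch of passing explicit quality settings in the same array style, assuming the FFmpeg build in use includes libx264 (the -crf value here is illustrative, not from the original post):

// Illustrative: request higher quality from the libx264 encoder (lower CRF = higher quality)
String[] array = new String[] {
        "-i ", inputFile,
        " -i ", watermarkFile,
        " -filter_complex ", overlayPosition,
        " -c:v libx264 -crf 18 -preset medium",
        " -codec:a copy ",
        outputFile};
String command = String.join("", array);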

    


  • FFMPEG in Android Kotlin - processed video should have specific resolution

    31 May 2024, by Utsav

    I'm recording video from both the front and back cameras, and I get a PIP (picture-in-picture) video and a horizontally stacked video. I then need to merge both videos. The problem is that merging requires both videos (PIP and stacked) to have the same resolution and aspect ratio, which is not currently the case. So the FFMPEG commands executed in code to generate these two videos need to be modified so that the resolution and aspect ratio match.

    


    //app -> build.gradle
implementation "com.writingminds:FFmpegAndroid:0.3.2"


    


    private fun connectFfmPeg() {
        val overlayX = 10
        val overlayY = 10
        val overlayWidth = 200
        val overlayHeight = 350

        outputFile1 = createVideoPath().absolutePath
        outputFile2 = createVideoPath().absolutePath
        //Command to generate PIP video
        val cmd1 = arrayOf(
            "-y",
            "-i",
            videoPath1,
            "-i",
            videoPath2,
            "-filter_complex",
            "[1:v]scale=$overlayWidth:$overlayHeight [pip]; [0:v][pip] overlay=$overlayX:$overlayY",
            "-preset",
            "ultrafast",
            outputFile1
        )

        //Command to generate horizontal stack video
        val cmd2 = arrayOf(
            "-y",
            "-i",
            videoPath1,
            "-i",
            videoPath2,
            "-filter_complex",
            "hstack",
            "-preset",
            "ultrafast",
            outputFile2
        )

        val ffmpeg = FFmpeg.getInstance(this)
        //Both commands are executed
        //Following execution code is OK
        //Omitted for brevity
    }
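
    One possible adjustment, sketched below under the assumption that the 640x640, 1:1 normalization used in the command-line test at the end of this post is acceptable (this code is illustrative and not from the original post), is to append the same scale/setdar filters to each filtergraph so both outputs share the same resolution and display aspect ratio:

        //Illustrative sketch: normalize both outputs inside the existing filtergraphs
        //(overlayWidth, overlayHeight, overlayX, overlayY as defined in connectFfmPeg() above)
        val pipFilter =
            "[1:v]scale=$overlayWidth:$overlayHeight [pip]; " +
                    "[0:v][pip] overlay=$overlayX:$overlayY,scale=640:640,setdar=1:1"
        val stackFilter = "hstack,scale=640:640,setdar=1:1"
        //Use pipFilter as the "-filter_complex" value in cmd1 and stackFilter in cmd2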


    


    Here is mergeVideos(), which is executed last.

    


    private fun mergeVideos(ffmpeg: FFmpeg) {
        //Sample command:
        /*
        ffmpeg -y -i output_a.mp4 -i output_b.mp4 \
        -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]" \
        -map "[outv]" -map "[outa]" -preset "ultrafast" output.mp4
        */
        finalOutputFile = createVideoPath().absolutePath

        val cmd = arrayOf(
            "-y",
            "-i",
            outputFile1,
            "-i",
            outputFile2,
            "-filter_complex",
            "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]",
            "-map", "[outv]",
            "-map", "[outa]",
            "-preset", "ultrafast",
            finalOutputFile
        )
        //Execution code omitted for brevity
    }


    


    Error: When mergeVideos() is executed, neither the progress nor the failure callback is called. Logcat shows no further output, and the app does not crash either.

    


    Possible solution:
After the generated PIP and horizontally stacked videos were saved to my device's local storage, I moved them to my laptop and tried some FFmpeg commands at the prompt to process them; the merge works on the command line:

    


    //First two commands can't be executed in Kotlin code
//This is the main problem
ffmpeg -i v1.mp4 -vf "scale=640:640,setdar=1:1" output_a.mp4
ffmpeg -i v2.mp4 -vf "scale=640:640,setdar=1:1" output_b.mp4
ffmpeg -y -i output_a.mp4 -i output_b.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]" -map "[outv]" -map "[outa]" -preset "ultrafast" output.mp4
//Merge is successful via command prompt


    


    Please suggest a solution.