Other articles (63)

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Customizable form

    21 June 2013

    This page presents the fields available in the form for publishing a media item and lists the fields that can be added. Media creation form
    For a media-type document, the default fields are: Text, Enable/disable the forum (the comment prompt can be disabled for each article), Licence, Add/remove authors, Tags.
    This form can be modified under:
    Administration > Configuration des masques de formulaire. (...)

  • The farm's recurring Cron tasks

    1 December 2010

    Managing the farm relies on running several recurring tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the mutualisation on a regular basis. Coupled with a system Cron on the central site of the mutualisation, it generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)
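
    As a rough illustration of the coupling described above (a sketch only; the host and URL are hypothetical, not taken from the article), the system Cron on the central host can be a crontab entry that requests the central site once a minute, letting gestion_mutu_super_cron fan the call out to every instance:

    # Hypothetical crontab entry on the mutualisation's central server:
    # request the central site once a minute so its SPIP cron (and thus
    # gestion_mutu_super_cron) runs even when no visitor happens to come by.
    * * * * * curl -fsS 'https://ferme.example.org/spip.php' >/dev/null 2>&1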

On other sites (9439)

  • local.ERROR: ffmpeg failed to execute command

    20 August 2024, by AMNA IQBAL

    I am trying to draw points on a video.

    The video is 12 seconds long, and for each second there are 17 data points that need to be plotted on that second's frame.
    It works for 6 seconds but not for 12 seconds.

    Why doesn't it work for longer videos? Is there a limit on the length of ffmpeg commands?

public function overlayCoordinates(Request $request)
{
    Log::info('Received request to overlay coordinates on video.');

    set_time_limit(600); // 600 seconds = 10 minutes
    try {
        $request->validate([
            'video' => 'required|file|mimes:mp4,avi,mkv,webm|max:102400', // Video max size 100MB
            'coordinates' => 'required|file|mimes:txt|max:5120', // Coordinates text file max size 5MB
        ]);

        $videoFile = $request->file('video');
        $coordinatesFile = $request->file('coordinates');

        $videoFilePath = $videoFile->getRealPath();
        $videoFileName = $videoFile->getClientOriginalName();

        // Move the video file to the desired location if needed
        $storedVideoPath = $videoFile->storeAs('public/videos', $videoFileName);

        // Open the video file using Laravel FFmpeg
        $media = FFMpeg::fromDisk('public')->open('videos/' . $videoFileName);
        $duration = $media->getDurationInSeconds();

        Log::info('Duration: ' . $duration);

        $coordinatesJson = file_get_contents($coordinatesFile->getPathname());
        $coordinatesArray = json_decode($coordinatesJson, true);

        $frameRate = 30; // Assuming a frame rate of 30 fps
        $visibilityDuration = 0.5; // Each point stays visible for 0.5 second

        for ($currentTime = 0; $currentTime < 7; $currentTime++) {
            $filterString = ""; // Reset filter string for each frame
            $frameIndex = intval($currentTime * $frameRate); // Convert current time to a frame index

            if (isset($coordinatesArray['graphics'][$frameIndex])) {
                // Loop through the first 12 keypoints (or fewer if not available)
                $keypoints = $coordinatesArray['graphics'][$frameIndex]['kpts'];
                for ($i = 0; $i < min(12, count($keypoints)); $i++) {
                    $keypoint = $keypoints[$i];

                    $x = $keypoint['p'][0] * 1920; // Scale x coordinate to video width
                    $y = $keypoint['p'][1] * 1080; // Scale y coordinate to video height

                    $startTime = $frameIndex / $frameRate; // Calculate start time
                    $endTime = $startTime + $visibilityDuration; // End time, 0.5 second later

                    // Add a drawbox filter for the current keypoint
                    $filterString .= "drawbox=x={$x}:y={$y}:w=10:h=10:color=red@0.5:t=fill:enable='between(t,{$startTime},{$endTime})',";
                }
                Log::info("Processing frame index: {$frameIndex}, drawing first 12 keypoints.");
            }

            $filterString = rtrim($filterString, ',');

            // Apply the filter for the current frame
            if (!empty($filterString)) {
                $media->addFilter(function ($filters) use ($filterString) {
                    $filters->custom($filterString);
                });
            }
        }

        $filename = uniqid() . '_overlayed.mp4';
        $destinationPath = 'videos/' . $filename;

        $format = new \FFMpeg\Format\Video\X264('aac');
        $format->setKiloBitrate(5000) // Increase bitrate for better quality
               ->setAdditionalParameters(['-profile:v', 'high', '-preset', 'veryslow', '-crf', '18']) // High profile, very slow preset, CRF 18
               ->setAudioCodec('aac')
               ->setAudioKiloBitrate(192); // Higher audio bitrate

        // Export the video in one pass to a specific disk and directory
        $media->export()
              ->toDisk('public')
              ->inFormat($format)
              ->save($destinationPath);

        return response()->json([
            'message' => 'Video processed successfully with overlays.',
            'path' => Storage::url($destinationPath)
        ]);
    } catch (\Exception $e) {
        Log::error('Overlay process failed: ' . $e->getMessage());
        return response()->json(['error' => 'Overlay process failed. Please check logs for details.'], 500);
    }
}

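    Two notes, offered as hedged suggestions rather than a confirmed diagnosis. First, the loop above stops at $currentTime < 7, which by itself would explain why only the first seconds get points; iterating up to $duration is presumably what was intended. Second, ffmpeg itself does not cap the number of filters, but the operating system caps the length of a command line (about 32 K characters on Windows), and one drawbox per keypoint per second grows the argument quickly. ffmpeg's -filter_script option reads the filtergraph from a file instead of the command line, which sidesteps that limit. A minimal sketch (input.mp4 and points.txt are hypothetical names):

    # points.txt holds the same chain the PHP loop builds, e.g.
    #   drawbox=x=960:y=540:w=10:h=10:color=red@0.5:t=fill:enable='between(t,0,0.5)',drawbox=...
    # Reading the graph from a file avoids the OS argument-length limit entirely.
    ffmpeg -i input.mp4 -filter_script:v points.txt \
           -c:v libx264 -preset veryslow -crf 18 -profile:v high \
           -c:a aac -b:a 192k output_overlayed.mp4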

  • Using FFmpeg to receive an H.264 video stream over SRTP on Windows [closed]

    29 August 2024, by Taylor Yi

    I am using an LGPL build of FFmpeg on Windows, downloaded from https://github.com/BtbN/FFmpeg-Builds.

    I have been trying to receive a Secure RTP video stream using FFmpeg. I want to receive the video as bgr24 rawvideo and send it to the output stream (using -).

    I have a script in GStreamer that works, so I want to convert it to FFmpeg. The script below plays the received video using autovideosink:

    gst-launch-1.0 -v udpsrc port=4000 caps="application/x-srtp, ssrc=(uint)1356955624, mki=(buffer)01, srtp-key=(buffer)012345678901234567890123456789012345678901234567890123456789, srtp-cipher=(string)aes-128-icm, srtp-auth=(string)hmac-sha1-80, srtcp-cipher=(string)aes-128-icm, srtcp-auth=(string)hmac-sha1-80" ! srtpdec ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink

    At first I wanted to check that the output is correct, so I set the output to a video file instead. Here is the command I tried:

    ffmpeg -protocol_whitelist file,udp,rtp -i settings.sdp -c:v copy out.mp4

    The settings.sdp file looks like this:

    c=IN IP4 0.0.0.0
m=video 5000 RTP/SAVP 96
a=rtpmap:96 H264/90000
a=ssrc:1356955624 cname:stream
a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:ASNFZ4kBI0VniQEjRWeJASNFZ4kBI0VniQEjRWeJ|2^20|1:1
a=rtcp-mux
a=rtcp-fb:96 nack
a=srtp-cipher:aes-128-icm
a=srtp-auth:hmac-sha1-80
a=srtcp-cipher:aes-128-icm
a=srtcp-auth:hmac-sha1-80

    But when I run this command, the following error log is printed:

    Incorrect amount of SRTP params
[sdp @ 000001e40785f340] RTP H.264 NAL unit type 27 is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
[sdp @ 000001e40785f340] nal size exceeds length: 6664 608
[sdp @ 000001e40785f340] nal size exceeds length: 65101 502
[h264 @ 000001e4078661c0] non-existing PPS 0 referenced
[extract_extradata @ 000001e4078c4c80] Invalid NAL unit 0, skipping.
    Last message repeated 2 times
[h264 @ 000001e4078661c0] Invalid NAL unit 0, skipping.
    Last message repeated 2 times
[h264 @ 000001e4078661c0] non-existing PPS 0 referenced
[h264 @ 000001e4078661c0] decode_slice_header error
[h264 @ 000001e4078661c0] no frame!
[sdp @ 000001e40785f340] Undefined type (30)

    Is there an option I am missing? What can I do to fix this?

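    A few things worth checking, offered as guesses rather than a verified fix. The first log line, "Incorrect amount of SRTP params", suggests the SRTP key was never set up, and the NAL-unit errors that follow are consistent with the depacketizer parsing still-encrypted payloads. FFmpeg's a=crypto parsing is stricter than GStreamer's, so stripping the lifetime/MKI suffix (|2^20|1:1) from the inline key is worth a try; it is also worth double-checking that the sender targets the port in the SDP (the GStreamer pipeline listened on 4000, while the SDP declares 5000). A sketch under those assumptions, going straight to the bgr24 rawvideo output originally wanted:

    # settings.sdp edited so the crypto line ends at the base64 key:
    #   a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:ASNFZ4kBI0VniQEjRWeJASNFZ4kBI0VniQEjRWeJ
    ffmpeg -protocol_whitelist file,udp,rtp -i settings.sdp \
           -f rawvideo -pix_fmt bgr24 -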

  • How to share a video stream to WSL2 with ffmpeg? [closed]

    27 October 2024, by 笑先生

    Most solutions for using a camera in WSL involve building your own WSL kernel. I have done that, following the steps in Capturing webcam video with OpenCV in WSL2.

    However, that approach is complicated and time-consuming. I want to achieve the same thing by sharing a video stream instead.

    Share method

    Step 1: Run the command below on Windows to list all the camera devices. I see an integrated camera, "Integrated Webcam" (video), in the output.

    ffmpeg -list_devices true -f dshow -i dummy

    Step 2: Check the IP of the Ethernet adapter vEthernet (WSL). On my computer, it is 172.24.176.1.
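
    (For this step, the address can be read from the adapter listing; a small sketch, since adapter names vary by machine:)

    :: On Windows: look for "Ethernet adapter vEthernet (WSL)" and its
    :: IPv4 Address in the output.
    ipconfig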

    Step 3: Run the command below on Windows to share the video stream.

    ffmpeg -f dshow -i video="Integrated Webcam" -preset ultrafast -tune zerolatency -vcodec libx264 -f mpegts udp://172.24.176.1:5000

    Test

    Run this command to play the video stream: ffplay udp://172.24.176.1:5000

    It shows the video when the command is run from a Windows (Win10) terminal.

    But it shows nothing when the command is run from a WSL (Ubuntu 22.04) terminal. Why?
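
    One likely explanation, offered as an assumption about the setup rather than a confirmed diagnosis: under WSL2's default NAT networking, 172.24.176.1 is the Windows side of the vEthernet (WSL) adapter, so datagrams sent there are delivered to Windows itself, which is why ffplay works in a Win10 terminal. For WSL to receive the stream, it has to be sent to the WSL VM's own eth0 address instead. A sketch (172.24.x.y stands for whatever the first command prints):

    # Inside WSL: find the VM's own address on the WSL NAT network.
    ip -4 addr show eth0

    # On Windows: target that address instead of the host-side 172.24.176.1.
    ffmpeg -f dshow -i video="Integrated Webcam" -vcodec libx264 \
           -preset ultrafast -tune zerolatency -f mpegts udp://172.24.x.y:5000

    # Inside WSL: play it.
    ffplay udp://172.24.x.y:5000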