Advanced search

Media (91)

Other articles (64)

  • Improving the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the usability of multiple-select fields; see the two images below for a comparison.
    To do so, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)
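
    For context, Chosen is a jQuery plugin, so a configuration targeting select[multiple] boils down to something of this shape (a minimal sketch, assuming jQuery and the Chosen assets are loaded; the option values are hypothetical):

// Enhance every multiple-select element on the public site
jQuery(function ($) {
    $('select[multiple]').chosen({
        width: '100%',
        placeholder_text_multiple: 'Choose some options'
    });
});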

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    This is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. Simply subscribe to the translators' mailing list to ask for more information.
    Currently MediaSPIP is only available in French and (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or later. If in doubt, contact the administrator of your MédiaSpip to find out.

On other sites (9293)

  • Ffmpeg configuration to stream the frames of my webcam

    6 July 2023, by Ridweng

    I'm trying to build a server in Node.js that streams my webcam over RTSP, using Angular to capture the frames and WebSockets to send them to the server. The server side uses the "express-ws" module to create the WebSocket.

    I successfully send the frames from the webcam to the server in base64; the server receives these messages from a function triggered on an interval of (1000 / 30) ms.

    The issue lies in the implementation of my FFmpeg child process. I take each message, convert it into a buffer, and write it to FFmpeg's stdin.

    My current implementation is this one:

const { spawn } = require('child_process');
const { v4: uuidv4 } = require('uuid');

exports.stream = (ws, req) => {
    try {
        const mess = `connection from: ${req._remoteAddress} at ${req._startTime}.`;
        const initialMess = `Started ${mess}`;
        ws.uuid = uuidv4();
        console.log(initialMess);

        ws.on('message', function incoming(message) {
            message = JSON.parse(message);
            console.log(`Res: ${message.width} x ${message.height}`);

            // A new FFmpeg process is spawned for every incoming frame
            const ffmpeg = spawn('ffmpeg', [
                '-f', 'rawvideo',
                '-pixel_format', 'rgb24',
                '-video_size', `${message.width}x${message.height}`,
                '-framerate', `${message.framerate}`,
                '-i', '-',
                '-codec:v', 'libx264',
                '-preset', 'ultrafast',
                '-tune', 'zerolatency',
                '-f', 'rtsp',
                'rtsp://127.0.0.1:554/rtsp/stream',
            ]);

            // Decode the base64 payload and feed it to FFmpeg's stdin
            const base64Data = message.video;
            const videoData = Buffer.from(base64Data, 'base64');
            ffmpeg.stdin.write(videoData);
            ffmpeg.stdin.end();

            ffmpeg.stderr.on('data', (data) => {
                console.error(`FFmpeg : ${data}`);
            });

            ffmpeg.on('exit', (code, signal) => {
                if (dev) console.log(`FFmpeg process exited with code ${code} and signal ${signal}`); // dev: flag defined elsewhere

                // Close the WebSocket connection
                ws.close();
            });
        });

        ws.on('close', () => {
            const finalMess = `Stopped ${mess}`;
            rem(ws.uuid); // rem(): cleanup helper defined elsewhere
            console.log(finalMess);
        });
    } catch (err) {
        console.log(err);
    }
};

    As for the message itself, this is the Angular side sending it:

// toDataURL() returns a full data URL, e.g. "data:image/jpeg;base64,..."
const imageData = this.canvas.toDataURL('image/jpeg');
socket.send(JSON.stringify({ video: imageData, width: this.canvas.width, height: this.canvas.height, framerate: interval }));

    The interval variable is the divisor used for the timer that triggers the send function (in this case 30, i.e. roughly 30 frames per second).

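    For context, a minimal sketch of what that capture loop might look like (assumed, not taken from the post; canvas and socket are the objects from the snippet above):

const interval = 30;                       // target framerate divisor
setInterval(() => {
    const imageData = canvas.toDataURL('image/jpeg');
    socket.send(JSON.stringify({
        video: imageData,
        width: canvas.width,
        height: canvas.height,
        framerate: interval,
    }));
}, 1000 / interval);
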
    This is the error output I'm currently receiving from FFmpeg:

    ffmpeg version 6.0 Copyright (c) 2000-2023 the FFmpeg developers
  built with Apple clang version 14.0.3 (clang-1403.0.22.14.1)

FFmpeg :   configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/6.0 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox --enable-neon
  libavutil      58.  2.100 / 58.  2.100
  libavcodec     60.  3.100 / 60.  3.100
  libavformat    60.  3.100 / 60.  3.100
  libavdevice    60.  1.100 / 60.  1.100
  libavfilter     9.  3.100 /  9.  3.100
  libswscale      7.  1.100 /  7.  1.100
  libswresample   4. 10.100 /  4. 10.100
  libpostproc    57.  1.100 / 57.  1.100

FFmpeg : [rawvideo @ 0x150005ff0] Packet corrupt (stream = 0, dts = 0).

FFmpeg : Input #0, rawvideo, from 'fd:':
  Duration: N/A, start: 0.000000, bitrate: 221184 kb/s
  Stream #0:0: Video: rawvideo (RGB[24] / 0x18424752), rgb24, 640x480, 221184 kb/s, 30 tbr, 30 tbn

FFmpeg : Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))

FFmpeg : fd:: corrupt input packet in stream 0
[rawvideo @ 0x14ef24290] Invalid buffer size, packet size 76343 < expected frame_size 921600

FFmpeg : Error while decoding stream #0:0: Invalid argument

FFmpeg : [libx264 @ 0x14ef25a20] using cpu capabilities: ARMv8 NEON

FFmpeg : [libx264 @ 0x14ef25a20] profile High 4:4:4 Predictive, level 3.0, 4:4:4, 8-bit

FFmpeg : [libx264 @ 0x14ef25a20] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=6 threads=7 lookahead_threads=7 sliced_threads=1 slices=7 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0

FFmpeg : [tcp @ 0x1500078a0] Connection to tcp://127.0.0.1:554?timeout=0 failed: Connection refused
[out#0/rtsp @ 0x14ef24b00] Could not write header (incorrect codec parameters ?): Connection refused

FFmpeg : [vost#0:0/libx264 @ 0x14ef256d0] Error initializing output stream: 

FFmpeg : Conversion failed!

FFmpeg process exited with code 1 and signal null

    This is why I believe the issue lies in the configuration of FFmpeg and how to set it up for this case. It would be great to be able to play the RTSP link in VLC to confirm it works.
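
    For reference, a minimal sketch of an alternative invocation (an illustration under assumptions, not a verified fix): since the frames arrive as JPEG data URLs, one common arrangement is to spawn a single long-lived FFmpeg process per connection and feed it the JPEGs through image2pipe, instead of declaring rawvideo input and ending stdin after every frame. This assumes an RTSP server (e.g. MediaMTX) is already listening on port 554.

const { spawn } = require('child_process');

// Spawned once per WebSocket connection, not once per frame
const ffmpeg = spawn('ffmpeg', [
    '-f', 'image2pipe',          // input is a stream of concatenated images
    '-framerate', '30',
    '-c:v', 'mjpeg',             // each piped image is a JPEG
    '-i', '-',
    '-c:v', 'libx264',
    '-pix_fmt', 'yuv420p',
    '-preset', 'ultrafast',
    '-tune', 'zerolatency',
    '-f', 'rtsp',
    'rtsp://127.0.0.1:554/rtsp/stream',
]);

// In the 'message' handler: strip the data URL prefix before decoding,
// and keep stdin open between frames (no stdin.end()):
//   const jpeg = Buffer.from(message.video.split(',')[1], 'base64');
//   ffmpeg.stdin.write(jpeg);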

    I would appreciate any suggestions or guidance. Thanks in advance.

  • C++ Boost launching FFmpeg doesn't work, works OK via terminal

    21 June 2023, by Pit Digger

    I am launching an FFmpeg process from C++. The command works fine from the terminal command line, but gives an error when launched from code. What could cause this?

    Error

[AVFilterGraph @ 0x3cfadc0] Error parsing filterchain "[0:v]split=3[v1][v2][v3];[v1]copy[v1out];[v2]scale=w=1280:h=720[v2out];[v3]scale=w=640:h=360[v3out]"
[AVFilterGraph @ 0x2f9fb00] Error parsing filterchain
[AVFilterGraph @ 0x3cfadc0] Trailing garbage after a filter: split=3[v1][v2][v3];[v1]copy[v1out];[v2]scale=w=1280:h=720[v2out];[v3]scale=w=640:h=360[v3out]

    Code

    std::vector<std::string> args;
args.push_back("-i"); args.push_back("input.mp4");
args.push_back("-filter_complex");
args.push_back("\"[0:v]split=3[v1][v2][v3];[v1]copy[v1out];[v2]scale=w=1280:h=720[v2out];[v3]scale=w=640:h=360[v3out]\"");

args.push_back("-map");  args.push_back("[v1out]");
args.push_back("-c:v:0");  args.push_back("libx264");
args.push_back("-x264-params");  args.push_back("\"nal-hrd=cbr:force-cfr=1\"");
args.push_back("-b:v:0");  args.push_back("1M");
args.push_back("-maxrate:v:0");  args.push_back("2M");
args.push_back("-minrate:v:0");  args.push_back("2M");
args.push_back("-bufsize:v:0");  args.push_back("2M");
args.push_back("-preset");  args.push_back("fast");
args.push_back("-g");  args.push_back("48");
args.push_back("-sc_threshold");  args.push_back("0");
args.push_back("-keyint_min");  args.push_back("48");

args.push_back("-map");  args.push_back("[v2out]");
args.push_back("-c:v:1");  args.push_back("libx264");
args.push_back("-x264-params");  args.push_back("\"nal-hrd=cbr:force-cfr=1\"");
args.push_back("-b:v:1");  args.push_back("1M");
args.push_back("-maxrate:v:1");  args.push_back("1M");
args.push_back("-minrate:v:1");  args.push_back("1M");
args.push_back("-bufsize:v:1");  args.push_back("1M");
args.push_back("-preset");  args.push_back("fast");
args.push_back("-g");  args.push_back("48");
args.push_back("-sc_threshold");  args.push_back("0");
args.push_back("-keyint_min");  args.push_back("48");

args.push_back("-map");  args.push_back("[v3out]");
args.push_back("-c:v:2");  args.push_back("libx264");
args.push_back("-x264-params");  args.push_back("\"nal-hrd=cbr:force-cfr=1\"");
args.push_back("-b:v:2");  args.push_back("500K");
args.push_back("-maxrate:v:2");  args.push_back("500K");
args.push_back("-minrate:v:2");  args.push_back("500K");
args.push_back("-bufsize:v:2");  args.push_back("500K");
args.push_back("-preset");  args.push_back("fast");
args.push_back("-g");  args.push_back("48");
args.push_back("-sc_threshold");  args.push_back("0");
args.push_back("-keyint_min");  args.push_back("48");

args.push_back("-map");  args.push_back("a:0");
args.push_back("-c:a:0");  args.push_back("aac");
args.push_back("-b:a:0");  args.push_back("96k");
args.push_back("-ac");  args.push_back("2");
args.push_back("-map");  args.push_back("a:0");
args.push_back("-c:a:1");  args.push_back("aac");
args.push_back("-b:a:1");  args.push_back("96k");
args.push_back("-ac");  args.push_back("2");
args.push_back("-map");  args.push_back("a:0");
args.push_back("-c:a:2");  args.push_back("aac");
args.push_back("-b:a:2");  args.push_back("48k");
args.push_back("-ac");  args.push_back("2");

args.push_back("-avoid_negative_ts");  args.push_back("1");
args.push_back("-f");  args.push_back("hls");
args.push_back("-hls_time");  args.push_back("6");
args.push_back("-hls_list_size");  args.push_back("15");
args.push_back("-hls_flags");  args.push_back("independent_segments");
args.push_back("-hls_segment_type");  args.push_back("mpegts");
args.push_back("-hls_segment_filename");  args.push_back("/output/stream_%v_data%02d.ts");
args.push_back("-master_pl_name");  args.push_back("index.m3u8");
args.push_back("-var_stream_map");  args.push_back("\"v:0,a:0 v:1,a:1 v:2,a:2\"");
args.push_back("/output/stream_%v.m3u8");


m_childProcess = std::make_unique<bp::child>(
            bp::exe = ffmpegPath,
            bp::args = args);


    Command that the above code builds (indented for visibility):

     ffmpeg -i input.mp4 -c copy -filter_complex "[0:v]split=3[v1][v2][v3];[v1]copy[v1out];[v2]scale=w=1280:h=720[v2out];[v3]scale=w=640:h=360[v3out]" 
-map [v1out] -c:v:0 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:0 1M -maxrate:v:0 2M -minrate:v:0 2M -bufsize:v:0 2M -preset fast -g 48 -sc_threshold 0 -keyint_min 48 
-map [v2out] -c:v:1 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:1 1M -maxrate:v:1 1M -minrate:v:1 1M -bufsize:v:1 1M -preset fast -g 48 -sc_threshold 0 -keyint_min 48  
-map [v3out] -c:v:2 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:2 500K -maxrate:v:2 500K -minrate:v:2 500K -bufsize:v:2 500K -preset fast -g 48 -sc_threshold 0 -keyint_min 48 -map a:0 -c:a:0 aac -b:a:0 96k -ac 2 
-map a:0 -c:a:1 aac -b:a:1 96k -ac 2 -map a:0 -c:a:2 aac -b:a:2 48k -ac 2 
-avoid_negative_ts 1 -f hls -hls_time 6 -hls_list_size 15 -hls_flags independent_segments -hls_segment_type mpegts -hls_segment_filename /output/stream_%v_data%02d.ts -master_pl_name index.m3u8 -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2" /output/stream_%v.m3u8

  • Processing video frame by frame in AWS Lambda with Node.js and FFmpeg [closed]

    29 December 2023, by Aviato

    I am working on a project where I need to process video frames one at a time in an AWS Lambda function using Node.js. My goal is to avoid storing all frames in memory or on the filesystem, due to resource constraints. I plan to use the fluent-ffmpeg library, or ffmpeg via child processes, for the video processing.

    In the past I used OpenCV to process videos frame by frame without writing frames to disk or holding them all in memory at once. But now that I am using Node.js, it's a little harder to set this up with ffmpeg.

    Here is a small snippet of what I did with OpenCV:

import cv2

cap = cv2.VideoCapture(video_file)  # video_file: path to the input video

# Derive the writer parameters from the input stream
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter('output.mp4', fourcc, fps, (width, height))

def generate_frame():
    while cap.isOpened():
        code, frame = cap.read()
        if code:
            yield frame
        else:
            print("completed")
            break

for i, frame in enumerate(generate_frame()):
    # Process each frame here, then write it to the output video
    out.write(frame)

    Additionally, I intend to use image-processing libraries like Sharp and the Canvas API to edit individual frames before assembling the final video. I am looking for help handling video frames efficiently within the constraints of AWS Lambda, along the lines of the sketch below.
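
    For reference, a minimal sketch of the child-process approach (assumptions: fixed, known dimensions and a hypothetical processFrame handler): ffmpeg decodes to raw RGB on stdout, and the stream is sliced into fixed-size frames, so only one frame plus a partial remainder is ever held in memory.

const { spawn } = require('child_process');

const WIDTH = 640, HEIGHT = 360;        // assumed, e.g. probed with ffprobe
const FRAME_SIZE = WIDTH * HEIGHT * 3;  // rgb24: 3 bytes per pixel

const ffmpeg = spawn('ffmpeg', [
    '-i', 'input.mp4',
    '-f', 'rawvideo', '-pix_fmt', 'rgb24',
    '-vf', `scale=${WIDTH}:${HEIGHT}`,
    'pipe:1',
]);

let pending = Buffer.alloc(0);
ffmpeg.stdout.on('data', (chunk) => {
    pending = Buffer.concat([pending, chunk]);
    // Emit complete frames as soon as they are available
    while (pending.length >= FRAME_SIZE) {
        const frame = pending.subarray(0, FRAME_SIZE);
        pending = pending.subarray(FRAME_SIZE);
        processFrame(frame);            // hypothetical per-frame handler
    }
});

function processFrame(frame) {
    // e.g. wrap the raw buffer for sharp:
    // sharp(frame, { raw: { width: WIDTH, height: HEIGHT, channels: 3 } })
}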

    Any insights, code snippets, or recommendations would be greatly appreciated. Thank you!