
Other articles (65)

  • Add notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critique of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages
    To contribute, register to the project users’ mailing (...)

On other sites (8763)

  • Unable to read video streams in FFmpeg and send them to the YouTube RTMP server

    29 August 2024, by Rahul Bundele

    I'm trying to send two video streams from the browser as array buffers (a webcam stream and a screen-share stream) to the server via WebRTC data channels, and I want FFmpeg to overlay the webcam on the screen-share video and send the result to the YouTube RTMP server. The RTC connections are established and the server does receive the buffers, but I'm getting an error in FFmpeg (shown at the bottom). Any tips on adding the overlay and sending it to the YouTube RTMP server would be appreciated.

    


    Client.js

    


const webCamStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });

    


    const webcamRecorder = new MediaRecorder(webCamStream, { mimeType: 'video/webm' });
webcamRecorder.ondataavailable = (event) => {
    if (event.data.size > 0 && webcamDataChannel.readyState === 'open') {
        const reader = new FileReader();
        reader.onload = function () {
            const arrayBuffer = this.result;
            webcamDataChannel.send(arrayBuffer);
        };
        reader.readAsArrayBuffer(event.data);
    }
};
webcamRecorder.start(100);  // Adjust the interval as needed

// Send screen share stream data
const screenRecorder = new MediaRecorder(screenStream, { mimeType: 'video/webm' });
screenRecorder.ondataavailable = (event) => {
    if (event.data.size > 0 && screenDataChannel.readyState === 'open') {
        const reader = new FileReader();
        reader.onload = function () {
            const arrayBuffer = this.result;
            screenDataChannel.send(arrayBuffer);
        };
        reader.readAsArrayBuffer(event.data);
    }
};
screenRecorder.start(100); 


    



    


    Server.js

    


const { spawn } = require('child_process');  // Node core module, needed for spawn() below
const { PassThrough } = require('stream');   // Node core module, needed for the PassThrough streams

const youtubeRTMP = 'rtmp://a.rtmp.youtube.com/live2/youtube key';

// Create PassThrough streams for webcam and screen
const webcamStream = new PassThrough();
const screenStream = new PassThrough();

// FFmpeg arguments for processing live streams
const ffmpegArgs = [
  '-re',
  '-i', 'pipe:3',                  // Webcam input via pipe:3
  '-i', 'pipe:4',                  // Screen share input via pipe:4
  '-filter_complex',               // Complex filter for overlay
  '[0:v]scale=320:240[overlay];[1:v][overlay]overlay=10:10[out]',
  '-map', '[out]',                 // Map the output video stream
  '-c:v', 'libx264',               // Use H.264 codec for video
  '-preset', 'ultrafast',          // Use ultrafast preset for low latency
  '-crf', '25',                    // Set CRF for quality/size balance
  '-pix_fmt', 'yuv420p',           // Pixel format for compatibility
  '-c:a', 'aac',                   // Use AAC codec for audio
  '-b:a', '128k',                  // Set audio bitrate
  '-f', 'flv',                     // Output format (FLV for RTMP)
  youtubeRTMP                      // Output to YouTube RTMP server
];

// Spawn the FFmpeg process
const ffmpegProcess = spawn('ffmpeg', ffmpegArgs, {
  stdio: ['pipe', 'pipe', 'pipe', 'pipe', 'pipe']
});

// Pipe the PassThrough streams into FFmpeg
webcamStream.pipe(ffmpegProcess.stdio[3]);
screenStream.pipe(ffmpegProcess.stdio[4]);

ffmpegProcess.on('close', code => {
  console.log(`FFmpeg process exited with code ${code}`);
});

ffmpegProcess.on('error', error => {
  console.error(`FFmpeg error: ${error.message}`);
});

const handleIncomingData = (data, stream) => {
  const buffer = Buffer.from(data);
  stream.write(buffer);
};


    


    The server receives the video buffers via WebRTC data channels:

    


pc.ondatachannel = event => {
    const dataChannel = event.channel;
    pc.dc = event.channel;
    pc.dc.onmessage = event => {
        const data = event.data;

        // Route the chunk to the matching PassThrough stream
        if (dataChannel.label === 'webcam') {
            handleIncomingData(data, webcamStream);
        } else if (dataChannel.label === 'screen') {
            handleIncomingData(data, screenStream);
        }
    };
    pc.dc.onopen = e => {
        console.log("channel opened!");
    };
};


    


    I'm getting this error in FFmpeg:

    


    [in#0 @ 0000020e585a1b40] Error opening input: Bad file descriptor
Error opening input file pipe:3.
Error opening input files: Bad file descriptor
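
    A hedged diagnostic sketch, not a confirmed fix: the memory-address format in the log suggests FFmpeg is running on Windows, where extra stdio descriptors (pipe:3/pipe:4) passed from Node's spawn() are often reported as unreadable by the child process. Feeding a single MediaRecorder stream to FFmpeg over stdin (pipe:0) isolates that: if this works while pipe:3/pipe:4 fail, the extra file descriptors are the problem. The stream key is a placeholder, and the argument list and helper names (testArgs, writeTestChunk) are assumptions for this sketch only.

// Sketch: verify the pipe plumbing with a single input read from stdin (pipe:0)
const { spawn } = require('child_process');

const testArgs = [
  '-i', 'pipe:0',                                // single WebM input from stdin
  '-c:v', 'libx264', '-preset', 'ultrafast',
  '-c:a', 'aac', '-b:a', '128k',
  '-f', 'flv',
  'rtmp://a.rtmp.youtube.com/live2/STREAM_KEY'   // placeholder stream key
];

const testProcess = spawn('ffmpeg', testArgs, {
  stdio: ['pipe', 'inherit', 'inherit']          // only stdin is piped here
});

// Write incoming webcam chunks straight to stdin instead of a PassThrough on fd 3
const writeTestChunk = data => testProcess.stdin.write(Buffer.from(data));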


    


  • How do I stream audio from a mic on a Raspberry Pi with FFmpeg?

    23 March 2024, by Ignacio

    I'm trying to follow this to stream audio from a mic on my Raspberry Pi.

    


    ignacio@pi-satellite-bigbedroom:~ $ ffmpeg -re -f pulse -ac 1 -i plughw:CARD=seeed2micvoicec,DEV=0 -f rtsp -rtsp_transport tcp rtsp://192.168.86.151:8554/live.stream
ffmpeg version 4.3.6-0+deb11u1+rpt5 Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 10 (Debian 10.2.1-6)
  configuration: --prefix=/usr --extra-version=0+deb11u1+rpt5 --toolchain=hardened --incdir=/usr/include/aarch64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --disable-mmal --enable-neon --enable-v4l2-request --enable-libudev --enable-epoxy --enable-sand --libdir=/usr/lib/aarch64-linux-gnu --arch=arm64 --enable-pocketsphinx --enable-libdc1394 --enable-libdrm --enable-vout-drm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
plughw:CARD=seeed2micvoicec,DEV=0: No such process


    


    I believe this shows the cards I have:

    


    ignacio@pi-satellite-bigbedroom:~ $ pactl list sources
Source #0
    State: SUSPENDED
    Name: alsa_output.platform-bcm2835_audio.analog-stereo.monitor
    Description: Monitor of Built-in Audio Analog Stereo
    Driver: module-alsa-card.c
    Sample Specification: s16le 2ch 44100Hz
    Channel Map: front-left,front-right
    Owner Module: 4
    Mute: no
    Volume: front-left: 65536 / 100% / 0.00 dB,   front-right: 65536 / 100% / 0.00 dB
            balance 0.00
    Base Volume: 65536 / 100% / 0.00 dB
    Monitor of Sink: alsa_output.platform-bcm2835_audio.analog-stereo
    Latency: 0 usec, configured 0 usec
    Flags: DECIBEL_VOLUME LATENCY 
    Properties:
        device.description = "Monitor of Built-in Audio Analog Stereo"
        device.class = "monitor"
        alsa.card = "0"
        alsa.card_name = "bcm2835 Headphones"
        alsa.long_card_name = "bcm2835 Headphones"
        alsa.driver_name = "snd_bcm2835"
        device.bus_path = "platform-bcm2835_audio"
        sysfs.path = "/devices/platform/soc/3f00b840.mailbox/bcm2835_audio/sound/card0"
        device.form_factor = "internal"
        device.string = "0"
        module-udev-detect.discovered = "1"
        device.icon_name = "audio-card"
    Formats:
        pcm

Source #1
    State: IDLE
    Name: alsa_output.platform-soc_sound.stereo-fallback.monitor
    Description: Monitor of Built-in Audio Stereo
    Driver: module-alsa-card.c
    Sample Specification: s16le 2ch 44100Hz
    Channel Map: front-left,front-right
    Owner Module: 12
    Mute: no
    Volume: front-left: 65536 / 100% / 0.00 dB,   front-right: 65536 / 100% / 0.00 dB
            balance 0.00
    Base Volume: 65536 / 100% / 0.00 dB
    Monitor of Sink: alsa_output.platform-soc_sound.stereo-fallback
    Latency: 0 usec, configured 2000000 usec
    Flags: DECIBEL_VOLUME LATENCY 
    Properties:
        device.description = "Monitor of Built-in Audio Stereo"
        device.class = "monitor"
        alsa.card = "2"
        alsa.card_name = "seeed-2mic-voicecard"
        alsa.long_card_name = "seeed-2mic-voicecard"
        alsa.driver_name = "snd_soc_simple_card"
        device.bus_path = "platform-soc:sound"
        sysfs.path = "/devices/platform/soc/soc:sound/sound/card2"
        device.form_factor = "internal"
        device.string = "2"
        module-udev-detect.discovered = "1"
        device.icon_name = "audio-card"
    Formats:
        pcm

Source #2
    State: RUNNING
    Name: alsa_input.platform-soc_sound.stereo-fallback
    Description: Built-in Audio Stereo
    Driver: module-alsa-card.c
    Sample Specification: s16le 2ch 44100Hz
    Channel Map: front-left,front-right
    Owner Module: 12
    Mute: no
    Volume: front-left: 32845 /  50% / -18.00 dB,   front-right: 32845 /  50% / -18.00 dB
            balance 0.00
    Base Volume: 20724 /  32% / -30.00 dB
    Monitor of Sink: n/a
    Latency: 688 usec, configured 10000 usec
    Flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY 
    Properties:
        alsa.resolution_bits = "16"
        device.api = "alsa"
        device.class = "sound"
        alsa.class = "generic"
        alsa.subclass = "generic-mix"
        alsa.name = "bcm2835-i2s-wm8960-hifi wm8960-hifi-0"
        alsa.id = "bcm2835-i2s-wm8960-hifi wm8960-hifi-0"
        alsa.subdevice = "0"
        alsa.subdevice_name = "subdevice #0"
        alsa.device = "0"
        alsa.card = "2"
        alsa.card_name = "seeed-2mic-voicecard"
        alsa.long_card_name = "seeed-2mic-voicecard"
        alsa.driver_name = "snd_soc_simple_card"
        device.bus_path = "platform-soc:sound"
        sysfs.path = "/devices/platform/soc/soc:sound/sound/card2"
        device.form_factor = "internal"
        device.string = "hw:2"
        device.buffering.buffer_size = "352800"
        device.buffering.fragment_size = "176400"
        device.access_mode = "mmap+timer"
        device.profile.name = "stereo-fallback"
        device.profile.description = "Stereo"
        device.description = "Built-in Audio Stereo"
        module-udev-detect.discovered = "1"
        device.icon_name = "audio-card"
    Ports:
        analog-input: Analog Input (type: Analog, priority: 10000, availability unknown)
    Active Port: analog-input
    Formats:
        pcm


    


    I want to use the mic from the seeed-2mic-voicecard.

    


    Thanks for the help
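
    A hedged sketch based on the pactl output above, not a confirmed answer: with -f pulse, FFmpeg expects a PulseAudio source name (as listed by pactl list sources), whereas a plughw:... string is an ALSA device name that would only be meaningful with -f alsa. Two variants worth trying, reusing the RTSP URL from the question; the device names are read off the listing above and may need adjusting:

# Variant 1: keep -f pulse, but point it at the seeed-2mic-voicecard source (Source #2 above)
ffmpeg -re -f pulse -ac 1 -i alsa_input.platform-soc_sound.stereo-fallback \
    -f rtsp -rtsp_transport tcp rtsp://192.168.86.151:8554/live.stream

# Variant 2: bypass PulseAudio and read the ALSA device directly (the card reports device.string = "hw:2");
# this may fail with "device busy" if PulseAudio already holds the capture device
ffmpeg -re -f alsa -ac 1 -i plughw:2,0 \
    -f rtsp -rtsp_transport tcp rtsp://192.168.86.151:8554/live.stream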

    


  • FFmpeg: What re-encoding settings can be used to achieve results similar to Google Drive's video processing?

    4 August 2023, by Mycroft_47

    Context:

    


    I have a large collection of videos recorded by my phone's camera, which is taking up a significant amount of space. Recently, I noticed that when I uploaded a video to Google Drive and then downloaded it again with IDM (by clicking on the pop-up that IDM displays when it detects something downloadable; here's what I mean), the downloaded video retained the same visual quality but occupied much less space. Upon further research, I discovered that Google re-encodes uploaded videos using H.264, and I believe I can achieve similar compression using FFmpeg.

    


    Problem:

    


    Despite experimenting with various FFmpeg commands, I haven't been able to replicate Google Drive's compression. Every attempt using the -codec:v libx264 option alone resulted in videos larger than the original files.

    


    While adjusting the -crf parameter to a higher value and opting for a faster -preset option did yield smaller file sizes, it unfortunately came at the cost of a noticeable degradation in visual quality and the appearance of some visible artifacts in the video.

    


    Google Drive's processing, on the other hand, strikes a commendable balance, achieving a satisfactory file size without compromising visual clarity (I should note that upon zooming in on this video I observed some minor blurring, but it was acceptable to me).

    


    Note:

    


    I'm aware that using the H.265 video encoder instead of H.264 may give better results. However, to ensure fairness and avoid any potential bias, I think the optimal approach is first to find the best command using the H.264 encoder. Once identified, I can then replace -codec:v libx264 with -codec:v libx265. This ensures that the chosen command is really the best FFmpeg can achieve, and that it is not solely influenced by the superior performance of H.265 when used from the outset.

    


    Here's the FFmpeg command I am currently using:

    


    ffmpeg -hide_banner -loglevel verbose ^
    -i input.mp4 ^
    -codec:v libx264 ^
    -crf 36 -preset ultrafast ^
    -codec:a libopus -b:a 112k ^
    -movflags use_metadata_tags+faststart -map_metadata 0 ^
    output.mp4

    Video file                       Size (bytes)   Bit rate (bps)   Encoder        FFprobe (JSON)
    Original (named 'raw 1.mp4')     31,666,777     10,314,710       !!!            link
    Without crf                      36,251,852     11,805,216       Lavf60.3.100   link
    With crf                         10,179,113     3,314,772        Lavf60.3.100   link
    Gdrive                           6,726,189      2,190,342        Google         link

    


    


    Those files can be found here.

    


    Update:

    


    I continued my experiments with the video "raw_1.mp4" and found some interesting results that resemble those shown in this blog post (I recommend consulting this answer).

    


    In the following figure, I observed that using the -preset set to veryfast provided the most advantageous results, striking the optimal balance between compression ratio and compression time (note that a negative percentage in the compression variable indicates an increase in file size after processing):
    [figure: compression ratio vs. compression time for different -preset values]

    


    In this figure, I used the H.264 encoder and compared the compression ratios of the output files for seven different values of the -crf parameter (CRF values used: 25, 27, 29, 31, 33, 35, 37):
    [figure: compression ratio for H.264 at CRF 25-37]

    


    For this figure, I switched the encoder to H.265 while keeping the same CRF values as in the previous figure:
    [figure: compression ratio for H.265 at CRF 25-37]

    


    Based on these results, the -preset veryfast and a -crf value of 31 are my current preferred settings for FFmpeg, until they are proven to be suboptimal choices.
As a result, the FFmpeg command I'll use is as follows:

    


    ffmpeg -hide_banner -loglevel verbose ^
    -i input.mp4 ^
    -codec:v libx264 ^
    -crf 31 -preset veryfast ^
    -codec:a libopus -b:a 112k ^
    -movflags use_metadata_tags+faststart -map_metadata 0 ^
    output.mp4


    


    Note that these choices are based solely on the compression results obtained so far; they do not take into account the visual quality of the output files.
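
    Not part of the original experiments, but a hedged follow-up sketch: the table above puts the Google Drive re-encode at roughly 2.19 Mbps, so a two-pass libx264 run that targets that bitrate produces a file that is directly comparable in size by construction. The 2200k target is taken from that measurement; the rest mirrors the command above and is an assumption, not a reconstruction of Google's actual pipeline:

ffmpeg -hide_banner -loglevel verbose ^
    -i input.mp4 ^
    -codec:v libx264 -b:v 2200k -pass 1 -preset veryfast ^
    -an -f null NUL

ffmpeg -hide_banner -loglevel verbose ^
    -i input.mp4 ^
    -codec:v libx264 -b:v 2200k -pass 2 -preset veryfast ^
    -codec:a libopus -b:a 112k ^
    -movflags use_metadata_tags+faststart -map_metadata 0 ^
    output.mp4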