
Other articles (63)

  • The farm's regular Cron tasks

    1 December 2010

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the farm's instances on a regular basis. Combined with a system Cron on the central site of the farm, this makes it possible to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • Adding notes and captions to images

    7 February 2011

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to adjust the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (7505)

  • FFMPEG OpenTok archive audio drift

    25 August 2020, by Chris Kooken

    I am using OpenTok to build a live video platform. It generates WebM files from each user's stream.

    


    I am using FFmpeg to convert WebM (WebRTC) videos to MP4s to edit in my NLE. The problem I am having is that my audio is drifting. I think it is because the user pauses the audio during the stream. This is the command I'm running:

    


    ffmpeg -acodec libopus -i 65520df3-1033-480e-adde-1856d18e2352.webm  -max_muxing_queue_size 99999999 65520df3-1033-480e-adde-1856d18e2352.webm.new.mp4


    


    The problem, I think, is that whenever the user muted themselves, no frames were written, but the PTS timeline is intact.
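
    For reference, one way to confirm such gaps (a minimal sketch; the filename is the one from the question) is to dump the audio packet timestamps with ffprobe and look for large jumps between consecutive packets:

    # List the PTS of every audio packet; a mute shows up as a jump between rows
    ffprobe -v error -select_streams a:0 -show_entries packet=pts_time -of csv=p=0 65520df3-1033-480e-adde-1856d18e2352.webm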

    


    This is from the OpenTok documentation (my WebRTC platform):

    


    


    Audio and video frames may not arrive with monotonic timestamps; frame rates are not always consistent. This is especially relevant if either the video or audio track is disabled for a time, using one of the publishVideo or publishAudio publisher properties.

    


    


    


    Frame presentation timestamps (PTS) are written based on NTP timestamps taken at the time of capture, offset by the timestamp of the first received frame. Even if a track is muted and later unmuted, the timestamp offset should remain consistent throughout the duration of the entire stream. When decoding in post-processing, a gap in PTS between consecutive frames will exist for the duration of the track mute: there are no "silent" frames in the container.

    


    


    How can I convert these files and have them play in sync? Note that when I play them in QuickTime or VLC, the files are synced correctly.

    


    EDIT: I've gotten pretty close with this command:

    


     ffmpeg -acodec libopus -i $f -max_muxing_queue_size 99999999 -vsync 1 -af aresample=async=1 -r 30 $f.mp4


    


    But every once in a while, I get a video where the audio starts right away, and they won't actually be talking until half-way through the video. My guess is they muted themselves during the video conference... so in some cases the audio is 5-10 minutes ahead. Again, it plays fine in QuickTime, but pulled into my NLE, it's way off.
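
    If the remaining offset comes from a mute at the very start of the recording, the aresample filter's first_pts option may help: setting first_pts=0 tells the resampler the stream should start at time zero, so the async padding fills the leading gap with silence as well. A hedged variant of the command above ($f is the input file as before):

    ffmpeg -acodec libopus -i $f -max_muxing_queue_size 99999999 -vsync 1 -af aresample=async=1:first_pts=0 -r 30 $f.mp4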

    


  • How to append dummy frames to FFmpeg pipe after EOF to prevent Shaka Packager from stopping (LL-HLS/LL-DASH)? [closed]

    6 August, by Arjit

    I am working on a live streaming pipeline where I pipe FFmpeg into Shaka Packager to generate LL-HLS and LL-DASH output from an RTMP stream.

    


    Scenario:

    


      

    • FFmpeg receives the RTMP stream and pipes the output to Shaka Packager.
    • When the RTMP publisher stops streaming, FFmpeg naturally ends, sending an EOF (end-of-file) to the pipe.
    • This causes Shaka Packager to stop processing, as it closes its read stream on EOF.


    


    What I want:

    


    Before FFmpeg terminates, or even after, I want to append 5 seconds of black dummy frames (video + silent audio) to the pipe (a sketch for generating such padding follows the list below), so that:

    


      

    1. Shaka Packager can finalize the last segment properly.
    2. Shaka Packager doesn't terminate prematurely on EOF but processes these dummy frames.
    3. This is needed for clean stream finalization for LL-HLS/LL-DASH workflows.
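
    For the padding itself, 5 seconds of black video plus silence can be generated entirely from FFmpeg's lavfi sources. A minimal sketch, assuming 1280x720 at 30 fps, 48 kHz stereo audio, and a hypothetical output name pad.ts; the codecs and parameters must match what the main stream feeds Shaka Packager:

    ffmpeg -f lavfi -i color=black:s=1280x720:r=30 -f lavfi -i anullsrc=r=48000:cl=stereo -t 5 -c:v libx264 -c:a aac -f mpegts pad.ts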


    


    The Problem:

    


      

    • When FFmpeg exits, the pipe reaches EOF.
    • Any attempt to write additional data (like dummy frames) into the same pipe after EOF results in a "broken pipe" error because Shaka Packager has already closed its read end.
    • I can't find a way to inject those black frames into the stream after the original FFmpeg exits, without Shaka shutting down.


    


    What I've Tried:

    


      

    • Tried spawning another FFmpeg process to write black frames to the same pipe after the main FFmpeg process exits. But by then, the pipe is already closed by Shaka Packager.
    • Attempted using mkfifo with multiple writers, but the reader still hit EOF once the first writer closed (see the keep-alive sketch after this list).
    • Can't just "delay" killing FFmpeg as the input stream is dynamic, and I need to programmatically pad with dummy frames at the end.
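
    One detail worth noting here: a FIFO delivers EOF to its reader only after every writer has closed it, so holding a spare write descriptor open keeps Shaka Packager's read end alive between the main FFmpeg and the padding writer. A minimal shell sketch (stream.fifo and pad.ts are hypothetical names; pad.ts is the pre-generated padding from the earlier sketch, and Shaka Packager is assumed to read stream.fifo as its input):

    mkfifo stream.fifo
    # Start Shaka Packager reading stream.fifo in another process, then:
    exec 3> stream.fifo        # spare writer: the reader never sees EOF early
    ffmpeg -i rtmp://... -c:v libx264 -c:a aac -f mpegts - > stream.fifo
    cat pad.ts > stream.fifo   # append the padding after the main encode ends
    exec 3>&-                  # close the spare writer; Shaka now sees EOF and finalizes

    Equivalently, running both writers inside a single command group redirected to the FIFO keeps one write descriptor open across them.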


    


    My Question:

    


    How can I keep the pipe "open" to allow appending dummy black frames after the main FFmpeg process ends, so that Shaka Packager continues processing and properly finalizes the segments instead of exiting on EOF?

    


    Is there a way to chain multiple FFmpeg processes or a muxer that can act as a "keep-alive" buffer for Shaka Packager until I explicitly tell it to end?

    Or is there a recommended way to handle such "end of live stream padding" when using FFmpeg → Shaka Packager pipelines?

    


  • How to Transcode ALL Audio streams from input to output using ffmpeg?

    24 November 2022, by user1940163

    I have an input MPEG TS file 'unit_test.ts'. This file has the following content (shown by ffprobe):

    


    Input #0, mpegts, from 'unit_test.ts':
  Duration: 00:00:57.23, start: 73674.049844, bitrate: 2401 kb/s
  Program 1
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0[0x31]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(progressive), 852x480 [SAR 640:639 DAR 16:9], Closed Captions, 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
    Stream #0:1[0x34](eng): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, 5.1(side), fltp, 448 kb/s
    Stream #0:2[0x35](spa): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, stereo, fltp, 192 kb/s


    


    I want to convert it into another MPEG TS file. The requirement is that the input's video stream should be copied directly to the output, whereas ALL the audio streams should be transcoded to "aac" format.

    


    I tried this command:

    


    ffmpeg -i unit_test.ts  -map 0 -c copy -c:a aac maud_test.ts

    


    It converted it into 'maud_test.ts' with the following contents (shown by ffprobe):

    


    Input #0, mpegts, from 'maud_test.ts':
  Duration: 00:00:57.25, start: 1.400000, bitrate: 2211 kb/s
  Program 1
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(progressive), 852x480 [SAR 640:639 DAR 16:9], Closed Captions, 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
    Stream #0:1[0x101](eng): Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, 6 channels, fltp, 391 kb/s
    Stream #0:2[0x102](spa): Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 133 kb/s


    


    So it appeared as if the command worked. However, when I play the maud_test.ts file in VLC, I can see both audio streams listed in the menu, but Stream 1 (eng) remains silent, whereas Stream 2 (spa) has proper audio. (The original TS file has both audio streams properly audible.)

    


    I have tried this with different input files and have seen that the same problem occurs in each case.

    


    What am I doing wrong?

    


    How should I get this done? (I could write explicit stream-by-stream map and channel arguments to get it done; however, I want the command line to be generic, in that the input file could have any configuration with one video stream and several audio streams in different formats. The configuration will not be known beforehand.)
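
    For what it's worth, the track that went silent here is the 5.1(side) one, so the problem may lie in the channel layout rather than the stream mapping: some players handle the resulting 6-channel AAC poorly. If a stereo downmix is acceptable, one generic workaround is to force two channels on every audio output stream while still copying the video (a hedged sketch; -ac 2 without a stream specifier applies to all audio output streams):

    ffmpeg -i unit_test.ts -map 0 -c copy -c:a aac -ac 2 maud_test.ts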