Other articles (15)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP:

        Distribution name   Version name           Version number
        Debian              Squeeze                6.x.x
        Debian              Wheezy                 7.x.x
        Debian              Jessie                 8.x.x
        Ubuntu              The Precise Pangolin   12.04 LTS
        Ubuntu              The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send us the necessary fixes to add (...)

  • Encoding and transformation into formats readable on the Internet

    10 April 2011

    MediaSPIP transforms and re-encodes uploaded documents in order to make them readable on the Internet and automatically usable without any intervention from the content creator.
    Videos are automatically encoded in the formats supported by HTML5: MP4, OGV and WebM. The "MP4" version is also used for the fallback Flash player that older browsers require.
    Audio documents are likewise re-encoded in the two formats usable by HTML5: MP3 and Ogg. The "MP3" version (...)
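
    As an illustration only (a rough sketch of the kind of re-encoding described above, not MediaSPIP's actual pipeline; source.mov is a placeholder input), producing these formats with ffmpeg could look like:

        # three HTML5 video formats
        ffmpeg -i source.mov -c:v libx264 -c:a aac video.mp4
        ffmpeg -i source.mov -c:v libtheora -c:a libvorbis video.ogv
        ffmpeg -i source.mov -c:v libvpx -c:a libvorbis video.webm
        # two HTML5 audio formats
        ffmpeg -i source.mov -vn -c:a libmp3lame audio.mp3
        ffmpeg -i source.mov -vn -c:a libvorbis audio.ogg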

  • Frequent problems

    10 March 2010

    PHP with safe_mode enabled
    One of the main sources of problems lies in the PHP configuration, in particular the activation of safe_mode.
    The solution would be either to disable safe_mode or to place the script in a directory that Apache can access for the site.
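
    For reference, a minimal sketch of the first option, assuming an older PHP where safe_mode still exists (it was deprecated in PHP 5.3 and removed in 5.4):

        ; php.ini
        safe_mode = Off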

On other sites (5985)

  • Convert sequence of h264 frames into video with ffmpeg

    12 August 2022, by utnd03

    I'm testing a hardware video encoder module, and for each input frame (raw YUV in NV12 format) the encoder generates an h264 output frame. How can I "concatenate" these individual h264 output frames into a video (e.g. mp4) with ffmpeg?

    $ ls
    dst0000.h264  dst0002.h264  dst0004.h264  dst0006.h264  dst0008.h264
    dst0001.h264  dst0003.h264  dst0005.h264  dst0007.h264

    $ file dst0000.h264
    dst0000.h264: JVT NAL sequence, H.264 video, main @ L 31

    I tried ffmpeg -i dst0001.h264 -c:v copy output.mp4, which allows me to view a single h264 frame (and they look normal, so the encoder is working properly).

    I also tried ffmpeg -f image2 -s 640x480 -r 30 -i dst%04d.h264 -c:v copy out.mp4, but this gave the following error, which I had no idea how to fix:

    [image2 @ 0x55bf32735780] Could not find codec parameters for stream 0 (Video: none, none, 640x480): unknown codec
    Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
    Input #0, image2, from 'dst%04d.h264':
      Duration: 00:00:00.30, start: 0.000000, bitrate: N/A
      Stream #0:0: Video: none, none, 640x480, 30 fps, 30 tbr, 30 tbn
    [mp4 @ 0x55bf3273fc80] Could not find tag for codec none in stream #0, codec not currently supported in container
    Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
    Error initializing output stream 0:0 --
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
        Last message repeated 1 times
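
    Since each dst*.h264 file is a raw Annex B NAL sequence (as the file output above shows), one workaround, offered as a sketch rather than a verified answer, is to concatenate the frames byte-wise into a single elementary stream and mux that without re-encoding:

        # Annex B H.264 frames can be joined at the byte level into one
        # elementary stream; the raw h264 demuxer then parses it directly.
        $ cat dst*.h264 > all.h264
        $ ffmpeg -framerate 30 -i all.h264 -c:v copy out.mp4

    The image2 demuxer appears to fail above because it cannot derive codec parameters from the individual files; feeding the raw h264 demuxer a single stream avoids that probing problem.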

  • FFMPEG concat leaves audio gaps between clips

    14 November 2022, by GotCubes

    I'm writing a Python script that uses subprocess to invoke FFMPEG rather than using pyffmpeg.

    My script generates a variable number of MP4 files using the AAC audio codec and concatenates them using FFMPEG. Here is how I'm constructing each clip:

    ffmpeg -loop 1 -i image.jpg -i recording.mp3 -tune stillimage -c:a aac -b:a 256k -shortest clip.mp4

    The command I'm using to concatenate them is:

    ffmpeg -f concat -i clip_names.txt -c copy video_raw.mp4
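
    For reference, the concat demuxer expects clip_names.txt to contain one file directive per clip, along these lines (the clip names here are placeholders):

        # clip_names.txt
        file 'clip_001.mp4'
        file 'clip_002.mp4'
        file 'clip_003.mp4'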

    I then take the resulting video, mix a looping audio track over it, and adjust the volume. (Sorry for the awful formatting.)

    ffmpeg -i video_raw.mp4 -filter_complex
        "amovie=Tracks/Breaktime.mp3:loop=0,
         volume=0.1,
         asetpts=N/SR/TB[aud];
         [0:a][aud]amix[a]"
        -map 0:v -map [a] -b:a 256k -shortest final_video.mp4

    These commands seem to work as I intend them to. When I play the resulting MP4 from my local machine, everything plays without issue.

    However, I uploaded the video to YouTube, and ran into issues. When the video is played from YouTube, there is about a second of silence at every timestamp where two clips were concatenated, before the next clip begins. I've tried this from Chrome, IE, and Firefox, all with the same issues.

    Based on what I've looked into so far, I think it could be an issue with how the priming samples of each individual clip are handled. I'm not obligated to keep using MP4 or AAC, so if a different audio/video codec would work better, feel free to suggest it!

    Is there some type of manipulation I can do in FFMPEG to get rid of the priming samples, or somehow process them differently? In the end, I'm looking for each clip to play back to back without the delay that the concat operation seems to insert. Thank you!
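
    One commonly suggested workaround, sketched here on the assumption that the silence really does come from AAC priming samples in each clip: re-encode the audio during the concat step instead of stream-copying it, so that ffmpeg decodes each clip (trimming its priming samples) and writes one continuous AAC stream:

        # Sketch: copy video, but decode and re-encode audio across the
        # splice points so each clip's priming samples are trimmed on decode.
        ffmpeg -f concat -i clip_names.txt -c:v copy -c:a aac -b:a 256k video_raw.mp4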

  • How to make WebM screen recording chunks independently processable for audio with FFmpeg?

    3 December 2024, by Dinesh Kumar

    I am streaming screen recordings from the browser into 5-second WebM chunks using the MediaRecorder API. The first chunk (root chunk) is independently processable because it contains the necessary EBML headers and metadata. However, subsequent chunks are not independently processable, as they lack the required metadata, which prevents me from extracting audio independently from them.

    I am unable to extract audio from the individual chunks using FFmpeg because of the missing headers, which leads to errors like "EBML header parsing failed". The first chunk works fine on its own, but the subsequent chunks also need to be processed independently for audio extraction.

    I am looking for a solution using FFmpeg to fix these chunks so that I can extract audio independently from each chunk.

    1. Is there a way to repair these chunks post-recording with FFmpeg to include the missing metadata and headers, making them independently processable for audio extraction?
    2. Can FFmpeg reinitialize the EBML headers in each chunk, or is there a command that can add the metadata from the first chunk to subsequent chunks to allow for independent audio extraction?
    
    Additionally, should I consider any changes in the MediaRecorder API to ensure that the chunks are properly formatted for independent processing? The goal is to make each WebM chunk fully independent, allowing me to extract audio from each one on its own.
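
    One approach worth trying, sketched under the assumption that the first chunk starts with the EBML header and initialization segment while later chunks are bare clusters (which is how MediaRecorder typically emits a timesliced stream): prepend the first chunk's bytes to a later chunk and let FFmpeg remux the result. File names below are placeholders, and the audio is assumed to be Opus.

        # Prepend the init segment from the first chunk so the later chunk
        # becomes parseable, then extract its audio without re-encoding.
        cat chunk_000.webm chunk_007.webm > joined.webm
        ffmpeg -i joined.webm -vn -c:a copy chunk_007_audio.opus

    Note that a chunk repaired this way keeps its original timestamps, so the extracted audio will not start at zero; if that matters, resetting them (for example with the asetpts=PTS-STARTPTS filter, which requires re-encoding) may be needed.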