Advanced search

Media (1)

Keyword: - Tags -/biographie

Other articles (111)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administrer" section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" section where support for new languages can be enabled.
    Each newly added language can still be deactivated as long as no object has been created in that language. Once one has, the language becomes greyed out in the configuration and (...)

On other sites (10230)

  • Convert sequence of h264 frames into video with ffmpeg

    12 August 2022, by utnd03

    I'm testing a hardware video encoder module, and for each input frame (raw YUV in NV12 format) the encoder generates an H.264 output frame. How can I "concatenate" these individual H.264 output frames into a video (e.g. MP4) with ffmpeg?

    $ ls
    dst0000.h264  dst0002.h264  dst0004.h264  dst0006.h264  dst0008.h264
    dst0001.h264  dst0003.h264  dst0005.h264  dst0007.h264

    $ file dst0000.h264
    dst0000.h264: JVT NAL sequence, H.264 video, main @ L 31

    I tried ffmpeg -i dst0001.h264 -c:v copy output.mp4, which allows me to view a single H.264 frame (and the frames look normal, so the encoder is working properly).

    I also tried ffmpeg -f image2 -s 640x480 -r 30 -i dst%04d.h264 -c:v copy out.mp4, but this gave the following error, which I had no idea how to fix:

    [image2 @ 0x55bf32735780] Could not find codec parameters for stream 0 (Video: none, none, 640x480): unknown codec
    Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
    Input #0, image2, from 'dst%04d.h264':
      Duration: 00:00:00.30, start: 0.000000, bitrate: N/A
      Stream #0:0: Video: none, none, 640x480, 30 fps, 30 tbr, 30 tbn
    [mp4 @ 0x55bf3273fc80] Could not find tag for codec none in stream #0, codec not currently supported in container
    Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
    Error initializing output stream 0:0 --
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
        Last message repeated 1 times

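    A plausible fix (a hedged sketch, not verified against this encoder's output): the image2 demuxer does not recognize H.264 payloads, which is why the codec shows up as "none". Raw Annex B H.264 streams, however, can simply be concatenated at the byte level, and ffmpeg can then read the joined stream and mux it into MP4 without re-encoding. The 30 fps rate below is an assumption carried over from the attempt above; the shell glob expands dst*.h264 in frame order.

    $ cat dst*.h264 > all.h264                            # byte-level concat; zero-padded names sort in frame order
    $ ffmpeg -framerate 30 -i all.h264 -c:v copy out.mp4  # wrap in MP4 without re-encoding
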
  • FFmpeg concat leaves audio gaps between clips

    14 November 2022, by GotCubes

    I'm writing a Python script that uses subprocess to invoke FFmpeg, rather than using pyffmpeg.

    My script generates a variable number of MP4 files using the AAC audio codec, and concatenates them together using FFmpeg. Here is how I'm constructing each clip:

    ffmpeg -loop 1 -i image.jpg -i recording.mp3 -tune stillimage -c:a aac -b:a 256k -shortest clip.mp4

    The command I'm using to concatenate them is:

    ffmpeg -f concat -i clip_names.txt -c copy video_raw.mp4

    I then take the resulting video, mix a looping audio track over it, and adjust the volume. (Sorry for the awful formatting)

    ffmpeg -i video_raw.mp4 -filter_complex
        "amovie=Tracks/Breaktime.mp3:loop=0,
         volume=0.1,
         asetpts=N/SR/TB[aud];
         [0:a][aud]amix[a]"
        -map 0:v -map "[a]" -b:a 256k -shortest final_video.mp4

    These commands seem to work as I intend them to. When I play the resulting MP4 from my local machine, everything plays without issue.

    However, I uploaded the video to YouTube, and ran into issues. When the video is played from YouTube, there is about a second of silence at every timestamp where two clips were concatenated, before the next clip begins. I've tried this from Chrome, IE, and Firefox, all with the same issues.

    Based on what I've looked into so far, I think it could be an issue with how the priming samples of each individual clip are handled. I'm not obligated to keep using MP4 or AAC, so if a different audio/video codec would work better, feel free to suggest!

    Is there some type of manipulation I can do in FFmpeg to get rid of the priming samples, or somehow process them differently? In the end, I'm looking for each clip to play back to back without the delay that the concat operation seems to insert. Thank you!

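    A workaround that is often suggested for AAC priming gaps (a hedged sketch, not a guaranteed fix): every AAC encode prepends priming samples to its stream, and stream-copying through the concat demuxer preserves each clip's padding, which some players (apparently including YouTube's) render as silence. Re-encoding the audio while concatenating produces one continuous AAC stream instead, and aresample=async=1 additionally pads or trims audio to close small timestamp gaps; the video is still stream-copied.

    ffmpeg -f concat -i clip_names.txt \
        -c:v copy \
        -af aresample=async=1 -c:a aac -b:a 256k \
        video_raw.mp4
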
  • How to make WebM screen recording chunks independently processable for audio with FFmpeg?

    3 December 2024, by Dinesh Kumar

    I am streaming screen recordings from the browser into 5-second WebM chunks using the MediaRecorder API. The first chunk (root chunk) is independently processable because it contains the necessary EBML headers and metadata. However, subsequent chunks are not independently processable, as they lack the required metadata, which prevents me from extracting audio independently from them.

    I am unable to extract audio independently from the individual chunks using FFmpeg due to the missing headers, resulting in errors like "EBML header parsing failed". The first chunk works fine on its own, but the subsequent chunks also need to be processed independently for audio extraction.

    I am looking for a solution using FFmpeg to fix these chunks so that I can extract audio independently from each chunk.

    1. Is there a way to repair these chunks post-recording with FFmpeg to include the missing metadata and headers, making them independently processable for audio extraction?
    2. Can FFmpeg reinitialize the EBML headers in each chunk, or is there a command that can add the metadata from the first chunk to subsequent chunks to allow for independent audio extraction?

    Additionally, should I consider any changes in the MediaRecorder API to ensure that the chunks are properly formatted for independent processing? The goal is to make each WebM chunk self-contained, so that audio can be extracted from each one on its own.
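
    For what it's worth, a hedged sketch of the usual workaround: MediaRecorder's timeslice chunks are byte-level slices of one continuous WebM stream, not standalone files, so rather than repairing each chunk's headers it is simpler to restore the stream and slice the audio by time range. The file names, the 5-second spacing, and the Opus audio codec below are all assumptions.

    $ cat chunk_*.webm > full.webm                                  # rejoin the slices into one parseable WebM
    $ ffmpeg -i full.webm -vn -c:a copy full_audio.oga              # all the audio at once, no re-encode
    $ ffmpeg -i full.webm -ss 35 -t 5 -vn -c:a libopus seg07.oga    # or one 5 s slice per chunk (8th chunk shown)

    If each chunk truly has to stand alone, a MediaRecorder-side change may be more reliable than post-hoc repair: stop and restart the recorder every 5 seconds so that every file is written with its own EBML header, instead of using a single long recording with a timeslice.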