Other articles (76)

  • (De)activating features (plugins)

    18 February 2011, by

    To manage adding and removing extra features (plugins), MediaSPIP relies, as of version 0.2, on SVP.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To get there, simply go to the configuration area and open the "Gestion des plugins" page.
    By default MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated to work seamlessly with each (...)

  • Contributing to its documentation

    10 April 2011

    Documentation is one of the most important and most demanding parts of building a technical tool.
    Any outside contribution on this front is essential: critiquing what already exists; helping write articles aimed at users (MediaSPIP administrators or simply content producers) or at developers; creating explanatory screencasts; translating the documentation into a new language;
    To do so, you can register on (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    • implementation costs to be shared between several different projects / individuals
    • rapid deployment of multiple unique sites
    • creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (11272)

  • HLS with FFmpeg (separated track sync)

    30 January 2019, by PhilG

    I’m looking for a way to transcode a video file to multi-bitrate HLS with the audio track kept separate.

    Basically, I have a video file that I transcode into 4 resolutions plus 1 audio track:

    • 180p
    • 360p
    • 720p
    • 1080p
    • 2160p (maybe)
    • Audio1
    • Audio2 (maybe)

    For example, here is my 180p command:

    ffmpeg -i ${source} \
       -pix_fmt yuv420p \
       -c:v libx264 \
       -b:v 230k -minrate:v 230k -maxrate:v 230k -bufsize:v 200k \
       -profile:v baseline -level 3.0 \
       -x264opts scenecut=0:keyint=75:min-keyint=75 \
       -hls_time 3 \
       -hls_playlist_type vod \
       -r 25 \
       -vf scale=w=320:h=180:force_original_aspect_ratio=decrease \
       -an \
       -f hls \
       -hls_segment_filename ../OUT/${base_name}/180p/180p_%06d.ts ../OUT/${base_name}/180p/180p.m3u8

    and the audio track:

    ffmpeg -i ${source} \
       -vn \
       -c:a aac \
       -b:a 128k \
       -ar:a 48000 \
       -ac:a 2 \
       -hls_time 3 \
       -hls_playlist_type vod \
       -hls_segment_filename ../OUT/${base_name}/audio1/audio1_%06d.ts ../OUT/${base_name}/audio1/audio1.m3u8

    For convenience, I launch a separate ffmpeg command for each resolution, depending on the source video quality.

    Then I create a standard master playlist:

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-STREAM-INF:BANDWIDTH=230000,RESOLUTION=320x180,CODECS="avc1.42001e"
    180p/180p.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=600000,RESOLUTION=640x360,CODECS="avc1.42e00a"
    360p/360p.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=3150000,RESOLUTION=1280x720,CODECS="avc1.4d0028"
    720p/720p.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080,CODECS="avc1.4d0029"
    1080p/1080p.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=128000,CODECS="mp4a.40.2"
    audio1/audio1.m3u8

    When I try to play the master playlist, I don’t get any sound.
    In VLC, the audio track is played before the video tracks.

    So, how can I sync the audio track with the video tracks?

    Thanks
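    For reference (a sketch, not part of the original question): in HLS, a separate audio rendition is normally declared in the master playlist with an #EXT-X-MEDIA tag and attached to each variant through an AUDIO group attribute, rather than listed as its own #EXT-X-STREAM-INF entry. The group name "aud" and the combined CODECS/BANDWIDTH values below are illustrative:

        #EXTM3U
        #EXT-X-VERSION:4
        #EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",NAME="audio1",DEFAULT=YES,AUTOSELECT=YES,URI="audio1/audio1.m3u8"
        #EXT-X-STREAM-INF:BANDWIDTH=358000,RESOLUTION=320x180,CODECS="avc1.42001e,mp4a.40.2",AUDIO="aud"
        180p/180p.m3u8
        #EXT-X-STREAM-INF:BANDWIDTH=5128000,RESOLUTION=1920x1080,CODECS="avc1.4d0029,mp4a.40.2",AUDIO="aud"
        1080p/1080p.m3u8

    With this form the player fetches the audio playlist alongside whichever video variant it picks, which is what keeps the two rendition timelines together.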

  • Use ffmpeg to sequentially add multiple audio tracks and pin a specific track to the end

    3 January 2019, by kraftydevil

    I have a single video with no audio tracks and want to add several audio tracks sequentially (each track starts immediately after the other).

    The basic case might look something like this:

    |-----------VIDEO-----------VIDEO-------------VIDEO-----------VIDEO-----------|  
    |---FULL AUDIO TRACK 1---|---FULL AUDIO TRACK 2---|---PARTIAL AUDIO TRACK 3---|

    Here is my attempt to achieve this:

    ffmpeg -i video.mov -i audio1.mp3 -i audio2.mp3 -i audio3.mp3 -map 0:0 -map 1:0 -map 2:0 -map 3:0 out.mp4

    Of course it doesn’t produce the desired result. It only uses the first music clip in out.mp4, and no other audio track starts when it ends.

    Question 1
    What am I missing in order to add multiple audio tracks sequentially? I assume it’s specifying the start and end points of the audio clips, but I’m coming up short on locating the syntax.
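    One way to make the tracks sequential rather than parallel is ffmpeg's concat filter; the following is a sketch (not from the original post), with file names mirroring the command above:

```shell
ffmpeg -i video.mov -i audio1.mp3 -i audio2.mp3 -i audio3.mp3 \
   -filter_complex "[1:a][2:a][3:a]concat=n=3:v=0:a=1[aout]" \
   -map 0:v -map "[aout]" \
   -c:v copy -c:a aac \
   out.mp4
```

    Here n=3 is the number of inputs to join, v=0:a=1 says the result is audio-only, and [aout] is just a label for the concatenated stream. By contrast, -map 1:0 -map 2:0 -map 3:0 in the original command creates three parallel audio tracks, of which players typically play only the first.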

    ...

    In addition, I’m looking for a way to ensure that the video ends with the full duration of AUDIO TRACK 3, as seen below :

    |-----------VIDEO-----------VIDEO-------------VIDEO-----------VIDEO-----------|  
    |---FULL AUDIO TRACK 1---|---PARTIAL AUDIO TRACK 2---|---FULL AUDIO TRACK 3---|

    In this case, AUDIO TRACK 2 gets trimmed so that the full AUDIO TRACK 3 is pinned to the end.

    Question 2
    Can this type of audio pinning be done in FFmpeg, or would I have to trim AUDIO TRACK 2 with another program first?
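    The pinning can be expressed in the same concat pipeline by trimming AUDIO TRACK 2 to exactly the gap left once tracks 1 and 3 are accounted for. A sketch with illustrative durations (none of these numbers come from the question):

```shell
# Illustrative durations in seconds; in practice they could be read with e.g.
#   ffprobe -v error -show_entries format=duration -of csv=p=0 video.mov
V=60; A1=20; A3=25
GAP=$((V - A1 - A3))   # time AUDIO TRACK 2 is allowed to occupy (here 15 s)
ffmpeg -i video.mov -i audio1.mp3 -i audio2.mp3 -i audio3.mp3 \
   -filter_complex "[2:a]atrim=duration=${GAP}[a2];[1:a][a2][3:a]concat=n=3:v=0:a=1[aout]" \
   -map 0:v -map "[aout]" -c:v copy -c:a aac out.mp4
```

    atrim cuts the second audio input down to GAP seconds, so track 3 starts exactly A3 seconds before the end of the video.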

  • Does a track run in a fragmented MP4 have to start with a key frame ?

    18 January 2021, by stevendesu

    I'm ingesting an RTMP stream and converting it to a fragmented MP4 file in JavaScript. It took a week of work, but I'm almost finished with this task. I'm generating a valid ftyp atom, moov atom, and moof atom, and the first frame of the video actually plays (with audio) before the player goes into infinite buffering, with no errors listed in chrome://media-internals.

    Plugging the video into ffprobe, I get an error similar to:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x558559198080] Failed to add index entry
        Last message repeated 368 times
    [h264 @ 0x55855919b300] Invalid NAL unit size (-619501801 > 966).
    [h264 @ 0x55855919b300] Error splitting the input into NAL units.

    This led me on a massive hunt for data alignment issues or invalid byte offsets in my tfhd and trun atoms; however, no matter where I looked or how I sliced the data, I couldn't find any problems in the moof atom.

    I then took the original FLV file and converted it to an MP4 with ffmpeg, using the following command:

    ffmpeg -i ~/Videos/rtmp/big_buck_bunny.flv -c copy -ss 5 -t 10 -movflags frag_keyframe+empty_moov+faststart test.mp4

    I opened both the MP4 I was creating and the MP4 output by ffmpeg in an atom-parsing tool and compared the two:

    [Image: Comparing MP4 files with MP4A]

    The first thing that jumped out at me was the ffmpeg-generated file has multiple video samples per moof. Specifically, every moof started with 1 key frame, then contained all difference frames until the next key frame (which was used as the start of the following moof atom)

    Contrast this with how I'm generating my MP4: I create a moof atom every time an FLV VIDEODATA packet arrives. This means my moof may not contain a key frame (and usually doesn't).

    Could this be why I'm having trouble? Or is there something else I'm missing?

    The video files in question can be downloaded here:

    Another issue I noticed was ffmpeg's prolific use of base_data_offset in the tfhd atom. However, when I tried tracking the total number of bytes appended and setting the base_data_offset myself, I got an error in Chrome along the lines of "MSE doesn't support base_data_offset". Per the ISO/IEC 14496-12 spec:

    If not provided, the base-data-offset for the first track in the movie fragment is the position of the first byte of the enclosing Movie Fragment Box, and for second and subsequent track fragments, the default is the end of the data defined by the preceding fragment.

    This wording leads me to believe that the data_offset in the first trun atom should be equal to the size of the moof atom, and the data_offset in the second trun atom should be 0 (0 bytes from the end of the data defined by the preceding fragment). However, when I tried this I got an error that the video data couldn't be parsed. What did lead to parseable data was the length of the moof atom plus the total length of the first track (as if the base offset for the second track were also the first byte of the enclosing moof box, the same as for the first track).
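    The offset scheme that finally parsed can be sketched as follows (illustrative sizes only, not from the post): both tracks' trun data_offset values end up being resolved relative to the first byte of the enclosing moof box, so the second track's offset is the moof size plus the byte length of the first track's run. (If the media sits inside an mdat box, its 8-byte header would shift both values; the sketch ignores that, as the question does.)

```shell
# Hypothetical sizes: a 300-byte moof whose first track run carries
# 4096 bytes of sample data.
MOOF_SIZE=300
TRACK1_DATA_SIZE=4096

# Track 1's data begins right after the moof...
TRACK1_OFFSET=$MOOF_SIZE
# ...and track 2's data begins right after track 1's bytes,
# still measured from the start of the moof.
TRACK2_OFFSET=$((MOOF_SIZE + TRACK1_DATA_SIZE))

echo "$TRACK1_OFFSET $TRACK2_OFFSET"   # 300 4396
```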