
Other articles (101)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects/individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (16547)

  • FFMPEG Picture in picture with DASH

    24 May 2022, by Macster

    I'm using FFMPEG to transcode a video into different resolutions, and it's working fine. But now I want to merge two videos picture-in-picture into one video, which then has to be transcoded into the different resolutions.

    


    The command below is what I've got so far. Unfortunately, it only works for the 170p resolution; if I switch the player to 720p, the overlay video is gone.

    


    I guess I have to use some kind of naming scheme for the merged streams and the different resolutions, so FFMPEG can differentiate between them. But how am I going to do that?

    


    FFMPEG Command

    


    ffmpeg \
-re \
-i "input.webm" \
-i "overlay.webm" \
-filter_complex "[1]scale=iw/3:-1[pip];[0][pip]overlay=W-w-10:10:shortest=1[v];[0:a][1:a]amerge[a]" \
-r 30 \
-usage lowlatency \
-qp_b 1 \
-quality ultrafast \
-level 2.0 \
-map "[v]" \
-map "[a]" \
-map 0 \
-c:a aac \
-c:v h264_qsv \
-b:v:1 1800k \
-s:v:1 1280x720 \
-b:v:0 300k \
-s:v:0 320x170 \
-profile:v:0 main \
-profile:v:1 main \
-bf 1 \
-keyint_min 30 \
-g 30 \
-sc_threshold 1 \
-b_strategy 0 \
-ar:a:1 96000 \
-seg_duration 1 \
-remove_at_exit 0 \
-streaming 1 \
-window_size 10 \
-adaptation_sets "id=0,streams=v id=1,streams=a" \
-utc_timing_url https://time.akamai.com/?iso \
-live 1 \
-f dash "manifest.mpd" 


    


  • Converting AAC stream to DASH MP4 with high fragment length precision

    5 March 2017, by vdudouyt

    For my HTML5 project I need to create a fragmented MP4 file with a single audio stream (no video), where each fragment has a duration of exactly 0.1 seconds.

    According to the ffmpeg docs, you can accomplish that by passing a value in microseconds with '-frag_duration', which I found to work and to be playable with the HTML5 MediaSource API:

    $ ffmpeg -y -i input.aac -c:a libfdk_aac -b:a 64k -level:v 13 -r 25 -strict experimental -movflags empty_moov+default_base_moof -frag_duration 100000 output.mp4

    As we have 210 seconds of audio split into 0.1 s fragments, I expect output.mp4 to contain 2100 fragments, hence 2100 moof atoms. But upon inspecting it, I found that there are only 1811 moof atoms, which means that some (or maybe even all) fragments are bigger than expected:

    $ python ~/git/mp4viewer/src/showboxes.py output.mp4 |grep moof|wc -l
    1811

    Could anybody tell me what's wrong, and how I could accomplish what I want?

    Right now my assumption is that the AAC frame duration used during encoding is not a multiple of 0.1 s, so ffmpeg has no way to produce fragments that are exactly 0.1 s long, but I'm not sure. If somebody can confirm this and let me know a way to explicitly set the AAC frame_size in FFMPEG (I couldn't find anything like that in the docs), or completely disprove it, that would also be highly appreciated.
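    For reference, the observed count is consistent with that assumption if the input is sampled at 44.1 kHz (the question doesn't say): an AAC frame always carries 1024 samples, and the muxer can only cut fragments on whole-frame boundaries, so -frag_duration acts as a lower bound that gets rounded up to a whole number of frames:

    1024 samples / 44100 Hz ≈ 23.22 ms per AAC frame
    ceil(100 ms / 23.22 ms) = 5 frames ≈ 116.1 ms per fragment
    210 s / 0.1161 s ≈ 1809 fragments, close to the 1811 moof atoms observed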

  • Adding multiple audio tracks and subtitles to dash manifest (mpd) with ffmpeg

    21 November 2020, by knona

    I'm trying to create a website to stream some videos. For each video, I extract the video, audio and subtitles into 3 different folders. Some videos have multiple audio tracks and multiple subtitle tracks. I did a lot of research but I don't know how to add all of them to the manifest. Right now, I use this command:

    



    ffmpeg -f webm_dash_manifest \
-i video1.mp4 -f webm_dash_manifest \
-i video2.mp4 -f webm_dash_manifest \
-i audio1.webm -f webm_dash_manifest \
-i audio2.webm -f webm_dash_manifest \
-i subtitles.vtt \
-c copy -map 0 -map 1 -map 2 -map 3 \
-f webm_dash_manifest -adaptation_sets "id=0,streams=v id=1,streams=a" manifest.mpd


    



    My two videos have different resolutions and bitrates, and that part works perfectly. But I don't get any subtitles, and my two audio tracks are treated as a single audio track with two different bitrates (just like the videos). I think I need several adaptation_sets, but I don't know how to create them.

    



    How can I create that manifest the right way?
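    A minimal sketch of one possible direction, using ffmpeg's regular dash muxer rather than webm_dash_manifest, and listing output stream indices in -adaptation_sets so that each audio track gets its own adaptation set. The language tags are illustrative assumptions, and this presumes the copied codecs fit the chosen segment format; subtitle handling in the dash muxer is limited, so .vtt files are often served separately and referenced from the MPD by hand.

    ffmpeg \
-i video1.mp4 \
-i video2.mp4 \
-i audio1.webm \
-i audio2.webm \
-map 0:v -map 1:v -map 2:a -map 3:a \
-c copy \
-metadata:s:a:0 language=eng \
-metadata:s:a:1 language=fra \
-adaptation_sets "id=0,streams=0,1 id=1,streams=2 id=2,streams=3" \
-f dash manifest.mpd

    Here output streams 0 and 1 (the two video renditions) share one adaptation set, while streams 2 and 3 each get their own, so a player presents them as two selectable audio tracks rather than as two bitrates of the same track.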