
Other articles (58)

  • Encoding and conversion to web-ready formats

    10 April 2011

    MediaSPIP transforms and re-encodes uploaded documents in order to make them readable on the Internet and automatically usable without any intervention from the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used by the Flash fallback player required for older browsers.
    Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)
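    As a rough illustration of the pipeline described above, the per-format transcodes could be driven by ffmpeg commands along these lines (a sketch only; the input file names and quality settings are placeholder assumptions, not MediaSPIP's actual presets):

```shell
# Hypothetical inputs; MediaSPIP drives similar commands automatically.
# One H.264/MP4 output (also reused by the Flash fallback player) ...
ffmpeg -i input.mov -c:v libx264 -c:a aac -movflags +faststart video.mp4
# ... plus the two free formats for the HTML5 <video> element:
ffmpeg -i input.mov -c:v libtheora -c:a libvorbis video.ogv
ffmpeg -i input.mov -c:v libvpx -c:a libvorbis video.webm
# Audio gets the same dual treatment for HTML5 <audio>:
ffmpeg -i input.wav -c:a libmp3lame -q:a 2 audio.mp3
ffmpeg -i input.wav -c:a libvorbis audio.ogg
```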

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several additional plugins, beyond those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooling instance when users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (8812)

  • How To Implement FFMPEG LL-HLS

8 September 2022, by Devin Dixon

    How is low-latency HLS achieved with FFmpeg? From my understanding so far, the changes are around the -f option. For example:

    -f dash -method PUT http://example.com/live/manifest.mpd

    But there isn't much information out there on LL-HLS with ffmpeg. I am finding that making the segments smaller comes at the cost of choppiness in the stream. Has anyone done this? And is the protocol actually adopted, or just "in theory"?
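    For reference, ffmpeg's dash muxer has a low-latency streaming mode that can also emit HLS playlists. A minimal sketch, assuming a recent ffmpeg build (note that `-lhls` implements the community LHLS prefetch-hint draft, not Apple's LL-HLS spec, which partly answers the adoption question; the URL is the placeholder from above):

```shell
ffmpeg -re -i input.mp4 \
  -c:v libx264 -preset veryfast -tune zerolatency -g 48 -c:a aac \
  -f dash -streaming 1 -ldash 1 \
  -hls_playlist 1 -lhls 1 \
  -seg_duration 2 -frag_duration 0.2 \
  -use_template 1 -use_timeline 0 \
  -method PUT http://example.com/live/manifest.mpd
```

    The latency reduction comes from the short fragments (`-frag_duration`) inside normal-length segments, which avoids the choppiness that very small segments cause.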

  • FFMPEG Encoding in Multiple resolutions for adaptive streaming

9 January 2020, by thatman

    I am using the following ffmpeg script to encode an mp4 video into different resolutions for adaptive HLS/DASH streaming:

    ffmpeg -y -nostdin -loglevel error -i INPUT.mp4 \
       -map 0:v:0  -map 0:v:0 -map 0:v:0  -map 0:v:0  -map 0:v:0  -map 0:v:0 -map 0:a\?:0  \
       -maxrate:v:0 350k -bufsize:v:0 700k -c:v:0 libx264 -filter:v:0 "scale=320:-2"  \
       -maxrate:v:1 1000k -bufsize:v:1 2000k -c:v:1 libx264 -filter:v:1 "scale=640:-2"  \
       -maxrate:v:2 3000k -bufsize:v:2 6000k -c:v:2 libx264 -filter:v:2 "scale=1280:-2" \
       -maxrate:v:3 300k -bufsize:v:3 600k -c:v:3 libvpx-vp9 -filter:v:3 "scale=320:-2"  \
       -maxrate:v:4 1088k -bufsize:v:4 2176k -c:v:4 libvpx-vp9 -filter:v:4 "scale=640:-2"  \
       -maxrate:v:5 1500k -bufsize:v:5 3000k -c:v:5 libvpx-vp9 -filter:v:5 "scale=1280:-2"  \
       -use_timeline 1  -use_template 1 -adaptation_sets "id=0,streams=v  id=1,streams=a" \
       -threads 8 -seg_duration 5 -hls_init_time 1 -hls_time 5 -hls_playlist true -f dash OUTPUT.mpd

    But the script gives these errors:

    Only '-vf scale=320:640' read, ignoring remaining -vf options: Use ',' to separate filters
    Only '-vf scale=640:1280' read, ignoring remaining -vf options: Use ',' to separate filters
    Only '-af (null)' read, ignoring remaining -af options: Use ',' to separate filters

    Please help in resolving the issue. Thanks in advance!
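    One common way around this class of error is to do all of the scaling in a single -filter_complex graph and map its labeled outputs, instead of attaching a -filter:v:N to each mapped copy of the input. A sketch for the three x264 renditions, with bitrates copied from the question (the VP9 legs would be added the same way with three more split outputs):

```shell
ffmpeg -y -nostdin -i INPUT.mp4 \
  -filter_complex "[0:v]split=3[a][b][c];[a]scale=320:-2[v0];[b]scale=640:-2[v1];[c]scale=1280:-2[v2]" \
  -map "[v0]" -c:v:0 libx264 -maxrate:v:0 350k  -bufsize:v:0 700k \
  -map "[v1]" -c:v:1 libx264 -maxrate:v:1 1000k -bufsize:v:1 2000k \
  -map "[v2]" -c:v:2 libx264 -maxrate:v:2 3000k -bufsize:v:2 6000k \
  -map 0:a:0? -c:a aac \
  -use_timeline 1 -use_template 1 \
  -adaptation_sets "id=0,streams=v id=1,streams=a" \
  -seg_duration 5 -hls_playlist true -f dash OUTPUT.mpd
```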

  • How to append fMP4 chunks to SourceBuffer ?

24 October 2020, by Stefan Falk

    I have finally managed to create an fMP4, but now I am not able to seek in or play the file, depending on what I do with it.

    On my backend I take the file and convert it to MP4 or fragmented MP4.

    The file gets sent to the clients chunk-wise, but this approach does not seem to work the way it used to on Chrome (but not on Firefox) when using MP3.

    How I played MP3

    Say we have a 10-second track, 1 MB in size, which I want to start playing from second five, loading chunks of 1 second each.

    Thus, I have offset = 5 / 10 * file_size and chunkSize = 1 / 10 * file_size.

    With this I just started loading the MP3 file at an offset of 0.5 MB and loaded chunks as needed, each chunk being 0.1 MB in size.

    This worked because, before actually playing the file, I loaded the first bytes of the file and appended them to the SourceBuffer as well, so that it could load the file's meta-information. However, this approach is just not working for fMP4.

    What I tried with fMP4

    So I have been converting MP3 to fMP4 with the MP3 approach ...

    ... using +dash (can play but not seek):

    ffmpeg -i input.mp3 -acodec aac -b:a 256k -f mp4 -movflags +dash output.mp4

    ... using frag_keyframe+empty_moov (cannot play on Chrome):

    ffmpeg -i input.mp3 -acodec aac -b:a 256k -f mp4 -movflags frag_keyframe+empty_moov output.mp4
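    A note on the Chrome failure: Chrome's MSE implementation is strict about how fragments address their data, and frag_keyframe+empty_moov alone is often rejected; adding default_base_moof is the commonly suggested fix (a sketch, not verified against this exact file):

```shell
# default_base_moof makes each fragment self-addressed, which Chrome's
# Media Source Extensions byte-stream parser expects.
ffmpeg -i input.mp3 -acodec aac -b:a 256k \
  -movflags empty_moov+default_base_moof+frag_keyframe \
  -f mp4 output.mp4
```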

    On the client, the chunks get appended to a SourceBuffer (as explained above) after creating it with the MIME type audio/mp4; codecs="mp4a.40.2":

    this.sourceBuffer = this.mediaSource
                        .addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');

    and

    private appendSegment = (chunk) => {
      try {
        this.sourceBuffer.appendBuffer(chunk);
      } catch {
        // Silently ignores errors such as an InvalidStateError thrown
        // while the buffer is still updating.
        return;
      }
    }

    The problem is that I can only play the +dash-converted file if I start reading it from the beginning and keep appending chunks.

    However, if I start reading the file from further in, the audio never plays.
    playTrack(track, 0.0);  // Start at second 0 works
playTrack(track, 10.0); // Start at second 10 does not work
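    A closing note on why the byte-offset trick from the MP3 days fails here: a SourceBuffer fed with fMP4 needs the init segment (ftyp+moov) first, followed by whole moof+mdat fragments, so slicing the file at arbitrary byte offsets cannot work. One way to get independently appendable pieces is to let ffmpeg's dash muxer write an explicit init segment plus numbered media segments (file names below are placeholders):

```shell
# Audio-only DASH output: one init segment plus 1-second media segments.
ffmpeg -i input.mp3 -c:a aac -b:a 256k \
  -f dash -seg_duration 1 \
  -init_seg_name 'init.m4s' \
  -media_seg_name 'chunk-$Number%05d$.m4s' \
  out.mpd
```

    The client then appends init.m4s once, after which any chunk-NNNNN.m4s can be appended to jump to that position, since each fragment carries its own presentation timestamp.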