Advanced search

Media (0)

Keyword: - Tags -/flash

No media matching your criteria is available on this site.

Other articles (101)

  • Multilang: improve the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    After it is activated, a preconfiguration is set up automatically by MediaSPIP init, allowing the new feature to work out of the box. A configuration step is therefore not required.

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • Managing rights for creating and editing objects

    8 February 2011, by

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;

On other sites (10544)

  • Does a track run in a fragmented MP4 have to start with a key frame?

    18 January 2021, by stevendesu

    I'm ingesting an RTMP stream and converting it to a fragmented MP4 file in JavaScript. It took a week of work, but I'm almost finished with this task. I'm generating a valid ftyp atom, moov atom, and moof atom, and the first frame of the video actually plays (with audio) before it goes into infinite buffering, with no errors listed in chrome://media-internals.

    Plugging the video into ffprobe, I get an error similar to:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x558559198080] Failed to add index entry
    Last message repeated 368 times
    [h264 @ 0x55855919b300] Invalid NAL unit size (-619501801 > 966).
    [h264 @ 0x55855919b300] Error splitting the input into NAL units.

    This led me on a massive hunt for data-alignment issues and invalid byte offsets in my tfhd and trun atoms; however, no matter where I looked or how I sliced the data, I couldn't find any problems in the moof atom.

    I then took the original FLV file and converted it to an MP4 with ffmpeg, using the following command:

    ffmpeg -i ~/Videos/rtmp/big_buck_bunny.flv -c copy -ss 5 -t 10 -movflags frag_keyframe+empty_moov+faststart test.mp4

    I opened both the MP4 I was creating and the MP4 produced by ffmpeg in an atom parsing tool and compared the two:

    [Screenshot: Comparing MP4 files with MP4A]

    The first thing that jumped out at me was that the ffmpeg-generated file has multiple video samples per moof. Specifically, every moof started with one key frame, then contained all of the difference frames up to the next key frame (which became the start of the following moof atom).

    Contrast this with how I'm generating my MP4: I create a moof atom every time an FLV VIDEODATA packet arrives. This means my moof may not contain a key frame (and usually doesn't); a sketch of the keyframe-aligned alternative is below.
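
    For reference, here is a minimal sketch of the keyframe-aligned grouping I believe ffmpeg is doing, assuming an FLV demuxer that flags key frames; buildFragment() stands in for my existing moof/mdat writer and is hypothetical (TypeScript):

    interface VideoTag { isKeyFrame: boolean; data: Uint8Array; }

    const pending: VideoTag[] = [];

    // Buffer frames until the NEXT key frame arrives, so every emitted
    // fragment starts with a key frame, like ffmpeg's frag_keyframe output.
    function onVideoTag(tag: VideoTag, emit: (frag: Uint8Array) => void): void {
      if (tag.isKeyFrame && pending.length > 0) {
        emit(buildFragment(pending.splice(0))); // flush the completed GOP
      }
      pending.push(tag); // this key frame opens the next fragment
    }

    // Hypothetical: builds one moof+mdat pair from the buffered samples.
    declare function buildFragment(samples: VideoTag[]): Uint8Array;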

    Could this be why I'm having trouble? Or is there something else I'm missing?

    The video files in question can be downloaded here:

    Another issue I noticed was ffmpeg's prolific use of base_data_offset in the tfhd atom. However, when I tried tracking the total number of bytes appended and setting the base_data_offset myself, I got an error in Chrome along the lines of "MSE doesn't support base_data_offset". Per the ISO/IEC 14496-12 spec:

    If not provided, the base-data-offset for the first track in the movie fragment is the position of the first byte of the enclosing Movie Fragment Box, and for second and subsequent track fragments, the default is the end of the data defined by the preceding fragment.

    This wording leads me to believe that the data_offset in the first trun atom should equal the size of the moof atom, and the data_offset in the second trun atom should be 0 (zero bytes from the end of the data defined by the preceding fragment). However, when I tried this, I got an error that the video data couldn't be parsed. What did produce parseable data was the length of the moof atom plus the total length of the first track, as if the base offset for the second track were also the first byte of the enclosing moof box, the same as for the first track; the offset math is sketched below.
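
    To make that concrete, here is a minimal sketch of the offset math that produced parseable data, assuming a single mdat (32-bit size field, so an 8-byte header) placed immediately after the moof, with every track's implicit base offset being the first byte of the enclosing moof box; as I understand it, this corresponds to the default-base-is-moof tfhd flag that MSE expects (TypeScript, illustrative names):

    const MDAT_HEADER_SIZE = 8; // 32-bit box size + 'mdat' fourcc

    // Sample data is laid out in the mdat track by track, so each trun's
    // data_offset is measured from the first byte of the moof.
    function trunDataOffsets(moofSize: number, trackPayloadSizes: number[]): number[] {
      const offsets: number[] = [];
      let cursor = moofSize + MDAT_HEADER_SIZE;
      for (const size of trackPayloadSizes) {
        offsets.push(cursor);
        cursor += size;
      }
      return offsets;
    }

    // e.g. a 1234-byte moof, a 4000-byte video run and a 600-byte audio run:
    // trunDataOffsets(1234, [4000, 600]) -> [1242, 5242]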

  • Specifying track title or language in an MPEG-DASH manifest

    10 February 2019, by Ramesh Navi

    I am creating a manifest to play back adaptive WebM using DASH. Everything is working pretty well, but I need the language name or track name instead of the bitrate. Is that supported? How can I update or optimize this to support such a feature?

    Manifest creation:

    ffmpeg \
    -f webm_dash_manifest -i webm240.webm \
    -f webm_dash_manifest -i webm360.webm \
    -f webm_dash_manifest -i webm480.webm \
    -f webm_dash_manifest -i webm720.webm \
    -f webm_dash_manifest -i audio1.webm \
    -f webm_dash_manifest -i audio2.webm \
    -f webm_dash_manifest -i audio3.webm \
    -f webm_dash_manifest -i audio4.webm \
    -c copy -map 0 -map 1 -map 2 -map 3 -map 4 -map 5 -map 6 -map 7 \
    -f webm_dash_manifest \
    -adaptation_sets "id=0,streams=0,1,2,3 id=1,streams=4,5,6,7" \
    manifest.mpd

    Player audio track selection: [screenshot]
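
    One approach that may help, though I have not verified how much of this the webm_dash_manifest muxer propagates: tag each audio input with a language when producing the WebM files (e.g. ffmpeg's -metadata:s:a:0 language=eng), and give each language its own adaptation set, since players offer one selectable entry per AdaptationSet. DASH itself carries the language in the lang attribute of the AdaptationSet element, so as a fallback the generated MPD can be post-processed; here is a minimal sketch in TypeScript, where the id-to-language mapping is my own assumption:

    import { readFileSync, writeFileSync } from "fs";

    // Map AdaptationSet ids (the id= values from -adaptation_sets)
    // to the languages they contain. Hypothetical mapping.
    const langs: Record<string, string> = { "1": "eng" };

    let mpd = readFileSync("manifest.mpd", "utf8");
    for (const [id, lang] of Object.entries(langs)) {
      // Inject a lang attribute into the matching AdaptationSet tag.
      mpd = mpd.replace(`<AdaptationSet id="${id}"`, `<AdaptationSet id="${id}" lang="${lang}"`);
    }
    writeFileSync("manifest.mpd", mpd);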

  • Icecast to YouTube Live with track metadata

    22 November 2018, by eusid

    I am using ffmpeg to stream a still image along with my Icecast stream to YouTube. I would like to give the artists credit by displaying their names on the image, or perhaps even by using some kind of visualizer.

    I am curious how people have solved this in the past, whether by getting the metadata onto the image or by using a visualizer. I want to do this headlessly from my server, so I don't have to run OBS or something similar on my desktop. I'd hope not to reinvent the wheel, but if you point me in the right direction, I'll build the wheel myself if asking for a completed wheel gets me downvotes.

    How is this typically solved? Mainly getting the text onto the image and updating the stream with the new image. Perhaps I could write something with Pillow to do this, though I'm not sure it would work; one possible approach is sketched below.
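
    For context, the approach I am considering (unverified): run ffmpeg with a drawtext filter that re-reads a text file on every frame (drawtext=textfile=nowplaying.txt:reload=1) and keep that file updated from Icecast's status endpoint. A sketch of the updater in TypeScript, where the URL and JSON shape are assumptions based on Icecast 2.4's /status-json.xsl:

    import { writeFileSync } from "fs";

    const STATUS_URL = "http://localhost:8000/status-json.xsl"; // hypothetical server

    // Fetch the currently playing title and rewrite the file that the
    // drawtext filter re-reads (reload=1) on each frame.
    async function updateNowPlaying(): Promise<void> {
      const res = await fetch(STATUS_URL);
      const stats = await res.json();
      const title = stats?.icestats?.source?.title ?? ""; // single-mount layout
      writeFileSync("nowplaying.txt", String(title));
    }

    setInterval(() => updateNowPlaying().catch(console.error), 10_000);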