Other articles (70)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; the generation of a thumbnail: extraction of a (...)

On other sites (11256)

  • Creating an Init file from existing non-fragmented, segmented MP4 files

    20 November 2018, by slhck

    I am performing chunked encodes of longer video files, where I’ve split the original file into individual sequences that I have encoded separately. These sequences are files of different length, depending on where the scene cuts appear—they may be between 2 and 5 seconds long. They all start with an I-frame and are standalone.

    My encoded sequences are all MP4s, e.g.:

    test_0000.mp4
    test_0001.mp4
    test_0002.mp4
    test_0003.mp4
    test_0004.mp4

    They all have common properties:

    $ mp4info test_0000.mp4

    File:
     major brand:      isom
     minor version:    200
     compatible brand: isom
     compatible brand: iso2
     compatible brand: mp41
     fast start:       no

    Movie:
     duration:   2016 ms
     time scale: 1000
     fragments:  no

    ...

    Now, in order to play those with a DASH player, I have to create an initialization segment and individual fragmented MP4s.

    I could generate the fragmented MP4s via mp4fragment, which I run on each standalone MP4 file:
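
    Presumably something along these lines, using Bento4's defaults (the output name simply mirrors the listing below):

    $ mp4fragment test_0000.mp4 test_0000.m4s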

    $ mp4info test_0000.m4s
    File:
     major brand:      isom
     minor version:    200
     compatible brand: isom
     compatible brand: iso2
     compatible brand: mp41
     compatible brand: iso5
     fast start:       yes

    Movie:
     duration:   2016 ms
     time scale: 1000
     fragments:  yes

    ...

    But obviously, these are now not according to spec, and each of them still contains a moov atom.

    What I’d need instead is individual media segments, each with only one moof and mdat box, which in turn require an initialization segment containing only a moov box.
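
    In other words, the target layout would be roughly (file names illustrative):

    init.mp4  : ftyp + moov
    seg-1.m4s : styp (optional) + moof + mdat
    seg-2.m4s : styp (optional) + moof + mdat
    ...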

    How can I generate that from the existing, already encoded segments?

    I know this appears like an XY problem. In principle, I could segment my original file directly after encoding, and run those encodes at the same time, e.g. using ffmpeg’s dash muxer or MP4Box (see the sketch after this list), however:

    • There is almost no control over the resulting segment sizes, with respect to minimum and maximum duration
    • This approach does not parallelize
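
    For reference, the single-pass route I am trying to avoid would look roughly like this with ffmpeg's dash muxer (source.mp4 is a placeholder, and exact flag names vary by ffmpeg version):

    $ ffmpeg -i source.mp4 -c copy -seg_duration 4 -f dash manifest.mpd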

    I have also checked Bento4; it does not seem to offer this functionality. Neither does FFmpeg. MP4Box behaves similarly. They all assume you have one long file to start with.


    I see I could splice off the ftyp and moov boxes from these “fake fragments” in order to create an initialization segment. But I would end up with media segments containing multiple moof and mdat boxes, which is not according to the specification – it allows only a single movie fragment box per segment:

    4. Media Segments

    […] one optional Segment Type Box (styp) followed by a single Movie Fragment Box (moof) followed by one or more Media Data Boxes (mdat).

    I guess I can live with the styp not being present.
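
    If I do go the splicing route, Bento4's mp4extract might be enough to build the initialization segment; a rough, untested sketch (output names are just placeholders):

    $ mp4extract ftyp test_0000.m4s init_ftyp.bin
    $ mp4extract moov test_0000.m4s init_moov.bin
    $ cat init_ftyp.bin init_moov.bin > init.mp4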

  • How to merge segmented webvtt subtitle files and output a single file?

    15 February, by Dobbelina

    How can I merge a segmented WebVTT subtitle file and output a single file? The m3u8 looks like this:

    #EXTM3U
    #EXT-X-VERSION:4
    #EXT-X-PLAYLIST-TYPE:VOD
    #EXT-X-MEDIA-SEQUENCE:1
    #EXT-X-INDEPENDENT-SEGMENTS
    #EXT-X-TARGETDURATION:4
    #USP-X-TIMESTAMP-MAP:MPEGTS=900000,LOCAL=1970-01-01T00:00:00Z
    #EXTINF:4, no desc
    0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-1.webvtt
    #EXTINF:4, no desc
    0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-2.webvtt
    #EXTINF:4, no desc
    0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-3.webvtt
    #EXTINF:4, no desc
    0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-4.webvtt
    #EXTINF:4, no desc
    0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-5.webvtt
    #EXTINF:4, no desc
    0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-6.webvtt
    #EXT-X-ENDLIST

    I noticed that the cues in each segment are not synchronized against the total playing time, but against the individual TS segments.
    If ffmpeg can be used to do this, what magic input do I need to give it?
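
    The naive remux I would expect to work, with input.m3u8 standing in for the full playlist URL, is something like:

    $ ffmpeg -i "input.m3u8" -c:s copy merged.vtt

    but presumably that runs into the same cue timing problem described below.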

    A single, correctly cued VTT or SRT file is what I want.

    I have a great appetite and don't like chunks, lol!

    Thanks for any replies, you lovely people!

    With this I get a merged VTT file, but the cues are all wrong:

    ffmpeg -i "https://cmoreseusphlsvod60.akamaized.net/vod/bea44/0ghzi1b2cz5(11792107_ISMUSP).ism/0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000.m3u8" -f segment -segment_time 4 -segment_format webvtt -scodec copy out-%05d.vtt

    Each segment is cued against the individual TS segments rather than the total playing time. Example output of the above command:

    WEBVTT

    00:00.000 --> 00:03.040
    Du har aktier i ett företag
    som saknar framtid.

    00:00.000 --> 00:03.280
    De vill ha aktierna.
    Du känner dem inte, Olga.

    00:00.000 --> 00:01.720
    De som får Kastrups aktier vinner.

    All cues start at 00:00.000, which isn't very helpful.

    Some segments contain no cues, like segment 15 for example:
    https://cmoreseusphlsvod60.akamaized.net/vod/bea44/0ghzi1b2cz5(11792107_ISMUSP).ism/0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-15.webvtt

    "A WebVTT Segment MAY contain no cues; this indicates that no
     subtitles are to be displayed during that period."
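
    One workaround I am considering (untested): shift each segment's cues by its start offset in the playlist, then concatenate. This assumes the fixed 4-second EXTINF durations above, with seg-1.webvtt … seg-6.webvtt as placeholder names for the downloaded segments:

    printf 'WEBVTT\n\n' > merged.vtt
    for i in $(seq 1 6); do
        # shift this segment's cues by its start time in the overall timeline
        ffmpeg -itsoffset $(( (i - 1) * 4 )) -i "seg-$i.webvtt" -c:s copy "shifted-$i.vtt"
        # append, dropping the per-file WEBVTT header line
        tail -n +2 "shifted-$i.vtt" >> merged.vtt
    done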

  • How to detect a blue screen from an FFmpeg video packet?

    28 November 2017, by 심상원

    Good morning. I have a question about FFmpeg.

    I’m using FFmpeg to study C++ on Linux.

    When the camera feed is RTSP and the format is H.264, I would like to determine whether the camera image is a blue screen, but the following concepts are confusing to me.

    1. A keyframe arrives every second, or on some X-second cycle. Does the camera deliver keyframes even if the image stays the same?

    2. If keyframes are delivered, is the size of the packets transmitted between those cycles zero?

    3. If the stream behaves the same as for a normal image, should I decode the frames and compare them individually? (See the sketch below.)
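
    For point 3, what I have in mind is something like the following, using ffmpeg's signalstats filter to print per-frame statistics (the RTSP URL is a placeholder, and the thresholds would need tuning): a solid blue frame should show a high lavfi.signalstats.SATAVG with lavfi.signalstats.HUEAVG steady in the blue band, and near-zero YDIF/UDIF/VDIF while the image does not change:

    $ ffmpeg -i rtsp://camera/stream -vf "signalstats,metadata=print" -f null -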

    If none of the above is the right approach, please let me know if you have a good way.

    Thank you.