
Other articles (54)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present the changes to your MediaSPIP, or news about your projects hosted on it, using the news section.
    In MediaSPIP’s default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of type “news item”, the fields offered by default are: publication date (customise the publication date) (...)

  • Upgrading from version 0.1 to 0.2

    24 June 2013, by

    An explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What’s new
    Regarding software dependencies: use of the latest versions of FFmpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php is no longer installed, as it is no longer maintained (...)

On other sites (6891)

  • Adding audio at specific time of the video

    21 September 2018, by Sergio Bruccoleri

    I want to add an audio file at a specific point in a video, without completely replacing the video’s original audio stream (only at that specific time, for the duration of the replacement audio).

    I have this command below:

    ffmpeg -y -i video.mp4 -itsoffset 00:00:07 -i audio.mp3 -map 0:0 -map 1:0 -c:v copy -async 1 out.mp4

    But this command replaces the entire audio stream of the video with the content of the second input.

    The expected result is:

    • The video plays as normal with the original audio

    • Once timestamp 00:00:07 is reached, the replacement audio stream plays.

    • Once the new audio stream stops, the original audio continues playing.

    Can anyone help me solve this issue? I’ve been trying with atrim without any result, but I am probably doing something wrong.
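
    One approach that is often suggested for this kind of task is to delay the inserted audio with adelay and mix it over the original track with amix. This is only a sketch built from the file names and the 7-second offset in the question, not a verified answer:

    ffmpeg -y -i video.mp4 -i audio.mp3 -filter_complex "[1:a]adelay=7000|7000[ins];[0:a][ins]amix=inputs=2:duration=first[aout]" -map 0:v -map "[aout]" -c:v copy out.mp4

    Note that amix blends the two tracks rather than hard-replacing the original during that interval; silencing the original for the duration of the inserted clip would additionally need a volume filter with an enable='between(t,start,end)' expression applied to [0:a] before the mix.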

  • How to seek mp4 aac audio using Media Source Extensions

    29 August 2018, by Chris

    Please can someone offer me a few pointers on seeking within streamed aac audio in mp4 containers? I’m trying to develop a music download service that sips data via ranged requests rather than simply linking to an mp4 file as an <audio> src (which would instead buffer the whole file as quickly as possible, and so be rather wasteful and expensive).

    So far I’ve managed to successfully append sequential audio range buffers to the SourceBuffer object using partial/ranged requests, attached to my suitably mime-typed MediaSource object. But as soon as I try to seek, the wheels come off and I receive a ’CHUNK_DEMUXER_ERROR_APPEND_FAILED’ error, with the specific issue: ’stream parsing failed’.

    I’ve prepared my mp4 files by encoding them with ffmpeg (via the fluent ffmpeg module), rewriting the movie header box at the start of the file (via the -movflags faststart setting) so that the duration can be parsed. I then fragment the file with mp4fragment (part of the Bento4 tools) with the default settings, and check to ensure the structure of the file matches the format specified by ISO BMFF, with pairs of movie fragments and data boxes (moof/mdat) describing the audio stream. Given the source buffer has no problem playing from the beginning, with contiguous subsequent ranges, this appears to confirm that the format of the mp4 file is acceptable.
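
    For reference, the preparation pipeline described above might look roughly like this (a sketch only; the file names and the aac/128k encode settings are assumptions rather than details taken from the question):

    ffmpeg -i source.wav -c:a aac -b:a 128k -movflags faststart plain.mp4
    mp4fragment plain.mp4 fragmented.mp4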

    As an aside, I’ve tried fragmenting the file completely in ffmpeg/fluent ffmpeg (using the ’-movflags empty_moov+default_base_moof’ options), but while this works, it also removes the duration from the moov as you’d expect, so the file gets larger during playback as more fragments are fetched and appended. If I set the file duration manually, I still have the issue of not being able to seek to unbuffered audio, so I only seem to be making life more difficult trying to fragment the file solely in ffmpeg.

    So how should I go about seeking within the stream? I gather that seeking effectively ’needle-drops’ randomly, and so the source buffer might struggle to parse the data out of context, but I imagined that it would skip to the next available fragment in the range that I fetch. That range is calculated by using the percentage of the seek-bar width to set player.currentTime, which is then converted to a suitable byte range using the 128 kbps CBR figure to convert seconds to bytes, and sent as a 206 partial range request.
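
    As a rough worked example of that conversion (my own numbers, not the question’s): at 128 kbps CBR, one second of audio is about 128000 / 8 = 16000 bytes, so a seek to 120 s maps to a Range request starting near byte 120 × 16000 = 1920000, plus whatever initialization data (ftyp/moov) precedes the media data in the file.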

    I’ve seen mention of buffer offsets, but I don’t understand how these apply. Most of the dev examples I’ve seen focus on whole files or segmented videos, rather than on seeking within a single fragmented audio file. Do I need to somehow retain a portion of the data from the moov box when seeking, so that the source buffer can parse it? In the trun box I have a data offset that varies between two values throughout the file, 444 and 448, depending on whether the sample count is 86 or 87. I’m not sure why it’s not consistent.

    Here’s what the moov looks like for my audio file:

    [ftyp] size=8+24
     major_brand = isom
     minor_version = 200
     compatible_brand = isom
     compatible_brand = iso2
     compatible_brand = mp41
     compatible_brand = iso5
    [moov] size=8+620
     [mvhd] size=12+96
       timescale = 1000
       duration = 350047
       duration(ms) = 350047
     [trak] size=8+448
       [tkhd] size=12+80, flags=7
         enabled = 1
         id = 1
         duration = 350047
         width = 0.000000
         height = 0.000000
       [edts] size=8+28
         [elst] size=12+16
           entry count = 1
           entry/segment duration = 350000
           entry/media time = 2048
           entry/media rate = 1
       [mdia] size=8+312
         [mdhd] size=12+20
           timescale = 44100
           duration = 0
           duration(ms) = 0
           language = und
         [hdlr] size=12+41
           handler_type = soun
           handler_name = Bento4 Sound Handler
         [minf] size=8+219
           [smhd] size=12+4
             balance = 0
           [dinf] size=8+28
             [dref] size=12+16
               [url ] size=12+0, flags=1
                 location = [local to file]
           [stbl] size=8+159
             [stsd] size=12+79
               entry-count = 1
               [mp4a] size=8+67
                 data_reference_index = 1
                 channel_count = 2
                 sample_size = 16
                 sample_rate = 44100
                 [esds] size=12+27
                   [ESDescriptor] size=2+25
                     es_id = 0
                     stream_priority = 0
                     [DecoderConfig] size=2+17
                       stream_type = 5
                       object_type = 64
                       up_stream = 0
                       buffer_size = 0
                       max_bitrate = 128006
                       avg_bitrate = 128006
                       DecoderSpecificInfo = 12 10
                     [Descriptor:06] size=2+1
             [stts] size=12+4
               entry_count = 0
             [stsc] size=12+4
               entry_count = 0
             [stsz] size=12+8
               sample_size = 0
               sample_count = 0
             [stco] size=12+4
               entry_count = 0
     [mvex] size=8+48
       [mehd] size=12+4
         duration = 350047
       [trex] size=12+20
         track id = 1
         default sample description index = 1
         default sample duration = 0
         default sample size = 0
         default sample flags = 0

    And here’s a typical fragment:

    [moof] size=8+428
     [mfhd] size=12+4
       sequence number = 1
     [traf] size=8+404
       [tfhd] size=12+8, flags=20008
         track ID = 1
         default sample duration = 1024
       [tfdt] size=12+8, version=1
         base media decode time = 0
       [trun] size=12+352, flags=201
         sample count = 86
         data offset = 444
    [mdat] size=8+32653

    Does that all look good? Any pointers for seeking within such a file would be hugely appreciated. Thanks!

  • Playing RTSP in WPF application with low latency using FFMPEG / FFMediaElement (FFME)

    22 March 2019, by Paboka

    I’m trying to use the FFMediaElement (FFME, a WPF MediaElement replacement based on FFmpeg) component to play RTSP live video in my WPF application.

    I have a good connection to my camera and I want to play it with minimum available latency.

    I’ve reduced the latency by changing ProbeSize to its minimal value:

    private void Media_OnMediaInitializing(object Sender, MediaInitializingRoutedEventArgs e)
    {
     e.Configuration.GlobalOptions.ProbeSize = 32;
    }

    But I still have about one second of latency from the very beginning of the stream: when I start playing, I have to wait about a second before the video appears, and from then on the picture stays roughly one second behind.

    I’ve also tried changing the following parameters:

    e.Configuration.GlobalOptions.EnableReducedBuffering = true;
    e.Configuration.GlobalOptions.FlagNoBuffer = true;
    e.Configuration.GlobalOptions.MaxAnalyzeDuration = TimeSpan.Zero;

    but this had no effect.

    I measured the time interval between FFmpeg output lines (the number in the first column is the time elapsed since the previous line, in ms):

    ----     OpenCommand: Entered
      39     FFInterop.Initialize: FFmpeg v4.0
       0     EVENT START: MediaInitializing
       0     EVENT DONE : MediaInitializing
     379     EVENT START: MediaOpening
       0     EVENT DONE : MediaOpening
       0     COMP VIDEO: Start Offset:      0,000; Duration:        N/A
      41     SYNC-BUFFER: Started.
     609     SYNC-BUFFER: Finished. Clock set to 1534932751,634
       0     EVENT START: MediaOpened
       0     EVENT DONE : MediaOpened
       0     EVENT START: BufferingStarted
       0     EVENT DONE : BufferingStarted
       0     OpenCommand: Completed
       0     V BLK: 1534932751,634 | CLK: 1534932751,634 | DFT:    0 | IX:   0 | PQ:     0,0k | TQ:     0,0k
       0     Command Queue (1 commands): Before ProcessNext
       0        Play - ID: 404 Canceled: False; Completed: False; Status: WaitingForActivation; State:
      94     V BLK: 1534932751,675 | CLK: 1534932751,699 | DFT:   24 | IX:   1 | PQ:     0,0k | TQ:     0,0k

    So the "sync-buffering" step takes most of the time.

    Is there any FFmpeg parameter that would allow reducing the size of this buffer?
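
    One way to check whether that latency comes from FFmpeg-level input buffering rather than from FFME’s own sync-buffering stage (a sketch; the RTSP URL is a placeholder, and these are plain FFmpeg/ffplay options rather than FFME configuration properties) is to play the same stream with ffplay and its usual low-latency flags:

    ffplay -fflags nobuffer -flags low_delay -rtsp_transport tcp rtsp://camera.example/stream

    If the delay largely disappears there, the remaining second is most likely being introduced by the sync-buffering step shown in the log above.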