
Other articles (67)

  • General document management

    13 May 2011

    MediaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original available for download in case it cannot be read in a web browser; and retrieving the original document's metadata in order to describe the file textually.
    The tables below explain what MediaSPIP can do (...)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (4875)

  • avformat/asfdec_o: Don't segfault with lots of attached pics

    12 November 2020, by Andreas Rheinhardt
    avformat/asfdec_o: Don't segfault with lots of attached pics
    

    The ASF file format has a limit of 127 streams and the "asf_o" demuxer
    (the ASF demuxer from Libav) has an array of pointers for a structure
    called ASFStream that is allocated on demand for every stream. Attached
    pictures are not streams in the sense of the ASF specification, yet the
    demuxer created an ASFStream for them; and in one codepath it also
    forgot to check whether the array of ASFStreams is already full. The
    result is a write beyond the end of the array and a segfault later on.

    Fixing this is easy: don't create ASFStreams for attached picture
    streams.

    (Other results of the current state of affairs are unnecessary allocations
    (of ASFStream structures) and the misparsing of valid files (there might not
    be enough ASFStreams left for the valid streams if attached pictures take
    up too many); furthermore, the ASFStreams created for attached pictures all
    have the stream number 0, an invalid stream number (the valid range is
    1-127). This means that invalid data (packets for a stream with stream
    number 0) won't get rejected later on.)

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>

    • [DH] libavformat/asfdec_o.c
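
    As an illustration only (not part of the patch itself): the Libav-derived demuxer discussed above can be forced by name to check which code path handles a given file, assuming a build where the "asf_o" demuxer is still enabled.

      # Force the Libav-derived ASF demuxer ("asf_o") instead of the default
      # "asf" demuxer (demuxer name taken from the commit above); a file with
      # many attached pictures exercised the buggy code path before this fix.
      ffprobe -f asf_o input.wma
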
  • movenc: Allow writing a DASH sidx atom at the start of files

    21 October 2014, by Martin Storsjö
    movenc: Allow writing a DASH sidx atom at the start of files
    

    This is mapped to the faststart flag (which in this case
    perhaps should be called "shift and write index at the
    start of the file"), which for fragmented files will
    write a sidx index at the start.

    When segmenting DASH into files, there’s usually one sidx
    at the start of each segment (although it’s not clear to me
    whether that actually is necessary). When storing all of it
    in one file, the MPD doesn’t necessarily need to describe
    the individual segments, but the offsets of the fragments can be
    fetched from one large sidx atom at the start of the file. This
    allows creating files for the DASH ISO BMFF on-demand profile.

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DBH] libavformat/movenc.c
    • [DBH] libavformat/movenc.h
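
    A hedged sketch of how this might be used from the command line (the flag behaviour is inferred from the commit message and assumes a build that includes this change): remux into a fragmented MP4 with faststart enabled, which should then place a single global sidx atom at the start of the file.

      # Sketch, not taken from the commit itself: fragmented MP4 output with
      # the faststart shift, which per the description above writes a sidx
      # atom at the start of the file.
      ffmpeg -i input.mp4 -c copy -movflags +frag_keyframe+faststart ondemand.mp4
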
  • How to adjust MPEG-2 TS start time with ffmpeg?

    29 June 2015, by Maxim Kornienko

    I'm writing a simple HLS (HTTP Live Streaming) Java server to live-cast (really live, not on demand) a screen show plus voice. I constantly receive chunks of image frames and audio samples as input to my service and produce MPEG-2 TS files plus an m3u8 playlist page as output. The workflow is the following:

    1. Collect (buffer) source video frames and audio for a certain period of time
    2. Convert the series of video frames to an H.264-encoded video file
    3. Convert the audio samples to an MP3 audio file
    4. Merge them into a .ts file with an ffmpeg command:

      ffmpeg -i audio.mp3 -i video.mp4 -f mpegts -c:a copy -c:v copy -vprofile main -level:v 4.0 -vbsf h264_mp4toannexb -flags -global_header segment.ts
    5. Publish the resulting .ts files in an m3u8 playlist.
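
    For concreteness, steps 2 and 3 above might look roughly like the commands below; the file names, frame rate, and raw sample format are assumptions chosen to match the 4 fps / 48 kHz mono streams shown in the ffprobe output further down, not details taken from the question.

      # Step 2 (sketch): encode buffered screenshots into an H.264 video file
      ffmpeg -framerate 4 -i frame%04d.png -c:v libx264 -pix_fmt yuv420p video.mp4
      # Step 3 (sketch): encode buffered raw PCM samples into an MP3 file
      ffmpeg -f s16le -ar 48000 -ac 1 -i audio.pcm -c:a libmp3lame -b:a 64k audio.mp3
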

    The problem is that the resulting playlist stops after the first segment has played. VLC logs the following errors:

    freetype error: Breaking unbreakable line
    ts error: libdvbpsi (PSI decoder): TS discontinuity (received 0, expected 4) for PID 17
    ts error: libdvbpsi (PSI decoder): TS duplicate (received 0, expected 1) for PID 0
    ts error: libdvbpsi (PSI decoder): TS duplicate (received 0, expected 1) for PID 4096
    core error: ES_OUT_SET_(GROUP_)PCR is called too late (pts_delay increased to 1000 ms)
    core error: ES_OUT_RESET_PCR called
    core error: Could not convert timestamp 185529572000
    ts error: libdvbpsi (PSI decoder): TS discontinuity (received 0, expected 4) for PID 17
    ts error: libdvbpsi (PSI decoder): TS duplicate (received 0, expected 1) for PID 0
    ts error: libdvbpsi (PSI decoder): TS duplicate (received 0, expected 1) for PID 4096
    core error: ES_OUT_SET_(GROUP_)PCR is called too late (jitter of 8653 ms ignored)
    core error: Could not get display date for timestamp 0
    core error: Could not convert timestamp 185538017000
    core error: Could not convert timestamp 185538267000
    core error: Could not convert timestamp 185539295977
    ...

    I guess the reason is that the start times of the segments do not belong to one continuous stream, but it's impossible to concat and re-segment (with ffmpeg -f segment) the whole stream every time a new chunk is added. I tried adding the #EXT-X-DISCONTINUITY tag to the playlist, as suggested here, but it didn't help. When I ffprobe my segments I get:

    Input #0, mpegts, from '26.ts':
    Duration: 00:00:10.02, start: 1.876978, bitrate: 105 kb/s
    Program 1
    Metadata:
     service_name    : Service01
     service_provider: FFmpeg
    Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 640x640, 4 fps, 4 tbr, 90k tbn, 8 tbc
    Stream #0:1[0x101]: Audio: mp3 ([3][0][0][0] / 0x0003), 48000 Hz, mono, s16p, 64 kb/s  

    The start value in the line Duration: 00:00:10.02, start: 1.876978, bitrate: 105 kb/s is more or less the same for all my segments.
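
    For reference, a quick way to print just these two fields for a segment (a sketch using standard ffprobe options):

      ffprobe -v error -show_entries format=start_time,duration -of default=noprint_wrappers=1 26.ts
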
    When I check segments from available proven-to-work playlists (like http://vevoplaylist-live.hls.adaptive.level3.net/vevo/ch1/appleman.m3u8), they all have different start values for each segment, for example:

    Input #0, mpegts, from 'segm150518140104572-424570.ts':
    Duration: 00:00:06.17, start: 65884.808689, bitrate: 479 kb/s
    Program 257
    Stream #0:0[0x20]: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p, 320x180 [SAR 1:1 DAR 16:9], 30 fps, 29.97 tbr, 90k tbn, 60 tbc
    Stream #0:1[0x21]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 115 kb/s
    Stream #0:2[0x22]: Data: timed_id3 (ID3  / 0x20334449)

    and the next one after it:

    Input #0, mpegts, from 'segm150518140104572-424571.ts':
    Duration: 00:00:06.22, start: 65890.814689, bitrate: 468 kb/s
    Program 257
    Stream #0:0[0x20]: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p, 320x180 [SAR 1:1 DAR 16:9], 30 fps, 29.97 tbr, 90k tbn, 60 tbc
    Stream #0:1[0x21]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 124 kb/s
    Stream #0:2[0x22]: Data: timed_id3 (ID3  / 0x20334449)

    They differ in that the start time of segm150518140104572-424571.ts is equal to the start time plus the duration of segm150518140104572-424570.ts.

    How could this start value be adjusted with ffmpeg? Or maybe my whole approach is wrong? Unfortunately, I couldn't find a working example on the internet of a live (not on-demand) video service implemented with ffmpeg.
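
    One possible direction, sketched under two assumptions: the FFmpeg build is recent enough to have the -output_ts_offset output option, and the Java service keeps track of the total duration it has already published (the OFFSET variable below is purely hypothetical bookkeeping). The idea is to shift each segment's timestamps so that it starts where the previous one ended, as in the working playlist above.

      # OFFSET = sum of the durations of the segments already published
      # (hypothetical value; maintained by the service, not by ffmpeg)
      OFFSET=20.04
      ffmpeg -i audio.mp3 -i video.mp4 -c:a copy -c:v copy \
             -output_ts_offset "$OFFSET" \
             -vbsf h264_mp4toannexb -flags -global_header \
             -f mpegts segment.ts
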