Media (91)

Other articles (95)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improve the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, MediaSPIP init automatically sets up a preconfiguration that makes the new feature immediately operational. It is therefore not necessary to go through a configuration step for this.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (9156)

  • avformat/matroskadec: Improve frame size parsing error messages

    3 December 2019, by Andreas Rheinhardt
    avformat/matroskadec: Improve frame size parsing error messages
    

    When parsing the sizes of the frames in a lace fails, sometimes no
    error message was raised (e.g. when using xiph or fixed-size lacing).
    Only EBML lacing generated error messages (which were wrongly declared
    as AV_LOG_INFO), but even here not all errors resulted in an error
    message. So add a generic error message to catch them all.

    Moreover, if parsing one of the EBML numbers fails, ebml_read_num already
    emits its own error messages, so that all that is needed is a generic error
    message to indicate that this happened during parsing the sizes of the
    frames in a block; in other words, the error messages specific to
    parsing EBML lace numbers can be and have been removed.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
    Signed-off-by: James Almer <jamrial@gmail.com>

    • [DH] libavformat/matroskadec.c
  • ffmpeg problems with streaming mp4 over udp in local network

    28 November 2019, by AJ Cole

    I’m streaming mp4 video files (some of them avi converted to mp4 with ffmpeg earlier) over udp://232.255.23.23:1234 from Linux (embedded) with ffmpeg v3.4.2 to multiple Linux (antiX) machines that play the stream with mpv. All of this happens on a local network, so I expected it to work flawlessly, but unfortunately it doesn’t.

    Here are the original commands I tried to use:

    ffmpeg

    ffmpeg -re -i PATH_TO_FILE.mp4 -c copy -f mpegts udp://232.255.23.23:1234

    mpv

    mpv --no-config --geometry=[geo settings] --no-border udp://232.255.23.23:1234

    This seemed to work well; however, a problem appeared: on the displaying end, the stream is actually much longer than the streamed content itself. The mp4 files total 5 minutes 36 seconds, yet mpv plays the entire stream loop in >= 6 minutes. I think this happens because of dropped frames that mpv waits to recover from, which extends playback beyond the length of the actual content. This cannot work in my case, as I have a precise time slot for displaying the stream and it cannot be longer than the streamed content.
    All the content is made in 1680x800 resolution and is displayed on a screen with 1680x1050 resolution (positioned with mpv geometry).

    It appears that using this command for mpv:

    mpv --no-config --framedrop=no --geometry=[geo settings] --no-border udp://232.255.23.23:1234

    made the duration correct; however, it sometimes introduces huge artifacts in the videos.

    I read that using -re for streaming can cause these frame drops, so I tried setting a static framerate for both the file input and the output stream, for example:

    ffmpeg -re -i PATH_TO_FILE.mp4 -c copy -r 25 -f mpegts udp://232.255.23.23:1234

    This reads the file at its native framerate and outputs the stream at 25 fps. The timing duration appears correct, but it also causes occasional artifacts, and I think the overall quality is worse. Output from mpv when one of the artifacts happened:

    [ffmpeg/video] h264: cabac decode of qscale diff failed at 85 19
    [ffmpeg/video] h264: error while decoding MB 85 19, bytestream 85515

    I also tried using --untimed or --no-cache in mpv, but these cause stutters in the video.

    I’m also getting frequent Invalid video timestamp warnings in mpv, for example: Invalid video timestamp: 1.208333 -> -8.711667

    Playing in mpv without --no-config and with --untimed added also causes frequent artifacts:

    V: -00:00:00 / 00:00:00 Cache:  0s+266KB
    [ffmpeg/video] h264: Invalid NAL unit 8, skipping.
    V: -00:00:00 / 00:00:00 Cache:  0s+274KB
    [ffmpeg/video] h264: Reference 4 >= 4
    [ffmpeg/video] h264: error while decoding MB 6 0, bytestream 31474
    [ffmpeg/video] h264: error while decoding MB 78 49, bytestream -12
    V: 00:00:06 / 00:00:00 Cache:  5s+11KB
    Invalid video timestamp: 6.288333 -> -8.724933
    V: -00:00:05 / 00:00:00 Cache:  3s+0KB
    [ffmpeg/video] h264: Invalid NAL unit 8, skipping.
    [ffmpeg/video] h264: error while decoding MB 59 24, bytestream -27
    V: -00:00:04 / 00:00:00 Cache:  3s+0KB
    [ffmpeg/video] h264: Reference 4 >= 3
    [ffmpeg/video] h264: error while decoding MB 5 2, bytestream 13402
    V: -00:00:03 / 00:00:00 Cache:  2s+0KB
    [ffmpeg/video] h264: Reference 5 >= 4
    [ffmpeg/video] h264: error while decoding MB 51 21, bytestream 9415

    I tried playing the stream with ffplay and it also caused the videos to be "played" 20 seconds longer.
    Is there any way to keep the streaming duration intact and prevent those huge artifacts? These aren’t huge video files (a few MB each), and everything happens on a local network, so the latencies are minimal.

    Output from ffmpeg when streaming one of the files:

    libavutil      55. 78.100 / 55. 78.100
     libavcodec     57.107.100 / 57.107.100
     libavformat    57. 83.100 / 57. 83.100
     libavdevice    57. 10.100 / 57. 10.100
     libavfilter     6.107.100 /  6.107.100
     libswscale      4.  8.100 /  4.  8.100
     libswresample   2.  9.100 /  2.  9.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SDM.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf57.48.100
     Duration: 00:00:20.00, start: 0.000000, bitrate: 1883 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1680x800 [SAR 1:1 DAR 21:10], 1880 kb/s, 24 fps, 24 tbr, 12288 tbn, 48 tbc (default)
       Metadata:
         handler_name    : VideoHandler
    Output #0, mpegts, to 'udp://232.255.23.23:1234':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf57.83.100
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1680x800 [SAR 1:1 DAR 21:10], q=2-31, 1880 kb/s, 24 fps, 24 tbr, 90k tbn, 25 tbc (default)
       Metadata:
         handler_name    : VideoHandler
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    frame=  480 fps= 24 q=-1.0 Lsize=    5009kB time=00:00:19.87 bitrate=2064.7kbits/s speed=   1x
    video:4592kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 9.082929%

    Edit: none of the files contain any audio, so there should be even less traffic on the network.

  • Precise method of segmenting & transcoding video+audio (via ffmpeg) into an on-demand HLS stream?

    17 November 2019, by Felix

    Recently I’ve been messing around with FFmpeg and streams through Node.js. My ultimate goal is to serve a transcoded video stream, from any input filetype, via HTTP, generated in real time as it’s needed in segments.

    I’m currently attempting to handle this using HLS. I pre-generate a dummy m3u8 manifest using the known duration of the input video. It contains a bunch of URLs that point to individual constant-duration segments. Then, once the client player starts requesting the individual URLs, I use the requested path to determine which time range of video the client needs. Then I transcode the video and stream that segment back to them.
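
    The path-to-time mapping described above reduces to a few lines. A minimal sketch (the names are illustrative; segmentDur stands in for the constant segment duration advertised in the dummy manifest):

    ```javascript
    // Map a requested segment path such as "/segment/3.ts" to the
    // time range that needs to be transcoded for that segment.
    const segmentDur = 5; // seconds per segment (illustrative)

    function timeRangeFor(path) {
      const idx = Number(path.match(/segment\/(\d+)\.ts/)[1]);
      return { start: idx * segmentDur, duration: segmentDur };
    }

    console.log(timeRangeFor('/segment/3.ts')); // { start: 15, duration: 5 }
    ```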

    Now for the problem: this approach mostly works, but has a small audio bug. Currently, with most test input files, my code produces a video that, while playable, seems to have a very small (< 0.25 second) audio skip at the start of each segment.

    I think this may be an issue with time-based splitting in ffmpeg, where the audio stream possibly cannot be sliced accurately at the exact frame the video is. So far, I’ve been unable to figure out a solution to this problem.

    If anybody can steer me in any direction, or point me to a pre-existing library/server that solves this use case, I’d appreciate the guidance. My knowledge of video encoding is fairly limited.

    I’ll include an example of my relevant current code below, so others can see where I’m stuck. You should be able to run this as a Node.js Express server, then point any HLS player at localhost:8080/master to load the manifest and begin playback. See the transcode.get('/segment/:seg.ts', ...) line at the end for the relevant transcoding bit.

    'use strict';
    const express = require('express');
    const ffmpeg = require('fluent-ffmpeg');
    let PORT = 8080;
    let HOST = 'localhost';
    const transcode = express();


    /*
    * This file demonstrates an Express-based server, which transcodes & streams a video file.
    * All transcoding is handled in memory, in chunks, as needed by the player.
    *
    * It works by generating a fake manifest file for an HLS stream, at the endpoint "/m3u8".
    * This manifest contains links to each "segment" video clip, which browser-side HLS players will load as-needed.
    *
    * The "/segment/:seg.ts" endpoint is the request destination for each clip,
    * and uses FFMpeg to generate each segment on-the-fly, based off which segment is requested.
    */


    const pathToMovie = 'C:\\input-file.mp4';  // The input file to stream as HLS.
    const segmentDur = 5; //  Controls the duration (in seconds) that the file will be chopped into.


    const getMetadata = async(file) => {
       return new Promise( resolve => {
           ffmpeg.ffprobe(file, function(err, metadata) {
               console.log(metadata);
               resolve(metadata);
           });
       });
    };



    // Generate a "master" m3u8 file, which the player should point to:
    transcode.get('/master', async(req, res) => {
       res.set({"Content-Disposition":"attachment; filename=\"m3u8.m3u8\""});
       res.send(`#EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=150000
    /m3u8?num=1
    #EXT-X-STREAM-INF:BANDWIDTH=240000
    /m3u8?num=2`)
    });

    // Generate an m3u8 file to emulate a premade video manifest. Guesses segments based off duration.
    transcode.get('/m3u8', async(req, res) => {
       let met = await getMetadata(pathToMovie);
       let duration = met.format.duration;

       let out = '#EXTM3U\n' +
           '#EXT-X-VERSION:3\n' +
           `#EXT-X-TARGETDURATION:${segmentDur}\n` +
           '#EXT-X-MEDIA-SEQUENCE:0\n' +
           '#EXT-X-PLAYLIST-TYPE:VOD\n';

       let splits = Math.ceil(duration / segmentDur);
       for(let i = 0; i < splits; i++){
           out += `#EXTINF:${segmentDur},\n/segment/${i}.ts\n`;
       }
       out+='#EXT-X-ENDLIST\n';

       res.set({"Content-Disposition":"attachment; filename=\"m3u8.m3u8\""});
       res.send(out);
    });

    // Transcode the input video file into segments, using the given segment number as time offset:
    transcode.get('/segment/:seg.ts', async(req, res) => {
       const segment = req.params.seg;
       const time = segment * segmentDur;

       let proc = new ffmpeg({source: pathToMovie})
           .seekInput(time)
           .duration(segmentDur)
           .outputOptions('-preset faster')
           .outputOptions('-g 50')
           .outputOptions('-profile:v main')
           .withAudioCodec('aac')
           .outputOptions('-ar 48000')
           .withAudioBitrate('155k')
           .withVideoBitrate('1000k')
           .outputOptions('-c:v h264')
           .outputOptions(`-output_ts_offset ${time}`)
           .format('mpegts')
           .on('error', function(err, st, ste) {
               console.log('an error happened:', err, st, ste);
           }).on('progress', function(progress) {
               console.log(progress);
           })
           .pipe(res, {end: true});
    });

    transcode.listen(PORT, HOST);
    console.log(`Running on http://${HOST}:${PORT}`);
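
    One subtlety with the dummy-manifest approach above: the final segment is almost always shorter than segmentDur, while the manifest advertises every segment as segmentDur long. A sketch of computing per-segment EXTINF durations whose sum matches the real duration (a hypothetical helper, not part of the server code above):

    ```javascript
    // Compute per-segment durations so the advertised EXTINF values
    // sum exactly to the real total duration; the last segment
    // carries the remainder. Illustrative only.
    function segmentDurations(totalDur, segDur) {
      const n = Math.ceil(totalDur / segDur);
      return Array.from({ length: n }, (_, i) =>
        i === n - 1 ? totalDur - segDur * (n - 1) : segDur);
    }

    console.log(segmentDurations(23, 5)); // [ 5, 5, 5, 5, 3 ]
    ```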