Other articles (111)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • Automatic installation script for MediaSPIP

    25 April 2011

    To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it you need SSH access to your server and a "root" account, which allows the dependencies to be installed. Contact your hosting provider if you do not have these.
    The documentation for using the installation script (...)

  • Adding user-specific information and other changes to author-related behavior

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviors (see its documentation for more information).
    It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras".

On other sites (10289)

  • What ffmpeg settings ensure 0 duplicate frames and 0 dropped frames when capturing to an MPEG-2 program stream using ffmpeg/avfoundation on a Mac?

    16 February 2017, by aerodavo

    I’m trying to capture to a DVD-compliant MPEG-2 file (ffmpeg: -target ntsc-dvd) from the HDMI output of a camcorder, through a Magewell HDMI-to-USB 3.0 box, into my Late 2012 15" non-Retina MacBook Pro (quad-core 2.3 GHz, 16 GB RAM, SSD), using ffmpeg/avfoundation.

    I’ve tried everything I can think of or find online. I’m still getting duplicate and dropped frames, which lead either to audio/video sync issues or to audio dropouts, especially on longer recordings. I need this to be stable for recordings of up to 2.5 hours. This is the Terminal output for a 1.5-hour recording:

    Lapaki:~ Lapaki$ /Users/Lapaki/Desktop/ffmpeg -f avfoundation -video_size 960x540 -pixel_format uyvy422 -framerate ntsc -i "XI:XI" -vf crop=iw-240:ih:120:0 -target ntsc-dvd -aspect 4:3 -q:v 3 -ab 256k /Users/Lapaki/Desktop/FF\ Test/`date +%F`\ `date +%H_%M_%S`.mpg
    ffmpeg version 3.2.3-tessus Copyright (c) 2000-2017 the FFmpeg developers
     built with Apple LLVM version 8.0.0 (clang-800.0.42.1)
     configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl --enable-libass --enable-libbluray --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzmq --enable-version3 --disable-ffplay --disable-indev=qtkit --disable-indev=x11grab_xcb
     libavutil      55. 34.101 / 55. 34.101
     libavcodec     57. 64.101 / 57. 64.101
     libavformat    57. 56.101 / 57. 56.101
     libavdevice    57.  1.100 / 57.  1.100
     libavfilter     6. 65.100 /  6. 65.100
     libswscale      4.  2.100 /  4.  2.100
     libswresample   2.  3.100 /  2.  3.100
     libpostproc    54.  1.100 / 54.  1.100
    Input #0, avfoundation, from 'XI:XI':
     Duration: N/A, start: 610606.984208, bitrate: N/A
       Stream #0:0: Video: rawvideo (UYVY / 0x59565955), uyvy422, 960x540, 29.97 fps, 29.97 tbr, 1000k tbn, 1000k tbc
       Stream #0:1: Audio: pcm_f32le, 48000 Hz, stereo, flt, 3072 kb/s
    Output #0, dvd, to '/Users/Lapaki/Desktop/FF Test/2017-02-15 17_46_28.mpg':
     Metadata:
       encoder         : Lavf57.56.101
       Stream #0:0: Video: mpeg2video (Main), yuv420p, 720x480 [SAR 8:9 DAR 4:3], q=2-31, 6000 kb/s, 29.97 fps, 90k tbn, 29.97 tbc
       Metadata:
         encoder         : Lavc57.64.101 mpeg2video
       Side data:
         cpb: bitrate max/min/avg: 9000000/0/6000000 buffer size: 1835008 vbv_delay: -1
       Stream #0:1: Audio: ac3, 48000 Hz, stereo, fltp, 256 kb/s
       Metadata:
         encoder         : Lavc57.64.101 ac3
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo (native) -> mpeg2video (native))
     Stream #0:1 -> #0:1 (pcm_f32le (native) -> ac3 (native))
    Press [q] to stop, [?] for help
    [swscaler @ 0x7fd315892800] Warning: data is not aligned! This can lead to a speedloss
    frame=   20 fps=0.0 q=3.0 size=     298kB time=00:00:00.65 bitrate=3721.4kbits/s
    frame=   35 fps= 35 q=3.0 size=     498kB time=00:00:01.13 bitrate=3591.2kbits/s
    frame=   50 fps= 33 q=3.0 size=     708kB time=00:00:01.64 bitrate=3519.4kbits/s
    frame=   65 fps= 32 q=3.0 size=     920kB time=00:00:02.16
    ...
    frame=162094 fps= 30 q=3.0 size= 1796936kB time=01:30:08.47 bitrate=2721.7kbits/s
    frame=162109 fps= 30 q=3.0 size= 1797142kB time=01:30:08.98 bitrate=2721.8kbits/s
    frame=162110 fps= 30 q=3.0 Lsize= 1797202kB time=01:30:09.01 bitrate=2721.9kbits/s dup=221 drop=0 speed=   1x
    video:1579050kB audio:168069kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.866632%

    I deleted the middle part (denoted by the "..."), which is just more of the same accumulating status information. By the end there are 221 duplicate frames; this time I didn’t get any dropped frames, but those seem to show up on roughly every other attempt.

    With this code, the audio seems to stay in fairly good sync, but I get little dropouts every 30 seconds to a minute or so. On this recording, there are dropouts at:

    00:00:43, 00:01:19, 00:01:47, 00:02:17, 00:03:18, ...

    I stopped listening there; they happen at the end too, so I assume similarly spaced dropouts occur throughout the file.

    Is there some secret ffmpeg incantation to ensure there are no dropped or duplicate frames when capturing from a live source to DVD-compliant MPEG-2 files?

    When I convert pretty much any type of file to MPEG-2 using -target ntsc-dvd, the speed is something like 10x on this machine, so it seems it should have no problem keeping up with a live source, right?

    I’ve also tried constant bitrate using -b:v 5000k -minrate 5000k -maxrate 5000k -bufsize 2500k, which also doesn’t prevent dropped/duplicate frames.

    I’ve tried separating out the audio and video inputs, which doesn’t solve it.

    I’ve tried using -vsync 0 on the video input, which does seem to stop the duplicates, since the output no longer reports dups/drops, but the audio/video drift further and further out of sync, so that hasn’t worked either.
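
    One more variant I plan to try, in case it helps (an untested sketch: -thread_queue_size is a standard ffmpeg input option and async is a standard option of the aresample audio filter, but the values here are guesses):

    /Users/Lapaki/Desktop/ffmpeg -f avfoundation -thread_queue_size 512 \
      -video_size 960x540 -pixel_format uyvy422 -framerate ntsc -i "XI:XI" \
      -vf crop=iw-240:ih:120:0 -af aresample=async=1000 \
      -target ntsc-dvd -aspect 4:3 -q:v 3 -ab 256k out.mpg

    The larger input queue is meant to stop the capture thread from starving, and the async resampler gently stretches/squeezes audio toward the timestamps instead of letting drift accumulate.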

    Thanks so much for any help. I’ve been testing and testing and searching and searching for weeks...

  • FFmpeg: same input stream to several files (sequentially) [on hold]

    9 September 2016, by Aram

    What I’m doing: breaking (remuxing, i.e. take a packet, write a packet, no encode/decode pipeline) a video into several separate chunks (files) using ffmpeg’s API (targeting only H.264- and HEVC-encoded videos).

    What I want to do: merge them back together with ffmpeg’s concat.

    What is happening: I get artifacts at the points where I break the video.

    What I think is happening: av_interleaved_write_frame is reordering what I believe are the I-frames, B-frames and P-frames in the middle of the video, but because I cut after N packets I sometimes break before a full frame has been sent to the muxer, and then start writing again, which leaves artifacts in the video. How can I know when a "full frame" has been sent to av_interleaved_write_frame so I can avoid these artifacts?
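
    What I think might fix it (an untested sketch against the code below; chunk_frames, frames_written and next_filename are hypothetical variables of mine, not FFmpeg API): rotate files only when the next video packet starts a new GOP, which AV_PKT_FLAG_KEY indicates:

    // Rotate to the next chunk only at a keyframe boundary, so no GOP
    // is split across two files (AV_PKT_FLAG_KEY marks keyframe packets).
    if ((pkt.flags & AV_PKT_FLAG_KEY) && frames_written >= chunk_frames) {
        soft_close();          // finish the current chunk
        reopen(next_filename); // start the next one
        frames_written = 0;
    }
    av_interleaved_write_frame(ofmt_ctx, &pkt);
    frames_written++;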

    Example of the dts and pts I’m getting

    in pts: 668668 pts_time: 27.8612 dts: 668668  dts_time: 27.8612 duration: 1001 duration_time: 0.0417083 stream_index: 0
    out pts: 1668333 pts_time: 0.0834166 dts: 1668333  dts_time: 0.0834166 duration: 1001 duration_time: 5.005e-05 stream_index: 0
    in pts: 669669 pts_time: 27.9029 dts: 669669  dts_time: 27.9029 duration: 1001 duration_time: 0.0417083 stream_index: 0
    out pts: 2502500 pts_time: 0.125125 dts: 2502500  dts_time: 0.125125 duration: 1001 duration_time: 5.005e-05 stream_index: 0
    in pts: 674674 pts_time: 28.1114 dts: 670670  dts_time: 27.9446 duration: 1001 duration_time: 0.0417083 stream_index: 0
    out pts: 6673333 pts_time: 0.333667 dts: 3336667  dts_time: 0.166833 duration: 1001 duration_time: 5.005e-05 stream_index: 0

    Thanks in advance.

    EDIT: CODE

    #include <iostream>
    #include <stdexcept>
    #include <string>

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavutil/timestamp.h>
    }

    using namespace std;

    std::string video = "./trailer.mp4";
    std::string in_path, out_path;
    AVOutputFormat *ofmt = nullptr;
    AVFormatContext *ifmt_ctx = nullptr, *ofmt_ctx = nullptr;
    AVStream *in_stream, *out_stream;
    bool recording = false, header = false;
    bool first_frame = true;
    bool set_base_dts = false;
    int64_t last_dts = 0, base_dts = 0;

    void init(const std::string &input_path,
              const std::string &output_path = std::string());

    void batch_write(uint64_t frames = 1, bool video_only = true);

    // Here I'm trying to reset the dts/pts so when I play a chunk alone
    // I don't get a video that starts n seconds later.
    void reset_dts(AVPacket &pkt) {
        last_dts = pkt.dts;

        // if points overflow
        if (base_dts > pkt.dts) {
            base_dts = 0;
        } else {
            pkt.dts -= base_dts;
            pkt.pts -= base_dts;
        }

        pkt.dts = av_rescale_q_rnd(
            pkt.dts, in_stream->time_base, out_stream->time_base,
            static_cast<AVRounding>(AV_ROUND_PASS_MINMAX | AV_ROUND_NEAR_INF));

        pkt.pts = av_rescale_q_rnd(
            pkt.pts, in_stream->time_base, out_stream->time_base,
            static_cast<AVRounding>(AV_ROUND_PASS_MINMAX | AV_ROUND_NEAR_INF));
    }

    void write_frame(bool video_only) {
        if (!recording) {
            init(in_path, out_path);
        }

        AVPacket pkt;

        while (true) {
            av_read_frame(ifmt_ctx, &pkt);

            // Skip audio packets
            if (video_only && in_stream->index != pkt.stream_index) {
                av_packet_unref(&pkt);
                continue;
            }

            if (first_frame) {
                av_packet_unref(&pkt);
                first_frame = false;
                return;
            }

            out_stream = ofmt_ctx->streams[pkt.stream_index];

            reset_dts(pkt);

            av_interleaved_write_frame(ofmt_ctx, &pkt);
            break;
        }
        av_packet_unref(&pkt);
    }

    void reopen(const string &output_path) {
        out_path = output_path;

        avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_path.c_str());
        ofmt = ofmt_ctx->oformat;

        for (unsigned int i = 0; i < ifmt_ctx->nb_streams; i++) {
            if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
                in_stream = ifmt_ctx->streams[i];
                out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);

                avcodec_copy_context(out_stream->codec, in_stream->codec);

                out_stream->codec->codec_tag = 0;
                if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
                    out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
                break;
            }
        }

        av_dump_format(ofmt_ctx, 0, out_path.c_str(), 1);
        if (!(ofmt->flags & AVFMT_NOFILE)) {
            avio_open(&ofmt_ctx->pb, out_path.c_str(), AVIO_FLAG_WRITE);
        }

        if (!header) {
            avformat_write_header(ofmt_ctx, NULL);
            header = true;
        }

        recording = true;
    }

    void soft_close() {
        if (!recording)
            return;

        av_write_trailer(ofmt_ctx);
        if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE)) {
            avio_closep(&ofmt_ctx->pb);
        }

        avformat_free_context(ofmt_ctx);
        out_stream = in_stream = nullptr;
        ofmt_ctx = nullptr;
        header = false;
        recording = false;
        first_frame = true;
        set_base_dts = true;
        base_dts = last_dts;
    }

    void close() {
        if (!recording)
            return;

        // Close all the output contexts and reset private variables
        soft_close();

        // Free the input context
        avformat_free_context(ifmt_ctx);
        ifmt_ctx = nullptr;
    }

    void batch_write(uint64_t frames, bool video_only) {
        while (frames) {
            write_frame(video_only);
            frames--;
        }
    }

    void init(const std::string &input_path, const string &output_path) {
        if (!input_path.empty()) {
            in_path = input_path;
            out_path = output_path;
        }

        ifmt_ctx = avformat_alloc_context();
        avformat_open_input(&ifmt_ctx, in_path.c_str(), 0, 0);
        avformat_find_stream_info(ifmt_ctx, 0);
        av_dump_format(ifmt_ctx, 0, in_path.c_str(), 0);

        reopen(output_path);
    }

    int main() {
        av_register_all();
        avformat_network_init();

        // Init inputs
        init(video, "out0.mp4");
        for (int64_t i = 0; i < 10; i++) {
            std::string filename = "out" + to_string(i) + ".mp4";
            soft_close();
            reopen(filename);
            batch_write(23); // Around 23 fps avg
        }
        close();
    }

    Compiling with
    g++ test.cpp -o wtf -lavutil -lavformat -lavcodec -fpermissive -std=c++11
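
    Side note: avcodec_copy_context on AVStream->codec is deprecated in newer FFmpeg releases; this is the codecpar-based equivalent I believe is recommended (a sketch, assuming FFmpeg 3.1 or later):

    // Modern stream-copy setup: copy codec parameters instead of the
    // deprecated per-stream AVCodecContext (FFmpeg 3.1+).
    out_stream = avformat_new_stream(ofmt_ctx, nullptr);
    avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);
    out_stream->codecpar->codec_tag = 0;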

  • Best approach to real time http streaming to HTML5 video client

    28 June 2017, by deandob

    I’m really stuck trying to understand the best way to stream real-time output of ffmpeg to an HTML5 client using node.js, as there are a number of variables at play and I don’t have a lot of experience in this space, having spent many hours trying different combinations.

    My use case is:

    1) An IP video camera’s RTSP H.264 stream is picked up by FFMPEG and remuxed into an mp4 container using the following FFMPEG settings in node, output to STDOUT. This is only run on the initial client connection, so that partial content requests don’t try to spawn FFMPEG again.

    liveFFMPEG = child_process.spawn("ffmpeg", [
                   "-i", "rtsp://admin:12345@192.168.1.234:554" , "-vcodec", "copy", "-f",
                   "mp4", "-reset_timestamps", "1", "-movflags", "frag_keyframe+empty_moov",
                   "-"   // output to stdout
                   ],  {detached: false});

    2) I use the node http server to capture the STDOUT and stream that back to the client upon a client request. When the client first connects I spawn the above FFMPEG command line then pipe the STDOUT stream to the HTTP response.

    liveFFMPEG.stdout.pipe(resp);

    I have also used the stream event to write the FFMPEG data to the HTTP response, but it makes no difference:

    liveFFMPEG.stdout.on("data", function(data) {
           resp.write(data);
    });

    I use the following HTTP headers (which are also used and working when streaming pre-recorded files):

    var total = 999999999         // fake a large file
    var partialstart = 0
    var partialend = total - 1

    if (range !== undefined) {
       var parts = range.replace(/bytes=/, "").split("-");
       var partialstart = parts[0];
       var partialend = parts[1];
    }

    var start = parseInt(partialstart, 10);
    var end = partialend ? parseInt(partialend, 10) : total;   // fake a large file if no range request

    var chunksize = (end-start)+1;

    resp.writeHead(206, {
                     'Transfer-Encoding': 'chunked'
                    , 'Content-Type': 'video/mp4'
                    , 'Content-Length': chunksize // large size to fake a file
                    , 'Accept-Ranges': 'bytes ' + start + "-" + end + "/" + total
    });

    3) The client has to use HTML5 video tags.

    I have no problems streaming to the HTML5 client a video file previously recorded with the above FFMPEG command line (but saved to a file instead of STDOUT), using fs.createReadStream with HTTP 206 partial content, so I know the FFMPEG stream is correct; I can even see the live video stream correctly in VLC when connecting to the node HTTP server.

    However, trying to stream live from FFMPEG via node HTTP seems to be a lot harder, as the client displays one frame and then stops. I suspect the problem is that I am not setting up the HTTP connection to be compatible with the HTML5 video client. I have tried a variety of things, like using HTTP 206 (partial content) and 200 responses, and putting the data into a buffer then streaming, with no luck, so I need to go back to first principles to ensure I’m setting this up the right way.

    Here is my understanding of how this should work, please correct me if I’m wrong:

    1) FFMPEG should be set up to fragment the output and use an empty moov (FFMPEG frag_keyframe and empty_moov mov flags). This means the client does not need the moov atom, which typically sits at the end of the file and isn’t available when streaming (there is no end of file); the trade-off is that no seeking is possible, which is fine for my use case.

    2) Even though I use MP4 fragments and an empty moov, I still have to use HTTP partial content, as the HTML5 player will wait until the entire stream is downloaded before playing, which never happens with a live stream, so that is unworkable.

    3) I don’t understand why piping the STDOUT stream to the HTTP response doesn’t work when streaming live yet if I save to a file I can stream this file easily to HTML5 clients using similar code. Maybe it’s a timing issue as it takes a second for the FFMPEG spawn to start, connect to the IP camera and send chunks to node, and the node data events are irregular as well. However the bytestream should be exactly the same as saving to a file, and HTTP should be able to cater for delays.

    4) When checking the network log from the HTTP client while streaming an MP4 file created by FFMPEG from the camera, I see there are 3 client requests: a general GET request for the video, for which the HTTP server returns about 40 Kb, then a partial content request with a byte range for the last 10K of the file, then a final request for the bits in the middle not yet loaded. Maybe the HTML5 client, once it receives the first response, is asking for the last part of the file to load the MP4 moov atom? If that is the case, it won’t work for streaming, as there is no moov atom and no end of file.

    5) When checking the network log while trying to stream live, I get an aborted initial request with only about 200 bytes received, then a re-request, again aborted at 200 bytes, and a third request which is only 2K long. I don’t understand why the HTML5 client would abort the request, as the bytestream is exactly the same one I can successfully use when streaming from a recorded file. It also seems node isn’t sending the rest of the FFMPEG stream to the client, yet I can see the FFMPEG data in the .on event routine, so it is getting to the node HTTP server.

    6) Although I think piping the STDOUT stream to the HTTP response buffer should work, do I have to build an intermediate buffer and stream that will allow the HTTP partial content client requests to work properly, like they do when the server (successfully) reads a file? I think this is the main reason for my problems, but I’m not exactly sure how best to set that up in Node. And I don’t know how to handle a client request for the data at the end of the file, as there is no end of file.

    7) Am I on the wrong track trying to handle 206 partial content requests, and should this work with normal 200 HTTP responses? HTTP 200 responses work fine for VLC, so I suspect the HTML5 video client will only work with partial content requests?
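
    For reference, here is the minimal 200-response version I intend to test (a sketch, assuming the same camera URL and ffmpeg flags as above; port 8080 is arbitrary):

    var http = require('http');
    var child_process = require('child_process');

    http.createServer(function (req, resp) {
        // Plain 200 with no Content-Length: node falls back to chunked
        // transfer encoding, which suits an open-ended live bytestream.
        resp.writeHead(200, { 'Content-Type': 'video/mp4' });
        var ff = child_process.spawn("ffmpeg", [
            "-i", "rtsp://admin:12345@192.168.1.234:554", "-vcodec", "copy",
            "-f", "mp4", "-reset_timestamps", "1",
            "-movflags", "frag_keyframe+empty_moov", "-"
        ]);
        ff.stdout.pipe(resp);
        req.on('close', function () { ff.kill(); }); // stop ffmpeg when the client disconnects
    }).listen(8080);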

    As I’m still learning this stuff, it’s difficult to work through the various layers of this problem (FFMPEG, node, streaming, HTTP, HTML5 video), so any pointers will be greatly appreciated. I have spent hours researching on this site and the net, and I have not come across anyone who has been able to do real-time streaming in node, but I can’t be the first, and I think this should be able to work (somehow!).