
Other articles (41)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Submit enhancements and plugins

    13 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the MediaSPIP core will be considered.
    You can use the development discussion list to ask for help with creating a plugin. Since MediaSPIP is based on SPIP, you can also use the SPIP discussion list, SPIP-Zone.

On other sites (8796)

  • vp8 : make mv_min/max thread-local if using partition threading.

    5 April 2017, by Ronald S. Bultje
    

    Fixes tsan warnings like this in fate-vp8-test-vector-007:

    WARNING: ThreadSanitizer: data race (pid=65909)
    Write of size 4 at 0x7d8c0000e088 by thread T1 :
    #0 vp8_decode_mb_row_sliced vp8.c:2519 (ffmpeg:x86_64+0x100995ede)
    [..]
    Previous write of size 4 at 0x7d8c0000e088 by thread T2 :
    #0 vp8_decode_mb_row_sliced vp8.c:2519 (ffmpeg:x86_64+0x100995ede)

    • [DH] libavcodec/vp8.c
    • [DH] libavcodec/vp8.h
  • streaming FLV to RTMP with FFMpeg using H264 codec and C++ API to flv.js

    5 February 2018, by Jan Kuri

    I would like to stream live video from a webcam with OpenCV, encode it with the H264 codec, convert it to FLV, stream it over an RTMP server, and play the stream in the browser with flv.js. Basically I have everything working except that I cannot read the stream in flv.js. I can open the stream with ffplay, so I think at least most things are set up correctly.

    My current implementation:

    #include <iostream>
    #include <vector>

    #include <opencv2/highgui.hpp>
    #include <opencv2/video.hpp>

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/avutil.h>
    #include <libavutil/pixdesc.h>
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    }

    void stream_video(double width, double height, int fps, int camID)
    {
     av_register_all();
     avformat_network_init();

     const char *output = "rtmp://localhost/live/stream";
     const AVRational dst_fps = {fps, 1};
     int ret;

     // initialize video capture device
     cv::VideoCapture cam(camID);
     if (!cam.isOpened())
     {
       std::cout << "Failed to open video capture device!" << std::endl;
       exit(1);
     }

     cam.set(cv::CAP_PROP_FRAME_WIDTH, width);
     cam.set(cv::CAP_PROP_FRAME_HEIGHT, height);

     // allocate cv::Mat with extra bytes (required by AVFrame::data)
     std::vector<uint8_t> imgbuf(height * width * 3 + 16);
     cv::Mat image(height, width, CV_8UC3, imgbuf.data(), width * 3);

     // open output format context
     AVFormatContext *outctx = nullptr;
     ret = avformat_alloc_output_context2(&outctx, nullptr, "flv", output);
     if (ret < 0)
     {
       std::cout << "Could not allocate output format context!" << std::endl;
       exit(1);
     }

     // open output IO context
     if (!(outctx->oformat->flags & AVFMT_NOFILE))
     {
       ret = avio_open2(&outctx->pb, output, AVIO_FLAG_WRITE, nullptr, nullptr);
       if (ret < 0)
       {
         std::cout << "Could not open output IO context!" << std::endl;
         exit(1);
       }
     }

     // create new video stream
     AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
     AVStream *strm = avformat_new_stream(outctx, codec);
     AVCodecContext *avctx = avcodec_alloc_context3(codec);

     avctx->codec_id = AV_CODEC_ID_H264;
     avctx->width = width;
     avctx->height = height;
     avctx->pix_fmt = AV_PIX_FMT_YUV420P;
     avctx->framerate = dst_fps;
     avctx->time_base = av_inv_q(dst_fps);

     ret = avcodec_parameters_from_context(strm->codecpar, avctx);
     if (ret < 0)
     {
       std::cout << "Could not initialize stream codec parameters!" << std::endl;
       exit(1);
     }

     AVDictionary *opts = nullptr;
     av_dict_set(&opts, "preset", "superfast", 0);
     av_dict_set(&opts, "tune", "zerolatency", 0);

     // open video encoder
     ret = avcodec_open2(avctx, codec, &opts);
     if (ret < 0)
     {
       std::cout << "Could not open video encoder!" << std::endl;
       exit(1);
     }

     // initialize sample scaler
     SwsContext *swsctx = sws_getContext(width, height, AV_PIX_FMT_BGR24, width, height, avctx->pix_fmt, SWS_BICUBIC, nullptr, nullptr, nullptr);
     if (!swsctx)
     {
       std::cout << "Could not initialize sample scaler!" << std::endl;
       exit(1);
     }

     // allocate frame buffer for encoding
     AVFrame *frame = av_frame_alloc();

     std::vector<uint8_t> framebuf(av_image_get_buffer_size(avctx->pix_fmt, width, height, 1));
     av_image_fill_arrays(frame->data, frame->linesize, framebuf.data(), avctx->pix_fmt, width, height, 1);
     frame->width = width;
     frame->height = height;
     frame->format = static_cast<int>(avctx->pix_fmt);

     // write header
     ret = avformat_write_header(outctx, nullptr);
     if (ret < 0)
     {
       std::cout << "Could not write header!" << std::endl;
       exit(1);
     }

     // encoding loop
     int64_t frame_pts = 0;
     unsigned nb_frames = 0;
     bool end_of_stream = false;

     do
     {
       nb_frames++;

       if (!end_of_stream)
       {
         cam >> image;
         // convert cv::Mat to AVFrame.
         const int stride[] = {static_cast<int>(image.step[0])};
         sws_scale(swsctx, &image.data, stride, 0, image.rows, frame->data, frame->linesize);
         frame->pts = frame_pts++;
       }
       // encode video frame.
       AVPacket pkt = {0};
       av_init_packet(&pkt);

       ret = avcodec_send_frame(avctx, frame);
       if (ret < 0)
       {
         std::cout << "Error sending frame to codec context!" << std::endl;
         exit(1);
       }

       ret = avcodec_receive_packet(avctx, &pkt);
       if (ret < 0)
       {
         std::cout << "Error receiving packet from codec context!" << std::endl;
         exit(1);
       }

       // rescale packet timestamp.
       av_packet_rescale_ts(&pkt, avctx->time_base, strm->time_base);
       // write packet.
       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = AV_NOPTS_VALUE;
       av_interleaved_write_frame(outctx, &pkt);

       std::cout << " Frames: " << nb_frames << '\r' << std::flush;

       av_packet_unref(&pkt);
     } while (!end_of_stream);

     av_write_trailer(outctx);
     std::cout << nb_frames << " frames encoded" << std::endl;

     av_frame_free(&frame);
     avcodec_close(avctx);
     avio_close(outctx->pb);
     avformat_free_context(outctx);
    }

    int main()
    {
     double width = 1280, height = 720, fps = 30;
     int camID = 1;

     stream_video(width, height, fps, camID);

     return 0;
    }

    As I said before, I can successfully open the stream with ffplay rtmp://localhost/live/stream or ffplay http://localhost:8000/live/stream.flv, but I cannot open the stream with the flv.js player inside the browser; I get these errors:

    flv: Invalid AVCDecoderConfigurationRecord, lack of data!
    [FLVDemuxer] > Malformed Nalus near timestamp 0, NaluSize > DataSize!
    [FLVDemuxer] > Malformed Nalus near timestamp 1, NaluSize > DataSize!
    [FLVDemuxer] > Malformed Nalus near timestamp 2, NaluSize > DataSize!
    ....

    I would really appreciate any help with fixing the stream to work properly with flv.js. If I stream video like ffmpeg -re -i input.mp4 -c copy -f flv rtmp://localhost/live/stream, I can open the stream in flv.js without any issues, so that is roughly what I would like to achieve in code.
    I also put my code in a GitHub repository here if someone would like to compile the code and check it.

  • swscale/aarch64 : use multiply accumulate and shift-right narrow

    9 December 2019, by Sebastian Pop
    

    This patch rewrites the innermost loop of ff_yuv2planeX_8_neon to avoid zips and
    horizontal adds by using fused multiply adds. The patch also uses ld1r to load
    one element and replicate it across all lanes of the vector. The patch also
    improves the clipping code by removing the shift right instructions and
    performing the shift with the shift-right narrow instructions.

    I see an 8% difference on an m6g instance with neoverse-n1 CPUs:
    $ ffmpeg -nostats -f lavfi -i testsrc2=4k:d=2 -vf bench=start,scale=1024x1024,bench=stop -f null -
    before: t:0.014015 avg:0.014096 max:0.015018 min:0.013971
    after:  t:0.012985 avg:0.013013 max:0.013996 min:0.012818

    Tested with `make check` on aarch64-linux.

    Signed-off-by: Sebastian Pop <spop@amazon.com>
    Reviewed-by: Clément Bœsch <u@pkh.me>
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

    • [DH] libswscale/aarch64/output.S