
Other articles (95)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several plugins in addition to those of the channels in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-verification API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core:
    autoriser_auteur_modifier(), so that visitors are able to edit their information on the authors page

On other sites (6714)

  • Playing RTSP in WPF application with low latency using FFMPEG / FFMediaElement (FFME)

    22 March 2019, by Paboka

    I’m trying to use the FFMediaElement (FFME, a WPF MediaElement replacement based on FFmpeg) component to play RTSP live video in my WPF application.

    I have a good connection to my camera and I want to play its stream with the lowest achievable latency.

    I’ve reduced the latency by changing ProbeSize to its minimum value:

    private void Media_OnMediaInitializing(object Sender, MediaInitializingRoutedEventArgs e)
    {
        e.Configuration.GlobalOptions.ProbeSize = 32;
    }

    But I still have about one second of latency from the very beginning of the stream: when I start playing, I have to wait about a second for the video to appear, and after that the picture stays about one second behind.

    I’ve also tried changing the following parameters:

    e.Configuration.GlobalOptions.EnableReducedBuffering = true;
    e.Configuration.GlobalOptions.FlagNoBuffer = true;
    e.Configuration.GlobalOptions.MaxAnalyzeDuration = TimeSpan.Zero;

    but they had no effect.

    I measured the time interval between FFmpeg output lines (the number in the first column is the time elapsed since the previous line, in ms):

    ----     OpenCommand: Entered
      39     FFInterop.Initialize: FFmpeg v4.0
       0     EVENT START: MediaInitializing
       0     EVENT DONE : MediaInitializing
     379     EVENT START: MediaOpening
       0     EVENT DONE : MediaOpening
       0     COMP VIDEO: Start Offset:      0,000; Duration:        N/A
      41     SYNC-BUFFER: Started.
     609     SYNC-BUFFER: Finished. Clock set to 1534932751,634
       0     EVENT START: MediaOpened
       0     EVENT DONE : MediaOpened
       0     EVENT START: BufferingStarted
       0     EVENT DONE : BufferingStarted
       0     OpenCommand: Completed
       0     V BLK: 1534932751,634 | CLK: 1534932751,634 | DFT:    0 | IX:   0 | PQ:     0,0k | TQ:     0,0k
       0     Command Queue (1 commands): Before ProcessNext
       0        Play - ID: 404 Canceled: False; Completed: False; Status: WaitingForActivation; State:
      94     V BLK: 1534932751,675 | CLK: 1534932751,699 | DFT:   24 | IX:   1 | PQ:     0,0k | TQ:     0,0k

    So the "sync-buffering" step takes most of the time.

    Is there any FFmpeg parameter that would let me reduce the size of this buffer?
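
    For reference, FFME's GlobalOptions appear to be thin wrappers over libavformat's generic format options, so the same knobs can be expressed at the libav level. Below is a minimal C++ sketch of opening a low-latency input with those options; probesize, analyzeduration and fflags=nobuffer are standard libavformat options and rtsp_transport belongs to FFmpeg's RTSP demuxer, but the mapping to FFME property names is an assumption here, not documented FFME API.

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>
    }

    // Sketch: open `url` with the usual low-latency demuxer options.
    static int open_low_latency_input(AVFormatContext **fmt, const char *url)
    {
        AVDictionary *opts = nullptr;
        av_dict_set(&opts, "probesize", "32", 0);       // like GlobalOptions.ProbeSize = 32
        av_dict_set(&opts, "analyzeduration", "0", 0);  // like MaxAnalyzeDuration = TimeSpan.Zero
        av_dict_set(&opts, "fflags", "nobuffer", 0);    // like FlagNoBuffer = true
        av_dict_set(&opts, "rtsp_transport", "tcp", 0); // RTSP only: interleaved TCP avoids UDP jitter
        int ret = avformat_open_input(fmt, url, nullptr, &opts);
        av_dict_free(&opts); // entries left in `opts` were not consumed by the demuxer
        return ret;
    }

    If those options are already in effect (the log suggests ProbeSize is), the ~600 ms spent in SYNC-BUFFER points at the player's own buffering stage rather than at the demuxer.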

  • How to extract motion vectors from h264 without a full decode on the CPU

    25 September 2020, by Adrian May

    I'm trying to use my nose as a pointing device. The plan is to encode the video stream from a webcam pointed at my face as h264 or the like, get the motion vectors, cook the numbers a bit and chuck them into /dev/uinput to make the mouse pointer move about. The uinput bit was easy.

    This has to work with zero discernible latency. This, for instance:

    #!/bin/bash
    [ -p pipe.mkv ] || mkfifo pipe.mkv
    ffmpeg -y -rtbufsize 1M -s 640x360 -vcodec mjpeg -i /dev/video0 -c h264_nvenc pipe.mkv &
    ffplay -flags2 +export_mvs -vf codecview=mv=pf+bf+bb pipe.mkv

    shows that the vectors are there, but with a latency of several seconds, which is unusable for a mouse. I know that the first ffmpeg step is working very fast by using the GPU, so either the pipe or the h264 decode in the second step is introducing the latency.

    I tried MV Tractus (same as mpegflow, I think) in a similar pipe arrangement and it was also very slow. They do a full h264 decode on the CPU, and I think that's the problem, because I can see them putting a lot of load on one CPU. If the pipe had caused the delay by buffering badly, the CPU wouldn't have been loaded. I guess ffplay also did its decoding on the CPU and I couldn't persuade it not to, but it only wants to draw arrows, which are no use to me.

    I think there are several approaches, and I'd like advice on which would be best, or whether there's something even better that I don't know about. I could:

    • Decode in hardware and get the motion vectors. So far this has failed: I tried combining ffmpeg's extract_mvs.c and hw_decode.c samples, but no motion vectors turn up. vdpau is the only decoder I got working on my Linux box. I have an NVIDIA GPU.

    • Do a minimal parse of the h264 to fish out only the motion vectors, ignoring all the other data. I think this would mean putting some kind of "motion only" option into libav's parser, but I'm not at all familiar with that code.

    • Find some other h264 parsing library that has such an option and also unpacks the container.

    • Forget about hardware-accelerated encoding and use a stripped-down encoder that produces only the motion vectors, on either the CPU or the GPU. I suspect this would be slow, because I think calculating the motion vectors is the hardest part of the algorithm.
    I'm tending towards the second option but I need some help figuring out where in the libav code to do it.

  • Extracting frames from a video does not work correctly [closed]

    13 April 2024, by Al Tilmidh

    Using the libav libraries and FFmpeg, I'm trying to extract frames from a video.mp4 as .jpg files. The problem is that my program crashes when I use the CV_8UC3 parameter; changing it to CV_8UC1 works, but the extracted images come out without color (grayscale). I don't really know what I missed. Here is a minimal example reproducing the two situations:

    #include <opencv2/opencv.hpp>

    extern "C"
    {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    int main()
    {
        AVFormatContext *formatContext = nullptr;

        if (avformat_open_input(&formatContext, "video.mp4", nullptr, nullptr) != 0)
        {
            return -1;
        }

        if (avformat_find_stream_info(formatContext, nullptr) < 0)
        {
            return -1;
        }

        AVPacket packet;
        const AVCodec *codec = nullptr;
        AVCodecContext *codecContext = nullptr;

        int videoStreamIndex = av_find_best_stream(formatContext, AVMEDIA_TYPE_VIDEO, -1, -1, &codec, 0);
        if (videoStreamIndex < 0)
        {
            return -1;
        }

        codecContext = avcodec_alloc_context3(codec);
        avcodec_parameters_to_context(codecContext, formatContext->streams[videoStreamIndex]->codecpar);

        if (avcodec_open2(codecContext, codec, nullptr) < 0)
        {
            return -1;
        }

        AVFrame *frame = av_frame_alloc();

        while (av_read_frame(formatContext, &packet) >= 0)
        {
            if (packet.stream_index == videoStreamIndex)
            {
                int response = avcodec_send_packet(codecContext, &packet);

                if (response < 0)
                {
                    break;
                }

                while (response >= 0)
                {
                    response = avcodec_receive_frame(codecContext, frame);
                    if (response == AVERROR(EAGAIN))
                    {
                        // NO FRAMES
                        break;
                    }
                    else if (response == AVERROR_EOF)
                    {
                        // END OF FILE
                        break;
                    }
                    else if (response < 0)
                    {
                        break;
                    }

                    //cv::Mat frameMat(frame->height, frame->width, CV_8UC3, frame->data[0]); // CV_8UC3 → THE PROGRAM CRASHES
                    cv::Mat frameMat(frame->height, frame->width, CV_8UC1, frame->data[0]); // CV_8UC1 → WORKS BUT IMAGES ARE GRAYSCALE
                    cv::imwrite("frame_" + std::to_string(frame->pts) + ".jpg", frameMat);
                }
            }

            av_packet_unref(&packet);
        }

        av_frame_free(&frame);
        avcodec_free_context(&codecContext);
        avformat_close_input(&formatContext);

        return 0;
    }
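
    The crash is consistent with the decoded frames being in a planar YUV format (typically AV_PIX_FMT_YUV420P for H.264 video): frame->data[0] then holds only the 8-bit luma plane, so wrapping it in a 3-channel CV_8UC3 Mat reads past the end of the buffer, while CV_8UC1 "works" because it shows just that grayscale plane. Below is a sketch of the usual fix, converting the frame to BGR24 with libswscale before building the Mat; frame_to_bgr is a hypothetical helper written for this example, not part of either library.

    extern "C" {
    #include <libswscale/swscale.h>
    #include <libavutil/frame.h>
    }
    #include <opencv2/opencv.hpp>

    // Convert a decoded AVFrame (any pixel format) into a 3-channel BGR cv::Mat.
    static cv::Mat frame_to_bgr(const AVFrame *frame)
    {
        cv::Mat bgr(frame->height, frame->width, CV_8UC3);

        // A real implementation would cache this context (sws_getCachedContext).
        SwsContext *sws = sws_getContext(
            frame->width, frame->height, (AVPixelFormat)frame->format,
            frame->width, frame->height, AV_PIX_FMT_BGR24,
            SWS_BILINEAR, nullptr, nullptr, nullptr);

        uint8_t *dstData[4] = { bgr.data, nullptr, nullptr, nullptr };
        int dstLinesize[4] = { static_cast<int>(bgr.step), 0, 0, 0 };

        sws_scale(sws, frame->data, frame->linesize,
                  0, frame->height, dstData, dstLinesize);
        sws_freeContext(sws);
        return bgr;
    }

    In the decode loop, cv::Mat frameMat = frame_to_bgr(frame); would then replace the direct constructor call, and imwrite should save color frames.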