
Other articles (70)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Accepted formats

    28 January 2010, by

    The following commands report the formats and codecs supported by the local ffmpeg installation (a programmatic equivalent is sketched after this excerpt):
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step, we (...)
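
    As an aside, the same information is available programmatically. Below is a minimal, hypothetical C++ sketch (an editorial illustration, not part of the original article) using libavcodec's iteration API, av_codec_iterate(), available since FFmpeg 4.0, which mirrors the gist of ffmpeg -codecs:

    // List every codec known to the linked FFmpeg build, with D/E flags
    // for decoder/encoder support, roughly like the `ffmpeg -codecs` output.
    extern "C" {
    #include <libavcodec/avcodec.h>
    }
    #include <cstdio>

    int main()
    {
        void *iter = nullptr;
        const AVCodec *codec;
        while ((codec = av_codec_iterate(&iter)))
            std::printf("%c%c %s\n",
                        av_codec_is_decoder(codec) ? 'D' : '.',
                        av_codec_is_encoder(codec) ? 'E' : '.',
                        codec->name);
        return 0;
    }

    The equivalent of ffmpeg -formats comes from av_muxer_iterate() and av_demuxer_iterate() in libavformat.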

  • Automatic backup of SPIP channels

    1 April 2010, by

    When setting up an open platform, it is important for the hosts to have fairly regular backups in order to deal with any problem that might arise.
    This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a mysql dump (usable in phpmyadmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)

On other sites (10447)

  • Extracting frames from a video does not work correctly [closed]

    13 April 2024, by Al Tilmidh

    Using the ffmpeg (libav) libraries, I am trying to extract frames from a video.mp4 as .jpg files. The problem is that my program crashes when I use the CV_8UC3 parameter, while changing it to CV_8UC1 makes the extracted images come out without color (grayscale). I don't really know what I missed; here is a minimal example reproducing both situations (see the conversion sketch after the code):

    


    #include <opencv2/opencv.hpp>

    extern "C"
    {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    int main()
    {
        AVFormatContext *formatContext = nullptr;

        if (avformat_open_input(&formatContext, "video.mp4", nullptr, nullptr) != 0)
        {
            return -1;
        }

        if (avformat_find_stream_info(formatContext, nullptr) < 0)
        {
            return -1;
        }

        AVPacket packet;
        const AVCodec *codec = nullptr;
        AVCodecContext *codecContext = nullptr;

        int videoStreamIndex = av_find_best_stream(formatContext, AVMEDIA_TYPE_VIDEO, -1, -1, &codec, 0);
        if (videoStreamIndex < 0)
        {
            return -1;
        }

        codecContext = avcodec_alloc_context3(codec);
        avcodec_parameters_to_context(codecContext, formatContext->streams[videoStreamIndex]->codecpar);

        if (avcodec_open2(codecContext, codec, nullptr) < 0)
        {
            return -1;
        }

        AVFrame *frame = av_frame_alloc();

        while (av_read_frame(formatContext, &packet) >= 0)
        {
            if (packet.stream_index == videoStreamIndex)
            {
                int response = avcodec_send_packet(codecContext, &packet);

                if (response < 0)
                {
                    break;
                }

                while (response >= 0)
                {
                    response = avcodec_receive_frame(codecContext, frame);
                    if (response == AVERROR(EAGAIN))
                    {
                        // NO FRAMES
                        break;
                    }
                    else if (response == AVERROR_EOF)
                    {
                        // END OF FILE
                        break;
                    }
                    else if (response < 0)
                    {
                        break;
                    }

                    //cv::Mat frameMat(frame->height, frame->width, CV_8UC3, frame->data[0]); // CV_8UC3 → THE PROGRAM CRASHES
                    cv::Mat frameMat(frame->height, frame->width, CV_8UC1, frame->data[0]); // CV_8UC1 → WORKS BUT IMAGES ARE GRAYSCALE
                    cv::imwrite("frame_" + std::to_string(frame->pts) + ".jpg", frameMat);
                }
            }

            av_packet_unref(&packet);
        }

        av_frame_free(&frame);
        avcodec_free_context(&codecContext);
        avformat_close_input(&formatContext);

        return 0;
    }

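    For context, the crash is consistent with the decoded frames being planar YUV (typically AV_PIX_FMT_YUV420P for h264): frame->data[0] then holds only the luma plane, so a CV_8UC3 Mat reads past the buffer, while CV_8UC1 shows that single plane as grayscale. Here is a minimal sketch of the usual remedy, converting to packed BGR with libswscale before wrapping the pixels in a Mat; variable names follow the code above, and creating the context per frame is for brevity only:

    extern "C" {
    #include <libswscale/swscale.h>
    }

    // Inside the receive loop, after avcodec_receive_frame() succeeds:
    SwsContext *sws = sws_getContext(
        frame->width, frame->height, (AVPixelFormat)frame->format, // decoder's pixel format
        frame->width, frame->height, AV_PIX_FMT_BGR24,             // packed BGR, the order OpenCV expects
        SWS_BILINEAR, nullptr, nullptr, nullptr);

    cv::Mat bgrMat(frame->height, frame->width, CV_8UC3);
    uint8_t *dstData[4] = { bgrMat.data, nullptr, nullptr, nullptr };
    int dstLinesize[4] = { static_cast<int>(bgrMat.step), 0, 0, 0 };

    sws_scale(sws, frame->data, frame->linesize, 0, frame->height, dstData, dstLinesize);
    sws_freeContext(sws); // in real code, create the context once and reuse it

    cv::imwrite("frame_" + std::to_string(frame->pts) + ".jpg", bgrMat);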

  • Playing RTSP in WPF application with low latency using FFMPEG / FFMediaElement (FFME)

    22 March 2019, by Paboka

    I’m trying to use the FFMediaElement component (FFME, a WPF MediaElement replacement based on FFmpeg) to play RTSP live video in my WPF application.

    I have a good connection to my camera and I want to play its stream with the minimum possible latency.

    I’ve reduced the latency by changing ProbeSize to its minimum value:

    private void Media_OnMediaInitializing(object Sender, MediaInitializingRoutedEventArgs e)
    {
        e.Configuration.GlobalOptions.ProbeSize = 32;
    }

    But I still have about one second of latency from the very beginning of the stream: when I start playing, I have to wait a second before the video appears, and the picture then stays one second behind.

    I’ve also tried changing the following parameters (see the libavformat sketch at the end of this post):

    e.Configuration.GlobalOptions.EnableReducedBuffering = true;
    e.Configuration.GlobalOptions.FlagNoBuffer = true;
    e.Configuration.GlobalOptions.MaxAnalyzeDuration = TimeSpan.Zero;

    but that had no effect.

    I measured the time interval between FFmpeg output lines (the number in the first column is the time elapsed since the previous line, in ms):

    ----     OpenCommand: Entered
      39     FFInterop.Initialize: FFmpeg v4.0
       0     EVENT START: MediaInitializing
       0     EVENT DONE : MediaInitializing
     379     EVENT START: MediaOpening
       0     EVENT DONE : MediaOpening
       0     COMP VIDEO: Start Offset:      0,000; Duration:        N/A
      41     SYNC-BUFFER: Started.
     609     SYNC-BUFFER: Finished. Clock set to 1534932751,634
       0     EVENT START: MediaOpened
       0     EVENT DONE : MediaOpened
       0     EVENT START: BufferingStarted
       0     EVENT DONE : BufferingStarted
       0     OpenCommand: Completed
       0     V BLK: 1534932751,634 | CLK: 1534932751,634 | DFT:    0 | IX:   0 | PQ:     0,0k | TQ:     0,0k
       0     Command Queue (1 commands): Before ProcessNext
       0        Play - ID: 404 Canceled: False; Completed: False; Status: WaitingForActivation; State:
      94     V BLK: 1534932751,675 | CLK: 1534932751,699 | DFT:   24 | IX:   1 | PQ:     0,0k | TQ:     0,0k

    So the "sync-buffering" step takes most of the time.

    Is there an FFmpeg parameter that allows reducing the size of this buffer?
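
    For reference, the FFME options above map onto standard libavformat AVOptions, which plain FFmpeg code passes to avformat_open_input() through an AVDictionary. A hedged C++ sketch of that equivalent (the option names are standard libavformat ones; whether FFME forwards each of them identically is an assumption):

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>
    }

    // Open a URL with the lowest-latency demuxer settings discussed above.
    int open_low_latency(const char *url, AVFormatContext **fmt)
    {
        AVDictionary *opts = nullptr;
        av_dict_set(&opts, "probesize", "32", 0);      // 32 is the minimum; matches ProbeSize above
        av_dict_set(&opts, "analyzeduration", "0", 0); // MaxAnalyzeDuration = TimeSpan.Zero
        av_dict_set(&opts, "fflags", "nobuffer", 0);   // FlagNoBuffer
        int ret = avformat_open_input(fmt, url, nullptr, &opts);
        av_dict_free(&opts); // entries the demuxer did not consume remain here
        return ret;
    }

    For RTSP in particular, forcing TCP transport (av_dict_set(&opts, "rtsp_transport", "tcp", 0)) sometimes trades a little latency for stability, but that is a per-stream judgment call.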

  • Use ffmpeg libraries to convert stream formats

    17 September 2021, by Syrinx

    I'm attempting to write a small program and link it to a minimal set of ffmpeg libraries, like libavformat and whatever other libraries I need.

    I am looking for documentation to get me started, or maybe a quick fix to the example program I am using.

    I know ffmpeg (the project) provides example programs to help developers get started. I'm using the transcoding example, as it's close to my end goal, but it exits during initialization with an audio-related error.

    Here I am using the transcoding example program that comes with ffmpeg v4.4, on Ubuntu 18.04. My input source has one video channel (h264) and one audio channel (pcm_mulaw).

    $ LD_LIBRARY_PATH=../../dist/lib ./transcoding rtsp://ip-camera/stream out.flv
    ...
      Stream #0:0: Video: h264, yuv420p, 1280x720, q=2-31, 20 tbn
      Stream #0:1: Audio: pcm_mulaw, 8000 Hz, 0 channels, s16
    [auto_resampler_0 @ 0x55da787fa140] [SWR @ 0x55da787fa5c0] Rematrix is needed between mono and 0 channels but there is not enough information to do it
    [auto_resampler_0 @ 0x55da787fa140] Failed to configure output pad on auto_resampler_0

    In libswresample/swresample.c:

    320      if ((!s->out_ch_layout || !s->in_ch_layout) && s->used_ch_count != s->out.ch_count && !s->rematrix_custom) {
    321          av_log(s, AV_LOG_ERROR, "Rematrix is needed between %s and %s "
    322                 "but there is not enough information to do it\n", l1, l2);
    323          ret = AVERROR(EINVAL);
    324          goto fail;
    325      }

    I'd really like to make the transcoding example program work (fix it, or maybe use it appropriately if I am misunderstanding something). But short of that, where should I look for documentation about using the ffmpeg libraries?

    I don't even care about the audio; if I can just disable it, I'd be happy with that solution. I tried tracing the "-an" option through the ffmpeg (the program) source to see how it disables audio, but the option handling is a mess and I can't distinguish the parts of the code I need from all the noise.
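
    For what it's worth, the log suggests the pcm_mulaw stream is reported with no channel layout (note the "0 channels" in the stream line above), so the auto-inserted resampler cannot build a rematrix. Below is a hypothetical sketch of two workarounds against the FFmpeg 4.4 API; the identifiers (dec_ctx, ifmt_ctx) follow the transcoding example, and this is an untested assumption, not a confirmed patch:

    /* In the example's open_input_file(), after avcodec_parameters_to_context()
     * and before avcodec_open2(): give audio decoders a default layout when the
     * demuxer left it unset (av_get_default_channel_layout is the pre-5.1 call). */
    if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO && !dec_ctx->channel_layout)
        dec_ctx->channel_layout = av_get_default_channel_layout(dec_ctx->channels);

    /* Or, to emulate "-an": skip non-video streams when opening decoders and
     * filters, and again when routing packets in the main loop. */
    if (ifmt_ctx->streams[i]->codecpar->codec_type != AVMEDIA_TYPE_VIDEO)
        continue;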

    ffmpeg has web pages like this that aren't useful at all. There is documentation in the source code that looks like it should be viewed as HTML, but I don't see it exported anywhere. "make doc" generates a very small set of man pages that are insufficient to get me started.
