Other articles (21)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images (png, gif, jpg, bmp and more); audio (MP3, Ogg, Wav and more); video (AVI, MP4, OGV, mpg, mov, wmv and more); text, code and other data (OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first MediaSPIP stable release.
    Its official release date is June 21, 2013, and it is announced here.
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

On other sites (5614)

  • Fluent-ffmpeg: Output with label 'screen0' does not exist in any defined filter graph, or was already used elsewhere

    28 April 2019, by mandaputtra

    I’m trying to take video frames one by one using fluent-ffmpeg, to create video masking and a kind of experimental video editor. But when I do that with screenshots() it fails with ffmpeg exited with code 1: Output with label 'screen0' does not exist in any defined filter graph, or was already used elsewhere.

    Here is an example of the array I use for the timestamps: ["0.019528","0.05226","0.102188","0.13635","0.152138","0.186013","0.236149" ...]

    // read the JSON file that contains the timestamps array
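    // ('command' is a fluent-ffmpeg command instance and 'config' a paths object,
    //  both created outside this snippet)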
    fs.readFile(`${config.videoProc}/timestamp/1.json`, 'utf8', async (err, data) => {
     if (err) throw new Error(err.message);
     const timestamp = JSON.parse(data);
     // screenshot happens here
     // loop until there is nothing left in the array...
     function takeFrame() {
       command.input(`${config.publicPath}/static/video/1.mp4`)
         .on('error', error => console.log(error.message))
         .on('end', () => {
           if (timestamp.length > 0) {
             // screenshot again ...
             takeFrame();
           } else {
             console.log('Process ended');
           }
         })
         .noAudio()
         .screenshots({
           timestamps: timestamp.splice(0, 100),
           filename: '%i.png',
           folder: '../video/img',
           size: '320x240',
         });
     }
     // call the function
     takeFrame();
    });

    My expected result is that I can generate all 600 screenshots for the one video, but the actual result is the error ffmpeg exited with code 1: Output with label 'screen0' does not exist in any defined filter graph, or was already used elsewhere, and only 100 screenshots are generated.

    [UPDATE]

    Using -filter_complex as mentioned here doesn’t work:

    ffmpeg exited with code 1: Error initializing complex filters.
    Invalid argument

    [UPDATE]

    The command line being generated:

    ffmpeg -ss 0.019528 -i D:\Latihan\video-cms-core\public/static/video/1.mp4 -y -filter_complex scale=w=320:h=240[size0];[size0]split=1[screen0] -an -vframes 1 -map [screen0] ..\video\img\1.png
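
    A possible direction (an assumption on my part, not something stated in the question): the error tends to appear when a single fluent-ffmpeg command object is reused, because every screenshots() call adds another scale/split chain whose output labels collide with the previous run. A minimal sketch that builds a fresh command per batch, assuming the same config paths as above and a require of fluent-ffmpeg:

     const ffmpeg = require('fluent-ffmpeg');

     // Create a brand-new command for every batch of 100 timestamps so the
     // previous run's filter graph (and its 'screen0' label) is not carried over.
     function takeFrame(timestamps) {
       if (timestamps.length === 0) {
         console.log('Process ended');
         return;
       }
       ffmpeg(`${config.publicPath}/static/video/1.mp4`)
         .noAudio()
         .on('error', error => console.log(error.message))
         .on('end', () => takeFrame(timestamps)) // recurse until the array is empty
         .screenshots({
           timestamps: timestamps.splice(0, 100), // consume the next 100 entries
           filename: '%i.png', // note: '%i' numbers screenshots within a batch, so names may need an offset
           folder: '../video/img',
           size: '320x240',
         });
     }
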
  • Decoding YUYV422 raw images using FFmpeg

    12 April 2019, by user373864q

    I have a collection of sequential YUYV422 raw images that I wish to turn into a video. The problem seems to occur when the frame is created in avcodec_receive_frame: the frame only contains one channel instead of the four of the YUYV format. This results in Input picture width <640> is greater than stride (0), since only the zeroth index of data and linesize is set in the frame. I don’t know if this is an ffmpeg bug or a misconfiguration on my part.

    #include "icsFfmpegImageDecoder.h"
    #include <stdexcept>

    ImageDecoder::ImageDecoder(std::string filename)
    {
       AVInputFormat* iformat;
       if (!(iformat = av_find_input_format("image2")))
           throw std::invalid_argument(std::string("input Codec not found\n"));

       this->fctx = NULL;
        if (avformat_open_input(&this->fctx, filename.c_str(), iformat, NULL) < 0)
       {
           std::string error = "Failed to open input file ";
           error += filename;
           error += "\n";
           throw std::invalid_argument(error);
       }
    #ifdef LIB_AVFORMAT_STREAM_CODEC_DEPRECATED
       if (!(this->codec = avcodec_find_decoder(this->fctx->streams[0]->codecpar->codec_id)))
           throw std::invalid_argument(std::string("Failed to find codec\n"));

       if (!(this->cctx = avcodec_alloc_context3(this->codec)))
           throw std::invalid_argument(std::string("could not create image read context codec"));

        if (avcodec_parameters_to_context(this->cctx, this->fctx->streams[0]->codecpar) < 0)
            throw std::invalid_argument(std::string("could not get codec context from stream"));
    #else
       this->cctx = this->fctx->streams[0]->codec;
       if (!(this->codec = avcodec_find_decoder(this->cctx->codec_id)))
           throw std::invalid_argument(std::string("Failed to find codec\n"));
    #endif

       if (this->cctx->codec_id == AV_CODEC_ID_RAWVIDEO) {
           // TODO Make Dynamic
           this->cctx->pix_fmt = AV_PIX_FMT_YUYV422 ;
           this->cctx->height = 800;
           this->cctx->width = 1280;
       }

        if (avcodec_open2(this->cctx, this->codec, NULL) < 0)
           throw std::invalid_argument(std::string("Failed to open codec\n"));

    #ifdef USING_NEW_AVPACKET_SETUP
       if (!(this->pkt = av_packet_alloc()))
           throw std::invalid_argument(std::string("Failed to alloc frame\n"));
    #else
       this->pkt = new AVPacket();
       av_init_packet(this->pkt);
    #endif
       read_file();
    }

    ImageDecoder::~ImageDecoder()
    {
       avcodec_close(this->cctx);
        avformat_close_input(&this->fctx);
    #ifdef USING_NEW_AVPACKET_SETUP
        av_packet_free(&this->pkt);
    #else
       av_free_packet(this->pkt);
       delete this->pkt;
    #endif
    }

    void ImageDecoder::read_file()
    {
        if (av_read_frame(this->fctx, this->pkt) < 0)
           throw std::invalid_argument(std::string("Failed to read frame from file\n"));

       if (this->pkt->size == 0)  
           this->ret = -1;
    }

    #ifdef LIB_AVCODEC_USE_SEND_RECEIVE_NOTATION
    void ImageDecoder::send_next_packet() {

        if ((this->ret = avcodec_send_packet(this->cctx, this->pkt)) < 0)
           throw std::invalid_argument("Error sending a packet for decoding\n");
    }

    bool ImageDecoder::receive_next_frame(AVFrame* frame)
    {
       if (this->ret >= 0)
       {
           this->ret = avcodec_receive_frame(this->cctx, frame);
           if (this->ret == AVERROR_EOF)
               return false;
           else if (this->ret == AVERROR(11))//11 == EAGAIN builder sucks
               return false;
            else if (this->ret < 0)
               throw std::invalid_argument("Error during decoding\n");
           return true;
       }
       return false;
    }
    #else
    void ImageDecoder::decode_frame(AVFrame* frame)
    {
       int got_frame = 0;
        if (avcodec_decode_video2(this->cctx, frame, &got_frame, this->pkt) < 0)
           throw std::invalid_argument("Error while decoding frame %d\n");
    }
    #endif
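
    Not part of the code above, but one direction that may be worth trying (an assumption, not a confirmed fix): raw images generally need the demuxer to be told the pixel format and frame size before avformat_open_input, otherwise the stream carries no usable parameters and the stride stays at zero. A minimal sketch of passing those hints through an AVDictionary; the helper name open_raw_yuyv is made up here, and the 1280x800 size mirrors the values hard-coded in the constructor:

     extern "C" {
     #include <libavformat/avformat.h>
     #include <libavutil/dict.h>
     }
     #include <string>

     // Hypothetical helper: open the image2 input with explicit raw-video hints.
     AVFormatContext* open_raw_yuyv(const std::string& filename)
     {
         AVInputFormat* iformat = av_find_input_format("image2");

         AVDictionary* opts = NULL;
         av_dict_set(&opts, "pixel_format", "yuyv422", 0); // packed YUYV 4:2:2
         av_dict_set(&opts, "video_size", "1280x800", 0);  // width x height of each raw image

         AVFormatContext* fctx = NULL;
         int err = avformat_open_input(&fctx, filename.c_str(), iformat, &opts);
         av_dict_free(&opts); // any options the demuxer did not consume are discarded here
         return err < 0 ? NULL : fctx;
     }
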
  • How to detect a surge of activity in a video?

    22 March 2019, by Alain Collins

    I’d like to automatically detect a surge of activity in a video, e.g. basketball jump shot, hockey face-off, sprinters starting, etc., preferably using ffmpeg.

    In these instances, there’s some motion as the players assume their positions, followed by a pause as they wait for the ref to throw the ball or drop the puck, followed by a lot of motion as all players begin to react. It’s also typical that the camera will be still during this period and begin moving as the ball or puck changes position.

    I’ve tried using the ’select’ filter select='gt(scene,0.4)', but that seems to be more concerned with scene changes (i.e., more dramatic changes) even with low thresholds.

    I also tried exporting the scene information and examining it manually, but couldn’t find a clear pattern that correlated with motion in the video.

    UPDATE: I ran mestimate on the video. Processing took a long time, but there was definitely a change in activity before and at the point of interest. Is there a way to export this information to a file, or otherwise detect the amount of motion seen by mestimate?
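
    One way to get a per-frame activity figure into a file (a sketch, not mestimate’s own output: it uses the difference between consecutive frames as a rough proxy for the amount of motion; input.mp4 and motion.txt are placeholder names):

     ffmpeg -i input.mp4 -vf "tblend=all_mode=difference,signalstats,metadata=print:key=lavfi.signalstats.YAVG:file=motion.txt" -an -f null -

    Each line of motion.txt then reports the average luma of the frame-to-frame difference, so a quiet stretch followed by a sharp rise in YAVG is the kind of surge described above.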