
Other articles (40)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name    Version name            Version number
    Debian               Squeeze                 6.x.x
    Debian               Wheezy                  7.x.x
    Debian               Jessie                  8.x.x
    Ubuntu               The Precise Pangolin    12.04 LTS
    Ubuntu               The Trusty Tahr         14.04

    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.

On other sites (7590)

  • FFMPEG SCREENSHOT GENERATE ERROR: No such filter: 'tile' [duplicate]

    23 May 2013, by itseasy21

    I have been trying to generate multiple screenshots from a video file using ffmpeg, and I have put a command together, but when executing it I get this error:

    No such filter: 'tile'
    Error opening filters!

    The command I execute is:

    ffmpeg -ss 00:00:10 -i './tmp/try.avi' -vcodec mjpeg -vframes 1 -vf 'select=not(mod(n\,1000)),scale=320:240,tile=2x3' './tmp/try.jpg'

    The output I get is:

    xxxxx@xxxx.com [~/public_html/xxxx]# ffmpeg -ss 00:00:10 -i './tmp/try.avi' -vcodec mjpeg -vframes 1 -vf 'select=not(mod(n\,1000)),scale=320:240,tile=2x3' './tmp/try.jpg'

    ffmpeg version 0.7.11, Copyright (c) 2000-2011 the FFmpeg developers
     built on Mar 10 2012 18:07:20 with gcc 4.4.6 20110731 (Red Hat 4.4.6-3)
     configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --enable-shared --enable-runtime-cpudetect --enable-gpl --enable-version3 --enable-postproc --enable-avfilter --enable-pthreads --enable-x11grab --enable-vdpau --disable-avisynth --enable-frei0r --enable-libopencv --enable-libdc1394 --enable-libdirac --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --disable-stripping
     libavutil    50. 43. 0 / 50. 43. 0
     libavcodec   52.123. 0 / 52.123. 0
     libavformat  52.111. 0 / 52.111. 0
     libavdevice  52.  5. 0 / 52.  5. 0
     libavfilter   1. 80. 0 /  1. 80. 0
     libswscale    0. 14. 1 /  0. 14. 1
     libpostproc  51.  2. 0 / 51.  2. 0

    Seems stream 0 codec frame rate differs from container frame rate: 29.97 (30000/1001) -> 25.00 (25/1)
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from './tmp/try.avi':
     Metadata:
       major_brand     : 3gp4
       minor_version   : 512
       compatible_brands: isomiso23gp4
       creation_time   : 1970-01-01 00:00:00
     Duration: 00:09:24.82, start: 0.000000, bitrate: 118 kb/s
       Stream #0.0(und): Video: h263, yuv420p, 176x144 [PAR 12:11 DAR 4:3], 102 kb/s, 25 fps, 25 tbr, 25 tbn, 29.97 tbc
       Metadata:
         creation_time   : 1970-01-01 00:00:00
       Stream #0.1(und): Audio: amrnb, 8000 Hz, 1 channels, flt, 12 kb/s
       Metadata:
         creation_time   : 1970-01-01 00:00:00
    Incompatible pixel format 'yuv420p' for codec 'mjpeg', auto-selecting format 'yuvj420p'
    [buffer @ 0x1ad89a0] w:176 h:144 pixfmt:yuv420p tb:1/1000000 sar:12/11 sws_param:
    No such filter: 'tile'
    Error opening filters!

    Any solution for this?
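
    As an aside beyond the original post: the build shown above (ffmpeg 0.7.11 with libavfilter 1.80.0) simply does not include a 'tile' filter, which only appeared in later ffmpeg releases, so the command as written needs a newer ffmpeg. On an old build a similar contact sheet can be approximated without tile by dumping individual thumbnails and assembling them externally; a rough sketch, where the paths, thumbnail count and sampling rate are illustrative assumptions rather than values from the post:

    # Hedged workaround sketch (not from the original thread): grab six 320x240
    # JPEG thumbnails at roughly one frame every 30 seconds, then tile them with
    # ImageMagick's montage instead of ffmpeg's tile filter.
    ffmpeg -i ./tmp/try.avi -r 1/30 -s 320x240 -vframes 6 -f image2 ./tmp/thumb%03d.jpg
    montage -tile 2x3 -geometry +0+0 ./tmp/thumb*.jpg ./tmp/try.jpg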

  • MPEG DASH - do I need to have audio and video tracks as separate source files for creating a DASH package using mp4box

    10 July 2016, by Tarun

    I have one source mp4 and tried to create an MPEG-DASH package with mp4box from GPAC.
    I am able to play the output MPD file in GPAC's Osmo4 player.

    However, I am not able to play it in the DASH JS player at http://dashif.org/reference/players/javascript/0.2.3/index.html

    When I try to play the MPD there I get the error "Error creating source buffer".

    I tried reading their MPD files, and I found that they are using audio and video as separate source tracks.

    Question 1) Does the DASH spec state that audio and video tracks should be separate source tracks?

    Question 2) Below is the MPD file I created. Let me know if anybody thinks there is a problem with it:

    <mpd type="static" xmlns="urn:mpeg:DASH:schema:MPD:2011" profiles="urn:mpeg:dash:profile:full:2011" minbuffertime="PT1.5S" mediapresentationduration="PT0H2M31.63S">
    <programinformation moreinformationurl="http://gpac.sourceforge.net">

    </programinformation>
    <period start="PT0S" duration="PT0H2M31.63S">
     <adaptationset>
    <contentcomponent contenttype="video"></contentcomponent>
    <contentcomponent contenttype="audio" lang="und"></contentcomponent>
    <segmenttemplate initialization="flight_init.mp4"></segmenttemplate>
    <representation mimetype="video/mp4" codecs="avc1.64001f,mp4a.40.02" width="1280" height="720" samplerate="44100" numchannels="2" lang="und" startwithsap="1" bandwidth="3096320">
    <segmenttemplate timescale="1000" duration="20164" media="flight_test_flight_3000$Number$.mp4" startnumber="1"></segmenttemplate>
    </representation>
    <representation mimetype="video/mp4" codecs="avc1.64001e,mp4a.40.02" width="640" height="360" samplerate="44100" numchannels="2" lang="und" startwithsap="1" bandwidth="1119428">
    <segmenttemplate timescale="1000" duration="20099" media="flight_test_flight_1000$Number$.mp4" startnumber="1"></segmenttemplate>
    </representation>
    <representation mimetype="video/mp4" codecs="avc1.640014,mp4a.40.02" width="320" height="180" samplerate="44100" numchannels="2" lang="und" startwithsap="1" bandwidth="722208">
    <segmenttemplate timescale="1000" duration="20164" media="flight_test_flight_600$Number$.mp4" startnumber="1"></segmenttemplate>
    </representation>
    </adaptationset>
    </period>
    </mpd>
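
    As an aside beyond the original post: the working manifests mentioned above keep audio and video as separate source tracks, and a rough sketch of an MP4Box invocation that produces that layout from a single multiplexed source is shown below. The filename, segment duration and profile are illustrative placeholders, not values taken from the post.

    # Hypothetical sketch: split one multiplexed mp4 into separate audio and
    # video adaptation sets using GPAC's #video/#audio track selectors.
    MP4Box -dash 10000 -rap -profile onDemand \
           flight.mp4#video flight.mp4#audio \
           -out flight_separate.mpd
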
  • Libav (ffmpeg) copying decoded video timestamps to encoder

    31 October 2016, by Jason C

    I am writing an application that decodes a single video stream from an input file (any codec, any container), does a bunch of image processing, and encodes the results to an output file (single video stream, Quicktime RLE, MOV). I am using ffmpeg’s libav 3.1.5 (Windows build for now, but the application will be cross-platform).

    There is a 1:1 correspondence between input and output frames and I want the frame timing in the output to be identical to the input. I am having a really, really hard time accomplishing this. So my general question is: how do I reliably (as in, for all inputs) set the output frame timing identical to the input?

    It took me a very long time to slog through the API and get to the point I am at now. I put together a minimal test program to work with:

    #include <cstdio>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/avutil.h>
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    }

    using namespace std;


    struct DecoderStuff {
       AVFormatContext *formatx;
       int nstream;
       AVCodec *codec;
       AVStream *stream;
       AVCodecContext *codecx;
       AVFrame *rawframe;
       AVFrame *rgbframe;
       SwsContext *swsx;
    };


    struct EncoderStuff {
       AVFormatContext *formatx;
       AVCodec *codec;
       AVStream *stream;
       AVCodecContext *codecx;
    };


    template <typename T>
    static void dump_timebase (const char *what, const T *o) {
       if (o)
           printf("%s timebase: %d/%d\n", what, o->time_base.num, o->time_base.den);
       else
           printf("%s timebase: null object\n", what);
    }


    // reads next frame into d.rawframe and d.rgbframe. returns false on error/eof.
    static bool read_frame (DecoderStuff &d) {

       AVPacket packet;
       int err = 0, haveframe = 0;

       // read
       while (!haveframe && err >= 0 && ((err = av_read_frame(d.formatx, &packet)) >= 0)) {
          if (packet.stream_index == d.nstream) {
              err = avcodec_decode_video2(d.codecx, d.rawframe, &haveframe, &packet);
          }
          av_packet_unref(&packet);
       }

       // error output
       if (!haveframe && err != AVERROR_EOF) {
           char buf[500];
           av_strerror(err, buf, sizeof(buf) - 1);
           buf[499] = 0;
           printf("read_frame: %s\n", buf);
       }

       // convert to rgb
       if (haveframe) {
           sws_scale(d.swsx, d.rawframe->data, d.rawframe->linesize, 0, d.rawframe->height,
                     d.rgbframe->data, d.rgbframe->linesize);
       }

       return haveframe;

    }


    // writes an output frame, returns false on error.
    static bool write_frame (EncoderStuff &e, AVFrame *inframe) {

       // see note in so post about outframe here
       AVFrame *outframe = av_frame_alloc();
       outframe->format = inframe->format;
       outframe->width = inframe->width;
       outframe->height = inframe->height;
       av_image_alloc(outframe->data, outframe->linesize, outframe->width, outframe->height,
                      AV_PIX_FMT_RGB24, 1);
       //av_frame_copy(outframe, inframe);
       static int count = 0;
       for (int n = 0; n < outframe->width * outframe->height; ++ n) {
           outframe->data[0][n*3+0] = ((n+count) % 100) ? 0 : 255;
           outframe->data[0][n*3+1] = ((n+count) % 100) ? 0 : 255;
           outframe->data[0][n*3+2] = ((n+count) % 100) ? 0 : 255;
       }
       ++ count;

       AVPacket packet;
       av_init_packet(&packet);
       packet.size = 0;
       packet.data = NULL;

       int err, havepacket = 0;
       if ((err = avcodec_encode_video2(e.codecx, &packet, outframe, &havepacket)) >= 0 && havepacket) {
           packet.stream_index = e.stream->index;
           err = av_interleaved_write_frame(e.formatx, &packet);
       }

       if (err < 0) {
           char buf[500];
           av_strerror(err, buf, sizeof(buf) - 1);
           buf[499] = 0;
           printf("write_frame: %s\n", buf);
       }

       av_packet_unref(&packet);
       av_freep(&outframe->data[0]);
       av_frame_free(&outframe);

       return err >= 0;

    }


    int main (int argc, char *argv[]) {

       const char *infile = "wildlife.wmv";
       const char *outfile = "test.mov";
       DecoderStuff d = {};
       EncoderStuff e = {};

       av_register_all();

       // decoder
       avformat_open_input(&d.formatx, infile, NULL, NULL);
       avformat_find_stream_info(d.formatx, NULL);
       d.nstream = av_find_best_stream(d.formatx, AVMEDIA_TYPE_VIDEO, -1, -1, &d.codec, 0);
       d.stream = d.formatx->streams[d.nstream];
       d.codecx = avcodec_alloc_context3(d.codec);
       avcodec_parameters_to_context(d.codecx, d.stream->codecpar);
       avcodec_open2(d.codecx, NULL, NULL);
       d.rawframe = av_frame_alloc();
       d.rgbframe = av_frame_alloc();
       d.rgbframe->format = AV_PIX_FMT_RGB24;
       d.rgbframe->width = d.codecx->width;
       d.rgbframe->height = d.codecx->height;
       av_frame_get_buffer(d.rgbframe, 1);
       d.swsx = sws_getContext(d.codecx->width, d.codecx->height, d.codecx->pix_fmt,
                               d.codecx->width, d.codecx->height, AV_PIX_FMT_RGB24,
                               SWS_POINT, NULL, NULL, NULL);
       //av_dump_format(d.formatx, 0, infile, 0);
       dump_timebase("in stream", d.stream);
       dump_timebase("in stream:codec", d.stream->codec); // note: deprecated
       dump_timebase("in codec", d.codecx);

       // encoder
       avformat_alloc_output_context2(&e.formatx, NULL, NULL, outfile);
       e.codec = avcodec_find_encoder(AV_CODEC_ID_QTRLE);
       e.stream = avformat_new_stream(e.formatx, e.codec);
       e.codecx = avcodec_alloc_context3(e.codec);
       e.codecx->bit_rate = 4000000; // arbitrary for qtrle
       e.codecx->width = d.codecx->width;
       e.codecx->height = d.codecx->height;
       e.codecx->gop_size = 30; // 99% sure this is arbitrary for qtrle
       e.codecx->pix_fmt = AV_PIX_FMT_RGB24;
       e.codecx->time_base = d.stream->time_base; // ???
       e.codecx->flags |= (e.formatx->flags & AVFMT_GLOBALHEADER) ? AV_CODEC_FLAG_GLOBAL_HEADER : 0;
       avcodec_open2(e.codecx, NULL, NULL);
       avcodec_parameters_from_context(e.stream->codecpar, e.codecx);
       //av_dump_format(e.formatx, 0, outfile, 1);
       dump_timebase("out stream", e.stream);
       dump_timebase("out stream:codec", e.stream->codec); // note: deprecated
       dump_timebase("out codec", e.codecx);

       // open file and write header
       avio_open(&e.formatx->pb, outfile, AVIO_FLAG_WRITE);
       avformat_write_header(e.formatx, NULL);

       // frames
       while (read_frame(d) && write_frame(e, d.rgbframe))
           ;

       // write trailer and close file
       av_write_trailer(e.formatx);
       avio_closep(&amp;e.formatx->pb);

    }

    A few notes about that:

    • Since all of my attempts at frame timing so far have failed, I’ve removed almost all timing-related stuff from this code to start with a clean slate.
    • Almost all error checking and cleanup omitted for brevity.
    • The reason I allocate a new output frame with a new buffer in write_frame, rather than using inframe directly, is because this is more representative of what my real application is doing. My real app also uses RGB24 internally, hence the conversions here.
    • The reason I generate a weird pattern in outframe, rather than using e.g. av_frame_copy, is that I just wanted a test pattern that compressed well with QuickTime RLE (my test input ends up generating a 1.7 GB output file otherwise).
    • The input video I am using, "wildlife.wmv", can be found here. I’ve hard-coded the filenames.
    • I am aware that avcodec_decode_video2 and avcodec_encode_video2 are deprecated, but don’t care. They work fine, I’ve already struggled too much getting my head around the latest version of the API, ffmpeg changes their API with nearly every release, and I really don’t feel like dealing with avcodec_send_* and avcodec_receive_* right now.
    • I think I’m supposed to be finishing off by passing a NULL frame to avcodec_encode_video2 to flush some buffers or something, but I’m a bit confused about that. Unless somebody feels like explaining it, let’s ignore it for now; it’s a separate question (a rough, untested sketch of what I think that flush loop looks like follows this list). The docs are as vague about this point as they are about everything else.
    • My test input file’s frame rate is 29.97.
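
    For completeness, here is my rough understanding of what that flush loop would look like, based only on my reading of the docs and reusing the EncoderStuff struct from the test program; it is untested and may well be wrong:

    // Hedged sketch, not verified: drain any delayed frames by feeding the
    // encoder a NULL frame until it stops returning packets, before the
    // trailer is written.
    static void flush_encoder (EncoderStuff &e) {
       for (;;) {
           AVPacket packet;
           av_init_packet(&packet);
           packet.data = NULL;
           packet.size = 0;
           int havepacket = 0;
           if (avcodec_encode_video2(e.codecx, &packet, NULL, &havepacket) < 0 || !havepacket)
               break;
           packet.stream_index = e.stream->index;
           av_interleaved_write_frame(e.formatx, &packet);
           av_packet_unref(&packet);
       }
    }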

    Now, as for my current attempts. The following timing-related fields are present in the above code, with my questions about each noted. There are a lot of them, because the API is mind-bogglingly convoluted (a rough sketch of how I currently read them follows the list):

    • main: d.stream->time_base: Input video stream time base. For my test input file this is 1/1000.
    • main: d.stream->codec->time_base: Not sure what this is (I never could make sense of why AVStream has an AVCodecContext field when you always use your own new context anyways) and also the codec field is deprecated. For my test input file this is 1/1000.
    • main: d.codecx->time_base: Input codec context time base. For my test input file this is 0/1. Am I supposed to set it?
    • main: e.stream->time_base: Time base of the output stream I create. What do I set this to?
    • main: e.stream->codec->time_base: Time base of the deprecated and mysterious codec field of the output stream I create. Do I set this to anything?
    • main: e.codecx->time_base: Time base of the encoder context I create. What do I set this to?
    • read_frame: packet.dts: Decoding timestamp of the packet read.
    • read_frame: packet.pts: Presentation timestamp of the packet read.
    • read_frame: packet.duration: Duration of the packet read.
    • read_frame: d.rawframe->pts: Presentation timestamp of the raw decoded frame. This is always 0. Why isn’t it read by the decoder...?
    • read_frame: d.rgbframe->pts / write_frame: inframe->pts: Presentation timestamp of the decoded frame converted to RGB. Not set to anything currently.
    • read_frame: d.rawframe->pkt_*: Fields copied from the packet, discovered after reading this post. They are set correctly but I don’t know if they are useful.
    • write_frame: outframe->pts: Presentation timestamp of the frame being encoded. Should I set this to something?
    • write_frame: outframe->pkt_*: Timing fields from a packet. Should I set these? They seem to be ignored by the encoder.
    • write_frame: packet.dts: Decoding timestamp of the packet being encoded. What do I set it to?
    • write_frame: packet.pts: Presentation timestamp of the packet being encoded. What do I set it to?
    • write_frame: packet.duration: Duration of the packet being encoded. What do I set it to?
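
    For what it’s worth, here is a rough sketch of how I currently understand these fields are meant to fit together, pieced together from the documentation and the muxing example. This is an assumption on my part, not something that has worked for me yet; the fragments are keyed to the test program above.

    // Hedged sketch, fragments only (not a verified fix).
    // In main, before avcodec_open2 on the encoder:
    e.codecx->time_base = d.stream->time_base;   // encoder ticks in input-stream units
    e.stream->time_base = e.codecx->time_base;   // a hint; the muxer may overwrite it

    // In read_frame, after a frame is decoded:
    d.rgbframe->pts = av_frame_get_best_effort_timestamp(d.rawframe);

    // In write_frame, before encoding:
    outframe->pts = inframe->pts;

    // In write_frame, after avcodec_encode_video2 and before av_interleaved_write_frame:
    av_packet_rescale_ts(&packet, e.codecx->time_base, e.stream->time_base);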

    I have tried the following, with the described results. Note that inframe is d.rgbframe:

    1.  
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.codecx->time_base
      • Set d.rgbframe->pts = packet.dts in read_frame
      • Set outframe->pts = inframe->pts in write_frame
      • Result: Warning that encoder time base is not set (since d.codecx->time_base was 0/1), seg fault.
    2.  
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.stream->time_base
      • Set d.rgbframe->pts = packet.dts in read_frame
      • Set outframe->pts = inframe->pts in write_frame
      • Result: No warnings, but VLC reports frame rate as 480.048 (no idea where this number came from) and file plays too fast. Also the encoder sets all the timing fields in packet to 0, which was not what I expected. (Edit: Turns out this is because av_interleaved_write_frame, unlike av_write_frame, takes ownership of the packet and swaps it with a blank one, and I was printing the values after that call. So they are not ignored.)
    3.  
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.stream->time_base
      • Set d.rgbframe->pts = packet.dts in read_frame
      • Set any of pts/dts/duration in packet in write_frame to anything.
      • Result: Warnings about packet timestamps not set. Encoder seems to reset all packet timing fields to 0, so none of this has any effect.
    4.  
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.stream->time_base
      • I found these fields, pkt_pts, pkt_dts, and pkt_duration in AVFrame after reading this post, so I tried copying those all the way through to outframe.
      • Result: Really had my hopes up, but ended up with same results as attempt 3 (packet timestamp not set warning, incorrect results).

    I tried various other hand-wavey permutations of the above and nothing worked. What I want to do is create an output file that plays back with the same timing and frame rate as the input (29.97 constant frame rate in this case).

    So how do I do this? Of the zillions of timing-related fields here, what do I do to make the output be the same as the input? And how do I do it in such a way that it handles arbitrary video input formats that may store their timestamps and time bases in different places? I need this to always work.


    For reference, here is a table of all the packet and frame timestamps read from the video stream of my test input file, to give a sense of what my test file looks like. None of the input packet pts values are set, nor are the frame pts, and for some reason the duration of the first 108 frames is 0. VLC plays the file fine and reports the frame rate as 29.9700089: