Advanced search

Media (0)

Keyword: - Tags -/organisation

No media matching your criteria is available on this site.

Other articles (78)

  • Emballe Médias: uploading documents made simple

    29 October 2010, by

    The emballe médias plugin was developed primarily for the MediaSPIP distribution, but it is also used in other related projects such as géodiversité. Required and compatible plugins
    For this plugin to work, the following plugins must be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui
    Other plugins can be used alongside it to extend its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...)

  • Automatic installation script for MediaSPIP

    25 April 2011, by

    To work around installation difficulties, mainly due to server-side software dependencies, an all-in-one bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it you need SSH access to your server and a "root" account, which allows the dependencies to be installed. Contact your host if you do not have these.
    The documentation for using the installation script (...)

  • Automatic backup of SPIP channels

    1 April 2010, by

    When setting up an open platform, it is important for hosts to have fairly regular backups available in order to guard against any problems.
    This task relies on two SPIP plugins: Saveauto, which performs regular backups of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)

On other sites (7026)

  • Ffplay: Change filter options while playing audio

    11 February 2018, by artha

    I am trying to apply equalizer filters to ffplay while it is playing audio. Is that possible?

    For example, start playing the audio with the command:

    ffplay -i foo.wav

    Then, while foo.wav is playing, change it to

    ffplay -af "equalizer=f=1000:width_type=h:width=200:g=-10" -i foo.wav

    without stopping the audio.
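
    An editorial aside, not from the original question: one plausible route is FFmpeg's azmq filter, which forwards runtime commands to other filters in the graph over a ZeroMQ socket. This assumes an FFmpeg build configured with --enable-libzmq and a version whose equalizer filter lists g (gain) among its supported commands. Start playback with the equalizer at neutral gain:

    ffplay -af "azmq,equalizer=f=1000:width_type=h:width=200:g=0" -i foo.wav

    Then, from another shell, retune the gain without stopping the audio (Parsed_equalizer_1 is the default instance name of the second filter in the chain; zmqsend ships in FFmpeg's tools directory):

    echo Parsed_equalizer_1 g -10 | tools/zmqsend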

  • How to set pts and dts of AVPacket from RTP timestamps while muxing a VP8 RTP stream to webm using ffmpeg libavformat?

    30 January 2018, by user2595786

    I am using the ffmpeg libavformat library to write a video-only webm file. I receive a VP8-encoded RTP stream on my server. I have successfully grouped the RTP byte stream (from the RTP payload) into individual frames and constructed an AVPacket. I am NOT re-encoding the payload here, as it is already VP8-encoded.

    I am writing the AVPacket to the file using av_interleaved_write_frame(). Though I get a webm file as output, it does not play at all. When I inspected the file with MKVToolNix's mkvinfo command, I found the following info:

    + EBML head
    |+ EBML version: 1
    |+ EBML read version: 1
    |+ EBML maximum ID length: 4
    |+ EBML maximum size length: 8
    |+ Doc type: webm
    |+ Doc type version: 2
    |+ Doc type read version: 2
    + Segment, size 2142500
    |+ Seek head (subentries will be skipped)
    |+ EbmlVoid (size: 170)
    |+ Segment information
    | + Timestamp scale: 1000000
    | + Multiplexing application: Lavf58.0.100
    | + Writing application: Lavf58.0.100
    | + Duration: 78918744.480s (21921:52:24.480)
    |+ Segment tracks
    | + A track
    |  + Track number: 1 (track ID for mkvmerge & mkvextract: 0)
    |  + Track UID: 1
    |  + Lacing flag: 0
    |  + Name: Video Track
    |  + Language: eng
    |  + Codec ID: V_VP8
    |  + Track type: video
    |  + Default duration: 1.000ms (1000.000 frames/fields per second for a video track)
    |  + Video track
    |   + Pixel width: 640
    |   + Pixel height: 480
    |+ Tags
    | + Tag
    |  + Targets
    |  + Simple
    |   + Name: ENCODER
    |   + String: Lavf58.0.100
    | + Tag
    |  + Targets
    |   + TrackUID: 1
    |  + Simple
    |   + Name: DURATION
    |   + String: 21921:52:24.4800000
    |+ Cluster

    As we can see, the reported duration of the stream is disproportionately high (the actual stream duration should be around 8-10 secs), and the frame rate in the track info is also not what I set it to: I am setting the frame rate to 25 fps.

    I am applying av_rescale_q(rtpTimeStamp, codec_timebase, stream_timebase) and setting the rescaled rtpTimeStamp as the pts and dts values. My guess is that my way of setting pts and dts is wrong. Please help me set the pts and dts values on the AVPacket so as to get a working webm file with proper metadata.
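
    An editorial note, not part of the original question: raw RTP timestamps for video run at the RTP clock rate, which is 90 kHz for VP8 (RFC 7741), not at the encoder time_base of 1/25, and they start from a random 32-bit offset (RFC 3550). Rescaling such a value out of (AVRational){1,25} treats each 90 kHz tick as 1/25 of a second, inflating timestamps 3600-fold, which together with the random initial offset would explain a duration in the tens of millions of seconds. A minimal sketch of the usual conversion, where firstRtpTimeStamp is a hypothetical variable holding the RTP timestamp of the first received frame, so that output starts at pts 0:

    /* Sketch (assumption: 90 kHz RTP clock for VP8 video). The
     * subtraction is done on uint32_t values, so a 32-bit wraparound
     * between the two timestamps still yields the correct delta. */
    static const AVRational RTP_VIDEO_TIME_BASE = { 1, 90000 };

    int64_t rtpToStreamTs(uint32_t rtpTs, uint32_t firstRtpTs, AVStream *st)
    {
        int64_t delta = (uint32_t)(rtpTs - firstRtpTs);
        return av_rescale_q(delta, RTP_VIDEO_TIME_BASE, st->time_base);
    }

    /* usage in writeVideoStream():
     *   pkt.pts = pkt.dts = rtpToStreamTs(frameTimeStamp, firstRtpTimeStamp, st); */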

    EDIT:

    The following is the code I call to init the library:

    #define STREAM_FRAME_RATE 25
    #define STREAM_PIX_FMT AV_PIX_FMT_YUV420P

    typedef struct OutputStream {
      AVStream *st;
      AVCodecContext *enc;
      AVFrame *frame;
    } OutputStream;


    typedef struct WebMWriter {
         OutputStream *audioStream, *videoStream;
         AVFormatContext *ctx;
         AVOutputFormat *outfmt;
         AVCodec *audioCodec, *videoCodec;
    } WebMWriter;

    static OutputStream audioStream = { 0 }, videoStream = { 0 };

    WebMWriter *init(char *filename)
    {
       av_register_all();

       AVFormatContext *ctx = NULL;
       AVCodec *audioCodec = NULL, *videoCodec = NULL;
       const char *fmt_name = NULL;
       const char *file_name = filename;

       int alloc_status = avformat_alloc_output_context2(&ctx, NULL, fmt_name, file_name);

       if(!ctx)
               return NULL;

       AVOutputFormat *fmt = (*ctx).oformat;

       AVDictionary *video_opt = NULL;
       av_dict_set(&video_opt, "language", "eng", 0);
       av_dict_set(&video_opt, "title", "Video Track", 0);

       if(fmt->video_codec != AV_CODEC_ID_NONE)
       {
               addStream(&videoStream, ctx, &videoCodec, AV_CODEC_ID_VP8, video_opt);
       }

       if(videoStream.st)
               openVideo1(&videoStream, videoCodec, NULL);

       av_dump_format(ctx, 0, file_name, 1);

       int ret = -1;
       /* open the output file, if needed */
       if (!(fmt->flags & AVFMT_NOFILE)) {
               ret = avio_open(&ctx->pb, file_name, AVIO_FLAG_WRITE);
               if (ret < 0) {
                       printf("Could not open '%s': %s\n", file_name, av_err2str(ret));
                       return NULL;
               }
       }

       /* Write the stream header, if any. */
       AVDictionary *format_opt = NULL;
       ret = avformat_write_header(ctx, &format_opt);
       if (ret < 0) {
               fprintf(stderr, "Error occurred when opening output file: %s\n",
                               av_err2str(ret));
               return NULL;
       }
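
       /* Editorial note, not from the original post: at this point the
        * muxer may have adjusted st->time_base; the WebM/Matroska muxer
        * switches it to 1/1000 (milliseconds). Any later av_rescale_q()
        * into st->time_base should read the value as it is after this call. */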


       WebMWriter *webmWriter = malloc(sizeof(struct WebMWriter));
       webmWriter->ctx = ctx;
       webmWriter->outfmt = fmt;
       webmWriter->audioStream = &audioStream;
       webmWriter->videoStream = &videoStream;
       webmWriter->videoCodec = videoCodec;

       return webmWriter;
    }

    The following is the openVideo1() method:

    void openVideo1(OutputStream *out_st, AVCodec *codec, AVDictionary *opt_arg)
    {      
       AVCodecContext *codec_ctx = out_st->enc;
       int ret = -1;
       AVDictionary *opt = NULL;
       if(opt_arg != NULL)
       {      
               av_dict_copy(&opt, opt_arg, 0);
               ret = avcodec_open2(codec_ctx, codec, &opt);
       }
       else
       {      
               ret = avcodec_open2(codec_ctx, codec, NULL);
       }
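       /* note: the return value of avcodec_open2() is not checked here */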

       /* copy the stream parameters to the muxer */
       ret = avcodec_parameters_from_context(out_st->st->codecpar, codec_ctx);
       if (ret < 0) {
               printf("Could not copy the stream parameters\n");
               exit(1);
       }

    }

    The following is the addStream() method:

    void addStream(OutputStream *out_st, AVFormatContext *ctx, AVCodec **cdc, enum AVCodecID codecId, AVDictionary *opt_arg)
    {

       (*cdc) = avcodec_find_encoder(codecId);
       if(!(*cdc)) {
               exit(1);
       }

       /* as we are passing a NULL AVCodec cdc, the AVCodecContext codec_ctx
          will not be allocated, so we have to do it explicitly */
       AVStream *st = avformat_new_stream(ctx, *cdc);
       if(!st) {
               exit(1);
       }

       out_st->st = st;
       st->id = ctx->nb_streams-1;

       AVDictionary *opt = NULL;
       av_dict_copy(&opt, opt_arg, 0);
       st->metadata = opt;

       AVCodecContext *codec_ctx = st->codec;
       if (!codec_ctx) {
               fprintf(stderr, "Could not alloc an encoding context\n");
               exit(1);
       }
       out_st->enc = codec_ctx;

       codec_ctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;

       switch ((*cdc)->type) {
               case AVMEDIA_TYPE_AUDIO:
                       codec_ctx->codec_id = codecId;
                       codec_ctx->sample_fmt  = AV_SAMPLE_FMT_FLTP;
                       codec_ctx->bit_rate    = 64000;
                       codec_ctx->sample_rate = 48000;
                       codec_ctx->channels    = 2;//1;
                       codec_ctx->channel_layout = AV_CH_LAYOUT_STEREO;
                       codec_ctx->codec_type = AVMEDIA_TYPE_AUDIO;
                       codec_ctx->time_base = (AVRational){1,STREAM_FRAME_RATE};


                       break;

               case AVMEDIA_TYPE_VIDEO:
                       codec_ctx->codec_id = codecId;
                       codec_ctx->bit_rate = 90000;
                       codec_ctx->width    = 640;
                       codec_ctx->height   = 480;


                       codec_ctx->time_base = (AVRational){1,STREAM_FRAME_RATE};
                       codec_ctx->gop_size = 12;
                       codec_ctx->pix_fmt = STREAM_PIX_FMT;
                       codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;

                       break;

               default:
                       break;
       }

       /* Some formats want stream headers to be separate. */
       if (ctx->oformat->flags & AVFMT_GLOBALHEADER)
               codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    The following is the code I call to write a frame of data to the file:

    int writeVideoStream(AVFormatContext *ctx, AVStream *st, uint8_t *data, int size, long frameTimeStamp, int isKeyFrame, AVCodecContext *codec_ctx)
    {      
       AVRational rat = st->time_base;
       AVPacket pkt = {0};
       av_init_packet(&pkt);

       void *opaque = NULL;
       int flags = AV_BUFFER_FLAG_READONLY;
       AVBufferRef *bufferRef = av_buffer_create(data, size, NULL, opaque, flags);
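       /* note: with a NULL free callback, av_buffer_create() installs
        * av_buffer_default_free(), which av_free()s data when the packet
        * is unreferenced; data must therefore come from av_malloc() */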

       pkt.buf = bufferRef;
       pkt.data = data;
       pkt.size = size;  
       pkt.stream_index  = st->index;

       pkt.pts = pkt.dts = frameTimeStamp;
       pkt.pts = av_rescale_q(pkt.pts, codec_ctx->time_base, st->time_base);
       pkt.dts = av_rescale_q(pkt.dts, codec_ctx->time_base, st->time_base);


       if(isKeyFrame == 1)
               pkt.flags |= AV_PKT_FLAG_KEY;

       int ret = av_interleaved_write_frame(ctx, &pkt);
       return ret;
    }

    NOTE:
    Here 'frameTimeStamp' is the RTP timestamp from the RTP packet of that frame.

    EDIT 2.0:

    My updated addStream() method with codecpar changes:

    void addStream(OutputStream *out_st, AVFormatContext *ctx, AVCodec **cdc, enum AVCodecID codecId, AVDictionary *opt_arg)
    {

       (*cdc) = avcodec_find_encoder(codecId);
       if(!(*cdc)) {
               printf("@@@@@ couldnt find codec \n");
               exit(1);
       }

       AVStream *st = avformat_new_stream(ctx, *cdc);
       if(!st) {
               printf("@@@@@ couldnt init stream\n");
               exit(1);
       }

       out_st->st = st;
       st->id = ctx->nb_streams-1;
       AVCodecParameters *codecpars = st->codecpar;
       codecpars->codec_id = codecId;
       codecpars->codec_type = (*cdc)->type;

       AVDictionary *opt = NULL;
       av_dict_copy(&opt, opt_arg, 0);
       st->metadata = opt;
       //av_dict_free(&opt);

       AVCodecContext *codec_ctx = st->codec;
       if (!codec_ctx) {
               fprintf(stderr, "Could not alloc an encoding context\n");
               exit(1);
       }
       out_st->enc = codec_ctx;

       //since opus is experimental codec
       //codec_ctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;

       switch ((*cdc)->type) {
               case AVMEDIA_TYPE_AUDIO:
                       codec_ctx->codec_id = codecId;
                       codec_ctx->sample_fmt  = AV_SAMPLE_FMT_FLTP;//AV_SAMPLE_FMT_U8 or AV_SAMPLE_FMT_S16;
                       codec_ctx->bit_rate    = 64000;
                       codec_ctx->sample_rate = 48000;
                       codec_ctx->channels    = 2;//1;
                       codec_ctx->channel_layout = AV_CH_LAYOUT_STEREO; //AV_CH_LAYOUT_MONO;
                       codec_ctx->codec_type = AVMEDIA_TYPE_AUDIO;
                       codec_ctx->time_base = (AVRational){1,STREAM_FRAME_RATE};

                       codecpars->format = codec_ctx->sample_fmt;
                       codecpars->channels = codec_ctx->channels;
                       codecpars->sample_rate = codec_ctx->sample_rate;

                       break;

               case AVMEDIA_TYPE_VIDEO:
                       codec_ctx->codec_id = codecId;
                       codec_ctx->bit_rate = 90000;
                       codec_ctx->width    = 640;
                       codec_ctx->height   = 480;

                       codec_ctx->time_base = (AVRational){1,STREAM_FRAME_RATE};
                       codec_ctx->gop_size = 12;
                       codec_ctx->pix_fmt = STREAM_PIX_FMT;
                       //codec_ctx->max_b_frames = 1;
                       codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
                       codec_ctx->framerate = av_inv_q(codec_ctx->time_base);
                       st->avg_frame_rate = codec_ctx->framerate;//(AVRational){25000, 1000};

                       codecpars->format = codec_ctx->pix_fmt;
                       codecpars->width = codec_ctx->width;
                       codecpars->height = codec_ctx->height;
                       codecpars->sample_aspect_ratio = (AVRational){codec_ctx->width, codec_ctx->height};

                       break;

               default:
                       break;
       }      
       codecpars->bit_rate = codec_ctx->bit_rate;

       int ret = avcodec_parameters_to_context(codec_ctx, codecpars);
       if (ret < 0) {
               printf("Could not copy the stream parameters\n");
               exit(1);
       }

       /* Some formats want stream headers to be separate. */
       if (ctx->oformat->flags & AVFMT_GLOBALHEADER)
               codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }
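
    An editorial aside, not part of the original post: since the payload is already VP8-encoded and never passes through an encoder, the deprecated st->codec context can be avoided entirely by filling st->codecpar directly on a stream created without a codec. A minimal sketch under that assumption (the function name is illustrative):

    /* Sketch: codecpar-only stream setup for muxing pre-encoded VP8. */
    static AVStream *addVp8Stream(AVFormatContext *ctx)
    {
        AVStream *st = avformat_new_stream(ctx, NULL);
        if (!st)
            return NULL;
        st->id = ctx->nb_streams - 1;
        st->time_base = (AVRational){ 1, 90000 }; /* a hint; the muxer may override it */

        AVCodecParameters *par = st->codecpar;
        par->codec_type = AVMEDIA_TYPE_VIDEO;
        par->codec_id   = AV_CODEC_ID_VP8;
        par->width      = 640;
        par->height     = 480;
        return st;
    }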

  • KLV data in RTP stream

    18 September 2013, by Ardoramor

    I have implemented RFC 6597 to stream KLV in RTP SMPTE 336M packets. Currently, my SDP looks like this:

    v=2
    o=- 0 0 IN IP4 127.0.0.1
    s=Unnamed
    i=N/A
    c=IN IP4 192.168.1.6
    t=0 0
    a=recvonly
    m=video 8202 RTP/AVP 96
    a=rtpmap:96 H264/90000
    a=fmtp:96 packetization-mode=1;profile-level-id=428028;sprop-parameter-sets=Z0KAKJWgKA9E,aM48gA==;
    a=control:trackID=0
    m=application 8206 RTP/AVP 97
    a=rtpmap:97 smpte336m/1000
    a=control:trackID=1

    I try to remux the RTP stream with FFmpeg like so:

    ffmpeg.exe -i test.sdp -map 0:0 -map 0:1 -c:v copy -c:d copy test.m2ts

    I get the following output with FFmpeg:

    ffmpeg version 1.2 Copyright (c) 2000-2013 the FFmpeg developers
     built on Mar 28 2013 00:34:08 with gcc 4.8.0 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
     libavutil      52. 18.100 / 52. 18.100
     libavcodec     54. 92.100 / 54. 92.100
     libavformat    54. 63.104 / 54. 63.104
     libavdevice    54.  3.103 / 54.  3.103
     libavfilter     3. 42.103 /  3. 42.103
     libswscale      2.  2.100 /  2.  2.100
     libswresample   0. 17.102 /  0. 17.102
     libpostproc    52.  2.100 / 52.  2.100
    [aac @ 0000000002137900] Sample rate index in program config element does not match the sample rate index configured by the container.
       Last message repeated 1 times
    [aac @ 0000000002137900] decode_pce: Input buffer exhausted before END element found
    [h264 @ 00000000002ce540] Missing reference picture, default is 0
    [h264 @ 00000000002ce540] decode_slice_header error
    [sdp @ 00000000002cfa80] Estimating duration from bitrate, this may be inaccurate
    Input #0, sdp, from 'C:\Users\dragan\Documents\Workspace\Android\uvlens\tests\test.sdp':
     Metadata:
       title           : Unnamed
       comment         : N/A
     Duration: N/A, start: 0.000000, bitrate: N/A
       Stream #0:0: Audio: aac, 32000 Hz, 58 channels, fltp
       Stream #0:1: Video: h264 (Baseline), yuv420p, 640x480, 14.83 tbr, 90k tbn, 180k tbc
       Stream #0:2: Data: none
    File 'C:\Users\dragan\Documents\Workspace\Android\uvlens\tests\test.m2ts' already exists. Overwrite ? [y/N] y
    Output #0, mpegts, to 'C:\Users\dragan\Documents\Workspace\Android\uvlens\tests\test.m2ts':
     Metadata:
       title           : Unnamed
       comment         : N/A
       encoder         : Lavf54.63.104
       Stream #0:0: Video: h264, yuv420p, 640x480, q=2-31, 90k tbn, 90k tbc
       Stream #0:1: Data: none
    Stream mapping:
     Stream #0:1 -> #0:0 (copy)
     Stream #0:2 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    [mpegts @ 0000000002159940] Application provided invalid, non monotonically increasing dts to muxer in stream 1: 8583659665 >= 8583656110
    av_interleaved_write_frame(): Invalid argument

    The problem is that KLV stream packets do not have a DTS field. According to RFC 6597 (SMPTE 336M), the RTP packet structure is the same as the standard one:

    4.1.  RTP Header Usage

    This payload format uses the RTP packet header fields as described in
    the table below:

    +-----------+-------------------------------------------------------+
    | Field     | Usage                                                 |
    +-----------+-------------------------------------------------------+
    | Timestamp | The RTP Timestamp encodes the instant along a         |
    |           | presentation timeline that the entire KLVunit encoded |
    |           | in the packet payload is to be presented.  When one   |
    |           | KLVunit is placed in multiple RTP packets, the RTP    |
    |           | timestamp of all packets comprising that KLVunit MUST |
    |           | be the same.  The timestamp clock frequency is        |
    |           | defined as a parameter to the payload format          |
    |           | (Section 6).                                          |
    |           |                                                       |
    | M-bit     | The RTP header marker bit (M) is used to demarcate    |
    |           | KLVunits.  Senders MUST set the marker bit to '1' for |
    |           | any RTP packet that contains the final byte of a      |
    |           | KLVunit.  For all other packets, senders MUST set the |
    |           | RTP header marker bit to '0'.  This allows receivers  |
    |           | to pass a KLVunit for parsing/decoding immediately    |
    |           | upon receipt of the last RTP packet comprising the    |
    |           | KLVunit.  Without this, a receiver would need to wait |
    |           | for the next RTP packet with a different timestamp to |
    |           | arrive, thus signaling the end of one KLVunit and the |
    |           | start of another.                                     |
    +-----------+-------------------------------------------------------+

    The remaining RTP header fields are used as specified in [RFC3550].

    Header from RFC 3550:

    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |V=2|P|X|  CC   |M|     PT      |       sequence number         |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                           timestamp                           |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |           synchronization source (SSRC) identifier            |
    +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
    |            contributing source (CSRC) identifiers             |
    |                             ....                              |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    The RFC's note about the placement of KLV data into an RTP packet:

    KLVunits small enough to fit into a single RTP
    packet (RTP packet size is up to the implementation but should
    consider underlying transport/network factors such as MTU
    limitations) are placed directly into the payload of the RTP packet,
    with the first byte of the KLVunit (which is the first byte of a KLV
    Universal Label Key) being the first byte of the RTP packet payload.

    My question is: where does FFmpeg look for the DTS?

    Does it interpret the Timestamp field of the RTP packet header as the DTS? If so, I have verified that the timestamps increase (although at different rates), but they are not equal to what FFmpeg prints out:

    8583659665 >= 8583656110
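
    An editorial observation, not part of the original question: the values FFmpeg prints cannot be raw RTP timestamps, because the RTP timestamp field is 32 bits wide and 8583659665 exceeds 2^32 (4294967296). FFmpeg's RTP demuxer derives packet timestamps from the RTP timestamp field, scaled by the clock rate declared in the rtpmap attribute, and keeps an unwrapped 64-bit running value so the output stays monotonic across 32-bit wraparound. A sketch of that unwrapping idea, under the stated assumptions:

    /* Sketch: extend 32-bit RTP timestamps into a monotonic 64-bit
     * timeline. Casting the difference to int32_t makes both forward
     * wraparound and small backward jitter behave correctly. */
    int64_t unwrapRtpTs(int64_t prevUnwrapped, uint32_t prevTs, uint32_t ts)
    {
        return prevUnwrapped + (int32_t)(ts - prevTs);
    }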