Advanced search

Media (1)

Keyword: tag "pirate bay"

Other articles (99)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, compared with the channel sites, to work properly: the "Gestion de la mutualisation" plugin; the "inscription3" plugin, to manage registrations and requests to create a shared-hosting instance as soon as users sign up; the "verifier" plugin, which provides a field-validation API (used by inscription3); the "champs extras v2" plugin, required by inscription3 (...)

On other sites (12899)

  • ffmpeg convert to webm error "too many invisible frames"

    24 January 2019, by Вадим Коломиец

    I need to convert any format (for example, mp4, avi, etc.) to .webm using my own AVIOContext. I built ffmpeg with vpx, ogg, vorbis and opus, and created a simple project. But whenever I write a frame I get the error "Too many invisible frames. Failed to send packet to filter vp9_superframe for stream 0".

    I have already tried converting from webm to webm, copying the codec parameters with avcodec_parameters_copy, and that works.

    #include <QCoreApplication>
    #include <QFile>
    #include <QDebug>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/timestamp.h>
    #include <libavfilter/buffersink.h>
    #include <libavfilter/buffersrc.h>
    #include <libavutil/opt.h>
    #include <libavutil/pixdesc.h>
    }

    struct BufferData {
        QByteArray data;
        uint fullsize = 0;
    };

    /* AVIOContext write callback: append the muxed bytes to the buffer */
    static int write_packet_to_buffer(void *opaque, uint8_t *buf, int buf_size) {
        BufferData *bufferData = static_cast<BufferData*>(opaque);
        bufferData->fullsize += buf_size;
        bufferData->data.append((const char*)buf, buf_size);
        return buf_size;
    }

    static bool writeBuffer(const QString &filename, BufferData *bufferData) {
        QFile file(filename);
        if (!file.open(QIODevice::WriteOnly)) return false;
        file.write(bufferData->data);
        qDebug() << "FILE SIZE = " << file.size();
        file.close();
        return true;
    }

    int main(int argc, char *argv[])
    {
        QCoreApplication a(argc, argv);
        AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
        AVPacket pkt;
        int ret;
        int stream_index = 0;
        int *stream_mapping = NULL;
        int stream_mapping_size = 0;

        const char *in_filename  = "../assets/sample.mp4";
        const char *out_filename = "../assets/sample_new.webm";

        //------------------------  Input file  ----------------------------
        if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
            fprintf(stderr, "Could not open input file '%s'", in_filename);
            return 1;
        }

        if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
            fprintf(stderr, "Failed to retrieve input stream information");
            return 1;
        }
        av_dump_format(ifmt_ctx, 0, in_filename, 0);
        //-----------------------------------------------------------------

        //---------------------- BUFFER -------------------------
        AVIOContext *avio_ctx = NULL;
        uint8_t *avio_ctx_buffer = NULL;
        size_t avio_ctx_buffer_size = 4096 * 1024;
        /* fill the opaque structure used by the AVIOContext write callback */
        avio_ctx_buffer = (uint8_t*)av_malloc(avio_ctx_buffer_size);
        if (!avio_ctx_buffer) return AVERROR(ENOMEM);

        BufferData bufferData;
        avio_ctx = avio_alloc_context(avio_ctx_buffer, avio_ctx_buffer_size,
                                      1, &bufferData, NULL,
                                      &write_packet_to_buffer, NULL);
        if (!avio_ctx) return AVERROR(ENOMEM);
        //------------------------------------------------------

        avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
        if (!ofmt_ctx) {
            fprintf(stderr, "Could not create output context\n");
            return 1;
        }

        //------------------------  Stream list  ----------------------------
        stream_mapping_size = ifmt_ctx->nb_streams;
        stream_mapping = (int*)av_mallocz_array(stream_mapping_size, sizeof(*stream_mapping));
        if (!stream_mapping) {
            return 1;
        }
        //-------------------------------------------------------------------

        //------------------------  Output file  ----------------------------
        AVCodecContext *enc_ctx;
        for (unsigned int i = 0; i < ifmt_ctx->nb_streams; i++) {
            AVStream *out_stream;
            AVStream *in_stream = ifmt_ctx->streams[i];
            AVCodecParameters *in_codecpar = in_stream->codecpar;

            if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
                in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
                in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
                stream_mapping[i] = -1;
                continue;
            }

            /* allocate a (currently unused) encoder context; no encoder chosen yet */
            enc_ctx = avcodec_alloc_context3(NULL);
            if (!enc_ctx) {
                av_log(NULL, AV_LOG_FATAL, "Failed to allocate the encoder context\n");
                return AVERROR(ENOMEM);
            }

            stream_mapping[i] = stream_index++;

            out_stream = avformat_new_stream(ofmt_ctx, NULL);
            if (!out_stream) {
                fprintf(stderr, "Failed allocating output stream\n");
                return 1;
            }

            /* copy the codec parameters field by field */
            out_stream->codecpar->width                 = in_codecpar->width;
            out_stream->codecpar->height                = in_codecpar->height;
            out_stream->codecpar->level                 = in_codecpar->level;
            out_stream->codecpar->format                = in_codecpar->format;
            out_stream->codecpar->profile               = in_codecpar->profile;
            out_stream->codecpar->bit_rate              = in_codecpar->bit_rate;
            out_stream->codecpar->channels              = in_codecpar->channels;
            out_stream->codecpar->codec_tag             = 0;
            out_stream->codecpar->color_trc             = in_codecpar->color_trc;
            out_stream->codecpar->codec_type            = in_codecpar->codec_type;
            out_stream->codecpar->frame_size            = in_codecpar->frame_size;
            out_stream->codecpar->block_align           = in_codecpar->block_align;
            out_stream->codecpar->color_range           = in_codecpar->color_range;
            out_stream->codecpar->color_space           = in_codecpar->color_space;
            out_stream->codecpar->field_order           = in_codecpar->field_order;
            out_stream->codecpar->sample_rate           = in_codecpar->sample_rate;
            out_stream->codecpar->video_delay           = in_codecpar->video_delay;
            out_stream->codecpar->seek_preroll          = in_codecpar->seek_preroll;
            out_stream->codecpar->channel_layout        = in_codecpar->channel_layout;
            out_stream->codecpar->chroma_location       = in_codecpar->chroma_location;
            out_stream->codecpar->color_primaries       = in_codecpar->color_primaries;
            out_stream->codecpar->initial_padding       = in_codecpar->initial_padding;
            out_stream->codecpar->trailing_padding      = in_codecpar->trailing_padding;
            out_stream->codecpar->bits_per_raw_sample   = in_codecpar->bits_per_raw_sample;
            out_stream->codecpar->sample_aspect_ratio   = in_codecpar->sample_aspect_ratio;
            out_stream->codecpar->bits_per_coded_sample = in_codecpar->bits_per_coded_sample;

            /* force the output container's default codec IDs */
            if (in_codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
                out_stream->codecpar->codec_id = ofmt_ctx->oformat->video_codec;
            else if (in_codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
                out_stream->codecpar->codec_id = ofmt_ctx->oformat->audio_codec;
        }
        av_dump_format(ofmt_ctx, 0, out_filename, 1);
        ofmt_ctx->pb = avio_ctx;

        ret = avformat_write_header(ofmt_ctx, NULL);
        if (ret < 0) {
            fprintf(stderr, "Error occurred when opening output file\n");
            return 1;
        }
        //------------------------------------------------------------------------------

        while (1) {
            AVStream *in_stream, *out_stream;

            ret = av_read_frame(ifmt_ctx, &pkt);
            if (ret < 0)
                break;

            in_stream = ifmt_ctx->streams[pkt.stream_index];
            if (pkt.stream_index >= stream_mapping_size ||
                stream_mapping[pkt.stream_index] < 0) {
                av_packet_unref(&pkt);
                continue;
            }

            pkt.stream_index = stream_mapping[pkt.stream_index];
            out_stream = ofmt_ctx->streams[pkt.stream_index];

            /* copy packet, rescaling timestamps to the output time base */
            pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base,
                                       AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
            pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base,
                                       AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
            pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
            pkt.pos = -1;

            ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
            if (ret < 0) {
                fprintf(stderr, "Error muxing packet\n");
                break;
            }
            av_packet_unref(&pkt);
        }
        av_write_trailer(ofmt_ctx);
        avformat_close_input(&ifmt_ctx);

        /* dump the in-memory output to disk */
        writeBuffer(out_filename, &bufferData);
        avformat_free_context(ofmt_ctx);
        av_freep(&stream_mapping);
        if (ret < 0 && ret != AVERROR_EOF) {
            fprintf(stderr, "Error occurred: %d\n", ret);
            return 1;
        }
        return a.exec();
    }
  • What is the meaning of the return value of the "write_packet()/seek()" callback functions in the "AVIOContext" struct?

    12 February 2019, by pango

    I'm writing a muxer DirectShow filter using libav, and I need to redirect the muxer's output to the filter's output pin, so I use avio_alloc_context() to create an AVIOContext with my write_packet and seek callback functions. These two functions are declared as follows:

    int (*write_packet)(void *opaque, uint8_t *buf, int buf_size)
    int64_t (*seek)(void *opaque, int64_t offset, int whence)

    I can understand the meaning of these functions' input parameters, but what is the meaning of their return values? Is it the number of bytes actually written?

  • ffmpeg : switch to the new BSF API

    21 September 2016, by Clément Bœsch
    ffmpeg : switch to the new BSF API
    

    This commit is initially largely based on commit 4426540 from Anton
    Khirnov <anton@khirnov.net> and two following fixes (80fb19b and
    fe7b21c) which were previously skipped respectively in 98e3153, c9ee36e,
    and 7fe7cdc.

    mpeg4-bsf-unpack-bframes FATE reference is updated because the bsf
    filter now actually fixes the extradata (mpeg4_unpack_bframes_init()
    changing one byte is now honored on the output extradata).

    The FATE references for remove_extra change because the packet flags
    were wrong and the keyframes weren’t marked, causing the bsf relying on
    these properties to not actually work as intended.

    The following was fixed by James Almer:

    The filter option arguments are now also parsed correctly.

    A hack to propagate extradata changed by bitstream filters after the
    first av_bsf_receive_packet() call is added to maintain the current
    behavior. This was previously done by av_bitstream_filter_filter() and
    is needed for the aac_adtstoasc bsf.

    The exit_on_error was not being checked anymore, and led to an exit
    error in the last frame of h264_mp4toannexb test. Restoring this
    behaviour prevents erroring out. The test is still changed as a result
    due to the badly filtered frame now not being written after the failure.

    Signed-off-by: Clément Bœsch <u@pkh.me>
    Signed-off-by: James Almer <jamrial@gmail.com>

    • [DH] cmdutils.c
    • [DH] ffmpeg.c
    • [DH] ffmpeg.h
    • [DH] ffmpeg_opt.c
    • [DH] tests/ref/fate/ffmpeg-bsf-remove-k
    • [DH] tests/ref/fate/ffmpeg-bsf-remove-r
    • [DH] tests/ref/fate/h264_mp4toannexb_ticket2991
    • [DH] tests/ref/fate/mpeg4-bsf-unpack-bframes