
Other articles (65)

  • Updating from version 0.1 to 0.2

    24 June 2013

    Explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What's new?
    Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favor of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customizing by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes in your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorial content.
    You can customize the form used to create a news item.
    News-creation form: for a document of type "news", the default fields are: publication date (customize the publication date) (...)

On other sites (12733)

  • lavf: switch to AVStream.time_base as the hint for the muxer timebase

    18 May 2014, by Anton Khirnov
    lavf: switch to AVStream.time_base as the hint for the muxer timebase

    Previously, AVStream.codec.time_base was used for that purpose, which
    was quite confusing for the callers. This change also opens the path for
    removing AVStream.codec.

    The change in the lavf-mkv test is due to the native timebase (1/1000)
    being used instead of the default one (1/90000), so the packets are now
    sent to the crc muxer in the same order in which they are demuxed
    (previously some of them got reordered because of inexact timestamp
    conversion).

    • [DBH] doc/APIchanges
    • [DBH] libavformat/avformat.h
    • [DBH] libavformat/avienc.c
    • [DBH] libavformat/filmstripenc.c
    • [DBH] libavformat/framehash.c
    • [DBH] libavformat/movenc.c
    • [DBH] libavformat/mpegtsenc.c
    • [DBH] libavformat/mux.c
    • [DBH] libavformat/mxfenc.c
    • [DBH] libavformat/oggenc.c
    • [DBH] libavformat/riffenc.c
    • [DBH] libavformat/rmenc.c
    • [DBH] libavformat/swf.h
    • [DBH] libavformat/swfenc.c
    • [DBH] libavformat/utils.c
    • [DBH] libavformat/version.h
    • [DBH] libavformat/yuv4mpegenc.c
    • [DBH] tests/ref/lavf/mkv
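
    The change above means a muxing application now hints its preferred timebase through AVStream.time_base before avformat_write_header() and then reads back the value the muxer actually chose. A minimal C sketch of that calling pattern (the function name and the 1/1000 hint are illustrative, not taken from the commit):

    #include <libavformat/avformat.h>

    /* Hint the muxer timebase via AVStream.time_base, then use whatever
     * value the muxer actually selected for subsequent packet timestamps. */
    static int write_header_with_timebase_hint(AVFormatContext *oc, AVStream *st)
    {
        int ret;

        st->time_base = (AVRational){1, 1000}; /* e.g. Matroska's native 1/1000 */

        ret = avformat_write_header(oc, NULL);
        if (ret < 0)
            return ret;

        /* avformat_write_header() may overwrite the hint with the timebase it
         * will really use; packet pts/dts must be expressed in that timebase. */
        av_log(oc, AV_LOG_INFO, "muxer timebase: %d/%d\n",
               st->time_base.num, st->time_base.den);
        return 0;
    }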
  • Cropping Square Video using FFmpeg

    27 June 2014, by zoruc

    Updated

    So I am trying to decode an MP4 file, crop the video into a square, and then re-encode it back out to another MP4 file. This is my current code, but there are a few issues with it.

    One is that the video doesn't keep its rotation after it has been re-encoded.

    Second is that the frames get output as a very fast video file that is not the same length as the original.

    Third is that there is no sound

    Lastly, and most importantly: do I need AVFilter to do the frame cropping, or can it just be done per frame, as a resize of each frame that is then encoded back out? (A rough sketch of the filtergraph path follows the code below.)

    #include <stdio.h>
    #include <string.h>

    #include <libavcodec/avcodec.h>
    #include <libavfilter/avfilter.h>
    #include <libavfilter/buffersink.h>
    #include <libavfilter/buffersrc.h>
    #include <libavformat/avformat.h>

    const char *inputPath = "test.mp4";
    const char *outPath = "cropped.mp4";
    const char *outFileType = "mp4";

    static AVFrame *oframe = NULL;
    static AVFilterGraph *filterGraph = NULL;  
    static AVFilterContext *crop_ctx = NULL;
    static AVFilterContext *buffersink_ctx = NULL;
    static AVFilterContext *buffer_ctx = NULL;

    int err;

    int crop_video(int width, int height) {

    av_register_all();
    avcodec_register_all();
    avfilter_register_all();

    AVFormatContext *inCtx = NULL;

    // open input file
    err = avformat_open_input(&inCtx, inputPath, NULL, NULL);
    if (err < 0) {
       printf("error at open input in\n");
       return err;
    }

    // get input file stream info
    err = avformat_find_stream_info(inCtx, NULL);
    if (err < 0) {
       printf("error at find stream info\n");
       return err;
    }

    // get info about video
    av_dump_format(inCtx, 0, inputPath, 0);

    // find video input stream
    int vs = -1;
    int s;
    for (s = 0; s < inCtx->nb_streams; ++s) {
       if (inCtx->streams[s] && inCtx->streams[s]->codec && inCtx->streams[s]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
           vs = s;
           break;
       }
    }

    // check if video stream is valid
    if (vs == -1) {
       printf("error at open video stream\n");
       return -1;
    }

    // set output format
    AVOutputFormat * outFmt = av_guess_format(outFileType, NULL, NULL);
    if (!outFmt) {
       printf("error at output format\n");
       return -1;
    }

    // get an output context to write to
    AVFormatContext *outCtx = NULL;
    err = avformat_alloc_output_context2(&outCtx, outFmt, NULL, NULL);
    if (err < 0 || !outCtx) {
       printf("error at output context\n");
       return err;
    }

    // input and output stream
    AVStream *outStrm = avformat_new_stream(outCtx, NULL);
    AVStream *inStrm = inCtx->streams[vs];

    // add a new codec for the output stream
    AVCodec *codec = NULL;
    avcodec_get_context_defaults3(outStrm->codec, codec);

    outStrm->codec->thread_count = 1;

    outStrm->codec->coder_type = AVMEDIA_TYPE_VIDEO;

    if(outCtx->oformat->flags & AVFMT_GLOBALHEADER) {
       outStrm->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
    }

    outStrm->codec->sample_aspect_ratio = outStrm->sample_aspect_ratio = inStrm->sample_aspect_ratio;

    err = avio_open(&outCtx->pb, outPath, AVIO_FLAG_WRITE);
    if (err < 0) {
       printf("error at opening outpath\n");
       return err;
    }

    outStrm->disposition = inStrm->disposition;
    outStrm->codec->bits_per_raw_sample = inStrm->codec->bits_per_raw_sample;
    outStrm->codec->chroma_sample_location = inStrm->codec->chroma_sample_location;
    outStrm->codec->codec_id = inStrm->codec->codec_id;
    outStrm->codec->codec_type = inStrm->codec->codec_type;

    if (!outStrm->codec->codec_tag) {
       if (! outCtx->oformat->codec_tag
           || av_codec_get_id (outCtx->oformat->codec_tag, inStrm->codec->codec_tag) == outStrm->codec->codec_id
           || av_codec_get_tag(outCtx->oformat->codec_tag, inStrm->codec->codec_id) <= 0) {
           outStrm->codec->codec_tag = inStrm->codec->codec_tag;
       }
    }

    outStrm->codec->bit_rate = inStrm->codec->bit_rate;
    outStrm->codec->rc_max_rate = inStrm->codec->rc_max_rate;
    outStrm->codec->rc_buffer_size = inStrm->codec->rc_buffer_size;

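    // copy the input stream's codec extradata (e.g. H.264 SPS/PPS) so the
    // output stream carries the same global headers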
    const size_t extra_size_alloc = (inStrm->codec->extradata_size > 0) ?
    (inStrm->codec->extradata_size + FF_INPUT_BUFFER_PADDING_SIZE) :
    0;

    if (extra_size_alloc) {
       outStrm->codec->extradata = (uint8_t*)av_mallocz(extra_size_alloc);
       memcpy( outStrm->codec->extradata, inStrm->codec->extradata, inStrm->codec->extradata_size);
    }

    outStrm->codec->extradata_size = inStrm->codec->extradata_size;

    AVRational input_time_base = inStrm->time_base;
    AVRational frameRate = {25, 1};
    if (inStrm->r_frame_rate.num && inStrm->r_frame_rate.den
       && (1.0 * inStrm->r_frame_rate.num / inStrm->r_frame_rate.den < 1000.0)) {
       frameRate.num = inStrm->r_frame_rate.num;
       frameRate.den = inStrm->r_frame_rate.den;
    }

    outStrm->r_frame_rate = frameRate;
    outStrm->codec->time_base = inStrm->codec->time_base;

    outStrm->codec->pix_fmt = inStrm->codec->pix_fmt;
    outStrm->codec->width = width;
    outStrm->codec->height =  height;
    outStrm->codec->has_b_frames =  inStrm->codec->has_b_frames;

    if (!outStrm->codec->sample_aspect_ratio.num) {
       AVRational r0 = {0, 1};
       outStrm->codec->sample_aspect_ratio =
       outStrm->sample_aspect_ratio =
       inStrm->sample_aspect_ratio.num ? inStrm->sample_aspect_ratio :
       inStrm->codec->sample_aspect_ratio.num ?
       inStrm->codec->sample_aspect_ratio : r0;
    }

    avformat_write_header(outCtx, NULL);

    filterGraph = avfilter_graph_alloc();
    if (!filterGraph) {
       printf("could not open filter graph");
       return -1;
    }

    AVFilter *crop = avfilter_get_by_name("crop");
    AVFilter *buffer = avfilter_get_by_name("buffer");
    AVFilter *buffersink = avfilter_get_by_name("buffersink");

    char args[512];

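    // buffer source parameters: frame geometry, pixel format, time base and
    // sample aspect ratio of the decoded frames that will be fed into the graph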
    snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
            width, height, inStrm->codec->pix_fmt,
            inStrm->codec->time_base.num, inStrm->codec->time_base.den,
            inStrm->codec->sample_aspect_ratio.num, inStrm->codec->sample_aspect_ratio.den);

    err = avfilter_graph_create_filter(&buffer_ctx, buffer, NULL, args, NULL, filterGraph);
    if (err < 0) {
       printf("error initializing buffer filter\n");
       return err;
    }

    err = avfilter_graph_create_filter(&buffersink_ctx, buffersink, NULL, NULL, NULL, filterGraph);
    if (err < 0) {
       printf("unable to create buffersink filter\n");
       return err;
    }
    snprintf(args, sizeof(args), "%d:%d", width, height);
    err = avfilter_graph_create_filter(&crop_ctx, crop, NULL, args, NULL, filterGraph);
    if (err < 0) {
       printf("error initializing crop filter\n");
       return err;
    }

    err = avfilter_link(buffer_ctx, 0, crop_ctx, 0);
    if (err < 0) {
       printf("error linking filters\n");
       return err;
    }

    err = avfilter_link(crop_ctx, 0, buffersink_ctx, 0);
    if (err < 0) {
       printf("error linking filters\n");
       return err;
    }

    err = avfilter_graph_config(filterGraph, NULL);
    if (err < 0) {
       printf("error configuring the filter graph\n");
       return err;
    }

    printf("filtergraph configured\n");

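    // demux packets; video packets are currently copied to the output
    // unmodified (the decode/filter path below is commented out)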
    for (;;) {

       AVPacket packet = {0};
       av_init_packet(&packet);

       err = AVERROR(EAGAIN);
       while (AVERROR(EAGAIN) == err)
           err = av_read_frame(inCtx, &packet);

       if (err < 0) {
           if (AVERROR_EOF != err && AVERROR(EIO) != err) {
               printf("eof error\n");
               return 1;
           } else {
               break;
           }
       }

       if (packet.stream_index == vs) {

           //
           //            AVPacket pkt_temp_;
           //            memset(&pkt_temp_, 0, sizeof(pkt_temp_));
           //            AVPacket *pkt_temp = &pkt_temp_;
           //
           //            *pkt_temp = packet;
           //
           //            int error, got_frame;
           //            int new_packet = 1;
           //
           //            error = avcodec_decode_video2(inStrm->codec, frame, &got_frame, pkt_temp);
           //            if(error < 0) {
           //                LOGE("error %d", error);
           //            }
           //
           //            // if (error >= 0) {
           //
           //            // push the video data from decoded frame into the filtergraph
           //            int err = av_buffersrc_write_frame(buffer_ctx, frame);
           //            if (err < 0) {
           //                LOGE("error writing frame to buffersrc");
           //                return -1;
           //            }
           //            // pull filtered video from the filtergraph
           //            for (;;) {
           //                int err = av_buffersink_get_frame(buffersink_ctx, oframe);
           //                if (err == AVERROR_EOF || err == AVERROR(EAGAIN))
           //                    break;
           //                if (err < 0) {
           //                    LOGE("error reading buffer from buffersink");
           //                    return -1;
           //                }
           //            }
           //
           //            LOGI("output frame");

           err = av_interleaved_write_frame(outCtx, &packet);
           if (err < 0) {
               printf("error at write frame");
               return -1;
           }

           //}
       }

       av_free_packet(&packet);
    }

    av_write_trailer(outCtx);
    if (!(outCtx->oformat->flags & AVFMT_NOFILE) && outCtx->pb)
       avio_close(outCtx->pb);

    avformat_free_context(outCtx);
    avformat_close_input(&inCtx);

    return 0;

    }
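
    On the last question above: with the buffer -> crop -> buffersink graph that the code already builds, the cropping is normally done per decoded frame, by pushing frames into the buffer source, pulling the cropped frames from the buffer sink, and re-encoding them; it cannot be applied to the compressed packets that are currently copied straight through. What follows is only a rough sketch of that per-packet path, assuming an encoder context enc (not present in the original code) has been opened to match the crop size and that the file-level globals from the question are in scope; the rotation and missing-audio problems are separate issues it does not address.

    static int filter_and_encode_packet(AVCodecContext *dec, AVCodecContext *enc,
                                        AVFormatContext *outCtx, AVStream *outStrm,
                                        AVPacket *packet)
    {
        AVFrame *frame   = av_frame_alloc();
        AVFrame *cropped = av_frame_alloc();
        int got_frame = 0, ret;

        // decode the compressed packet into a raw frame
        ret = avcodec_decode_video2(dec, frame, &got_frame, packet);
        if (ret < 0)
            goto end;

        if (got_frame) {
            // push the decoded frame into the buffer -> crop -> buffersink graph
            ret = av_buffersrc_write_frame(buffer_ctx, frame);
            if (ret < 0)
                goto end;

            // pull every cropped frame the graph produces and encode it
            while ((ret = av_buffersink_get_frame(buffersink_ctx, cropped)) >= 0) {
                AVPacket outPkt = {0};
                int got_packet = 0;

                av_init_packet(&outPkt);
                ret = avcodec_encode_video2(enc, &outPkt, cropped, &got_packet);
                if (ret >= 0 && got_packet) {
                    // rescale encoder timestamps into the output stream timebase
                    if (outPkt.pts != AV_NOPTS_VALUE)
                        outPkt.pts = av_rescale_q(outPkt.pts, enc->time_base, outStrm->time_base);
                    if (outPkt.dts != AV_NOPTS_VALUE)
                        outPkt.dts = av_rescale_q(outPkt.dts, enc->time_base, outStrm->time_base);
                    outPkt.stream_index = outStrm->index;
                    ret = av_interleaved_write_frame(outCtx, &outPkt);
                }
                av_free_packet(&outPkt);
                av_frame_unref(cropped);
                if (ret < 0)
                    goto end;
            }
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                ret = 0; // no more frames buffered in the graph right now
        }

    end:
        av_frame_free(&frame);
        av_frame_free(&cropped);
        return ret;
    }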