
Other articles (45)

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original available for download in case it cannot be read in a web browser; and retrieving the original document’s metadata to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

On other sites (6107)

  • What am I doing wrong with my audio writing in ffmpeg? [on hold]

    12 September 2014, by Michael Nguyen

    I’m trying to splice multiple video sources into one, and I’m having trouble with the audio portion: the audio part of my code doesn’t seem to work, and I don’t understand why. Could somebody help me understand what I am doing wrong? The method doing all the work is called renderMovieRequest.

    Thanks in advance.

    My entire code can be found here: http://pastebin.com/rAZkU3XZ

    Any help would be appreciated.
    Below is a snippet of the code (the full version is too long to post).

    int64_t timeBase;
    bool seek(AVFormatContext *pFormatCtx, int frameIndex){

       if(!pFormatCtx)
           return false;

       int64_t seekTarget = int64_t(frameIndex) * timeBase;

       if(av_seek_frame(pFormatCtx, -1, seekTarget, AVSEEK_FLAG_ANY) < 0) {
           ELOG("av_seek_frame failed.");
           return false;
       }

       return true;

    }

    typedef struct OutputStream {
       AVStream *st;
       /* pts of the next frame that will be generated */
       int64_t next_pts;
       int samples_count;
       AVFrame *frame;
       AVFrame *tmp_frame;
       float t, tincr, tincr2;
       struct SwsContext *sws_ctx;
       struct SwrContext *swr_ctx;
    } OutputStream;


    static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
    {
       /* rescale output packet timestamp values from codec to stream timebase */
       av_packet_rescale_ts(pkt, *time_base, st->time_base);
       pkt->stream_index = st->index;
       /* Write the compressed frame to the media file. */
       log_packet(fmt_ctx, pkt);
       return av_interleaved_write_frame(fmt_ctx, pkt);
    }
    /* Add an output stream. */
    static void add_stream(OutputStream *ost, AVFormatContext *oc,
                          AVCodec **codec,
                          enum AVCodecID codec_id) {
       AVCodecContext *c;
       int i;
       /* find the encoder */
       *codec = avcodec_find_encoder(codec_id);
       if (!(*codec)) {
           ELOG("Could not find encoder for '%s'\n", avcodec_get_name(codec_id));
           return;
       }
       ost->st = avformat_new_stream(oc, *codec);
       if (!ost->st) {
           ELOG("Could not allocate stream\n");
           return;
       }
       ost->st->id = oc->nb_streams-1;
       c = ost->st->codec;
       switch ((*codec)->type) {
       case AVMEDIA_TYPE_AUDIO:
           c->sample_fmt  = (*codec)->sample_fmts ?
               (*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
           c->bit_rate    = 64000;
           c->sample_rate = 44100;
           if ((*codec)->supported_samplerates) {
               c->sample_rate = (*codec)->supported_samplerates[0];
               for (i = 0; (*codec)->supported_samplerates[i]; i++) {
                   if ((*codec)->supported_samplerates[i] == 44100)
                       c->sample_rate = 44100;
               }
           }
           /* choose the layout first; c->channels is derived from it below,
            * after the encoder's supported layouts have been checked */
           c->channel_layout = AV_CH_LAYOUT_STEREO;
           if ((*codec)->channel_layouts) {
               c->channel_layout = (*codec)->channel_layouts[0];
               for (i = 0; (*codec)->channel_layouts[i]; i++) {
                   if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
                       c->channel_layout = AV_CH_LAYOUT_STEREO;
               }
           }
           c->channels        = av_get_channel_layout_nb_channels(c->channel_layout);
           ost->st->time_base = (AVRational){ 1, c->sample_rate };
           break;
       case AVMEDIA_TYPE_VIDEO:
           c->codec_id = codec_id;
           c->bit_rate = 400000;
           /* Resolution must be a multiple of two. */
    //        c->width    = 352;
    //        c->height   = 288;
           c->width    = 1280;
           c->height   = 720;

           /* timebase: This is the fundamental unit of time (in seconds) in terms
            * of which frame timestamps are represented. For fixed-fps content,
            * timebase should be 1/framerate and timestamp increments should be
            * identical to 1. */
           ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE };
           c->time_base       = ost->st->time_base;
           c->gop_size      = 12; /* emit one intra frame every twelve frames at most */
           c->pix_fmt       = STREAM_PIX_FMT;
           if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
               /* just for testing, we also add B frames */
               c->max_b_frames = 2;
           }
           if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
               /* Needed to avoid using macroblocks in which some coeffs overflow.
                * This does not happen with normal video, it just happens here as
                * the motion of the chroma plane does not match the luma plane. */
               c->mb_decision = 2;
           }
       break;
       default:
           break;
       }
       /* Some formats want stream headers to be separate. */
       if (oc->oformat->flags & AVFMT_GLOBALHEADER)
           c->flags |= CODEC_FLAG_GLOBAL_HEADER;
    }

    /**************************************************************/
    /* audio output */
    static AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt,
                                     uint64_t channel_layout,
                                     int sample_rate, int nb_samples)
    {
       AVFrame *frame = av_frame_alloc();
       int ret;
       if (!frame) {
           fprintf(stderr, "Error allocating an audio frame\n");
           exit(1);
       }
       frame->format = sample_fmt;
       frame->channel_layout = channel_layout;
       frame->sample_rate = sample_rate;
       frame->nb_samples = nb_samples;
       if (nb_samples) {
           ret = av_frame_get_buffer(frame, 0);
           if (ret < 0) {
               fprintf(stderr, "Error allocating an audio buffer\n");
               exit(1);
           }
       }
       return frame;
    }
    static int open_audio(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
    {
       AVCodecContext *c;
       int nb_samples;
       int ret;
       AVDictionary *opt = NULL;
       c = ost->st->codec;
       /* open it */
       av_dict_copy(&opt, opt_arg, 0);
       ret = avcodec_open2(c, codec, &opt);
       av_dict_free(&opt);
       if (ret < 0) {
           ELOG("Could not open audio codec: %s\n", av_err2str(ret));
           return ret;
       }
       /* init signal generator */
       ost->t     = 0;
       ost->tincr = 2 * M_PI * 110.0 / c->sample_rate;
       /* increment frequency by 110 Hz per second */
       ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;
       if (c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE)
           nb_samples = 10000;
       else
           nb_samples = c->frame_size;
       ost->frame     = alloc_audio_frame(c->sample_fmt, c->channel_layout,
                                          c->sample_rate, nb_samples);
       ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
                                          c->sample_rate, nb_samples);
       /* create resampler context */
           ost->swr_ctx = swr_alloc();
           if (!ost->swr_ctx) {
               ELOG("Could not allocate resampler context\n");
               return -300;
           }
           /* set options */
           av_opt_set_int       (ost->swr_ctx, "in_channel_count",   c->channels,       0);
           av_opt_set_int       (ost->swr_ctx, "in_sample_rate",     c->sample_rate,    0);
           av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt",      AV_SAMPLE_FMT_S16, 0);
           av_opt_set_int       (ost->swr_ctx, "out_channel_count",  c->channels,       0);
           av_opt_set_int       (ost->swr_ctx, "out_sample_rate",    c->sample_rate,    0);
           av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt",     c->sample_fmt,     0);
           /* initialize the resampling context */
           if ((ret = swr_init(ost->swr_ctx)) < 0) {
               ELOG("Failed to initialize the resampling context: %i\n", ret);
               return ret;
           }

           return 0;
    }

    /*
    * encode one audio frame and send it to the muxer
    * return 1 when encoding is finished, 0 otherwise
    */
    static int write_audio_frame(AVFormatContext *oc, OutputStream *ost, AVFrame *frame)
    {
       AVCodecContext *c;
       AVPacket pkt = { 0 }; // data and size must be 0;
    //    AVFrame *frame;
       int ret;
       int got_packet;
       int dst_nb_samples;
       av_init_packet(&pkt);
       c = ost->st->codec;
    //    frame = get_audio_frame(ost);
       if (frame) {
           /* convert samples from native format to destination codec format, using the resampler */
               /* compute destination number of samples */
               dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples,
                                               c->sample_rate, c->sample_rate, AV_ROUND_UP);
               av_assert0(dst_nb_samples == frame->nb_samples);
           /* when we pass a frame to the encoder, it may keep a reference to it
            * internally;
            * make sure we do not overwrite it here
            */
           ret = av_frame_make_writable(ost->frame);
           if (ret < 0) {
               ELOG("Unable to prepare frame for writing: Error code: %s", av_err2str(ret));
               return ret;
           }
               /* convert to destination format */
               ret = swr_convert(ost->swr_ctx,
                                 ost->frame->data, dst_nb_samples,
                                 (const uint8_t **)frame->data, frame->nb_samples);
               if (ret < 0) {
                   ELOG("Error while converting: %s\n", av_err2str(ret));
                   return -1;
               }
               frame = ost->frame;
           frame->pts = av_rescale_q(ost->samples_count, (AVRational){1, c->sample_rate}, c->time_base);
           ost->samples_count += dst_nb_samples;
       }
       ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
       if (ret < 0) {
           ELOG("Error encoding audio frame: %s\n", av_err2str(ret));
           return -1;
       }
       if (got_packet) {
           ret = write_frame(oc, &c->time_base, ost->st, &pkt);
           if (ret < 0) {
               ELOG( "Error while writing audio frame: %s\n", av_err2str(ret));
               return -1;
           }
       }
       return (frame || got_packet) ? 0 : 1;
    }


    /**************************************************************/
    /* video output */
    static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)
    {
       AVFrame *picture;
       int ret;
       picture = av_frame_alloc();
       if (!picture)
           return NULL;
       picture->format = pix_fmt;
       picture->width  = width;
       picture->height = height;
       /* allocate the buffers for the frame data */
       ret = av_frame_get_buffer(picture, 32);
       if (ret < 0) {
           fprintf(stderr, "Could not allocate frame data.\n");
           exit(1);
       }
       return picture;
    }


    static int open_video(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
    {
       int ret;
       AVCodecContext *c = ost->st->codec;
       AVDictionary *opt = NULL;
       av_dict_copy(&opt, opt_arg, 0);
       /* open the codec */
       ret = avcodec_open2(c, codec, &opt);
       av_dict_free(&opt);

       if (ret < 0) {
           ELOG("Could not open video codec: %s\n", av_err2str(ret));
           return ret;
       }
       /* allocate and init a re-usable frame */
       DLOG("Allocate and init a re-usable frame: %i x %i Format: %i", c->width, c->height, c->pix_fmt);
       ost->frame = alloc_picture(c->pix_fmt, c->width, c->height);
       if (!ost->frame) {
           ELOG("Could not allocate video frame\n");
           return -100;
       }

       /* If the output format is not YUV420P, then a temporary YUV420P
        * picture is needed too. It is then converted to the required
        * output format. */
       ost->tmp_frame = NULL;
       if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
           DLOG("input format is not YUV420P converting to size %i x %i", c->width, c->height);
           ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height);
           if (!ost->tmp_frame) {
               ELOG("Could not allocate temporary picture\n");
               return -200;
           }
       }

       return 0;
    }

    /*
    * encode one video frame and send it to the muxer
    * return 1 when encoding is finished, 0 otherwise
    */
    static int write_video_frame(AVFormatContext *oc, OutputStream *ost, AVFrame *frame)
    {
       int ret;
       AVCodecContext *c;
       int got_packet = 0;
       c = ost->st->codec;

       if (oc->oformat->flags & AVFMT_RAWPICTURE) {
           /* a hack to avoid data copy with some raw video muxers */
           AVPacket pkt;
           av_init_packet(&pkt);
           if (!frame)
               return 1;
           pkt.flags        |= AV_PKT_FLAG_KEY;
           pkt.stream_index  = ost->st->index;
           pkt.data          = (uint8_t *)frame;
           pkt.size          = sizeof(AVPicture);
           pkt.pts = pkt.dts = frame->pts;
           av_packet_rescale_ts(&pkt, c->time_base, ost->st->time_base);
           ret = av_interleaved_write_frame(oc, &pkt);
       } else {
           AVPacket pkt = { 0 };
           av_init_packet(&pkt);
           /* encode the image */
           ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
           if (ret < 0) {
               fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
               exit(1);
           }
           if (got_packet) {
               ret = write_frame(oc, &c->time_base, ost->st, &pkt);
           } else {
               ret = 0;
           }
       }
       if (ret < 0) {
           fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
           exit(1);
       }
       return (frame || got_packet) ? 0 : 1;
    }
    static void close_stream(AVFormatContext *oc, OutputStream *ost)
    {
       avcodec_close(ost->st->codec);
       av_frame_free(&ost->frame);
       av_frame_free(&ost->tmp_frame);
       sws_freeContext(ost->sws_ctx);
       swr_free(&ost->swr_ctx);
    }



    int renderMovieRequest(movieRequest *movieRequestObj, string outputPath) {
       AVOutputFormat *ofmt = NULL;
       AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
       AVFormatContext *pFormatCtx = NULL;
       AVCodec *audio_codec, *video_codec;

       OutputStream video_st = { 0 }, audio_st = { 0 };
       size_t            i;
       int             videoStream, audioStream;
       AVCodecContext  *pCodecCtx = NULL;
       AVCodec         *pCodec = NULL;
       AVFrame         *pFrame = NULL;
       AVFrame         *pFrameRGB = NULL;
       AVPacket        packet = { 0 };
       int             frameFinished;
       int             audioFrameFinished;
       int             numBytes;
       uint8_t         *buffer = NULL;
       AVDictionary    *optionsDict = NULL;
       AVDictionary *opt = NULL;
       struct SwsContext      *sws_ctx = NULL;

       const char *in_filename, *out_filename;
       int ret;

       int have_audio = 0, have_video = 0;
       int encode_audio = 0, encode_video = 0;

       processProtobuf(movieRequestObj);

       out_filename = outputPath.c_str();

       av_register_all();

       DLOG("attempting to create context for output file %s", out_filename);

       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
       if (!ofmt_ctx) {
           ELOG("Could not create output context\n");
           ret = AVERROR_UNKNOWN;
           return ret; //goto end;
       }
       ofmt = ofmt_ctx->oformat;

      /* Add the audio and video streams using the default format codecs
          * and initialize the codecs. */
         if (ofmt->video_codec != AV_CODEC_ID_NONE) {
             add_stream(&video_st, ofmt_ctx, &video_codec, ofmt->video_codec);
             have_video = 1;
             encode_video = 1;
         }
         if (ofmt->audio_codec != AV_CODEC_ID_NONE) {
             add_stream(&audio_st, ofmt_ctx, &audio_codec, ofmt->audio_codec);
             have_audio = 1;
             encode_audio = 1;
         }

       DLOG("allocate encode buffers");
    /* Now that all the parameters are set, we can open the audio and
        * video codecs and allocate the necessary encode buffers. */
       if (have_video)
           open_video(ofmt_ctx, video_codec, &video_st, opt);
       if (have_audio) {
           DLOG("Opening audio codec");
           open_audio(ofmt_ctx, audio_codec, &audio_st, opt);
       }

       DLOG("open output file for writing");
      /* open the output file, if needed */
       if (!(ofmt->flags & AVFMT_NOFILE)) {
           ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
           if (ret < 0) {
               ELOG( "Could not open '%s': %s\n", out_filename, av_err2str(ret));
               return 1;
           }
       }

       /* Write the stream header, if any. */
       ret = avformat_write_header(ofmt_ctx, &opt);
       if (ret < 0) {
           ELOG("Error occurred when opening output file: %s\n", av_err2str(ret));
           return 1;
       }

       vector<clipshptr> * clips = &(movieRequestObj->clips);

       DLOG("ready to process clips: %i", clips->size());
       for (size_t clipIdx = 0; clipIdx < clips->size(); ++clipIdx) {

           shared_ptr<clip> currentClip = clips->at(clipIdx);

           switch (currentClip->getClipType()) {
               case VIDEO_CLIP: {
                   DLOG("clip is a video clip...");

                   shared_ptr<videoclip> vidClip = dynamic_pointer_cast<videoclip>(clips->at(clipIdx));

                   if (vidClip->shouldHaveSegments) {
                       // open the file for reading and create a temporary file for output
                       in_filename = vidClip->vidFileName.c_str();
                       DLOG("Opening %s for reading", in_filename);

                       if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
                           ELOG("Could not open input file '%s'", in_filename);
                           return ret; //goto end;
                       }

                       if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
                           ELOG("Failed to retrieve input stream information");
                           return ret; //goto end;
                       }

                       av_dump_format(ifmt_ctx, 0, in_filename, 0);

                       videoStream = -1;
                       audioStream = -1;
                       // setup input format context and output format context;
    //                    AVStream *video_in_stream = NULL;
                       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
                           if(ifmt_ctx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
                               videoStream=i;
    //                            video_in_stream = ifmt_ctx->streams[i];
                           }
                           else if(ifmt_ctx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO) {
                               audioStream=i;
    //                            video_in_stream = ifmt_ctx->streams[i];
                           }
                       }

                       if (videoStream == -1) {
                           DLOG("not a video stream.");
                           continue;
                       }

                       // Get a pointer to the codec context for the video stream
                       pCodecCtx = ifmt_ctx->streams[videoStream]->codec;
                       if (pCodecCtx == NULL) {
                           ELOG("Error in getting pointer to codec for vidstream");
                       }

                       DLOG("Input pixel format: %i ", pCodecCtx->pix_fmt);

                       // Find the decoder for the video stream
                       pCodec=avcodec_find_decoder(pCodecCtx->codec_id);

                       if(pCodec==NULL) {
                           ELOG("Unsupported codec!\n");
                           return -1; // Codec not found
                       }
                       // Open codec
                       if(avcodec_open2(pCodecCtx, pCodec, &optionsDict) < 0) {
                           ELOG("Unable to open codec");
                           return -1; // Could not open codec
                       }

                       // get the timebase
                       timeBase = (int64_t(pCodecCtx->time_base.num) * AV_TIME_BASE) / int64_t(pCodecCtx->time_base.den);

                       // Allocate video frame
                       pFrame=av_frame_alloc();

                       // Allocate an AVFrame structure
                       pFrameRGB=av_frame_alloc();
                       if(pFrameRGB==NULL)
                           return -1;

                       // Determine required buffer size and allocate buffer
    //                    numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
                       numBytes = avpicture_get_size(PIX_FMT_RGB24, movieRequestObj->width, movieRequestObj->height);
                       DLOG("Buffer size allocated: %i x %i: %i ", movieRequestObj->width, movieRequestObj->height, numBytes);
                       buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

                       sws_ctx = sws_getContext
                       (
                           pCodecCtx->width,
                           pCodecCtx->height,
                           pCodecCtx->pix_fmt,
                           movieRequestObj->width,
                           movieRequestObj->height,
                           PIX_FMT_RGB24,
                           SWS_BILINEAR,
                           NULL,
                           NULL,
                           NULL
                       );

                       // Assign appropriate parts of buffer to image planes in pFrameRGB
                       // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
                       // of AVPicture
                       avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24, movieRequestObj->width, movieRequestObj->height);
                       size_t numSegments = vidClip->segments.size();

                       DLOG("Found %i segments to process", numSegments);
                       for (size_t segmentIdx = 0; segmentIdx < numSegments; ++segmentIdx) {
                           // seek to the right position
                           int frameOffset = vidClip->segments.at(segmentIdx).first;
                           int clipDuration = vidClip->segments.at(segmentIdx).second;
                           DLOG("Starting Frame Number: %i Duration: %i", frameOffset, clipDuration);

                           seek(ifmt_ctx, frameOffset);
                           // loop for X frames where X is < frameOffset + clipDuration; clipDuration is the length of the clip in terms of frames
                           for (int frameIdx = frameOffset; frameIdx < (frameOffset + clipDuration); ++frameIdx) {
                               av_init_packet(&packet);
                               int avReadResult = 0;
                               int continueRecording = 1;
                               while ((continueRecording == 1) && (frameIdx < (frameOffset + clipDuration) )) {
                                   avReadResult = av_read_frame(ifmt_ctx, &packet);
                                   if(avReadResult != 0){
                                       if (avReadResult != AVERROR_EOF) {
                                           ELOG("av_read_frame error: %i", avReadResult );
                                       } else {
                                           ILOG("End of input file");
                                       }
                                       continueRecording = 0;
                                   }
                                   // Is this a packet from the video stream?
                                   if(packet.stream_index==videoStream) {
                                       // Decode video frame into pFrame (sws_scale below reads pFrame->data)
                                       avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

                                       // Did we get a video frame?
                                       if(frameFinished) {
                                           // Convert the image from its native format to RGB
                                           sws_scale
                                           (
                                              sws_ctx,
                                              (uint8_t const * const *)pFrame->data,
                                              pFrame->linesize,
                                              0,
                                              pCodecCtx->height,
                                              pFrameRGB->data,
                                              pFrameRGB->linesize
                                           );
                                           write_video_frame(ofmt_ctx, &video_st, pFrameRGB);
                                           frameIdx++;
                                       }

                                   }
                                   else if (packet.stream_index == audioStream) {
                                       // Decode audio frame
                                       DLOG("Audio frame found");
                                       // BUG: audio packets must be decoded with the audio
                                       // stream's codec context into a dedicated audio AVFrame,
                                       // not with the video decoder into pFrameRGB
                                       avcodec_decode_audio4(pCodecCtx, pFrameRGB, &audioFrameFinished, &packet);

                                       if (audioFrameFinished) {
                                           // write the audio frame to file
                                           write_audio_frame(ofmt_ctx, &audio_st, pFrameRGB);

                                       }

                                   }
                                   // Free the packet that was allocated by av_read_frame
                                   av_free_packet(&packet);
                               }
                                   // Free the RGB image

                           }
                       }

                       DLOG("Cleaning up frame allocations");
                       av_free(buffer);
                       av_free(pFrameRGB);
                       // Free the YUV frame
                       av_free(pFrame);

                   } // end video clip processing
               }
               break;

               case TITLE_CLIP: {
                 }
               break;

               default:
                   ELOG("Failed to identify clip");
                   break;
           } // end switch statement

           DLOG("Finished processing clip #%i", clipIdx);
           avformat_close_input(&ifmt_ctx);
       } // end main for loop -> clip iteration


    /* Write the trailer, if any. The trailer must be written before you
        * close the CodecContexts open when you wrote the header; otherwise
        * av_write_trailer() may try to use memory that was freed on
        * av_codec_close(). */
       av_write_trailer(ofmt_ctx);

       /* Close each codec. */
       if (have_video)
           close_stream(ofmt_ctx, &video_st);
       if (have_audio)
           close_stream(ofmt_ctx, &audio_st);

       if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE)) {
           /* Close the output file. */
           avio_close(ofmt_ctx->pb);
       }

       DLOG("Closing input format context");
       avformat_close_input(&ifmt_ctx);

       DLOG("Free output format context");
       avformat_free_context(ofmt_ctx);

       if (ret < 0 && ret != AVERROR_EOF) {
           ELOG( "Error occurred: %s\n", av_err2str(ret));
           return 1;
       }

       return 0;
    }


    #ifdef __cplusplus
    }

    #endif
  • How to create a widget – Introducing the Piwik Platform

    4 September 2014, by Thomas Steur — Development

    This is the next post of our blog series where we introduce the capabilities of the Piwik platform (our previous post was How to create a scheduled task in Piwik). This time you’ll learn how to create a new widget. For this tutorial you will need to have basic knowledge of PHP.

    What is a widget in Piwik?

    Widgets can be added to your dashboards or exported via a URL to embed them on any page. Most widgets in Piwik represent a report, but a widget can display anything, for instance an RSS feed of your corporate news. If you prefer to have most of your business-relevant data in one dashboard, why not display the number of offline sales, the latest stock price, or other key metrics together with your analytics data?

    Getting started

    In this series of posts, we assume that you have already set up your development environment. If not, visit the Piwik Developer Zone where you’ll find the tutorial Setting up Piwik.

    To summarize, here is what you have to do to get set up:

    • Install Piwik (for instance via git).
    • Activate the developer mode: ./console development:enable --full.
    • Generate a plugin: ./console generate:plugin --name="MyWidgetPlugin". There should now be a folder plugins/MyWidgetPlugin.
    • And activate the created plugin under Settings => Plugins.

    Let’s start creating a widget

    We start by using the Piwik Console to create a widget template:

    ./console generate:widget

    The command will ask you to enter the name of the plugin the widget should belong to. I will simply use the plugin name chosen above, “MyWidgetPlugin”. It will ask you for a widget category as well. You can select any existing category, for instance “Visitors”, “Live!” or “Actions”, or you can define a new category, for instance your company name. There should now be a file plugins/MyWidgetPlugin/Widgets.php which already contains some examples to get you started easily:

    class Widgets extends \Piwik\Plugin\Widgets
    {
        /**
         * Here you can define the category the widget belongs to. You can reuse any existing widget category or define your own category.
         * @var string
         */
        protected $category = 'ExampleCompany';

        /**
         * Here you can add one or multiple widgets. You can add a widget by calling the method "addWidget()" and pass the name of the widget as well as a method name that should be called to render the widget. The method can be defined either directly here in this widget class or in the controller in case you want to reuse the same action for instance in the menu etc.
         */
        protected function init()
        {
            $this->addWidget('Example Widget Name', $method = 'myExampleWidget');
            $this->addWidget('Example Widget 2',    $method = 'myExampleWidget', $params = array('myparam' => 'myvalue'));
        }

        /**
         * This method renders a widget as defined in "init()". It's on you how to generate the content of the widget. As long as you return a string everything is fine. You can use for instance a "Piwik\View" to render a twig template. In such a case don't forget to create a twig template (eg. myViewTemplate.twig) in the "templates" directory of your plugin.
         *
         * @return string
         */
        public function myExampleWidget()
        {
            $view = new View('@MyWidgetPlugin/myViewTemplate');
            return $view->render();
        }
    }


    As you might have noticed, the generated template puts emphasis on comments that explain directly how to continue and where to get more information. Ideally this saves you some time and you don't even have to search for more information on our developer pages. The category is defined in the $category property and can be changed at any time. Starting from Piwik 2.6.0 the generator will directly create a translation key if necessary, making it easy to translate the category into any language. Translations will be the topic of one of our future posts; until then you can explore this feature in our Internationalization guide.

    A simple example

    We can define one or multiple widgets in the init method by calling addWidget($widgetName, $methodName). We pass the name of the widget, which your users will see, as well as the name of the method that renders it.

    protected $category = 'Example Company';

    public function init()
    {
       // Registers a widget named 'News' under the category 'Example Company'.
       // The method 'myCorporateNews' will be used to render the widget.
       $this->addWidget('News', $method = 'myCorporateNews');
    }

    public function myCorporateNews()
    {
       return file_get_contents('http://example.com/news');
    }

    This example would display the content of the specified URL within the widget, as defined in the method myCorporateNews. It's up to you how to generate the content of the widget. Any string returned by this method will be displayed within the widget. You can use for example a View to render a Twig template. For simplicity we are fetching the content from another site. A more complex version would cache this content for faster performance. Caching and views will be covered in one of our future posts as well.
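    As a rough sketch of that caching idea, the widget method could wrap its fetch in a small file-based cache. The function name, cache location and 5-minute TTL below are our own illustrative choices, not Piwik APIs:

    ```php
    <?php
    // Sketch only: cache the remote fetch so the dashboard does not hit the
    // remote site on every page load. Cache path and TTL are illustrative.
    function fetchNewsCached($url, $ttl = 300)
    {
        $cacheFile = sys_get_temp_dir() . '/widget_news_' . md5($url);

        // Serve the cached copy while it is still fresh
        if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
            return file_get_contents($cacheFile);
        }

        // Otherwise refresh from the remote site and store the result
        $content = file_get_contents($url);
        if ($content !== false) {
            file_put_contents($cacheFile, $content);
        }
        return $content === false ? '' : $content;
    }
    ```

    The widget method would then simply `return fetchNewsCached('http://example.com/news');`.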

    Example Widget

    Did you know? To make your life as a developer as stress-free as possible, the platform checks whether the registered method actually exists and whether it is public. If not, Piwik will display a notification in the UI and advise you on the next step.

    Checking permissions

    Often you do not want the content of a widget to be visible to everyone. You can check for permissions using one of our many convenient methods, which all start with \Piwik\Piwik::checkUser*. Here are just a few of them:

    // Make sure the current user has super user access
    \Piwik\Piwik::checkUserHasSuperUserAccess();

    // Make sure the current user is logged in and not anonymous
    \Piwik\Piwik::checkUserIsNotAnonymous();

    And here is an example of how you can use them within your widget:

    public function myCorporateNews()
    {
       // Make sure there is an idSite URL parameter
       $idSite = Common::getRequestVar('idSite', null, 'int');

       // Make sure the user has at least view access for the specified site. This is useful if you want to display data that is related to the specified site.
       Piwik::checkUserHasViewAccess($idSite);

       $siteUrl = \Piwik\Site::getMainUrlFor($idSite);

       return file_get_contents($siteUrl . '/news');
    }

    In case any condition is not met, an exception is thrown and an error message is presented to the user explaining that they do not have sufficient permissions. You'll find the documentation for those methods in the Piwik class reference.

    How to test a widget

    After you have created your widget you are surely wondering how to test it. First, you should write a unit or integration test, which we will cover in one of our future blog posts. Just one hint: you can use the command ./console generate:test to create a test. To manually test a widget you can add it to a dashboard or export it.

    Publishing your Plugin on the Marketplace

    In case you want to share your widgets with other Piwik users you can do this by pushing your plugin to a public GitHub repository and creating a tag. Easy as that. Read more about how to distribute a plugin.
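    For illustration, the release flow described above could look roughly like this (the repository URL and version number are placeholders, not prescribed by the Marketplace; pushing the tag is what makes the release visible):

    ```shell
    # Sketch of the publishing flow: commit the plugin and create a version tag.
    # Done here in a throwaway directory; the remote push is left commented out.
    cd "$(mktemp -d)"
    git init -q .
    echo '{"name":"MyWidgetPlugin"}' > plugin.json
    git add plugin.json
    git -c user.email=dev@example.com -c user.name=dev commit -q -m "Initial release"
    git tag 0.1.0
    git tag --list
    # git remote add origin https://github.com/example/MyWidgetPlugin.git
    # git push origin --tags
    ```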

    Advanced features

    Isn't it easy to create a widget? We never even created a file! Of course, based on our API design principle, “The complexity of our API should never exceed the complexity of your use case.”, you can accomplish more if you want: you can define parameters that will be passed to your widget, create a method in the Controller instead of the Widgets class so the same action can also be reused, for instance to add it to the menu, assign different categories to different widgets, remove widgets that were added by the Piwik core or other plugins, and more.

    Would you like to know more about widgets? Go to our Widgets class reference in the Piwik Developer Zone.

    If you have any feedback regarding our APIs or our guides in the Developer Zone feel free to send it to us.

  • Shared Hosting + Static FFMPEG + PHP (or other scripting language) == failure on finding codec

    7 July 2012, by Ethan Allen

    Here is the environment:

    1. 1and1 shared hosting (they do not have ffmpeg installed, all good)
    2. I built a static ffmpeg binary that does not require dynamic loading of shared libraries (I built this on an Ubuntu system)
    3. the ffmpeg binary is fully accessible and executable
    4. executing the same command via terminal, or executing a PHP/Perl script via terminal, works fine... however, both scripts fail through a browser/web request with the following:

    Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

    array
    0 => string 'ffmpeg version git-2012-07-06-6936111 Copyright (c) 2000-2012 the FFmpeg developers' (length=83)
    1 => string ' built on Jul 5 2012 23:04:34 with gcc 4.4.3' (length=46)
    2 => string ' configuration: --prefix=' /ffmpeg' --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree --enable-version3 --enable-static --disable-shared --extra-libs=-static --extra-cflags=-static' (length=323)
    3 => string ' libavutil 51. 64.100 / 51. 64.100' (length=40)
    4 => string ' libavcodec 54. 33.100 / 54. 33.100' (length=40)
    5 => string ' libavformat 54. 15.100 / 54. 15.100' (length=40)
    6 => string ' libavdevice 54. 1.100 / 54. 1.100' (length=40)
    7 => string ' libavfilter 3. 0.101 / 3. 0.101' (length=40)
    8 => string ' libswscale 2. 1.100 / 2. 1.100' (length=40)
    9 => string ' libswresample 0. 15.100 / 0. 15.100' (length=40)
    10 => string ' libpostproc 52. 0.100 / 52. 0.100' (length=40)
    11 => string 'Input #0, image2, from 'http://axiomchurch.co/main/wp-content/plugins/video-embed-thumbnail-generator/flash/skin/images/PlayNormal.png':' (length=136)
    12 => string ' Duration: 00:00:00.04, start: 0.000000, bitrate: N/A' (length=54)
    13 => string ' Stream #0:0: Video: png, rgba, 100x100, 25 tbr, 25 tbn, 25 tbc' (length=66)
    14 => string '[graph 0 input from stream 0:0 @ 0x9482000] w:100 h:100 pixfmt:rgba tb:1/25 fr:25/1 sar:0/1 sws_param:flags=2' (length=109)
    15 => string '[output stream 0:0 @ 0x948ccc0] No opaque field provided' (length=56)
    16 => string '[auto-inserted scaler 0 @ 0x948d160] w:100 h:100 fmt:rgba sar:0/1 -> w:100 h:100 fmt:yuvj420p sar:0/1 flags:0x4' (length=111)
    17 => string '[mjpeg @ 0x948c760] ff_frame_thread_encoder_init failed' (length=55)
    18 => string 'Output #0, image2, to '/homepages/17/d411786663/htdocs/main/wp-content/uploads/2012/07/ffmpeg_exists_test%d.jpg':' (length=113)
    19 => string ' Stream #0:0: Video: mjpeg, yuvj420p, 100x100, q=2-31, 200 kb/s, 90k tbn, 25 tbc' (length=83)
    20 => string 'Stream mapping:' (length=15)
    21 => string ' Stream #0:0 -> #0:0 (png -> mjpeg)' (length=36)
    22 => string 'Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height' (length=119)

    The command being executed:

    /kunden/homepages/17/.../htdocs/bin/ffmpeg -i http://....co/main/wp-content/plugins/video-embed-thumbnail-generator/flash/skin/images/PlayNormal.png -ac 2 /homepages/17/.../htdocs/main/wp-content/uploads/2012/07/ffmpeg_exists_test.jpg

    You can see I have the full path of my ffmpeg binary and that it is executing. Here is what it looks like when run successfully via terminal:

    ffmpeg version git-2012-07-06-6936111 Copyright (c) 2000-2012 the FFmpeg developers
    built on Jul 5 2012 23:04:34 with gcc 4.4.3
    configuration: --prefix=' /ffmpeg' --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree --enable-version3 --enable-static --disable-shared --extra-libs=-static --extra-cflags=-static
    libavutil 51. 64.100 / 51. 64.100
    libavcodec 54. 33.100 / 54. 33.100
    libavformat 54. 15.100 / 54. 15.100
    libavdevice 54. 1.100 / 54. 1.100
    libavfilter 3. 0.101 / 3. 0.101
    libswscale 2. 1.100 / 2. 1.100
    libswresample 0. 15.100 / 0. 15.100
    libpostproc 52. 0.100 / 52. 0.100
    Input #0, image2, from 'http://axiomchurch.co/main/wp-content/plugins/video-embed-thumbnail-generator/flash/skin/images/PlayNormal.png':
    Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
    Stream #0:0: Video: png, rgba, 100x100, 25 tbr, 25 tbn, 25 tbc
    [graph 0 input from stream 0:0 @ 0x9482000] w:100 h:100 pixfmt:rgba tb:1/25 fr:25/1 sar:0/1 sws_param:flags=2
    [output stream 0:0 @ 0x948ccc0] No opaque field provided
    [auto-inserted scaler 0 @ 0x948d160] w:100 h:100 fmt:rgba sar:0/1 -> w:100 h:100 fmt:yuvj420p sar:0/1 flags:0x4
    Output #0, image2, to '/homepages/17/d411786663/htdocs/main/wp-content/uploads/2012/07/ffmpeg_exists_test.jpg':
    Metadata:
    encoder: Lavf54.15.100
    Stream #0:0: Video: mjpeg, yuvj420p, 100x100, q=2-31, 200 kb/s, 90k tbn, 25 tbc
    Stream mapping:
    Stream #0:0 -> #0:0 (png -> mjpeg)
    Press [q] to stop, [?] for help
    frame= 1 fps=0.0 q=0.0 Lsize= 0kB time=00:00:00.04 bitrate= 0.0kbits/s
    video:2kB audio:0kB subtitle:0 global headers:0kB muxing overhead -100.000000%

    The user running the script at the terminal AND the Apache user are the same user... I have verified this.

    Something about the environment Apache is running through is killing me... I don't have access to the Apache error logs, unfortunately.
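    For what it's worth, here is a sketch of how the two environments could be compared (the file names are just placeholders, and the -threads idea is only a guess on my part):

    ```shell
    # Dump the environment and resource limits the shell session sees.
    # HOME, PATH, TMPDIR and limits such as the max process/thread count
    # (ulimit -u) often differ between a login shell and an Apache-spawned
    # process, and restricted thread creation would fit the
    # "ff_frame_thread_encoder_init failed" line above. Adding "-threads 1"
    # to the ffmpeg command is a cheap way to test that theory.
    env | sort > /tmp/env_shell.txt
    ulimit -a >> /tmp/env_shell.txt
    # From PHP, capture the same data as the web request sees:
    #   shell_exec('env | sort > /tmp/env_web.txt; ulimit -a >> /tmp/env_web.txt');
    # Then compare the two dumps:
    #   diff /tmp/env_shell.txt /tmp/env_web.txt
    ```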

    As a side note, I am trying to utilize the Video Embed & Thumbnail Generator plugin for WordPress.

    Any help appreciated, thanks !