Advanced search

Media (1)

Keyword: - Tags -/école

Other articles (62)

  • Improving the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the usability of multiple-select fields. See the two images below for a comparison.
    To use it, enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • Possible deployments

    31 January 2010, by

    Two types of deployment are possible, depending on two factors: the intended installation method (standalone or as a farm); and the expected number of daily encodings and the expected traffic.
    Video encoding is a heavy process that consumes a great deal of system resources (CPU and RAM), and this has to be taken into account. The system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If in doubt, contact the administrator of your MédiaSpip to find out.

On other sites (7518)

  • Error in video streaming using libavformat : VBV buffer size not set, muxing may fail

    15 January 2014, by Blue Sky

    I stream a video using libavformat as follows:

    static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
                           enum AVCodecID codec_id)
    {
    AVCodecContext *c;
    AVStream *st;
    /* find the encoder */
    *codec = avcodec_find_encoder(codec_id);
    if (!(*codec)) {
       fprintf(stderr, "Could not find encoder for '%s'\n",
               avcodec_get_name(codec_id));
       exit(1);
    }
    st = avformat_new_stream(oc, *codec);
    if (!st) {
       fprintf(stderr, "Could not allocate stream\n");
       exit(1);
    }
    st->id = oc->nb_streams-1;
    c = st->codec;
    switch ((*codec)->type) {
    case AVMEDIA_TYPE_AUDIO:
       c->sample_fmt  = (*codec)->sample_fmts ?
           (*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
       c->bit_rate    = 64000;
       c->sample_rate = 44100;
       c->channels    = 2;
       break;
    case AVMEDIA_TYPE_VIDEO:
       c->codec_id = codec_id;
       c->bit_rate = 400000;
       /* Resolution must be a multiple of two. */
       c->width    = outframe_width;
       c->height   = outframe_height;
       /* timebase: This is the fundamental unit of time (in seconds) in terms
        * of which frame timestamps are represented. For fixed-fps content,
        * timebase should be 1/framerate and timestamp increments should be
        * identical to 1. */
       c->time_base.den = STREAM_FRAME_RATE;
       c->time_base.num = 1;
       c->gop_size      = 12; /* emit one intra frame every twelve frames at most */
       c->pix_fmt       = STREAM_PIX_FMT;
       if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
           /* just for testing, we also add B frames */
           c->max_b_frames = 2;
       }
       if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
           /* Needed to avoid using macroblocks in which some coeffs overflow.
            * This does not happen with normal video, it just happens here as
            * the motion of the chroma plane does not match the luma plane. */
           c->mb_decision = 2;
       }
    break;
    default:
       break;
    }
    /* Some formats want stream headers to be separate. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
       c->flags |= CODEC_FLAG_GLOBAL_HEADER;
    return st;
    }

    But when I run this code, I get the following error/warning:

    [mpeg @ 01f3f040] VBV buffer size not set, muxing may fail

    Do you know how I can set the VBV buffer size in the code? Also, when I use ffplay to display the streamed video, ffplay doesn't show anything for short videos, but for long videos it starts displaying the video immediately. So it looks like ffplay needs a buffer to be filled to some level before it can start displaying the stream. Am I right?
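    A minimal sketch of one way to address this, not from the original post: the MPEG muxer prints "VBV buffer size not set" when the video codec context it receives still has rc_buffer_size at 0, so filling in the rate-control fields before avcodec_open2() should silence the warning. The helper name and the values below are illustrative.

    #include <libavcodec/avcodec.h>

    /* Hypothetical helper: call it on the video stream's codec context, right
     * after the AVMEDIA_TYPE_VIDEO settings in add_stream() and before
     * avcodec_open2(). */
    static void set_vbv_buffer(AVCodecContext *c)
    {
       c->rc_max_rate    = c->bit_rate;     /* peak bitrate the VBV model assumes  */
       c->rc_min_rate    = c->bit_rate;     /* optional: makes the stream near-CBR */
       c->rc_buffer_size = 2 * c->bit_rate; /* VBV (rate-control) buffer, in bits  */
    }

    That would also be consistent with the ffplay behaviour described above: the player has to probe and buffer part of the stream before it can start decoding, so very short streams may end before playback begins.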

  • Android h264 decode non-existing PPS 0 referenced

    22 January 2014, by nmxprime

    In Android JNI, I use ffmpeg with libx264 and the code below to encode and decode raw RGB data. I should use swscale to convert RGB565 to YUV420P as required by H.264, but I am not clear about this conversion. Please help me find where I am wrong, with regard to the log I get.

    Code for Encoding

    codecinit() - called once (JNI wrapper function)

    int Java_com_my_package_codecinit (JNIEnv *env, jobject thiz) {
    avcodec_register_all();
    codec = avcodec_find_encoder(AV_CODEC_ID_H264);//AV_CODEC_ID_MPEG1VIDEO);
    if(codec->id == AV_CODEC_ID_H264)
       __android_log_write(ANDROID_LOG_ERROR, "set","h264_encoder");

    if (!codec) {
       fprintf(stderr, "codec not found\n");
       __android_log_write(ANDROID_LOG_ERROR, "codec", "not found");

    }
       __android_log_write(ANDROID_LOG_ERROR, "codec", "alloc-contest3");
    c= avcodec_alloc_context3(codec);
    if(c == NULL)
       __android_log_write(ANDROID_LOG_ERROR, "avcodec","context-null");

    picture= av_frame_alloc();

    if(picture == NULL)
       __android_log_write(ANDROID_LOG_ERROR, "picture","context-null");

    c->bit_rate = 400000;
    c->height = 800;
    c->time_base= (AVRational){1,25};
    c->gop_size = 10;
    c->max_b_frames=1;
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    outbuf_size = 768000;
    c->width = 480;

    size = (c->width * c->height);

    if (avcodec_open2(c, codec,NULL) < 0) {

    __android_log_write(ANDROID_LOG_ERROR, "codec", "could not open");


    }

    ret = av_image_alloc(picture->data, picture->linesize, c->width, c->height,
                        c->pix_fmt, 32);
    if (ret < 0) {
           __android_log_write(ANDROID_LOG_ERROR, "image","alloc-failed");
       fprintf(stderr, "could not alloc raw picture buffer\n");

    }

    picture->format = c->pix_fmt;
    picture->width  = c->width;
    picture->height = c->height;
    return 0;

    }

    encodeframe() - called in a while loop

    int Java_com_my_package_encodeframe (JNIEnv *env, jobject thiz,jbyteArray buffer) {
    jbyte *temp= (*env)->GetByteArrayElements(env, buffer, 0);
    Output = (char *)temp;
    const uint8_t * const inData[1] = { Output };
    const int inLinesize[1] = { 2*c->width };

    //swscale should implement here

       av_init_packet(&pkt);
       pkt.data = NULL;    // packet data will be allocated by the encoder
       pkt.size = 0;

       fflush(stdout);
    picture->data[0] = Output;
    ret = avcodec_encode_video2(c, &pkt, picture,&got_output);

       fprintf(stderr,"ret = %d, got-out = %d \n",ret,got_output);
        if (ret < 0) {
                   __android_log_write(ANDROID_LOG_ERROR, "error","encoding");
           if(got_output > 0)
           __android_log_write(ANDROID_LOG_ERROR, "got_output","is non-zero");

       }

       if (got_output) {
           fprintf(stderr,"encoding frame %3d (size=%5d): (ret=%d)\n", 1, pkt.size,ret);
           fprintf(stderr,"before caling decode");
           decode_inline(&pkt); //function that decodes right after the encode
           fprintf(stderr,"after caling decode");


           av_free_packet(&pkt);
       }


    fprintf(stderr,"y val: %d \n",y);


    (*env)->ReleaseByteArrayElements(env, buffer, Output, 0);
    return ((ret));
    }

    decode_inline() function

    decode_inline(AVPacket *avpkt){
    AVCodec *codec;
    AVCodecContext *c = NULL;
    int frame, got_picture, len = -1,temp=0;

    AVFrame *rawFrame, *rgbFrame;
    uint8_t inbuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
    char buf[1024];
    char rawBuf[768000],rgbBuf[768000];

    struct SwsContext *sws_ctx;

    memset(inbuf + INBUF_SIZE, 0, FF_INPUT_BUFFER_PADDING_SIZE);
    avcodec_register_all();

    c= avcodec_alloc_context3(codec);
    if(c == NULL)
       __android_log_write(ANDROID_LOG_ERROR, "avcodec","context-null");

    codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec) {
       fprintf(stderr, "codec not found\n");
       fprintf(stderr, "codec = %p \n", codec);
       }
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    c->width = 480;
    c->height = 800;

    rawFrame = av_frame_alloc();
    rgbFrame = av_frame_alloc();

    if (avcodec_open2(c, codec, NULL) < 0) {
       fprintf(stderr, "could not open codec\n");
       exit(1);
       }
    sws_ctx = sws_getContext(c->width, c->height,/*PIX_FMT_RGB565BE*/
               PIX_FMT_YUV420P, c->width, c->height, AV_PIX_FMT_RGB565/*PIX_FMT_YUV420P*/,
               SWS_BILINEAR, NULL, NULL, NULL);


    frame = 0;

    unsigned short *decodedpixels = &rawBuf;
    rawFrame->data[0] = &rawBuf;
    rgbFrame->data[0] = &rgbBuf;

    fprintf(stderr,"size of avpkt %d \n",avpkt->size);
    temp = avpkt->size;
    while (temp > 0) {
           len = avcodec_decode_video2(c, rawFrame, &got_picture, avpkt);

           if (len < 0) {
               fprintf(stderr, "Error while decoding frame %d\n", frame);
               exit(1);
               }
           temp -= len;
           avpkt->data += len;

           if (got_picture) {
               printf("saving frame %3d\n", frame);
               fflush(stdout);
           //TODO  
           //memcpy(decodedpixels,rawFrame->data[0],rawFrame->linesize[0]);
           //  decodedpixels +=rawFrame->linesize[0];

               frame++;
               }

           }

    avcodec_close(c);
    av_free(c);
    //free(rawBuf);
    //free(rgbBuf);
    av_frame_free(&rawFrame);
    av_frame_free(&rgbFrame);

    }

    The log I get

    For the decode_inline() function:


    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] Invalid mix of idr and non-idr slices
    01-02 14:50:50.160: I/stderr(3407): Error while decoding frame 0

    Edit: Changing the GOP value:

    If I change c->gop_size = 3;, it emits one I-frame every three frames, as expected. The non-existing PPS 0 referenced message is then absent on every third execution, but all the other executions still show it.
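    A minimal sketch of the missing RGB565 to YUV420P step (the "//swscale should implement here" spot in encodeframe()), assuming the c, picture and Output variables from the question; the helper name is made up and the code is untested on Android:

    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    /* Hypothetical helper, not from the original post: convert one packed
     * RGB565 frame into the planar YUV420P buffers allocated for `picture`
     * by av_image_alloc() in codecinit(). */
    static int rgb565_to_yuv420p(AVCodecContext *c, AVFrame *picture,
                                 const uint8_t *rgb565)
    {
       const uint8_t * const src_data[1]   = { rgb565 };
       const int             src_stride[1] = { 2 * c->width }; /* RGB565 = 2 bytes/pixel */

       struct SwsContext *sws = sws_getContext(c->width, c->height, AV_PIX_FMT_RGB565,
                                               c->width, c->height, AV_PIX_FMT_YUV420P,
                                               SWS_BILINEAR, NULL, NULL, NULL);
       if (!sws)
           return -1;

       /* Fills picture->data[0..2] (Y, U, V planes) using picture->linesize. */
       sws_scale(sws, src_data, src_stride, 0, c->height,
                 picture->data, picture->linesize);
       sws_freeContext(sws);
       return 0;
    }

    In encodeframe() this would replace the direct picture->data[0] = Output; assignment, e.g. calling rgb565_to_yuv420p(c, picture, (const uint8_t *)Output); before avcodec_encode_video2(). The non-existing PPS 0 referenced errors look like a separate issue: decode_inline() opens a brand-new decoder for every packet, so non-keyframe packets arrive without the SPS/PPS and reference frames they need, which is consistent with the GOP-size observation above.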
