Media (91)

Other articles (42)

  • What is an editorial?

    21 June 2013

    Write your point of view in an article. It will be filed in a section set aside for that purpose.
    An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. A single editorial is featured on the home page. To read earlier ones, browse the dedicated section.
    You can customise the editorial creation form.
    Editorial creation form In the case of a document of the editorial type, the (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an extra plugin that is not enabled by default when MediaSPIP is initialised.
    Once it is enabled, MediaSPIP init automatically applies a preconfiguration so that the new feature works out of the box. No configuration step is therefore required.

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (see its documentation for more information).
    It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.

On other sites (5168)

  • Android h264 decode non-existing PPS 0 referenced

    22 January 2014, by nmxprime

    In Android JNI, I use ffmpeg with libx264 and the code below to encode and decode raw RGB data. I should use swscale to convert RGB565 to YUV420P as H.264 requires, but I am not clear about this conversion. Please help me find where I am wrong, with regard to the log I get.

    Code for Encoding

    codecinit() - called once (JNI wrapper function)

    int Java_com_my_package_codecinit (JNIEnv *env, jobject thiz) {
    avcodec_register_all();
    codec = avcodec_find_encoder(AV_CODEC_ID_H264); //AV_CODEC_ID_MPEG1VIDEO
    if (!codec) {
       fprintf(stderr, "codec not found\n");
       __android_log_write(ANDROID_LOG_ERROR, "codec", "not found");
    }
    if (codec->id == AV_CODEC_ID_H264) /* dereference codec only after the NULL check */
       __android_log_write(ANDROID_LOG_ERROR, "set", "h264_encoder");

    __android_log_write(ANDROID_LOG_ERROR, "codec", "alloc-contest3");
    c= avcodec_alloc_context3(codec);
    if(c == NULL)
       __android_log_write(ANDROID_LOG_ERROR, "avcodec","context-null");

    picture= av_frame_alloc();

    if(picture == NULL)
       __android_log_write(ANDROID_LOG_ERROR, "picture","context-null");

    c->bit_rate = 400000;
    c->width = 480;
    c->height = 800;
    c->time_base = (AVRational){1,25};
    c->gop_size = 10;
    c->max_b_frames = 1;
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    outbuf_size = 768000;

    size = (c->width * c->height);

    if (avcodec_open2(c, codec,NULL) < 0) {

    __android_log_write(ANDROID_LOG_ERROR, "codec", "could not open");


    }

    ret = av_image_alloc(picture->data, picture->linesize, c->width, c->height,
                        c->pix_fmt, 32);
    if (ret < 0) {
           __android_log_write(ANDROID_LOG_ERROR, "image","alloc-failed");
       fprintf(stderr, "could not alloc raw picture buffer\n");

    }

    picture->format = c->pix_fmt;
    picture->width  = c->width;
    picture->height = c->height;
    return 0;

    }

    encodeframe() - called in a while loop

    int Java_com_my_package_encodeframe (JNIEnv *env, jobject thiz,jbyteArray buffer) {
    jbyte *temp= (*env)->GetByteArrayElements(env, buffer, 0);
    Output = (char *)temp;
    const uint8_t * const inData[1] = { (const uint8_t *)Output }; /* RGB565 source plane */
    const int inLinesize[1] = { 2*c->width }; /* 2 bytes per RGB565 pixel */

    //swscale should implement here (see the sketch after this listing)

       av_init_packet(&pkt);
       pkt.data = NULL;    // packet data will be allocated by the encoder
       pkt.size = 0;

       fflush(stdout);
    picture->data[0] = (uint8_t *)Output; /* NB: points the YUV420P frame at raw RGB565 bytes */
    ret = avcodec_encode_video2(c, &pkt, picture,&got_output);

       fprintf(stderr,"ret = %d, got-out = %d \n",ret,got_output);
        if (ret < 0) {
                   __android_log_write(ANDROID_LOG_ERROR, "error","encoding");
           if(got_output > 0)
           __android_log_write(ANDROID_LOG_ERROR, "got_output","is non-zero");

       }

       if (got_output) {
           fprintf(stderr,"encoding frame %3d (size=%5d): (ret=%d)\n", 1, pkt.size,ret);
           fprintf(stderr,"before caling decode");
           decode_inline(&pkt); //function that decodes right after the encode
           fprintf(stderr,"after caling decode");


           av_free_packet(&pkt);
       }


    fprintf(stderr,"y val: %d \n",y);


    (*env)->ReleaseByteArrayElements(env, buffer, temp, 0); /* release the jbyte* obtained above */
    return ((ret));
    }
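
    The //swscale should implement here placeholder above is where the missing RGB565 to YUV420P conversion would go. A minimal sketch of that step, assuming the Android side delivers little-endian RGB565 and that sws_rgb_ctx is a hypothetical SwsContext created once (for example in codecinit()); it writes into the planes that av_image_alloc() already attached to picture, so picture->data[0] should not be re-pointed at Output afterwards:

    /* Hypothetical one-time setup, e.g. in codecinit():
     * sws_rgb_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_RGB565LE,
     *                              c->width, c->height, AV_PIX_FMT_YUV420P,
     *                              SWS_BILINEAR, NULL, NULL, NULL);
     */

    /* In encodeframe(), in place of picture->data[0] = (uint8_t *)Output: */
    sws_scale(sws_rgb_ctx, inData, inLinesize, 0, c->height,
              picture->data, picture->linesize);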

    decode_inline() function

    static void decode_inline(AVPacket *avpkt) {
    AVCodec *codec;
    AVCodecContext *c = NULL;
    int frame, got_picture, len = -1,temp=0;

    AVFrame *rawFrame, *rgbFrame;
    uint8_t inbuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
    char buf[1024];
    char rawBuf[768000],rgbBuf[768000];

    struct SwsContext *sws_ctx;

    memset(inbuf + INBUF_SIZE, 0, FF_INPUT_BUFFER_PADDING_SIZE);
    avcodec_register_all();

    codec = avcodec_find_decoder(AV_CODEC_ID_H264); /* find the decoder before allocating a context for it */
    if (!codec) {
       fprintf(stderr, "codec not found\n");
       }

    c = avcodec_alloc_context3(codec);
    if (c == NULL)
       __android_log_write(ANDROID_LOG_ERROR, "avcodec", "context-null");
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    c->width = 480;
    c->height = 800;

    rawFrame = av_frame_alloc();
    rgbFrame = av_frame_alloc();

    if (avcodec_open2(c, codec, NULL) < 0) {
       fprintf(stderr, "could not open codec\n");
       exit(1);
       }
    sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
               c->width, c->height, AV_PIX_FMT_RGB565,
               SWS_BILINEAR, NULL, NULL, NULL);


    frame = 0;

    unsigned short *decodedpixels = (unsigned short *)rawBuf;
    rawFrame->data[0] = (uint8_t *)rawBuf;
    rgbFrame->data[0] = (uint8_t *)rgbBuf;

    fprintf(stderr,"size of avpkt %d \n",avpkt->size);
    temp = avpkt->size;
    while (temp > 0) {
           len = avcodec_decode_video2(c, rawFrame, &got_picture, avpkt);

           if (len < 0) {
               fprintf(stderr, "Error while decoding frame %d\n", frame);
               exit(1);
               }
           temp -= len;
           avpkt->data += len;

           if (got_picture) {
               printf("saving frame %3d\n", frame);
               fflush(stdout);
           //TODO  
           //memcpy(decodedpixels,rawFrame->data[0],rawFrame->linesize[0]);
           //  decodedpixels +=rawFrame->linesize[0];

               frame++;
               }

           }

    avcodec_close(c);
    av_free(c);
    //free(rawBuf);
    //free(rgbBuf);
    av_frame_free(&rawFrame);
    av_frame_free(&rgbFrame);

    }
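
    For the TODO in the got_picture branch, the sws_ctx created above is what would turn the decoded YUV420P frame back into RGB565. A hedged sketch, assuming rgbBuf is large enough for one 480x800 RGB565 frame:

    /* Sketch only: convert the decoded frame into rgbBuf for display. */
    uint8_t *dst[1] = { (uint8_t *)rgbBuf };
    int dstStride[1] = { 2 * c->width }; /* 2 bytes per RGB565 pixel */
    sws_scale(sws_ctx, (const uint8_t * const *)rawFrame->data,
              rawFrame->linesize, 0, c->height, dst, dstStride);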

    The log I get

    For the decode_inline() function:


    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] Invalid mix of idr and non-idr slices
    01-02 14:50:50.160: I/stderr(3407): Error while decoding frame 0

    Edit: changing the GOP value:

    If I change c->gop_size = 3;, it emits one I-frame every three frames, as expected. The "non-existing PPS 0 referenced" message is absent on every third run, but all the other runs show it.
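
    A plausible reading, for what it is worth: with libx264's default in-band headers, the SPS/PPS NAL units travel only with IDR frames, and decode_inline() builds a fresh decoder for every packet, so only the packets that happen to be keyframes carry their own parameter sets. One hedged workaround (a sketch, not the poster's code; enc stands for the encoder context, the global c from codecinit()) is to request out-of-band headers and hand them to the decoder before its avcodec_open2():

    /* On the encoder context, before avcodec_open2(): */
    enc->flags |= CODEC_FLAG_GLOBAL_HEADER; /* SPS/PPS land in enc->extradata */

    /* On the decoder context inside decode_inline(), before avcodec_open2(): */
    c->extradata = av_mallocz(enc->extradata_size + FF_INPUT_BUFFER_PADDING_SIZE);
    memcpy(c->extradata, enc->extradata, enc->extradata_size);
    c->extradata_size = enc->extradata_size;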

  • Error in video streaming using libavformat : VBV buffer size not set, muxing may fail

    15 January 2014, by Blue Sky

    I stream a video using libavformat as follows:

    static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
                           enum AVCodecID codec_id)
    {
    AVCodecContext *c;
    AVStream *st;
    /* find the encoder */
    *codec = avcodec_find_encoder(codec_id);
    if (!(*codec)) {
       fprintf(stderr, "Could not find encoder for '%s'\n",
               avcodec_get_name(codec_id));
       exit(1);
    }
    st = avformat_new_stream(oc, *codec);
    if (!st) {
       fprintf(stderr, "Could not allocate stream\n");
       exit(1);
    }
    st->id = oc->nb_streams-1;
    c = st->codec;
    switch ((*codec)->type) {
    case AVMEDIA_TYPE_AUDIO:
       c->sample_fmt  = (*codec)->sample_fmts ?
           (*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
       c->bit_rate    = 64000;
       c->sample_rate = 44100;
       c->channels    = 2;
       break;
    case AVMEDIA_TYPE_VIDEO:
       c->codec_id = codec_id;
       c->bit_rate = 400000;
       /* Resolution must be a multiple of two. */
       c->width    = outframe_width;
       c->height   = outframe_height;
       /* timebase: This is the fundamental unit of time (in seconds) in terms
        * of which frame timestamps are represented. For fixed-fps content,
        * timebase should be 1/framerate and timestamp increments should be
        * identical to 1. */
       c->time_base.den = STREAM_FRAME_RATE;
       c->time_base.num = 1;
       c->gop_size      = 12; /* emit one intra frame every twelve frames at most */
       c->pix_fmt       = STREAM_PIX_FMT;
       if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
           /* just for testing, we also add B frames */
           c->max_b_frames = 2;
       }
       if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
           /* Needed to avoid using macroblocks in which some coeffs overflow.
            * This does not happen with normal video, it just happens here as
            * the motion of the chroma plane does not match the luma plane. */
           c->mb_decision = 2;
       }
    break;
    default:
       break;
    }
    /* Some formats want stream headers to be separate. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
       c->flags |= CODEC_FLAG_GLOBAL_HEADER;
    return st;
    }

    But when I run this code, I get the following error/warning:

    [mpeg @ 01f3f040] VBV buffer size not set, muxing may fail

    Do you know how I can set the VBV buffer size in the code? In fact, when I use ffplay to display the streamed video, ffplay shows nothing for short videos, but for long videos it starts displaying immediately. So it looks as if ffplay needs a buffer to fill to a certain level before it can start displaying the stream. Am I right?
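
    For reference, the warning points at the rate-control fields of AVCodecContext. A hedged sketch of setting them in add_stream()'s AVMEDIA_TYPE_VIDEO branch (the values are illustrative, not tuned):

    c->bit_rate       = 400000;
    c->rc_max_rate    = 400000; /* peak rate the VBV model allows */
    c->rc_min_rate    = 400000; /* min == max approximates constant bitrate */
    c->rc_buffer_size = 800000; /* VBV buffer size, in bits */

    With rc_buffer_size (and rc_max_rate) set, the MPEG muxer has a value to write into its pack headers, which is what the warning is about.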