Media (91)

Other articles (100)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once it is enabled, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. No configuration step is therefore required.

  • Dressing it up visually

    10 April 2011

    MediaSPIP is based on a system of themes and templates ("squelettes"). Templates define where information is placed on the page, defining a specific use of the platform, while themes provide the overall graphic design.
    Anyone can contribute a new graphic theme or template and make it available to the community.

  • Possibility of deployment as a farm

    12 April 2011, by

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share setup costs between several projects/individuals; to deploy a multitude of unique sites quickly; to avoid having to dump all creations into a digital catch-all, as is the case on the big general-public platforms scattered across the (...)

On other sites (6457)

  • Android h264 decode non-existing PPS 0 referenced

    22 January 2014, by nmxprime

    In Android JNI, I use ffmpeg with libx264 and the code below to encode and decode raw RGB data. I should use swscale to convert RGB565 to YUV420P as required by H.264, but I am not clear about this conversion. Please help me see where I am going wrong, given the log I get.

    Code for Encoding

    codecinit(), called once (JNI wrapper function)

    int Java_com_my_package_codecinit (JNIEnv *env, jobject thiz) {
    avcodec_register_all();
    codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (!codec) {
       fprintf(stderr, "codec not found\n");
       __android_log_write(ANDROID_LOG_ERROR, "codec", "not found");
       return -1; /* do not dereference codec if the encoder was not found */
    }
    if (codec->id == AV_CODEC_ID_H264)
       __android_log_write(ANDROID_LOG_ERROR, "set", "h264_encoder");

       __android_log_write(ANDROID_LOG_ERROR, "codec", "alloc-context3");
    c = avcodec_alloc_context3(codec);
    if (c == NULL)
       __android_log_write(ANDROID_LOG_ERROR, "avcodec", "context-null");

    picture= av_frame_alloc();

    if(picture == NULL)
       __android_log_write(ANDROID_LOG_ERROR, "picture","context-null");

    c->bit_rate = 400000;
    c->height = 800;
    c->time_base= (AVRational){1,25};
    c->gop_size = 10;
    c->max_b_frames=1;
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    outbuf_size = 768000;
    c->width = 480;

    size = (c->width * c->height);

    if (avcodec_open2(c, codec, NULL) < 0) {
       __android_log_write(ANDROID_LOG_ERROR, "codec", "could not open");
       return -1;
    }

    ret = av_image_alloc(picture->data, picture->linesize, c->width, c->height,
                        c->pix_fmt, 32);
    if (ret < 0) {
       __android_log_write(ANDROID_LOG_ERROR, "image", "alloc-failed");
       fprintf(stderr, "could not alloc raw picture buffer\n");
       return -1;
    }

    picture->format = c->pix_fmt;
    picture->width  = c->width;
    picture->height = c->height;
    return 0;

    }

    encodeframe(), called in a while loop

    int Java_com_my_package_encodeframe (JNIEnv *env, jobject thiz,jbyteArray buffer) {
    jbyte *temp= (*env)->GetByteArrayElements(env, buffer, 0);
    Output = (char *)temp;
    const uint8_t * const inData[1] = { Output };
    const int inLinesize[1] = { 2*c->width };

    //swscale conversion (RGB565 -> YUV420P) should happen here

       av_init_packet(&pkt);
       pkt.data = NULL;    // packet data will be allocated by the encoder
       pkt.size = 0;

       fflush(stdout);
    picture->data[0] = Output;
    ret = avcodec_encode_video2(c, &pkt, picture,&got_output);

       fprintf(stderr,"ret = %d, got-out = %d \n",ret,got_output);
        if (ret < 0) {
                   __android_log_write(ANDROID_LOG_ERROR, "error","encoding");
           if(got_output > 0)
           __android_log_write(ANDROID_LOG_ERROR, "got_output","is non-zero");

       }

       if (got_output) {
           fprintf(stderr,"encoding frame %3d (size=%5d): (ret=%d)\n", 1, pkt.size, ret);
           fprintf(stderr,"before calling decode\n");
           decode_inline(&pkt); //function that decodes right after the encode
           fprintf(stderr,"after calling decode\n");


           av_free_packet(&pkt);
       }


    fprintf(stderr,"y val: %d \n",y);


    (*env)->ReleaseByteArrayElements(env, buffer, Output, 0);
    return ((ret));
    }

    decode_inline() function

    static void decode_inline(AVPacket *avpkt) {
    AVCodec *codec;
    AVCodecContext *c = NULL;
    int frame, got_picture, len = -1, temp = 0;

    AVFrame *rawFrame, *rgbFrame;
    uint8_t inbuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
    char buf[1024];
    char rawBuf[768000],rgbBuf[768000];

    struct SwsContext *sws_ctx;

    memset(inbuf + INBUF_SIZE, 0, FF_INPUT_BUFFER_PADDING_SIZE);
    avcodec_register_all();

    codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec) {
       fprintf(stderr, "codec not found\n");
       fprintf(stderr, "codec = %p \n", codec);
       return; /* must find the decoder before allocating its context */
       }

    c = avcodec_alloc_context3(codec);
    if (c == NULL)
       __android_log_write(ANDROID_LOG_ERROR, "avcodec", "context-null");
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    c->width = 480;
    c->height = 800;

    rawFrame = av_frame_alloc();
    rgbFrame = av_frame_alloc();

    if (avcodec_open2(c, codec, NULL) < 0) {
       fprintf(stderr, "could not open codec\n");
       exit(1);
       }
    sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
               c->width, c->height, AV_PIX_FMT_RGB565,
               SWS_BILINEAR, NULL, NULL, NULL);


    frame = 0;

    unsigned short *decodedpixels = (unsigned short *)rawBuf;
    rawFrame->data[0] = (uint8_t *)rawBuf;
    rgbFrame->data[0] = (uint8_t *)rgbBuf;

    fprintf(stderr,"size of avpkt %d \n",avpkt->size);
    temp = avpkt->size;
    while (temp > 0) {
           len = avcodec_decode_video2(c, rawFrame, &got_picture, avpkt);

           if (len < 0) {
               fprintf(stderr, "Error while decoding frame %d\n", frame);
               exit(1);
               }
           temp -= len;
           avpkt->data += len;

           if (got_picture) {
               printf("saving frame %3d\n", frame);
               fflush(stdout);
           //TODO  
           //memcpy(decodedpixels,rawFrame->data[0],rawFrame->linesize[0]);
           //  decodedpixels +=rawFrame->linesize[0];

               frame++;
               }

           }

    avcodec_close(c);
    av_free(c);
    //free(rawBuf);
    //free(rgbBuf);
    av_frame_free(&rawFrame);
    av_frame_free(&rgbFrame);

    }

    The log I get

    For the decode_inline() function:


    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] non-existing PPS 0 referenced
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] decode_slice_header error
    01-02 14:50:50.160: I/stderr(3407): [h264 @ 0x8db540] Invalid mix of idr and non-idr slices
    01-02 14:50:50.160: I/stderr(3407): Error while decoding frame 0

    Edit: changing the GOP value:

    If I change c->gop_size = 3;, as expected it emits one I-frame every three frames. The "non-existing PPS 0 referenced" message is absent on every third execution, but all the other executions still show it.
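    The repeated non-existing PPS 0 referenced lines usually mean the decoder never received SPS (NAL type 7) and PPS (NAL type 8) units, for example because the encoder placed them in extradata rather than in-band. A self-contained check for their presence in an Annex-B buffer can be sketched as follows (plain C; not part of the original post):

```c
#include <stdint.h>
#include <stddef.h>

/* Scan an Annex-B H.264 buffer for SPS (NAL type 7) and PPS (NAL type 8).
 * A decoder fed packets lacking these emits "non-existing PPS referenced". */
static void find_sps_pps(const uint8_t *buf, size_t len,
                         int *has_sps, int *has_pps)
{
    *has_sps = *has_pps = 0;
    for (size_t i = 0; i + 3 < len; i++) {
        /* 3-byte start code 00 00 01, then the NAL header byte */
        if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1) {
            int nal_type = buf[i + 3] & 0x1F;  /* low 5 bits */
            if (nal_type == 7) *has_sps = 1;
            if (nal_type == 8) *has_pps = 1;
        }
    }
}
```

    If neither type is present in the packets handed to avcodec_decode_video2(), copying the encoder's extradata into the decoder context (or configuring the encoder to emit in-band headers) is the usual remedy.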

  • Error in video streaming using libavformat: VBV buffer size not set, muxing may fail

    15 January 2014, by Blue Sky

    I stream a video using libavformat as follows:

    static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
                           enum AVCodecID codec_id)
    {
    AVCodecContext *c;
    AVStream *st;
    /* find the encoder */
    *codec = avcodec_find_encoder(codec_id);
    if (!(*codec)) {
       fprintf(stderr, "Could not find encoder for '%s'\n",
               avcodec_get_name(codec_id));
       exit(1);
    }
    st = avformat_new_stream(oc, *codec);
    if (!st) {
       fprintf(stderr, "Could not allocate stream\n");
       exit(1);
    }
    st->id = oc->nb_streams-1;
    c = st->codec;
    switch ((*codec)->type) {
    case AVMEDIA_TYPE_AUDIO:
       c->sample_fmt  = (*codec)->sample_fmts ?
           (*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
       c->bit_rate    = 64000;
       c->sample_rate = 44100;
       c->channels    = 2;
       break;
    case AVMEDIA_TYPE_VIDEO:
       c->codec_id = codec_id;
       c->bit_rate = 400000;
       /* Resolution must be a multiple of two. */
       c->width    = outframe_width;
       c->height   = outframe_height;
       /* timebase: This is the fundamental unit of time (in seconds) in terms
        * of which frame timestamps are represented. For fixed-fps content,
        * timebase should be 1/framerate and timestamp increments should be
        * identical to 1. */
       c->time_base.den = STREAM_FRAME_RATE;
       c->time_base.num = 1;
       c->gop_size      = 12; /* emit one intra frame every twelve frames at most */
       c->pix_fmt       = STREAM_PIX_FMT;
       if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
           /* just for testing, we also add B frames */
           c->max_b_frames = 2;
       }
       if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
           /* Needed to avoid using macroblocks in which some coeffs overflow.
            * This does not happen with normal video, it just happens here as
            * the motion of the chroma plane does not match the luma plane. */
           c->mb_decision = 2;
       }
    break;
    default:
       break;
    }
    /* Some formats want stream headers to be separate. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
       c->flags |= CODEC_FLAG_GLOBAL_HEADER;
    return st;
    }

    But when I run this code, I get the following error/warning:

    [mpeg @ 01f3f040] VBV buffer size not set, muxing may fail

    Do you know how I can set the VBV buffer size in the code? In fact, when I use ffplay to display the streamed video, ffplay doesn't show anything for short videos, but for long videos it starts displaying the video immediately. So it looks like ffplay needs its buffer to fill up by some amount before it can start displaying the stream. Am I right?
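    For reference, rc_max_rate and rc_buffer_size are the AVCodecContext fields behind that warning, and they must be set before avcodec_open2(). The helper below only illustrates the common rule of thumb of sizing the buffer for one to two seconds of peak bitrate (an assumption, not a libavcodec requirement):

```c
/* VBV sizing rule of thumb: buffer_bits = peak_rate_bps * seconds.
 * The results would be assigned to AVCodecContext's rc_max_rate and
 * rc_buffer_size fields before avcodec_open2() is called. */
static long vbv_buffer_bits(long peak_rate_bps, double seconds)
{
    return (long)(peak_rate_bps * seconds);
}

/* Hypothetical usage with the 400 kb/s stream above:
 *   c->rc_max_rate    = 400000;
 *   c->rc_buffer_size = vbv_buffer_bits(400000, 2.0);
 */
```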

  • ffmpeg mysteriously adding start delay [migrated]

    13 January 2014, by swizzcheez

    When converting an MP4 to TS, I am observing ffmpeg adding a "start" delay that the input file did not seem to possess. For my input, ffprobe reveals:

    ffprobe version N-57943-g7b76976 Copyright (c) 2007-2013 the FFmpeg developers
     built on Nov  6 2013 14:00:40 with gcc 4.4.5 (Debian 4.4.5-8)
     configuration: --enable-libx264 --enable-gpl
     libavutil      52. 52.100 / 52. 52.100
     libavcodec     55. 41.100 / 55. 41.100
     libavformat    55. 21.100 / 55. 21.100
     libavdevice    55.  5.100 / 55.  5.100
     libavfilter     3. 90.102 /  3. 90.102
     libswscale      2.  5.101 /  2.  5.101
     libswresample   0. 17.104 /  0. 17.104
     libpostproc    52.  3.100 / 52.  3.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'output.mp4-in-7A8FEADA-5EA6-11E3-AD13-4DD2258FBC88.mp4':
     Metadata:
       major_brand     : mp42
       minor_version   : 0
       compatible_brands: mp42mp41
       creation_time   : 2013-11-08 15:15:12
     Duration: 00:00:11.56, start: 0.000000, bitrate: 2994 kb/s
       Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv), 1280x720 [SAR 1:1 DAR 16:9], 2807 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
       Metadata:
         creation_time   : 2013-11-08 15:15:12
         handler_name    : ?Mainconcept Video Media Handler
       Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 189 kb/s (default)
       Metadata:
         creation_time   : 2013-11-08 15:15:12
         handler_name    : #Mainconcept MP4 Sound Media Handler

    But when processed using ffmpeg:

    ffmpeg -i '/tmp/test-no-qp.C2162DFC-6297-11E3-A68D-05E505A3FB93/output.mp4-in-7A8FEADA-5EA6-11E3-AD13-4DD2258FBC88.mp4' -s 1920x1080 -preset ultrafast -f mpegts -c:v libx264 -qp:v 18

    I get an extra start delay:

    (Snipping the same headers from the input side)
    Input #0, mpegts, from 'output.mp4-out-7A8FEADA-5EA6-11E3-AD13-4DD2258FBC88.mp4':
     Duration: 00:00:11.55, start: 1.400000, bitrate: 1985 kb/s
     Program 1
       Metadata:
         service_name    : Service01
         service_provider: FFmpeg
       Stream #0:0[0x100]: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
       Stream #0:1[0x101](eng): Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, s16p, 128 kb/s

    Duration seems to have been adjusted as well, but I didn't ask for the adjustment. How do I get rid of that? What did I do that triggered that effect? Is there something else about my ffmpeg line that looks off?

    (ffmpeg version is the same as the ffprobe above.)
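    One commonly suggested explanation (an assumption about the cause, not a confirmed fix) is that the MPEG-TS muxer inserts an initial mux delay by default; zeroing the muxer's delay/preload options often removes the start offset:

```shell
# -muxdelay / -muxpreload are ffmpeg muxer options (in seconds);
# setting them to 0 removes the initial offset the TS muxer inserts.
ffmpeg -i input.mp4 -s 1920x1080 -preset ultrafast \
       -c:v libx264 -qp:v 18 -muxdelay 0 -muxpreload 0 \
       -f mpegts output.ts
```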