Other articles (69)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Retrieving information from the master site when installing an instance

    26 November 2010

    Purpose
    On the main site, a shared-hosting instance is defined by several things: the data in the spip_mutus table; its logo; its main author (id_admin in the spip_mutus table, matching an id_auteur in the spip_auteurs table), who will be the only one allowed to definitively create the shared-hosting instance;
    It can therefore be quite sensible to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

  • The farm's regular Cron tasks

    1 December 2010

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the shared hosting on a regular basis. Coupled with a system Cron on the central site of the shared hosting, this makes it easy to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)

On other sites (9234)

  • Encoding for HTTP Live Streaming with Xuggle

    26 May 2012, by Luuk D. Jansen

    I have created a server system based on Xuggle to encode an incoming file to H264 and segment it. However, when playing the video back in QuickTime it almost works (with a small hiccup in the audio sometimes), but when switching from one quality stream to another the image gets lost.

    So I ran 'mediastreamvalidator' and got the following errors:

    ERROR: (-1) Unknown video codec: 1836069494 (program 0, track 0)
    ERROR: (-1) failed to parse segment as either an MPEG-2 TS or an ES

    So I used FFmpeg to get some info on the codecs.
    The result of my Xuggler encoding:

    Input #0, mpegts, from 'segment_0.ts':
     Duration: 00:00:09.40, start: 0.000000, bitrate: 3618 kb/s
     Program 1
       Metadata:
         service_name    : Service01
         service_provider: FFmpeg
       Stream #0.0[0x100]: Video: mpeg2video (Main), yuv420p, 960x540 [PAR 1:1 DAR 16:9], 104857 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc
       Stream #0.1[0x101]: Audio: mp2, 48000 Hz, stereo, s16, 128 kb/s

    The result of a file created by Compressor:

    Seems stream 0 codec frame rate differs from container frame rate: 180000.00 (180000/1) -> 25.00 (25/1)
    Input #0, mpegts, from 'fileSequence1.ts':
     Duration: 00:00:09.97, start: 19.984578, bitrate: 5308 kb/s
     Program 1
       Stream #0.0[0x101]: Video: h264 (Main), yuv420p, 960x540, 25 tbr, 90k tbn, 180k tbc
       Stream #0.1[0x102]: Audio: aac, 22050 Hz, stereo, s16, 32 kb/s

    The main difference, it seems to me, is that for the Xuggler-encoded file it says Video: mpeg2video instead of h264, even though while encoding I specifically set the coder to ICodec.ID.CODEC_ID_H264.

    How can I force it to use h264? The same goes for the audio: I specified AAC and got MP2.

    I subsequently used FFmpeg directly, and that results in:

    Input #0, mpegts, from 'encoded.ts':
     Duration: 00:00:24.16, start: 1.400000, bitrate: 360 kb/s
     Program 1
       Metadata:
         service_name    : Service01
         service_provider: FFmpeg
       Stream #0.0[0x100]: Video: h264 (Main), yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
       Stream #0.1[0x101](eng): Audio: aac, 48000 Hz, stereo, s16, 57 kb/s

    That looks better. I could use FFmpeg directly, but by using Xuggler I can segment the file while keeping track of the progress of the process more easily.
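
    One thing worth checking when a different codec shows up than the one requested is whether the H.264 and AAC encoders are actually present in the FFmpeg build that the wrapper links against. A minimal diagnostic using the C API directly might look like the sketch below; it assumes an FFmpeg build of that era (pre-AV_ codec ID prefixes) with libx264 and an AAC encoder compiled in, and it is only an illustrative check, not part of the poster's Xuggler code.

    #include <stdio.h>
    #include "libavcodec/avcodec.h"

    int main(void)
    {
        avcodec_register_all();

        /* Ask for the encoders explicitly; a NULL result means the codec
           is simply not available in this build of the libraries. */
        AVCodec *vcodec = avcodec_find_encoder(CODEC_ID_H264); /* needs libx264 */
        AVCodec *acodec = avcodec_find_encoder(CODEC_ID_AAC);

        if (!vcodec)
            fprintf(stderr, "no H.264 encoder in this FFmpeg build\n");
        if (!acodec)
            fprintf(stderr, "no AAC encoder in this FFmpeg build\n");
        if (vcodec && acodec)
            printf("H.264 and AAC encoders are available\n");

        return 0;
    }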

  • Converting images to mp4 using ffmpeg on iPhone

    29 November 2011, by user633901

    Up till now I can create mpeg1, but I have had no luck with mp4. Maybe we can talk and share information. Someone told me that I have to set some flags to get mp4, but I am stuck on how to use them...

    The following is the working code (a sketch of the missing MP4 container step is appended at the end of this post):

    av_register_all();
    printf("Video encoding\n");

    /// find the mpeg video encoder
    //codec=avcodec_find_encoder(CODEC_ID_MPEG1VIDEO);
    codec = avcodec_find_encoder(CODEC_ID_MPEG4);

    if (!codec) {
       fprintf(stderr, "codec not found\n");
       exit(1);
    }

    c = avcodec_alloc_context();
    picture = avcodec_alloc_frame();

    // put sample parameters
    c->bit_rate = 400000;
    /// resolution must be a multiple of two
    c->width = 240;
    c->height = 320;
    //c->codec_id = fmt->video_codec;
    //frames per second
    c->time_base= (AVRational){1,25};
    c->gop_size = 10; /// emit one intra frame every ten frames
    c->max_b_frames=1;
    c->pix_fmt = PIX_FMT_YUV420P;

    if (avcodec_open(c, codec) < 0) {
       fprintf(stderr, "could not open codec\n");
       exit(1);
    }

    f = fopen([[NSHomeDirectory() stringByAppendingPathComponent:@"test.mp4"] UTF8String], "wb");

    if (!f) {
       fprintf(stderr, "could not open %s\n",[@"test.mp4" UTF8String]);
       exit(1);
    }

    // alloc image and output buffer
    outbuf_size = 100000;
    outbuf = malloc(outbuf_size);
    size = c->width * c->height;

    #pragma mark -

    AVFrame* outpic = avcodec_alloc_frame();
    int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height); // size of one YUV420P frame

    // create the buffer that backs the YUV output frame
    uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

    #pragma mark -  
    for(k=0;k<1;k++) {
       for(i=0;i<25;i++) {
           fflush(stdout);

           UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%d.png", i+1]];
           CGImageRef newCgImage = [image CGImage];

           // grab a copy of the image's raw RGBA bitmap
           CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
           CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
           uint8_t *buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);

           // wrap the RGBA source and the YUV420P destination in AVPicture structs
           avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGBA, c->width, c->height);
           avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height); // outpic is filled by sws_scale below

           struct SwsContext* fooContext = sws_getContext(c->width, c->height,
                                                         PIX_FMT_RGBA,
                                                         c->width, c->height,
                                                         PIX_FMT_YUV420P,
                                                         SWS_FAST_BILINEAR, NULL, NULL, NULL);

           // convert the RGBA source frame into the YUV420P frame that will be encoded
           sws_scale(fooContext, picture->data, picture->linesize, 0, c->height, outpic->data, outpic->linesize);
           sws_freeContext(fooContext); // avoid leaking one context per frame

           // encode the image
           out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
           printf("encoding frame %3d (size=%5d)\n", i, out_size);
           fwrite(outbuf, 1, out_size, f);

           CFRelease(bitmapData); // release the copied bitmap ('buffer' points into it, so it must not be free()d)
       }

       // get the delayed frames
       for(; out_size; i++) {
           fflush(stdout);

           out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
           printf("write frame %3d (size=%5d)\n", i, out_size);
           fwrite(outbuf, 1, out_size, f); // write only the bytes actually produced
       }
    }

    // add the MPEG sequence end code (0x000001b7); this makes a raw MPEG-1/2
    // elementary stream playable, but it is not what an MP4 container needs
    outbuf[0] = 0x00;
    outbuf[1] = 0x00;
    outbuf[2] = 0x01;
    outbuf[3] = 0xb7;
    fwrite(outbuf, 1, 4, f);
    fclose(f);
    av_free(outbuffer);
    free(outbuf);

    avcodec_close(c);
    av_free(c);
    av_free(picture);
    av_free(outpic);
    printf("\n");

    My MSN: hieeli@hotmail.com
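
    For reference, writing a playable .mp4 requires going through a container muxer rather than fwrite()ing raw encoder output, which is probably what the "flags for mp4" remark was about. Below is a rough sketch of the libavformat calls involved, using the API of roughly that era; the helper names (open_mp4, write_encoded_frame, close_mp4) are illustrative only, function names vary between FFmpeg versions, and error handling is omitted.

    #include "libavformat/avformat.h"
    #include "libavutil/mathematics.h"

    static AVFormatContext *open_mp4(const char *filename, AVCodecContext *c, AVStream **video_st)
    {
        AVFormatContext *oc = avformat_alloc_context();
        oc->oformat = av_guess_format("mp4", NULL, NULL);   // pick the MP4 muxer

        AVStream *st = avformat_new_stream(oc, NULL);       // one video stream
        st->codec->codec_type = AVMEDIA_TYPE_VIDEO;
        st->codec->codec_id   = CODEC_ID_MPEG4;             // or CODEC_ID_H264
        st->codec->width      = c->width;
        st->codec->height     = c->height;
        st->codec->time_base  = c->time_base;
        st->codec->flags     |= CODEC_FLAG_GLOBAL_HEADER;   // the encoding context needs this flag too for H.264-in-MP4

        avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);      // open the output file
        avformat_write_header(oc, NULL);                    // write the MP4 header
        *video_st = st;
        return oc;
    }

    // For each frame: wrap the encoder output in an AVPacket and hand it
    // to the muxer instead of writing it to the file directly.
    static void write_encoded_frame(AVFormatContext *oc, AVStream *st,
                                    uint8_t *outbuf, int out_size, int frame_index)
    {
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.stream_index = st->index;
        pkt.data         = outbuf;
        pkt.size         = out_size;
        // timestamps are required; rescale the frame index from the codec time base
        pkt.pts = pkt.dts = av_rescale_q(frame_index, st->codec->time_base, st->time_base);
        av_interleaved_write_frame(oc, &pkt);
    }

    static void close_mp4(AVFormatContext *oc)
    {
        av_write_trailer(oc);      // finalize the MP4 index ("moov" atom)
        avio_close(oc->pb);
        avformat_free_context(oc);
    }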

  • h.264 Hardware Encoding with ffmpeg

    23 May 2016, by Paolo

    I need to capture an h.264 raw stream in Linux from a UVC USB camera with a Sonix SN9C291B h.264 hardware encoder, maybe the most widely used camera controller encoder.
    So far I have only found sample code that captures the stream by calling avformat_open_input on /dev/video0 or /dev/video1, but it does not work.
    Which ffmpeg function must I use to capture h.264 raw data from the camera? (A capture sketch is appended at the end of this post.)

    After calling the function:

    c = avcodec_alloc_context3(codec);

    c->bit_rate = 400000;

    c->width = 640;   // this setting generates an error! Why?
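
    A minimal sketch of how a V4L2 capture is typically opened through libavformat/libavdevice is shown below. It assumes an FFmpeg build with libavdevice and V4L2 support, that the driver actually exposes the camera's H.264 stream as a V4L2 format, and that the device is /dev/video0; the device path, the "input_format" value, and the resolution are example assumptions, not details from the original post.

    #include <stdio.h>
    #include "libavdevice/avdevice.h"
    #include "libavformat/avformat.h"

    int main(void)
    {
        av_register_all();
        avdevice_register_all();                         // registers the v4l2 input device

        AVInputFormat *v4l2 = av_find_input_format("video4linux2");

        AVDictionary *opts = NULL;
        av_dict_set(&opts, "input_format", "h264", 0);   // ask the driver for its H.264 stream
        av_dict_set(&opts, "video_size", "1280x720", 0); // example resolution

        AVFormatContext *fmt_ctx = NULL;
        if (avformat_open_input(&fmt_ctx, "/dev/video0", v4l2, &opts) < 0) {
            fprintf(stderr, "could not open /dev/video0\n");
            return 1;
        }
        avformat_find_stream_info(fmt_ctx, NULL);

        // each packet read here is already H.264 and can be written out or remuxed
        AVPacket pkt;
        int n;
        for (n = 0; n < 100 && av_read_frame(fmt_ctx, &pkt) >= 0; n++) {
            printf("packet %d: stream %d, %d bytes\n", n, pkt.stream_index, pkt.size);
            av_packet_unref(&pkt);
        }

        avformat_close_input(&fmt_ctx);
        return 0;
    }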