
Media (33)


Other articles (111)

  • Adding user-specific information and other changes to author-related behaviour

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras".

  • Automated installation script of MediaSPIP

    25 April 2011

    To overcome the difficulties caused mainly by server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server running a compatible Linux distribution.
    You must have SSH access to your server and a root account to use the script, which will install the dependencies. Contact your hosting provider if you do not have these.
    The documentation on how to use this installation script is available here.
    The code of this (...)

On other sites (15336)

  • iPhone camera shooting video with AVCaptureSession, and converting CMSampleBufferRef to h.264 with ffmpeg is the issue. Please advise

    4 January 2012, by isaiah

    My goal is h.264/AAC, mpeg2-ts streaming to a server from an iPhone device.

    Currently my FFmpeg+libx264 build compiles successfully (I am aware of the GNU license). Now I want to get a demo program working.

    I want to know the following:

    1. Is the conversion from CMSampleBufferRef to AVPicture data actually succeeding?

    avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);
     pFrame's linesize and data are not null, but the pts is a garbage value like -9233123123, and the same goes for outpic.
    I suspect this is where the 'non-strictly-monotonic PTS' message comes from.
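
    An editor's aside on point 1: whether that avpicture_fill() call does what you want depends on the pixel format and stride actually matching what the capture session delivers. A minimal sketch, assuming the session is configured for kCVPixelFormatType_32BGRA (an assumption, not stated in the question): the matching ffmpeg format would then be PIX_FMT_BGRA rather than PIX_FMT_RGB32, and because CVPixelBufferGetBytesPerRow() may include row padding that avpicture_fill() knows nothing about, the pointers are set by hand here.

       /* Hedged sketch: wrap a BGRA CVPixelBuffer without assuming a packed stride. */
       CVPixelBufferLockBaseAddress(pixelBuffer, 0);

       AVPicture pic;
       memset(&pic, 0, sizeof(pic));
       pic.data[0]     = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
       pic.linesize[0] = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);

       /* ... use pic.data / pic.linesize as the sws_scale() source here ... */

       CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);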

    2. This log output repeats:

    encoding frame (size= 0)
    The second line prints an empty string, and 'avcodec_encode_video' returns 0; a return of 0 is nominally success, but the output size is always 0.

    I don't know what to do...

    2011-06-01 15:15:14.199 AVCam[1993:7303] pFrame = avcodec_alloc_frame();
    2011-06-01 15:15:14.207 AVCam[1993:7303] avpicture_fill = 1228800
    Video encoding
    2011-06-01 15:15:14.215 AVCam[1993:7303] codec = 5841844
    [libx264 @ 0x1441e00] using cpu capabilities: ARMv6 NEON
    [libx264 @ 0x1441e00] profile Constrained Baseline, level 2.0
    [libx264 @ 0x1441e00] non-strictly-monotonic PTS
    encoding frame (size=    0)
    encoding frame
    [libx264 @ 0x1441e00] final ratefactor: 26.74

    3. I guess the 'non-strictly-monotonic PTS' message is the cause of all these problems.
    What does 'non-strictly-monotonic PTS' mean?
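
    For context, an editor's note: x264 requires the presentation timestamps (pts) of successive frames to be strictly increasing, and the code below never assigns outpic->pts, so the encoder sees uninitialised values and warns. A minimal sketch of the usual fix, where frameCount is a hypothetical counter introduced here (it is not in the original code) and incremented once per captured frame:

       /* Hedged sketch: give the encoder a strictly increasing pts,
        * counted in c->time_base units (1/25 s with the settings below). */
       static int64_t frameCount = 0;

       outpic->pts = frameCount++;
       out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);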

     Here is the source:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
           didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
                  fromConnection:(AVCaptureConnection *)connection
    {

       if( !CMSampleBufferDataIsReady(sampleBuffer) )
       {
           NSLog( @"sample buffer is not ready. Skipping sample" );
           return;
       }


       if( [isRecordingNow isEqualToString:@"YES"] )
       {
           lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
           if( videoWriter.status != AVAssetWriterStatusWriting  )
           {
               [videoWriter startWriting];
               [videoWriter startSessionAtSourceTime:lastSampleTime];
           }

           if( captureOutput == videooutput )
           {
               [self newVideoSample:sampleBuffer];

               CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
               CVPixelBufferLockBaseAddress(pixelBuffer, 0);

               // access the data
               int width = CVPixelBufferGetWidth(pixelBuffer);
               int height = CVPixelBufferGetHeight(pixelBuffer);
               unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

               AVFrame *pFrame;
               pFrame = avcodec_alloc_frame();
               pFrame->quality = 0;

               NSLog(@"pFrame = avcodec_alloc_frame(); ");

    //          int bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

    //          int bytesSize = height * bytesPerRow ;  

    //          unsigned char *pixel = (unsigned char*)malloc(bytesSize);

    //          unsigned char *rowBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

    //          memcpy (pixel, rowBase, bytesSize);


               int avpicture_fillNum = avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);//PIX_FMT_RGB32//PIX_FMT_RGB8
               //NSLog(@"rawPixelBase = %i , rawPixelBase -s = %s",rawPixelBase, rawPixelBase);
               NSLog(@"avpicture_fill = %i",avpicture_fillNum);
               //NSLog(@"width = %i,height = %i",width, height);



               // Do something with the raw pixels here

               CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

               //avcodec_init();
               //avdevice_register_all();
               av_register_all();





               AVCodec *codec;
               AVCodecContext *c= NULL;
               int  out_size, size, outbuf_size;
               //FILE *f;
               uint8_t *outbuf;

               printf("Video encoding\n");

               /* find the mpeg video encoder */
               codec =avcodec_find_encoder(CODEC_ID_H264);//avcodec_find_encoder_by_name("libx264"); //avcodec_find_encoder(CODEC_ID_H264);//CODEC_ID_H264);
               NSLog(@"codec = %i",codec);
               if (!codec) {
                   fprintf(stderr, "codec not found\n");
                   exit(1);
               }

               c= avcodec_alloc_context();

               /* put sample parameters */
               c->bit_rate = 400000;
               c->bit_rate_tolerance = 10;
               c->me_method = 2;
               /* resolution must be a multiple of two */
               c->width = 352;//width;//352;
               c->height = 288;//height;//288;
               /* frames per second */
               c->time_base= (AVRational){1,25};
               c->gop_size = 10;//25; /* emit one intra frame every ten frames */
               //c->max_b_frames=1;
               c->pix_fmt = PIX_FMT_YUV420P;

               c ->me_range = 16;
               c ->max_qdiff = 4;
               c ->qmin = 10;
               c ->qmax = 51;
               c ->qcompress = 0.6f;

               /* open it */
               if (avcodec_open(c, codec) < 0) {
                   fprintf(stderr, "could not open codec\n");
                   exit(1);
               }


               /* alloc image and output buffer */
               outbuf_size = 100000;
               outbuf = malloc(outbuf_size);
               size = c->width * c->height;

               AVFrame* outpic = avcodec_alloc_frame();
               int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

               //create buffer for the output image
               uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

    #pragma mark -  

               fflush(stdout);

    //          int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
    //          uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
    //          
    //          //UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"10%d", i]];
    //          CGImageRef newCgImage = [self imageFromSampleBuffer:sampleBuffer];//[image CGImage];
    //          
    //          CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
    //          CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
    //          buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);  
    //          
    //          avpicture_fill((AVPicture*)pFrame, buffer, PIX_FMT_RGB8, c->width, c->height);
               avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

               struct SwsContext* fooContext = sws_getContext(c->width, c->height,
                                                              PIX_FMT_RGB8,
                                                              c->width, c->height,
                                                              PIX_FMT_YUV420P,
                                                              SWS_FAST_BILINEAR, NULL, NULL, NULL);

               //perform the conversion
               sws_scale(fooContext, pFrame->data, pFrame->linesize, 0, c->height, outpic->data, outpic->linesize);
               // Here is where I try to convert to YUV

               /* encode the image */

               out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
               printf("encoding frame (size=%5d)\n", out_size);
               printf("encoding frame %s\n", outbuf);


               //fwrite(outbuf, 1, out_size, f);

               //              free(buffer);
               //              buffer = NULL;      



               /* add sequence end code to have a real mpeg file */
    //          outbuf[0] = 0x00;
    //          outbuf[1] = 0x00;
    //          outbuf[2] = 0x01;
    //          outbuf[3] = 0xb7;
               //fwrite(outbuf, 1, 4, f);
               //fclose(f);
               free(outbuf);

               avcodec_close(c);
               av_free(c);
               av_free(pFrame);
               printf("\n");
           }
       }
    }
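
    An editor's note on the code above: the frame is filled as PIX_FMT_RGB32 while the SwsContext is created with a PIX_FMT_RGB8 source, so the conversion reads the buffer with the wrong layout, and outpic->pts is never set (see the pts sketch above). A hedged sketch of a consistent conversion path, again assuming 32-bit BGRA capture buffers:

       /* Hedged sketch: keep the source pixel format consistent between
        * avpicture_fill() and sws_getContext(). */
       avpicture_fill((AVPicture *)pFrame, rawPixelBase, PIX_FMT_BGRA, width, height);

       struct SwsContext *sws = sws_getContext(width, height, PIX_FMT_BGRA,
                                               c->width, c->height, PIX_FMT_YUV420P,
                                               SWS_FAST_BILINEAR, NULL, NULL, NULL);
       sws_scale(sws, (const uint8_t * const *)pFrame->data, pFrame->linesize,
                 0, height, outpic->data, outpic->linesize);
       sws_freeContext(sws);

    Independently of the pixel format, allocating the codec context, the encoder and the SwsContext inside the capture callback re-initialises everything for every single sample buffer; the usual design is to set them up once before recording starts and reuse them per frame.
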
  • How to encode using FFmpeg on Android (using H263)

    3 July 2012, by Kenny910

    I am trying to follow the encoding sample code in the ffmpeg documentation, and I have successfully built an application that encodes and generates an mp4 file, but I face the following problems:

    1) I am using H263 for encoding, but I can only set the width and height of the AVCodecContext to 176x144; for other sizes (like 720x480 or 640x480) opening the codec fails (see the note after this list).

    2) I can't play the output mp4 file with the default Android player. Doesn't it support H263 mp4 files? I can play the file with other players (see the note after the code below).

    3) Is there any sample code on re-encoding a video (that is, decoding a video and encoding it back with different quality settings; I would also like to modify the frame content)?
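
    On point 1, a note from the editor rather than from the original thread: baseline H.263 only permits a fixed set of picture sizes (SQCIF 128x96, QCIF 176x144, CIF 352x288, 4CIF 704x576 and 16CIF 1408x1152), so 640x480 and 720x480 are simply not legal H.263 dimensions and opening the codec fails. A small sketch of a guard one could place before avcodec_open():

       /* Hedged sketch: the picture sizes baseline H.263 accepts. */
       static const struct { int w, h; } h263_sizes[] = {
           {128, 96}, {176, 144}, {352, 288}, {704, 576}, {1408, 1152}
       };

       static int is_valid_h263_size(int w, int h)
       {
           int i;
           for (i = 0; i < (int)(sizeof(h263_sizes) / sizeof(h263_sizes[0])); i++)
               if (h263_sizes[i].w == w && h263_sizes[i].h == h)
                   return 1;
           return 0;
       }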

    Here is my code, thanks!

    JNIEXPORT jint JNICALL Java_com_ffmpeg_encoder_FFEncoder_nativeEncoder(JNIEnv* env, jobject thiz, jstring filename){

    LOGI("nativeEncoder()");

    avcodec_register_all();
    avcodec_init();
    av_register_all();

    AVCodec         *codec;
    AVCodecContext  *codecCtx;
    int             i;
    int             out_size;
    int             size;
    int             x;
    int             y;
    int             output_buffer_size;
    FILE            *file;
    AVFrame         *picture;
    uint8_t         *output_buffer;
    uint8_t         *picture_buffer;

    /* Manual Variables */
    int             l;
    int             fps = 30;
    int             videoLength = 5;

    /* find the H263 video encoder */
    codec = avcodec_find_encoder(CODEC_ID_H263);
    if (!codec) {
       LOGI("avcodec_find_encoder() run fail.");
    }

    codecCtx = avcodec_alloc_context();
    picture = avcodec_alloc_frame();

    /* put sample parameters */
    codecCtx->bit_rate = 400000;
    /* resolution must be a multiple of two */
    codecCtx->width = 176;
    codecCtx->height = 144;
    /* frames per second */
    codecCtx->time_base = (AVRational){1,fps};
    codecCtx->pix_fmt = PIX_FMT_YUV420P;
    codecCtx->codec_id = CODEC_ID_H263;
    codecCtx->codec_type = AVMEDIA_TYPE_VIDEO;

    /* open it */
    if (avcodec_open(codecCtx, codec) < 0) {
       LOGI("avcodec_open() run fail.");
    }

    const char* mfileName = (*env)->GetStringUTFChars(env, filename, 0);

    file = fopen(mfileName, "wb");
    if (!file) {
       LOGI("fopen() run fail.");
    }

    (*env)->ReleaseStringUTFChars(env, filename, mfileName);

    /* alloc image and output buffer */
    output_buffer_size = 100000;
    output_buffer = malloc(output_buffer_size);

    size = codecCtx->width * codecCtx->height;
    picture_buffer = malloc((size * 3) / 2); /* size for YUV 420 */

    picture->data[0] = picture_buffer;
    picture->data[1] = picture->data[0] + size;
    picture->data[2] = picture->data[1] + size / 4;
    picture->linesize[0] = codecCtx->width;
    picture->linesize[1] = codecCtx->width / 2;
    picture->linesize[2] = codecCtx->width / 2;

    for(l=0;l<videoLength;l++){
       //encode 1 second of video
       for(i=0;i<fps;i++){
           //prepare a dummy image YCbCr
           //Y
           for(y=0;y<codecCtx->height;y++) {
               for(x=0;x<codecCtx->width;x++) {
                   picture->data[0][y * picture->linesize[0] + x] = x + y + i * 3;
               }
           }

           //Cb and Cr
           for(y=0;y<codecCtx->height/2;y++) {
               for(x=0;x<codecCtx->width/2;x++) {
                   picture->data[1][y * picture->linesize[1] + x] = 128 + y + i * 2;
                   picture->data[2][y * picture->linesize[2] + x] = 64 + x + i * 5;
               }
           }

           //encode the image
           out_size = avcodec_encode_video(codecCtx, output_buffer, output_buffer_size, picture);
           fwrite(output_buffer, 1, out_size, file);
       }

       //get the delayed frames
       for(; out_size; i++) {
           out_size = avcodec_encode_video(codecCtx, output_buffer, output_buffer_size, NULL);
           fwrite(output_buffer, 1, out_size, file);
       }
    }

    //add sequence end code to have a real mpeg file
    output_buffer[0] = 0x00;
    output_buffer[1] = 0x00;
    output_buffer[2] = 0x01;
    output_buffer[3] = 0xb7;

    fwrite(output_buffer, 1, 4, file);
    fclose(file);
    free(picture_buffer);
    free(output_buffer);
    avcodec_close(codecCtx);
    av_free(codecCtx);
    av_free(picture);

    LOGI("finish");

    return 0; }
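
    On point 2, an editor's note: the code above never creates an MP4 container at all. It fwrite()s a raw H.263 elementary stream, and the trailing 0x00 0x00 0x01 0xb7 "sequence end code" comes from ffmpeg's MPEG-1 video example rather than from anything H.263; the stock Android player expects a real 3GP/MP4 file. A hedged sketch of muxing through libavformat instead, assuming a build new enough to provide avformat_alloc_output_context2() (the code above targets an older API, so treat this as an outline, not a drop-in):

       /* Hedged sketch: mux encoded frames into a 3GP container.
        * Error handling omitted for brevity. */
       AVFormatContext *oc = NULL;
       avformat_alloc_output_context2(&oc, NULL, "3gp", mfileName);

       AVStream *st = avformat_new_stream(oc, codec);   /* the H.263 encoder */
       avcodec_get_context_defaults3(st->codec, codec);
       st->codec->bit_rate  = 400000;
       st->codec->width     = 176;
       st->codec->height    = 144;
       st->codec->time_base = (AVRational){1, fps};
       st->codec->pix_fmt   = PIX_FMT_YUV420P;
       avcodec_open2(st->codec, codec, NULL);

       avio_open(&oc->pb, mfileName, AVIO_FLAG_WRITE);
       avformat_write_header(oc, NULL);

       /* per encoded frame, with out_size from avcodec_encode_video()
        * and i as the running frame index: */
       AVPacket pkt;
       av_init_packet(&pkt);
       pkt.stream_index = st->index;
       pkt.data         = output_buffer;
       pkt.size         = out_size;
       pkt.pts = pkt.dts = av_rescale_q(i, st->codec->time_base, st->time_base);
       av_interleaved_write_frame(oc, &pkt);

       /* once, after all frames have been encoded and flushed: */
       av_write_trailer(oc);
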
  • Encoding using ffmpeg library fails

    29 September 2012, by Erik Swansson

    I've spent some time looking at the ffmpeg library and setting things up. I'm opening a .flv file, reading and decoding its frames, and now I'm trying to encode them to MP4, but my packets end up empty.

    My code is as follows:

    int main (){

       avformat_open_input(&pFC, "c://wav//test2.flv", NULL, NULL);

       po = av_find_stream_info(pFC);

       //ADD LOGIC TO FIND VIDEO STREAM
       pCodecC = pFC->streams[0]->codec;

       decoder = avcodec_find_decoder(pCodecC->codec_id);
       encoder = avcodec_find_encoder(pCodecC->codec_id);
       po = avcodec_open(pCodecC, decoder);

       pCodecE =  avcodec_alloc_context3(encoder);
       /* put sample parameters */
       pCodecE->bit_rate = 400000;
       /* resolution must be a multiple of two */
       pCodecE->width = 352;
       pCodecE->height = 288;
       /* frames per second */
       pCodecE->time_base.den = 25;
       pCodecE->time_base.num = 1;
       pCodecE->gop_size = 10; /* emit one intra frame every ten frames */
       pCodecE->max_b_frames=1;
       pCodecE->pix_fmt = PIX_FMT_YUV420P;

       if(pCodecC->codec_id == CODEC_ID_H264)
           av_opt_set(pCodecE->priv_data, "preset", "slow", 0);

       po =  avcodec_open2(pCodecE, encoder, NULL);

       AVFrame *pFrame;
       // Allocate an AVFrame structure



       // Allocate video frame
       pFrame=avcodec_alloc_frame();
       int frameFinished = 0;
       int frame = 0;
       int gotpacket = 0;

       while(av_read_frame(pFC, &packet) >= 0)
       {
           if(packet.stream_index==0) //the video stream is 0
           {
               int len = avcodec_decode_video2(pCodecC, pFrame, &frameFinished, &packet);
               if(frameFinished)
               {
                   printf("frame # %i", frame);

                   po = avcodec_encode_video2(pCodecE, &spacket, pFrame, &gotpacket);
                   if(gotpacket)
                   {
                       printf("packet received");
                   }
                   frame++;
               }
           }
           av_free_packet(&packet);
       }


       printf("encoding done");

       return 0;
    }

    Basically everything works up to

    po = avcodec_encode_video2(pCodecE, &spacket, pFrame, &gotpacket);

    Where gotpacket comes back 0, meaning no packet was produced.

    Not sure what I'm doing wrong.
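
    An editor's note on the symptom: gotpacket coming back 0 on the first calls is not necessarily an error. x264 with preset "slow" buffers frames for lookahead and B-frames (max_b_frames is 1 here), so avcodec_encode_video2() legitimately produces no packet for the first few inputs; the buffered packets must then be drained by calling it with a NULL frame after the read loop. It may also matter that pFrame->pts is never carried over from the decoded stream. A minimal sketch of the drain loop, reusing the question's own variables:

       /* Hedged sketch: flush the encoder's delayed packets after the read loop. */
       for (;;) {
           av_init_packet(&spacket);
           spacket.data = NULL;          /* let the encoder allocate the payload */
           spacket.size = 0;

           po = avcodec_encode_video2(pCodecE, &spacket, NULL, &gotpacket);
           if (po < 0 || !gotpacket)
               break;                    /* error, or encoder fully drained */

           printf("flushed packet, size %d\n", spacket.size);
           av_free_packet(&spacket);
       }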