Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Packages are installed on server but not displaying in phpinfo()

    5 January 2012, by onkar

    I have installed ImageMagick and FFmpeg on my CentOS 5.5 GoDaddy VPS.

    I have also added the extensions in php.ini.

    I can see the installed packages in /usr/local/bin.

    But when I print phpinfo(), I don't see them, and I can't use those packages either.

    Is there an issue with the installation? Any help on this?

    Thanks, Onkar
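
    A note for what it's worth: the binaries in /usr/local/bin are standalone programs, not PHP extensions, so phpinfo() will never list them; it only lists loaded extensions such as imagick or ffmpeg-php. The CLI and the web server also often read different php.ini files, and Apache must be restarted after editing one. Some hedged checks, assuming a typical CentOS/Apache setup:

        # Which php.ini is actually being loaded?
        php -i | grep "Loaded Configuration File"

        # Are the extensions compiled and loadable at all?
        php -m | grep -i -e imagick -e ffmpeg

        # php.ini changes only take effect in Apache after a restart
        service httpd restart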

  • Joining and cutting mpeg 2 video files

    4 January 2012, by D.Rosado

    I have multiple MPEG-2 video files. What I need is a clip of video that spans several of these files. What I'm doing at the moment is:

    • Join them using the command:

          copy /b file1.mpg + ... + fileN.mpg output.mpg
      
    • The output.mpg duration is wrong, so I'm using FFmpeg to fix that:

          ffmpeg -y -i output.mpg -target pal-dvd outputFixed.mpg
      

    The problem arises when I try to extract only one portion of this output.mpg with the -ss and -t FFmpeg options at the same time as I "fix" it, because the video duration is wrong, as I said.

    So the question is:

    • Is there any way to combine MPEG-2 files without getting the duration wrong?
    • Is there any way to fix the duration of an MPEG-2 file and extract one portion at the same time?

    Any suggestion would be appreciated.
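
    For what it's worth, if your FFmpeg build includes the concat protocol, joining MPEG-2 program streams and cutting a portion can be done in a single pass; a hedged sketch (filenames and timestamps are placeholders, and -vcodec/-acodec copy assumes no re-encoding is needed):

        ffmpeg -i "concat:file1.mpg|file2.mpg|fileN.mpg" -ss 00:01:00 -t 00:00:30 \
               -vcodec copy -acodec copy clip.mpg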

  • iPhone: converting AVCaptureSession CMSampleBufferRef output to H.264 with ffmpeg. Please advise

    4 January 2012, by isaiah

    My goal is H.264/AAC, MPEG2-TS streaming to a server from an iPhone device.

    Currently my source builds successfully with FFmpeg + libx264. I know about the GPL license. I'd like a demo program.
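
    For reference, the same target expressed as an ffmpeg command line would be something like this (the server address is a placeholder; flags vary by build, and older builds need -strict experimental for the built-in AAC encoder):

        ffmpeg -i source.mov -vcodec libx264 -acodec aac -strict experimental -f mpegts udp://example.com:1234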

    I want to know:

    1. Is the conversion from CMSampleBufferRef to AVPicture data succeeding?

         avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);

       pFrame's linesize and data are not null, but its pts is -9233123123, and outpic is the same. Because of this I guess that is where the 'non-strictly-monotonic PTS' message comes from.
    

    2. This log repeats:

       encoding frame (size= 0)
       encoding frame = ""

       avcodec_encode_video returning 0 means success, but it is always 0.
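
    A note on this: with libx264, avcodec_encode_video() returning 0 for the first calls is normal. The encoder buffers frames for lookahead and B-frames and only starts emitting data once its internal pipeline fills; at the end, the delayed frames are drained by passing NULL as the frame. A minimal sketch, reusing c, outbuf and outbuf_size from the code below, with a hypothetical output file f:

        /* after the last real frame, drain the encoder's delayed frames */
        int out_size;
        while ((out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL)) > 0)
            fwrite(outbuf, 1, out_size, f);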
    

    I don't know what to do...

    2011-06-01 15:15:14.199 AVCam[1993:7303] pFrame = avcodec_alloc_frame(); 
    2011-06-01 15:15:14.207 AVCam[1993:7303] avpicture_fill = 1228800
    Video encoding
    2011-06-01 15:15:14.215 AVCam[1993:7303] codec = 5841844
    [libx264 @ 0x1441e00] using cpu capabilities: ARMv6 NEON
    [libx264 @ 0x1441e00] profile Constrained Baseline, level 2.0
    [libx264 @ 0x1441e00] non-strictly-monotonic PTS
    encoding frame (size=    0)
    encoding frame 
    [libx264 @ 0x1441e00] final ratefactor: 26.74
    

    3. I guess the 'non-strictly-monotonic PTS' message is the cause of all the problems. What does 'non-strictly-monotonic PTS' mean?
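
    'Non-strictly-monotonic PTS' means x264 is being handed frames whose presentation timestamps do not strictly increase; in the source below, outpic->pts is never set, so every frame arrives with the same AV_NOPTS_VALUE timestamp. A minimal sketch of a fix, assuming the context's time_base of 1/25 (the counter name is made up):

        static int64_t frameIndex = 0;   /* persists across capture callbacks */
        outpic->pts = frameIndex++;      /* strictly increasing, in c->time_base units */
        out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);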

    Here is the source:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
    
        if( !CMSampleBufferDataIsReady(sampleBuffer) )
        {
            NSLog( @"sample buffer is not ready. Skipping sample" );
            return;
        }
    
    
        if( [isRecordingNow isEqualToString:@"YES"] )
        {
            lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            if( videoWriter.status != AVAssetWriterStatusWriting  )
            {
                [videoWriter startWriting];
                [videoWriter startSessionAtSourceTime:lastSampleTime];
            }
    
            if( captureOutput == videooutput )
            {
                [self newVideoSample:sampleBuffer];
    
                CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
                CVPixelBufferLockBaseAddress(pixelBuffer, 0); 
    
                // access the data 
                int width = CVPixelBufferGetWidth(pixelBuffer); 
                int height = CVPixelBufferGetHeight(pixelBuffer); 
                unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer); 
    
                AVFrame *pFrame; 
                pFrame = avcodec_alloc_frame(); 
                pFrame->quality = 0;
    
                NSLog(@"pFrame = avcodec_alloc_frame(); ");
    
    
                int avpicture_fillNum = avpicture_fill((AVPicture*)pFrame, rawPixelBase,
                                                       PIX_FMT_RGB32, width, height);
                NSLog(@"avpicture_fill = %i", avpicture_fillNum);
    
    
    
                // NOTE: rawPixelBase is consumed by sws_scale below, so the
                // pixel buffer must stay locked until after that call; the
                // unlock has been moved there.

                av_register_all();   // should be called once at startup, not per frame

                AVCodec *codec;
                AVCodecContext *c = NULL;
                int out_size, outbuf_size;
                uint8_t *outbuf;

                printf("Video encoding\n");

                /* find the H.264 encoder */
                codec = avcodec_find_encoder(CODEC_ID_H264);
                NSLog(@"codec = %p", codec);   // %p: codec is a pointer, not an int
                if (!codec) {
                    fprintf(stderr, "codec not found\n");
                    exit(1);
                }
    
                c= avcodec_alloc_context();
    
                /* put sample parameters */
                c->bit_rate = 400000;
                c->bit_rate_tolerance = 10;
                c->me_method = 2;
                /* resolution must be a multiple of two */
                c->width = 352;
                c->height = 288;
                /* frames per second */
                c->time_base = (AVRational){1,25};
                c->gop_size = 10;   /* emit one intra frame every ten frames */
                c->pix_fmt = PIX_FMT_YUV420P;

                c->me_range = 16;
                c->max_qdiff = 4;
                c->qmin = 10;
                c->qmax = 51;
                c->qcompress = 0.6f;
    
                /* open it */
                if (avcodec_open(c, codec) < 0) {
                    fprintf(stderr, "could not open codec\n");
                    exit(1);
                }
    
    
                /* alloc the output buffer */
                outbuf_size = 100000;
                outbuf = malloc(outbuf_size);

                AVFrame* outpic = avcodec_alloc_frame();
                int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

                // create buffer for the output (YUV420P) image
                uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);
    
    #pragma mark -  
    
                fflush(stdout);
    
    
                avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);
    
                // BUG FIX: the source format and size here must match what pFrame
                // was filled with above (PIX_FMT_RGB32 at the capture width/height);
                // the original passed PIX_FMT_RGB8 and the encoder's dimensions.
                struct SwsContext* fooContext = sws_getContext(width, height,
                                                               PIX_FMT_RGB32,
                                                               c->width, c->height,
                                                               PIX_FMT_YUV420P,
                                                               SWS_FAST_BILINEAR, NULL, NULL, NULL);

                // perform the RGB -> YUV420P conversion
                sws_scale(fooContext, pFrame->data, pFrame->linesize, 0, height,
                          outpic->data, outpic->linesize);

                // rawPixelBase has now been consumed; safe to unlock the pixel buffer
                CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    
                /* give the encoder a strictly increasing pts; without this every
                   frame carries AV_NOPTS_VALUE and x264 reports
                   'non-strictly-monotonic PTS' */
                static int64_t frameCount = 0;
                outpic->pts = frameCount++;

                /* encode the image */
                out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
                printf("encoding frame (size=%5d)\n", out_size);
                /* out_size == 0 means the encoder is still buffering; note that
                   outbuf holds binary data and must not be printed with %s */
    
    
                free(outbuf);
                av_free(outbuffer);
                sws_freeContext(fooContext);

                avcodec_close(c);
                av_free(c);
                av_free(pFrame);
                av_free(outpic);
                printf("\n");
            }
        }
    }
    
  • GStreamer vs FFmpeg

    4 January 2012, by user1129474

    I'm trying to record video with the OpenCV framework and would like to save it into a Matroska (MKV) container together with some additional data streams.

    First I thought using FFmpeg was the way to go. But while looking into the OpenCV source code and searching the web, I found GStreamer.

    Because the GStreamer documentation is much better than the FFmpeg documentation, I would prefer to use that framework.

    In my understanding, GStreamer is primarily used for streaming, but it can also encode and mux video data.

    Is there any disadvantage when using GStreamer instead of FFmpeg?

    Thanks in advance, Horst
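
    For what it's worth, encoding and muxing into Matroska is a short pipeline in GStreamer 0.10; a hedged sketch, assuming a Linux V4L2 camera (the source element and the filename are placeholders):

        gst-launch v4l2src ! ffmpegcolorspace ! x264enc ! matroskamux ! filesink location=out.mkv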

  • FFmpeg: what function combines a WAV file and a video file?

    4 January 2012, by ALexF

    I use Qt & OpenCV to record video, and QAudioInput to record audio to a WAV file. I want to combine them into one video file. Everybody tells me I should use FFmpeg to combine them, but after a lot of research I can't find the function or class that implements it. Please help me.

    Thanks very much.

    I'm writing it on Windows & Mac OS.
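
    If calling the ffmpeg binary is acceptable (e.g. via QProcess from Qt), muxing an existing video file and a WAV into one file can be sketched like this; the filenames are placeholders, and the audio may need re-encoding if the target container does not accept PCM:

        ffmpeg -i video.avi -i audio.wav -vcodec copy -acodec copy combined.avi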