Newest 'x264' Questions - Stack Overflow
Articles published on the site
-
Convert OpenGL output to H264 with x264
7 August 2013, by user2660369
I want to convert the output of an OpenGL program to H.264 and stream it. I collected most of the code from various places and I get an output file, but I have no idea what to do with it, or whether it is valid. Currently the output is just saved to file.h264.
Edit: "Global" Variables
    x264_param_t param;
    x264_t *encoder;
    x264_picture_t pic_in;
    x264_picture_t pic_out;
    x264_nal_t *headers;
    int i_nal;
    FILE *pFile;
My init function:
    void initX264() {
        pFile = fopen("file.h264", "wb");

        x264_param_t param;
        x264_param_default_preset(&param, "veryfast", "zerolatency");
        param.i_threads = 1;
        param.i_width = 1024;
        param.i_height = 768;
        param.i_fps_num = 30;
        param.i_fps_den = 1;
        param.i_keyint_max = 30;
        param.b_intra_refresh = 1;
        param.rc.i_rc_method = X264_RC_CRF;
        param.rc.f_rf_constant = 25;
        param.rc.f_rf_constant_max = 35;
        param.b_annexb = 0;
        param.b_repeat_headers = 0;
        param.i_log_level = X264_LOG_DEBUG;
        x264_param_apply_profile(&param, "baseline");

        encoder = x264_encoder_open(&param);
        x264_picture_alloc(&pic_in, X264_CSP_I420, 1024, 768);
        x264_encoder_parameters(encoder, &param);

        /* write the SPS/PPS/SEI headers at the start of the file */
        x264_encoder_headers(encoder, &headers, &i_nal);
        int size = headers[0].i_payload + headers[1].i_payload + headers[2].i_payload;
        fwrite(headers[0].p_payload, 1, size, pFile);
    }
This goes in the Render function and is executed about 30 times per second:
    GLubyte *data = new GLubyte[3 * 1024 * 768];
    GLubyte *PixelYUV = new GLubyte[3 * 1024 * 768];

    glReadPixels(0, 0, 1024, 768, GL_RGB, GL_UNSIGNED_BYTE, data);
    RGB2YUV(1024, 768, data, PixelYUV, PixelYUV + 1024 * 768,
            PixelYUV + 1024 * 768 + (1024 * 768) / 4, true);

    pic_in.img.plane[0] = PixelYUV;
    pic_in.img.plane[1] = PixelYUV + 1024 * 768;
    pic_in.img.plane[2] = PixelYUV + 1024 * 768 + (1024 * 768) / 4;

    x264_nal_t *nals;
    int i_nals;
    int frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);
    if (frame_size) {
        fwrite((char *)nals[0].p_payload, frame_size, 1, pFile);
    }

    /* free the per-frame buffers; allocating them every frame without
       freeing leaks roughly 4.5 MB per frame at this resolution */
    delete[] data;
    delete[] PixelYUV;
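One thing the post never does is flush or close the encoder. Below is a minimal teardown sketch, assuming the globals declared above (closeX264 is a name invented here). With the zerolatency tune there are normally no delayed frames, but draining them is the canonical way to end a stream:

    void closeX264() {
        x264_nal_t *nals;
        int i_nals;
        /* drain any frames still buffered inside the encoder */
        while (x264_encoder_delayed_frames(encoder)) {
            int frame_size = x264_encoder_encode(encoder, &nals, &i_nals, NULL, &pic_out);
            if (frame_size > 0)
                fwrite(nals[0].p_payload, 1, frame_size, pFile);
        }
        x264_encoder_close(encoder);
        fclose(pFile);
    }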
I got the RGB2YUV function from http://svn.gnumonks.org/trunk/21c3-video/cutting_tagging/tools/mpeg4ip-1.2/server/util/rgb2yuv/rgb2yuv.c
The output looks like:
    x264 [debug]: frame=   0 QP=11.14 NAL=3 Slice:I Poc:0 I:3072 P:0  SKIP:0    size=21133 bytes
    x264 [debug]: frame=   1 QP=20.08 NAL=2 Slice:P Poc:2 I:0    P:14 SKIP:3058 size=72 bytes
    x264 [debug]: frame=   2 QP=18.66 NAL=2 Slice:P Poc:4 I:0    P:48 SKIP:3024 size=161 bytes
    x264 [debug]: frame=   3 QP=18.23 NAL=2 Slice:P Poc:6 I:0    P:84 SKIP:2988 size=293 bytes
On Linux, running "file file.h264" reports only "data".
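That "file" reports only "data" is expected with this configuration: param.b_annexb = 0 together with param.b_repeat_headers = 0 produces length-prefixed NAL units (the style a container such as MP4 expects) rather than a self-contained byte stream, so nothing recognizes the raw dump. A hedged two-line change, applied before x264_encoder_open, yields a directly playable raw .h264 dump instead:

    param.b_annexb = 1;         /* prefix each NAL with a 00 00 00 01 start code */
    param.b_repeat_headers = 1; /* re-emit SPS/PPS alongside keyframes */

With those settings a tool such as ffplay should be able to play file.h264 directly; the length-prefixed variant is the right choice only once the stream is being muxed into a container.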
-
Struggling with where to start with creating an x264 .NET wrapper [closed]
3 August 2013, by Rob ElCalvo Perry
I have a compiled libx264-129.dll for Windows, and its functions are clearly visible in a DLL viewer. However, I haven't got a clue where to start in creating a .NET wrapper for it. The ultimate aim is to create a piece of screen-recording software with x264 as the codec.
Can anyone shed some light on where to start? (I understand P/Invoke etc.) What I'm looking for is the fundamentals needed to wrap the library and encode bitmaps from .NET.
I know that x264_param_t plays a part in creating the encoder object, but with no knowledge of C I'm totally stuck really :-s
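One pragmatic route, sketched below under stated assumptions: rather than P/Invoking x264's struct-heavy API directly (the layout of x264_param_t varies between builds, which makes marshalling fragile), wrap libx264 in a small flat C shim compiled into its own DLL and P/Invoke only the shim. The shim_* names are invented for illustration; this is a sketch, not a finished wrapper:

    /* flat_x264.c: a hypothetical flat C shim around libx264, built as a DLL
     * so the .NET side only ever P/Invokes plain pointer/int functions. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <x264.h>

    typedef struct {
        x264_t        *enc;
        x264_picture_t pic;
        int            w, h;
    } shim_encoder;

    shim_encoder *shim_open(int width, int height, int fps)
    {
        x264_param_t param;
        x264_param_default_preset(&param, "veryfast", "zerolatency");
        param.i_width   = width;
        param.i_height  = height;
        param.i_fps_num = fps;
        param.i_fps_den = 1;
        param.b_annexb  = 1;
        x264_param_apply_profile(&param, "baseline");

        shim_encoder *s = calloc(1, sizeof(*s));
        s->w = width;
        s->h = height;
        s->enc = x264_encoder_open(&param);
        x264_picture_alloc(&s->pic, X264_CSP_I420, width, height);
        return s;
    }

    /* Encode one tightly packed I420 frame and copy the NAL bytes into out.
     * Returns the number of bytes written, 0 for no output, -1 on error. */
    int shim_encode(shim_encoder *s, const uint8_t *i420, uint8_t *out, int cap)
    {
        x264_picture_t pic_out;
        x264_nal_t *nals;
        int i_nals;

        memcpy(s->pic.img.plane[0], i420, s->w * s->h);
        memcpy(s->pic.img.plane[1], i420 + s->w * s->h, s->w * s->h / 4);
        memcpy(s->pic.img.plane[2], i420 + s->w * s->h * 5 / 4, s->w * s->h / 4);
        s->pic.i_pts++;

        int size = x264_encoder_encode(s->enc, &nals, &i_nals, &s->pic, &pic_out);
        if (size < 0 || size > cap)
            return -1;
        if (size > 0)
            memcpy(out, nals[0].p_payload, size); /* x264 lays NALs out contiguously */
        return size;
    }

    void shim_close(shim_encoder *s)
    {
        x264_encoder_close(s->enc);
        x264_picture_clean(&s->pic);
        free(s);
    }

On the .NET side these three functions become three [DllImport] declarations taking IntPtr, int, and byte[] parameters, so no C struct ever crosses the interop boundary; the screen-capture loop then only has to convert each bitmap to I420 before calling shim_encode.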
-
Has anyone compiled libx264 using the CLI backend for the gcc compiler?
3 August 2013, by Spender
Has anyone compiled libx264 using the CLI backend for the gcc compiler? (That is, compiled x264 into a .NET DLL.)
-
iOS allocations grow when using x264 encoding
19 July 2013, by cssmhyl
I get the video YUV data in a callback and wrap the plane data in NSData objects. I then put the data into an NSArray and push the array onto a queue (an NSMutableArray). This is the code:
    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
        if ([Application sharedInstance].isRecording) {
            if (captureOutput == self.captureManager.videOutput) {
                uint64_t capturedHostTime = [self GetTickCount];
                int allSpace = capturedHostTime - lastCapturedHostTime;
                NSNumber *spaces = [[NSNumber alloc] initWithInt:allSpace];
                NSNumber *startTime = [[NSNumber alloc] initWithUnsignedLongLong:lastCapturedHostTime];
                lastCapturedHostTime = capturedHostTime;

                CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                CVPixelBufferLockBaseAddress(pixelBuffer, 0);
                uint8_t *baseAddress0 = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
                uint8_t *baseAddress1 = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
                size_t width = CVPixelBufferGetWidth(pixelBuffer);
                size_t height = CVPixelBufferGetHeight(pixelBuffer);
                NSData *baseAddress0Data = [[NSData alloc] initWithBytes:baseAddress0 length:width*height];
                NSData *baseAddress1Data = [[NSData alloc] initWithBytes:baseAddress1 length:width*height/2];
                CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

                NSArray *array = [[NSArray alloc] initWithObjects:baseAddress0Data, baseAddress1Data, spaces, startTime, nil];
                [baseAddress0Data release];
                [baseAddress1Data release];
                [spaces release];
                [startTime release];

                @synchronized([Application sharedInstance].pearVideoQueue) {
                    [[Application sharedInstance] enqueuePearVideo:[Application sharedInstance].pearVideoQueue withData:array];
                    [array release];
                }
            }
        }
    }
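This does not explain the leak by itself, but one thing worth checking in this callback: it copies width*height bytes straight from each plane's base address, while CVPixelBuffer planes are frequently padded, so bytes-per-row can be larger than the width. A stride-aware copy, sketched with CoreVideo's plain C API (copy_plane is a name invented here):

    #include <CoreVideo/CoreVideo.h>
    #include <stdint.h>
    #include <string.h>

    /* Copy one pixel-buffer plane row by row, dropping the per-row padding.
     * For 420 bi-planar video, plane 0 is (width x height) luma and plane 1
     * is (width x height/2) interleaved chroma. */
    static void copy_plane(CVPixelBufferRef buf, size_t plane,
                           uint8_t *dst, size_t width, size_t rows)
    {
        const uint8_t *src = CVPixelBufferGetBaseAddressOfPlane(buf, plane);
        size_t stride = CVPixelBufferGetBytesPerRowOfPlane(buf, plane);
        for (size_t r = 0; r < rows; r++)
            memcpy(dst + r * width, src + r * stride, width);
    }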
Now I run an operation that takes data from the queue and encodes it with x264; I destroy the array after encoding.
    - (void)main {
        while ([Application sharedInstance].pearVideoQueue) {
            if (![Application sharedInstance].isRecording) {
                NSLog(@"encode operation break");
                break;
            }
            if (![[Application sharedInstance].pearVideoQueue isQueueEmpty]) {
                NSArray *pearVideoArray;
                @synchronized([Application sharedInstance].pearVideoQueue) {
                    pearVideoArray = [[Application sharedInstance].pearVideoQueue dequeue];
                    [[Application sharedInstance] encodeToH264:pearVideoArray];
                    [pearVideoArray release];
                    pearVideoArray = nil;
                }
            } else {
                [NSThread sleepForTimeInterval:0.01];
            }
        }
    }
This is the encoding method:
    - (void)encodeX264:(NSArray *)array {
        int i264Nal;
        x264_picture_t pic_out;
        x264_nal_t *p264Nal;

        NSNumber *st = [array lastObject];
        NSNumber *sp = [array objectAtIndex:2];
        uint64_t startTime = [st unsignedLongLongValue];
        int spaces = [sp intValue];
        NSData *baseAddress0Data = [array objectAtIndex:0];
        NSData *baseAddress1Data = [array objectAtIndex:1];
        const char *baseAddress0 = baseAddress0Data.bytes;
        const char *baseAddress1 = baseAddress1Data.bytes;
        if (baseAddress0 == nil) {
            return;
        }

        /* copy the Y plane, then de-interleave the NV12 chroma plane into U and V */
        memcpy(p264Pic->img.plane[0], baseAddress0, PRESENT_FRAME_WIDTH*PRESENT_FRAME_HEIGHT);
        uint8_t *pDst1 = p264Pic->img.plane[1];
        uint8_t *pDst2 = p264Pic->img.plane[2];
        for (int i = 0; i < PRESENT_FRAME_WIDTH*PRESENT_FRAME_HEIGHT/4; i++) {
            *pDst1++ = *baseAddress1++;
            *pDst2++ = *baseAddress1++;
        }

        if (x264_encoder_encode(p264Handle, &p264Nal, &i264Nal, p264Pic, &pic_out) < 0) {
            fprintf(stderr, "x264_encoder_encode failed\n");
            i264Nal = 0; /* skip the NAL handling below on error */
        }

        if (i264Nal > 0) {
            int i_size;
            int spslen = 0;
            unsigned char spsData[1024];
            char *data = (char *)szBuffer + 100;
            memset(szBuffer, 0, sizeof(szBuffer));

            if (ifFirstSps) {
                ifFirstSps = NO;
                if (![Application sharedInstance].ifAudioStarted) {
                    NSLog(@"video first");
                    [Application sharedInstance].startTick = startTime;
                    NSLog(@"startTick: %llu", startTime);
                    [Application sharedInstance].ifVideoStarted = YES;
                }
            }

            for (int i = 0; i < i264Nal; i++) {
                if (p264Handle->nal_buffer_size < p264Nal[i].i_payload*3/2+4) {
                    p264Handle->nal_buffer_size = p264Nal[i].i_payload*2+4;
                    x264_free(p264Handle->nal_buffer);
                    p264Handle->nal_buffer = x264_malloc(p264Handle->nal_buffer_size);
                }
                i_size = p264Nal[i].i_payload;
                memcpy(data, p264Nal[i].p_payload, p264Nal[i].i_payload);

                /* the original post is garbled here; this loop apparently locates
                   the start-code boundary and reads the NAL unit type */
                int splitNum = 0;
                for (int j = 0; j < i_size; j++) {
                    if (data[j] == 0x01) { splitNum = j + 1; break; }
                }
                int type = data[splitNum] & 0x1f;

                int timeSpace;
                if (i264Nal - 1 >= 1) {
                    timeSpace = spaces/(i264Nal-1)*i;
                } else {
                    timeSpace = spaces/i264Nal*i;
                }
                int timeStamp = startTime - [Application sharedInstance].startTick + timeSpace;

                switch (type) {
                    case NALU_TYPE_SPS:
                        spslen = i_size - splitNum;
                        memcpy(spsData, data, spslen);
                        break;
                    case NALU_TYPE_PPS:
                        timeStamp = timeStamp - timeSpace;
                        [self pushSpsAndPpsQueue:(char *)spsData andppsData:(char *)data withPPSlength:spslen andPPSlength:(i_size-splitNum) andTimeStamp:timeStamp];
                        break;
                    case NALU_TYPE_IDR:
                        [self pushVideoNALU:(char *)data withLength:(i_size-splitNum) ifIDR:YES andTimeStamp:timeStamp];
                        break;
                    case NALU_TYPE_SLICE:
                    case NALU_TYPE_SEI:
                        [self pushVideoNALU:(char *)data withLength:(i_size-splitNum) ifIDR:NO andTimeStamp:timeStamp];
                        break;
                    default:
                        break;
                }
            }
        }
    }
The question: using Instruments I found that the memory for the captured data keeps increasing, but NSLog shows that the arrays I create and release do not pile up; when I release an array its retain count is 1, and the retain count of the objects it contains is also 1. If I skip the encoding step, memory does not increase. I am confused; please help. The image size is 640x480.
[Instruments Leaks screenshot]
-
FFmpeg installation for x264 codec
17 July 2013, by user1830062
Can anybody help me find an FFmpeg download for Windows with the x264 codec? I am developing a website that converts any file format to MP4 (H.264 baseline). Everything was fine with a wrapper installer called Moo0 FFmpeg, which I found at http://www.moo0.com/?top=http://www.moo0.com/software/FFmpeg/. I can run the exe on my local system, where it is installed, but I am not sure how to do the same when the site runs on a remote server, since the batch file location there is different. Any help related to the ffmpeg/x264 source code or accessing the batch file would be appreciated.
Thanks for your time
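For the conversion itself, a hedged example of the kind of command a Windows ffmpeg build with libx264 enabled accepts (input.avi and output.mp4 are placeholder names; ffmpeg builds of that era needed -strict experimental for the built-in AAC encoder):

    ffmpeg -i input.avi -c:v libx264 -profile:v baseline -c:a aac -strict experimental output.mp4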