Newest 'x264' Questions - Stack Overflow
Articles published on the site
-
x264 intrarefresh intra coded frame detection
11 July 2011, by RobS
I have used ffmpeg with libx264 to create a video using the intra-refresh flag.
Using wireshark on the RTP stream when the flag is disabled, it is easy to locate the keyframes (I-slices). When the intra-refresh flag is enabled, the keyframes are replaced by an SEI frame of type Recovery point and only the very first frame is marked as an I-slice.
All the others appear to be P-slices in my experiments.
These findings are as expected. I was wondering if it is possible to detect the intra-coded columns in the RTP packet sequence at a high level or if it is only possible by detailed parsing of the payload content?
I would like to be able to create a filter for the intra-only columns using the header information alone.
-
FFMPEG/X264 how to use them together
11 June 2011, by Vineet
Need a little help. I am trying to encode large videos using ffmpeg/x264. While x264 is totally unable to encode these videos to .mov, ffmpeg does a decent job.
But I need to use one of the flags that x264 provides to encode my video. So is there a good way to encode large videos using x264, given that I only want .mov as output?
While using x264 I am only specifying the input and output flags; I think that may be causing the problem.
Please advise.
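Since the x264 command-line tool never writes .mov, one approach is to stay inside ffmpeg and pass the x264 flag through its libx264 wrapper, e.g. with -x264opts in reasonably recent ffmpeg builds. File names and the option string here are placeholders:

```shell
# Placeholder names; swap the x264opts string for the flag you need.
ffmpeg -i input.avi -vcodec libx264 \
       -x264opts keyint=250:bframes=3 \
       -acodec copy output.mov
```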
-
How would I assign multiple mmaps from a single file descriptor?
9 June 2011, by Alex Stevens
So, for my final year project, I'm using Video4Linux2 to pull YUV420 images from a camera, parse them through to x264 (which uses these images natively), and then send the encoded stream via Live555 to an RTP/RTCP-compliant video player on a client over a wireless network. All of this I'm trying to do in real time, so there'll be a control algorithm, but that's not in the scope of this question. All of this - except Live555 - is being written in C. Currently, I'm near the end of encoding the video, but want to improve performance.
To say the least, I've hit a snag... I'm trying to avoid user-space pointers for V4L2 and use mmap(). I'm encoding video, but since it's YUV420, I've been malloc'ing new memory to hold the Y', U and V planes in three separate variables for x264 to read from. I would like to make these variables pointers into an mmap'ed piece of memory.
However, the V4L2 device has one single file descriptor for the buffered stream, and I need to split the stream into three mmap'ed variables adhering to the YUV420 standard, like so...
buffers[n_buffers].y_plane = mmap(NULL, (2 * width * height) / 3,
        PROT_READ | PROT_WRITE, MAP_SHARED,
        fd, buf.m.offset);
buffers[n_buffers].u_plane = mmap(NULL, width * height / 6,
        PROT_READ | PROT_WRITE, MAP_SHARED,
        fd, buf.m.offset + ((2 * width * height) / 3 + 1)
            / sysconf(_SC_PAGE_SIZE));
buffers[n_buffers].v_plane = mmap(NULL, width * height / 6,
        PROT_READ | PROT_WRITE, MAP_SHARED,
        fd, buf.m.offset + ((2 * width * height) / 3 + width * height / 6 + 1)
            / sysconf(_SC_PAGE_SIZE));
Where "width" and "height" is the resolution of the video (eg. 640x480).
From what I understand, mmap works through the file kind of like this (pseudo-ish code):
fd = v4l2_open(...);
lseek(fd, buf.m.offset + (2 * width * height) / 3);
read(fd, buffers[n_buffers].u_plane, width * height / 6);
My code is located in a Launchpad Repo here (for more background): http://bazaar.launchpad.net/~alex-stevens/+junk/spyPanda/files (Revision 11)
And the YUV420 format can be seen clearly from this Wiki illustration: http://en.wikipedia.org/wiki/File:Yuv420.svg (I essentially want to split up the Y, U, and V bytes into each mmap'ed memory)
Anyone care to explain a way to mmap three variables to memory from the one file descriptor, or where I went wrong? Or even hint at a better way to pass the YUV420 buffer to x264? :P
Cheers! ^^
-
How to install x264
4 June 2011, by z-buffer
I'm trying to build a program. When I run ./configure, this is what I get:
checking for X264... configure: WARNING: Test application not built (x264 codec missing).
Either you have not installed x264, or you have not installed it with the Gtk+ interface.
If you compile it from source, add these options to configure: --enable-shared --enable-gtk
I have installed various packages that are related to x264 and gtk, but I'm still getting this message. Which packages do I need to install?
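The warning comes from the application's own configure script, and it asks for an x264 build configured with --enable-shared and x264's optional Gtk+ interface; distro packages generally don't ship the latter, so building x264 from source is the usual route. A sketch, assuming the 2011-era videolan git URL and default install prefix:

```shell
# Build x264 with the options the configure check asks for.
# (URL and prefix are assumptions; adjust for your system.)
git clone git://git.videolan.org/x264.git
cd x264
./configure --enable-shared --enable-gtk
make
sudo make install
sudo ldconfig   # refresh the shared-library cache so configure finds it
```

Then re-run the application's ./configure.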
-
rtp live stream with libvlc
20 May 2011, by sokrat3s
I would like to use the libvlc API to stream live data, but instead of a file I would like to stream from a data pointer (NAL units from x264 encoding).
Is there a way to do this? Is it already implemented?
Thank you in advance.