Newest 'x264' Questions - Stack Overflow
-
"moov atom not found" when using av_interleaved_write_frame but not avio_write
9 October 2017, by icStatic

I am attempting to put together a class that can take arbitrary frames and construct a video from them using the ffmpeg 3.3.3 API. I've been struggling to find a good example for this, as the examples still seem to be using deprecated functions, so I've attempted to patch this together using the documentation in the headers and by referring to a few GitHub repos that seem to be using the new version.
If I use av_interleaved_write_frame to write the encoded packets to the output then ffprobe outputs the following:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0000000002760120] moov atom not found
X:\Diagnostics.mp4: Invalid data found when processing input
ffplay is unable to play the file generated using this method.
If I instead swap it out for a call to avio_write, ffprobe instead outputs:
Input #0, h264, from 'X:\Diagnostics.mp4':
  Duration: N/A, bitrate: N/A
    Stream #0:0: Video: h264 (Main), yuv420p(progressive), 672x380 [SAR 1:1 DAR 168:95], 25 fps, 25 tbr, 1200k tbn, 50 tbc
ffplay can mostly play this file until it gets towards the end, when it outputs:
Input #0, h264, from 'X:\Diagnostics.mp4':
  Duration: N/A, bitrate: N/A
    Stream #0:0: Video: h264 (Main), yuv420p(progressive), 672x380 [SAR 1:1 DAR 168:95], 25 fps, 25 tbr, 1200k tbn, 50 tbc
[h264 @ 000000000254ef80] error while decoding MB 31 22, bytestream -65
[h264 @ 000000000254ef80] concealing 102 DC, 102 AC, 102 MV errors in I frame
VLC cannot play files from either method. The second method's file displays a single black frame and then hides the video output. The first does not display anything. Neither of them gives a video duration.
Does anyone have any idea what's happening here? I assume my solution is close to working, as I'm getting a good chunk of valid frames coming through.
Code:
void main() {
    OutputStream Stream( "Output.mp4", 672, 380, 25, true );
    Stream.Initialize();
    int i = 100;
    while( i-- ) {
        //... Generate a frame
        Stream.WriteFrame( Frame );
    }
    Stream.CloseFile();
}

OutputStream::OutputStream( const std::string& Path, unsigned int Width, unsigned int Height, int Framerate, bool IsBGR )
    : Stream()
    , FrameIndex( 0 )
{
    auto& ID = *m_InternalData;
    ID.Path = Path;
    ID.Width = Width;
    ID.Height = Height;
    ID.Framerate.num = Framerate;
    ID.Framerate.den = 1;
    ID.PixelFormat = IsBGR ? AV_PIX_FMT_BGR24 : AV_PIX_FMT_RGB24;
    ID.CodecID = AV_CODEC_ID_H264;
    ID.CodecTag = 0;
    ID.AspectRatio.num = 1;
    ID.AspectRatio.den = 1;
}

CameraStreamError OutputStream::Initialize() {
    av_log_set_callback( &InputStream::LogCallback );
    av_register_all();
    avformat_network_init();

    auto& ID = *m_InternalData;
    av_init_packet( &ID.Packet );

    int Result = avformat_alloc_output_context2( &ID.FormatContext, nullptr, nullptr, ID.Path.c_str() );
    if( Result < 0 || !ID.FormatContext ) {
        STREAM_ERROR( UnknownError );
    }

    AVCodec* Encoder = avcodec_find_encoder( ID.CodecID );
    if( !Encoder ) {
        STREAM_ERROR( NoH264Support );
    }

    AVStream* OutStream = avformat_new_stream( ID.FormatContext, Encoder );
    if( !OutStream ) {
        STREAM_ERROR( UnknownError );
    }

    ID.CodecContext = avcodec_alloc_context3( Encoder );
    if( !ID.CodecContext ) {
        STREAM_ERROR( NoH264Support );
    }
    ID.CodecContext->time_base = av_inv_q( ID.Framerate );

    {
        AVCodecParameters* CodecParams = OutStream->codecpar;
        CodecParams->width = ID.Width;
        CodecParams->height = ID.Height;
        CodecParams->format = AV_PIX_FMT_YUV420P;
        CodecParams->codec_id = ID.CodecID;
        CodecParams->codec_type = AVMEDIA_TYPE_VIDEO;
        CodecParams->profile = FF_PROFILE_H264_MAIN;
        CodecParams->level = 40;

        Result = avcodec_parameters_to_context( ID.CodecContext, CodecParams );
        if( Result < 0 ) {
            STREAM_ERROR( EncoderCreationError );
        }
    }

    if( ID.IsVideo ) {
        ID.CodecContext->width = ID.Width;
        ID.CodecContext->height = ID.Height;
        ID.CodecContext->sample_aspect_ratio = ID.AspectRatio;
        ID.CodecContext->time_base = av_inv_q( ID.Framerate );
        if( Encoder->pix_fmts ) {
            ID.CodecContext->pix_fmt = Encoder->pix_fmts[0];
        } else {
            ID.CodecContext->pix_fmt = ID.PixelFormat;
        }
    }

    //Snip

    Result = avcodec_open2( ID.CodecContext, Encoder, nullptr );
    if( Result < 0 ) {
        STREAM_ERROR( EncoderCreationError );
    }

    Result = avcodec_parameters_from_context( OutStream->codecpar, ID.CodecContext );
    if( Result < 0 ) {
        STREAM_ERROR( EncoderCreationError );
    }

    if( ID.FormatContext->oformat->flags & AVFMT_GLOBALHEADER ) {
        ID.CodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    OutStream->time_base = ID.CodecContext->time_base;
    OutStream->avg_frame_rate = av_inv_q( OutStream->time_base );

    if( !( ID.FormatContext->oformat->flags & AVFMT_NOFILE ) ) {
        Result = avio_open( &ID.FormatContext->pb, ID.Path.c_str(), AVIO_FLAG_WRITE );
        if( Result < 0 ) {
            STREAM_ERROR( FileNotWriteable );
        }
    }

    Result = avformat_write_header( ID.FormatContext, nullptr );
    if( Result < 0 ) {
        STREAM_ERROR( WriteFailed );
    }

    ID.Output = std::make_unique( ID.CodecContext->width, ID.CodecContext->height, ID.CodecContext->pix_fmt );

    ID.ConversionContext = sws_getCachedContext( ID.ConversionContext,
        ID.Width, ID.Height, ID.PixelFormat,
        ID.CodecContext->width, ID.CodecContext->height, ID.CodecContext->pix_fmt,
        SWS_BICUBIC, NULL, NULL, NULL );

    return CameraStreamError::Success;
}

CameraStreamError OutputStream::WriteFrame( FFMPEG::Frame* Frame ) {
    auto& ID = *m_InternalData;

    ID.Output->Prepare();
    int OutputSliceSize = sws_scale( m_InternalData->ConversionContext, Frame->GetFrame()->data,
        Frame->GetFrame()->linesize, 0, Frame->GetHeight(),
        ID.Output->GetFrame()->data, ID.Output->GetFrame()->linesize );

    ID.Output->GetFrame()->pts = ID.CodecContext->frame_number;

    int Result = avcodec_send_frame( GetData().CodecContext, ID.Output->GetFrame() );
    if( Result == AVERROR(EAGAIN) ) {
        CameraStreamError ResultErr = SendAll();
        if( ResultErr != CameraStreamError::Success ) {
            return ResultErr;
        }
        Result = avcodec_send_frame( GetData().CodecContext, ID.Output->GetFrame() );
    }

    if( Result == 0 ) {
        CameraStreamError ResultErr = SendAll();
        if( ResultErr != CameraStreamError::Success ) {
            return ResultErr;
        }
    }

    FrameIndex++;
    return CameraStreamError::Success;
}

CameraStreamError OutputStream::SendAll( void ) {
    auto& ID = *m_InternalData;
    int Result;
    do {
        AVPacket TempPacket = {};
        av_init_packet( &TempPacket );

        Result = avcodec_receive_packet( GetData().CodecContext, &TempPacket );
        if( Result == 0 ) {
            av_packet_rescale_ts( &TempPacket, ID.CodecContext->time_base, ID.FormatContext->streams[0]->time_base );
            TempPacket.stream_index = ID.FormatContext->streams[0]->index;

            //avio_write( ID.FormatContext->pb, TempPacket.data, TempPacket.size );
            Result = av_interleaved_write_frame( ID.FormatContext, &TempPacket );
            if( Result < 0 ) {
                STREAM_ERROR( WriteFailed );
            }
            av_packet_unref( &TempPacket );
        } else if( Result != AVERROR(EAGAIN) ) {
            continue;
        } else if( Result != AVERROR_EOF ) {
            break;
        } else if( Result < 0 ) {
            STREAM_ERROR( WriteFailed );
        }
    } while( Result == 0 );

    return CameraStreamError::Success;
}

CameraStreamError OutputStream::CloseFile() {
    auto& ID = *m_InternalData;

    while( true ) { //Flush
        int Result = avcodec_send_frame( ID.CodecContext, nullptr );
        if( Result == 0 ) {
            CameraStreamError StrError = SendAll();
            if( StrError != CameraStreamError::Success ) {
                return StrError;
            }
        } else if( Result == AVERROR_EOF ) {
            break;
        } else {
            STREAM_ERROR( WriteFailed );
        }
    }

    int Result = av_write_trailer( ID.FormatContext );
    if( Result < 0 ) {
        STREAM_ERROR( WriteFailed );
    }

    if( !( ID.FormatContext->oformat->flags & AVFMT_NOFILE ) ) {
        Result = avio_close( ID.FormatContext->pb );
        if( Result < 0 ) {
            STREAM_ERROR( WriteFailed );
        }
    }

    return CameraStreamError::Success;
}
Note: I've simplified a few things and inlined a few bits that were elsewhere. I've also removed all the shutdown code, as anything that happens after the file is closed is irrelevant.
Full repo here: https://github.com/IanNorris/Witness. If you clone this, the issue is with the 'Diagnostics' output; the Output file is fine. There are two hardcoded paths to X:.
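For comparison, here is a minimal sketch of the call order that should produce a valid moov atom with the 3.x send/receive API, condensed from the ffmpeg muxing documentation. The names are illustrative and error handling is omitted; this is a sketch of the expected sequence, not a drop-in fix for the code above:

    AVFormatContext* Fmt = nullptr;
    avformat_alloc_output_context2( &Fmt, nullptr, nullptr, "out.mp4" );
    AVCodec* Codec = avcodec_find_encoder( AV_CODEC_ID_H264 );
    AVStream* St = avformat_new_stream( Fmt, Codec );
    AVCodecContext* Enc = avcodec_alloc_context3( Codec );
    Enc->width = 672;
    Enc->height = 380;
    Enc->pix_fmt = AV_PIX_FMT_YUV420P;
    Enc->time_base = AVRational{ 1, 25 };
    // This must be set *before* avcodec_open2, so the encoder emits global
    // SPS/PPS extradata, which the MP4 muxer needs to build the moov atom:
    if( Fmt->oformat->flags & AVFMT_GLOBALHEADER ) {
        Enc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }
    avcodec_open2( Enc, Codec, nullptr );
    avcodec_parameters_from_context( St->codecpar, Enc );
    St->time_base = Enc->time_base;
    avio_open( &Fmt->pb, "out.mp4", AVIO_FLAG_WRITE );
    avformat_write_header( Fmt, nullptr );
    // ... avcodec_send_frame / avcodec_receive_packet / av_interleaved_write_frame loop ...
    // av_write_trailer is what actually writes the moov atom for MP4; if any
    // error path skips it, the file is always left truncated:
    av_write_trailer( Fmt );
    avio_closep( &Fmt->pb );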
-
How to greatly reduce video size before downloading while maintaining good quality? [closed]
25 September 2017, by Momin Shaikh

I need to download huge files from YouTube with my Android phone: a lot of 360p videos for my study purposes. But living in a third-world country like Bangladesh, where bandwidth costs are super high, forces me to use less data. So I am looking for a way to greatly reduce file size while still maintaining good quality.
I know that converting videos to the x265/HEVC format reduces video size while keeping good quality. But I think that cannot be done online. Is there any way to just reduce file size without reducing the quality?
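For what it's worth, a typical command for this kind of size reduction, assuming an ffmpeg build with libx265 available (the CRF value trades size for quality; 28 is a common starting point for HEVC, and lower values mean larger, higher-quality files):

    ffmpeg -i input.mp4 -c:v libx265 -crf 28 -preset medium -c:a copy output.mp4

This has to run on a computer rather than online, but it keeps the resolution and audio untouched while often roughly halving the size compared to H.264.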
-
How to send large x264 NAL over RTMP?
17 September 2017, by samgak

I'm trying to stream video over RTMP using x264 and rtmplib in C++ on Windows.
So far I have managed to encode and stream a test video pattern consisting of animated multi-colored vertical lines that I generate in code. It's possible to start and stop the stream, and start and stop the player, and it works every time. However, as soon as I modify it to send encoded camera frames instead of the test pattern, the streaming becomes very unreliable. It only starts <20% of the time, and stopping and restarting doesn't work.
After searching around for answers, I concluded that it must be because the NAL size is too large (my test pattern is mostly flat color, so it encodes to a very small size), and that there is an Ethernet packet limit of around 1400 bytes that affects it. So I tried to make x264 output only NALs under 1200 bytes, by setting i_slice_max_size in my x264 setup:

    if (x264_param_default_preset(&param, "veryfast", "zerolatency") < 0)
        return false;
    param.i_csp = X264_CSP_I420;
    param.i_threads = 1;
    param.i_width = width;   // set frame width
    param.i_height = height; // set frame height
    param.b_cabac = 0;
    param.i_bframe = 0;
    param.b_interlaced = 0;
    param.rc.i_rc_method = X264_RC_ABR;
    param.i_level_idc = 21;
    param.rc.i_bitrate = 128;
    param.b_intra_refresh = 1;
    param.b_annexb = 1;
    param.i_keyint_max = 25;
    param.i_fps_num = 15;
    param.i_fps_den = 1;
    param.i_slice_max_size = 1200;
    if (x264_param_apply_profile(&param, "baseline") < 0)
        return false;
This reduces the NAL size, but it doesn't seem to make any difference to the reliability issues.
I've also tried fragmenting the NALs, using this Java code and RFC 3984 (RTP Payload Format for H.264 Video) as references, but it doesn't work at all (code below); the server says "stream has stopped" immediately after it starts. I've tried including and excluding the NAL header (with the timestamp etc.) in each fragment, or just in the first, but it doesn't work for me either way.
I'm pretty sure my issue has to be with the NAL size and not with PPS/SPS or anything like that (as in this question), or with my network connection or test server, because everything works fine with the test pattern.
I'm sending NAL_PPS and NAL_SPS (only once), and all NAL_SLICE_IDR and NAL_SLICE packets. I'm ignoring NAL_SEI and not sending it.

One thing that is confusing me is that the source code I can find on the internet that does similar things to what I want doesn't match up with what the RFC specifies. For example, RFC 3984 section 5.3 defines the NAL octet, which should have the NAL type in the lower 5 bits and the NRI in bits 5 and 6 (bit 7 is zero). The types NAL_SLICE_IDR and NAL_SLICE have values of 5 and 1 respectively, which are the ones in table 7-1 of this document (PDF) referenced by the RFC, and also the ones output by x264. But the code that actually works sets the NAL octet to 39 (0x27) and 23 (0x17), for reasons unknown to me. When implementing fragmented NALs, I've tried both following the spec and using the values copied over from the working code, but neither works.
Any help appreciated.
void sendNAL(unsigned char* buf, int len) {
    Logging::LogNumber("sendNAL", len);
    RTMPPacket * packet;
    long timeoffset = GetTickCount() - startTime;

    if (buf[2] == 0x00) { //00 00 00 01
        buf += 4;
        len -= 4;
    }
    else if (buf[2] == 0x01) { //00 00 01
        buf += 3;
        len -= 3;
    }
    else {
        Logging::LogStdString("INVALID x264 FRAME!");
    }

    int type = buf[0] & 0x1f;
    int maxNALSize = 1200;

    if (len <= maxNALSize) {
        packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE + len + 9);
        memset(packet, 0, RTMP_HEAD_SIZE);
        packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
        packet->m_nBodySize = len + 9;

        unsigned char *body = (unsigned char *)packet->m_body;
        memset(body, 0, len + 9);

        body[0] = 0x27;
        if (type == NAL_SLICE_IDR) {
            body[0] = 0x17;
        }

        body[1] = 0x01; //nal unit
        body[2] = 0x00;
        body[3] = 0x00;
        body[4] = 0x00;

        body[5] = (len >> 24) & 0xff;
        body[6] = (len >> 16) & 0xff;
        body[7] = (len >> 8) & 0xff;
        body[8] = (len) & 0xff;

        memcpy(&body[9], buf, len);

        packet->m_hasAbsTimestamp = 0;
        packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
        if (rtmp != NULL) {
            packet->m_nInfoField2 = rtmp->m_stream_id;
        }
        packet->m_nChannel = 0x04;
        packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
        packet->m_nTimeStamp = timeoffset;

        if (rtmp != NULL) {
            RTMP_SendPacket(rtmp, packet, QUEUE_RTMP);
        }
        free(packet);
    }
    else {
        packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE + maxNALSize + 90);
        memset(packet, 0, RTMP_HEAD_SIZE);

        // split large NAL into multiple smaller ones:
        int sentBytes = 0;
        bool firstFragment = true;
        while (sentBytes < len) {
            // decide how many bytes to send in this fragment:
            int fragmentSize = maxNALSize;
            if (sentBytes + fragmentSize > len)
                fragmentSize = len - sentBytes;
            bool lastFragment = (sentBytes + fragmentSize) >= len;

            packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
            int headerBytes = firstFragment ? 10 : 2;
            packet->m_nBodySize = fragmentSize + headerBytes;

            unsigned char *body = (unsigned char *)packet->m_body;
            memset(body, 0, fragmentSize + headerBytes);

            //key frame
            int NALtype = 0x27;
            if (type == NAL_SLICE_IDR) {
                NALtype = 0x17;
            }

            // Set FU-A indicator
            body[0] = (byte)((NALtype & 0x60) & 0xFF); // FU indicator NRI
            body[0] += 28; // 28 = FU-A (fragmentation unit A) see RFC: https://tools.ietf.org/html/rfc3984

            // Set FU-A header
            body[1] = (byte)(NALtype & 0x1F); // FU header type
            body[1] += (firstFragment ? 0x80 : 0) + (lastFragment ? 0x40 : 0); // Start/End bits

            body[2] = 0x01; //nal unit
            body[3] = 0x00;
            body[4] = 0x00;
            body[5] = 0x00;

            body[6] = (len >> 24) & 0xff;
            body[7] = (len >> 16) & 0xff;
            body[8] = (len >> 8) & 0xff;
            body[9] = (len) & 0xff;

            //copy data
            memcpy(&body[headerBytes], buf + sentBytes, fragmentSize);

            packet->m_hasAbsTimestamp = 0;
            packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
            if (rtmp != NULL) {
                packet->m_nInfoField2 = rtmp->m_stream_id;
            }
            packet->m_nChannel = 0x04;
            packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
            packet->m_nTimeStamp = timeoffset;

            if (rtmp != NULL) {
                RTMP_SendPacket(rtmp, packet, TRUE);
            }

            sentBytes += fragmentSize;
            firstFragment = false;
        }
        free(packet);
    }
}
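As an aside on the 0x17/0x27 confusion above: those bytes match the FLV VideoTagHeader that RTMP video messages carry, not the RFC 3984 NAL octet, which would explain why the working code disagrees with the RFC. A sketch of that layout as I understand it, stated as an observation rather than a verified fix:

    // FLV/RTMP video message body layout (what 0x17 / 0x27 appear to be):
    // body[0]   (FrameType << 4) | CodecID  -> 0x17 = keyframe + AVC(7),
    //                                          0x27 = inter frame + AVC(7)
    // body[1]   AVCPacketType               -> 0 = AVC sequence header, 1 = AVC NALU
    // body[2-4] composition time offset (usually 0)
    // body[5-8] 32-bit big-endian NAL length (AVCC framing), then the NAL bytes
    //
    // In other words, RTMP expects length-prefixed AVCC NALs inside one message;
    // FU-A fragmentation from RFC 3984 is an RTP-level mechanism and would not
    // normally appear inside an RTMP body at all.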
-
GStreamer x264 on Linux (ARM)
8 September 2017, by Tõnu Samuel

Trying to get streaming to work with x264 encoding.
I am doing some black magic, stitching two images together, which is known to work:
gst-launch-1.0 -e \
  v4l2src device=/dev/video0 ! \
  video/x-raw,framerate=90/1,width=640,height=480 ! m.sink_0 \
  v4l2src device=/dev/video1 ! \
  video/x-raw,framerate=90/1,width=640,height=480 ! m.sink_1 \
  videomixer name=m sink_1::xpos=640 ! \
  video/x-raw,framerate=90/1,width=1280,height=480 ! \
  xvimagesink
Now I am trying to get the same thing over an x264 stream, with the help of the internet:
gst-launch-1.0 -e \
  v4l2src device=/dev/video0 ! \
  video/x-raw,framerate=90/1,width=640,height=480 ! m.sink_0 \
  v4l2src device=/dev/video1 ! \
  video/x-raw,framerate=90/1,width=640,height=480 ! m.sink_1 \
  videomixer name=m sink_1::xpos=640 ! \
  video/x-raw,framerate=90/1,width=1280,height=480 ! \
  x264enc tune=zerolatency byte-stream=true bitrate=3000 threads=2 ! \
  h264parse config-interval=1 ! \
  rtph264pay ! \
  udpsink host=127.0.0.1 port=5000
And it seems to work, because no errors appear. But I see no way to receive the image.
I have tried
gst-launch-1.0 udpsrc port=5000 ! application/x-rtp ! rtph264depay ! h264parse ! avdec_h264 ! xvimagesink
which does not produce anything useful. I also attempted to use VLC with an SDP file:
c=IN IP4 127.0.0.1
m=video 5000 RTP/AVP 96
a=rtpmap:96 H264/3000
I must be doing something wrong, but I am unsure what.
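For reference, a receiving pipeline that is known to work with rtph264pay in GStreamer 1.x, assuming the default payload type 96 — udpsrc needs explicit caps, because raw RTP arriving on a socket carries no caps of its own:

gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! h264parse ! avdec_h264 ! xvimagesink sync=false

Note also that the clock rate in an SDP rtpmap line for H.264 is the 90 kHz RTP clock, not the bitrate, so "a=rtpmap:96 H264/3000" above would need to be "a=rtpmap:96 H264/90000".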
EDIT: There was a question about the GStreamer version. This is probably the information needed:
ubuntu@tegra-ubuntu:~$ gst-launch-1.0 --version
gst-launch-1.0 version 1.2.4
GStreamer 1.2.4
https://launchpad.net/distros/ubuntu/+source/gstreamer1.0
ubuntu@tegra-ubuntu:~$ dpkg -l | grep gstreamer
ii gir1.2-gstreamer-1.0                         1.2.4-0ubuntu1.1        armhf  GObject introspection data for the GStreamer library
ii gstreamer-tools                              0.10.36-1.2ubuntu3      armhf  Tools for use with GStreamer
ii gstreamer0.10-alsa:armhf                     0.10.36-1.1ubuntu2      armhf  GStreamer plugin for ALSA
ii gstreamer0.10-fluendo-mp3:armhf              0.10.23.debian-3        armhf  Fluendo mp3 decoder GStreamer 0.10 plugin
ii gstreamer0.10-nice:armhf                     0.1.4-1                 armhf  ICE library (GStreamer 0.10 plugin)
ii gstreamer0.10-plugins-bad:armhf              0.10.23-7.2ubuntu1.3    armhf  GStreamer plugins from the "bad" set
ii gstreamer0.10-plugins-bad-multiverse         0.10.21-1ubuntu3        armhf  GStreamer plugins from the "bad" set (Multiverse Variant)
ii gstreamer0.10-plugins-base:armhf             0.10.36-1.1ubuntu2      armhf  GStreamer plugins from the "base" set
ii gstreamer0.10-plugins-base-apps              0.10.36-1.1ubuntu2      armhf  GStreamer helper programs from the "base" set
ii gstreamer0.10-plugins-good:armhf             0.10.31-3+nmu1ubuntu5.2 armhf  GStreamer plugins from the "good" set
ii gstreamer0.10-plugins-ugly:armhf             0.10.19-2ubuntu5        armhf  GStreamer plugins from the "ugly" set
ii gstreamer0.10-pulseaudio:armhf               0.10.31-3+nmu1ubuntu5.2 armhf  GStreamer plugin for PulseAudio
ii gstreamer0.10-tools                          0.10.36-1.2ubuntu3      armhf  Tools for use with GStreamer
ii gstreamer0.10-x:armhf                        0.10.36-1.1ubuntu2      armhf  GStreamer plugins for X11 and Pango
ii gstreamer1.0-alsa:armhf                      1.2.4-1~ubuntu2         armhf  GStreamer plugin for ALSA
ii gstreamer1.0-clutter                         2.0.8-1build1           armhf  Clutter PLugin for GStreamer 1.0
ii gstreamer1.0-fluendo-mp3:armhf               0.10.23.debian-3        armhf  Fluendo mp3 decoder GStreamer 1.0 plugin
ii gstreamer1.0-libav:armhf                     1.2.4-1~ubuntu1         armhf  libav plugin for GStreamer
ii gstreamer1.0-nice:armhf                      0.1.4-1                 armhf  ICE library (GStreamer plugin)
ii gstreamer1.0-plugins-bad:armhf               1.2.4-1~ubuntu1.1       armhf  GStreamer plugins from the "bad" set
ii gstreamer1.0-plugins-bad-faad:armhf          1.2.4-1~ubuntu1.1       armhf  GStreamer faad plugin from the "bad" set
ii gstreamer1.0-plugins-bad-videoparsers:armhf  1.2.4-1~ubuntu1.1       armhf  GStreamer videoparsers plugin from the "bad" set
ii gstreamer1.0-plugins-base:armhf              1.2.4-1~ubuntu2         armhf  GStreamer plugins from the "base" set
ii gstreamer1.0-plugins-base-apps               1.2.4-1~ubuntu2         armhf  GStreamer helper programs from the "base" set
ii gstreamer1.0-plugins-good:armhf              1.2.4-1~ubuntu1.3       armhf  GStreamer plugins from the "good" set
ii gstreamer1.0-plugins-ugly:armhf              1.2.3-2build1           armhf  GStreamer plugins from the "ugly" set
ii gstreamer1.0-pulseaudio:armhf                1.2.4-1~ubuntu1.3       armhf  GStreamer plugin for PulseAudio
ii gstreamer1.0-tools                           1.2.4-0ubuntu1.1        armhf  Tools for use with GStreamer
ii gstreamer1.0-x:armhf                         1.2.4-1~ubuntu2         armhf  GStreamer plugins for X11 and Pango
ii libgstreamer-plugins-bad0.10-0:armhf         0.10.23-7.2ubuntu1.3    armhf  GStreamer shared libraries from the "bad" set
ii libgstreamer-plugins-bad1.0-0:armhf          1.2.4-1~ubuntu1.1       armhf  GStreamer development files for libraries from the "bad" set
ii libgstreamer-plugins-base0.10-0:armhf        0.10.36-1.1ubuntu2      armhf  GStreamer libraries from the "base" set
ii libgstreamer-plugins-base1.0-0:armhf         1.2.4-1~ubuntu2         armhf  GStreamer libraries from the "base" set
ii libgstreamer-plugins-good1.0-0:armhf         1.2.4-1~ubuntu1.3       armhf  GStreamer development files for libraries from the "good" set
ii libgstreamer0.10-0:armhf                     0.10.36-1.2ubuntu3      armhf  Core GStreamer libraries and elements
ii libgstreamer1.0-0:armhf                      1.2.4-0ubuntu1.1        armhf  Core GStreamer libraries and elements
$
-
How to re-encode with ffmpeg (with limited x264)
6 September 2017, by Sarfraz

Until now I have used this script to re-encode my rips for my box (TV decoder):
^_^ ( ~ ) -> cat ~/++/src/convert.sh
#! /bin/bash
name=$(path -r "$1") # it gives the file name without the extension
[ "$1" = *.mp4 ] && ffmpeg -i "$name".mp4 -vcodec copy -acodec copy "$name".mkv
x264 --preset veryfast --tune animation --crf 18 --vf resize:720,576,16:15 -o "$name".tmp.mkv "$name".mkv
mkvmerge -o "$name [freeplayer sd]".mkv "$name".tmp.mkv --no-video "$1"
rm -rf "$name".tmp.mkv
[ "$1" = *.mp4 ] && rm -rf "$name".mkv
exit 0
#EOF
It works on my Ubuntu and Arch Linux laptops, but it doesn't on my desktop, which runs Fedora. Google says that the x264 package shipped by RPM Fusion doesn't support lavf and ffms2. And I cannot uninstall it, because SMPlayer (which I like) needs it.
OK, so I have to compile it. Google then says "you have to build ffmpeg, ffms2, then x264, ensuring that the flags are correctly referenced." Well, that didn't work (ffms2 cannot find LIBAV, even when I tell it where to look, and x264 doesn't configure with lavf...).
My question is: can I use ffmpeg alone to do what my script does? I have ffmpeg version 0.8.11, x264 0.116.2048 59cb2eb, and gcc 4.6.1 20110804 (Red Hat 4.6.1-7).
EDIT: Ok, I found that: ffmpeg -i input file -acodec copy -vcodec libx264 -preset veryfast -tune animation [that part I don’t have] output
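Completing the command from the edit above, an ffmpeg-only equivalent of the script would look something like the following, assuming the build includes libx264 plus the scale and setsar filters (the setsar step reproduces x264's resize:720,576,16:15 anamorphic flag; on a build as old as 0.8 its availability may need checking):

ffmpeg -i input.mkv -acodec copy -vcodec libx264 -preset veryfast -tune animation -crf 18 -vf "scale=720:576,setsar=16/15" output.mkv

If setsar is missing, adding -aspect 4:3 on the output is the display-aspect equivalent of SAR 16:15 at 720x576.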