Newest 'libx264' Questions - Stack Overflow
Articles published on the site
-
libavformat/libavcodec providing invalid container header
14 December 2017, by seanr8
I'm using libavcodec to encode a stream to h264 and libavformat to store it in an mp4. The resulting container has an invalid header: the file plays in VLC, but not in any other player.
I've found that using the mp4 container and the "mpeg4" codec produces a valid mp4 file, but using libx265 (HEVC) or the libx264 codec produces invalid mp4s.
I can use
ffmpeg -i invalid.mp4 -vcodec copy valid.mp4
and I get a file of almost exactly the same size, but in a valid container. Examples of these files are here: Broken file and Repaired file [use the download links in the upper right to examine them].
I used a hex editor to see the differences in the headers of the two files and the invalid one is 1 byte smaller than the valid one.
The code I'm using to open the container and codec and to write the header is here:
AVOutputFormat *container_format;
AVFormatContext *container_format_context;
AVStream *video_stream;
int ret;

/* allocate the output media context */
avformat_alloc_output_context2(&container_format_context, NULL, NULL, out_file);
if (!container_format_context) {
    log(ERROR, "Unable to determine container format from filename, exiting\n");
    exit(1);
} else {
    log(INFO, "Using container %s\n", container_format_context->oformat->name);
}
container_format = container_format_context->oformat;

/* Pull codec based on name */
AVCodec* codec = avcodec_find_encoder_by_name(codec_name);
if (codec == NULL) {
    log(ERROR, "Failed to locate codec \"%s\".", codec_name);
    exit(1);
}

/* create stream */
video_stream = avformat_new_stream(container_format_context, codec);
if (!video_stream) {
    log(ERROR, "Could not allocate encoder stream. Cannot continue.\n");
    exit(1);
}
video_stream->id = container_format_context->nb_streams - 1;
video_stream->time_base = video_stream->codec->time_base = (AVRational) {1, 25};

/* Retrieve encoding context */
AVCodecContext* avcodec_context = video_stream->codec;
if (avcodec_context == NULL) {
    log(ERROR, "Failed to allocate context for codec \"%s\".", codec_name);
    exit(1);
}

/* Init context with encoding parameters */
avcodec_context->bit_rate = bitrate;
avcodec_context->width = width;
avcodec_context->height = height;
avcodec_context->gop_size = 10;
avcodec_context->max_b_frames = 1;
avcodec_context->qmax = 31;
avcodec_context->qmin = 2;
avcodec_context->pix_fmt = AV_PIX_FMT_YUV420P;

av_dump_format(container_format_context, 0, out_file, 1);

/* Open codec for use */
if (avcodec_open2(avcodec_context, codec, NULL) < 0) {
    log(ERROR, "Failed to open codec \"%s\".", codec_name);
    exit(1);
}

/* Allocate corresponding frame */
AVFrame* frame = av_frame_alloc();
if (frame == NULL) {
    exit(1);
}

/* Copy necessary data for frame from avcodec_context */
frame->format = avcodec_context->pix_fmt;
frame->width = avcodec_context->width;
frame->height = avcodec_context->height;

/* Allocate actual backing data for frame */
if (av_image_alloc(frame->data, frame->linesize, frame->width,
                   frame->height, frame->format, 32) < 0) {
    exit(1);
}

/* open the output file, if the container needs it */
if (!(container_format->flags & AVFMT_NOFILE)) {
    ret = avio_open(&container_format_context->pb, out_file, AVIO_FLAG_WRITE);
    if (ret < 0) {
        log(ERROR, "Error occurred while opening output file: %s\n", av_err2str(ret));
        exit(1);
    }
}

/* write the stream header, if needed */
ret = avformat_write_header(container_format_context, NULL);
if (ret < 0) {
    log(ERROR, "Error occurred while writing output file header: %s\n", av_err2str(ret));
}
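One thing the snippet never does, and a plausible cause of a header that only VLC tolerates (a hedged sketch using the question's own variable names, not a confirmed diagnosis): MP4 wants the H.264 SPS/PPS in the codec extradata, and FFmpeg muxers signal this with AVFMT_GLOBALHEADER. The snippet also stops at avformat_write_header; for MP4, the index (the moov atom) is only written by av_write_trailer.

/* before avcodec_open2(): let the encoder emit global headers
 * when the container (MP4 here) wants SPS/PPS in extradata */
if (container_format_context->oformat->flags & AVFMT_GLOBALHEADER)
    avcodec_context->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

/* after the last packet: finalize the container.
 * av_write_trailer() writes the MP4 moov atom; without it most
 * players (VLC excepted) reject the file. */
av_write_trailer(container_format_context);
if (!(container_format->flags & AVFMT_NOFILE))
    avio_closep(&container_format_context->pb);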
The code to encode a frame is here:
/* Init video packet */
AVPacket packet;
av_init_packet(&packet);

/* Request that encoder allocate data for packet */
packet.data = NULL;
packet.size = 0;

/* Write frame to video */
int got_data;
if (avcodec_encode_video2(avcontext, &packet, frame, &got_data) < 0) {
    log(WARNING, "Error encoding frame #%" PRId64, video_struct->next_pts);
    return -1;
}

/* Write corresponding data to file */
if (got_data) {
    if (packet.pts != AV_NOPTS_VALUE) {
        packet.pts = av_rescale_q(packet.pts,
                                  video_struct->output_stream->codec->time_base,
                                  video_struct->output_stream->time_base);
    }
    if (packet.dts != AV_NOPTS_VALUE) {
        packet.dts = av_rescale_q(packet.dts,
                                  video_struct->output_stream->codec->time_base,
                                  video_struct->output_stream->time_base);
    }
    write_packet(video_struct, &packet, packet.size);
    av_packet_unref(&packet);
}
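As an aside (behavior should be identical, assuming the same source and destination time bases), the two guarded av_rescale_q calls can be collapsed into FFmpeg's own helper, which rescales pts, dts, and duration in one step and already skips AV_NOPTS_VALUE fields:

/* rescale all packet timestamps from the codec time base
 * to the stream time base */
av_packet_rescale_ts(&packet,
                     video_struct->output_stream->codec->time_base,
                     video_struct->output_stream->time_base);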
And the code to write the packet to the video stream:
static int write_packet(video_struct* video, void* data, int size)
{
    int ret;

    /* data carries an AVPacket; route it to the container's output stream */
    AVPacket *pkt = (AVPacket*) data;
    pkt->stream_index = video->output_stream->index;

    ret = av_interleaved_write_frame(video->container_format_context, pkt);
    if (ret != 0) {
        return -1;
    }

    /* Data was written successfully */
    return ret;
}
-
erroneous pipeline: no element "x264enc"
27 November 2017, by Cerato
I have a GStreamer command that requires x264enc, and the error I get is:
WARNING: erroneous pipeline: no element "x264enc"
I have seen posts saying the solution is to install gstreamer1.0-plugins-ugly, but I need to launch the command on Windows, and I have only managed to find that plugin for Linux.
Please help.
-
Android ffmpeg adb shell Unknown encoder 'libx264'
25 November 2017, by IChp
When I run ffmpeg on Android via adb shell, it shows this error:
Duration: 00:00:12.00, start: 0.000000, bitrate: 30412 kb/s
  Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 352x288, 30412 kb/s, 25 tbr, 25 tbn, 25 tbc
Unknown encoder 'libx264'
I don't understand what went wrong; it has bothered me for days. Can you help me out? Thanks in advance!
(I pushed the compiled libffmpeg.so to /system/lib and pushed ffmpeg to /system/bin.)
Target: compile ffmpeg with x264, and run libffmpeg.so on an Android device via adb shell.
Build environment: Ubuntu 16.0 32-bit, NDK r10b 32-bit, platform 15, ffmpeg 3.0, x264 latest.
My configure:
cd ffmpeg-3.0.9
export NDK=/home/ichp/project/android-ndk-r10b
export PREBUILT=$NDK/toolchains/arm-linux-androideabi-4.8/prebuilt
export PLATFORM=$NDK/platforms/android-15/arch-arm
export PREFIX=../simplefflib
export CURRENT_PATH=/home/ichp/project/FREYA-LIVE-LIBRARY-OPTIMIZER-FOR-ANDROID

./configure --target-os=linux --prefix=$PREFIX --enable-cross-compile \
    --enable-runtime-cpudetect --enable-asm --arch=arm --cpu=armv7-a \
    --enable-libx264 --enable-encoder=libx264 --disable-encoders \
    --disable-protocols --enable-protocol=file --enable-version3 \
    --cc=$PREBUILT/linux-x86/bin/arm-linux-androideabi-gcc \
    --cross-prefix=$PREBUILT/linux-x86/bin/arm-linux-androideabi- \
    --disable-stripping --nm=$PREBUILT/linux-x86/bin/arm-linux-androideabi-nm \
    --sysroot=$PLATFORM --enable-gpl --disable-shared --enable-static \
    --enable-small --disable-ffprobe --disable-ffplay --enable-ffmpeg \
    --disable-ffserver --disable-debug --enable-pthreads --enable-neon \
    --extra-cflags="-I$CURRENT_PATH/temp/armeabi-v7a/include -fPIC -marm -DANDROID -DNDEBUG -static -O3 -march=armv7-a -mfpu=neon -mtune=generic-armv7-a -mfloat-abi=softfp -ftree-vectorize -mvectorize-with-neon-quad -ffast-math" \
    --extra-ldflags="-L$CURRENT_PATH/temp/armeabi-v7a/lib"

make clean
make
make install
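A quick way to narrow this down (hedged: it assumes nothing beyond the commands shown, and it is exactly what it verifies) is to ask the ffmpeg on the device what it was built with. A stale ffmpeg earlier in PATH, or a libffmpeg.so left over from a previous configure run without --enable-libx264, produces exactly this "Unknown encoder" error:

adb shell
ffmpeg -version               # the printed configuration should include --enable-libx264
ffmpeg -encoders | grep 264   # should list libx264 (assumes grep exists on the device)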
-
Live555 : X264 Stream Live source based on "testOnDemandRTSPServer"
26 October 2017, by user2660369
I am trying to create an RTSP server that streams the OpenGL output of my program. I had a look at "How to write a Live555 FramedSource to allow me to stream H.264 live", but I need the stream to be unicast, so I had a look at testOnDemandRTSPServer. Using the same code fails. To my understanding, I need to provide memory in which I store my H.264 frames so the on-demand server can read them on demand.
H264VideoStreamServerMediaSubsession.cpp
H264VideoStreamServerMediaSubsession* H264VideoStreamServerMediaSubsession::createNew(UsageEnvironment& env, Boolean reuseFirstSource)
{
    return new H264VideoStreamServerMediaSubsession(env, reuseFirstSource);
}

H264VideoStreamServerMediaSubsession::H264VideoStreamServerMediaSubsession(UsageEnvironment& env, Boolean reuseFirstSource)
    : OnDemandServerMediaSubsession(env, reuseFirstSource),
      fAuxSDPLine(NULL), fDoneFlag(0), fDummyRTPSink(NULL)
{
}

H264VideoStreamServerMediaSubsession::~H264VideoStreamServerMediaSubsession()
{
    delete[] fAuxSDPLine;
}

static void afterPlayingDummy(void* clientData)
{
    H264VideoStreamServerMediaSubsession* subsess = (H264VideoStreamServerMediaSubsession*)clientData;
    subsess->afterPlayingDummy1();
}

void H264VideoStreamServerMediaSubsession::afterPlayingDummy1()
{
    // Unschedule any pending 'checking' task:
    envir().taskScheduler().unscheduleDelayedTask(nextTask());
    // Signal the event loop that we're done:
    setDoneFlag();
}

static void checkForAuxSDPLine(void* clientData)
{
    H264VideoStreamServerMediaSubsession* subsess = (H264VideoStreamServerMediaSubsession*)clientData;
    subsess->checkForAuxSDPLine1();
}

void H264VideoStreamServerMediaSubsession::checkForAuxSDPLine1()
{
    char const* dasl;
    if (fAuxSDPLine != NULL) {
        // Signal the event loop that we're done:
        setDoneFlag();
    } else if (fDummyRTPSink != NULL && (dasl = fDummyRTPSink->auxSDPLine()) != NULL) {
        fAuxSDPLine = strDup(dasl);
        fDummyRTPSink = NULL;
        // Signal the event loop that we're done:
        setDoneFlag();
    } else {
        // try again after a brief delay:
        int uSecsToDelay = 100000; // 100 ms
        nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecsToDelay, (TaskFunc*)checkForAuxSDPLine, this);
    }
}

char const* H264VideoStreamServerMediaSubsession::getAuxSDPLine(RTPSink* rtpSink, FramedSource* inputSource)
{
    if (fAuxSDPLine != NULL) return fAuxSDPLine; // it's already been set up (for a previous client)

    if (fDummyRTPSink == NULL) { // we're not already setting it up for another, concurrent stream
        // Note: For H264 video, the 'config' information ("profile-level-id" and "sprop-parameter-sets")
        // isn't known until we start reading the stream. This means that "rtpSink"s "auxSDPLine()" will
        // be NULL initially, and we need to start reading data until this changes.
        fDummyRTPSink = rtpSink;

        // Start reading the stream:
        fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this);

        // Check whether the sink's 'auxSDPLine()' is ready:
        checkForAuxSDPLine(this);
    }

    envir().taskScheduler().doEventLoop(&fDoneFlag);
    return fAuxSDPLine;
}

FramedSource* H264VideoStreamServerMediaSubsession::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate)
{
    estBitrate = 500; // kbps
    megamol::remotecontrol::View3D_MRC *parent = (megamol::remotecontrol::View3D_MRC*)this->parent;
    return H264VideoStreamFramer::createNew(envir(), parent->h264FramedSource);
}

RTPSink* H264VideoStreamServerMediaSubsession::createNewRTPSink(Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic, FramedSource* /*inputSource*/)
{
    return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}
FramedSource.cpp
H264FramedSource* H264FramedSource::createNew(UsageEnvironment& env, unsigned preferredFrameSize, unsigned playTimePerFrame)
{
    return new H264FramedSource(env, preferredFrameSize, playTimePerFrame);
}

H264FramedSource::H264FramedSource(UsageEnvironment& env, unsigned preferredFrameSize, unsigned playTimePerFrame)
    : FramedSource(env), fPreferredFrameSize(fMaxSize), fPlayTimePerFrame(playTimePerFrame),
      fLastPlayTime(0), fCurIndex(0)
{
    x264_param_default_preset(&param, "veryfast", "zerolatency");
    param.i_threads = 1;
    param.i_width = 1024;
    param.i_height = 768;
    param.i_fps_num = 30;
    param.i_fps_den = 1;

    // Intra refresh:
    param.i_keyint_max = 60;
    param.b_intra_refresh = 1;

    // Rate control:
    param.rc.i_rc_method = X264_RC_CRF;
    param.rc.f_rf_constant = 25;
    param.rc.f_rf_constant_max = 35;
    param.i_sps_id = 7;

    // For streaming:
    param.b_repeat_headers = 1;
    param.b_annexb = 1;
    x264_param_apply_profile(&param, "baseline");
    param.i_log_level = X264_LOG_ERROR;

    encoder = x264_encoder_open(&param);

    pic_in.i_type = X264_TYPE_AUTO;
    pic_in.i_qpplus1 = 0;
    pic_in.img.i_csp = X264_CSP_I420;
    pic_in.img.i_plane = 3;
    x264_picture_alloc(&pic_in, X264_CSP_I420, 1024, 768);

    convertCtx = sws_getContext(1024, 768, PIX_FMT_RGBA,
                                1024, 768, PIX_FMT_YUV420P,
                                SWS_FAST_BILINEAR, NULL, NULL, NULL);

    eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
}

H264FramedSource::~H264FramedSource()
{
    envir().taskScheduler().deleteEventTrigger(eventTriggerId);
    eventTriggerId = 0;
}

void H264FramedSource::AddToBuffer(uint8_t* buf, int surfaceSizeInBytes)
{
    uint8_t* surfaceData = new uint8_t[surfaceSizeInBytes];
    memcpy(surfaceData, buf, surfaceSizeInBytes);

    int srcstride = 1024 * 4;
    sws_scale(convertCtx, &surfaceData, &srcstride, 0, 768, pic_in.img.plane, pic_in.img.i_stride);

    x264_nal_t* nals = NULL;
    int i_nals = 0;
    int frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);

    if (frame_size >= 0) {
        static bool alreadydone = false;
        if (!alreadydone) {
            // note: on the first call this overwrites nals/i_nals with the header NALs
            x264_encoder_headers(encoder, &nals, &i_nals);
            alreadydone = true;
        }
        for (int i = 0; i < i_nals; ++i) {
            m_queue.push(nals[i]);
        }
    }

    delete[] surfaceData;
    surfaceData = nullptr;

    envir().taskScheduler().triggerEvent(eventTriggerId, this);
}

void H264FramedSource::doGetNextFrame()
{
    deliverFrame();
}

void H264FramedSource::deliverFrame0(void* clientData)
{
    ((H264FramedSource*)clientData)->deliverFrame();
}

void H264FramedSource::deliverFrame()
{
    x264_nal_t nalToDeliver;

    if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
        if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
            // This is the first frame, so use the current time:
            gettimeofday(&fPresentationTime, NULL);
        } else {
            // Increment by the play time of the previous data:
            unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
            fPresentationTime.tv_sec += uSeconds / 1000000;
            fPresentationTime.tv_usec = uSeconds % 1000000;
        }

        // Remember the play time of this data:
        fLastPlayTime = (fPlayTimePerFrame * fFrameSize) / fPreferredFrameSize;
        fDurationInMicroseconds = fLastPlayTime;
    } else {
        // We don't know a specific play time duration for this data,
        // so just record the current time as being the 'presentation time':
        gettimeofday(&fPresentationTime, NULL);
    }

    if (!m_queue.empty()) {
        m_queue.wait_and_pop(nalToDeliver);

        uint8_t* newFrameDataStart = (uint8_t*)(nalToDeliver.p_payload);
        unsigned newFrameSize = nalToDeliver.i_payload;

        // Deliver the data here:
        if (newFrameSize > fMaxSize) {
            fFrameSize = fMaxSize;
            fNumTruncatedBytes = newFrameSize - fMaxSize;
        } else {
            fFrameSize = newFrameSize;
        }

        /* copy only fFrameSize bytes so a truncated NAL cannot overrun fTo
           (the original copied the full payload even when truncating) */
        memcpy(fTo, newFrameDataStart, fFrameSize);
        FramedSource::afterGetting(this);
    }
}
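One caveat about the AddToBuffer code above (a hedged observation based on the documented x264 API, not a confirmed diagnosis of the question's failure): x264 owns the memory behind nals[i].p_payload, and it is only valid until the next call to x264_encoder_encode or x264_encoder_headers. Pushing raw x264_nal_t structs into m_queue for another thread to consume later can therefore read freed or reused memory. A minimal sketch of a deep copy before queueing; QueuedNal is a hypothetical replacement element type for m_queue, introduced here for illustration:

#include <cstdint>
#include <vector>

struct QueuedNal {
    std::vector<uint8_t> payload;   // deep copy of the NAL bytes
};

// inside AddToBuffer(), replacing m_queue.push(nals[i]):
for (int i = 0; i < i_nals; ++i) {
    QueuedNal qn;
    qn.payload.assign(nals[i].p_payload,
                      nals[i].p_payload + nals[i].i_payload);
    m_queue.push(std::move(qn));
}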
Relevant part of the RTSP server thread:
RTSPServer* rtspServer = RTSPServer::createNew(*(parent->env), 8554, NULL);
if (rtspServer == NULL) {
    *(parent->env) << "Failed to create RTSP server: " << (parent->env)->getResultMsg() << "\n";
    exit(1);
}

char const* streamName = "Stream";
parent->h264FramedSource = H264FramedSource::createNew(*(parent->env), 0, 0);

H264VideoStreamServerMediaSubsession *h264VideoStreamServerMediaSubsession =
    H264VideoStreamServerMediaSubsession::createNew(*(parent->env), true);
h264VideoStreamServerMediaSubsession->parent = parent;

sms->addSubsession(h264VideoStreamServerMediaSubsession);
rtspServer->addServerMediaSession(sms);

parent->env->taskScheduler().doEventLoop(); // does not return
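Note that sms is used here without being created anywhere in the snippet. In live555's testOnDemandRTSPServer it comes from ServerMediaSession::createNew; a hedged reconstruction of the missing line, using only names already present in the snippet (the description string is invented):

// presumably created before the subsession is added (not shown in the question):
ServerMediaSession* sms = ServerMediaSession::createNew(*(parent->env), streamName, streamName,
                                                        "session streamed by the OpenGL RTSP server");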
Once a connection exists, the render loop calls
h264FramedSource->AddToBuffer(videoData, 1024*768*4);
-
x264 segmentation error: in ff_rgb24ToY_avx()
20 October 2017, by helloliu
On my Ubuntu 16.04 machine, I compiled ffmpeg, nasm, and x264 as follows:
cd ffmpeg/
./configure && make && make install
git clone http://git.videolan.org/git/x264.git
cd x264/
./configure && make && make install
Then I run it as follows:
./x264 --input-res=1080x1920 --input-csp=rgb --output ~/x264/raw_framews.264 ~/x264/raw_framews
Unfortunately, it fails with the following errors:
lavf [error]: could not open input file
avs [error]: failed to load avisynth
raw [info]: 1080x1920p 0:0 @ 25/1 fps (cfr)
resize [warning]: converting from rgb24 to yuv420p
x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
x264 [info]: profile High, level 4.0
[swscaler @ 0x3f31540] Warning: data is not aligned! This can lead to a speedloss

Thread 1 "x264encode" received signal SIGSEGV, Segmentation fault.
0x0000000000534777 in ff_rgb24ToY_avx ()
Can anyone help us with this error? Many thanks.
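Since the crash happens inside swscale's RGB-to-YUV path (ff_rgb24ToY_avx) right after the "data is not aligned" warning, one hedged workaround is to sidestep x264's internal colorspace conversion entirely: convert the raw RGB frames to I420 with ffmpeg first, then feed x264 data it does not need to resample. The flags below are standard ffmpeg and x264 CLI options; the file names are the ones from the question:

# convert the raw RGB frames to planar YUV 4:2:0 first
ffmpeg -f rawvideo -pix_fmt rgb24 -video_size 1080x1920 -i ~/x264/raw_framews \
       -f rawvideo -pix_fmt yuv420p ~/x264/raw_framews.yuv

# then encode with no colorspace conversion inside x264
./x264 --input-res=1080x1920 --input-csp=i420 --output ~/x264/raw_framews.264 ~/x264/raw_framews.yuv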