Newest 'ffmpeg' Questions - Stack Overflow
-
ffmpeg MP4 shows blurred frames after seeking in the video in Internet Explorer
21 February 2014, by Larzzz
I have a question regarding JWPlayer 5, IE and video encoding. Basically, I use a Wowza server to stream my files. This works on all devices (Windows, iOS and Android). However, when I play it in Internet Explorer, it behaves strangely. It plays fine, and the entire movie looks good without any issues. But when I seek in the video, it shows a line in the center of my video, as if the previous frame is still showing some part, and it refreshes after a few frames move in the video. The frames themselves are not broken: if I just play the video without seeking, it all looks good. This does not happen in Chrome, Firefox or Safari, nor does it happen on Android and iOS. I've tested this with JW 6 as well, and it shows the same results for IE.
Although it displays fine in other browsers, I still believe it's an encoding issue, as other videos do not show this behavior.
Example viewable here : http://www.mobileevolution.be/standardcode-withsmil.html
The FFmpeg command I use to convert any file (.avi in this case) to an MP4:
"ffmpeg.exe" -i "%1" -vcodec libx264 -strict experimental -c:a aac -profile:v baseline -level 3 -movflags faststart -bufsize 1000k -tune zerolatency -b:v 1000k -minrate 600k -maxrate 1500k "%5%71000k.mp4"
The %1, %5 and %7 are variables I pass in from a script.
I have tried various options but could not figure out what the problem is. I have also tried converting with HandBrake, but this shows similar results.
My questions are: Has anyone seen this before? Does anyone know a solution? What's wrong with my FFmpeg settings?
Thanks for any help, Grts
EDIT: pictures: http://www.mobileevolution.be/foto1.jpg and http://www.mobileevolution.be/foto2.jpg; console output: http://www.mobileevolution.be/consoleoutput.txt
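An editorial note with a hedged variant to try, not a confirmed fix: tearing that appears only after seeking is often GOP-related, since the decoder resumes from the nearest keyframe. A variant of the command above that forces a regular keyframe spacing and an explicit pixel format, with all other settings kept as in the question:

"ffmpeg.exe" -i "%1" -vcodec libx264 -strict experimental -c:a aac -profile:v baseline -level 3 -movflags faststart -bufsize 1000k -tune zerolatency -b:v 1000k -minrate 600k -maxrate 1500k -pix_fmt yuv420p -g 50 -keyint_min 25 -sc_threshold 0 "%5%71000k.mp4"

Here -g caps the GOP length, -keyint_min together with -sc_threshold 0 makes the keyframe spacing regular, and -pix_fmt yuv420p avoids chroma formats some decoders handle poorly.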
-
Qt 5.2 / OpenCV 2.4.8 - Can’t open video files via VideoCapture
21 February 2014, by Zamahra
I have a big problem that I can't solve by myself. OpenCV itself works fine, but I'm not able to load videos. Here's my code:
The .pro file:
QT += core gui
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets

TARGET = videoredux
TEMPLATE = app

INCLUDEPATH += C:/OpenCV/opencv_bin/install/include

LIBS += -LC:\\OpenCV\\opencv_bin\\bin \
    libopencv_core248d \
    libopencv_highgui248d \
    libopencv_imgproc248d \
    libopencv_features2d248d \
    libopencv_calib3d248d \
    libopencv_video248d

SOURCES += main.cpp \
    mainwindow.cpp

HEADERS += mainwindow.h

FORMS += mainwindow.ui
and the MainWindow Class:
#include "mainwindow.h" #include "ui_mainwindow.h" #include
#include #include #include core/core.hpp> #include highgui/highgui.hpp> #include imgproc/imgproc.hpp> #include cv.h> MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow) { ui->setupUi(this); ui->videoStatusLabel->setText("Kein Video geladen."); // SIGNALS & SLOTS QObject::connect(ui->chooseVideoButton,SIGNAL(clicked()), this,SLOT(chooseVideo())); QObject::connect(ui->startButton,SIGNAL(clicked()), this,SLOT(startProcess())); } void MainWindow::chooseVideo(){ QString fileName = QFileDialog::getOpenFileName(this, tr("Open Video"), "/home", tr("Video Files (*.avi *.mp4 *.mpeg *.mpg)")); qDebug() << "Path:" << fileName; ui->videoStatusLabel->setText(fileName); } void MainWindow::startProcess(){ QString videoPath = ui->videoStatusLabel->text(); QFileInfo video(videoPath); if(video.exists()){ const std::string path = videoPath.toUtf8().constData(); cv::VideoCapture capture(path); cv::Mat frame; if(!capture.isOpened()){ qDebug() << "Error, video not loaded"; } cv::namedWindow("window",1); while(true) { bool success = capture.read(frame); if(success == false){ break; } cv::imshow("window",frame); cv::waitKey(20); } cv::waitKey(0); } else{ qDebug() << "Error, File doesn't exist"; } } The paths are correct, I tried many different video formats but he never loads the videos. I’m running Qt on a Windows 8 machine and i have “K-Lite Codec Pack 10.2.0 Basic” and ffmpeg installed. The videos are playing properly with my video players. I also tried to copy the .dll to the working directory, searched for opencv dll's in the system32 directory and rebuild OpenCV with mingw on this computer. I know that many people have the same problems, but none of their suggestions solved it. Does anyone know how to solve this problem?
Thank you very much!
Nadine
---- UPDATE ----
I still can't open video files, so I developed the application on a Windows 7 64-bit system. It worked fine there, but when I try to run the application on a Windows 8 computer it still can't open the file. It doesn't matter which codecs are installed: it generally runs on every Windows 7 computer and fails on every Windows 8 computer. The same holds for older OpenCV versions. Is there a general problem with OpenCV and Windows 8?
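An editorial note with a hedged diagnostic sketch: on Windows, OpenCV 2.4's VideoCapture reads video through a separate opencv_ffmpeg248.dll (opencv_ffmpeg248_64.dll for 64-bit builds), which must be found next to the executable or on the PATH; installed codec packs do not affect it. A minimal check, using a hypothetical test path:

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Print the build configuration; look for "FFMPEG: YES" in the
    // "Video I/O" section to confirm the backend was compiled in.
    std::cout << cv::getBuildInformation() << std::endl;

    // Hypothetical test file, not from the original post.
    cv::VideoCapture capture("C:/videos/test.avi");
    if (!capture.isOpened()) {
        std::cerr << "VideoCapture could not open the file" << std::endl;
        return 1;
    }
    return 0;
}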
-
Error using FFmpeg to convert each input image into H.264, compiled in Visual Studio and running in MevisLab
21 February 2014, by user3012914
I am creating an ML module in the MevisLab framework. I am using FFmpeg to convert each image I get into an H.264 video and save it after I have all the frames. Unfortunately, I have a problem allocating the output buffer size: the application crashes when I include that code. If I don't include it, the output file is just 4 KB and nothing is stored in it.
I am also not very sure whether this is the correct way of getting the HBITMAP into the encoder. It would be great to have your suggestions.
My Code:
BITMAPINFO bitmapInfo;
HDC hdc;
ZeroMemory(&bitmapInfo, sizeof(bitmapInfo));
BITMAPINFOHEADER &bitmapInfoHeader = bitmapInfo.bmiHeader;
bitmapInfoHeader.biSize = sizeof(bitmapInfoHeader);
bitmapInfoHeader.biWidth = _imgWidth;
bitmapInfoHeader.biHeight = _imgHeight;
bitmapInfoHeader.biPlanes = 1;
bitmapInfoHeader.biBitCount = 24;
bitmapInfoHeader.biCompression = BI_RGB;
// row stride rounded up to a multiple of 4 bytes
bitmapInfoHeader.biSizeImage = ((bitmapInfoHeader.biWidth * bitmapInfoHeader.biBitCount / 8 + 3) & 0xFFFFFFFC) * bitmapInfoHeader.biHeight;
bitmapInfoHeader.biXPelsPerMeter = 10000;
bitmapInfoHeader.biYPelsPerMeter = 10000;
bitmapInfoHeader.biClrUsed = 0;
bitmapInfoHeader.biClrImportant = 0;

//RGBQUAD* Ref = new RGBQUAD[_imgWidth,_imgHeight];
HDC hdcscreen = GetDC(0);
hdc = CreateCompatibleDC(hdcscreen);
ReleaseDC(0, hdcscreen);

_hbitmap = CreateDIBSection(hdc, (BITMAPINFO*)&bitmapInfoHeader, DIB_RGB_COLORS, &_bits, NULL, NULL);
I use the code above to get the bitmap. Then I allocate the codec context as follows:
c->bit_rate = 400000;
// resolution must be a multiple of two
c->width = 1920;
c->height = 1080;
// frames per second
frame_rate = _framesPerSecondFld->getIntValue();
//AVRational rational = {1,10};
//c->time_base = (AVRational){1,25};
c->gop_size = 10;                  // emit one intra frame every ten frames
c->max_b_frames = 1;
c->keyint_min = 1;                 // minimum GOP size
c->time_base.num = 1;              // framerate numerator
c->time_base.den = _framesPerSecondFld->getIntValue();   // framerate denominator
c->i_quant_factor = (float)0.71;   // qscale factor between P and I frames
c->pix_fmt = AV_PIX_FMT_RGB32;

std::string msg;
msg.append("Context is stored");
_messageFld->setStringValue(msg.c_str());
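A hedged editorial aside on this block: the pipeline further down converts every frame to YUV420P with sws_scale, yet the context here advertises AV_PIX_FMT_RGB32. The libx264 encoder does not accept RGB32 (only the separate libx264rgb wrapper does), so the format the context is opened with should match what the frames actually contain. A minimal sketch of a matching setup, reusing the field names from the question (codec stands for the AVCodec returned by avcodec_find_encoder):

// Sketch under the assumption that the encoder is libx264 and the frames
// are converted to YUV420P below, so the context must advertise the same.
c->pix_fmt = AV_PIX_FMT_YUV420P;   // match the sws_scale output
if (avcodec_open2(c, codec, NULL) < 0) {
    _messageFld->setStringValue("Could not open codec");
    return;
}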
I create the bitmap image from the input as follows:
PagedImage *inImg = getUpdatedInputImage(0);
ML_CHECK(inImg);
ImageVector imgExt = inImg->getImageExtent();

if ((imgExt.x == _imgWidth) && (imgExt.y == _imgHeight)) {
    if (((imgExt.x % 4) == 0) && ((imgExt.y % 4) == 0)) {
        // read out input image and write output image into video:
        // get input image as an array
        void* imgData = NULL;
        SubImageBox imageBox(imgExt);   // get the whole image
        getTile(inImg, imageBox, MLuint8Type, &imgData);
        iData = (MLuint8*)imgData;
        int r = 0;
        int g = 0;
        int b = 0;
        // since we have only images with a z-ext of 1,
        // we can compute the c stride as follows
        int cStride = _imgWidth * _imgHeight;
        int offset = 0;   // byte offset of the current row
        // pointer into the bitmap that is used to write images into the avi
        UCHAR* dst = (UCHAR*)_bits;
        for (int y = _imgHeight - 1; y >= 0; y--) {
            // reversely scan the image. if y-rows of DIB are set in normal order,
            // no compression will be available.
            offset = _imgWidth * y;
            for (int x = 0; x < _imgWidth; x++) {
                if (_isGreyValueImage) {
                    r = iData[offset + x];
                    *dst++ = (UCHAR)r;
                    *dst++ = (UCHAR)r;
                    *dst++ = (UCHAR)r;
                } else {
                    // windows bitmap needs reverse order: bgr instead of rgb
                    b = iData[offset + x];
                    g = iData[offset + x + cStride];
                    r = iData[offset + x + cStride + cStride];
                    *dst++ = (UCHAR)r;
                    *dst++ = (UCHAR)g;
                    *dst++ = (UCHAR)b;
                }
                // alpha channel in input image is ignored
            }
        }
Then I feed it to the encoder and write it out as H.264:
in_width  = c->width;
in_height = c->height;
out_width  = c->width;
out_height = c->height;

ibytes = avpicture_get_size(PIX_FMT_BGR32, in_width, in_height);
obytes = avpicture_get_size(PIX_FMT_YUV420P, out_width, out_height);

outbuf_size = 100000 + c->width * c->height * (32 >> 3);

// allocate output buffer
outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
if (!obytes) {
    std::string msg;
    msg.append("Bytes cannot be allocated");
    _messageFld->setStringValue(msg.c_str());
} else {
    std::string msg;
    msg.append("Bytes allocation done");
    _messageFld->setStringValue(msg.c_str());
}

// create buffers for the input and output images
inbuffer  = (uint8_t*)av_malloc(ibytes);
outbuffer = (uint8_t*)av_malloc(obytes);
inbuffer  = (uint8_t*)dst;   // note: this overwrites (and leaks) the buffer allocated just above

// create ffmpeg frame structures. These do not allocate space for image data,
// just the pointers and other information about the image.
AVFrame* inpic  = avcodec_alloc_frame();
AVFrame* outpic = avcodec_alloc_frame();

// this will set the pointers in the frame structures to the right points in
// the input and output buffers.
avpicture_fill((AVPicture*)inpic,  inbuffer,  PIX_FMT_BGR32,   in_width,  in_height);
avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, out_width, out_height);
av_image_alloc(outpic->data, outpic->linesize, c->width, c->height, c->pix_fmt, 1);

inpic->data[0] += inpic->linesize[0] * (_imgHeight - 1);   // flipping frame
inpic->linesize[0] = -inpic->linesize[0];

if (!inpic) {
    std::string msg;
    msg.append("Image is empty");
    _messageFld->setStringValue(msg.c_str());
} else {
    std::string msg;
    msg.append("Picture has allocations");
    _messageFld->setStringValue(msg.c_str());
}

// create the conversion context
fooContext = sws_getContext(in_width, in_height, PIX_FMT_BGR32,
                            out_width, out_height, PIX_FMT_YUV420P,
                            SWS_FAST_BILINEAR, NULL, NULL, NULL);

// perform the conversion
sws_scale(fooContext, inpic->data, inpic->linesize, 0, in_height,
          outpic->data, outpic->linesize);

//out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
if (!out_size) {
    std::string msg;
    msg.append("Outsize is not valid");
    _messageFld->setStringValue(msg.c_str());
} else {
    std::string msg;
    msg.append("Outsize is valid");
    _messageFld->setStringValue(msg.c_str());
}

fwrite(outbuf, 1, out_size, f);
if (!fwrite) {   // note: this tests the address of fwrite, not the result of the call
    std::string msg;
    msg.append("Frames couldnt be written");
    _messageFld->setStringValue(msg.c_str());
} else {
    std::string msg;
    msg.append("Frames written to the file");
    _messageFld->setStringValue(msg.c_str());
}

// for (;out_size; i++)
// {
    out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);   // encode the delayed frames
    std::string msg;
    msg.append("Writing Frames");
    _messageFld->setStringValue(msg.c_str());
    _numFramesFld->setIntValue(_numFramesFld->getIntValue() + 1);
    fwrite(outbuf, 1, out_size, f);
// }

// add sequence end code to have a real mpeg file
outbuf[0] = 0x00;
outbuf[1] = 0x00;
outbuf[2] = 0x01;
outbuf[3] = 0xb7;
fwrite(outbuf, 1, 4, f);
}
Then I close and clean up the image buffer and the file:
ML_TRACE_IN("MovieCreator::_endRecording()")

if (_numFramesFld->getIntValue() == 0) {
    _messageFld->setStringValue("Empty movie, nothing saved.");
} else {
    _messageFld->setStringValue("Movie written to disk.");
    _numFramesFld->setIntValue(0);

    if (_hbitmap) {
        DeleteObject(_hbitmap);
    }

    if (c != NULL) {
        // freeing memory
        av_free(outbuffer);
        av_free(inpic);
        av_free(outpic);
        fclose(f);
        avcodec_close(c);
        free(outbuf);
        av_free(c);
    }
}
}
I think the main problem is here:
//out_size = avcodec_encode_video(c, outbuf,outbuf_size, outpic);
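For comparison, a minimal hedged sketch (editorial, not from the original post) of this encode step using avcodec_encode_video2, the packet-based API available in the same FFmpeg generation this code targets; it assumes the context c was opened for AV_PIX_FMT_YUV420P as discussed above:

extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdio>

// Returns -1 on error, 0 if the encoder produced no packet, 1 if a packet
// was written to f. Pass yuvFrame == NULL to flush the delayed frames.
static int encodeFrame(AVCodecContext* c, AVFrame* yuvFrame, FILE* f)
{
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = NULL;   // let the encoder allocate the packet buffer
    pkt.size = 0;

    int gotPacket = 0;
    if (avcodec_encode_video2(c, &pkt, yuvFrame, &gotPacket) < 0) {
        return -1;
    }
    if (!gotPacket) {
        return 0;
    }
    fwrite(pkt.data, 1, pkt.size, f);
    av_free_packet(&pkt);
    return 1;
}

At end of stream, a loop such as while (encodeFrame(c, NULL, f) == 1) {} drains the delayed frames, which replaces the manual outbuf bookkeeping above.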
-
Some FFmpeg library commands not working on Android
21 February 2014, by Saurabh Prajapati
I need the following two commands to work on the Android platform. I found many articles on this site where people report that these commands work fine for them, but they are not working at my end.
For a fading effect: "ffmpeg -i filename1 fade=in:5:8 output.mp4"
For concatenating video files: "ffmpeg -i concat: filename1|filename2 -codec copy output.mp4"
Error: the app throws errors like unknown command "concate" and "fad-in5:8".
My goal: I need to concatenate two MP4 video files on Android with fade-in/fade-out effects, as sketched in the note below.
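An editorial note on the two commands themselves, hedged: in ffmpeg a filter such as fade must be passed as the argument of -vf rather than as a bare parameter, and the concat: protocol has to be glued to its input list with no space after the colon. The concat: protocol also only handles concatenable bitstreams such as MPEG-TS; plain MP4 files generally need the concat demuxer or a re-encode instead. The corrected shapes would be roughly:

ffmpeg -i filename1 -vf "fade=in:5:8" output.mp4
ffmpeg -i "concat:filename1|filename2" -codec copy output.mp4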
Following is my code:
public class VideoTest extends Activity {
public static final String LOGTAG = "MJPEG_FFMPEG";

byte[] previewCallbackBuffer;
boolean recording = false;
boolean previewRunning = false;
File jpegFile;
int fileCount = 0;
FileOutputStream fos;
BufferedOutputStream bos;
Button recordButton;
Camera.Parameters p;
NumberFormat fileCountFormatter = new DecimalFormat("00000");
String formattedFileCount;
ProcessVideo processVideo;

String[] libraryAssets = {
    "ffmpeg", "ffmpeg.so",
    "libavcodec.so", "libavcodec.so.52", "libavcodec.so.52.99.1",
    "libavcore.so", "libavcore.so.0", "libavcore.so.0.16.0",
    "libavdevice.so", "libavdevice.so.52", "libavdevice.so.52.2.2",
    "libavfilter.so", "libavfilter.so.1", "libavfilter.so.1.69.0",
    "libavformat.so", "libavformat.so.52", "libavformat.so.52.88.0",
    "libavutil.so", "libavutil.so.50", "libavutil.so.50.34.0",
    "libswscale.so", "libswscale.so.0", "libswscale.so.0.12.0"
};

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    // copy the ffmpeg binary and its libraries out of the assets
    for (int i = 0; i < libraryAssets.length; i++) {
        try {
            InputStream ffmpegInputStream = this.getAssets().open(libraryAssets[i]);
            FileMover fm = new FileMover(ffmpegInputStream,
                    "/data/data/com.mobvcasting.mjpegffmpeg/" + libraryAssets[i]);
            fm.moveIt();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // make the ffmpeg binary executable
    Process process = null;
    try {
        String[] args = {"/system/bin/chmod", "755",
                "/data/data/com.mobvcasting.mjpegffmpeg/ffmpeg"};
        process = new ProcessBuilder(args).start();
        try {
            process.waitFor();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        process.destroy();
    } catch (IOException e) {
        e.printStackTrace();
    }

    File savePath = new File(Environment.getExternalStorageDirectory().getPath()
            + "/com.mobvcasting.mjpegffmpeg/");
    savePath.mkdirs();

    requestWindowFeature(Window.FEATURE_NO_TITLE);
    getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
            WindowManager.LayoutParams.FLAG_FULLSCREEN);
    setContentView(R.layout.main);

    processVideo = new ProcessVideo();
    processVideo.execute();
}

@Override
public void onConfigurationChanged(Configuration conf) {
    super.onConfigurationChanged(conf);
}

private class ProcessVideo extends AsyncTask<Void, Void, Void> {

    @Override
    protected Void doInBackground(Void... params) {
        Log.d("test", "VideoTest doInBackground Start");

        /*String videofile = Environment.getExternalStorageDirectory().getPath()
                + "/com.mobvcasting.mjpegffmpeg/splitter.mp4";
        File file = new File(videofile);
        if (file.exists())
            file.delete();
        file = null;*/

        Process ffmpegProcess = null;
        try {
            String filename1 = Environment.getExternalStorageDirectory().getPath()
                    + "/com.mobvcasting.mjpegffmpeg/test.mp4";
            String filename2 = Environment.getExternalStorageDirectory().getPath()
                    + "/com.mobvcasting.mjpegffmpeg/splitter.mp4";
            String StartPath = Environment.getExternalStorageDirectory().getPath()
                    + "/com.mobvcasting.mjpegffmpeg/";

            String[] ffmpegCommand = {"/data/data/com.mobvcasting.mjpegffmpeg/ffmpeg",
                    "-i", "concat:\"" + filename1 + "|" + filename2 + "\"",
                    "-codec", "copy",
                    Environment.getExternalStorageDirectory().getPath()
                            + "/com.mobvcasting.mjpegffmpeg/output.mp4"};
            //String[] ffmpegCommand = {"/data/data/com.mobvcasting.mjpegffmpeg/ffmpeg",
            //        "-i", filename1, "fade=in:5:8",
            //        Environment.getExternalStorageDirectory().getPath()
            //                + "/com.mobvcasting.mjpegffmpeg/output.mp4"};

            ffmpegProcess = new ProcessBuilder(ffmpegCommand)
                    .redirectErrorStream(true).start();

            OutputStream ffmpegOutStream = ffmpegProcess.getOutputStream();
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(ffmpegProcess.getInputStream()));
            String line;
            Log.d("test", "***Starting FFMPEG***");
            while ((line = reader.readLine()) != null) {
                Log.d("test", "***" + line + "***");
            }
            Log.d("test", "***Ending FFMPEG***");
        } catch (IOException e) {
            e.printStackTrace();
        }

        if (ffmpegProcess != null) {
            ffmpegProcess.destroy();
        }

        Log.d("test", "doInBackground End");
        return null;
    }

    protected void onPostExecute(Void... result) {
        Log.d("test", "onPostExecute");
        Toast toast = Toast.makeText(VideoTest.this,
                "Done Processing Video", Toast.LENGTH_LONG);
        toast.show();
    }
}
}
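Since the command arrays above are where the syntax goes wrong, here is a hedged editorial sketch of corrected argument arrays (ffmpegBin is the binary path from the question, filename1 and filename2 are the paths built in doInBackground(), and outputPath is a hypothetical stand-in for the output file). With ProcessBuilder each array element reaches the process verbatim, so shell-style quote characters such as those around the concat: input end up inside the filename:

String ffmpegBin = "/data/data/com.mobvcasting.mjpegffmpeg/ffmpeg";

// fade: the filter string is the argument that follows "-vf"
String[] fadeCommand = {ffmpegBin, "-i", filename1,
        "-vf", "fade=in:5:8", outputPath};

// concat: one argument, no embedded quotes or spaces; the concat: protocol
// itself only works on concatenable bitstreams such as MPEG-TS
String[] concatCommand = {ffmpegBin,
        "-i", "concat:" + filename1 + "|" + filename2,
        "-codec", "copy", outputPath};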
Just for your information, I copied the source from the following library:
https://github.com/pvskalyan/Android-MJPEG-Video-Capture-FFMPEG?source=c
-
FFMPEG with x264 encoding
21 February 2014, by mmmaaak
I'm trying to encode video from a set of JPEG images to H.264, using ffmpeg + x264. I initialize the AVCodecContext this way:
_outputCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
_outputCodecContext = avcodec_alloc_context3(_outputCodec);
avcodec_get_context_defaults3(_outputCodecContext, _outputCodec);

_outputCodecContext->width = _currentWidth;
_outputCodecContext->height = _currentHeight;
_outputCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
_outputCodecContext->time_base.num = 1;
_outputCodecContext->time_base.den = 25;
_outputCodecContext->profile = FF_PROFILE_H264_BASELINE;
_outputCodecContext->level = 50;
avcodec_open returns no errors and everything seems OK, but when I call avcodec_encode_video2() I get messages like these (I think they come from x264):
using mv_range_thread = %d %s
profile %s, level %s
And then the app crashes. Maybe there are more necessary settings for the codec context when using x264?
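An editorial aside, hedged: unfilled format strings such as "%d %s" in x264's log output usually indicate that the logging call and the library disagree, which in practice often points to an ffmpeg build linked against a different x264 version than its headers expect; that is an assumption, not a diagnosis from the post. For reference, a minimal sketch of opening the same context with the x264 private options passed through avcodec_open2 (field names reuse the question's):

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/dict.h>
}

// Sketch: open the H.264 encoder with private options instead of setting
// profile/level numerically on the context.
bool openEncoder(AVCodecContext* ctx, AVCodec* codec)
{
    AVDictionary* opts = NULL;
    av_dict_set(&opts, "preset", "medium", 0);     // x264 speed/quality preset
    av_dict_set(&opts, "profile", "baseline", 0);  // replaces ctx->profile / ctx->level

    int err = avcodec_open2(ctx, codec, &opts);    // consumes recognized options
    av_dict_free(&opts);                           // free whatever was not consumed
    return err >= 0;                               // negative values are AVERROR codes
}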