Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Python - OpenCV - VideoCapture
6 October 2016, by Sina Dabiri
I am using Python 2.7.12 and OpenCV 2.4.13. I am trying to get video frames using cv2.VideoCapture(), but it doesn't work. I already added C:\OpenCV\3rdparty\sources\ffmpeg to the Windows PATH environment variable and copied all the files from C:\OpenCV\3rdparty\sources\ffmpeg to C:\Python27\, renaming opencv_ffmpeg.dll and opencv_ffmpeg_64.dll to opencv_ffmpeg2413.dll and opencv_ffmpeg2413_64.dll respectively. But it still doesn't work. Here is my code:
cap = cv2.VideoCapture("Video1.mp4")
cap.open("Video1.mp4")
print cap.isOpened()
It prints False. I also tried the full path instead of Video1.mp4, but it still doesn't work. Any suggestion is appreciated.
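For reference, OpenCV 2.4 on Windows loads an FFmpeg wrapper DLL whose filename encodes the OpenCV version digits and, on 64-bit Python, a "_64" suffix. A small hypothetical helper sketches the expected name, so you can check that the renamed DLL matches your exact build:

```python
def expected_ffmpeg_dll(version, is_64bit):
    # OpenCV 2.4.x looks for opencv_ffmpeg<digits>.dll next to python.exe,
    # e.g. opencv_ffmpeg2413_64.dll for OpenCV 2.4.13 on 64-bit Python.
    digits = version.replace(".", "")
    return "opencv_ffmpeg" + digits + ("_64" if is_64bit else "") + ".dll"

print(expected_ffmpeg_dll("2.4.13", True))   # opencv_ffmpeg2413_64.dll
print(expected_ffmpeg_dll("2.4.13", False))  # opencv_ffmpeg2413.dll
```

If the name is right and isOpened() is still False, the usual remaining suspects are a bitness mismatch (32-bit DLL with 64-bit Python, or vice versa) or a relative path that doesn't resolve from the script's working directory.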
-
Animated overlay for videos on iOS
6 October 2016, by SharpAffair
What's the common modern standard for animated video overlays? (e.g. if you want to add an animated logo to video recorded from the camera)
During research, I've found the following options:
GIF - seems to be a pretty outdated technology
FLV - supports an alpha channel, but is no longer supported by Adobe. Requires FFmpeg.
PNG sequence - the downside is having a separate file for each frame.
What's the right format/technology to use?
Ideally, what is natively supported on iOS (doesn't require FFMPEG)?
-
Set presentation timestamps on sample buffers before giving them to AVSampleBufferDisplayLayer
6 October 2016, by Moustik
I am trying to decode and render an H264 video network stream using AVSampleBufferDisplayLayer on iOS 10.
I am getting the frame packets using ffmpeg, then converting the NALUs to AVCC format and creating sample buffers, which I pass to the AVSampleBufferDisplayLayer for rendering. The stream displays well when kCMSampleAttachmentKey_DisplayImmediately is set to kCFBooleanTrue. However, when I try to use a controlTimebase to define the presentation timestamps, the display stalls.
Any idea or help with the handling of presentation timestamps?
AVPacket packet;
av_read_frame(_formatCtx, &packet);
// ...
// Parse NALUs and create blockbuffer
// ...
AVStream *st = _formatCtx->streams[_videoStream];
CMSampleTimingInfo *timing = malloc(sizeof(CMSampleTimingInfo));
timing->presentationTimeStamp = CMTimeMake(packet.pts, st->time_base.den);
timing->duration = CMTimeMake(packet.duration, st->time_base.den);
timing->decodeTimeStamp = CMTimeMake(packet.dts, st->time_base.den);
const size_t sampleSize = blockLength;
_status = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, true,
                               NULL, NULL, _formatDescriptionRef,
                               1, 1, timing, 1, &sampleSize, &sampleBuffer);
[self.renderer enqueueSampleBuffer:sampleBuffer];
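One thing worth double-checking in code like the above: CMTimeMake(packet.pts, st->time_base.den) treats the timestamp as if the stream's time_base numerator were 1. In FFmpeg, a timestamp is in units of time_base.num/time_base.den seconds, so using only the denominator as the timescale is correct only when num == 1. A small Python sketch of the conversion (illustrative names, not the poster's code):

```python
from fractions import Fraction

def pts_to_seconds(pts, time_base_num, time_base_den):
    # FFmpeg timestamps are counted in units of the stream time_base,
    # so seconds = pts * num / den. CMTimeMake(pts, den) matches this
    # only in the common case where num == 1 (e.g. a 1/90000 time base).
    return float(pts * Fraction(time_base_num, time_base_den))

print(pts_to_seconds(3000, 1, 90000))    # 1/30 s with a 1/90000 time base
print(pts_to_seconds(2, 1001, 30000))    # non-trivial numerator case
```

If the numerator can differ from 1 for a given stream, rescaling pts/dts into a fixed timescale before building the CMSampleTimingInfo avoids timestamps the timebase never reaches, which is one plausible cause of a stalled display.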
-
Android cropping and decreasing video size
6 October 2016, by hellsayenci
We are going to start an app that captures videos or picks videos from the gallery. The videos will be exactly 30 seconds long, so users can crop or trim videos like in a video editor.
At this point, we have 2 main problems. The first one is video size. You know the video size could be huge for uploading :) So we need to decrease the video size. The other problem is that when an iOS device captures and uploads a video to the server, there are some problems when trying to play that video in the Android app.
We have used the ffmpeg library before for another project, but that library has various problems, like slow compression and the size of the library's .so files. It also has compatibility issues with SDK 24 (Nougat).
Does anyone have experience with this? Or any ideas to overcome those problems? Thank you all.
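This doesn't address the .so size concern, but for reference, a typical size-reducing transcode that also normalizes iOS-recorded clips for Android playback uses H.264 video and AAC audio. A hypothetical helper that builds the command (the ffmpeg flags themselves are real) might look like:

```python
import subprocess

def compress_cmd(src, dst, height=720, crf=28):
    # Downscale to the given height (width computed automatically and kept
    # even, as required by libx264), re-encode as H.264 + AAC, and move the
    # moov atom to the front so playback can start before the full download.
    return ["ffmpeg", "-i", src,
            "-vf", "scale=-2:%d" % height,
            "-c:v", "libx264", "-crf", str(crf), "-preset", "fast",
            "-c:a", "aac",
            "-movflags", "+faststart",
            dst]

# Usage (requires an ffmpeg binary on the device or PATH):
# subprocess.check_call(compress_cmd("input.mp4", "small.mp4"))
```

Raising the CRF value (e.g. toward 30) trades quality for smaller files; H.264 Baseline/Main profiles are the safest common denominator across older Android devices.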
-
how to recognize video codec of a file with ffmpeg
6 October 2016, by GerryMulligan
I often have problems reading AVI files with my TV DVD player if they are not DivX or Xvid (DX50, for instance, is not readable).
I'd like to make a quick script to recognize the video codec of these files before burning them to CD-ROM/DVD.
The command :
ffmpeg -i file.avi
gives the "container" of the video stream (mpeg4, mpeg2, etc.), not the codec.
Any hint?
Thanks
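For script use, ffprobe (shipped alongside ffmpeg) can print just the codec fields of the first video stream; -show_entries and -select_streams are real ffprobe options. A sketch with a hypothetical parser around it:

```python
import json
import subprocess

def ffprobe_cmd(path):
    # Ask ffprobe for the first video stream's codec name (e.g. "mpeg4")
    # and its FourCC tag as stored in the AVI (e.g. "DX50" or "XVID"),
    # emitted as JSON with no other chatter.
    return ["ffprobe", "-v", "error",
            "-select_streams", "v:0",
            "-show_entries", "stream=codec_name,codec_tag_string",
            "-of", "json", path]

def parse_codec(probe_json):
    # Pull (codec_name, fourcc) out of ffprobe's JSON output.
    stream = json.loads(probe_json)["streams"][0]
    return stream["codec_name"], stream["codec_tag_string"]

# Usage (requires ffprobe on PATH):
# name, fourcc = parse_codec(subprocess.check_output(ffprobe_cmd("file.avi")))
```

The FourCC (codec_tag_string) is what a DVD player's compatibility list usually refers to, so checking it against "DX50"/"XVID" before burning should catch the unplayable files.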