Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
How to compress webcam videos recorded by the HTML5 MediaRecorder API?
25 May 2017, by JasonY
I successfully recorded my webcam using the MediaRecorder API, and the resulting file sizes seemed far too large for their quality.
For example, for an 8-second video at 480x640 I got a 1 MB file. That does not seem right.
My code to record:
    navigator.mediaDevices.getUserMedia({ video: true, audio: true })
        .then(function (stream) {
            var options = {
                mimeType: "video/webm;codecs=vp9"
                // I don't set a bitrate here; even if I do, the quality is too bad
            };
            var media_recorder = new MediaRecorder(stream, options);
            var recorded_data = [];
            media_recorder.ondataavailable = function (e) {
                recorded_data.push(e.data);
            };
            media_recorder.onstop = function (e) {
                var recorded_blob = new Blob(recorded_data, { type: "video/webm; codecs=vp9" });
                var recorded_video_url = window.URL.createObjectURL(recorded_blob);
                // here I write some code to download the blob from this URL through an a href
            };
        });
The file obtained by this method is unreasonably large, which makes me wonder whether it was even compressed when encoded as VP9. A 7-second video is about 870 kB!
Inspecting the file with the MediaInfo tool gives me:
General
Complete name : recorded_video.webm
Format : WebM (version 2)
Internet media type : video/webm
File size : 867 870 bytes (847.5 KiB)
File last modification date : UTC 2017-05-19 05:48:00
Writing application : Chrome
Writing library : Chrome
IsTruncated : Yes

Video
ID : 2
Format : VP9 (Codec ID : V_VP9)
Width : 640 pixels
Height : 480 pixels
Display aspect ratio : 4:3 (1.333)
Frame rate mode : Variable
Language : English
Default : Yes

Audio
ID : 1
Format : Opus (Codec ID : A_OPUS)
Channel(s) : 1 channel (Front: C)
Sampling rate : 48.0 kHz
Compression mode : Lossy
Delay : 718 ms (origin: container)
Language : English
Default : Yes
What did I do wrong? Do I have to re-encode it after the chunks get appended? Is there some attribute I'm missing? VP9 is supposed to reduce file sizes drastically.
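As a quick sanity check on the numbers above: file size is simply bitrate times duration, so it is worth computing what effective bitrate the recording actually used. A minimal Python sketch using the figures from the question (the 7-second duration is approximate):

```python
# Effective bitrate of a recording: size_in_bytes * 8 / duration_in_seconds.
def effective_bitrate_bps(size_bytes: int, duration_s: float) -> float:
    return size_bytes * 8 / duration_s

# 867,870 bytes over ~7 seconds:
bps = effective_bitrate_bps(867870, 7.0)
print(round(bps / 1000))  # roughly 992 kbps
```

That works out to roughly 1 Mbps, which suggests the size simply reflects the encoder's chosen bitrate rather than broken VP9 compression; MediaRecorder's options object also accepts an explicit videoBitsPerSecond value to lower it.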
-
Combining Video and Audio of different length in bulk
25 May 2017, by user2981223
I am going through a TV series right now and editing the files to my liking. I have one set which has the video I want and one set that has the audio. I have a batch file that takes the video from every file in folder "A" and the audio from every file in folder "B" and outputs the result to a folder named "output". But with this particular series, that is only half of what I need done.
At the end of every episode in the "B" folder there are some extra scenes. What I would like to do is take the audio and video from "A" and the audio from "B" and combine it all into one file, and also compare the timestamps of the "A" and "B" files and add the extra video from "B" to the output file.
Let me put it another way. Let's say "A" is 1080p with Japanese audio and is 20 minutes long. Let's say "B" is 720p with English audio and is 23 minutes long. I want the whole 1080p video with both audio tracks, plus the 720p video spliced onto the end. Both files start at the same spot so syncing isn't an issue. The issue is that the difference in time is different for every episode. So some episodes are 3 minutes longer, some only 30 seconds. Is there a way to make ffmpeg or another tool look at the difference in times and just add the excess to the output file?
Sorry for being long-winded. Thanks for any help and guidance.
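For the per-episode duration bookkeeping described above, one approach is to probe each pair of files with ffprobe and cut B's tail starting at A's length. A minimal Python sketch; the filenames, the stream-copy choice, and the surrounding merge step are assumptions:

```python
import subprocess

def duration_seconds(path: str) -> float:
    """Ask ffprobe for a file's container duration in seconds."""
    out = subprocess.check_output([
        "ffprobe", "-v", "error",
        "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1",
        path,
    ])
    return float(out.decode())

def tail_cut_cmd(b_path: str, start_s: float, out_path: str) -> list:
    """Build an ffmpeg command that keeps only B's extra tail, from start_s on."""
    return ["ffmpeg", "-ss", f"{start_s:.3f}", "-i", b_path, "-c", "copy", out_path]

# Per episode: extra = duration_seconds(b) - duration_seconds(a). If extra > 0,
# run tail_cut_cmd(b, duration_seconds(a), "tail.mkv") and concatenate that tail
# after the merged A-video + both-audio file (e.g. with ffmpeg's concat demuxer).
```

Because the difference is recomputed per episode, it does not matter whether the excess is 30 seconds or 3 minutes.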
-
Read dumped RTP stream in libav
25 May 2017, by Pawel K
Hi, I'm in need of a bit of help/guidance because I've gotten stuck in my research.
The problem:
How to convert RTP data using either GStreamer or avlib (ffmpeg), either through the API (programmatically) or the console tools.
Data
I have an RTP dump that comes from RTP/RTCP over TCP, so I can get the precise start and stop of each RTP packet in the file. It's an H.264 video stream dump. The data is in this form because I need to acquire the RTCP/RTP interleaved stream via libcurl (which I'm currently doing).
Status
I've tried to use ffmpeg to consume pure RTP packets, but it seems that using RTP, whether from the console or programmatically, involves "starting" the whole RTSP/RTP session business in ffmpeg. I've stopped there and for the time being haven't pursued this avenue further. I guess it would be possible with a lower-level RTP API like
ff_rtp_parse_packet()
but I'm too new to this library to do it straight away. Then there is GStreamer. It has somewhat more capability to do this without programming, but for the time being I'm not able to figure out how to pass it the RTP dump I have.
I have also tried a bit of trickery: streaming the dump via socat/nc to a UDP port and listening on it via ffplay with an SDP file as input. There seems to be some progress, and the RTP at least gets recognized, but with socat there are loads of missing packets (data sent too fast, perhaps?) and in the end the data is not visualized. When I used nc the video was badly misshapen, but at least there were not as many receive errors.
One way or another, the data is not properly visualized.
I know I can depacketize the data "by hand", but the idea is to do it via some kind of library, because in the end there would also be a second stream with audio that would have to be muxed together with the video.
I would appreciate any help on how to tackle this problem. Thanks.
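Since the dump already preserves packet boundaries, the "data sent too fast" problem seen with socat can be worked around with a small replayer that paces each packet onto a UDP port, which ffplay can then read via an SDP file (ffplay -protocol_whitelist file,udp,rtp -i stream.sdp). A Python sketch; the 2-byte big-endian length-prefix framing and the fixed inter-packet delay are assumptions about the dump's on-disk format:

```python
import socket
import struct
import time

def replay(dump_path: str, host: str = "127.0.0.1", port: int = 5004,
           interval_s: float = 0.01) -> int:
    """Send each length-prefixed RTP packet from the dump over UDP, paced."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    with open(dump_path, "rb") as f:
        while True:
            header = f.read(2)
            if len(header) < 2:
                break  # end of dump
            (length,) = struct.unpack(">H", header)
            packet = f.read(length)
            sock.sendto(packet, (host, port))
            sent += 1
            time.sleep(interval_s)  # pace instead of blasting like socat/nc
    return sent
```

A real replayer would derive the pacing from the RTP timestamps rather than a fixed delay, but even a constant small interval avoids flooding the receiver's socket buffer.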
-
update library ffmpeg version
25 May 2017, by NewUser
I'm currently using this library (writingminds) to run ffmpeg in my application. The problem is that its bundled ffmpeg version is out of date, and I am experiencing issues that have been fixed in a newer version of ffmpeg.
My question is: is it possible to update the ffmpeg version that the library is using, or am I better off compiling ffmpeg myself?
-
Issue in FFmpegAndroid library: when I compress a video, its duration becomes 1 or 2 seconds
25 May 2017, by Fateh Singh Saini
Used this dependency: compile 'com.writingminds:FFmpegAndroid:0.3.2'
I used the below code for video compression:

    public static final String VIDEOCODEC = "-vcodec";
    public static final String AUDIOCODEC = "-acodec";
    public static final String VIDEOBITSTREAMFILTER = "-vbsf";
    public static final String AUDIOBITSTREAMFILTER = "-absf";
    public static final String VERBOSITY = "-v";
    public static final String FILE_INPUT = "-i";
    public static final String SIZE = "-s";
    public static final String FRAMERATE = "-r";
    public static final String FORMAT = "-f";
    public static final String BITRATE_VIDEO = "-b:v";
    public static final String BITRATE_AUDIO = "-b:a";
    public static final String CHANNELS_AUDIO = "-ac";
    public static final String FREQ_AUDIO = "-ar";

    String[] complexCommand = {"-y", FILE_INPUT, yourRealPath, SIZE, "480x360",
            FRAMERATE, "25", VIDEOCODEC, "mpeg4", BITRATE_VIDEO, "150k",
            BITRATE_AUDIO, "48000", CHANNELS_AUDIO, "2", FREQ_AUDIO, "22050",
            filePath};

    /**
     * Executing the ffmpeg binary
     */
    private static String execFFmpegBinary(final String[] command) {
        try {
            ffmpeg.execute(command, new ExecuteBinaryResponseHandler() {
                @Override
                public void onFailure(String s) {
                    Log.d(TAG, "FAILED with output : " + s);
                }

                @Override
                public void onSuccess(String s) {
                    Log.d(TAG, "SUCCESS with output : " + s);
                }

                @Override
                public void onProgress(String s) {
                    Log.d(TAG, "progress : " + s);
                }

                @Override
                public void onStart() {
                    Log.d(TAG, "Started command : ffmpeg " + Arrays.toString(command));
                }

                @Override
                public void onFinish() {
                    Log.d(TAG, "Finished command : ffmpeg " + Arrays.toString(command));
                }
            });
        } catch (FFmpegCommandAlreadyRunningException e) {
            // do nothing for now
        }
        return filePath;
    }