Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
How to automatically cut video based on motionless scenes?
18 March 2018, by user1202136
I recently recorded a very long video (45 minutes) using the H.264 codec, with I-frames every 30 seconds.
I would like to cut this video into smaller parts separated by "quiet" periods, i.e., parts of the video where no motion is detected for 5 seconds. I would also prefer not to re-encode the video, but rather to cut around the I-frames.
Is there a tool to automatically achieve this?
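Newer ffmpeg builds ship a freezedetect filter that logs motionless spans, which could drive such a cut list. A hedged sketch (the filename, noise threshold, and cut times below are assumptions, and freezedetect is not present in older builds):

```shell
# Pass 1: log periods with no motion for at least 5 seconds.
# n is the per-pixel noise tolerance; tune it for your footage.
ffmpeg -i input.mp4 -vf "freezedetect=n=0.003:d=5" -map 0:v:0 -f null - 2>&1 \
  | grep freezedetect

# Pass 2: cut around the reported freeze_start/freeze_end times without
# re-encoding; -c copy snaps the actual cut to a nearby I-frame.
ffmpeg -ss 0 -to 754.2 -i input.mp4 -c copy part1.mp4
```

With I-frames only every 30 seconds, a stream-copied cut can land up to 30 seconds away from the detected quiet period, so re-encoding just the boundary segments may be worth considering.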
-
Ignore streams when finding stream info
17 March 2018, by CSNewman
I'm trying to speed up the start of ffmpeg when processing my live stream, and have narrowed the issue down to the 'avformat_find_stream_info' function. The source I'm trying to process has a number of streams whose type ffmpeg is unable to determine, so it spends a long time (the entire analysis window) trying to find information about them. I know ahead of time that I only want information about streams #0:0 and #0:1, and I will be disregarding the other streams anyway.
I managed to work around this when using the API by overriding the number of streams before the find-stream-info call and restoring the value afterwards:
    uint oldSize = inputContext->nb_streams;
    inputContext->nb_streams = 2;
    if (ffmpeg.avformat_find_stream_info(inputContext, null) < 0)
    {
        throw new InvalidOperationException("Could not read stream information.");
    }
    inputContext->nb_streams = oldSize;
However, I would prefer to use the CLI interface of ffmpeg.
My current command:
ffmpeg -find_stream_info false -i http://192.168.1.112:5004/auto/v1 -c copy -map 0:v:0 -map 0:a:0 -ignore_unknown -f hls -hls_flags delete_segments -segment_list playlist.m3u8 -segment_list_type hls -segment_list_size 10 -segment_list_flags +live -segment_time 10 -f segment stream%%05d.ts
This gives the following output (unneeded logging removed):
[mpeg2video @ 000001ce79423440] Invalid frame dimensions 0x0.
    Last message repeated 9 times
[mpegts @ 000001ce7941b740] Could not find codec parameters for stream 5 (Unknown: none ([11][0][0][0] / 0x000B)): unknown codec
Consider increasing the value for the 'analyzeduration' and 'probesize' options
[mpegts @ 000001ce7941b740] Could not find codec parameters for stream 6 (Unknown: none ([11][0][0][0] / 0x000B)): unknown codec
Consider increasing the value for the 'analyzeduration' and 'probesize' options
[mpegts @ 000001ce7941b740] Could not find codec parameters for stream 7 (Unknown: none ([5][0][0][0] / 0x0005)): unknown codec
Consider increasing the value for the 'analyzeduration' and 'probesize' options
[mpegts @ 000001ce7941b740] Could not find codec parameters for stream 8 (Unknown: none ([5][0][0][0] / 0x0005)): unknown codec
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #0, mpegts, from 'http://192.168.1.112:5004/auto/v1':
  Duration: N/A, start: 91296.182311, bitrate: N/A
  Program 4165
    Stream #0:0[0x65]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, top first), 704x576 [SAR 16:11 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:1[0x66](eng): Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, s16p, 256 kb/s
    Stream #0:2[0x6a](eng): Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, mono, s16p, 64 kb/s (visual impaired) (dependent)
    Stream #0:3[0x69](eng): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
    Stream #0:4[0x98]: Audio: mp2 ([6][0][0][0] / 0x0006), 48000 Hz, stereo, s16p, 128 kb/s
    Stream #0:5[0x1c21]: Unknown: none ([11][0][0][0] / 0x000B)
    Stream #0:6[0x1c33]: Unknown: none ([11][0][0][0] / 0x000B)
    Stream #0:7[0x1bbf]: Unknown: none ([5][0][0][0] / 0x0005)
    Stream #0:8[0x1bc1]: Unknown: none ([5][0][0][0] / 0x0005)
I'm unsure whether there's a way to achieve the same functionality with the CLI, but I'm open to changing how my setup works.
One idea I considered was running two ffmpeg instances: one to strip the unneeded streams (without finding the stream info), and another that takes the stripped stream and performs the rest of the processing.
Any insight here would be appreciated; thanks in advance.
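The two-instance idea above can be sketched as a single pipeline: the first instance skips probing and stream-copies only the two wanted streams into MPEG-TS on stdout, and the second instance (which then only ever sees two streams) does the segmenting. This is an untested sketch that reuses the options from the command above:

```shell
# Instance 1: no probing, copy only streams 0 and 1, emit MPEG-TS to stdout.
# Instance 2: read the stripped stream and segment it as before.
ffmpeg -find_stream_info false -i http://192.168.1.112:5004/auto/v1 \
    -map 0:0 -map 0:1 -c copy -f mpegts - \
| ffmpeg -i - -c copy -map 0:v:0 -map 0:a:0 \
    -f segment -segment_list playlist.m3u8 -segment_list_type hls \
    -segment_list_size 10 -segment_list_flags +live -segment_time 10 \
    stream%05d.ts
```

Note the single-percent `%05d` for a shell; the doubled `%%05d` form is only needed inside a Windows batch file.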
-
Piping PPM files to ffmpeg to create a movie in C++
17 March 2018, by chasep255
I want to create a movie of a zoom on the Mandelbrot set. To do this, I want to generate image data in the PPM format and pipe it into ffmpeg using popen. The following command works if I first save the PPM to disk and then run ffmpeg from the terminal.
ffmpeg -i out.ppm -r 1/5 out.mp4
Here is what I am trying to do in code:
    FILE* p = popen("ffmpeg -i /dev/stdin -r 1/5 out.mp4", "w");
    ppm_pipe(p, pix_buffers[0], w, h);
    fclose(p);
    ...
    void ppm_pipe(FILE* f, unsigned char* pix, int w, int h)
    {
        assert(fprintf(f, "P6 %d %d 255\n", w, h) > 0);
        size_t sz = 3 * (size_t)w * (size_t)h;
        assert(fwrite(pix, 1, sz, f) == sz);
    }
I get the following error message.
ffmpeg version 2.5.8-0ubuntu0.15.04.1 Copyright (c) 2000-2015 the FFmpeg developers
  built with gcc 4.9.2 (Ubuntu 4.9.2-10ubuntu13)
  configuration: --prefix=/usr --extra-version=0ubuntu0.15.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --shlibdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --enable-shared --disable-stripping --enable-avresample --enable-avisynth --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libshine --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libwavpack --enable-libwebp --enable-libxvid --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzvbi --enable-libzmq --enable-frei0r --enable-libvpx --enable-libx264 --enable-libsoxr --enable-gnutls --enable-openal --enable-libopencv --enable-librtmp --enable-libx265
  libavutil      54. 15.100 / 54. 15.100
  libavcodec     56. 13.100 / 56. 13.100
  libavformat    56. 15.102 / 56. 15.102
  libavdevice    56.  3.100 / 56.  3.100
  libavfilter     5.  2.103 /  5.  2.103
  libavresample   2.  1.  0 /  2.  1.  0
  libswscale      3.  1.101 /  3.  1.101
  libswresample   1.  1.100 /  1.  1.100
  libpostproc    53.  3.100 / 53.  3.100
/dev/stdin: Invalid data found when processing input
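A likely cause of the error above is that ffmpeg cannot probe the image format from a pipe, so naming the demuxer and decoder explicitly should avoid the probe entirely. A sketch of the corrected command (untested; note also that a stream opened with popen should be closed with pclose, not fclose):

```shell
# Read a stream of concatenated PPM images from stdin,
# one frame every 5 seconds, and encode them into an MP4.
ffmpeg -f image2pipe -vcodec ppm -r 1/5 -i - out.mp4
```

In the C++ code, this would mean changing the popen command string to "ffmpeg -f image2pipe -vcodec ppm -r 1/5 -i - out.mp4".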
-
JW Player fails with WMA files: Task Queue failed at step 5
17 March 2018, by Sabeena
I have a JW Player which plays MP3 files, but with WMA files it gives the error:
Task Queue failed at step 5: Playlist could not be loaded: Playlist file did not contain a valid playlist
I thought of two reasons:
- There is no support for WMA, but please confirm this.
- Somewhere I need to set up the type of file I am using in this player.
If WMA is not supported in JW Player, how can I play WMA and MP3 files on my website?
Is ffmpeg needed to convert WMA to MP3 while uploading?
-
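Regarding the WMA question above: if WMA playback is indeed unsupported in the player, transcoding to MP3 on upload with ffmpeg is a common route. A hedged sketch (the filenames and VBR quality level are assumptions, and it requires an ffmpeg built with libmp3lame):

```shell
# Transcode WMA to MP3 using LAME VBR quality 2 (roughly 190 kb/s).
ffmpeg -i input.wma -codec:a libmp3lame -q:a 2 output.mp3
```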
Adjust PTS and DTS before mp4 video creation
17 March 2018, by Cristiano
I'm retrieving raw H.264 compressed frames from a USB camera to create an MP4 video. This is my simple code:
    for (int i = 0; i < 120; i++) {
        AVPacket pkt;
        av_init_packet(&pkt);
        ret = av_read_frame(inputFormatCtx, &pkt);
        pkt.pts = pkt.dts = i;
        pkt.pts = av_rescale_q(pkt.pts, inputStream->time_base, outputStream->time_base);
        pkt.dts = av_rescale_q(pkt.dts, inputStream->time_base, outputStream->time_base);
        ret = av_interleaved_write_frame(outputFormatCtx, &pkt);
        av_packet_unref(&pkt);
    }
    ret = av_write_trailer(outputFormatCtx);
This works well. Now I would like to store these AVPackets and create the video at a later point. I changed my code in this way:
    for (int i = 0; i < 120; i++) {
        AVPacket pkt;
        av_init_packet(&pkt);
        ret = av_read_frame(inputFormatCtx, &pkt);
        packets.push_back(pkt);
    }

    vector<AVPacket>::reverse_iterator it;
    int j = 0;
    for (it = packets.rbegin(); it != packets.rend(); it++) {
        AVPacket n = (*it);
        n.pts = n.dts = j;
        j++;
        n.pts = av_rescale_q(n.pts, inputStream->time_base, outputStream->time_base);
        n.dts = av_rescale_q(n.dts, inputStream->time_base, outputStream->time_base);
        ret = av_interleaved_write_frame(outputFormatCtx, &n);
        av_packet_unref(&n);
    }
    ret = av_write_trailer(outputFormatCtx);
The resulting video is not smooth, so I used ffprobe to see more details. These are the first three frames generated with the first block of code:
[FRAME]
media_type=video stream_index=0 key_frame=1
pkt_pts=0 pkt_pts_time=0.000000 pkt_dts=0 pkt_dts_time=0.000000
best_effort_timestamp=0 best_effort_timestamp_time=0.000000
pkt_duration=512 pkt_duration_time=0.033333 pkt_pos=48 pkt_size=12974
width=1920 height=1080 pix_fmt=yuv420p sample_aspect_ratio=N/A
pict_type=I coded_picture_number=0 display_picture_number=0
interlaced_frame=0 top_field_first=0 repeat_pict=0
[/FRAME]
[FRAME]
media_type=video stream_index=0 key_frame=0
pkt_pts=512 pkt_pts_time=0.033333 pkt_dts=512 pkt_dts_time=0.033333
best_effort_timestamp=512 best_effort_timestamp_time=0.033333
pkt_duration=512 pkt_duration_time=0.033333 pkt_pos=13022 pkt_size=473
width=1920 height=1080 pix_fmt=yuv420p sample_aspect_ratio=N/A
pict_type=P coded_picture_number=1 display_picture_number=0
interlaced_frame=0 top_field_first=0 repeat_pict=0
[/FRAME]
[FRAME]
media_type=video stream_index=0 key_frame=0
pkt_pts=1024 pkt_pts_time=0.066667 pkt_dts=1024 pkt_dts_time=0.066667
best_effort_timestamp=1024 best_effort_timestamp_time=0.066667
pkt_duration=512 pkt_duration_time=0.033333 pkt_pos=13495 pkt_size=511
width=1920 height=1080 pix_fmt=yuv420p sample_aspect_ratio=N/A
pict_type=P coded_picture_number=2 display_picture_number=0
interlaced_frame=0 top_field_first=0 repeat_pict=0
[/FRAME]
While these are the same frames created using the second block of code:
[FRAME]
media_type=video stream_index=0 key_frame=1
pkt_pts=14848 pkt_pts_time=0.966667 pkt_dts=14848 pkt_dts_time=0.966667
best_effort_timestamp=14848 best_effort_timestamp_time=0.966667
pkt_duration=512 pkt_duration_time=0.033333 pkt_pos=757791 pkt_size=65625
width=1920 height=1080 pix_fmt=yuv420p sample_aspect_ratio=N/A
pict_type=I coded_picture_number=58 display_picture_number=0
interlaced_frame=0 top_field_first=0 repeat_pict=0
[/FRAME]
[FRAME]
media_type=video stream_index=0 key_frame=0
pkt_pts=15360 pkt_pts_time=1.000000 pkt_dts=15360 pkt_dts_time=1.000000
best_effort_timestamp=15360 best_effort_timestamp_time=1.000000
pkt_duration=512 pkt_duration_time=0.033333 pkt_pos=823416 pkt_size=29642
width=1920 height=1080 pix_fmt=yuv420p sample_aspect_ratio=N/A
pict_type=P coded_picture_number=60 display_picture_number=0
interlaced_frame=0 top_field_first=0 repeat_pict=0
[/FRAME]
[FRAME]
media_type=video stream_index=0 key_frame=1
pkt_pts=30208 pkt_pts_time=1.966667 pkt_dts=30208 pkt_dts_time=1.966667
best_effort_timestamp=30208 best_effort_timestamp_time=1.966667
pkt_duration=512 pkt_duration_time=0.033333 pkt_pos=1546454 pkt_size=66021
width=1920 height=1080 pix_fmt=yuv420p sample_aspect_ratio=N/A
pict_type=I coded_picture_number=117 display_picture_number=0
interlaced_frame=0 top_field_first=0 repeat_pict=0
[/FRAME]
I immediately noticed that pkt_pts_time and other fields do not start from 0 and do not increase linearly (with respect to the 30 fps I established). Is it possible to obtain the same video by writing the AVPackets at a later point, as I'm doing? Thank you.
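For reference, the per-frame fields quoted above can be reproduced with ffprobe (the output filename is an assumption):

```shell
# Dump all decoded-frame fields for the first video stream of the file.
ffprobe -select_streams v:0 -show_frames out.mp4
```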