
Other articles (54)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)
Support for all media types
10 April 2011
Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheets, presentations), web (HTML, CSS), LaTeX, Google Earth) (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (6363)
-
FFmpeg: Green images from TS file
5 September 2017, by Tatsiana
I use ffmpeg to generate images from a TS file, and at some places it generates green images. I checked a couple of ffmpeg versions: the issue reproduces on 3.3.3, 2.8.11 and 2.7.7, but not on 2.6.9.
What might be the reason? For example, the output of 2.8.11:
$ ffmpeg -ss 00:42:20 -i video.ts -frames:v 1 out_42_20.jpg -loglevel debug
ffmpeg version 2.8.11-0ubuntu0.16.04.1 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-dhroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enab-enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable--enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enpy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-linable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enab83 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
  libavutil      54. 31.100 / 54. 31.100
  libavcodec     56. 60.100 / 56. 60.100
  libavformat    56. 40.101 / 56. 40.101
  libavdevice    56.  4.100 / 56.  4.100
  libavfilter     5. 40.101 /  5. 40.101
  libavresample   2.  1.  0 /  2.  1.  0
  libswscale      3.  1.101 /  3.  1.101
  libswresample   1.  2.101 /  1.  2.101
  libpostproc    53.  3.100 / 53.  3.100
Splitting the commandline.
Reading option '-ss' ... matched as option 'ss' (set the start time offset) with argument '00:42:20'.
Reading option '-i' ... matched as input url with argument 'video.ts'.
Reading option '-frames:v' ... matched as option 'frames' (set the number of frames to output) with argument '1'.
Reading option 'out_42_20.jpg' ... matched as output url.
Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
Finished splitting the commandline.
Parsing a group of options: global.
Applying option loglevel (set logging level) with argument debug.
Successfully parsed a group of options.
Parsing a group of options: input url video.ts.
Applying option ss (set the start time offset) with argument 00:42:20.
Successfully parsed a group of options.
Opening an input file: video.ts.
[mpegts @ 0x22ba400] Format mpegts probed with size=2048 and score=100
[mpegts @ 0x22ba400] stream=0 stream_type=1b pid=1e1 prog_reg_desc=
[mpegts @ 0x22ba400] stream=1 stream_type=f pid=1e2 prog_reg_desc=
[mpegts @ 0x22ba400] stream=2 stream_type=6 pid=1e3 prog_reg_desc=
[mpegts @ 0x22ba400] Before avformat_find_stream_info() pos: 0 bytes read:32768 seeks:0
[h264 @ 0x22be840] no picture
[mpegts @ 0x22ba400] max_analyze_duration 5000000 reached at 5000000 microseconds st:0
[mpegts @ 0x22ba400] stream 0: no PTS found at end of file, duration not set
[mpegts @ 0x22ba400] After avformat_find_stream_info() pos: 0 bytes read:4026800 seeks:3 frames:550
Input #0, mpegts, from 'video.ts':
  Duration: 02:15:27.97, start: 1.079989, bitrate: 4983 kb/s
  Program 1
    Stream #0:0[0x1e1], 216, 1/90000: Video: h264 (High), 4 reference frames ([27][0][0][0] / 0x001B), yuv420p(left), 1920x1080 (1920x1088)R 16:9], 1/50, 25 fps, 50 tbr, 90k tbn, 50 tbc
    Stream #0:1[0x1e2](und), 203, 1/90000: Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 127 kb/s
    Stream #0:2[0x1e3](und), 131, 1/90000: Audio: ac3 ([6][0][0][0] / 0x0006), 48000 Hz, 5.1(side), fltp, 384 kb/s
Successfully opened the file.
Parsing a group of options: output url out_42_20.jpg.
Applying option frames:v (set the number of frames to output) with argument 1.
Successfully parsed a group of options.
Opening an output file: out_42_20.jpg.
Successfully opened the file.
detected 1 logical cores
[graph 0 input from stream 0:0 @ 0x22ba280] Setting 'video_size' to value '1920x1080'
[graph 0 input from stream 0:0 @ 0x22ba280] Setting 'pix_fmt' to value '0'
[graph 0 input from stream 0:0 @ 0x22ba280] Setting 'time_base' to value '1/90000'
[graph 0 input from stream 0:0 @ 0x22ba280] Setting 'pixel_aspect' to value '1/1'
[graph 0 input from stream 0:0 @ 0x22ba280] Setting 'sws_param' to value 'flags=2'
[graph 0 input from stream 0:0 @ 0x22ba280] Setting 'frame_rate' to value '25/1'
[graph 0 input from stream 0:0 @ 0x22ba280] w:1920 h:1080 pixfmt:yuv420p tb:1/90000 fr:25/1 sar:1/1 sws_param:flags=2
[format @ 0x2328360] compat: called with args=[yuvj420p|yuvj422p|yuvj444p]
[format @ 0x2328360] Setting 'pix_fmts' to value 'yuvj420p|yuvj422p|yuvj444p'
[auto-inserted scaler 0 @ 0x2326600] Setting 'flags' to value 'bicubic'
[auto-inserted scaler 0 @ 0x2326600] w:iw h:ih flags:'bicubic' interl:0
[format @ 0x2328360] auto-inserting filter 'auto-inserted scaler 0' between the filter 'Parsed_null_0' and the filter 'format'
[AVFilterGraph @ 0x2327ba0] query_formats: 5 queried, 3 merged, 1 already done, 0 delayed
[auto-inserted scaler 0 @ 0x2326600] picking yuvj420p out of 3 ref:yuv420p alpha:0
[swscaler @ 0x2310660] deprecated pixel format used, make sure you did set range correctly
[auto-inserted scaler 0 @ 0x2326600] w:1920 h:1080 fmt:yuv420p sar:1/1 -> w:1920 h:1080 fmt:yuvj420p sar:1/1 flags:0x4
[mjpeg @ 0x2327620] Forcing thread count to 1 for MJPEG encoding, use -thread_type slice or a constant quantizer if you want to use multipl
[mjpeg @ 0x2327620] intra_quant_bias = 96 inter_quant_bias = 0
Output #0, image2, to 'out_42_20.jpg':
  Metadata:
    encoder: Lavf56.40.101
    Stream #0:0, 0, 1/25: Video: mjpeg, 1 reference frame, yuvj420p(pc, left), 1920x1080 [SAR 1:1 DAR 16:9], 1/25, q=2-31, 200 kb/s, 25 fps tbc
    Metadata:
      encoder: Lavc56.60.100 mjpeg
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[h264 @ 0x232a040] Missing reference picture, default is 0
[h264 @ 0x232a040] decode_slice_header error
[h264 @ 0x232a040] Missing reference picture, default is 0
[h264 @ 0x232a040] decode_slice_header error
[h264 @ 0x232a040] Missing reference picture, default is 0
[h264 @ 0x232a040] decode_slice_header error
timestamp discontinuity 0, new offset= -2541079989
[h264 @ 0x232a040] no picture
[h264 @ 0x232a040] illegal short term buffer state detected
[h264 @ 0x232a040] Missing reference picture, default is 65537
[h264 @ 0x232a040] decode_slice_header error
[h264 @ 0x232a040] Missing reference picture, default is 65537
[h264 @ 0x232a040] decode_slice_header error
[h264 @ 0x232a040] Missing reference picture, default is 65537
[h264 @ 0x232a040] decode_slice_header error
[h264 @ 0x232a040] Missing reference picture, default is 65537
[h264 @ 0x232a040] decode_slice_header error
[h264 @ 0x232a040] Missing reference picture, default is 65537
[h264 @ 0x232a040] decode_slice_header error
[h264 @ 0x232a040] Missing reference picture, default is 65537
[h264 @ 0x232a040] decode_slice_header error
[h264 @ 0x232a040] Missing reference picture, default is 65537
[h264 @ 0x232a040] decode_slice_header error
[h264 @ 0x232a040] Missing reference picture, default is 65537
[h264 @ 0x232a040] decode_slice_header error
[h264 @ 0x232a040] Missing reference picture, default is 65537
[h264 @ 0x232a040] decode_slice_header error
[h264 @ 0x232a040] no picture
*** 0 dup!
[AVIOContext @ 0x2448c00] Statistics: 0 seeks, 2 writeouts
No more output streams to write to, finishing.
frame=    1 fps=0.0 q=2.2 Lsize=N/A time=00:00:00.04 bitrate=N/A dup=1 drop=1
video:32kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Input file #0 (video.ts):
  Input stream #0:0 (video): 9 packets read (261340 bytes); 3 frames decoded;
  Input stream #0:1 (audio): 0 packets read (0 bytes);
  Input stream #0:2 (audio): 0 packets read (0 bytes);
  Total: 9 packets (261340 bytes) demuxed
Output file #0 (out_42_20.jpg):
  Output stream #0:0 (video): 1 frames encoded; 1 packets muxed (33209 bytes);
  Total: 1 packets (33209 bytes) muxed
3 frames successfully decoded, 0 decoding errors
[AVIOContext @ 0x22c2e40] Statistics: 8798032 bytes read, 36 seeks
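A hedged note (mine, not the poster's): the repeated "Missing reference picture" errors suggest the input seek lands inside an open GOP, so the first decodable frame lacks its references and comes out green. Moving -ss after -i makes ffmpeg decode from the start of the file up to the target, which is far slower but avoids mid-GOP entry and helps confirm the diagnosis:

ffmpeg -i video.ts -ss 00:42:20 -frames:v 1 out_42_20_slow.jpg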
-
CUDA Memory Management: re-using device memory from C calls (multithreaded, ffmpeg), but failing on cudaMemcpy
4 March 2013, by Nuke Stollak
I'm trying to CUDA-fy my ffmpeg filter, which was taking over 90% of the CPU time according to gprof. I first went from one core to OpenMP on 4 cores and got a 3.8x increase in frames encoded per second, but it's still too slow. CUDA seemed like the next natural step.
I've gotten a modest (20% ?) increase by replacing one of my filter's functions with a CUDA kernel call, and just to get things up and running, I was cudaMalloc'ing and cudaMemcpy'ing on each frame. I suspected I would get better results if I weren't doing this each frame, so before I go ahead and move the rest of my code to CUDA, I wanted to fix this by allocating the memory before my filter is called and freeing it afterwards, but the device memory isn't having it. I'm only storing the device memory locations outside of code that knows about CUDA ; I'm not trying to use the data there, just save it for the next time I call a CUDA-aware function that needs it.
Here's where I am so far:
Environment: the latest Amazon Linux AMI on EC2's GPU cluster instances, with the latest updates installed. Everything is fairly standard.
My filter is split into two files: vf_myfilter.c (compiled by gcc, like almost every other file in ffmpeg) and vf_myfilter_cu.cu (compiled by nvcc). My Makefile's link step includes
-lcudart
and both .o files. I build vf_myfilter_cu.o using (as one line):

nvcc -I. -I./ -I/opt/nvidia/cuda/include $(CPPFLAGS)
-Xcompiler "$(CFLAGS)"
-c -o libavfilter/vf_myfilter_cu.o libavfilter/vf_myfilter_cu.cu

When the variables (set by configure) are expanded, here's what I get, again all in one line but split up here for easier reading. I just noticed the duplicate include path directives, but it shouldn't hurt.
nvcc -I. -I./ -I/opt/nvidia/cuda/include -I. -I./ -D_ISOC99_SOURCE
-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_POSIX_C_SOURCE=200112
-D_XOPEN_SOURCE=600 -DHAVE_AV_CONFIG_H
-Xcompiler "-fopenmp -std=c99 -fomit-frame-pointer -pthread -g
-Wdeclaration-after-statement -Wall -Wno-parentheses
-Wno-switch -Wno-format-zero-length -Wdisabled-optimization
-Wpointer-arith -Wredundant-decls -Wno-pointer-sign
-Wwrite-strings -Wtype-limits -Wundef -Wmissing-prototypes
-Wno-pointer-to-int-cast -Wstrict-prototypes -O3 -fno-math-errno
-fno-signed-zeros -fno-tree-vectorize
-Werror=implicit-function-declaration -Werror=missing-prototypes
-Werror=vla "
-c -o libavfilter/vf_myfilter_cu.o libavfilter/vf_myfilter_cu.cu

vf_myfilter.c calls three functions from the vf_myfilter_cu.cu file, which handle memory and call the CUDA kernel code. I thought I would be able to save the device pointers from my memory initialization, which runs once per ffmpeg run, and re-use that space each time I call the wrapper for my kernel function, but when I cudaMemcpy from my host memory to the device memory pointer I stored, it fails with cudaErrorInvalidValue. If I cudaMalloc my device memory on every frame, I'm fine.
I plan on using pinned host memory, once I have everything up in CUDA code and have minimized the number of times I need to return to the main ffmpeg code.
Steps taken:
First sign of trouble: search the web. I found "Passing a pointer to device memory between classes in CUDA" and printed out the pointers at various places in my execution to ensure that the device memory values were the same everywhere, and they are. FWIW, they seem to start around 0x90010000.
ffmpeg's configure gave me -pthreads, so I checked to see if my filter was being called from multiple threads, following "How can I tell if pthread_self is the main (first) thread in the process?", by checking

syscall(SYS_gettid) == getpid()

to ensure that I'm not calling CUDA from different threads; I'm indeed in the primary thread at every step, according to those two functions. I am still using OpenMP later, around some for loops in the main .c filter function, but the calls to CUDA don't occur in those loops.

Code overview:
ffmpeg provides me a MyFilterContext structure pointer on each frame, as well as on the filter's config_input and uninit routines (called once per file), so I added some *host_var and *dev_var variables (a few of each, float and unsigned char).
There is a whole lot of code I skipped for this post, but most of it has to do with my algorithm and details involved in writing an ffmpeg filter. I'm actually using about 6 host variables and 7 device variables right now, but for demonstration I limited it to one of each.
Here is, broadly, what my vf_myfilter.c looks like.
// declare my functions from vf_myfilter_cu.cu
extern void cudaMyInit(unsigned char **dev_var, size_t mysize);
extern void cudaMyUninit(unsigned char *dev_var);
extern void cudaMyFunction(unsigned char *host_var, unsigned char *dev_var, size_t mysize);

// part of the MyFilterContext structure, which ffmpeg keeps track of for me.
typedef struct {
    unsigned char *host_var;
    unsigned char *dev_var;
} MyFilterContext;

// ffmpeg calls this function once per file, before any frames are processed.
static int config_input(AVFilterLink *inlink)
{
    // how ffmpeg passes me my context, fairly standard.
    MyFilterContext *myContext = inlink->dst->priv;
    // compute the size of one video plane of one frame of video
    size_t mysize = sizeof(unsigned char) * inlink->w * inlink->h;
    // av_mallocz is a malloc wrapper provided and required by ffmpeg
    myContext->host_var = (unsigned char *) av_mallocz(mysize);
    // Here's where I attempt to allocate my device memory.
    cudaMyInit(&myContext->dev_var, mysize);
    return 0;
}
// Called once per frame of video
static int filter_frame(AVFilterLink *inlink, AVFilterBufferRef *frame)
{
    MyFilterContext *myContext = inlink->dst->priv;
    // sanity check to make sure that this isn't part of the multithreaded code
    if (syscall(SYS_gettid) != getpid())
        av_log(....); // This line never runs, so it's not threaded?
    // ...fill host_var with data from frame,
    // set mysize to the size of the buffer
    // Call my wrapper function defined in the .cu file
    cudaMyFunction(myContext->host_var, myContext->dev_var, mysize);
    // ...take the results from host_var and apply them to frame,
    // ...and return the processed frame to ffmpeg
}

// called after everything else has happened: free up the memory.
static av_cold void uninit(AVFilterContext *ctx)
{
    MyFilterContext *myContext = ctx->priv;
    // free my host_var
    if (myContext->host_var != NULL) {
        av_free(myContext->host_var);
        myContext->host_var = NULL;
    }
    // free my dev_var
    cudaMyUninit(myContext->dev_var);
}

Here is, broadly, what my vf_myfilter_cu.cu looks like:
// my kernel function that does the work.
__global__ void myfunc(unsigned char *dev_var, size_t mysize)
{
    // find the offset for this particular GPU thread to process
    // exit this function if the block/thread combo points to somewhere
    // outside the frame
    // make sure we're less than mysize bytes from the beginning of dev_var
    // do things to dev_var[some_offset]
}

// Allocate the device memory.
extern "C" void cudaMyInit(unsigned char **dev_var, size_t mysize)
{
    if (cudaMalloc((void **) dev_var, mysize) != cudaSuccess) {
        printf("Cannot allocate the memory\n");
    }
}

// Free the device memory.
extern "C" void cudaMyUninit(unsigned char *dev_var)
{
    cudaFree(dev_var);
}
// Copy data from the host to the device,
// call the kernel function, and
// copy data from the device to the host.
extern "C" void cudaMyFunction(
    unsigned char *host_var,
    unsigned char *dev_var,
    size_t mysize)
{
    cudaError_t cres;
    // dev_works is what I want to get rid of, but
    // to make sure that there's not something more obvious going
    // on, I made sure that my cudaMemcpy works if I'm allocating
    // the device memory in every frame.
    unsigned char *dev_works;
    if (cudaMalloc((void **) &dev_works, mysize) != cudaSuccess) {
        // I don't see this message
        printf("failed at per-frame malloc\n");
    }
    // THIS PART WORKS, copying host_var to dev_works
    cres = cudaMemcpy((void *) dev_works, host_var, mysize, cudaMemcpyHostToDevice);
    if (cres != cudaSuccess) {
        if (cres == cudaErrorInvalidValue) {
            // I don't see this message.
            printf("cudaErrorInvalidValue at per-frame cudaMemcpy\n");
        }
    }
    // THIS PART FAILS, copying host_var to dev_var
    cres = cudaMemcpy((void *) dev_var, host_var, mysize, cudaMemcpyHostToDevice);
    if (cres != cudaSuccess) {
        if (cres == cudaErrorInvalidValue) {
            // this is the error code that prints.
            printf("cudaErrorInvalidValue at per-frame cudaMemcpy\n");
        }
        // I check for other error codes, but they're not being hit.
    }
    // and this works with dev_works
    // (the launch configuration was stripped when this post was scraped;
    // "grid" and "block" stand in for whatever dimensions the original used)
    myfunc<<<grid, block>>>(dev_works, mysize);
    if (cudaMemcpy(host_var, dev_works, mysize, cudaMemcpyDeviceToHost) != cudaSuccess) {
        // I don't see this message.
        printf("Failed to copy post-kernel func\n");
    }
    cudaFree(dev_works);
}

Any ideas?
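A hedged debugging step I'd add here (not part of the original post): cudaErrorInvalidValue on a cudaMemcpy usually means the device pointer is not a live allocation in the calling context, and cuda-memcheck will report exactly which API call received an invalid pointer. The filter and file names below are placeholders:

cuda-memcheck ./ffmpeg -i input.mp4 -vf myfilter -f null -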
-
ffmpeg concatenation after using drawtext filter
12 August 2016, by Sven Hoskens
I'm fairly new to ffmpeg, but after a few days of searching on this issue I've completely hit a brick wall. Any help would be appreciated.
My use case: our client wants to upload videos for multiple regions. Each video will be in the same format, 1920x1080 mp4. For each region, they want to add a different image at the end of the video for a few seconds. This image contains their logo, some additional info, and a variable code. They will enter this code alongside the uploaded video. The image stays the same, so it is already present on the server.
So basically, I have an input video, a video of an image, and a small code. I need to add this code to the video of the image (in a predefined position), and then I need to append the resulting video to the end of the input video. Once that is complete, I just need to output the video in 1920x1080 and in 1024x576.
I have tried several things, but the concatenation step always fails with the manipulated videos.
Attempt 1
In my first attempt, I used ffmpeg to create a video from an image, and add the text in the designated area.
ffmpeg -y -f lavfi -i image.png -r 30 -t 10 -pix_fmt yuv420p -map 0:v -vf drawtext="fontfile=HelveticaNeue.dfont: text='GLNS/TEST/1234b': fontcolor=black: fontsize=20: box=1: boxcolor=white: boxborderw=7: x=179: y=805" imageVideo.mp4
This command creates a .mp4 video of the correct size, with a duration of 10 seconds, and adds the text ’GLNS/TEST/1234b’ in the correct location.
Next, I use the following command to concatenate the two videos. Both have the same resolution and codec.
ffmpeg -f concat -safe 0 -i config.txt -vf scale=1920:1080 outputHD.mp4 -vf scale=1024:576 outputSD.mp4
config.txt contains the following:
file my_input_file.mp4
file ImageVideo.mp4

This concatenation works with regular videos. However, when I use it with ImageVideo.mp4 (the one created by the first command), I get this error log:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f86dc924600] Auto-inserting h264_mp4toannexb bitstream filtereed=0.509x
[aac @ 0x7f86dc019e00] Number of bands (31) exceeds limit (5).
Error while decoding stream #0:1: Invalid data found when processing input
[aac @ 0x7f86dc019e00] Number of bands (27) exceeds limit (8).
Error while decoding stream #0:1: Invalid data found when processing input
[h264 @ 0x7f86dd857200] Error splitting the input into NAL units.
[h264 @ 0x7f86dd829400] Invalid NAL unit size.
[h264 @ 0x7f86dd829400] Error splitting the input into NAL units.
[aac @ 0x7f86dc019e00] Number of bands (10) exceeds limit (1).
Error while decoding stream #0:1: Invalid data found when processing input
[h264 @ 0x7f86dd816800] Invalid NAL unit size.
[h264 @ 0x7f86dd816800] Error splitting the input into NAL units.
[aac @ 0x7f86dc019e00] Number of bands (24) exceeds limit (1).
Error while decoding stream #0:1: Invalid data found when processing input
# this goes on for a few hundred lines

The resulting output is identical to the input video, but does not contain the desired image video at the end.
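A hedged first diagnostic (my addition, not from the original post): the concat demuxer assumes every listed file has the same streams, in the same order, with the same codec parameters, so dumping both files' stream layouts side by side is worth doing before anything else:

ffprobe -v error -show_entries stream=index,codec_type,codec_name,width,height,pix_fmt,time_base,sample_rate,channels my_input_file.mp4
ffprobe -v error -show_entries stream=index,codec_type,codec_name,width,height,pix_fmt,time_base,sample_rate,channels ImageVideo.mp4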
Attempt 2
Since the above attempt didn't work, I tried concatenating a video of the image that I had our designer make with Adobe After Effects. This video was also saved as an .mp4 with the H264 codec. If I concatenate the input video and this one, I get a correct result. However, as soon as I add the code in the designated area with this command:
ffmpeg -i new_image_video.mp4 -vf drawtext="fontfile=HelveticaNeue.dfont: text='GLNS/TEST/1234b': fontcolor=black: fontsize=20: box=1: boxcolor=white: boxborderw=7: x=179: y=805" -c:v libx264 imageVideo.mp4
I get this error:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7ff94c800000] Auto-inserting h264_mp4toannexb bitstream filter97x
[h264 @ 0x7ff94b053800] top block unavailable for requested intra mode -1
[h264 @ 0x7ff94b053800] error while decoding MB 0 0, bytestream 49526
[h264 @ 0x7ff94b053e00] number of reference frames (1+3) exceeds max (3; probably corrupt input), discarding one
[h264 @ 0x7ff94b053e00] chroma_log2_weight_denom 28 is out of range
[h264 @ 0x7ff94b053e00] illegal long ref in memory management control operation 2
[h264 @ 0x7ff94b053e00] cabac_init_idc 32 overflow
[h264 @ 0x7ff94b053e00] decode_slice_header error
[h264 @ 0x7ff94b053e00] no frame!
[h264 @ 0x7ff94b053800] concealing 8160 DC, 8160 AC, 8160 MV errors in I frame
[h264 @ 0x7ff94b072a00] reference overflow 22 > 15 or 0 > 15
[h264 @ 0x7ff94b072a00] decode_slice_header error
[h264 @ 0x7ff94b072a00] no frame!
[h264 @ 0x7ff94b01a400] illegal modification_of_pic_nums_idc 20
[h264 @ 0x7ff94b01a400] decode_slice_header error
[h264 @ 0x7ff94b01a400] no frame!
[h264 @ 0x7ff94b01aa00] illegal modification_of_pic_nums_idc 20
[h264 @ 0x7ff94b01aa00] decode_slice_header error
[h264 @ 0x7ff94b01aa00] no frame!
Error while decoding stream #0:0: Invalid data found when processing input
[h264 @ 0x7ff94b053800] deblocking_filter_idc 8 out of range
[h264 @ 0x7ff94b053800] decode_slice_header error
[h264 @ 0x7ff94b053800] no frame!
Error while decoding stream #0:0: Invalid data found when processing input
[h264 @ 0x7ff94b053e00] illegal memory management control operation 8
[h264 @ 0x7ff94b053e00] co located POCs unavailable
[h264 @ 0x7ff94b053e00] error while decoding MB 2 0, bytestream -35
[h264 @ 0x7ff94b053e00] concealing 8160 DC, 8160 AC, 8160 MV errors in B frame
[h264 @ 0x7ff94b072a00] number of reference frames (1+3) exceeds max (3; probably corrupt input), discarding one
# this goes on for a while...
[h264 @ 0x7ff94b01a400] concealing 4962 DC, 4962 AC, 4962 MV errors in B frame
Error while decoding stream #0:0: Invalid data found when processing input
frame= 2553 fps= 17 q=-1.0 Lsize= 26995kB time=00:01:42.16 bitrate=2164.6kbits/s dup=0 drop=60 speed=0.697x
video:25258kB audio:1661kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.285236%
[libx264 @ 0x7ff94b810400] frame I:35 Avg QP:17.45 size: 55070
[libx264 @ 0x7ff94b810400] frame P:711 Avg QP:19.73 size: 18712
[libx264 @ 0x7ff94b810400] frame B:1807 Avg QP:21.53 size: 5884
[libx264 @ 0x7ff94b810400] consecutive B-frames: 3.4% 5.0% 4.9% 86.6%
[libx264 @ 0x7ff94b810400] mb I I16..4: 38.2% 49.3% 12.5%
[libx264 @ 0x7ff94b810400] mb P I16..4: 12.4% 14.0% 1.0% P16..4: 29.6% 4.8% 1.9% 0.0% 0.0% skip:36.2%
[libx264 @ 0x7ff94b810400] mb B I16..4: 1.5% 1.2% 0.1% B16..8: 27.3% 1.6% 0.1% direct: 1.8% skip:66.4% L0:45.8% L1:51.4% BI: 2.8%
[libx264 @ 0x7ff94b810400] 8x8 transform intra:49.5% inter:85.4%
[libx264 @ 0x7ff94b810400] coded y,uvDC,uvAC intra: 21.2% 22.3% 2.5% inter: 4.6% 7.0% 0.0%
[libx264 @ 0x7ff94b810400] i16 v,h,dc,p: 23% 26% 10% 41%
[libx264 @ 0x7ff94b810400] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 19% 35% 3% 3% 3% 3% 3% 2%
[libx264 @ 0x7ff94b810400] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 20% 16% 5% 7% 6% 5% 5% 4%
[libx264 @ 0x7ff94b810400] i8c dc,h,v,p: 67% 16% 15% 2%
[libx264 @ 0x7ff94b810400] Weighted P-Frames: Y:7.3% UV:4.2%
[libx264 @ 0x7ff94b810400] ref P L0: 66.3% 8.7% 17.9% 7.0% 0.1%
[libx264 @ 0x7ff94b810400] ref B L0: 88.2% 10.1% 1.7%
[libx264 @ 0x7ff94b810400] ref B L1: 94.9% 5.1%
[libx264 @ 0x7ff94b810400] kb/s:2026.12
[aac @ 0x7ff94b072400] Qavg: 635.626

The resulting output is identical to the input video, but does not contain the desired image video at the end.
One thing I have noticed: when I inspect the video files on Mac (Get Info), they always contain these lines under 'More info':
Dimensions: 1920 x 1080
Codecs: H.264, AAC
Color profile: HD(1-1-1)
Duration: 01:42
Audio channels: 2
Last opened: Today 11:02

However, the videos which pass through the drawtext filter have this:
Dimensions: 1920 x 1080
Codecs: AAC, H.264
Duration: 00:10
Audio channels: 2
Last opened: Today 11:07As you can see, there is no color profile entry, and the codecs have switched places. I assume this is related to my issue, but I can’t seem to find a fix for it.
PS: The application will run in a PHP environment (Symfony). I noticed the concat command wasn't available in the Symfony bundle for ffmpeg, so I'm using the regular terminal commands, which I'll execute from PHP.
EDIT

Attempt 3

On the advice of a coworker, I tried converting the video to .avi and reconverting it to .mp4, in the hope that this would drop any corrupted or extra info introduced by the drawtext filter. This spits out a completely different error.
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f812413da00] Auto-inserting h264_mp4toannexb bitstream filtereed=0.516x
[concat @ 0x7f8124009a00] DTS 1569260 < 2551000 out of order
[h264 @ 0x7f8124846800] left block unavailable for requested intra4x4 mode -1
[h264 @ 0x7f8124846800] error while decoding MB 0 0, bytestream 47919
[h264 @ 0x7f8124846800] concealing 8160 DC, 8160 AC, 8160 MV errors in I frame
[aac @ 0x7f8125809a00] Queue input is backward in time
[aac @ 0x7f8125815a00] Queue input is backward in time
[h264 @ 0x7f8124846e00] number of reference frames (1+3) exceeds max (3; probably corrupt input), discarding one
[h264 @ 0x7f8124846e00] chroma_log2_weight_denom 26 is out of range
[h264 @ 0x7f8124846e00] deblocking_filter_idc 32 out of range
[h264 @ 0x7f8124846e00] decode_slice_header error
[h264 @ 0x7f8124846e00] no frame!
[mp4 @ 0x7f8124802200] Non-monotonous DTS in output stream 0:1; previous: 4902912, current: 4505491; changing to 4902913. This may result in incorrect timestamps in the output file.
[mp4 @ 0x7f8125813000] Non-monotonous DTS in output stream 1:1; previous: 4902912, current: 4505491; changing to 4902913. This may result in incorrect timestamps in the output file.
[h264 @ 0x7f8124803400] reference overflow 20 > 15 or 0 > 15
[h264 @ 0x7f8124803400] decode_slice_header error
[h264 @ 0x7f8124803400] no frame!
[mp4 @ 0x7f8124802200] Non-monotonous DTS in output stream 0:1; previous: 4902913, current: 4506515; changing to 4902914. This may result in incorrect timestamps in the output file.
[mp4 @ 0x7f8125813000] Non-monotonous DTS in output stream 1:1; previous: 4902913, current: 4506515; changing to 4902914. This may result in incorrect timestamps in the output file.
[mp4 @ 0x7f8124802200] Non-monotonous DTS in output stream 0:1; previous: 4902914, current: 4507539; changing to 4902915. This may result in incorrect timestamps in the output file.
[mp4 @ 0x7f8125813000] Non-monotonous DTS in output stream 1:1; previous: 4902914, current: 4507539; changing to 4902915. This may result in incorrect timestamps in the output file.
# Again, this continues for quite a while.
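A final hedged sketch (my addition, not the poster's): the concat filter, unlike the concat demuxer, decodes and re-encodes both inputs, so it tolerates clips whose encodings don't match bit-for-bit. Assuming both files are 1920x1080 with stereo audio, something like this sidesteps the bitstream-level errors above:

ffmpeg -i my_input_file.mp4 -i imageVideo.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" -c:v libx264 -c:a aac outputHD.mp4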