
Other articles (20)
- Automatic backup of SPIP channels
1 April 2010
As part of setting up an open platform, it is important for hosts to have fairly regular backups available in order to guard against any potential problem.
To accomplish this task, we rely on two SPIP plugins: Saveauto, which produces a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)
- Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or later. If needed, contact your MédiaSpip administrator to find out.
- Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that led to the problem; a link to the site / page in question.
If you think you have fixed the bug, open a ticket and attach a corrective patch to it.
You may also (...)
On other sites (6367)
- Correctly Allocate And Fill Frame In FFmpeg
14 April 2022, by Michel Feinstein
I am filling a Frame with a BGR image for encoding, and I am getting a memory leak. I think I got to the source of the problem, but it appears to be a library issue instead. Since FFmpeg is such a mature library, I think I am misusing it, and I would like to be instructed on how to do it correctly.

I am allocating a Frame using:


AVFrame *bgrFrame = av_frame_alloc();




And later I allocate the image in the Frame using:


av_image_alloc(bgrFrame->data, bgrFrame->linesize, bgrFrame->width, bgrFrame->height, AV_PIX_FMT_BGR24, 32);




Then I fill the allocated image using:



av_image_fill_pointers(bgrFrame->data, AV_PIX_FMT_BGR24, bgrFrame->height, originalBGRImage.data, bgrFrame->linesize);




Where originalBGRImage is an OpenCV Mat.


And this has a memory leak: apparently av_image_alloc() allocates memory, and av_image_fill_pointers() also allocates memory on the same pointers (I can see bgrFrame->data[0] changing between calls).


If I call

av_freep(&bgrFrame->data[0]);

after av_image_alloc(), it's fine, but if I call it after av_image_fill_pointers(), the program crashes, even though bgrFrame->data[0] is not NULL, which I find very curious.


Looking at FFmpeg's av_image_alloc() source code, I see it calls av_image_fill_pointers() twice inside it, once allocating a buffer buf... and later, in av_image_fill_pointers()'s own source code, data[0] is replaced by the image pointer, which is (I think) the source of the memory leak, since data[0] was holding buf from the previous av_image_alloc() call.


So this brings the final question: what is the correct way of filling a frame with an image?
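For reference, here is a minimal sketch of one common way to do this (it is not taken from the question, so treat the helper name frame_from_bgr_mat and its details as assumptions): let the frame own its buffer through av_frame_get_buffer() and copy the Mat's rows into it, so neither av_image_alloc() nor av_image_fill_pointers() is needed and a single av_frame_free() releases everything.

// Sketch only: build a BGR24 AVFrame from a CV_8UC3 cv::Mat by copying rows.
// frame_from_bgr_mat() is a hypothetical helper, not FFmpeg API and not the
// poster's code.
extern "C" {
#include <libavutil/frame.h>
}
#include <opencv2/core.hpp>
#include <cstring>

static AVFrame *frame_from_bgr_mat(const cv::Mat &image)
{
    AVFrame *frame = av_frame_alloc();
    if (!frame)
        return nullptr;

    frame->format = AV_PIX_FMT_BGR24;
    frame->width  = image.cols;
    frame->height = image.rows;

    // Let FFmpeg allocate and own the pixel buffer; no av_image_alloc() here.
    if (av_frame_get_buffer(frame, 0) < 0) {
        av_frame_free(&frame);
        return nullptr;
    }

    // Copy row by row: the Mat stride (image.step) and the frame stride
    // (frame->linesize[0]) may differ because of alignment padding.
    const int row_bytes = image.cols * 3;
    for (int y = 0; y < image.rows; y++)
        std::memcpy(frame->data[0] + y * frame->linesize[0], image.ptr(y), row_bytes);

    return frame; // release it later with av_frame_free(&frame)
}

Wrapping the Mat's memory directly (for example with av_image_fill_arrays()) would avoid the copy, but then the frame does not own data[0], the Mat has to outlive the frame, and calling av_freep(&frame->data[0]) on it would crash for the same reason described above.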


- keepalive type and frequency in ffmpeg [on hold]
19 November 2013, by Jack Simth
My company has a number of IP cameras that we distribute, specifically Grandstream, and the manufacturer has changed their firmware. The normal keepalive that ffmpeg uses for RTSP streams (either ff_rtsp_send_cmd_async(s, "GET_PARAMETER", rt->control_uri, NULL); or ff_rtsp_send_cmd_async(s, "OPTIONS", "*", NULL);, both in libavformat/rtspdec.c) is no longer working, for two reasons:
1) The new Grandstream firmware now checks for a receiver report to determine whether or not the program reading the stream is live, instead of accepting just anything.
2) The new Grandstream firmware requires that the receiver report keeping the connection alive arrive at least once every 25 seconds, and on the audio stream it currently arrives only about every 30 seconds or so (the video stream gets one roughly every 7 seconds).
So after about a minute with ffmpeg connected, the camera stops sending the audio stream, the audio stream on ffmpeg reads end-of-file, and then ffmpeg shuts everything down.
As I can't change the firmware, I'm trying to dig through the ffmpeg code to make it send the appropriate receiver report for the keepalive... but I am getting nowhere. I've added a little snippet of code to the receiver reports so I know when they are running when I call ffmpeg with debug logging, but... well, it's not going well.
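Just to pin down the timing requirement (this is an illustration, not FFmpeg's actual internals: send_receiver_report below is a hypothetical stand-in for whatever code path emits the RTCP receiver report, while av_gettime() is a real libavutil call), the check the read loop would need is a plain wall-clock deadline rather than anything tied to how many bytes have arrived:

// Illustration only: enforce the firmware's "receiver report at least every
// 25 seconds" rule with a wall-clock deadline. send_receiver_report is a
// hypothetical callback, not an FFmpeg API; av_gettime() returns microseconds.
extern "C" {
#include <libavutil/time.h>
}
#include <cstdint>
#include <functional>

static const int64_t RR_DEADLINE_US = 20 * 1000000LL; // stay safely under 25 s

struct KeepaliveTimer {
    int64_t last_rr_us = 0; // time the last receiver report was sent

    // Call from the packet-reading loop; fires the report whenever the
    // deadline has passed, regardless of how much data has been received.
    void poll(const std::function<void()> &send_receiver_report) {
        const int64_t now = av_gettime();
        if (last_rr_us == 0 || now - last_rr_us >= RR_DEADLINE_US) {
            send_receiver_report(); // hypothetical hook
            last_rr_us = now;
        }
    }
};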
Test command:
ffmpeg -loglevel debug -i rtsp://admin:admin@192.168.4.3:554/0 -acodec libmp3lame -ar 22050 -vcodec copy -y -f flv /dev/null &> test.txt
Test output:
[root@localhost ffmpeg]# cat test.txt
ffmpeg version 2.0 Copyright (c) 2000-2013 the FFmpeg developers
built on Aug 21 2013 14:24:28 with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-3)
configuration: --datadir=/usr/share/ffmpeg --bindir=/usr/local/bin --libdir=/usr/local/lib --incdir=/usr/local/include --shlibdir=/usr/lib --mandir=/usr/share/man --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables' --enable-avfilter --enable-libx264 --enable-gpl --enable-version3 --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-x11grab --enable-librtmp --enable-libopencore-amrnb --enable-libopencore-amrwb --disable-static --enable-libgsm --enable-libxvid --enable-libvpx --enable-libvorbis --enable-libvo-aacenc --enable-libmp3lame
libavutil 52. 38.100 / 52. 38.100
libavcodec 55. 18.102 / 55. 18.102
libavformat 55. 12.100 / 55. 12.100
libavdevice 55. 3.100 / 55. 3.100
libavfilter 3. 79.101 / 3. 79.101
libswscale 2. 3.100 / 2. 3.100
libswresample 0. 17.102 / 0. 17.102
libpostproc 52. 3.100 / 52. 3.100
Splitting the commandline.
Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
Reading option '-i' ... matched as input file with argument 'rtsp://admin:admin@192.168.4.3:554/0'.
Reading option '-acodec' ... matched as option 'acodec' (force audio codec ('copy' to copy stream)) with argument 'libmp3lame'.
Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '22050'.
Reading option '-vcodec' ... matched as option 'vcodec' (force video codec ('copy' to copy stream)) with argument 'copy'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'flv'.
Reading option '/dev/null' ... matched as output file.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option loglevel (set logging level) with argument debug.
Applying option y (overwrite output files) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input file rtsp://admin:admin@192.168.4.3:554/0.
Successfully parsed a group of options.
Opening an input file: rtsp://admin:admin@192.168.4.3:554/0.
[rtsp @ 0x9d9ccc0] SDP:
v=0
o=StreamingServer 3331435948 1116907222000 IN IP4 192.168.4.3
s=h264.mp4
c=IN IP4 0.0.0.0
t=0 0
a=control:*
m=video 0 RTP/AVP 96
a=control:trackID=0
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1; sprop-parameter-sets=Z0LgHtoCgPRA,aM4wpIA=
m=audio 0 RTP/AVP 0
a=control:trackID=1
a=rtpmap:0 PCMU/8000
a=ptime:20
m=application 0 RTP/AVP 107
a=control:trackID=2
a=rtpmap:107 vnd.onvif.metadata/90000
[rtsp @ 0x9d9ccc0] video codec set to: h264
[NULL @ 0x9d9f400] RTP Packetization Mode: 1
[NULL @ 0x9d9f400] Extradata set to 0x9d9f900 (size: 22)!
[rtsp @ 0x9d9ccc0] audio codec set to: pcm_mulaw
[rtsp @ 0x9d9ccc0] audio samplerate set to: 8000
[rtsp @ 0x9d9ccc0] audio channels set to: 1
[rtsp @ 0x9d9ccc0] hello state=0
[h264 @ 0x9d9f400] Current profile doesn't provide more RBSP data in PPS, skipping
Last message repeated 1 times
[rtsp @ 0x9d9ccc0] All info found
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://admin:admin@192.168.4.3:554/0':
Metadata:
title : h264.mp4
Duration: N/A, start: 0.000000, bitrate: 64 kb/s
Stream #0:0, 28, 1/90000: Video: h264 (Constrained Baseline), yuv420p, 640x480, 1/180000, 10 tbr, 90k tbn, 180k tbc
Stream #0:1, 156, 1/8000: Audio: pcm_mulaw, 8000 Hz, mono, s16, 64 kb/s
Successfully opened the file.
Parsing a group of options: output file /dev/null.
Applying option acodec (force audio codec ('copy' to copy stream)) with argument libmp3lame.
Applying option ar (set audio sampling rate (in Hz)) with argument 22050.
Applying option vcodec (force video codec ('copy' to copy stream)) with argument copy.
Applying option f (force format) with argument flv.
Successfully parsed a group of options.
Opening an output file: /dev/null.
Successfully opened the file.
detected 2 logical cores
[graph 0 input from stream 0:1 @ 0x9f15380] Setting 'time_base' to value '1/8000'
[graph 0 input from stream 0:1 @ 0x9f15380] Setting 'sample_rate' to value '8000'
[graph 0 input from stream 0:1 @ 0x9f15380] Setting 'sample_fmt' to value 's16'
[graph 0 input from stream 0:1 @ 0x9f15380] Setting 'channel_layout' to value '0x4'
[graph 0 input from stream 0:1 @ 0x9f15380] tb:1/8000 samplefmt:s16 samplerate:8000 chlayout:0x4
[audio format for output stream 0:1 @ 0x9efa7c0] Setting 'sample_fmts' to value 's32p|fltp|s16p'
[audio format for output stream 0:1 @ 0x9efa7c0] Setting 'sample_rates' to value '22050'
[audio format for output stream 0:1 @ 0x9efa7c0] Setting 'channel_layouts' to value '0x4|0x3'
[audio format for output stream 0:1 @ 0x9efa7c0] auto-inserting filter 'auto-inserted resampler 0' between the filter 'Parsed_anull_0' and the filter 'audio format for output stream 0:1'
[AVFilterGraph @ 0x9f15980] query_formats: 4 queried, 9 merged, 3 already done, 0 delayed
[auto-inserted resampler 0 @ 0x9dfada0] ch:1 chl:mono fmt:s16 r:8000Hz -> ch:1 chl:mono fmt:s16p r:22050Hz
Output #0, flv, to '/dev/null':
Metadata:
title : h264.mp4
encoder : Lavf55.12.100
Stream #0:0, 0, 1/1000: Video: h264 ([7][0][0][0] / 0x0007), yuv420p, 640x480, 1/90000, q=2-31, 1k tbn, 90k tbc
Stream #0:1, 0, 1/1000: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 22050 Hz, mono, s16p
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (pcm_mulaw -> libmp3lame)
Press [q] to stop, [?] for help
Current profile doesn't provide more RBSP data in PPS, skippingrate= 135.4kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 134.4kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 135.0kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 135.5kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 136.9kbits/s
Queue input is backward in time= 233kB time=00:00:13.69 bitrate= 139.4kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 136.3kbits/s
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 13926; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 13952; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 13979; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14005; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14031; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14057; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14083; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14109; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14135; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14161; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14188; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14214; changing to 14239. This may result in incorrect timestamps in the output file.
Current profile doesn't provide more RBSP data in PPS, skippingrate= 141.5kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 142.0kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 142.5kbits/s
Receiver Report delay: 469789, gettime: -1527669086, last_recep: 322446, timebase: -1534837492
Current profile doesn't provide more RBSP data in PPS, skippingrate= 141.5kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 141.7kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 141.1kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 140.6kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 140.7kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.9kbits/s
Receiver Report delay: 132993, gettime: -1516538925, last_recep: 322446, timebase: -1518568234
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.6kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.6kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.7kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.4kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 140.0kbits/s
Receiver Report delay: 897727, gettime: -1504870331, last_recep: 322446, timebase: -1518568552
[NULL @ 0x9d9f400] Current profile doesn't provide more RBSP data in PPS, skipping
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.4kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.1kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.0kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.0kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 138.6kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 138.5kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 138.2kbits/s
EOF on sink link output stream 0:1:default.time=00:00:58.40 bitrate= 139.6kbits/s
No more output streams to write to, finishing.
[libmp3lame @ 0x9dfa580] Trying to remove 344 more samples than there are in the queue
frame= 589 fps= 11 q=-1.0 Lsize= 1003kB time=00:00:58.85 bitrate= 139.5kbits/s
video:724kB audio:231kB subtitle:0 global headers:0kB muxing overhead 4.955356%
2959 frames successfully decoded, 0 decoding errors
[AVIOContext @ 0x9e021c0] Statistics: 3 seeks, 2860 writeouts
[root@localhost ffmpeg]#
- Extract RGB values from an AVFrame (FFmpeg) in C++
9 November 2011, by Extrakun
I am currently trying to read in video frames using FFmpeg. The format is PIX_FMT_RGB24; for each frame, the RGB values are all packed together in frame->data[0] (where frame is of type AVFrame).
How do I extract the individual R, G and B values for each frame? This is for processing the video. I would think it would work the same way as extracting the RGB values from a bitmap. Thanks!
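Assuming the frame really is packed RGB24 and has width, height and linesize filled in, a minimal sketch of the usual per-pixel walk looks like the following (visit_rgb is just an illustrative name); the key detail is that each row starts at data[0] + y * linesize[0], because linesize[0] can be larger than width * 3 due to padding:

// Sketch: iterate over a packed RGB24 AVFrame. For RGB24 the bytes of each
// pixel are stored in R, G, B order inside data[0].
extern "C" {
#include <libavutil/frame.h>
}
#include <cstdint>

static void visit_rgb(const AVFrame *frame)
{
    for (int y = 0; y < frame->height; y++) {
        const uint8_t *row = frame->data[0] + y * frame->linesize[0];
        for (int x = 0; x < frame->width; x++) {
            uint8_t r = row[3 * x + 0];
            uint8_t g = row[3 * x + 1];
            uint8_t b = row[3 * x + 2];
            // ... process (r, g, b) for pixel (x, y) here ...
            (void)r; (void)g; (void)b; // silence unused-variable warnings
        }
    }
}

This is indeed the same idea as walking a bitmap's pixel buffer, with the frame's linesize playing the role of the bitmap stride.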