
Media (1)
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
Other articles (38)
-
XMP PHP
13 May 2011
As Wikipedia puts it, XMP stands for:
Extensible Metadata Platform, or XMP, an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001 and integrated into version 5.0 of Adobe Acrobat.
Being XML-based, it handles a dynamic set of tags for use within the Semantic Web.
XMP makes it possible to record, as an XML document, information about a file: title, author, history (...) (see the short extraction sketch after this list)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
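As a quick illustration of the XMP teaser above: an XMP packet is plain XML embedded in the host file, so it can often be extracted simply by scanning for the x:xmpmeta wrapper element. A minimal Python sketch, assuming a hypothetical photo.jpg that actually carries an XMP packet:

def read_xmp(path):
    # XMP is serialized as an XML packet inside the file; locate the wrapper element.
    data = open(path, "rb").read()
    start = data.find(b"<x:xmpmeta")
    end = data.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        return None  # no XMP packet found
    return data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", errors="replace")

print(read_xmp("photo.jpg"))  # hypothetical file name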
On other sites (5510)
-
How to extract a JPEG image from an H264 stream in constant time
27 August 2021, by Ross Gardiner
I want to extract a JPEG frame from an H264 stream on disk. The extraction needs to be as fast as possible for my real-time requirements.


Until now I have been using the ffmpeg-python lib, which is just a Python wrapper for ffmpeg. Here is a code snippet:

out, _ = (
    ffmpeg
    .input('./5sec.h264')
    .filter('select', 'gte(n,{})'.format(144))                    # keep frames with index >= 144
    .output('pipe:', vframes=1, format='image2', vcodec='h264')   # write a single frame to stdout
    .run(capture_stdout=True)
)



This outputs the JPEG to stdout; with some effort I could read this into my program.


However, as I use larger and larger stream files, the time to extract the JPEG increases. I thought lookup time would be constant, since ffmpeg is highly optimised?

Is there a constant-time solution to look up and return a frame from an h264 (or even mjpeg) format stream on disk?


Edit:
Here's the command I use without the Python wrapper:

ffmpeg -i 5sec.h264 -frames:v 1 -filter:v "select=gte(n\,25)" -f image2 frame.jpg


Here's the output:


ffmpeg version 4.1.6-1~deb10u1+rpt2 Copyright (c) 2000-2020 the FFmpeg developers
 built with gcc 8 (Raspbian 8.3.0-6+rpi1)
 configuration: --prefix=/usr --extra-version='1~deb10u1+rpt2' --toolchain=hardened --incdir=/usr/include/arm-linux-gnueabihf --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-omx-rpi --enable-mmal --enable-neon --enable-rpi --enable-vout-drm --enable-v4l2-request --enable-libudev --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared --libdir=/usr/lib/arm-linux-gnueabihf --cpu=arm1176jzf-s --arch=arm
 libavutil 56. 22.100 / 56. 22.100
 libavcodec 58. 35.100 / 58. 35.100
 libavformat 58. 20.100 / 58. 20.100
 libavdevice 58. 5.100 / 58. 5.100
 libavfilter 7. 40.101 / 7. 40.101
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 3.100 / 5. 3.100
 libswresample 3. 3.100 / 3. 3.100
 libpostproc 55. 3.100 / 55. 3.100
Input #0, h264, from '5sec.h264':
 Duration: N/A, bitrate: N/A
 Stream #0:0: Video: h264 (High), yuv420p(progressive), 640x480, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[swscaler @ 0x1a25390] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to 'frame.jpg':
 Metadata:
 encoder : Lavf58.20.100
 Stream #0:0: Video: mjpeg, yuvj420p(pc), 640x480, q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
 Metadata:
 encoder : Lavc58.35.100 mjpeg
 Side data:
 cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
frame= 1 fps=0.4 q=6.8 Lsize=N/A time=00:00:01.04 bitrate=N/A speed=0.467x 
video:63kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown



Note that the achieved fps is 0.4. When I request the 125th frame rather than the 25th, the fps drops to 0.1.
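One plausible reason for this, and a sketch of a faster approach: a raw .h264 file is an elementary stream with no timestamps or seek index, so the select filter has to decode every frame from the start until it reaches frame n, and extraction time therefore grows with the requested frame number. Remuxing the stream once into a container (no re-encode) gives ffmpeg an index to seek against, after which an input-side seek only decodes from the nearest preceding keyframe. Below is a minimal ffmpeg-python sketch along those lines; it assumes the 5sec.h264 file from the question, the 25 fps reported in the log, and that the raw h264 demuxer in this build accepts the framerate input option.

import ffmpeg

# One-off: wrap the raw Annex-B stream in an MP4 container without re-encoding,
# so the file gains timestamps and a seek index.
ffmpeg.input('./5sec.h264', framerate=25).output('5sec.mp4', vcodec='copy').run()

# Per request: seek on the input side (uses the container index and decodes only
# from the nearest preceding keyframe), then emit a single JPEG frame on stdout.
frame_number = 144
fps = 25
out, _ = (
    ffmpeg
    .input('5sec.mp4', ss=frame_number / fps)
    .output('pipe:', vframes=1, format='image2', vcodec='mjpeg')
    .run(capture_stdout=True)
)

Lookup cost then scales with the GOP length rather than with the position in the file; for genuinely constant time the stream would need to be all-intra (every frame a keyframe) or MJPEG, where any frame can be decoded on its own.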


-
LibVLC: Is a filter like v360:fisheye from ffmpeg available in LibVLC?
8 September 2021, by Craab
I'm having problems playing fisheye (180) videos panoramically on the fly using LibVLCSharp in a WinForms application.


More specifically: what I want to do is filter the video with LibVLCSharp in a WinForms application (C#), with the same effect as


-vf v360=fisheye:e:ih_fov=180:iv_fov=180:pitch=90



from ffmpeg.


FYI: this does not tag the video with an equirectangular projection; the v360=fisheye filter dewarps the video and the video stays rectangular.


The input video looks like this: [fisheye source frame, image not included]

And the expected output should look like this: [dewarped panoramic frame, image not included]
This can be achieved by:


./ffmpeg -i input.mov -vf v360=fisheye:e:ih_fov=180:iv_fov=180:pitch=90,crop=in_w:in_h/2:in_w:in_h/2 output.mov



but I need to do this with LibVLC and on the fly.


Can I achieve this with LibVLC's video filter?


Or do I need to develop my own filter and write a wrapper for C#?


-
Trying to save frames as colored images using FFmpeg in C++
2 September 2021, by Tolga
I am new to FFmpeg and I am trying to save video frames as colored images. I have managed to save them as grayscale using Netpbm; however, I need to save the frames in color. I have tried implementing the code in this link.


However, I get an error:


'Exception thrown at 0x00E1FC4F (swscale-5.dll) in VideoDecoding2.exe:
 0xC0000005: Access violation writing location 0xCCCCCCCC.'



Is there any way to improve this code, or another way to save the frames in color?


Here is my code below.


- src_pix_fmt is AV_PIX_FMT_YUV420P and dst_pix_fmt is AV_PIX_FMT_RGB24.






// The source format comes from the decoder context; the destination keeps the same size.
src_pix_fmt = avcc->pix_fmt;

src_width = avcc->width;
src_height = avcc->height;

dst_width = src_width;
dst_height = src_height;

// Size of one dst_pix_fmt (RGB24) image, and a buffer large enough to hold it.
numBytes = av_image_get_buffer_size(dst_pix_fmt, dst_width, dst_height, 0);

buffer = (uint8_t*)av_malloc(numBytes);

if ((ret = av_image_alloc(src_data, src_linesize, src_width, src_height, src_pix_fmt, 16)) < 0)
{
    printf("Couldn't allocate source image.\n");
    return 0;
}

// Point frameRGB's data/linesize at the allocated buffer.
av_image_fill_arrays(frameRGB->data, frameRGB->linesize, buffer, dst_pix_fmt, dst_width, dst_height, 0);

while (av_read_frame(avfc, packet) >= 0)
{
    // Feed the compressed packet to the decoder.
    ret = avcodec_send_packet(avcc, packet);
    if (ret < 0)
    {
        printf("Packets could not be supplied to decoder.\n");
        return -1;
    }

    // Try to receive a decoded frame from the decoder.
    ret = avcodec_receive_frame(avcc, frame);
    printf("%d", ret);

    if (packet->stream_index == videoStream)
    {
        // Conversion context from the decoder's pixel format to RGB24.
        sws_ctx = sws_getContext(src_width, src_height, src_pix_fmt,
                                 dst_width, dst_height, dst_pix_fmt,
                                 SWS_BILINEAR, NULL, NULL, NULL);

        if (!sws_ctx)
        {
            printf("Cannot create scale context for conversion\n"
                   "fmt:%s s:%dx%d --> fmt:%s s:%dx%d\n",
                   av_get_pix_fmt_name(src_pix_fmt), src_width, src_height,
                   av_get_pix_fmt_name(dst_pix_fmt), dst_width, dst_height);
            return 0;
        }

        // Convert the decoded frame into the dst_data planes.
        sws_scale(sws_ctx, (const uint8_t* const*)frame->data, frame->linesize, 0, frame->height, dst_data, dst_linesize);

        FILE* f;
        char szFilename[32];
        int y;

        // Write the converted frame out as a binary PPM (P6) image.
        snprintf(szFilename, sizeof(szFilename), "frame%d.ppm", avcc->frame_number);
        fopen_s(&f, szFilename, "wb");

        if (f == NULL)
        {
            printf("Couldn't open file.\n");
            return 0;
        }

        fprintf(f, "P6\n%d %d\n255\n", dst_width, dst_height);

        for (y = 0; y < dst_height; y++)
            fwrite(frameRGB->data[0] + y * frameRGB->linesize[0], 1, dst_width * 3, f);

        fclose(f);
    }
}