
Other articles (50)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Farm-mode installation
4 February 2011
Farm mode makes it possible to host several MediaSPIP-type sites while installing their functional core only once.
This is the method we use on this very platform.
Using farm mode requires knowing a bit about SPIP's machinery, unlike the standalone version, which requires no real specific knowledge since SPIP's usual private area is no longer used.
To begin, you must have installed the same files as the installation (...)
-
Automatic backup of SPIP channels
1 April 2010
When setting up an open platform, it is important for hosts to have fairly regular backups available to guard against any problem that might arise.
To accomplish this task we rely on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which produces a zip archive of the site's important data (the documents, the elements (...)
On other sites (9191)
-
Intel IPP RGBToYUV420 function is getting IppStsSizeErr result code
6 February 2018, by yesilcimen.ahmet
I am using IPP 2017.0.3 (r55431) and Delphi 10.2, and I am trying to convert RGB to YUV420P, but I am getting an IppStsSizeErr result code. I have m_dst_picture, m_src_picture: AVPicture structures created by FFmpeg.
{ Allocate the encoded raw picture. }
ret := avpicture_alloc(@m_dst_picture, AV_PIX_FMT_YUV420P, c^.width, c^.height);
if (ret < 0) then
  Exit(False);

{ Allocate a BGR frame that we can pass to the YUV frame. }
ret := avpicture_alloc(@m_src_picture, AV_PIX_FMT_BGR24, c^.width, c^.height);
if (ret < 0) then
  Exit(False);

// This works fine.
{ Convert the BGR frame (m_src_picture) to the YUV frame (m_dst_picture). }
sws_scale(sws_ctx, @m_src_picture.data[0], @m_src_picture.linesize, 0, c^.height, @m_dst_picture.data[0], @m_dst_picture.linesize);

I want to convert the RGB buffer directly to YUV420P. The original code first loads RGB into the AVPicture and then converts it to YUV420P with sws_scale, which causes slowness. Here I copy the BGR buffer into FFmpeg's m_src_picture, but this costs performance too, so I want to convert directly to YUV420P using Intel IPP.

procedure WriteFrameBGR24(frame: PByte);
var
y: Integer;
begin
for y := 0 to m_c^.height - 1 do
Move(PByte(frame - (y * dstStep))^, PByte(m_src_picture.data[0] + (y * m_src_picture.linesize[0]))^, dstStep);
end;

In the code below I am trying to convert using Intel IPP.

roiSize is 1920 × 1080. The values FFmpeg created for YUV420P in m_dst_picture.linesize are [0]=1920, [1]=960 and [2]=960 respectively. Do I need to convert the linesize values to something else?

The srcStep parameter is passed with a minus sign because the source is a bottom-up bitmap: the frame pointer holds the Bmp.ScanLine[0] address, which is the highest row address.

{ Converting RGB to YUV420P. }
srcStep := (((width * (3 * 8)) + 31) and not 31) div 8; // row stride for a 24-bit bitmap

{ Swap BGR channels to RGB. }
// This works fine.
st := ippiSwapChannels_8u_C3IR(frame, -srcStep, roiSize, @BGRToRGBArray[0]);

{ Convert RGB to YUV420P. }
// Returns IppStsSizeErr.
st := ippiRGBToYUV420_8u_C3P3R(frame, -srcStep, @m_dst_picture.data[0], @m_dst_picture.linesize[0], roiSize);

How do I solve this problem?
Thank you.
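Not an answer to the IppStsSizeErr itself, but the three stride computations in the question (the 24-bit DIB row stride, the bottom-up row copy, and FFmpeg's YUV420P linesize values) can be sanity-checked in plain Python. The function names here are illustrative, not part of IPP or FFmpeg:

```python
def dib_row_stride(width, bits_per_pixel=24):
    # Windows DIB rows are padded to a 4-byte (32-bit) boundary; this is
    # exactly what (((width * 24) + 31) and not 31) div 8 computes in Delphi.
    return ((width * bits_per_pixel + 31) // 32) * 4

def yuv420p_linesizes(width):
    # Y plane at full width; U and V are subsampled 2x2, so half width each.
    half = (width + 1) // 2
    return [width, half, half]

def flip_rows(src, height, stride):
    # WriteFrameBGR24 copies the bottom-up bitmap starting from its last
    # row, i.e. it vertically flips the image while copying.
    rows = [src[y * stride:(y + 1) * stride] for y in range(height)]
    return b"".join(reversed(rows))

print(dib_row_stride(1920))     # 5760 = 1920 * 3, already 4-byte aligned
print(yuv420p_linesizes(1920))  # [1920, 960, 960], matching FFmpeg's values
```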
-
FFMPEG Video Capture fails
20 August 2018, by Ankur Tripathi
I am trying to capture video using FFmpeg and a DirectShow filter (https://github.com/rdp/screen-capture-recorder-to-video-windows-free), but it works sometimes and fails at other times.
Start FFmpeg
arguments = -y -rtbufsize 100M -f dshow -framerate 60 -i video="screen-capture-recorder":audio="Microphone Array (Realtek Audio)" -c:v libx264 -r 60 -preset ultrafast -tune zerolatency -crf 28 -pix_fmt yuv420p -s 940x576 -c:a aac -strict -2 -ac 2 -b:a 128k "C:\Users\ankur\AppData\Roaming\Take by MangoApps\Recent\Temp\1.mp4"

Stop FFmpeg
Call WriteInput("q");
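Stopping ffmpeg this way assumes stdin is actually redirected and that the "q" reaches the process before it is torn down. As a point of comparison, here is a minimal Python sketch of the same stop-by-stdin idea (the stop_ffmpeg helper is illustrative, not from the original code): send "q", flush so the character actually arrives, wait so ffmpeg can finalize the MP4, and only kill as a last resort.

```python
import subprocess

def stop_ffmpeg(proc, timeout=10):
    # Ask ffmpeg to finish by sending 'q' on stdin; killing it outright can
    # leave the MP4 without its trailing metadata, and thus unplayable.
    try:
        proc.stdin.write("q\n")
        proc.stdin.flush()          # without a flush, 'q' may never arrive
        proc.wait(timeout=timeout)  # give ffmpeg time to finalize the file
    except Exception:
        proc.kill()                 # last resort; output may be corrupt
        proc.wait()
    return proc.returncode
```

The same caveats apply to the C# version below: StandardInput must be redirected, the write must reach the process, and the process should be given time to exit before any forced termination.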
public void WriteInput(string input)
{
    if (processRunning && process != null && process.StartInfo != null
        && process.StartInfo.RedirectStandardInput)
    {
        process.StandardInput.WriteLine(input);
    }
}

Getting Error
8/20/2018 6:56:54 PM - ****VIDEO ERROR****
ffmpeg version N-84348-gdb7a05d Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 6.3.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
libavutil 55. 50.100 / 55. 50.100
libavcodec 57. 83.101 / 57. 83.101
libavformat 57. 66.105 / 57. 66.105
libavdevice 57. 3.100 / 57. 3.100
libavfilter 6. 78.100 / 6. 78.100
libswscale 4. 3.101 / 4. 3.101
libswresample 2. 4.100 / 2. 4.100
libpostproc 54. 2.100 / 54. 2.100
Guessed Channel Layout for Input Stream #0.1 : stereo
Input #0, dshow, from 'video=screen-capture-recorder:audio=Microphone Array (Realtek Audio)':
Duration: N/A, start: 29271.852000, bitrate: N/A
Stream #0:0: Video: rawvideo, bgr0, 940x576, 60 fps, 60 tbr, 10000k tbn, 10000k tbc
Stream #0:1: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Stream #0:1 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0000000002553d40] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
[libx264 @ 0000000002553d40] profile Constrained Baseline, level 3.2
[libx264 @ 0000000002553d40] 264 - core 148 r2762 90a61ec - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=4 lookahead_threads=4 sliced_threads=1 slices=4 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=28.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
Output #0, mp4, to 'C:\Users\ankur\AppData\Roaming\Take\Recent\Temp\0.mp4':
Metadata:
encoder : Lavf57.66.105
Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p(progressive), 940x576, q=-1--1, 60 fps, 15360 tbn, 60 tbc
Metadata:
encoder : Lavc57.83.101 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Stream #0:1: Audio: aac (LC) ([64][0][0][0] / 0x0040), 44100 Hz, stereo, fltp, 128 kb/s
Metadata:
encoder : Lavc57.83.101 aac
frame= 31 fps=0.0 q=27.0 size= 125kB time=00:00:00.50 bitrate=2043.9kbits/s speed=0.991x
frame= 62 fps= 61 q=20.0 size= 189kB time=00:00:01.01 bitrate=1522.0kbits/s speed=1.01x
frame= 93 fps= 61 q=22.0 size= 220kB time=00:00:01.53 bitrate=1177.1kbits/s speed=1.01x
frame= 123 fps= 61 q=20.0 size= 253kB time=00:00:02.03 bitrate=1018.2kbits/s speed=1.01x
frame= 153 fps= 61 q=20.0 size= 263kB time=00:00:02.53 bitrate= 851.5kbits/s speed= 1x
frame= 184 fps= 61 q=23.0 size= 273kB time=00:00:03.05 bitrate= 734.0kbits/s speed=1.01x
frame= 214 fps= 61 q=20.0 size= 323kB time=00:00:03.55 bitrate= 745.4kbits/s speed= 1x
frame= 244 fps= 60 q=20.0 size= 341kB time=00:00:04.05 bitrate= 689.7kbits/s speed= 1x
frame= 274 fps= 60 q=22.0 size= 465kB time=00:00:04.55 bitrate= 837.4kbits/s speed= 1x
-
Writing A Dreamcast Media Player
6 January 2017, by Multimedia Mike — Sega Dreamcast
I know I'm not the only person to have the idea to port a media player to the Sega Dreamcast video game console. But I did make significant progress on an implementation. I'm a little surprised to realize that I haven't written anything about it on this blog yet, given my propensity for publishing my programming misadventures.
This old effort had been on my mind lately due to its architectural similarities to something else I was recently brainstorming.
Early Days
Porting a multimedia player was one of the earliest endeavors that I embarked upon in the multimedia domain. It's a bit fuzzy for me now, but I'm pretty sure that my first exposure to the MPlayer project in 2001 arose from looking for a multimedia player to port. I fed it through the Dreamcast development toolchain but encountered roadblocks pretty quickly. However, this got me looking at the MPlayer source code and made me wonder how I could contribute, which is how I finally broke into practical open source multimedia hacking after studying the concepts and technology for more than a year at that point.

Eventually, I jumped over to the xine project. After hacking on that for a while, I remembered my DC media player efforts and endeavored to compile xine for the console. The first attempt was to simply compile the codebase using the Dreamcast hobbyist community's toolchain. This is when I came to fear the multithreaded snake pit in xine's core. Again, my memories are hazy on the specifics, but I remember the engine having a bunch of threading hacks with comments along the lines of "this code deadlocks sometimes, so on shutdown, monitor this lock and deliberately break it if it has been more than 3 seconds".
Something Workable
Eventually, I settled on a combination of FFmpeg's libavcodec library for audio and video decoders, xine's demuxer library, and xine's input API, combined with my own engine code to tie it all together, along with video and output drivers provided by the KallistiOS hobbyist OS for Dreamcast. Here is a simple diagram of the data movement through this player:
Details and Challenges
This is a rare occasion when I actually got to write the core of a media player engine. I made some mistakes.

xine's internal clock ran at 90000 Hz; at least, its internal timestamps were all in reference to a 90 kHz clock. I got the brilliant idea to trigger timer interrupts at 6000 Hz to drive the engine. Given the timer facilities on the Dreamcast, 6 kHz was the greatest common divisor with 90 kHz that I found; in other words, if I could have found an even higher common-divisor frequency, I would have used that instead.
So the idea was that, for a 30 fps video, the engine would know to render a frame on every 200th timer interrupt. I eventually realized that servicing 6000 timer interrupts every second would incur a ridiculous amount of overhead. After that, my engine’s philosophy was to set a timer to fire for the next frame while beginning to process the current frame. I.e., when rendering a frame, set a timer to call back in 1/30th of a second. That worked a lot better.
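The arithmetic behind both designs is easy to check; a quick sketch in plain Python, with the constants taken from the post:

```python
from math import gcd

ENGINE_CLOCK_HZ = 90000  # xine timestamps count in 90 kHz units
TIMER_HZ = 6000          # chosen interrupt rate; divides 90000 evenly
assert gcd(ENGINE_CLOCK_HZ, TIMER_HZ) == TIMER_HZ

def interrupts_per_frame(fps, timer_hz=TIMER_HZ):
    # With a 6 kHz tick, a 30 fps video renders on every 200th interrupt.
    return timer_hz // fps

def frame_delay_s(fps):
    # The later design: one timer callback per frame, 1/fps seconds ahead.
    return 1 / fps

print(interrupts_per_frame(30))  # 200
```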
As I was still keen on 8-bit paletted image codecs at the time (especially since they were simple and small for bootstrapping this project), I got to output the paletted images directly thanks to the Dreamcast's paletted textures, which was exciting. The engine didn't need to convert the paletted images to a different colorspace before rendering. However, I seem to recall that the Dreamcast's PowerVR graphics hardware required that 8-bit textures be twiddled/swizzled, so the 8-bit image still had to be manipulated before rendering.
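The "twiddled" layout mentioned above is a Morton (Z-order) arrangement of texels, which keeps spatially adjacent texels close in memory. A sketch of the index computation for a square power-of-two texture (illustrative only; the exact bit ordering on real PowerVR hardware may differ, and this is not the player's actual code):

```python
def morton_index(x, y, bits=10):
    # Interleave the bits of x and y to produce the Z-order (twiddled)
    # offset of texel (x, y); bits=10 covers textures up to 1024x1024.
    idx = 0
    for i in range(bits):
        idx |= ((x >> i) & 1) << (2 * i)
        idx |= ((y >> i) & 1) << (2 * i + 1)
    return idx

print(morton_index(3, 3))  # 15: the first 2x2 block fills offsets 0..3, etc.
```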
I made good progress on this player concept. However, a huge blocker for me was that I didn’t know how to make a proper user interface for the media player. Obviously, programming the Dreamcast occurred at a very low level (at least with the approach I was using), so there were no UI widgets easily available.
This was circa 2003. I assumed there must have been some embedded UI widget libraries with amenable open source licenses that I could leverage. I remember searching and checking out a library named libSTK. I think STK stood for “set-top toolkit” and was positioned specifically for doing things like media player UIs on low-spec embedded computing devices. The domain hosting the project is no longer useful but this appears to be a backup of the core code.
It sounded promising, but the libSTK developers had a different definition of "low-spec embedded" device than I did. I seem to recall that they were targeting something along the lines of a Pentium III clocked at 800 MHz with 128 MB of RAM. The Dreamcast, by contrast, has a 200 MHz SH-4 CPU and 16 MB of RAM. LibSTK was also authored in C++ and leveraged the Boost library (my first exposure to that code), all of which made binaries quite large, while I was trying to keep the player in lean C.
Regrettably, I never made any serious progress on a proper user interface. I think that’s when the player effort ran out of steam.
The Code
So, that's another project that I never got around to finishing or publishing. I was able to find the source code, so I decided to toss it up on GitHub, along with two old architecture outlines that I was able to dig up. It looks like I was starting small, just porting over a few of the demuxers and decoders that I knew well.

I'm wondering if it would still be as straightforward to separate out such components now, more than 13 years later?
The post Writing A Dreamcast Media Player first appeared on Breaking Eggs And Making Omelettes.