
Other articles (7)
-
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into other languages, which helps it spread to new linguistic communities.
To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
-
Other interesting software
13 April 2011
We don't claim to be the only ones doing what we do, and we certainly don't claim to be the best; we simply try to do it well, and to keep getting better.
The following list covers software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to resemble.
We don't know these projects and haven't tried them, but you can take a peek.
Videopress
Website: http://videopress.com/
License: GNU/GPL v2
Source code: (...)
-
Selection of projects using MediaSPIP
2 May 2011
The examples below are representative of how MediaSPIP is used in specific projects.
MediaSPIP farm @ Infini
The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and, at the national level, among the half-dozen associations of its kind. Its members (...)
On other sites (4452)
-
FFMpeg - Ideal hardware build out [on hold]
9 February 2015, by Shawn
FFmpeg noob question here: I'm building out a system whose primary job is to take different video files (MP4, FLV, AVI, etc.) and spit them out as thumbnail JPEG images (1 frame/sec). I'm using the FFmpeg command line below to do this:
ffmpeg -i sourcevideo.mp4 -y -vf scale=100:-1 -r 1 frame_%6d.jpg
This system will be handling thousands of video files per day, so my question is: what would be the dream hardware build-out to support this? As many processors as possible? SSDs? Fast drive controllers? Graphics cards for hardware acceleration?
Any thoughts on this would be appreciated.
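A hedged aside, not from the original question: a single ffmpeg instance extracting 1 fps JPEGs rarely saturates a many-core machine, so throughput usually comes from running several instances side by side, one per file. A minimal sketch assuming GNU xargs and a hypothetical videos.txt listing one input path per line:

# Run up to 8 ffmpeg processes in parallel, one per source video.
# Each writes its thumbnails into a per-file output directory.
xargs -a videos.txt -P 8 -I{} sh -c '
  out="thumbs/$(basename "{}")"
  mkdir -p "$out"
  ffmpeg -i "{}" -y -vf scale=100:-1 -r 1 "$out/frame_%6d.jpg"'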
-
Reduce latency in ffmpeg snapshot
3 December 2015, by Acorian0
I have a latency problem with FFmpeg.
I have a streaming server running and I occasionally need to take multiple snapshots over a period of time (in the example below, 5s), and then examine the snapshots taken. Time is critical.
With this command I take 25 frames per second over 5 seconds, meaning I will have 125 snapshots in my folder.
ffmpeg.exe -ss 0.05 -re -i udp://239.255.0.xx:xxxx -ss 0 -vf fps=25 -to 5 -y \Test\%5d.jpg
The -vf fps=25 option forces 25 frames per second even if the server can't provide them. The problem is that -ss 0.05 is not doing its job: it is supposed to delay the initial snapshot by 50 milliseconds, but FFmpeg takes around 200 ms to "start" the service by itself :/. This means the first snapshot is taken roughly 200 ms after I call the service. (200 ms is far too much latency for my purpose... I can survive with 100 ms.)
How can I force FFmpeg to start taking snapshots earlier? Or is there a way to get snapshots from, say, a buffer (e.g. start taking snapshots 150 ms in the past)?
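One avenue worth noting, as a hedged aside rather than part of the original question: much of FFmpeg's startup time is spent probing the input stream. The -probesize and -analyzeduration input options shrink that probing window, and -fflags nobuffer reduces input buffering; whether this reaches the 100 ms target here is untested. A sketch applied to the same command:

ffmpeg.exe -probesize 32 -analyzeduration 0 -fflags nobuffer -ss 0.05 -re -i udp://239.255.0.xx:xxxx -ss 0 -vf fps=25 -to 5 -y \Test\%5d.jpg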
PS: Changing -ss from 0.05 to 0.00 does nothing; I only see -ss having an effect once the value is bigger than 0.2/0.25. Also, using a newer version of FFmpeg is impossible for now.
Sample output:
ffmpeg version 2.4.5 Copyright (c) 2000-2014 the FFmpeg developers
  built on Dec 30 2014 14:53:50 with gcc 4.9.2 (GCC)
  configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-lzma --enable-decklink --enable-zlib
  libavutil      54.  7.100 / 54.  7.100
  libavcodec     56.  1.100 / 56.  1.100
  libavformat    56.  4.101 / 56.  4.101
  libavdevice    56.  0.100 / 56.  0.100
  libavfilter     5.  1.100 /  5.  1.100
  libswscale      3.  0.100 /  3.  0.100
  libswresample   1.  1.100 /  1.  1.100
  libpostproc    53.  0.100 / 53.  0.100
[mpeg2video @ 01d77120] Invalid frame dimensions 0x0.
Input #0, mpegts, from 'udp://239.255.0.14:5014':
  Duration: N/A, start: 159.051978, bitrate: 105241 kb/s
  Program 1
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0[0x100]: Video: mpeg1video ([2][0][0][0] / 0x0002), yuv420p(tv), 1920x1080 [SAR 1:1 DAR 16:9], 104857 kb/s, 50 tbr, 90k tbn, 50 tbc
    Stream #0:1[0x101]: Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, s16p, 384 kb/s
[swscaler @ 01d70060] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to 'C:\Test\%5d.jpg':
  Metadata:
    encoder         : Lavf56.4.101
    Stream #0:0: Video: mjpeg, yuvj420p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
    Metadata:
      encoder         : Lavc56.1.100 mjpeg
Stream mapping:
  Stream #0:0 -> #0:0 (mpeg1video (native) -> mjpeg (native))
Press [q] to stop, [?] for help
frame=  125 fps= 25 q=24.8 Lsize=N/A time=00:00:05.00 bitrate=N/A dup=1 drop=0
video:9710kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
-
YUV4:2:0 conversion to RGB outputs overly green image
27 February 2023, by luckybroma
I'm decoding video and getting YUV 4:2:0 frames. In order to render them using D3D11, they need to be converted to RGB (or at least I assume the render target view cannot itself be YUV).

The YUV frames are all planar, meaning separate U and V planes rather than packed. I'm creating 3 textures and ShaderResourceViews of format DXGI_FORMAT_R8_UNORM, copying each plane of the frame into its own texture, and relying on the sampler to account for the size difference between the Y and UV planes. Black and white alone looks great; as soon as I add color, though, I get an overly green picture:

I'm at a loss as to what I could be doing wrong. I've tried swapping the U and V planes around, and I've also tried tweaking the conversion values. I'm following Microsoft's guide on picture conversion.

Here is my shader:


min16float4 main(PixelShaderInput input) : SV_TARGET
{
 float y = YChannel.Sample(defaultSampler, input.texCoord).r;
 float u = UChannel.Sample(defaultSampler, input.texCoord).r - 0.5;
 float v = VChannel.Sample(defaultSampler, input.texCoord).r - 0.5;

 float r = y + 1.13983 * v;
 float g = y - 0.39465 * u - 0.58060 * v;
 float b = y + 2.03211 * u;

 return min16float4(r, g, b, 1.f);
}
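A hedged observation, not from the original post: the constants above are the BT.601 matrix for full-range samples, while 1080p video is commonly BT.709 and limited range (luma in 16-235), either of which skews colors on its own. A sketch of the same shader body under the BT.709 limited-range assumption:

// Hedged variant: BT.709 primaries, limited-range (16-235) luma assumed.
float y = 1.164 * (YChannel.Sample(defaultSampler, input.texCoord).r - 16.0 / 255.0);
float u = UChannel.Sample(defaultSampler, input.texCoord).r - 0.5;
float v = VChannel.Sample(defaultSampler, input.texCoord).r - 0.5;

float r = y + 1.793 * v;
float g = y - 0.213 * u - 0.533 * v;
float b = y + 2.112 * u;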
Creating my ShaderResourceViews:


D3D11_TEXTURE2D_DESC texDesc;
 ZeroMemory(&texDesc, sizeof(texDesc));
 texDesc.Width = 1670;
 texDesc.Height = 626;
 texDesc.MipLevels = 1;
 texDesc.ArraySize = 1;
 texDesc.Format = DXGI_FORMAT_R8_UNORM;
 texDesc.SampleDesc.Count = 1;
 texDesc.SampleDesc.Quality = 0;
 texDesc.Usage = D3D11_USAGE_DYNAMIC;
 texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
 texDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;


 dev->CreateTexture2D(&texDesc, NULL, &pYPictureTexture);
 dev->CreateTexture2D(&texDesc, NULL, &pUPictureTexture);
 dev->CreateTexture2D(&texDesc, NULL, &pVPictureTexture);
 
 D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
 shaderResourceViewDesc.Format = DXGI_FORMAT_R8_UNORM;
 shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
 shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
 shaderResourceViewDesc.Texture2D.MipLevels = 1;

 dev->CreateShaderResourceView(pYPictureTexture, &shaderResourceViewDesc, &pYPictureTextureResourceView);

 dev->CreateShaderResourceView(pUPictureTexture, &shaderResourceViewDesc, &pUPictureTextureResourceView);
 
 dev->CreateShaderResourceView(pVPictureTexture, &shaderResourceViewDesc, &pVPictureTextureResourceView);
And here is how I'm copying the decoded FFmpeg AVFrames:


int height = 626;
 int width = 1670; 

 D3D11_MAPPED_SUBRESOURCE msY;
 D3D11_MAPPED_SUBRESOURCE msU;
 D3D11_MAPPED_SUBRESOURCE msV;
 devcon->Map(pYPictureTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &msY);

 memcpy(msY.pData, frame->data[0], height * width);
 devcon->Unmap(pYPictureTexture, 0);

 devcon->Map(pUPictureTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &msU);
 memcpy(msU.pData, frame->data[1], (height*width) / 4);
 devcon->Unmap(pUPictureTexture, 0);


 devcon->Map(pVPictureTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &msV);
 memcpy(msV.pData, frame->data[2], (height*width) / 4);
 devcon->Unmap(pVPictureTexture, 0);
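A hedged aside, not part of the original question: in YUV 4:2:0 the U and V planes are only width/2 x height/2, yet the textures above are all created with the full-size texDesc, and the memcpy calls ignore both the mapped texture's RowPitch and the AVFrame's linesize. Any of those mismatches can garble chroma. A sketch of a pitch-aware per-plane copy, with CopyPlane as a hypothetical helper; it assumes the U/V textures were created at width/2 x height/2:

#include <cstdint>
#include <cstring>
#include <d3d11.h>

// Hypothetical helper: copy one plane row by row, honoring both pitches.
static void CopyPlane(ID3D11DeviceContext* devcon, ID3D11Texture2D* tex,
                      const uint8_t* src, int srcLinesize, int w, int h)
{
    D3D11_MAPPED_SUBRESOURCE ms;
    devcon->Map(tex, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);
    for (int row = 0; row < h; ++row)
        memcpy(static_cast<uint8_t*>(ms.pData) + row * ms.RowPitch,
               src + row * srcLinesize, w);
    devcon->Unmap(tex, 0);
}

// Usage sketch: luma at full size, chroma at half size in each dimension.
// CopyPlane(devcon, pYPictureTexture, frame->data[0], frame->linesize[0], width, height);
// CopyPlane(devcon, pUPictureTexture, frame->data[1], frame->linesize[1], width / 2, height / 2);
// CopyPlane(devcon, pVPictureTexture, frame->data[2], frame->linesize[2], width / 2, height / 2);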
PS: Happy to provide any additional code on request! I just wanted to be as concise as possible.