
Other articles (71)

- MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013 and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for an installation in farm mode, you will also need to make further modifications (...)

- User profiles
12 April 2011
Each user has a profile page on which they can edit their personal information. In the default top-of-page menu, a menu entry is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

- MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in standalone form.
For a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for an installation in farm mode, you will also need to make further modifications (...)
On other sites (8067)

- Read a Bytes image from Amazon Kinesis output in python
14 February 2020, by Varun_Rathinam
I used imageio.get_reader(BytesIO(a), 'ffmpeg') to load a bytes image and save it as a normal image, but the error below is thrown when I read the image using imageio.get_reader(BytesIO(a), 'ffmpeg'):
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tango/anaconda3/lib/python3.6/site-packages/imageio/core/functions.py", line 186, in get_reader
return format.get_reader(request)
File "/home/tango/anaconda3/lib/python3.6/site-packages/imageio/core/format.py", line 164, in get_reader
return self.Reader(self, request)
File "/home/tango/anaconda3/lib/python3.6/site-packages/imageio/core/format.py", line 214, in __init__
self._open(**self.request.kwargs.copy())
File "/home/tango/anaconda3/lib/python3.6/site-packages/imageio/plugins/ffmpeg.py", line 323, in _open
self._initialize()
File "/home/tango/anaconda3/lib/python3.6/site-packages/imageio/plugins/ffmpeg.py", line 466, in _initialize
self._meta.update(self._read_gen.__next__())
File "/home/tango/anaconda3/lib/python3.6/site-packages/imageio_ffmpeg/_io.py", line 150, in read_frames
raise IOError(fmt.format(err2))
OSError: Could not load meta information
=== stderr ===
ffmpeg version 4.2 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7.3.0 (crosstool-NG 1.23.0.449-a04d0)
configuration: --prefix=/home/tango/anaconda3 --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1566210161358/_build_env/bin/x86_64-conda_cos6-linux-gnu-cc --disable-doc --disable-openssl --enable-avresample --enable-gnutls --enable-gpl --enable-hardcoded-tables --enable-libfreetype --enable-libopenh264 --enable-libx264 --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
[matroska,webm @ 0x5619b9da3cc0] File ended prematurely
[matroska,webm @ 0x5619b9da3cc0] Could not find codec parameters for stream 0 (Video: h264, none, 1280x720): unspecified pixel format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #0, matroska,webm, from '/tmp/imageio_zm6hhpgr':
Metadata:
title : Kinesis Video SDK
encoder : Kinesis Video SDK 1.0.0
AWS_KINESISVIDEO_FRAGMENT_NUMBER: 91343852333183888465720004820715065721442989478
AWS_KINESISVIDEO_SERVER_TIMESTAMP: 1580791384.096
AWS_KINESISVIDEO_PRODUCER_TIMESTAMP: 1580791377.843
Duration: N/A, bitrate: N/A
Stream #0:0(eng): Video: h264, none, 1280x720, SAR 1:1 DAR 16:9, 1k tbr, 1k tbn, 2k tbc (default)
Metadata:
title : kinesis_video
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Press [q] to stop, [?] for help
Cannot determine format of input stream 0:0 after EOF
Error marking filters as finished
Conversion failed!
The above approach to reading an MKV bytes file was based on this thread.
Is there any other approach to parse and read the MKV bytes file?
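A minimal sketch of one thing to try, not something taken from the original thread: the ffmpeg stderr above explicitly suggests raising 'analyzeduration' and 'probesize', and imageio's ffmpeg plugin (in the 2.x series) lets such options be passed through the input_params keyword. The helper name and the 100M values below are illustrative assumptions, not part of the original post.

from io import BytesIO

import imageio


def frames_from_kinesis_fragment(a: bytes):
    """Yield decoded frames from an in-memory MKV fragment held in 'a'."""
    # Give ffmpeg more data to probe so it can find the H.264 stream
    # parameters; these are the options the error message suggests raising.
    reader = imageio.get_reader(
        BytesIO(a),
        'ffmpeg',
        input_params=['-analyzeduration', '100M', '-probesize', '100M'],
    )
    for frame in reader:
        yield frame

If this still fails with "File ended prematurely", the bytes in a are probably only part of a fragment, and more of the Kinesis GetMedia payload would need to be accumulated before decoding.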
- Direct3d Color space conversion in GPU
12 June 2019, by eruslu
Creating a Direct3D surface in YV12 format and rendering video frames in YUV420 format produces a blurry video; it looks as if there is a haze over the picture. I think this is because the YUV420 color format's data range is 16-235 for the Y plane and 16-240 for the U and V planes, rather than the full 0-255 range.
I changed the color format to BGRA on the CPU using ffmpeg's sws_scale() function, created the Direct3D surface in BGRA format, and the displayed video was then fine. But CPU consumption is very high because of the color-space conversion. Is there any way to do the color conversion on the GPU, or is there another way to get a sharp video display?
This is how I create the YV12 surface:
m_pDirect3DDevice->CreateOffscreenPlainSurface(_srcWidth, _srcHeight, (D3DFORMAT)MAKEFOURCC('Y', 'V', '1', '2'), D3DPOOL_DEFAULT, &m_pDirect3DSurfaceRender, NULL);
Here I copy the YUV planes of the camera's video frame data to the Direct3D surface:

BYTE* pict = (BYTE*)d3d_rect.pBits;
BYTE* Y = pY;
BYTE* V = pV;
BYTE* U = pU;

// Full-resolution Y plane, copied one row at a time (p1 = source Y stride).
for (int y = 0; y < _srcHeight; y++)
{
    memcpy(pict, Y, p1);
    pict += d3d_rect.Pitch;
    Y += p1;
}

// YV12 stores V before U; both chroma planes are half width and half height,
// so the destination pitch is halved (p3 / p2 = source V / U strides).
for (int y = 0; y < _srcHeight >> 1; y++)
{
    memcpy(pict, V, p3);
    pict += d3d_rect.Pitch >> 1;
    V += p3;
}
for (int y = 0; y < _srcHeight >> 1; y++)
{
    memcpy(pict, U, p2);
    pict += d3d_rect.Pitch >> 1;
    U += p2;
}

I appreciate your help, thank you.
- How to merge Video and Subtitle on Google Colab with only specifying file path using mkvmerge?
25 August 2022, by SomeName
On Google Colab I found some code that merges a video and a subtitle with mkvmerge by specifying only the folder path; it also has an option to include attached fonts if desired. But the code doesn't work.


I tried this but it didn't seem to work: when I run the code it does nothing and never starts muxing the video and subtitle. https://pastebin.com/raw/q85DTkta


Could someone help me with this code? What am I missing?
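Since the pastebin script itself isn't reproduced above, the following is only a minimal sketch of the general idea rather than a fix for that exact script: install mkvtoolnix on the Colab VM, then loop over a video folder and call mkvmerge once for each video that has a matching subtitle file. All folder paths and the subtitle extension are placeholder assumptions.

# On Colab, install mkvmerge first with:  !apt-get -y install mkvtoolnix
import subprocess
from pathlib import Path

VIDEO_DIR = Path('/content/videos')   # placeholder: folder holding the .mkv videos
SUB_DIR = Path('/content/subs')       # placeholder: folder holding the subtitles
OUT_DIR = Path('/content/muxed')      # placeholder: output folder
OUT_DIR.mkdir(parents=True, exist_ok=True)

for video in sorted(VIDEO_DIR.glob('*.mkv')):
    sub = SUB_DIR / (video.stem + '.ass')   # assumes each subtitle shares its video's base name
    if not sub.exists():
        continue
    out = OUT_DIR / video.name
    # mkvmerge -o OUTPUT VIDEO SUBTITLE; fonts could be attached with --attach-file
    subprocess.run(['mkvmerge', '-o', str(out), str(video), str(sub)], check=True)
    print('muxed', out)

If a script like this "does nothing", the usual causes are that mkvmerge isn't installed on the VM or that no video/subtitle pair actually matches the glob patterns, so printing what the loop finds is a good first check.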