
Other articles (110)
-
Customisable form
21 June 2013
This page presents the fields available in the form for publishing a media item and lists the additional fields that can be added. Form for creating a Media
For a media-type document, the default fields are: Text; Enable/disable the forum (the comment prompt can be disabled for each article); Licence; Add/remove authors; Tags.
This form can be modified under:
Administration > Form mask configuration. (...)
-
Improving the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the ergonomics of multiple-selection fields. See the two images below for a comparison.
To do so, simply enable the Chosen plugin (General site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
-
What is a form mask?
13 June 2013
A form mask is a customisation of the form used to publish media, sections, news items, editorials and links to sites.
Each object publication form can therefore be customised.
To customise form fields, go to the administration area of your MediaSPIP and select "Form mask configuration".
Then select the form to modify by clicking on its object type. (...)
On other sites (16069)
-
FFmpeg meaningful video thumbnails
18 April 2015, by Maverick
I know this question has already been discussed here. However, it did not solve the problem completely. I am trying to retrieve meaningful thumbnails from a video by executing the following command:
ffmpeg -ss 3 -i input.avi -vf "select=gt(scene\,0.4)" -frames:v 5 -vsync vfr -vf "fps=fps=1/600" out%02d.jpg
As a result, I only get 1 image out of this video, and I am not sure what is going wrong here.
It would also be great if someone could kindly give a quick explanation of this command's parameters and filters. The command originally comes from here.
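A likely explanation, offered here as a hedged editorial note rather than part of the original question: when -vf is passed more than once, ffmpeg applies only the last filtergraph, so the fps=fps=1/600 filter silently replaces the scene-change select filter; at one frame per 600 seconds, a 20-second input yields a single thumbnail (the time=00:10:00.00 in the log below matches that 600-second interval). An untested sketch that keeps the scene detection in a single filtergraph, reusing the 0.4 threshold from the original command, would be:
ffmpeg -ss 3 -i input.avi -vf "select=gt(scene\,0.4)" -frames:v 5 -vsync vfr out%02d.jpg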
FFmpeg console output:
ffmpeg version 2.6 Copyright (c) 2000-2015 the FFmpeg developers
built with Apple LLVM version 6.0 (clang-600.0.57) (based on LLVM 3.5svn)
configuration: --prefix=/usr/local/Cellar/ffmpeg/2.6 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-libfreetype --enable-libtheora --enable-libvorbis --enable-libvpx --enable-librtmp --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libass --enable-ffplay --enable-libspeex --enable-libschroedinger --enable-libfdk-aac --enable-libopus --enable-frei0r --enable-libopenjpeg --disable-decoder=jpeg2000 --extra-cflags='-I/usr/local/Cellar/openjpeg/1.5.1_1/include/openjpeg-1.5 ' --enable-nonfree --enable-vda
libavutil 54. 20.100 / 54. 20.100
libavcodec 56. 26.100 / 56. 26.100
libavformat 56. 25.101 / 56. 25.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 11.102 / 5. 11.102
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, avi, from 'tagesschau.avi':
Metadata:
encoder : Lavf56.25.101
Duration: 00:00:20.03, start: 0.000000, bitrate: 4357 kb/s
Stream #0:0: Video: mpeg4 (Simple Profile) (xvid / 0x64697678), yuv420p, 720x540 [SAR 1:1 DAR 4:3], 4190 kb/s, 29.97 fps, 29.97 tbr, 29.97 tbn, 29.97 tbc
Stream #0:1: Audio: ac3 ([0] [0][0] / 0x2000), 44100 Hz, stereo, fltp, 160 kb/s
[swscaler @ 0x7fcb2a00f600] deprecated pixel format used, make sure you did set range correctly
[mjpeg @ 0x7fcb2a803c00] bitrate tolerance 4000000 too small for bitrate 200000, overriding
Output #0, image2, to 'out%02d.jpg':
Metadata:
encoder : Lavf56.25.101
Stream #0:0: Video: mjpeg, yuvj420p(pc), 720x540 [SAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 0k fps, 0k tbn, 0k tbc
Metadata:
encoder : Lavc56.26.100 mjpeg
Stream mapping:
Stream #0:0 -> #0:0 (mpeg4 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
frame= 1 fps=0.0 q=6.9 Lsize=N/A time=00:10:00.00 bitrate=N/A
video:44kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
-
avcodec/vdpau: Support for VDPAU accelerated HEVC decoding
13 June 2015, by Philip Langdale
avcodec/vdpau: Support for VDPAU accelerated HEVC decoding
This change introduces basic support for HEVC decoding through vdpau.
Right now, there are problems with the nvidia driver/library implementation that mean that frames are incorrectly laid out in memory when they are returned from the decoder, and it is normally impossible to recover the complete decoded frame due to loss of data from alignment inconsistencies. I obviously hope that nvidia will be fixing it in due course - I've verified the problems exist with their example application.
As such, this support is not useful for any real world application, but I believe that it is correct (with the caveat that the mangled frames may hide problems) and will work properly once the nvidia problem is fixed.
Right now it appears that any file encoded by x265 or nvenc is decoded correctly, but that's because these files don't use a bunch of HEVC features.
Quick summary:
Features that seem to work:
1) Short Term References
2) Scaling Lists
3) Tiling
Features with known problems:
1) Long Term References
It's hard to tell what's going on here. After I read the nvidia example app that does not set the IsLongTerm flag on LTRs, and changed my code, a bunch of frames using LTR started to display correctly, but there are still samples with glitches that are related to LTRs.
In terms of real world files, both x265 and nvenc only use short term refs from this list. The divx encoder seems similar.
Signed-off-by: Philip Langdale <philipl@overt.org>
-
How do I extract the color matrix from an MP4 x264 stream in Media Foundation
23 August 2016, by Jules
I am playing a video (an MP4 containing an x264-encoded video stream) with a custom player using Media Foundation.
When I convert the YUV information into RGB I need to account for the color matrix and range used at encode time.
Some of my videos have this information; I can use MediaInfo.exe or FFmpeg to see that it is present.
However, for such videos, when I look at the relevant Media Foundation properties (Extended Color Information), the properties are not present.
So, somehow I need to find a way to access the information.
Media Foundation does provide access to MF_MT_MPEG4_SAMPLE_DESCRIPTION and MF_MT_MPEG_SEQUENCE_HEADER for the video stream but I can’t find descriptions of what these contain.
I noticed that the MF_MT_MPEG_SEQUENCE_HEADER is much longer for the videos with the information present and this (MPEG Headers Quick Reference) seems to suggest headers might contain the information I need.
I’m looking for Color Range (limited/full), Color Primaries, Transfer Characteristics and Matrix Coefficients (BT.709 etc).
I’d greatly appreciate any help finding this information from a Media Foundation video stream.
Thanks
Jules
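For reference, the "Extended Color Information" group mentioned above corresponds to four media-type attributes: MF_MT_VIDEO_NOMINAL_RANGE, MF_MT_VIDEO_PRIMARIES, MF_MT_TRANSFER_FUNCTION and MF_MT_YUV_MATRIX. A minimal sketch of querying them from an IMFMediaType follows; DumpColorInfo is a hypothetical helper name, and when the source does not expose the data each GetUINT32 call simply fails with MF_E_ATTRIBUTENOTFOUND, leaving the Unknown defaults in place.
#include <mfapi.h>
#include <mfidl.h>
#include <cstdio>
// Hedged sketch: query the Extended Color Information attributes from a media type.
// If an attribute is absent (as described in the question), GetUINT32 fails and the
// Unknown default is kept.
void DumpColorInfo(IMFMediaType* type)   // DumpColorInfo is a hypothetical helper
{
    UINT32 range     = MFNominalRange_Unknown;
    UINT32 primaries = MFVideoPrimaries_Unknown;
    UINT32 transfer  = MFVideoTransFunc_Unknown;
    UINT32 matrix    = MFVideoTransferMatrix_Unknown;
    type->GetUINT32(MF_MT_VIDEO_NOMINAL_RANGE, &range);   // limited vs full range
    type->GetUINT32(MF_MT_VIDEO_PRIMARIES, &primaries);   // colour primaries
    type->GetUINT32(MF_MT_TRANSFER_FUNCTION, &transfer);  // transfer characteristics
    type->GetUINT32(MF_MT_YUV_MATRIX, &matrix);           // matrix coefficients (e.g. BT.709)
    printf("range=%u primaries=%u transfer=%u matrix=%u\n", range, primaries, transfer, matrix);
}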
Update - Sequence Header
The sequence header appears to be a subset of the MPEG4 sample description, though I can't find anything that indicates what either piece of data actually contains or doesn't contain.
The sequence header appears to contain data structured as a byte stream, as described in the H.264 standard document, and includes the VUI (Video Usability Information, Annex E of the document), which may then include the colour information I'm interested in.
Given that it’s a byte stream I need to know where it starts and whether there’s some existing code I could use to decode it.
In FFMPEG in libavcodec/h264_ps.c there is a function called ff_h264_decode_seq_parameter_set which ends up calling decode_vui_parameters. It seems possible that seq_parameter_set maps to MF_MT_MPEG_SEQUENCE_HEADER and it may be possible to use that code to decode the data.
If anyone has any direct experience with decoding this data, it would be very useful.
Thanks again
Update - Related posts
I found How to decode sprop-parameter-sets in a H264 SDP? and Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream, which are fairly helpful.
The sequence header would appear to be a sequence or picture parameter set (SPS/PPS), and the parameters I want are in the VUI extension subset.
Plus, the post H.264 stream structure gives a high-level view of how the stream data is structured, and the MF_MT_MPEG_SEQUENCE_HEADER appears to start with a NAL start code 0x00 0x00 0x01, so I'm guessing it is a NAL containing the PPS.
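As a hedged sketch only (ListNalUnits is a hypothetical helper, and this assumes the blob really does use Annex B 0x00 0x00 0x01 start codes as observed above): the NAL unit type is the low 5 bits of the byte following the start code, and the VUI actually lives in the SPS (NAL type 7) rather than the PPS (NAL type 8), so locating the SPS would be the first step before porting something like FFmpeg's ff_h264_decode_seq_parameter_set / decode_vui_parameters logic mentioned above.
#include <mfapi.h>
#include <mfidl.h>
#include <vector>
#include <cstdio>
// Hedged sketch: read the raw MF_MT_MPEG_SEQUENCE_HEADER blob and report the NAL
// unit types it contains. Type 7 is the SPS (which carries the VUI colour info),
// type 8 is the PPS. Assumes Annex B style 00 00 01 start codes.
void ListNalUnits(IMFMediaType* type)   // ListNalUnits is a hypothetical helper
{
    UINT32 size = 0;
    if (FAILED(type->GetBlobSize(MF_MT_MPEG_SEQUENCE_HEADER, &size)) || size < 4)
        return;
    std::vector<UINT8> blob(size);
    UINT32 copied = 0;
    if (FAILED(type->GetBlob(MF_MT_MPEG_SEQUENCE_HEADER, blob.data(), size, &copied)))
        return;
    for (UINT32 i = 0; i + 3 < copied; ++i) {
        if (blob[i] == 0x00 && blob[i + 1] == 0x00 && blob[i + 2] == 0x01) {
            UINT8 nalType = blob[i + 3] & 0x1F;   // low 5 bits of the NAL header byte
            printf("NAL unit type %u at offset %u\n", nalType, i + 3);
        }
    }
}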