
Other articles (57)
-
Enhancing it visually
10 April 2011
MediaSPIP is based on a system of themes and templates ("squelettes"). The templates define where information is placed on the page, defining a specific use of the platform, while the themes define the overall graphical look.
Anyone can propose a new graphical theme or template and make it available to the community.
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
-
Customizing categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be thought of as a rubrique (section).
For a document of type category, the fields offered by default are: Texte (text)
This form can be modified under:
Administration > Configuration des masques de formulaire (form mask configuration).
For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
It is also in this configuration section that you can specify the (...)
On other sites (11743)
-
opus: add a native Opus encoder
11 February 2017, by Rostislav Pehlivanov

This marks the first time anyone has written an Opus encoder without
using any libopus code. The aim of the encoder is to prove how far
the format can go by writing the craziest encoder for it.

Right now the encoder is basic: it only supports CBR encoding; however,
internally every single feature the CELT layer has is implemented
(except the pitch pre-filter, which needs to work well with the rest of
whatever gets implemented). Psychoacoustic and rate control systems are
under development.

The encoder takes in frames of 120 samples and, depending on the value of
opus_delay, the plan is to use the extra buffered frames as lookahead.
Right now the encoder will pick the nearest largest legal frame size and
won't use the lookahead, but that'll change once there's a
psychoacoustic system.

Even though it's a pretty basic encoder, it's already outperforming
any other native encoder FFmpeg has by a huge amount.

The PVQ search algorithm is faster and more accurate than libopus's
algorithm, so the encoder's performance is close to that of libopus
at zero complexity (libopus has more SIMD).
The algorithm might be ported to libopus or other codecs using PVQ in
the future.

The encoder still has a few minor bugs, like desyncs at ultra-low
bitrates (below 9 kbps with 20 ms frames).

Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
-
MEAN stack express.js video uploader/converter
5 January 2017, by MattJ
The idea is a social site where people can upload their videos. I am planning to use multer for uploading (limiting by size and by mimetype). Then, for performance and mostly for storage reasons, I want to use fluent-ffmpeg to convert each upload to mp4 format and store it somewhere on the server with a reference in MongoDB. Since I do not want the user to wait while the whole process runs, I plan to split it into two main parts:
- Uploading
- Converting and storing.
The user uploads the file, and then a separate node process (using node-schedule) runs a check every minute or so to convert all files in the directory and afterwards adds the references to MongoDB. So what do you guys think? What is the best approach performance-wise?
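For reference, a minimal sketch of that two-stage layout, assuming the libraries named in the question (Express/multer for the upload, node-schedule for the periodic pass, fluent-ffmpeg for the transcode). The route, directory names, size limit and scheduling interval are illustrative, not part of the original post:

```typescript
import express from "express";
import multer from "multer";
import schedule from "node-schedule";
import ffmpeg from "fluent-ffmpeg";
import fs from "fs";
import path from "path";

// Illustrative directories, not taken from the question.
const pendingDir = "uploads/pending";
const readyDir = "uploads/ready";
fs.mkdirSync(readyDir, { recursive: true });

// Stage 1: accept the upload only, limited by size and mimetype.
const upload = multer({
  dest: pendingDir,
  limits: { fileSize: 200 * 1024 * 1024 }, // example 200 MB cap
  fileFilter: (_req, file, cb) => cb(null, file.mimetype.startsWith("video/")),
});

const app = express();
app.post("/videos", upload.single("video"), (req, res) => {
  // Respond immediately; conversion happens later in the scheduled job.
  res.status(202).json({ id: req.file?.filename });
});
app.listen(3000);

// Stage 2: every minute, convert whatever finished uploading.
// A real version would also mark files that are mid-upload or already
// being converted so they are not picked up twice.
schedule.scheduleJob("*/1 * * * *", () => {
  for (const name of fs.readdirSync(pendingDir)) {
    const src = path.join(pendingDir, name);
    const dst = path.join(readyDir, `${name}.mp4`);
    ffmpeg(src)
      .videoCodec("libx264")
      .audioCodec("aac")
      .on("end", () => {
        fs.unlinkSync(src);
        // Insert the reference (dst, owner, metadata, ...) into MongoDB here.
      })
      .on("error", (err) => console.error("conversion failed:", err))
      .save(dst);
  }
});
```

Keeping the scheduled pass in a separate worker process, as suggested in the question, mainly helps because the CPU-heavy ffmpeg work then cannot block the Express event loop.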
-
How to convert a VP8 track with varying frame resolution to h264
13 September 2016, by Nikita
I have a .webm file with a VP8 track, recorded from a WebRTC stream by an external service (TokBox Archiving). The stream is adaptive, so each frame in the track can have a different resolution. Most players (in WebKit browsers) use the video resolution from the track description (which is always 640x480) and scale frames to this resolution. Firefox and VLC use the real frame resolution, changing the video resolution accordingly.
I want to achieve two goals:
- play this video in Internet Explorer 9+ without additional plugin installation.
- change the frame resolution to one fixed resolution, so the video looks identical in different browsers.
So, my plan is:
- extract the frames from the source webm file to images at their real frame resolution (e.g. PNG or BMP) (how could I do that?)
- find the max width and max height of the images
- add black padding to the images, so smaller frames end up centered in a new frame (of size MAX_WIDTHxMAX_HEIGHT)
- combine the images into an h264 track using ffmpeg
Is this all correct? How can I achieve it? Can this algorithm be optimized in some way?
I tried ffmpeg to extract the images, but it does not parse the real frame resolution; it uses the resolution from the track header.
I think some libwebm functions can help me (parse the frame headers and extract the images). Maybe someone has some code snippets to do this?
Example .webm (download the source, do not play the Google-converted version): https://drive.google.com/file/d/0BwFZRvYNn9CKcndhMzlVa0psX00/view?usp=sharing
Official description of the adaptive stream from TokBox support: https://support.tokbox.com/hc/en-us/community/posts/206241666-Archived-video-resolution-is-supposed-to-be-720x1280-but-reports-as-640x480
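Assuming the frames can first be dumped at their true per-frame resolution (step 1 of the plan, which is exactly the part plain ffmpeg reportedly does not handle for this file), steps 3 and 4 can be expressed as a single pad filter applied while re-encoding the image sequence to H.264. Below is a minimal sketch using fluent-ffmpeg, consistent with the Node tooling discussed elsewhere on this page; the maximum dimensions, file name pattern, frame rate and output name are assumed values, not taken from the question:

```typescript
import ffmpeg from "fluent-ffmpeg";

// Assumed values: MAX_W/MAX_H would come from scanning the extracted frames
// (step 2 of the plan); the frame-%05d.png pattern is illustrative.
const MAX_W = 1280;
const MAX_H = 720;

ffmpeg("frames/frame-%05d.png")
  .inputOptions(["-framerate 30"]) // assumed constant frame rate
  // Steps 3 and 4: centre each image on a MAX_W x MAX_H black canvas, then
  // encode the padded sequence to H.264. Because MAX_W/MAX_H are the maxima
  // over all frames, no image needs to be downscaled first.
  .videoFilters([`pad=${MAX_W}:${MAX_H}:(ow-iw)/2:(oh-ih)/2:black`])
  .videoCodec("libx264")
  .outputOptions(["-pix_fmt yuv420p"]) // broad player compatibility
  .on("error", (err) => console.error(err))
  .save("output.mp4");
```

The same pad expression can be passed straight to the ffmpeg command line with -vf if no Node wrapper is wanted.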