
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (73)
-
User profiles
12 April 2011 — Each user has a profile page for editing their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)
-
Configuring language support
15 November 2010 — Access the configuration and add supported languages
To configure support for new languages, you need to go to the "Administrer" (administration) section of the site.
From there, in the navigation menu, you can reach a "Gestion des langues" (language management) section where support for new languages can be enabled.
Each newly added language can be deactivated as long as no object has been created in that language; otherwise, it is greyed out in the configuration and (...)
-
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used by MediaSPIP was created specifically for it and can easily be adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (5099)
-
FFMPEG video editing application. Need time and date stamp burned into video
11 May 2022, by Jacob — I am developing a video editing application. Its main purpose is to produce a single video file from several video files captured from a camcorder, with the time and date stamp displayed on the final rendered video, much like the output of a security camera. Using FFmpeg, I have figured out how to burn the date and time into the video with an .SRT file as well as with drawtext, like the following:


ffmpeg -y -i video.mp4 -vf "drawtext=fontfile=roboto.ttf:fontsize=12:fontcolor=yellow:text='%{pts\:localtime\:1575526882\:%A, %d, %B %Y %I\\\:%M\\\:%S %p}'" -preset ultrafast -f mp4 output_new.mp4



I would rather use the drawtext method so the user does not have to wait while the .SRT files are created. I am new to FFmpeg and I find its documentation very confusing. I guess I am hoping there is someone out there who has experience with it.


Everything seems to work when I pass in the date-created metadata from the video file; drawtext just does its thing. The problem is that my application also allows the video to be edited. For lack of a better solution, I do this by letting the user select, from the UI, the beginning and ending frames they do not want; the code then simply deletes those frames from the directory where they were split and saved. I then use FFmpeg to iterate through the directory and combine the remaining frames into a video file, roughly as in the sketch below.
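A minimal sketch of that recombination step, assuming the frames were saved as numbered PNG files at 30 fps (the directory, file pattern, frame rate, and codec are placeholder assumptions, not the poster's actual values):

ffmpeg -y -framerate 30 -i frames/frame_%05d.png -c:v libx264 -pix_fmt yuv420p recombined.mp4

Here -framerate declares the rate of the image sequence and -pix_fmt yuv420p keeps the output playable in most players.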


This approach starts the time and date from the date-created metadata; however, cutting frames out of the video makes the date and time stamp inaccurate, because of the missing frames.


Is there any way to tell FFmpeg to burn in the date and time retrieved from each individual frame? I appreciate any advice that you may have.
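One hedged idea, offered as a sketch rather than a confirmed answer: stamp each kept segment separately, giving drawtext a per-segment epoch that the application computes as the recording's creation time plus the offset of that segment's first remaining frame, then concatenate the stamped segments. SEGMENT_EPOCH below is a hypothetical placeholder the application would fill in; the localtime expansion is the same one used in the command above, only the epoch changes per segment:

ffmpeg -y -framerate 30 -i segment01/frame_%05d.png -vf "drawtext=fontfile=roboto.ttf:fontsize=12:fontcolor=yellow:text='%{pts\:localtime\:SEGMENT_EPOCH\:%A, %d, %B %Y %I\\\:%M\\\:%S %p}'" -c:v libx264 segment01_stamped.mp4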


-
ffmpeg merge silent video with another video+audio
4 March 2016, by Mark Dagger — I want to create, in a single command, a video from 3 sources:
- a silent background video;
- a smaller video to be overlaid (same length as 1), KEEPING its AUDIO;
- a PNG logo to be overlaid
I can create the video but cannot get the audio track. I don't understand whether -vf is supposed to work in this case. This is what I've tried to do:
ffmpeg.exe -y -i MASTER_SILENT_VIDEO.mp4 -vf "movie=SMALLER_VIDEO_WITH_AUDIO.flv, scale=320:-1[inner];movie=MY_LOGO.png[inner2]; [in][inner] overlay=800:480,amerge [step1]; [step1][inner2] overlay=30:30 [out]" completed.mp4
The "amerge" filter should do the audio merging job, but of course it doesn't work. I've found similar questions involving -map or filtergraph, but they refer to mixing a video source and an audio source; I tried several filtergraph examples without success. Any idea?
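A hedged sketch of how this is commonly handled (an assumption, not necessarily the poster's intended setup): pass all three sources as inputs, build both overlays in -filter_complex, then map the filtered video and the second input's audio explicitly:

ffmpeg -y -i MASTER_SILENT_VIDEO.mp4 -i SMALLER_VIDEO_WITH_AUDIO.flv -i MY_LOGO.png -filter_complex "[1:v]scale=320:-1[inner];[0:v][inner]overlay=800:480[step1];[step1][2:v]overlay=30:30[out]" -map "[out]" -map 1:a -c:a aac completed.mp4

Since the master video is silent, mapping 1:a takes the audio straight from the smaller video and avoids the need for amerge entirely.
-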
libavutil: add mastering display metadata sidedata
17 January 2016, by Neil Birkbeck
Adding mastering display metadata struct to avutil. The mastering display metadata contains information about the mastering display color volume (SMPTE 2086:2014). This info comes from HEVC in the SEI_TYPE_MASTERING_DISPLAY_INFO and is soon to be included in MKV:
https://mailarchive.ietf.org/arch/search/?email_list=cellar&gbt=1&index=sZyfPTM-QY69P-0omfOIiTN622o
so it is similar to SEI FPA / stereo_mode in MKV and as such this patch follows how AVStereo3D is implemented. I'll add support to HEVC in a follow-up (and MKV when the spec is approved).
Signed-off-by: Neil Birkbeck <neil.birkbeck@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
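As a hedged illustration of where such side data can later be observed (an assumption about consumer-side tooling, not part of this patch), recent ffprobe builds list per-frame side data, and mastering display metadata appears there for HDR sources; hdr_input.mkv is a hypothetical sample file and -read_intervals "%+#1" limits the dump to the first frame:

ffprobe -hide_banner -select_streams v:0 -show_frames -read_intervals "%+#1" hdr_input.mkv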