
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (79)
-
Improvements to the base version
13 September 2013. Nicer multiple selection
The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images for a comparison.
To use it, enable the Chosen plugin (General site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
-
Updating from version 0.1 to 0.2
24 June 2013, by
An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What is new?
Regarding software dependencies: the latest versions of FFMpeg (>= v1.2.1) are used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for an installation in farm mode, further modifications will also be needed (...)
On other sites (15447)
-
While using skvideo.io.FFmpegReader and skvideo.io.FFmpegWriter for video throughput, the input and output video lengths differ
28 June 2024, by Kaesebrotus Anonymous
I have an h264-encoded mp4 video about 27.5 minutes long, and I am trying to create a copy of it that excludes the first 5 frames. I am using scikit-video and ffmpeg in Python for this purpose. I do not have a GPU, so I am using the libx264 codec for the output video.


It generally works, and the output video does exclude the first 5 frames. However, the output video ends up about 22 minutes long. When visually checking the videos, the shorter one does seem slightly faster, and I can identify the same frames at different timestamps. In Windows Explorer (Properties > Details), both videos' frame rates show as 20.00 fps.


So my goal is to have both videos the same length, apart from the loss of the first 5 frames (a 0.25-second difference at 20 fps), using the same or almost the same codec without losing quality.


Can anyone explain why this apparent loss of frames is happening?


Thank you for your interest in helping me, please find the details below.


Here is a minimal example of what I have done.


import skvideo.io

framerate = str(20)
reader = skvideo.io.FFmpegReader('inputvideo.mp4', inputdict={'-r': framerate})
writer = skvideo.io.FFmpegWriter('outputvideo.mp4', outputdict={'-vcodec': 'libx264', '-r': framerate})

# Skip the first 5 frames, then pass every remaining frame through
for idx, frame in enumerate(reader.nextFrame()):
    if idx < 5:
        continue
    writer.writeFrame(frame)

reader.close()
writer.close()



When I read the output video again using FFmpegReader and check .probeInfo, I can see that the output video has fewer frames in total. I have also replicated the same problem with shorter videos (not excluding the first 5 frames, just passing the video through), e.g. a 10-second input turns into an 8-second output with fewer frames. I have tried further outputdict parameters, e.g. -pix_fmt and -b. I also tried setting -time_base in the outputdict to the same value as in the input, but that did not seem to have the desired effect, and I am not sure that parameter name is even right.
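One plausible explanation, offered as a sketch rather than a confirmed diagnosis: FFmpegWriter pipes raw frames into ffmpeg, and when no '-r' is declared for that incoming raw stream, ffmpeg assumes its default of 25 fps. 200 frames assumed to play at 25 fps span 8 s, and re-timing that stream to a 20 fps output keeps only ~160 frames, which roughly matches the observed 8.1 s / 162 frames (and 22 min from 27.5 min, since 27.5 × 20/25 = 22). The arithmetic, and the corresponding writer change, look like this:

```python
# Sketch of the suspected re-timing: ffmpeg assumes 25 fps for piped
# raw video when the writer does not declare the incoming frame rate.

def assumed_duration(n_frames, assumed_input_fps=25.0):
    """Duration ffmpeg assigns to n_frames of piped raw frames."""
    return n_frames / assumed_input_fps

def frames_after_retiming(n_frames, output_fps=20.0, assumed_input_fps=25.0):
    """Frame count left after re-timing that stream to output_fps."""
    return round(assumed_duration(n_frames, assumed_input_fps) * output_fps)

print(assumed_duration(200))        # 8.0 s instead of the true 10 s
print(frames_after_retiming(200))   # 160 frames instead of 200

# The candidate fix (untested here, same filenames as above): declare the
# rate of the *incoming* raw frames on the writer, not only the output:
#
# writer = skvideo.io.FFmpegWriter(
#     'outputvideo.mp4',
#     inputdict={'-r': framerate},    # rate of the piped raw frames
#     outputdict={'-vcodec': 'libx264', '-r': framerate})
```

FFmpegWriter does accept an inputdict alongside outputdict in scikit-video's API; whether this fully resolves the mismatch would need testing against the actual file.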


For additional info, here is the .probeInfo of the input video, of which I used 10 seconds, and the .probeInfo of the 8-second output it produced.


**input video** .probeInfo:

{'video': OrderedDict([('@index', '0'),
 ('@codec_name', 'h264'),
 ('@codec_long_name',
 'H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10'),
 ('@profile', 'High 4:4:4 Predictive'),
 ('@codec_type', 'video'),
 ('@codec_tag_string', 'avc1'),
 ('@codec_tag', '0x31637661'),
 ('@width', '4096'),
 ('@height', '3000'),
 ('@coded_width', '4096'),
 ('@coded_height', '3000'),
 ('@closed_captions', '0'),
 ('@film_grain', '0'),
 ('@has_b_frames', '0'),
 ('@sample_aspect_ratio', '1:1'),
 ('@display_aspect_ratio', '512:375'),
 ('@pix_fmt', 'yuv420p'),
 ('@level', '60'),
 ('@chroma_location', 'left'),
 ('@field_order', 'progressive'),
 ('@refs', '1'),
 ('@is_avc', 'true'),
 ('@nal_length_size', '4'),
 ('@id', '0x1'),
 ('@r_frame_rate', '20/1'),
 ('@avg_frame_rate', '20/1'),
 ('@time_base', '1/1200000'),
 ('@start_pts', '0'),
 ('@start_time', '0.000000'),
 ('@duration_ts', '1984740000'),
 ('@duration', '1653.950000'),
 ('@bit_rate', '3788971'),
 ('@bits_per_raw_sample', '8'),
 ('@nb_frames', '33079'),
 ('@extradata_size', '43'),
 ('disposition',
 OrderedDict([('@default', '1'),
 ('@dub', '0'),
 ('@original', '0'),
 ('@comment', '0'),
 ('@lyrics', '0'),
 ('@karaoke', '0'),
 ('@forced', '0'),
 ('@hearing_impaired', '0'),
 ('@visual_impaired', '0'),
 ('@clean_effects', '0'),
 ('@attached_pic', '0'),
 ('@timed_thumbnails', '0'),
 ('@non_diegetic', '0'),
 ('@captions', '0'),
 ('@descriptions', '0'),
 ('@metadata', '0'),
 ('@dependent', '0'),
 ('@still_image', '0')])),
 ('tags',
 OrderedDict([('tag',
 [OrderedDict([('@key', 'language'),
 ('@value', 'und')]),
 OrderedDict([('@key', 'handler_name'),
 ('@value', 'VideoHandler')]),
 OrderedDict([('@key', 'vendor_id'),
 ('@value', '[0][0][0][0]')])])]))])}

**output video** .probeInfo:
{'video': OrderedDict([('@index', '0'),
 ('@codec_name', 'h264'),
 ('@codec_long_name',
 'H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10'),
 ('@profile', 'High'),
 ('@codec_type', 'video'),
 ('@codec_tag_string', 'avc1'),
 ('@codec_tag', '0x31637661'),
 ('@width', '4096'),
 ('@height', '3000'),
 ('@coded_width', '4096'),
 ('@coded_height', '3000'),
 ('@closed_captions', '0'),
 ('@film_grain', '0'),
 ('@has_b_frames', '2'),
 ('@pix_fmt', 'yuv420p'),
 ('@level', '60'),
 ('@chroma_location', 'left'),
 ('@field_order', 'progressive'),
 ('@refs', '1'),
 ('@is_avc', 'true'),
 ('@nal_length_size', '4'),
 ('@id', '0x1'),
 ('@r_frame_rate', '20/1'),
 ('@avg_frame_rate', '20/1'),
 ('@time_base', '1/10240'),
 ('@start_pts', '0'),
 ('@start_time', '0.000000'),
 ('@duration_ts', '82944'),
 ('@duration', '8.100000'),
 ('@bit_rate', '3444755'),
 ('@bits_per_raw_sample', '8'),
 ('@nb_frames', '162'),
 ('@extradata_size', '47'),
 ('disposition',
 OrderedDict([('@default', '1'),
 ('@dub', '0'),
 ('@original', '0'),
 ('@comment', '0'),
 ('@lyrics', '0'),
 ('@karaoke', '0'),
 ('@forced', '0'),
 ('@hearing_impaired', '0'),
 ('@visual_impaired', '0'),
 ('@clean_effects', '0'),
 ('@attached_pic', '0'),
 ('@timed_thumbnails', '0'),
 ('@non_diegetic', '0'),
 ('@captions', '0'),
 ('@descriptions', '0'),
 ('@metadata', '0'),
 ('@dependent', '0'),
 ('@still_image', '0')])),
 ('tags',
 OrderedDict([('tag',
 [OrderedDict([('@key', 'language'),
 ('@value', 'und')]),
 OrderedDict([('@key', 'handler_name'),
 ('@value', 'VideoHandler')]),
 OrderedDict([('@key', 'vendor_id'),
 ('@value', '[0][0][0][0]')]),
 OrderedDict([('@key', 'encoder'),
 ('@value',
 'Lavc61.8.100 libx264')])])]))])}



I used 10 seconds by adding this at the bottom of the loop shown above:


if idx >= 200:
 break



-
I have two ffmpeg commands, one to overlay text on a video and the other to add a picture logo to a video; I want to merge them but it's not working
18 February 2019, by Zeeshan Sheikh
I want to combine these two commands.
1) Overlay a logo image on the video:
ffmpeg -i video.mp4 -vf "movie=logo.png [watermark]; [in][watermark] overlay=10:10 [out]" -y output.mp4
2) Draw text on the video:
ffmpeg -i video.mp4 -filter:v drawtext="fontfile=font/arial.ttf:text='Text':fontcolor=white@1.0:fontsize=30:y=h/2:x=0:enable='between(t,6,10)'" -t 10 e:\output.mp4
I want to use both functionalities in one command, but it's not working on my side.
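A hedged sketch of one way to combine the two (same filenames, font and timings as in the commands above; untested here): put both filters into a single -filter_complex graph, overlay the logo first, then draw the text on the overlaid result. Building the command as an argument list also sidesteps shell-quoting issues:

```python
# Sketch: chain the logo overlay and the text drawing in one
# -filter_complex graph (filenames and timings taken from the question).
drawtext = ("drawtext=fontfile=font/arial.ttf:text='Text':"
            "fontcolor=white@1.0:fontsize=30:y=h/2:x=0:"
            "enable='between(t,6,10)'")
filter_graph = "[0][1]overlay=10:10[ov];[ov]" + drawtext + "[out]"

cmd = ["ffmpeg", "-i", "video.mp4", "-i", "logo.png",
       "-filter_complex", filter_graph,
       "-map", "[out]", "-t", "10", "-y", "output.mp4"]
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

If the source has audio, adding -map 0:a -c:a copy would carry it through unchanged.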
-
FFmpeg command to overlay first video on top of other video in batch file [closed]
18 September 2022, by Ionut Bejinariu
With this code I'm trying to overlay a video with a transparent background on top of another, more precisely a waveform over another mp4 video.


What is wrong in the code below? I run it from a batch file, but it is not executed.


ffmpeg -y -i test.mp4 -i waveform.mp4 -filter_complex [1]format=rgb24,colorkey=black:0.3:0.2,colorchannelmixer=aa=0.3,setpts=PTS+8/TB[1d]; [0][1d]overlay=enable='between(t,8, 13)'[v1]; -map [v1] -map 0:a -c:a copy -c:v libx264 -preset ultrafast export.mp4





Thank you for your help.
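A plausible fix, sketched with the same filenames as the question (untested here): the -filter_complex graph contains semicolons and brackets, so in a batch file it must be quoted as a single argument, otherwise cmd.exe splits the command at the ';'. The stray ';' after [v1] also leaves an empty filter chain before -map. Building the argument list explicitly makes the boundaries visible:

```python
# Sketch: the whole filter graph is ONE argument (quote it in a .bat
# file), and the trailing ';' before -map is removed.
graph = ("[1]format=rgb24,colorkey=black:0.3:0.2,"
         "colorchannelmixer=aa=0.3,setpts=PTS+8/TB[1d];"
         "[0][1d]overlay=enable='between(t,8,13)'[v1]")

cmd = ["ffmpeg", "-y", "-i", "test.mp4", "-i", "waveform.mp4",
       "-filter_complex", graph,
       "-map", "[v1]", "-map", "0:a", "-c:a", "copy",
       "-c:v", "libx264", "-preset", "ultrafast", "export.mp4"]
print(" ".join(cmd))
```

In the .bat file itself, the equivalent change is wrapping the graph in double quotes: -filter_complex "...[1d];[0][1d]overlay=...[v1]".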