
Other articles (34)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match the chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
-
Configurable image and logo sizes
9 February 2011
In many places on the site, logos and images are resized to fit the slots defined by the themes. Since these sizes can vary from one theme to another, they can be defined directly in the theme, which saves the user from having to configure them manually after changing the site's appearance.
These image sizes are also available in the specific configuration of MediaSPIP Core. The maximum size of the site logo in pixels allows (...)
-
Farm management
2 March 2010
The farm as a whole is managed by "super admins".
Some settings can be adjusted to regulate the needs of the different channels.
To begin with, it relies on the "Gestion de mutualisation" plugin
On other sites (5717)
-
pyav / ffmpeg / libav select number of P-frames and B-frames
27 May 2021, by user1315621
I am streaming from an RTSP source. It looks like half of the frames received are key frames. Is there a way to reduce this percentage and get a higher proportion of P-frames and B-frames? If possible, I would like to increase the number of P-frames (not that of B-frames).
I am using pyav, which is a Python wrapper for libav (ffmpeg).

Code:


container = av.open(
    url, 'r',
    options={
        'rtsp_transport': 'tcp',
        'stimeout': '5000000',
        'max_delay': '5000000',
    }
)
stream = container.streams.video[0]
codec_context = stream.codec_context
codec_context.export_mvs = True  # export motion vectors as frame side data
codec_context.gop_size = 25

# demux the first video stream and decode each packet
for packet in container.demux(video=0):
    for video_frame in packet.decode():
        print(video_frame.is_key_frame)



Output:


True
False
True
False
...



Note 1: I can't edit the source. I can just edit the code used to stream the video.


Note 2: the same solution should apply to pyav, libav and ffmpeg.

Edit: it seems that B-frames are disabled: codec_context.has_b_frames is False.
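
For what it's worth, gop_size is an encoder setting, so assigning it on the decoding codec_context does not change the incoming RTSP stream; the keyframe ratio is decided by the sender. If re-encoding on the receiving side is an option, a rough PyAV sketch along these lines (output filename, codec and GOP value are assumptions, not taken from the question) would let the receiver pick its own GOP length:

import av

# open the same RTSP source as in the question
in_container = av.open(url, 'r', options={'rtsp_transport': 'tcp'})
in_stream = in_container.streams.video[0]

# hypothetical output file, re-encoded with a longer GOP (fewer keyframes)
out_container = av.open('reencoded.mp4', 'w')
out_stream = out_container.add_stream('libx264', rate=25)
out_stream.width = in_stream.codec_context.width
out_stream.height = in_stream.codec_context.height
out_stream.pix_fmt = 'yuv420p'
out_stream.codec_context.gop_size = 250    # keyframe roughly every 250 frames
out_stream.codec_context.max_b_frames = 0  # keep B-frames disabled

for packet in in_container.demux(video=0):
    for frame in packet.decode():
        for out_packet in out_stream.encode(frame):
            out_container.mux(out_packet)

for out_packet in out_stream.encode():     # flush the encoder
    out_container.mux(out_packet)
out_container.close()
in_container.close()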


-
FFmpeg scaling disables viewing the video until the process has ended, for video transcoding on the fly
5 July 2022, by Lucas F
Good day all,


I'm working on a video player with 1080p original video files, and I would like to change their resolution on the fly:


Currently I host all my original video files as 1080p web MP4, and I would like to be able to offer 720p, 480p, etc. qualities.
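
For reference, ffmpeg can produce several renditions from a single input in one pass, by giving one set of output options per output file; a minimal sketch with assumed filenames:

ffmpeg -i input.mp4 -vf scale=-2:720 -c:a copy output_720p.mp4 -vf scale=-2:480 -c:a copy output_480p.mp4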


So I started to look for tutorials about video transcoding on the fly and I found the FFMPEG tool.


I'm currently using the following command to scale videos, to 720p for example:


ffmpeg -i input.mp4 -vf scale=-1:720 output.mp4
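
One caveat: scale=-1:720 can produce an odd width, which the default H.264 encoder refuses; scale=-2:720, as used in the command further down, rounds the width to an even value. Adding -c:a copy also avoids re-encoding the audio. A variant of the command above along those lines:

ffmpeg -i input.mp4 -vf scale=-2:720 -c:a copy output.mp4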



The problem is that once FFMPEG starts scaling the video, I have to wait for the end of the process before being able to play it. Do you know if there is any parameter for this command that would allow playing the video while it is still being scaled?


Or is there any workaround that could help me do this?


Thank you in advance for your help!


EDIT


I have now found how to get readable content while transcoding (by transcoding to fragmented MP4):


ffmpeg -i input.mp4 -vf scale=-2:720 -movflags +frag_keyframe+separate_moof+omit_tfhd_offset+empty_moov output.mp4



But my problem is that when opening the video, it shows up as an "ended" video containing only the data transcoded so far.


So if the video lasts 1 min and is half transcoded, I'll only see 30 s; it will not load more data once the rest has been transcoded.
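
One direction that may be worth exploring (a sketch only, not verified here): rather than letting the player fetch a partially written file, have the web server launch ffmpeg and forward its stdout as a chunked HTTP response, so the connection stays open and the player keeps receiving data until the transcode finishes. The fragmented-MP4 flags make writing to a pipe possible:

ffmpeg -i input.mp4 -vf scale=-2:720 -movflags frag_keyframe+empty_moov -f mp4 pipe:1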


-
How to accurately trim audio and video with ffmpeg? [closed]
6 January 2024, by ws90
I'm trying to automate the trimming and concatenation of video clips that also contain audio using ffmpeg.
The following commands are being used to trim clips and then concatenate the trimmed clips.


.\ffmpeg -ss $startInSeconds -i $inputFile -t $partDurationInSeconds $outputFile



This is done once per input file; the values of $startInSeconds and $partDurationInSeconds differ from clip to clip.


.\ffmpeg -f concat -safe 0 -i .\list.txt -c copy $concatOutputFile



list.txt is a list of $outputFile from the first trim command.
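
For reference, the concat demuxer expects one file directive per line, so list.txt would look something like this (filenames assumed):

file 'part1.mp4'
file 'part2.mp4'
file 'part3.mp4'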


The audio of the concatenated file gradually de-syncs over time (falling behind compared to the trimmed clips), which is the problem I'm looking to fix.
It seems to fall behind by about half a frame at each concat join.


Why is the concat command causing this de-sync, and what can I do about it?


I thought this was due to a mismatch in duration between the audio and video tracks of a trimmed clip (the audio was often shorter than the video after trimming). I then tried padding the audio to match the video before concatenating, but the problem persists.
I also found a case where the two tracks were identical in length and the problem still occurred, so I think the concat command is the culprit, not the trim command.
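
One way to check that theory on each trimmed part is to compare the per-stream durations with ffprobe (filename assumed); if the audio and video durations differ, the difference would add up at every join:

ffprobe -v error -show_entries stream=codec_type,duration -of csv=p=0 part1.mp4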