
Media (1)
-
Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
Other articles (33)
-
Taking part in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You simply need to sign up to the translators' mailing list to ask for more information.
MediaSPIP is currently only available in French and (...)
-
Automatic backup of SPIP channels
1 April 2010
When setting up an open platform, it is important for hosts to have reasonably regular backups available to guard against any potential problem.
This task relies on two SPIP plugins: Saveauto, which produces a regular backup of the database as a MySQL dump (usable in phpmyadmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)
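For illustration, a rough sketch of the general approach described above (a periodic database dump plus a zip archive of the site's documents). This is not the plugins' actual code; the database name, paths and the use of Python are assumptions:

import subprocess
import zipfile
from datetime import date
from pathlib import Path

# Hypothetical names and paths; in practice these come from the SPIP installation.
DB_NAME = "mediaspip"
SITE_DIR = Path("/var/www/mediaspip")
BACKUP_DIR = Path("/var/backups/mediaspip")

def backup():
    stamp = date.today().isoformat()
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)

    # Database dump, comparable to what Saveauto produces (a plain mysqldump,
    # reimportable through phpmyadmin). Credentials are assumed to be configured
    # elsewhere, e.g. in ~/.my.cnf.
    dump_path = BACKUP_DIR / f"{DB_NAME}-{stamp}.sql"
    with dump_path.open("wb") as dump:
        subprocess.run(["mysqldump", DB_NAME], stdout=dump, check=True)

    # Zip archive of the site's important data, comparable to mes_fichiers_2
    # (here only the IMG/ directory holding uploaded documents).
    with zipfile.ZipFile(BACKUP_DIR / f"files-{stamp}.zip", "w",
                         zipfile.ZIP_DEFLATED) as archive:
        for path in (SITE_DIR / "IMG").rglob("*"):
            if path.is_file():
                archive.write(path, path.relative_to(SITE_DIR))

if __name__ == "__main__":
    backup()

Run from cron (or any scheduler) to get the regular backups the article refers to.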
-
Adding notes and captions to images
7 February 2011
To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area in order to change the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
On other sites (7359)
-
Revision 64742f825d: Merge "Elevate NEWMV mode checking threshold in real time"
2 July 2014, by Yunqing Wang
-
How can I capture real-time command line output of x265.exe with Python?
29 February 2020, by LeeRoermond
I would like to write a GUI for x265.exe that presents a better (more human-friendly) real-time progress display.
Here is the code I used to capture the subprocess's output:
import subprocess

cmd = r'ping www.baidu.com -n 4'
popen = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
while True:
    # Read the child's output one line at a time and echo it until the process exits.
    next_line = popen.stdout.readline()
    if next_line == b'' and popen.poll() is not None:
        break
    else:
        print(next_line.decode('ascii').replace('\r\n', '\n'), end='')

It performs perfectly with 'ping'.
However, when I switched to the 'x265' command, things got weird.
For example, suppose I replace the string variable 'cmd' with "x265 --y4m --crf 21 --output output.hevc input.y4m" in the preceding code. In theory, it should print the following output line by line, in chronological order:
y4m [info]: 1920x1080 fps 24000/1001 i420p10 frames 0 - 100 of 101
x265 [info]: Using preset ultrafast & tune none
raw [info]: output file: C:\temp\output.hevc
x265 [info]: Main 10 profile, Level-4 (Main tier)
x265 [info]: Thread pool created using 16 threads
x265 [info]: Slices : 1
x265 [info]: frame threads / pool features : 4 / wpp(34 rows)
x265 [info]: Coding QT: max CU size, min CU size : 32 / 16
x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
x265 [info]: ME / range / subpel / merge : dia / 57 / 0 / 2
x265 [info]: Keyframe min / max / scenecut / bias: 23 / 250 / 0 / 5.00
x265 [info]: Lookahead / bframes / badapt : 5 / 3 / 0
x265 [info]: AQ: mode / str / qg-size / cu-tree : 1 / 0.0 / 32 / 1
x265 [info]: Rate Control / qCompress : CRF-21.0 / 0.60
x265 [info]: tools: strong-intra-smoothing lslices=6 deblock
[1.0%] 1/101 frames, 6.289 fps, 7217.8 kb/s
[25.7%] 26/101 frames, 59.23 fps, 299.23 kb/s
[45.5%] 46/101 frames, 66.76 fps, 322.81 kb/s
[69.3%] 70/101 frames, 73.30 fps, 224.53 kb/s
[93.1%] 94/101 frames, 77.05 fps, 173.67 kb/s
x265 [info]: frame I: 1, Avg QP:23.45 kb/s: 7098.44
x265 [info]: frame P: 25, Avg QP:25.71 kb/s: 311.24
x265 [info]: frame B: 75, Avg QP:28.33 kb/s: 23.89
x265 [info]: consecutive B-frames: 3.8% 0.0% 0.0% 96.2%
encoded 101 frames in 1.22s (82.58 fps), 165.06 kb/s, Avg QP:27.64
But in reality, the block of output in the middle, which shows the real-time progress, is not captured each time it is updated.
The popen.stdout.readline() call blocks until the progress reaches 100%, and only then is the output delivered all at once. Obviously that's not what I want. (↓ This is the part I mean)
[1.0%] 1/101 frames, 6.289 fps, 7217.8 kb/s
[25.7%] 26/101 frames, 59.23 fps, 299.23 kb/s
[45.5%] 46/101 frames, 66.76 fps, 322.81 kb/s
[69.3%] 70/101 frames, 73.30 fps, 224.53 kb/s
[93.1%] 94/101 frames, 77.05 fps, 173.67 kb/s
Could anyone help me figure out what's going on and how to fix it so I can achieve my goal?
Thanks a lot.
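A minimal sketch of one possible workaround, assuming (as is typical for console progress indicators) that x265 rewrites its progress line using carriage returns ('\r') instead of newlines, so readline() never sees a complete line until encoding finishes. Reading the pipe byte by byte and treating '\r' as a line terminator avoids the blocking; the command string below is only illustrative:

import subprocess

# Illustrative command; adjust to your own x265 build and input file.
cmd = 'x265 --y4m --crf 21 --output output.hevc input.y4m'

popen = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT, shell=True)

buf = b''
while True:
    chunk = popen.stdout.read(1)      # one byte at a time instead of a full line
    if chunk == b'':                  # EOF: the child closed its end of the pipe
        break
    if chunk in (b'\r', b'\n'):       # '\r' ends a progress update, '\n' a normal line
        if buf:
            print(buf.decode('ascii', errors='replace'))
            buf = b''
    else:
        buf += chunk
if buf:
    print(buf.decode('ascii', errors='replace'))
popen.wait()

If the output still only arrives at the end, buffering may also be involved; passing bufsize=0 to Popen (so the parent does not buffer the pipe) is another thing worth trying.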
-
libavcodec initialization to achieve real time playback with frame dropping when necessary
20 October 2019, by Blake Senftner
I have a C++ computer vision application linking with the ffmpeg libraries that provides frames from video streams to analysis routines. The idea is that one can provide a moderately generic video stream identifier, and that video source will be decompressed and passed frame by frame to an analysis routine (which runs the user's analysis functions). The "moderately generic video identifier" covers 3 generic video stream types: paths to video files on disk, IP video streams (cameras or video streaming services), and USB webcam pins with the desired format and rate.
My current video player is as generic as possible: video only, ignoring audio and other streams. It has a switch case for retrieving a stream's frame rate based on the stream's source and codec, which is used to estimate the delay between decompressing frames. I've had many issues trying to get reliable timestamps from the streams, so I am currently ignoring pts and dts. I know ignoring pts/dts is bad for variable-frame-rate streams; I plan to special-case them later. The player currently checks whether the last decompressed frame is more than 2 frames late (assuming a constant frame rate), and if so "drops the frame", i.e. does not pass it to the user's analysis routine.
Essentially, the video player's logic decides when to skip frames (not pass them to the time-consuming analysis routine) so that the analysis is fed video frames as close to real time as possible.
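For illustration only, a minimal sketch (in Python rather than the asker's C++, with hypothetical names) of the kind of player-level lateness check described above, assuming a constant frame rate:

import time

class FrameDropper:
    """Decides whether a decoded frame is too late to be worth analyzing,
    assuming a constant frame rate."""

    def __init__(self, fps, max_late_frames=2):
        self.frame_duration = 1.0 / fps
        self.max_lateness = max_late_frames * self.frame_duration
        self.start = time.monotonic()

    def should_drop(self, frame_index):
        # Wall-clock time at which this frame was due for analysis.
        due = self.start + frame_index * self.frame_duration
        return time.monotonic() - due > self.max_lateness

# Usage inside a decode loop (decode_frames and analyze are hypothetical):
# dropper = FrameDropper(fps=25.0)
# for i, frame in enumerate(decode_frames(source)):
#     if dropper.should_drop(i):
#         continue          # skip the expensive analysis for late frames
#     analyze(frame)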
I am looking for examples or discussions of how one can initialize and/or maintain their AVFormatContext, AVStream, and AVCodecContext, using (presumably but not limited to) AVDictionary options, such that the frame dropping necessary to maintain real time is performed at the libav libraries' level and not at my video player's level. If achieving this requires separate AVDictionaries (or more) for each stream type and codec, then so be it. I am interested in understanding the pros and cons of both approaches: dropping frames at the player level or at the libav level.
(When some analysis requires every frame, the existing player implementation with frame dropping disabled is fine. I suspect that if I can get frame dropping to occur at the libav level, I'll also save the packet-to-frame decompression time, reducing processing even more than my current version does.)