
Media (91)

Other articles (35)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as Ogv and WebM (supported by HTML5) and MP4 (supported by HTML5 and Flash).
    Audio files are encoded as Ogg (supported by HTML5) and MP3 (supported by HTML5 and Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
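    The per-format conversion described above can be sketched as follows. This is purely illustrative: MediaSPIP's actual pipeline is not shown on this page, and the ffmpeg codec options below are assumptions about one plausible way to produce the three target formats.

```python
# Hypothetical sketch: build one ffmpeg command per web-friendly target
# format, mirroring the encode targets the description mentions
# (MP4, Ogv, WebM). Codec choices are assumptions, not MediaSPIP's config.
def encode_commands(src):
    """Return a list of ffmpeg command lines, one per target format."""
    targets = {
        "output.mp4":  ["-c:v", "libx264", "-c:a", "aac"],       # HTML5 + Flash
        "output.ogv":  ["-c:v", "libtheora", "-c:a", "libvorbis"],  # HTML5
        "output.webm": ["-c:v", "libvpx", "-c:a", "libvorbis"],     # HTML5
    }
    return [["ffmpeg", "-i", src, *opts, out] for out, opts in targets.items()]
```

Each returned list can be handed to a process runner; the source file itself is kept untouched, matching the "stored in their original format" behaviour described above.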

On other sites (4241)

  • libavcodec initialization to achieve real time playback with frame dropping when necessary

    20 October 2019, by Blake Senftner

    I have a C++ computer vision application linking against the ffmpeg libraries that provides frames from video streams to analysis routines. The idea is that one can provide a moderately generic video stream identifier, and that video source will be decompressed and passed frame after frame to an analysis routine (which runs the user’s analysis functions). The "moderately generic video identifier" covers 3 generic video stream types: paths to video files on disk, IP video streams (cameras or video streaming services), and USB webcam pins with a desired format & rate.

    My current video player is as generic as possible: video only, ignoring audio and other streams. It has a switch case for retrieving a stream’s frame rate based upon the stream’s source and codec, which is used to estimate the delay between decompressing frames. I’ve had many issues trying to get reliable timestamps from the streams, so I am currently ignoring pts and dts. I know ignoring pts/dts is bad for variable frame rate streams; I plan to special-case them later. The player currently checks whether the last decompressed frame is more than 2 frames late (assuming a constant frame rate), and if so "drops the frame" - does not pass it to the user’s analysis routine.

    Essentially, the video player’s logic determines when to skip frames (not pass them to the time-consuming analysis routine) so the analysis is fed video frames as close to real time as possible.

    I am looking for examples or discussions of how one can initialize and/or maintain their AVFormatContext, AVStream, and AVCodecContext using (presumably but not limited to) AVDictionary options such that whatever frame dropping is necessary to maintain real time is performed at the libav libraries level, and not at my video player level. If achieving this requires separate AVDictionaries (or more) for each stream type and codec, then so be it. I am interested in understanding the pros and cons of both approaches: dropping frames at the player level or at the libav level.

    (When some analysis requires every frame, the existing player implementation with frame dropping disabled is fine. I suspect that if I can get frame dropping to occur at the libav level, I’ll save the packet-to-frame decompression time as well, reducing processing more than my current version does.)
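    The player-level drop test described in the question (drop when decoding has fallen more than two frame intervals behind wall-clock time, assuming a constant frame rate) can be sketched as a small timing check. All names here are illustrative, not libav API:

```python
# Minimal sketch of the "more than 2 frames late" drop test from the
# question. Assumes a constant frame rate; times are in seconds.
def should_drop(frame_index, start_time, now, fps, max_late_frames=2):
    """Return True if the frame at frame_index is too late to analyze."""
    due = start_time + frame_index / fps   # wall-clock time this frame is due
    lateness = now - due                   # seconds behind schedule
    return lateness > max_late_frames / fps

# e.g. at 25 fps, frame 25 is due at t = 1.0 s; decoded at t = 1.1 s it is
# 0.1 s late, which exceeds 2/25 = 0.08 s, so it would be dropped.
```

Pushing the equivalent decision down into libav would additionally skip the packet-to-frame decode for dropped frames, which is the saving the question anticipates.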

  • How can I capture real time command line output of x265.exe with Python?

    29 February 2020, by LeeRoermond

    I would like to write a GUI for x265.exe that presents a better (more human-friendly) real-time progress display.

    Here’s the code I used to capture the subprocess’s output:

    import subprocess

    cmd = r'ping www.baidu.com -n 4'
    popen = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
    while True:
        next_line = popen.stdout.readline()
        if next_line == b'' and popen.poll() is not None:
            break
        else:
            print(next_line.decode('ascii').replace('\r\n', '\n'), end='')

    It performs perfectly with 'ping'.

    However, when I switched to the 'x265' command, things got weird.

    For example, if I replace the string variable 'cmd' with "x265 --y4m --crf 21 --output output.hevc input.y4m" in the preceding code, then theoretically it should give the following output, in lines ordered by time:

    y4m  [info]: 1920x1080 fps 24000/1001 i420p10 frames 0 - 100 of 101
    x265 [info]: Using preset ultrafast & tune none
    raw  [info]: output file: C:\temp\output.hevc
    x265 [info]: Main 10 profile, Level-4 (Main tier)
    x265 [info]: Thread pool created using 16 threads
    x265 [info]: Slices                              : 1
    x265 [info]: frame threads / pool features       : 4 / wpp(34 rows)
    x265 [info]: Coding QT: max CU size, min CU size : 32 / 16
    x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
    x265 [info]: ME / range / subpel / merge         : dia / 57 / 0 / 2
    x265 [info]: Keyframe min / max / scenecut / bias: 23 / 250 / 0 / 5.00
    x265 [info]: Lookahead / bframes / badapt        : 5 / 3 / 0
    x265 [info]: AQ: mode / str / qg-size / cu-tree  : 1 / 0.0 / 32 / 1
    x265 [info]: Rate Control / qCompress            : CRF-21.0 / 0.60
    x265 [info]: tools: strong-intra-smoothing lslices=6 deblock

    [1.0%] 1/101 frames, 6.289 fps, 7217.8 kb/s
    [25.7%] 26/101 frames, 59.23 fps, 299.23 kb/s
    [45.5%] 46/101 frames, 66.76 fps, 322.81 kb/s  
    [69.3%] 70/101 frames, 73.30 fps, 224.53 kb/s
    [93.1%] 94/101 frames, 77.05 fps, 173.67 kb/s  

    x265 [info]: frame I:      1, Avg QP:23.45  kb/s: 7098.44
    x265 [info]: frame P:     25, Avg QP:25.71  kb/s: 311.24  
    x265 [info]: frame B:     75, Avg QP:28.33  kb/s: 23.89  
    x265 [info]: consecutive B-frames: 3.8% 0.0% 0.0% 96.2%

    encoded 101 frames in 1.22s (82.58 fps), 165.06 kb/s, Avg QP:27.64

    But the truth is, the output block in the middle, which indicates the real-time progress, is not captured each time it updates. The popen.stdout.readline() call blocks until progress reaches 100%, and then everything is output at once. Obviously that’s not what I want.

    (↓ This is the part I mean)

    [1.0%] 1/101 frames, 6.289 fps, 7217.8 kb/s
    [25.7%] 26/101 frames, 59.23 fps, 299.23 kb/s
    [45.5%] 46/101 frames, 66.76 fps, 322.81 kb/s  
    [69.3%] 70/101 frames, 73.30 fps, 224.53 kb/s
    [93.1%] 94/101 frames, 77.05 fps, 173.67 kb/s  

    Could anyone help me figure out what’s going on and how to fix it to achieve my goal?

    Thanks a lot.
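    A likely cause, worth verifying: progress bars like x265’s typically end each update with a carriage return ('\r') rather than a newline, so readline() does not return until the encode finishes and a real newline arrives. A sketch of one workaround, reading one byte at a time and treating '\r' as a line terminator too (the child command below is a stand-in that emits '\r'-terminated lines the way such a progress bar does):

```python
import subprocess
import sys

# Stand-in child process that writes '\r'-terminated progress updates,
# followed by a final '\n'-terminated line, mimicking an encoder's output.
cmd = [sys.executable, "-c",
       r"import sys; [sys.stdout.write(f'[{i}%] progress\r') for i in (25, 50, 75)]; sys.stdout.write('done\n')"]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

buf = b""
lines = []
while True:
    ch = proc.stdout.read(1)              # one byte at a time: never waits for '\n'
    if ch == b"" and proc.poll() is not None:
        break
    if ch in (b"\r", b"\n"):              # either terminator ends an update
        if buf:
            lines.append(buf.decode())
            buf = b""
    else:
        buf += ch
if buf:                                    # flush any unterminated tail
    lines.append(buf.decode())
print(lines)
```

With this loop, each '\r'-terminated progress update is surfaced as its own line as soon as it arrives, instead of all at once at 100%.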

  • Revision 64742f825d : Merge "Elevate NEWMV mode checking threshold in real time"

    2 July 2014, by Yunqing Wang

    Merge "Elevate NEWMV mode checking threshold in real time"