
Other articles (72)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Customizing by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes made in moving from MediaSPIP version 0.1 to version 0.3. What's new:
    At the software-dependency level: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

On other sites (10789)

  • very low latency streaming with ffmpeg using a webcam

    5 April, by userDtrm

    I'm trying to configure ffmpeg to stream real-time video from a webcam. The ffmpeg encoder command I use is as follows.

    ffmpeg -f v4l2 -input_format yuyv422 -s 640x480 -i /dev/video0 -c:v libx264 -profile:v baseline -trellis 0 -subq 1 -level 32 -preset superfast -tune zerolatency -me_method epzs -crf 30 -threads 0 -bufsize 1 -refs 4 -coder 0 -b_strategy 0 -bf 0 -sc_threshold 0 -x264-params vbv-maxrate=2000:slice-max-size=1500:keyint=30:min-keyint=10 -pix_fmt yuv420p -an -f mpegts udp://192.168.1.8:5001

    The ffplay command used to display the video feed is:

    ffplay -analyzeduration 1 -fflags nobuffer -i udp://192.168.1.8:5001

    However, I'm experiencing 0.5-1.0 s of latency in the video stream. Is there a way to reduce this below 100 ms? Also, when I replace the v4l2 camera capture with a screen capture using x11grab, the stream is almost real-time and I notice no delay. Moreover, changing the encoder from x264 to mpeg2 had no effect on the latency. In addition, ffmpeg's statistics show the encoder running at 30 fps, which I believe indicates that the encoding is real-time. This leaves me with only one likely source for the delay.
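As a rough sanity check, the buffering terms implied by the command line can be tallied (a back-of-the-envelope sketch; fps and keyint are taken from the command above, and the keyframe-wait term only applies to a player that joins the stream mid-GOP):

```python
# Rough tally of the buffering delays implied by the capture command above.
fps = 30.0        # capture rate reported by ffmpeg's statistics
keyint = 30       # from -x264-params ... keyint=30

frame_interval_ms = 1000.0 / fps          # ~33.3 ms per frame

# A player that joins mid-stream may wait up to one full GOP for a keyframe,
# so keyint=30 at 30 fps can account for as much as a second on its own.
max_keyframe_wait_ms = keyint * frame_interval_ms   # ~1000 ms

# With -bf 0 and -tune zerolatency there is no reordering delay, so the
# encoder itself holds only about one frame.
encoder_delay_ms = frame_interval_ms                # ~33.3 ms
```

Since the x11grab test shows no delay with the same encoder settings, these encoder-side terms cannot explain the full half second, which is consistent with the capture-side questions below.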

    • Is there a significant buffer delay when using v4l2 to capture video from a webcam?
    • I don't think transmission delay is a factor here, as I see no latency when screen capture is used under the same conditions.
    • Can this latency be reduced further? Can someone suggest a different encoder configuration to use instead of the one above?


  • How to avoid having to add these .bin files to run the ffmpeg library in Python

    11 April 2019, by Josemi

    I had this code in Python 3.7 in Windows :

    import numpy as np
    import matplotlib.pyplot as plt
    from skimage import io, color
    import skvideo.io
    video = skvideo.io.vread('foreman.avi')

    But the terminal gave me the error: Cannot find installation of real FFmpeg (which comes with ffprobe). Therefore, I followed this guide: Cannot find installation of real FFmpeg (which comes with ffprobe). It says I had to add the .bin files from the /bin directory to my path, and once I added them to my workspace everything worked.

    Now, the question is whether there's a way to avoid adding the .bin files every time I want to use the skvideo.io.vread function. I have even pasted those files into the ffmpeg directory in C:\ProgramData\Anaconda3\Lib\site-packages\ffmpeg.

    Thank you for your attention.
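One way to avoid copying the .bin files around each time (a sketch; the C:\ffmpeg\bin location is an assumption, substitute the directory that actually holds your FFmpeg binaries) is to prepend that directory to PATH at the top of the script, before skvideo is imported:

```python
import os

# Assumed location of the FFmpeg binaries -- adjust to your install.
FFMPEG_BIN = r"C:\ffmpeg\bin"

# Prepend it to PATH for this process only, before importing skvideo,
# so ffmpeg/ffprobe are found without copying the .bin files anywhere.
os.environ["PATH"] = FFMPEG_BIN + os.pathsep + os.environ.get("PATH", "")

# import skvideo.io   # would now locate ffmpeg and ffprobe via PATH
```

Alternatively, if your scikit-video version provides it, skvideo.setFFmpegPath(...) can point the library at the binaries directly; it must likewise be called before skvideo.io is imported.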

  • What do I have to do to decompress MP4 video with NVDEC directly to a Texture buffer on an NVidia Jetson AGX Xavier computer?

    13 September 2020, by Alexis Wilke

    What I'm trying to do is decompress two MP4 frames (one per NVDEC) and then manipulate them with OpenGL.

    Right now, though, this is too slow (definitely not real time), as I have to make copies of 4K images (3840x2160x3 in RGB), and that's just too much data to copy.

    I wrote a small benchmark to verify that part. With memcpy() I can only copy between 240 and 250 such buffers per second. That's too slow when the input movies run at 60 fps...

    I'm able to use the NVDEC chip to decompress to a buffer through ffmpeg, but to place that in a texture I then have to get the frame out of ffmpeg (copy 1) and then upload that image to a texture (copy 2). Do that for two videos and that's 4 copies... 4K is huge! So the CPUs don't have time to do that much work 60 times per second.

    So right now I'm looking for a way to send the output of the NVDEC directly to a texture. Looking at GStreamer (gst-launch-1.0), it takes about 3% CPU and can play back a 4K video in real time. What am I doing wrong?
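The copy load described above can be tallied quickly (a sketch; the frame size and copy counts come from the question itself), and it shows why the CPU path cannot keep up: the workload lands exactly at the measured memcpy ceiling of 240-250 buffers per second.

```python
# Back-of-the-envelope check of the copy load described in the question.
width, height, channels = 3840, 2160, 3       # one 4K RGB frame
frame_bytes = width * height * channels       # 24,883,200 bytes (~23.7 MiB)

copies_per_frame = 2    # ffmpeg buffer -> CPU (copy 1), CPU -> texture (copy 2)
videos = 2
fps = 60

copies_per_second = copies_per_frame * videos * fps   # 240 buffer copies/s
bytes_per_second = copies_per_second * frame_bytes    # ~5.97 GB/s of memcpy traffic
```

At 240 required copies per second against a measured ceiling of 240-250, the memcpy path is fully saturated, which is consistent with GStreamer's ~3% CPU figure for a zero-copy pipeline.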