Advanced search

Media (91)

Other articles (71)

  • User profiles

    12 April 2011, by

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
    The user can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" (administration) section of the site.
    From there, in the navigation menu, you can reach a "Gestion des langues" (language management) section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language; once one has been, it becomes greyed out in the configuration and (...)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform (XMP), an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to record, in the form of an XML document, information about a file: title, author, history (...)
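
    Since XMP is stored as an XML packet embedded in the file, a crude way to inspect it is to scan the raw bytes for the xpacket markers. A minimal sketch (not from the article; extract_xmp is a hypothetical helper):

        import re

        def extract_xmp(path):
            # XMP travels as an XML "xpacket" with begin/end markers;
            # a byte-level scan is crude but works across many formats.
            data = open(path, "rb").read()
            m = re.search(rb"<\?xpacket begin=.*?\?>(.*?)<\?xpacket end=",
                          data, re.DOTALL)
            return m.group(1).decode("utf-8", "replace") if m else None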

On other sites (7905)

  • How to get an accurate duration of any audio file quickly?

    27 September 2023, by Steve M

    There are many audio files in the wild encoded with VBR which don't have an accurate duration tag, and command-line FFmpeg would output the following:

        Estimating duration from bitrate, this may be inaccurate

    (and indeed, it is)

    Using libav, we can see that this is tested with the has_duration() function from libavformat/utils.c.

    I have an audio player that needs to get accurate duration information using libav. I've tried completely decoding the file and counting the total number of samples decoded, which is the usual recommendation (and is accurate), but this can take 10+ seconds on a 60-minute audio file, which is not acceptable.

    I notice that there are some closed-source audio decoders and music-player apps which can get an accurate duration for these files immediately. There must be some cheap way of determining the accurate duration? Perhaps a snippet or high-level description would help me out.
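
    A cheap middle ground (a sketch, not from the original post) is to demux the file without decoding it: a parse-only pass over every packet is far faster than a full decode, and for codecs with a fixed frame size, such as MPEG-1 Layer III with its 1152 samples per frame, the packet count yields the exact duration. Below is a minimal sketch using ffprobe's -count_packets; input.mp3 and the 1152-sample assumption are placeholders to adapt:

        import json
        import subprocess

        def accurate_duration(path, samples_per_packet=1152):
            # Demux-only pass: -count_packets reads every packet but
            # decodes nothing, so it is much faster than a full decode.
            out = subprocess.check_output([
                "ffprobe", "-v", "error",
                "-select_streams", "a:0",
                "-count_packets",
                "-show_entries", "stream=nb_read_packets,sample_rate",
                "-of", "json",
                path,
            ])
            stream = json.loads(out)["streams"][0]
            packets = int(stream["nb_read_packets"])
            rate = int(stream["sample_rate"])
            # Exact only when every packet carries samples_per_packet
            # samples (true for MPEG-1 Layer III; adjust for other codecs).
            return packets * samples_per_packet / rate

        print(accurate_duration("input.mp3"))  # hypothetical file name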

  • Method to receive multiple RTSP streams

    8 February 2020, by CDY

    I am trying to code a project in which at least 20 RTSP CCTV URLs must be accessed at the same time.

    I tried to use ffmpeg to reach my goal via its multiple-input support. However, there is a problem.

    ffmpeg -i URL_1 -i URL_2 -

    The command above is the example I tried. I want to access two RTSP streams via ffmpeg and output them into two different queues for future use. If I use this command and then read bytes from the output, I cannot distinguish which bytes belong to which input stream.

    Is there any other way to access several RTSP streams at the same time?
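
    One possible approach (a sketch, not from the original post): keep a single ffmpeg process but give each input its own mapped output on its own file descriptor, using ffmpeg's pipe:N protocol, so the two byte streams never interleave. The URLs are placeholders, and this is POSIX-only:

        import os
        import subprocess

        # One pipe per input; pass_fds keeps the write ends open in the
        # child under the same descriptor numbers, and pipe:N tells
        # ffmpeg to write to descriptor N.
        r1, w1 = os.pipe()
        r2, w2 = os.pipe()
        proc = subprocess.Popen(
            ["ffmpeg",
             "-i", "rtsp://URL_1",  # placeholder URLs
             "-i", "rtsp://URL_2",
             "-map", "0:v", "-c:v", "copy", "-f", "h264", "pipe:%d" % w1,
             "-map", "1:v", "-c:v", "copy", "-f", "h264", "pipe:%d" % w2],
            pass_fds=(w1, w2))
        os.close(w1)  # the parent only reads
        os.close(w2)
        # A real consumer should read both pipes concurrently (threads
        # or select), otherwise a full pipe buffer will stall ffmpeg.
        chunk_1 = os.read(r1, 4096)  # bytes guaranteed to be from URL_1
        chunk_2 = os.read(r2, 4096)  # bytes guaranteed to be from URL_2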

    Edit: Adding code

    import subprocess as sp
    import threading
    import queue
    class CCTVReader(threading.Thread):
       def __init__(self, q, in_stream, name):
           super().__init__()
           self.name = name
           self.q = q
           self.command = ["ffmpeg",
                           "-c:v", "h264",     # Tell ffmpeg that input stream codec is h264
                           "-i", in_stream,    # Read stream from file vid.264
                           "-c:v", "copy",     # Tell ffmpeg to copy the video stream as is (without decding and encoding)
                           "-an", "-sn",       # No audio an no subtites
                           "-f", "h264",       # Define pipe format to be h264
                           "-"]                # Output is a pipe

       def run(self):
           pipe = sp.Popen(self.command, stdout=sp.PIPE, bufsize=1024**3)  # Don't use shell=True (you don't need to execute the command through the shell).

           # while True:
       for i in range(1024*10):  # Read up to 10 MB (10240 chunks of 1024 bytes) for testing
           data = pipe.stdout.read(1024)  # Read data from the pipe in chunks of 1024 bytes
               self.q.put(data)

               # Break loop if less than 1024 bytes read (not going to work with CCTV, but works with input file)
               if len(data) < 1024:
                   break

           try:
               pipe.wait(timeout=1)  # Wait for subprocess to finish (with timeout of 1 second).
           except sp.TimeoutExpired:
               pipe.kill()           # Kill subprocess in case of a timeout (there should be a timeout because input stream still lives).

           if self.q.empty():
               print("There is a problem (queue is empty)!!!")
           else:
            # Write data from the queue to file <name>.h264 (for testing)
               with open(self.name+".h264", "wb") as queue_save_file:
                   while not self.q.empty():
                       queue_save_file.write(self.q.get())


    # Build synthetic video for testing:
    ################################################
    # width, height = 1280, 720
    # in_stream = "vid.264"
    # sp.Popen("ffmpeg -y -f lavfi -i testsrc=size=1280x720:duration=5:rate=1 -c:v libx264 -crf 23 -pix_fmt yuv420p " + in_stream).wait()
    ################################################

    # Use public RTSP streams for testing
    readers = {}
    queues = {}
    cameras = {  # renamed from "dict" to avoid shadowing the builtin
           "name1":{"ip":"rtsp://xxx.xxx.xxx.xxx/"},
           "name2":{"ip":"rtsp://xxx.xxx.xxx.xxx/"},
           "name3":{"ip":"rtsp://xxx.xxx.xxx.xxx/"},
           "name4":{"ip":"rtsp://xxx.xxx.xxx.xxx/"},
           "name5":{"ip":"rtsp://xxx.xxx.xxx.xxx/"},
           "name6":{"ip":"rtsp://xxx.xxx.xxx.xxx/"},
           "name7":{"ip":"rtsp://xxx.xxx.xxx.xxx/"},
           "name8":{"ip":"rtsp://xxx.xxx.xxx.xxx/",
           "name9":{"ip":"rtsp://xxx.xxx.xxx.xxx/"},
           "name10":{"ip":"rtsp://xxx.xxx.xxx.xxx/"},
           "name11":{"ip":"rtsp://xxx.xxx.xxx.xxx/"},
           "name12":{"ip":"rtsp://xxx.xxx.xxx.xxx/"},
           "name13":{"ip":"rtsp://xxx.xxx.xxx.xxx/"},
           "name14":{"ip":"rtsp://xxx.xxx.xxx.xxx/"},
           "name15":{"ip":"rtsp://xxx.xxx.xxx.xxx/"},
           }

    for key in cameras:
       ip = cameras[key]["ip"]
       name = key
       q = queue.Queue()
       queues[name] = q
       cctv_reader = CCTVReader(q, ip, name)
       readers[name] = cctv_reader
       cctv_reader.start()

    # Join after all readers have started; joining inside the loop above
    # would make the cameras run one at a time instead of concurrently.
    for cctv_reader in readers.values():
       cctv_reader.join()
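
    On the other side of these queues, a consumer might look like the following sketch (handle() is a hypothetical placeholder; the end-of-stream convention matches the reader above):

        # Hypothetical consumer: each camera has its own queue, so bytes
        # from different RTSP inputs never mix.
        def consume(name, q):
            while True:
                chunk = q.get()          # blocks until the reader puts data
                handle(name, chunk)      # placeholder for real processing
                if len(chunk) < 1024:    # reader's end-of-stream convention
                    break
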
  • How are ARM GPUs supported by video display/decoding/encoding programs?

    7 July 2020, by John Allard

    I often see ARM-based chips advertising onboard GPUs, like the RPI3 that came with "Broadcom VideoCore IV @ 250 MHz" and the OdroidC2 that comes with a "Mali-450 GPU". These chips advertise stuff like "Decode 4k/30FPS, Encode 1080p,30FPS" as the capabilities of the GPU for encoding and decoding videos.

    My question is this - how does a program like Kodi, VLC, or FFmpeg come to make use of these GPUs for actual encoding and decoding? When I research how to make use of the Mali-450 GPU, for example, I find some esoteric and poorly documented C examples of sending compressed frames to the GPU and getting decoded frames back. If I were to use a device like the OdroidC2 and install VLC on it, how does VLC make use of the GPU? Did someone have to write logic into VLC to use the specific encoding/decoding API exposed by the Mali GPU, or do these GPUs follow some consistent API, exposed by all GPUs, that VLC/Kodi can simply program against?

    The reason I ask is that VLC and Kodi tend to support these GPUs out of the box, but a very popular program like FFmpeg, which prides itself on supporting as many codecs and accelerators as possible, has no support for decoding and encoding with the Mali GPU series. Why would VLC/Kodi support encoding/decoding and not FFmpeg? And why do these manufacturers claim wild decoding and encoding support if these GPUs are difficult to program against, and one must use their custom esoteric APIs instead of something like libavcodec?

    I hope my question makes sense. What I'm curious about is that GPUs on most systems, whether Intel HD Graphics, Nvidia cards, or AMD cards, seem to be used automatically by most video players, but when it comes to using something like FFmpeg against these devices the process becomes much more involved: you need to custom-compile the build and pass special flags to use the device as intended. Is there something I'm missing here? Is VLC programmed to make use of all of these different types of GPUs? And why, in that case, does FFmpeg not support Mali GPUs out of the box?
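
    For context, where FFmpeg does support an ARM hardware codec it is exposed as an explicitly named decoder or encoder wired to one specific kernel API, for example h264_v4l2m2m over the V4L2 memory-to-memory interface. A minimal sketch, assuming a build and kernel that actually provide it (file names are placeholders):

        import subprocess

        # Explicitly select the V4L2 M2M hardware decoder; if this build
        # of FFmpeg lacks it, the command fails with an unknown-decoder
        # error instead of silently falling back to software.
        subprocess.run(
            ["ffmpeg",
             "-c:v", "h264_v4l2m2m",   # named hardware decoder
             "-i", "input.h264",       # placeholder input
             "-f", "rawvideo", "out.yuv"],  # placeholder raw output
            check=True)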