
Other articles (80)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, the full set of software dependencies must be installed manually on the server.
    If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images to compare.
    To do so, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to improve, for example select[multiple] for multiple-selection lists (...)

On other sites (14065)

  • How to send RTP stream to Janus from NGINX RTMP module ? [closed]

    25 November 2024, by Matéo

    I'm trying to create a stream and display it in a browser. I have already configured NGINX with the rtmp module, and my stream works very well with HLS (between 5 and 10 seconds of latency).

    Now I would like to set up a low-latency stream, which is why I installed the janus-gateway WebRTC server: it takes an RTP stream as input and provides a WebRTC stream as output.

    Here is the pipeline I would like to follow:

    OBS -> RTMP -> Nginx-rtmp-module -> ffmpeg -> RTP -> Janus -> webRTC -> Browser

    But I have a problem with this part: "nginx-rtmp-module -> ffmpeg -> janus"

    In fact, my Janus server is running and the streaming demos work very well on localhost, but when I try to feed it an RTP stream, Janus does not detect it in the demos (it shows "No remote video available").

    Can anyone help me, please?

    Resources:

    • My janus.plugin.streaming.jcfg configuration:

    rtp-sample: {
        type = "rtp"
        id = 1
        description = "Opus/VP8 live stream coming from external source"
        metadata = "You can use this metadata section to put any info you want!"
        audio = true
        video = true
        audioport = 5002
        audiopt = 111
        audiortpmap = "opus/48000/2"
        videoport = 5004
        videopt = 100
        videortpmap = "VP8/90000"
        secret = "adminpwd"
}

    • My nginx.conf application:

    application test {

        deny play all;

        live on;
        on_publish http://localhost/test/backend/sec/live_auth.php;

        exec ffmpeg -i rtmp://localhost/test/$name -an -c:v copy -flags global_header -bsf dump_extra -f rtp rtp://localhost:5004;

}
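One mismatch worth checking, offered here as a guess rather than a confirmed diagnosis: the mountpoint above declares Opus audio and VP8 video, while the exec line drops audio (-an) and copies the H.264 video coming from OBS, so Janus would have nothing it can render. A minimal Python sketch that builds an ffmpeg command transcoding to the declared codecs (the function name and bitrate are illustrative, not from the original post):

```python
def janus_push_command(stream_name, janus_host="localhost",
                       audio_port=5002, video_port=5004):
    """Build an ffmpeg command that re-encodes an RTMP feed to the
    codecs the Janus mountpoint declares: Opus on the audio port and
    VP8 on the video port. A plain "-c:v copy" would forward H.264,
    which a VP8 mountpoint cannot display."""
    rtmp_url = "rtmp://localhost/test/%s" % stream_name
    return [
        "ffmpeg", "-i", rtmp_url,
        # audio branch: 48 kHz stereo Opus to the audioport
        "-map", "0:a", "-c:a", "libopus", "-ar", "48000", "-ac", "2",
        "-f", "rtp", "rtp://%s:%d?pkt_size=1200" % (janus_host, audio_port),
        # video branch: realtime VP8 to the videoport
        "-map", "0:v", "-c:v", "libvpx", "-deadline", "realtime", "-b:v", "1M",
        "-f", "rtp", "rtp://%s:%d?pkt_size=1200" % (janus_host, video_port),
    ]

# Launched in place of the exec line, e.g.:
# import subprocess; subprocess.Popen(janus_push_command("mystream"))
```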


  • How to read realtime microphone audio volume in python and ffmpeg or similar

    20 October 2016, by Ryan Martin

    I’m trying to read, in near-realtime, the volume coming from the audio of a USB microphone in Python.

    I have the pieces, but can’t figure out how to put it together.

    If I already have a .wav file, I can read it quite simply using wavefile:

    import numpy as np
    from wavefile import WaveReader

    with WaveReader("/Users/rmartin/audio.wav") as r:
        for data in r.read_iter(size=512):
            left_channel = data[0]
            volume = np.linalg.norm(left_channel)
            print(volume)

    This works great, but I want to process the audio from the microphone in real-time, not from a file.

    So my thought was to use something like ffmpeg to pipe the real-time output into WaveReader, but my byte-handling knowledge is somewhat lacking.

    import subprocess
    import numpy as np

    command = ["/usr/local/bin/ffmpeg",
               '-f', 'avfoundation',
               '-i', ':2',
               '-t', '5',
               '-ar', '11025',
               '-ac', '1',
               '-acodec','aac', '-']

    pipe = subprocess.Popen(command, stdout=subprocess.PIPE, bufsize=10**8)
    stdout_data = pipe.stdout.read()
    audio_array = np.fromstring(stdout_data, dtype="int16")

    print audio_array

    That looks pretty, but it doesn't do much. It fails with a [NULL @ 0x7ff640016600] Unable to find a suitable output format for 'pipe:' error.

    I assume this is a fairly simple thing to do given that I only need to check the audio for volume levels.

    Does anyone know how to accomplish this simply? FFmpeg isn't a requirement, but it does need to work on OSX & Linux.
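The error above comes from requesting an encoded codec (aac) with no container on the pipe. One way around it, sketched here under the assumption that raw PCM is acceptable since only the volume is needed: have ffmpeg emit headerless signed 16-bit samples on stdout and compute the RMS per block with numpy. The device string and helper names are illustrative:

```python
import subprocess
import numpy as np

def rms_volume(samples):
    """Root-mean-square amplitude of a block of PCM samples."""
    return float(np.sqrt(np.mean(samples.astype(np.float64) ** 2)))

def stream_volume(device=":2", rate=11025, block=512):
    """Yield the RMS volume of each block of microphone audio.

    Asks ffmpeg for raw s16le PCM on stdout instead of an encoded
    format, so no output container is needed on the pipe.
    """
    command = ["ffmpeg",
               "-f", "avfoundation",   # use "alsa" on Linux
               "-i", device,
               "-ar", str(rate),
               "-ac", "1",
               "-f", "s16le",          # raw PCM, no container
               "-acodec", "pcm_s16le",
               "-"]
    pipe = subprocess.Popen(command, stdout=subprocess.PIPE,
                            stderr=subprocess.DEVNULL)
    while True:
        data = pipe.stdout.read(block * 2)  # 2 bytes per 16-bit sample
        if not data:
            break
        yield rms_volume(np.frombuffer(data, dtype=np.int16))
```

Note np.frombuffer rather than the deprecated np.fromstring, and dtype=np.int16 to match the pcm_s16le samples.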

  • How can I decode a packet data to image ? [duplicate]

    20 December 2020, by codcod55

    I am getting data in .h264 format. The data coming from the camera looks something like this:

    b'\x00\x00\x00\x01A\xf9\xc2 ;\xd7\x10\x0fQ\xbf\xa4+\x024\xd0\xf3_'\xceT\xe3\x1c4\xf7\xa2*\xc0`/J ;\xa5\xe8i\x99\xb1\x85\xf2\xe65\xf4\xeb\xcfD\x9e\x0b\xf2\xe5*\xcf2U\xabe\xf1\x0fJp\ ........ (It is longer)

    I want to decode this data with ffmpeg or opencv functions. How can I save it as a JPEG image?

    Here is a piece of code:

    packet_data = b''
    while True:
        try:
            res_string, ip = self.socket_video.recvfrom(2048)
            packet_data += res_string
            print(packet_data)
            print(len(res_string))
            # end of frame
            if len(res_string) != 1460:
                for frame in self._h264_decode(packet_data):
                    self.frame = frame
                packet_data = b''  # reset as bytes, not str

        except socket.error as exc:
            print("Caught exception socket.error : %s" % exc)

def _h264_decode(self, packet_data):
    """
    decode raw h264 format data from Tello

    :param packet_data: raw h264 data array

    :return: a list of decoded frame
    """
    res_frame_list = []
    # NOTE: ffmpeg.input() does not decode raw bytes like this; the
    # original Tello SDK used a libh264decoder object here, so this
    # call is effectively a placeholder.
    frames = ffmpeg.input(packet_data)
    for framedata in frames:
        (frame, w, h, ls) = framedata
        if frame is not None:
            # frame is len(frame) bytes of packed RGB, w x h pixels, linesize ls
            frame = np.frombuffer(frame, dtype=np.ubyte, count=len(frame))
            frame = frame.reshape((h, ls // 3, 3))  # integer division for python3
            frame = frame[:, :w, :]
            res_frame_list.append(frame)

    return res_frame_list

    EDIT: My code works with python2, but I want to work with python3, and the h264 library doesn't support python3. So I am trying to decode this data without the h264 library. I tried the base64 format but couldn't make it work. As a result, I am looking for a method to convert this data to an image in python3 without the h264 library.
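Whatever decoder ends up doing the work, the raw camera stream first has to be cut into NAL units on Annex-B start codes, which is what the len(res_string) != 1460 check approximates. A small pure-Python sketch of that step (the function name is illustrative); the resulting units can then be handed to a Python-3-capable decoder such as PyAV, e.g. av.CodecContext.create("h264", "r") followed by parse()/decode() and frame.to_image().save("out.jpg"), offered as a suggestion rather than the original code's path:

```python
import re

# Annex-B NAL units are delimited by 00 00 00 01 or 00 00 01 start codes.
START_CODE = re.compile(b"\x00\x00\x00\x01|\x00\x00\x01")

def split_nal_units(stream):
    """Return the NAL units in an Annex-B H.264 byte stream.

    Bytes before the first start code (normally none) are discarded.
    """
    parts = START_CODE.split(stream)
    return [p for p in parts[1:] if p]
```

Splitting on the 4-byte pattern first matters because the 3-byte start code is a suffix of it; the regex alternation tries the longer form before the shorter one.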