Advanced search

Media (0)

Keyword: - Tags -/protocoles

No media matching your criteria is available on the site.

Other articles (39)

  • No talk of market, cloud, etc.

    10 April 2011

    The vocabulary used on this site tries to avoid any reference to the fashions that flourish
    so freely on web 2.0 and in the companies that live off it.
    You are therefore invited to avoid using the terms "Brand", "Cloud", "Marché", etc.
    Our motivation is above all to create a simple tool, accessible to everyone, that encourages
    the sharing of creations on the Internet and lets authors keep as much autonomy as possible.
    No "Gold or Premium contract" is therefore planned, no (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution   Version name           Version number
    Debian         Squeeze                6.x.x
    Debian         Wheezy                 7.x.x
    Debian         Jessie                 8.x.x
    Ubuntu         The Precise Pangolin   12.04 LTS
    Ubuntu         The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

On other sites (6425)

  • How to interpret ndarray in a pyAV AudioFrame?

    30 January 2024, by Sachin Dole

    I want to process streaming audio (coming in from a person speaking on the remote peer of a WebRTC peer connection) to detect when the person is done talking. I have the audio track and access to the individual frames. I see that each frame can be converted to an ndarray using Frame.to_ndarray(). I can also see the values in the ndarray change depending on what the person is saying, at what pitch and volume, etc. Now I want to detect silence on the stream. My question is: what is in the ndarray, and how can I make sense of the data?

    


        while True:
            try:
                frame: AudioFrame = await track.recv()   # one av.AudioFrame from the remote track
                frame_nd_array = frame.to_ndarray()      # numpy array of the frame's samples
            except Exception:                            # e.g. the track ended
                break

    Where can I learn what is in the frame_nd_array?
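
    One way to make sense of it (a sketch, not part of the original question): PyAV's AudioFrame.to_ndarray() returns one row per plane, so a packed format such as s16 gives shape (1, samples × channels) with the channels interleaved, while a planar format gives (channels, samples); frame.format.name, frame.layout and frame.sample_rate describe how to read it. Assuming 16-bit integer samples, a simple RMS threshold is enough to flag silent frames:

        import numpy as np

        SILENCE_RMS = 500                      # hypothetical threshold; tune for your input

        def is_silent(frame) -> bool:
            """True if an av.AudioFrame looks like silence (low RMS energy)."""
            samples = frame.to_ndarray()       # e.g. int16 for "s16"/"s16p" formats
            rms = np.sqrt(np.mean(samples.astype(np.float64) ** 2))
            return rms < SILENCE_RMS

    Counting consecutive silent frames (WebRTC audio frames are typically around 20 ms each, so roughly 25 in a row is about half a second of silence) gives a crude end-of-speech detector.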

    


  • avcodec/dcahuff: Combine tables, use ff_init_vlc_from_lengths()

    6 September 2022, by Andreas Rheinhardt
    avcodec/dcahuff: Combine tables, use ff_init_vlc_from_lengths()
    

    Up until now, initializing the dca VLC tables uses ff_init_vlc_sparse()
    with length tables of type uint8_t and code tables of type uint16_t
    (except for the LBR tables, which uses length and symbols of type
    uint8_t; these tables are interleaved). In case of the quant index
    codebooks these arrays were accessed via tables of pointers to the
    individual tables.

    This commit changes this: First, we switch to ff_init_vlc_from_lengths()
    to replace the uint16_t code tables by uint8_t symbol tables
    (this necessitates ordering the tables from left-to-right in the tree
    first). These symbol tables are interleaved with the length tables.

    Furthermore, these tables are combined in order to remove the table of
    pointers to individual tables, thereby avoiding relocations (for x64
    elf systems this amounts to 96*24B = 2304B saved in .rela.dyn) and
    saving 1280B from .data.rel.ro (for 64bit systems). Meanwhile the
    savings in .rodata amount to 2709 + 2 * 334 = 3377B. Due to padding
    the actual savings are higher: part of them came from replacing uint16_t
    codes with uint8_t symbols; the rest was due to the fact that combining the
    tables eliminated padding (the ELF x64 ABI requires objects >= 16B to be
    padded to 16B and lots of the tables have 2^n + 1 elements). Taking this
    into account gives savings of 4548B. (GCC by default uses an even higher
    alignment (controlled by -malign-data); for it the savings are 5748B.)

    These changes also necessitated modifying the init code for
    the encoder tables.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] libavcodec/dcaenc.c
    • [DH] libavcodec/dcahuff.c
    • [DH] libavcodec/dcahuff.h
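
    As a rough illustration of the two ideas in the commit message above (a toy Python sketch, not FFmpeg's C code): when a codebook's entries are stored left-to-right in the code tree, the codes no longer need to be stored because they can be derived from the lengths, and all codebooks can be concatenated into one flat, interleaved length/symbol array addressed by offsets instead of a table of pointers. Every name and value below is made up for illustration.

        def codes_from_lengths(lengths):
            """Derive prefix codes from lengths listed left-to-right in the tree."""
            max_len = max(lengths)
            codes, boundary = [], 0                     # boundary in units of 2**-max_len
            for l in lengths:
                code = boundary >> (max_len - l)        # next free node at depth l
                codes.append(code)
                boundary = (code + 1) << (max_len - l)  # skip past this code's subtree
            return codes

        # All codebooks concatenated as interleaved (length, symbol) byte pairs;
        # 'offsets' replaces the old table of pointers to separate tables.
        flat = bytes([1, 0,  2, 2,  2, 1,               # codebook 0: 3 entries
                      2, 0,  2, 1,  3, 3,  3, 2])       # codebook 1: 4 entries
        offsets = [(0, 3), (6, 4)]                      # (start index, entry count)

        for start, count in offsets:
            lens = list(flat[start:start + 2 * count:2])
            syms = list(flat[start + 1:start + 2 * count:2])
            print(list(zip(syms, lens, codes_from_lengths(lens))))

    ff_init_vlc_from_lengths() does the equivalent derivation at table-init time in libavcodec, which is the property that lets the stored uint16_t code tables be dropped, as the commit describes.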
  • Dump WebRTC stream to a file

    20 November 2014, by Mondain

    I’d like to capture the audio and video from a WebRTC stream to a file, or a pair of files if audio and video require their own individual files. The audio and video are not muxed together and are known to be available on a set of server UDP ports:

    Port   Encoding
    5000 - VP8 video
    5001 - RTCP (for video)
    5002 - Opus audio @48kHz 2 channels
    5003 - RTCP (for audio)
    

    The SDP file / data is not available and DTLS may be used.

    I would prefer to use avconv or ffmpeg to capture the stream, unless a better tool is suggested.

    Edit: I’ve found that what I asked for will most likely not work. Until I hear otherwise, neither of these tools supports the initial DTLS handshake followed by the data transmission via SRTP.
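
    For the plain-RTP case only (i.e. without the DTLS-SRTP handshake that the edit above identifies as the blocker), a common approach is to hand-write a small SDP that describes the two RTP streams and let ffmpeg record from it. The sketch below is an assumption-laden illustration: it assumes the port layout listed above, delivery to 127.0.0.1, and dynamic payload types 96/97, none of which come from the original question.

        import subprocess, tempfile

        # Minimal SDP describing the two plain RTP streams (assumed payload types).
        sdp = "\n".join([
            "v=0",
            "o=- 0 0 IN IP4 127.0.0.1",
            "s=WebRTC dump",
            "c=IN IP4 127.0.0.1",
            "t=0 0",
            "m=video 5000 RTP/AVP 96",
            "a=rtpmap:96 VP8/90000",
            "m=audio 5002 RTP/AVP 97",
            "a=rtpmap:97 opus/48000/2",
        ]) + "\n"

        with tempfile.NamedTemporaryFile("w", suffix=".sdp", delete=False) as f:
            f.write(sdp)
            sdp_path = f.name

        # -c copy stores the VP8/Opus payloads as-is; ffmpeg expects RTCP on the
        # RTP port + 1, which matches the 5001/5003 layout above.
        subprocess.run([
            "ffmpeg",
            "-protocol_whitelist", "file,udp,rtp",
            "-i", sdp_path,
            "-c", "copy",
            "dump.mkv",
        ], check=True)

    Once DTLS-SRTP is negotiated this no longer applies; the media would have to be decrypted by a WebRTC-aware endpoint before anything like the above can record it.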