Other articles (95)

  • MediaSPIP 0.1 Beta version

25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Multilang: improving the interface for multilingual blocks

18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once it has been activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature works out of the box. No configuration step is therefore required.

  • Contributing to its translation

10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to sign up to the translators' mailing list to ask for more information.
    At the moment, MediaSPIP is only available in French and (...)

On other sites (6561)

  • WebRTC Multi-Stream recording

11 January 2021, by Tim Specht

    I'm currently trying to build a WebRTC streaming architecture in which multiple users stream content from their cameras into the same "room", and an SFU / MCU on the server side "records" the incoming video packets, merges them into one image and redistributes the result to the viewers as either RTMP or HLS for added scalability.

    Upon doing some initial research on this, Janus Gateway seems like a good fit, given its wide adoption across the space and its (seemingly) extensible plugin architecture. Thus, I'm currently trying to figure out what a recommended architecture for my use case would look like. I looked at the following plugins:

    While Janus and the Streaming plugin seem like a good start for the broadcasting aspect within the group of casters in the room, I'm trying to piece together how I could combine the different video sources into a single image (split horizontally, for example, if two casters are active) and retransmit the final result as something optimized for broadcast consumption, like HLS. Some of the ways I could imagine doing that (a sketch follows this list):

    • Implement a custom Janus plugin that transcodes the incoming buffers on the gateway itself
    • Forward the incoming packets via RTP to a transcoding server
      • In this specific case I am not sure what would be best to implement that? Are the video frames different tracks? Could I stream all of them to the same port and have ffmpeg or something similar take care of the merging for me?

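    For the RTP-forwarding option, a minimal sketch of what the merge-and-repackage step could look like, assuming each caster is forwarded to its own port pair and described by a local SDP file (the file names, codec and HLS settings here are illustrative assumptions, not anything Janus prescribes):

    ffmpeg -protocol_whitelist file,udp,rtp -i caster1.sdp \
           -protocol_whitelist file,udp,rtp -i caster2.sdp \
           -filter_complex "[0:v][1:v]hstack=inputs=2[v]" \
           -map "[v]" -c:v libx264 -preset veryfast \
           -f hls -hls_time 4 -hls_list_size 5 stream.m3u8

    hstack splits the output horizontally for the two-caster case described above; Janus's RTP forwarders would feed the ports referenced in the SDP files.
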

  • YouTube-DL Python: details of extracted audio file are not displayed

16 September 2020, by Sushil

    I have written a small piece of code in Python to extract the audio from a YouTube video. Here is the code:

from __future__ import unicode_literals
import youtube_dl

link = input("Enter the video link:")

# Download the best available audio and convert it to a 192 kbps MP3.
ydl_opts = {
    'format': 'bestaudio/best',
    'postprocessors': [{
        'key': 'FFmpegExtractAudio',
        'preferredcodec': 'mp3',
        'preferredquality': '192',
    }],
}

# First pass: metadata only (download=False), to get the video title.
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    info_dict = ydl.extract_info(link, download=False)
    video_title = info_dict.get('title', None)

path = f'D:\\{video_title}.mp3'

# Second pass: download to a path built from the title.
ydl_opts.update({'outtmpl': path})

with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download([link])

    This is the folder where the output audio file is saved:
    [Screenshot: output file folder]

    As you can see, all the details of the audio file are displayed, such as Date Modified, Type and Size.

    However, if I change path = f'D:\\{video_title}.mp3' to path = f'D:\\YT_Files\\{video_title}.mp3', the file details are no longer displayed.
    [Screenshot: output file folder]

    Any idea why this is so? Is there any way to solve this problem? Any help would be appreciated. Thanks.
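
    A minimal sketch of a variant worth testing (an assumption, not part of the original question): let youtube-dl's outtmpl template supply the extension instead of hardcoding .mp3, so the FFmpegExtractAudio postprocessor can rename the converted file itself:

import youtube_dl

link = input("Enter the video link:")

ydl_opts = {
    'format': 'bestaudio/best',
    # %(ext)s lets youtube-dl manage the extension; the postprocessor
    # renames the result to .mp3 after conversion.
    'outtmpl': 'D:\\YT_Files\\%(title)s.%(ext)s',
    'postprocessors': [{
        'key': 'FFmpegExtractAudio',
        'preferredcodec': 'mp3',
        'preferredquality': '192',
    }],
}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download([link])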

  • Audio offset drifts after some time when streaming audio in chunks

7 September 2024, by Antoine Grenard

    I use microsoft-cognitiveservices-speech-sdk (1.38.0) to do real-time speech-to-text.
The offset seems right when I send the audio as one piece, but wrong when I send it cut into many small chunks.

    The more audio chunks there are, the more inaccurate the offset becomes:

    • No chunks: 1 726 300 000
    • 369 chunks of 0.5 seconds: 1 729 600 000
    • 923 chunks of 0.2 seconds: 1 744 600 000
    • 1443 chunks of 0.1 seconds: 1 757 900 000

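    Assuming these offsets are in the Speech SDK's usual 100-nanosecond ticks, the total drift at 1443 chunks is 1 757 900 000 − 1 726 300 000 = 31 600 000 ticks, roughly 3.16 seconds, i.e. about 2.2 ms of extra offset per chunk written.
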
    To reproduce, here is some code:

    const speechConfig = SpeechConfig.fromSubscription(<key>, <region>);
    // ... (push-stream and recognizer setup not preserved; pushStream below
    // is the push audio input stream feeding speechRecognizer) ...
    speechRecognizer.recognized = async (recognizer, event) => { console.log(event) }  // handler name reconstructed
    speechRecognizer.canceled = async (recognizer, event) => { console.log(event) }
    speechRecognizer.startContinuousRecognitionAsync();

    for (let i = 1; i <= 1443; i++) {
      const formattedNumber = i.toString().padStart(4, '0');
      const buffer = fs.readFileSync(`/var/tmp/chunks/output_${formattedNumber}.wav`);
      pushStream.write(buffer);
    }

    To create the audio chunks:

    ffmpeg -i <input.wav> -f segment -segment_time 0.1 -c copy output_%04d.wav


    Here is the audio link: https://drive.google.com/file/d/1H_RJuqMiBaVkpo9XHrgp1bpuFdgQl64O/view?usp=sharing


    Thanks for your help
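
    One theory a sketch could test (an assumption, not confirmed in the question): ffmpeg's segment muxer writes a full RIFF/WAV header into every chunk, and a push stream expecting one continuous WAV payload would treat each extra header as a few extra bytes of audio, nudging every later offset forward. Stripping the header from every chunk after the first isolates that effect:

    // Variant of the loop above: keep the header of the first chunk only,
    // so the stream carries one WAV header followed by uninterrupted samples.
    // (Assumes the chunks use the canonical 44-byte PCM WAV header.)
    for (let i = 1; i <= 1443; i++) {
      const formattedNumber = i.toString().padStart(4, '0');
      const buffer = fs.readFileSync(`/var/tmp/chunks/output_${formattedNumber}.wav`);
      pushStream.write(i === 1 ? buffer : buffer.subarray(44));
    }

    At 16 kHz, 16-bit mono, 44 bytes is about 1.4 ms per chunk, the same order of magnitude as the ~2.2 ms-per-chunk drift computed above.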
