
Other articles (15)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (5923)

  • FFMPEG audio not lining up

    31 August 2020, by Chris Kooken

    I am using OpenTok to build a live video platform. It generates webm files from each user's stream.

    


    I am using FFmpeg to convert webm (WebRTC) videos to MP4s to edit in my NLE. The problem I am having is that my audio is drifting. I THINK it is because the user pauses the audio during the stream. This is the command I'm running:

    


    ffmpeg -acodec libopus -i 65520df3-1033-480e-adde-1856d18e2352.webm  -max_muxing_queue_size 99999999 65520df3-1033-480e-adde-1856d18e2352.webm.new.mp4


    


    The problem, I think, is that whenever the user muted themselves, there are no frames, but the PTS is intact.
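
    One way to confirm this is to dump the audio frame timestamps and look for a jump between consecutive values (a quick sketch with ffprobe; adjust the stream selector for your own files):

    # List the pts_time of every audio frame; a large jump between two
    # consecutive values marks the span where the user was muted.
    ffprobe -v quiet -select_streams a:0 \
      -show_entries frame=pts_time -of csv=p=0 \
      65520df3-1033-480e-adde-1856d18e2352.webm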

    


    This is from the OpenTok documentation (my WebRTC platform):

    


    


    Audio and video frames may not arrive with monotonic timestamps; frame rates are not always consistent. This is especially relevant if either the video or audio track is disabled for a time, using one of the publishVideo or publishAudio publisher properties.

    


    


    


    Frame presentation timestamps (PTS) are written based on NTP timestamps taken at the time of capture, offset by the timestamp of the first received frame. Even if a track is muted and later unmuted, the timestamp offset should remain consistent throughout the duration of the entire stream. When decoding in post-processing, a gap in PTS between consecutive frames will exist for the duration of the track mute: there are no "silent" frames in the container.

    


    


    How can I convert these files and have them play in sync? Note: when I play them in QuickTime or VLC, the files are synced correctly.

    


    EDIT
    I've gotten pretty close with this command:

    


     ffmpeg -acodec libopus -i $f -max_muxing_queue_size 99999999 -vsync 1 -af aresample=async=1 -r 30 $f.mp4


    


    But every once in a while I get a video where the audio starts right away, and the person won't actually be talking until halfway through the video. My guess is they muted themselves during the video conference... so in some cases the audio is 5-10 minutes ahead. Again, it plays fine in QuickTime, but pulled into my NLE it's way off.
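
    A further tweak that might handle that case (an untested sketch; with async resampling, first_pts=0 asks aresample to pad the start of the audio with silence back to timestamp 0 instead of letting it shift earlier):

    # Same command as above, but anchor the audio to PTS 0 so a leading
    # mute becomes silence rather than an offset.
    ffmpeg -acodec libopus -i $f -max_muxing_queue_size 99999999 \
      -af "aresample=async=1:first_pts=0" -vsync 1 -r 30 $f.mp4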

    


  • How to process remote audio/video stream on WebRTC server in real-time? [closed]

    7 September 2020, by Kartik Rokde

    I'm new to audio/video streaming. I'm using AntMedia Pro for audio/video conferencing. There will be 5-8 hosts speaking, and the expected audience size is 15-20k (worth mentioning because it won't be P2P conferencing, but an MCU architecture).

    


    I want to offer a feature where a user can request "convert voice to female / robot / whatever", which would let them hear the manipulated voice in the conference.

    


    From what I know, I need to do real-time processing on the server to be able to do this. I want to intercept the stream on the server, do some processing (change the voice) on each of the tracks, and stream it back to the requestor.

    


    The first challenge I'm facing is how to get the stream and/or the individual tracks on the server?

    


    I did some research on how to process remote WebRTC streams in real time on the server, and came across keywords like RTMP ingestion and ffmpeg.

    


    Here are a few questions I went through, but they didn't have the answers I'm looking for:

    


      

    1. Receive webRTC video stream using python opencv in real-time
    2. Extract frames as images from an RTMP stream in real-time
    3. android stream real time video to streaming server
    


    I need help receiving the real-time stream on the server (any technology, preferably Python or Golang) and streaming it back.
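
    For what it's worth, one server-side approach would be to pull the published stream over RTMP, run an audio filter to alter the voice, and push the result back as a new stream. A rough sketch (the RTMP URLs are placeholders and the actual pull/push endpoints exposed by AntMedia may differ; asetrate raises the pitch, aresample restores the sample rate, atempo compensates the speed):

    # Pull the original stream, shift the voice pitch up, and re-publish it
    # under a new stream key that the requesting viewer can subscribe to.
    ffmpeg -i rtmp://media.example.com/live/original-stream \
      -af "asetrate=48000*1.25,aresample=48000,atempo=0.8" \
      -c:v copy -c:a aac \
      -f flv rtmp://media.example.com/live/processed-stream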

    


  • combine 4 videos with delay side by side using ffmpeg

    9 January 2021, by MeTe-30

    I have at least 4 videos from a video conference created by meetecho/janus-gateway.
    Janus creates two mjr files (video and audio) for each user; first I merged them into one webm file, then converted all of them to 500*500 videos.
    Now I'm trying to combine these videos like a mosaic and found this code:

    


    ffmpeg -i 1.webm -i 2.webm -i 3.webm -i 4.webm \
-speed 8 -deadline realtime -filter_complex "[0]pad=2*iw:2*ih[l]; \
[1]setpts=PTS-STARTPTS+428/TB[1v]; [l][1v]overlay=x=W/2[a]; \
[2]setpts=PTS-STARTPTS+439/TB[2v]; [a][2v]overlay=y=H/2[b]; \
[3]setpts=PTS-STARTPTS+514/TB[3v]; [b][3v]overlay=y=H/2:x=W/2[v]; \
[1]adelay=428372|428372[1a]; \
[2]adelay=439999|439999[2a]; \
[3]adelay=514589|514589[3a]; \
[0][1a][2a][3a]amix=inputs=4[a]" \
-map "[v]" -map "[a]" merged.webm


    


    I calculated the delay times from the creation dateTime of each file, relative to the first video.
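
    (Something like the following reads those creation times; a sketch that assumes the webm files carry a creation_time format tag. Subtracting the first file's value gives each offset, e.g. a difference of 428.372 s becomes setpts=PTS-STARTPTS+428/TB for the video and adelay=428372 for the audio, in milliseconds.)

    # Print the creation_time tag of each input; the delays are the
    # differences relative to the first file's value.
    for f in 1.webm 2.webm 3.webm 4.webm; do
      ffprobe -v quiet -show_entries format_tags=creation_time \
        -of default=noprint_wrappers=1:nokey=1 "$f"
    done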

    


    My problems:

    


      

    1. This code is not working! After minutes of the console freezing, it shows this line:

       Killed 29 fps=0.1 q=0.0 size= 1kB time=00:04:30.07 bitrate= 0.0kbits/s speed=0.896x

    2. I didn't find out the meaning of the letters before and after overlay: [1v][2v][3v][l][a][b][v]...