
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (104)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Submit bugs and patches
10 April 2011
Software is unfortunately never perfect...
If you think you have found a bug, report it in our ticket system, taking care to send us some relevant information: the type and exact version of the browser with which you encountered the anomaly; as precise a description as possible of the problem; if possible, the steps to reproduce it; a link to the site/page in question;
If you think you have fixed the bug yourself (...)
-
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (10793)
-
videostreaming in java with bytedeco maven dependencies
22 October 2020, by faenschi
We are trying to set up a video streaming service in Java. Our application runs in a Docker container on OpenShift, so we install FFmpeg in the Dockerfile.


In Java we use the bytedeco Maven dependencies, but I'm really not sure which dependency version works with which FFmpeg version installed in the container. Is there an overview somewhere? Does anybody have a similar setup?


At the moment we use the 1.5.4 Maven dependencies and FFmpeg v4.3.1 in the container:


<dependency>
  <groupId>org.bytedeco</groupId>
  <artifactId>javacv-platform</artifactId>
  <version>1.5.4</version>
</dependency>

<dependency>
  <groupId>org.bytedeco</groupId>
  <artifactId>javacpp-platform</artifactId>
  <version>1.5.4</version>
</dependency>

<dependency>
  <groupId>org.bytedeco</groupId>
  <artifactId>ffmpeg-platform</artifactId>
  <version>4.3.1-1.5.4</version>
</dependency>



This does not work: the pod crashes as soon as I start the streaming. If I use FFmpeg v3.x with an older CentOS base image and the dependencies above, the pod does not crash, but the streaming does not work either.


Any help appreciated,
angela
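
One way to see which FFmpeg build the bytedeco 1.5.4 artifacts actually bundle is to ask the loaded natives at runtime. A minimal sketch, assuming the three dependencies above (the class name is hypothetical):

import org.bytedeco.ffmpeg.global.avutil;
import org.bytedeco.javacpp.Loader;

public class BundledFfmpegCheck {
    public static void main(String[] args) {
        // Force-load the natives shipped inside ffmpeg-platform, so this
        // reports the bundled build, not the FFmpeg installed in the container.
        Loader.load(avutil.class);
        int v = avutil.avutil_version();
        // libavutil packs its version as major<<16 | minor<<8 | micro;
        // FFmpeg 4.x ships libavutil 56.x.
        System.out.printf("bundled libavutil %d.%d.%d%n",
                v >> 16, (v >> 8) & 0xFF, v & 0xFF);
    }
}

Worth noting: ffmpeg-platform 4.3.1-1.5.4 ships its own FFmpeg 4.3.1 native libraries, so the FFmpeg installed via the Dockerfile only matters if the application also shells out to the ffmpeg binary; keeping the two at the same major version seems the safer choice.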


-
FFMPEG audio not lining up
31 August 2020, by Chris Kooken
I am using OpenTok to build a live video platform. It generates WebM files from each user's stream.


I am using FFmpeg to convert WebM (WebRTC) videos to MP4s to edit in my NLE. The problem I am having is that my audio is drifting. I think it is because the user pauses the audio during the stream. This is the command I'm running:


ffmpeg -acodec libopus -i 65520df3-1033-480e-adde-1856d18e2352.webm -max_muxing_queue_size 99999999 65520df3-1033-480e-adde-1856d18e2352.webm.new.mp4



The problem, I think, is that whenever the user muted themselves, no frames were written, but the PTS remains intact.


This is from the OpenTok documentation (my WebRTC platform):

Audio and video frames may not arrive with monotonic timestamps; frame rates are not always consistent. This is especially relevant if either the video or audio track is disabled for a time, using one of the publishVideo or publishAudio publisher properties.

Frame presentation timestamps (PTS) are written based on NTP timestamps taken at the time of capture, offset by the timestamp of the first received frame. Even if a track is muted and later unmuted, the timestamp offset should remain consistent throughout the duration of the entire stream. When decoding in post-processing, a gap in PTS between consecutive frames will exist for the duration of the track mute: there are no "silent" frames in the container.
How can I convert these files and have them play in sync? Note that when I play them in QuickTime or VLC, the files are synced correctly.
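
A hedged way to verify the gap (this diagnostic is not part of the original post) is to dump the audio packet timestamps and look for a jump between consecutive packets:

ffprobe -v error -select_streams a:0 -show_entries packet=pts_time -of csv=p=0 65520df3-1033-480e-adde-1856d18e2352.webm

A mute should show up as two neighbouring pts_time values that differ by the length of the mute rather than by one packet duration.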


EDIT
I've gotten pretty close with this command:


ffmpeg -acodec libopus -i $f -max_muxing_queue_size 99999999 -vsync 1 -af aresample=async=1 -r 30 $f.mp4



But every once in a while, I get a video where the audio starts right away, and they won't actually be talking until halfway through the video. My guess is they muted themselves during the video conference... so in some cases the audio is 5-10 minutes ahead. Again, it plays fine in QuickTime, but pulled into my NLE, it's way off.
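
If the leading offset comes from the first audio packet not starting at t=0 (consistent with a mute at the very start of the conference), one variant worth trying, offered here as a suggestion rather than a command from the post, pins the first PTS to zero so aresample pads silence from the start:

ffmpeg -acodec libopus -i $f -max_muxing_queue_size 99999999 -vsync 1 -af aresample=async=1:first_pts=0 -r 30 $f.mp4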

