
Other articles (61)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources, as a standalone version.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...) -
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, as a standalone version.
For a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...) -
The farm's regular Cron tasks
1 December 2010
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of all the farm's instances on a regular basis. Combined with a system Cron on the central site of the farm, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
On other sites (11320)
-
Add PNG image overlay to video starting with non-key frame
8 May 2020, by RamRick
Alright, so I'm trying to edit a bunch of short to very long videos.
The goal is to mute a 1-minute part within each video and overlay that part with a PNG image for its full duration.



Because this all has to be done as fast as possible, I figured the best approach would be to split every video into 3 parts, mute and overlay part #2, and then put all 3 parts back together.



The first problem I ran into was that parts #2 and #3 always started with a 1-2 second video freeze, while the audio was fine.



I found out that this was caused by the cuts landing on key frames and ignoring the non-key frames at the beginning of each part.
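
Side note: ffprobe can list where the key frames actually fall, so the cut times can be checked against them. A sketch, assuming the same input file as below:

ffprobe -v error -select_streams v:0 -skip_frame nokey \
 -show_entries frame=pts_time,pict_type -of csv "in.mp4"

With -skip_frame nokey the decoder emits only key frames, so each output line is one key frame with its timestamp.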



I fixed that by adding the -copyinkf parameter to also copy all non-key frames, like so:


ffmpeg -i "in.mp4" \
-to 0:04:00 -c:v copy -c:a copy -copyinkf "p1.mp4" \
-ss 0:04:00 -to 0:05:00 -c:v copy -c:a copy -copyinkf "p2.mp4" \
-ss 0:05:00 -c:v copy -c:a copy -copyinkf "p3.mp4"




So I proceeded to mute part #2, like so:



ffmpeg -i "p2.mp4" -af "volume=0" -c:v copy -copyinkf "p2_muted.mp4"




All fine up to this point, and if I put the parts back together right now, the result is quick and accurate, like so:



parts.txt:
file 'p1.mp4'
file 'p2_muted.mp4'
file 'p3.mp4'

ffmpeg -f concat -safe 0 -i parts.txt -c copy "out.mp4"




But now comes the problem: since I have to re-encode part #2 to overlay the PNG image, I get the 1-2 second freeze at the beginning again, and I can't for the life of me figure out the "equivalent" of -copyinkf for re-encoding a video.


Right now I'm overlaying the PNG image like so:



ffmpeg -i "p2_muted.mp4" -i "../banner.png" -filter_complex "[1][0]scale2ref[i][v];[v][i]overlay" -c:a copy "p2_edited.mp4"




So my question is: what would the equivalent of -copyinkf be here? Or, if there is a better way than my approach, what would that be to achieve my task?
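
For what it's worth, one way to avoid needing a -copyinkf equivalent at all is to cut part #2 directly from the source while re-encoding, applying the mute and the overlay in the same pass; since every frame is re-encoded, the segment then starts on a clean key frame. A sketch, assuming the same timestamps and banner path as above:

ffmpeg -ss 0:04:00 -t 60 -i "in.mp4" -i "../banner.png" \
 -filter_complex "[1][0:v]scale2ref[i][v];[v][i]overlay[vout];[0:a]volume=0[aout]" \
 -map "[vout]" -map "[aout]" -c:v libx264 -c:a aac "p2_edited.mp4"

With -ss placed before -i, ffmpeg seeks the input and decodes forward from the nearest prior key frame, so the first output frame is fully decoded rather than frozen.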

-
Permissions issue with Python and ffmpeg on a Mac
13 April 2020, by EventHorizon
I am fairly new to Python (4 weeks), and I have been struggling with this all day.



I am using macOS 10.13, Python 3.7 via Anaconda Navigator 1.9.12, and Spyder 4.0.1.



Somehow (only a noob, remember) I had 2 Anaconda environments. I don't do production code, just research, so I figured I would make life simple and just use the base environment. I deleted the other environment.



I had previously got FFmpeg working and was able to do frame grabs, build mpeg animations, and convert them to gifs for blog posts and such. I had FFmpeg installed in the directories associated with the deleted environment, so it went away.



No worries, I got the git URL, used Terminal to install it in /opt/anaconda3/bin. It's all there and I can run FFmpeg from the Terminal.



My problem: when I attempt to run a module that previously worked fine, I get the following message:



[Errno 13] Permission denied: '/opt/anaconda3/bin/ffmpeg'



In my module I set the default location of FFmpeg: plt.rcParams['animation.ffmpeg_path'] = '/opt/anaconda3/bin/ffmpeg'



In my module I have the following lines:



writer = animation.FFMpegWriter(fps=frameRate, metadata=metadata)
writer.setup(fig, "animation.mp4", 100)




This calls matplotlib's 'animation.py', which runs the following:



def setup(self, fig, outfile, dpi=None):
    '''
    Perform setup for writing the movie file.

    Parameters
    ----------
    fig : `~matplotlib.figure.Figure`
        The figure object that contains the information for frames
    outfile : str
        The filename of the resulting movie file
    dpi : int, optional
        The DPI (or resolution) for the file. This controls the size
        in pixels of the resulting movie file. Default is fig.dpi.
    '''
    self.outfile = outfile
    self.fig = fig
    if dpi is None:
        dpi = self.fig.dpi
    self.dpi = dpi
    self._w, self._h = self._adjust_frame_size()

    # Run here so that grab_frame() can write the data to a pipe. This
    # eliminates the need for temp files.
    self._run()

def _run(self):
    # Uses subprocess to call the program for assembling frames into a
    # movie file. *args* returns the sequence of command line arguments
    # from a few configuration options.
    command = self._args()
    _log.info('MovieWriter.run: running command: %s', command)
    PIPE = subprocess.PIPE
    self._proc = subprocess.Popen(
        command, stdin=PIPE, stdout=PIPE, stderr=PIPE,
        creationflags=subprocess_creation_flags)



Everything works fine up to the last line (i.e. 'command' looks like a well-formatted FFmpeg command line, PIPE returns -1) but subprocess.Popen() bombs out with the error message above.



I have tried changing file permissions, taking a sledgehammer approach and setting everything in /opt/anaconda3/bin/ffmpeg to 777: read, write, and execute. But that doesn't seem to make any difference. I really am clueless when it comes to Apple's OS, file permissions, etc. Any suggestions?
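
One way to narrow down an Errno 13 like this is to check, from Python, whether the path set in rcParams is really a regular executable file; if the git checkout put a source tree (rather than a built binary) at /opt/anaconda3/bin/ffmpeg, executing it raises exactly this error. A sketch, assuming the same path as above:

import os
import subprocess

ffmpeg_path = '/opt/anaconda3/bin/ffmpeg'  # same path as set in rcParams

print(os.path.isfile(ffmpeg_path))         # False here means a directory or missing file
print(os.access(ffmpeg_path, os.X_OK))     # False here means no execute permission for this user
subprocess.run([ffmpeg_path, '-version'])  # try launching it directly, outside matplotlib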


-
Live streaming: node-media-server + Dash.js configured for real-time low latency
7 July 2021, by Maoration
We're working on an app that enables live monitoring of your backyard.
Each client has a camera connected to the internet, streaming to our public node.js server.



I'm trying to use node-media-server to publish an MPEG-DASH (or HLS) stream to be available for our app clients, on different networks, bandwidths and resolutions around the world.



Our goal is to get as close as possible to live "real-time" so you can monitor what happens in your backyard instantly.



The technical flow already in place is:



-
An ffmpeg process on our server processes the incoming camera stream (a separate child process for each camera) and publishes it via RTMP on the local machine for node-media-server to use as an 'input' (we are also saving segmented files, generating thumbnails, etc.). The ffmpeg command responsible for that is (a low-latency variation is sketched just after this list):

-c:v libx264 -preset ultrafast -tune zerolatency -b:v 900k -f flv rtmp://127.0.0.1:1935/live/office
-
node-media-server is running with what I found to be the default configuration for live streaming:



private NMS_CONFIG = {
  server: {
    secret: 'thisisnotmyrealsecret',
  },
  rtmp_server: {
    rtmp: {
      port: 1935,
      chunk_size: 60000,
      gop_cache: false,
      ping: 60,
      ping_timeout: 30,
    },
    http: {
      port: 8888,
      mediaroot: './server/media',
      allow_origin: '*',
    },
    trans: {
      ffmpeg: '/usr/bin/ffmpeg',
      tasks: [
        {
          app: 'live',
          hls: true,
          hlsFlags: '[hls_time=2:hls_list_size=3:hls_flags=delete_segments]',
          dash: true,
          dashFlags: '[f=dash:window_size=3:extra_window_size=5]',
        },
      ],
    },
  },
};
-
As I understand it, out of the box NMS (node-media-server) publishes the input stream in multiple output formats: flv, mpeg-dash, and hls. With all sorts of online players for these formats I'm able to access the stream using the localhost URL. With mpeg-dash and hls I'm getting anything between 10-15 seconds of delay, or more.
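
For the encoding step in the first item, a possible low-latency variation of the ffmpeg command is sketched below; the input URL is a placeholder, and the added -g/-keyint_min/-sc_threshold options force short, fixed GOPs so the packager can cut shorter segments:

ffmpeg -i rtmp://CAMERA-SOURCE-URL \
 -c:v libx264 -preset ultrafast -tune zerolatency \
 -g 30 -keyint_min 30 -sc_threshold 0 \
 -b:v 900k -f flv rtmp://127.0.0.1:1935/live/office

Here -g 30 means one key frame every 30 frames (one per second at 30 fps), and -sc_threshold 0 disables extra scene-cut key frames so the GOP length stays fixed.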











My goal now is to implement a local client-side mpeg-dash player using dash.js, configured to be as close as possible to live.



My code for that is:







 
 
 
 
 
<div>
 <video id="videoPlayer" autoplay controls></video>
</div>

<script src="https://cdnjs.cloudflare.com/ajax/libs/dashjs/3.0.2/dash.all.min.js"></script>

<script>
(function(){
 // var url = "https://dash.akamaized.net/envivio/EnvivioDash3/manifest.mpd";
 var url = "http://localhost:8888/live/office/index.mpd";
 var player = dashjs.MediaPlayer().create();

 // config
 targetLatency = 2.0;       // Lowering this value will lower latency but may decrease the player's ability to build a stable buffer.
 minDrift = 0.05;           // Minimum latency deviation allowed before activating catch-up mechanism.
 catchupPlaybackRate = 0.5; // Maximum catch-up rate, as a percentage, for low latency live streams.
 stableBuffer = 2;          // The time that the internal buffer target will be set to post startup/seeks (NOT top quality).
 bufferAtTopQuality = 2;    // The time that the internal buffer target will be set to once playing the top quality.

 player.updateSettings({
  'streaming': {
   'liveDelay': 2,
   'liveCatchUpMinDrift': 0.05,
   'liveCatchUpPlaybackRate': 0.5,
   'stableBufferTime': 2,
   'bufferTimeAtTopQuality': 2,
   'bufferTimeAtTopQualityLongForm': 2,
   'bufferToKeep': 2,
   'bufferAheadToKeep': 2,
   'lowLatencyEnabled': true,
   'fastSwitchEnabled': true,
   'abr': {
    'limitBitrateByPortal': true
   },
  }
 });

 console.log(player.getSettings());

 setInterval(() => {
  console.log('Live latency= ', player.getCurrentLiveLatency());
  console.log('Buffer length= ', player.getBufferLength('video'));
 }, 3000);

 player.initialize(document.querySelector("#videoPlayer"), url, true);
})();
</script>

With the online test video (https://dash.akamaized.net/envivio/EnvivioDash3/manifest.mpd) I see a live latency value close to 2 seconds (but I have no way to actually confirm it, since it's a streamed video file; in my office I have a camera, so there I can actually compare latency between real life and the stream I get).
However, when working locally with my NMS, this value does not want to go below 20-25 seconds.
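
For context: 2-second segments with a 3-segment window already imply several seconds of inherent delay, so the 20-25 seconds most likely comes from segment duration plus player-side buffering. One thing to experiment with, assuming NMS passes these flags straight through to ffmpeg's dash muxer, is shorter, chunked segments in dashFlags:

dashFlags: '[f=dash:seg_duration=2:window_size=3:extra_window_size=5:streaming=1]'

seg_duration and streaming are ffmpeg dash muxer options; note that a segment can never be shorter than the encoder's GOP, so this only helps together with a short key frame interval on the publishing side.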



Am I doing something wrong? Is there any configuration on the player (client-side HTML) that I'm forgetting?
Or is there a missing configuration I should add on the server side (NMS)?


-