
Media (91)
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
-
USGS Real-time Earthquakes
8 September 2011
Updated: September 2011
Language: French
Type: Text
-
SWFUpload Process
6 September 2011
Updated: September 2011
Language: French
Type: Text
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
-
Podcasting Legal Guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
-
Creative Commons informational flyer
16 May 2011
Updated: July 2013
Language: English
Type: Text
Other articles (48)
-
Customizable form
21 June 2013. This page presents the fields available in the publication form for a media item and indicates the additional fields that can be added. Media creation form
For a media-type document, the fields offered by default are: Text; Enable/disable the forum (the invitation to comment can be disabled for each article); Licence; Add/remove authors; Tags.
This form can be modified under:
Administration > Configuration des masques de formulaire. (...) -
Improving the base version
13 September 2013. Nicer multiple selection
The Chosen plugin improves the ergonomics of multiple-selection fields. See the two images below for a comparison.
To do so, simply activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...) -
What is a form mask?
13 June 2013. A form mask is the customization of the publication form for media items, sections, news items, editorials and links to other sites.
Each object's publication form can therefore be customized.
To access the customization of form fields, go to the administration area of your MediaSPIP and select "Configuration des masques de formulaires".
Then select the form to modify by clicking on its object type. (...)
On other sites (8135)
-
FFmpeg: Encode x264 with AMD GPU on Windows?
20 September 2023, by ZeroTek. I am currently trying to record a video on my Lenovo laptop with its built-in webcam, using FFmpeg on Windows 10. One of my goals is to keep CPU usage as low as possible, which is why I want to offload the H.264 encoding to the GPU.
Things get a bit tricky with my laptop, because it has two GPUs. The first is an Intel HD 5500 graphics unit that is part of the CPU; it is most likely used for non-demanding applications such as office work, to save energy. The other is an AMD R5 M330, which is used for graphics-intensive applications such as gaming.



Currently, I am using the following command to encode the webcam stream on the Intel HD GPU:



ffmpeg -f dshow -vcodec mjpeg -video_size 1280x720 -framerate 30 -i video="Lenovo EasyCamera":audio="Mikrofon (Realtek High Definition Audio)" -c:v h264_qsv -g 60 -q 28 -look_ahead 0 -preset:v faster -c:a aac -q:a 0.6 -r 30 output.mp4




This works so far, but it seems this GPU does not have enough power to keep up with the framerate at higher bitrates or with a high number of I-frames: the video starts lagging and skipping frames. If I use CPU encoding, everything runs smoothly.



Now that my laptop has that second AMD GPU with a lot more power, it would be nice to try encoding on that one, but I can't find any information about how to encode on AMD hardware on Windows 10. So my question is: what does the ffmpeg command look like to use AMD hardware for H.264 encoding?
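For reference, FFmpeg builds compiled with AMD's AMF SDK expose an h264_amf encoder on Windows. A minimal sketch of the same capture command switched to that encoder, assuming such a build and reusing the device names and GOP setting from the question (the bitrate here is an arbitrary placeholder):

ffmpeg -f dshow -vcodec mjpeg -video_size 1280x720 -framerate 30 -i video="Lenovo EasyCamera":audio="Mikrofon (Realtek High Definition Audio)" -c:v h264_amf -b:v 4M -g 60 -c:a aac -q:a 0.6 -r 30 output.mp4

Whether the encode actually runs on the discrete R5 M330 rather than the integrated GPU depends on the driver's GPU selection, so this is a starting point rather than a verified recipe.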


-
images -> video -> web canvas: RGB/YUV issues
5 February 2016, by nrob. We've written a web app which:
- takes 3D, time-dependent weather data
- tiles each 3D time point to make a 2D frame (written out as a png image)
- stitches these frames together into a video (using ffmpeg/avconv)
- streams this video into a web app
- polls the canvas for frames
- sends the frames to the GPU where they are converted back to 3D and ray traced
You can see the app here, code here and you can see the data video here
Currently the PNGs are written as RGB images, the video codec works in YUV, and getting frames back from the canvas returns RGB. As such there is a significant loss of information due to the conversions between colour spaces.
Does anyone have suggestions for the best way around this?
I've tried a bunch of RGB video codecs, but I can't get any of them to work, and I don't know whether the web browser would support them anyway. Can anyone suggest a good RGB codec (both lossy and lossless would be great)?
Also, is it possible to write YUV images / read them back from a video canvas in HTML5?
Ultimately, I don't even want anything to do with images/videos; I'm just hacking the codecs to stream/compress large animated 3D data volumes.
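If the goal is simply to avoid the lossy RGB-to-YUV round trip in the intermediate file, one possible route is an RGB-capable lossless encoder such as FFmpeg's libx264rgb. A minimal sketch, with the input filename pattern chosen here as a placeholder:

ffmpeg -framerate 30 -i tile_%04d.png -c:v libx264rgb -crf 0 -preset veryslow tiles_rgb.mkv

Note that browsers generally cannot decode RGB-coded H.264 in a video element, so a sketch like this only helps on the offline side of the pipeline, not for in-browser playback.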
-
Live streaming: node-media-server + Dash.js configured for real-time low latency
7 July 2021, by Maoration. We're working on an app that enables live monitoring of your backyard.
Each client has a camera connected to the internet, streaming to our public node.js server.



I'm trying to use node-media-server to publish an MPEG-DASH (or HLS) stream that will be available to our app clients, on different networks, bandwidths and resolutions around the world.



Our goal is to get as close as possible to live "real-time" so you can monitor what happens in your backyard instantly.



The technical flow we have already accomplished is:



-
An ffmpeg process on our server processes the incoming camera stream (a separate child process for each camera) and publishes the stream via RTSP on the local machine for node-media-server to use as an 'input' (we are also saving segmented files, generating thumbnails, etc.). The ffmpeg command responsible for that is:



-c:v libx264 -preset ultrafast -tune zerolatency -b:v 900k -f flv rtmp://127.0.0.1:1935/live/office
-
node-media-server is running with what I found to be the default configuration for 'live streaming':



private NMS_CONFIG = {
  server: {
    secret: 'thisisnotmyrealsecret',
  },
  rtmp_server: {
    rtmp: {
      port: 1935,
      chunk_size: 60000,
      gop_cache: false,
      ping: 60,
      ping_timeout: 30,
    },
    http: {
      port: 8888,
      mediaroot: './server/media',
      allow_origin: '*',
    },
    trans: {
      ffmpeg: '/usr/bin/ffmpeg',
      tasks: [
        {
          app: 'live',
          hls: true,
          hlsFlags: '[hls_time=2:hls_list_size=3:hls_flags=delete_segments]',
          dash: true,
          dashFlags: '[f=dash:window_size=3:extra_window_size=5]',
        },
      ],
    },
  },
};
-
As I understand it, out of the box NMS (node-media-server) publishes the input stream it receives in multiple output formats: FLV, MPEG-DASH and HLS.
With all sorts of online players for these formats I'm able to access the stream using the localhost URL. With MPEG-DASH and HLS I'm getting anywhere between 10 and 15 seconds of delay, and more.
My goal now is to implement a local client-side MPEG-DASH player using dash.js, and to configure it to be as close to live as possible.

My code for that is:
<div>
  <video id="videoPlayer" autoplay controls></video>
</div>

<script src="https://cdnjs.cloudflare.com/ajax/libs/dashjs/3.0.2/dash.all.min.js"></script>
<script>
  (function () {
    // var url = "https://dash.akamaized.net/envivio/EnvivioDash3/manifest.mpd";
    var url = "http://localhost:8888/live/office/index.mpd";
    var player = dashjs.MediaPlayer().create();

    // config
    targetLatency = 2.0;        // Lowering this value will lower latency but may decrease the player's ability to build a stable buffer.
    minDrift = 0.05;            // Minimum latency deviation allowed before activating catch-up mechanism.
    catchupPlaybackRate = 0.5;  // Maximum catch-up rate, as a percentage, for low latency live streams.
    stableBuffer = 2;           // The time that the internal buffer target will be set to post startup/seeks (NOT top quality).
    bufferAtTopQuality = 2;     // The time that the internal buffer target will be set to once playing the top quality.

    player.updateSettings({
      'streaming': {
        'liveDelay': 2,
        'liveCatchUpMinDrift': 0.05,
        'liveCatchUpPlaybackRate': 0.5,
        'stableBufferTime': 2,
        'bufferTimeAtTopQuality': 2,
        'bufferTimeAtTopQualityLongForm': 2,
        'bufferToKeep': 2,
        'bufferAheadToKeep': 2,
        'lowLatencyEnabled': true,
        'fastSwitchEnabled': true,
        'abr': {
          'limitBitrateByPortal': true
        },
      }
    });

    console.log(player.getSettings());

    setInterval(() => {
      console.log('Live latency= ', player.getCurrentLiveLatency());
      console.log('Buffer length= ', player.getBufferLength('video'));
    }, 3000);

    player.initialize(document.querySelector("#videoPlayer"), url, true);
  })();
</script>
With the online test video (https://dash.akamaized.net/envivio/EnvivioDash3/manifest.mpd) I see that the live latency value is close to 2 seconds (but I have no way to actually confirm it, since that is a streamed video file; in my office I have a camera, so there I can actually compare the latency between real life and the stream I get).
However, when working locally with my NMS, this value does not seem to want to go below 20-25 seconds.
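One factor worth keeping in mind here is segment duration: DASH and HLS players normally buffer a few whole segments before they start playing, so with roughly 2-second HLS segments, the DASH muxer's default 5-second segments and a 3-segment window in the NMS configuration above, double-digit latency is expected rather than surprising. As a rough illustration only, a standalone ffmpeg invocation that makes the DASH segment duration explicit and enables streaming mode might look like the following (the input URL and output path are reused from the configuration above; the option values are untested assumptions and depend on the ffmpeg version):

ffmpeg -i rtmp://127.0.0.1:1935/live/office -c copy -f dash -seg_duration 2 -window_size 3 -extra_window_size 5 -streaming 1 ./server/media/live/office/index.mpd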



Am I doing something wrong? Is there any configuration on the player (the client-side HTML) that I'm forgetting, or is there a missing configuration I should add on the server side (NMS)?


-