
Other articles (69)
-
Improvement of the base version
13 September 2013
Nice multiple selection
The Chosen plugin improves the usability of multiple-select fields. Compare the two images below.
To use it, activate the Chosen plugin (General site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (see the sketch after this list) (...)
-
Emballe médias: what is it for?
4 February 2011
This plugin is intended for managing sites that publish documents of all types online.
It creates "media" items: a "media" item is a SPIP article created automatically when a document is uploaded, whether audio, video, image or text; one and only one document can be linked to a "media" article;
-
Participate in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
Currently MediaSPIP is only available in French and (...)
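As a follow-up to the Chosen item above, here is a minimal sketch of what that configuration effectively amounts to on the public site, assuming jQuery and the Chosen assets are loaded by the plugin; the width option is an assumption, not something stated in the original article.

// Minimal sketch: enhance every field matching the select[multiple] selector mentioned above
jQuery(function ($) {
  // Chosen turns plain multiple-select elements into searchable, tag-style pickers
  $('select[multiple]').chosen({ width: '100%' });
});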
On other sites (10364)
-
FFmpeg: Protocol-agnostic input URL?
20 July 2017, by Matt H.
I have a generated M3U8 playlist that is imported into Tvheadend.
The URL for each channel currently looks like this:
pipe:///usr/bin/ffmpeg -i [MEDIA URL] -f mpegts pipe:1
This is fine, however the MEDIA URLs can change (I update them nightly).
If they change, Tvheadend assumes it is a new mux, removes the old one and adds a new one (and therefore removes the channel mapping etc.).
So, I next used a simple NGINX config file with a bunch of 303 redirects as a sort of proxy for the MEDIA URL, which allowed me to have URLs like this:
pipe:///usr/bin/ffmpeg -i http://example.com/chan1 -f mpegts pipe:1
This worked great and removed the issue of URLs changing in Tvheadend when the media URL changed.
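For illustration, the redirect layer described above could be a minimal nginx block along these lines; this is only a sketch, and the server name and target URL are made-up placeholders rather than the poster's actual configuration.

# Hypothetical sketch of the 303-redirect proxy; example.com and the target are placeholders
server {
    listen 80;
    server_name example.com;

    location = /chan1 {
        # Point this at wherever tonight's MEDIA URL lives; update it nightly
        return 303 http://upstream.example/path/to/current/stream;
    }
}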
However, I hit a problem when the target is a non-HTTP URL (e.g. RTMP).
FFmpeg doesn't know to follow the redirect and then load that URL.
As far as I know, there is no option for that.
I could use -f rtmp, but then it would try to connect to my HTTP redirect site using RTMP, which fails.
So, my next thought was to have NGINX simply return a basic M3U8 playlist with the correct MEDIA URL in it. However, it appears FFmpeg doesn't like an M3U8 with RTMP streams in it.
So, my final solution was using the ffmpeg concat option. This allows passing ffmpeg a list of files to process. https://trac.ffmpeg.org/wiki/Concatenate
This gave me URLs like so:
pipe:///usr/bin/ffmpeg -f concat -safe 0 -protocol_whitelist http,https,tls,rtp,tcp,udp,crypto,httpproxy,rtmp -i http://example.com/chan1 pipe:1
http://example.com/chan1 would simply return a 200 response with the text below:
file 'MEDIA URL'
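For illustration, that endpoint could be a minimal nginx location along these lines; again only a sketch, and the RTMP URL is a made-up placeholder standing in for the real MEDIA URL.

# Hypothetical sketch: return the one-line concat list as plain text with a 200 response
location = /chan1 {
    default_type text/plain;
    return 200 "file 'rtmp://upstream.example/live/chan1'";
}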
Is this the best way of doing what I need?
Any way to shorten it? i.e. a command to allow all protocols instead of needing to list them all.
The goal is to have the ffmpeg command remain unchanged for any media URL.
-
How to resolve FFmpeg.wasm's SharedArrayBuffer error when served by Nginx, when the same build works without errors under Vite?
18 December 2023, by bully
I am using FFmpeg.wasm for some frontend transcoding work. I know that, due to certain browser policies, I need to configure some response headers in the Vite server options:

server: { headers: { 'Cross-Origin-Opener-Policy': 'same-origin', 'Cross-Origin-Embedder-Policy': 'require-corp' } },


This works fine and doesn't throw the SharedArrayBuffer error.


Then, I ran yarn run build to generate the dist directory and copied it to my Nginx proxy server. I also configured similar response headers in Nginx as follows:


server {
    listen 80;
    server_name ...My IP;
    add_header 'Cross-Origin-Embedder-Policy' 'require-corp';
    add_header 'Cross-Origin-Opener-Policy' 'same-origin';
    add_header 'Cross-Origin-Resource-Policy' "cross-origin";
    add_header 'Access-Control-Allow-Origin' '*';

    location / {
        add_header 'Cross-Origin-Embedder-Policy' 'require-corp';
        add_header 'Cross-Origin-Opener-Policy' 'same-origin';
    }

    root /www/audioserver/dist;
    ...
}



However, it doesn't work in this setup. I have been trying for a while but haven't been able to solve it.


Here is my code for loading ffmpeg.wasm. It works fine in the development environment. The blob is the cached wasm file saved in IndexedDB:


const blob = await getWasmCoreWasm();
await this.ffmpegInstance.load({
    coreURL: await toBlobURL(`${baseURL}/ffmpeg-core.js`, 'text/javascript'),
    wasmURL: await toBlobURL(URL.createObjectURL(blob), 'application/wasm'),
    workerURL: await toBlobURL(`${baseURL}/ffmpeg-core.worker.js`, 'text/javascript'),
});



I have tried checking the response headers of the links, updating Nginx, and even changing the version of FFmpeg. They all seem to be fine, but I don't know how to resolve this issue. I would really appreciate it if someone could help me out. Thank you very much!
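For reference, the header check mentioned above can be done with curl; a minimal sketch, using a placeholder hostname and asset path rather than the poster's real server:

# Hypothetical check of the cross-origin isolation headers on the page and on the worker asset
curl -I http://your-server.example/
curl -I http://your-server.example/assets/ffmpeg-core.worker.js
# Both responses should include:
#   Cross-Origin-Opener-Policy: same-origin
#   Cross-Origin-Embedder-Policy: require-corp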


-
Rails 5 - Video streaming using Carrierwave: uploaded video size constraint on the server
21 March 2020, by Milind
I have a working Rails 5 app using Reactjs for the frontend and React Dropzone Uploader to upload video files using carrierwave.
So far, what is working great is listed below:
- User can upload videos, and videos are encoded based on the selection made by the user (HLS or MPEG-DASH) for online streaming.
- Once the video is uploaded to the server, it starts streaming it by:
 - FIRST, copying the video to the /tmp folder.
 - Running a bash script that uses ffmpeg to transcode the uploaded video with predefined commands, producing new fragments of the video inside the /tmp folder (see the sketch after this list).
 - Once the background job is done, all the videos are uploaded to AWS S3, which is how the default carrierwave works.
- So, when multiple videos are uploaded, they are all copied to the /tmp folder, then transcoded and eventually uploaded to S3.
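For illustration, the ffmpeg transcoding step mentioned in the list could look roughly like the sketch below. This is an assumption for clarity only: the input/output paths, codec choices and HLS settings are placeholders, not the poster's actual script.

#!/bin/bash
# Hypothetical HLS transcode step; paths and settings are made-up placeholders
INPUT=/tmp/uploaded_video.mp4
OUTDIR=/tmp/hls_output
mkdir -p "$OUTDIR"

ffmpeg -i "$INPUT" \
  -c:v libx264 -c:a aac \
  -hls_time 6 -hls_playlist_type vod \
  -hls_segment_filename "$OUTDIR/segment_%03d.ts" \
  "$OUTDIR/index.m3u8"

The background job described above would then upload the contents of the output folder to S3 once the script finishes.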
My questions, where I am looking for some help, are listed below:
1- The above process is good for small videos, BUT what if there are many concurrent users uploading 2GB videos? I know this will kill my server, as my /tmp folder will keep growing and consume all the memory, making it die. How can I allow concurrent video uploads without affecting my server's memory consumption?
2- Is there a way to upload the videos directly to AWS S3 first, and then use another proxy server/child application to encode the videos from S3: download them to the child server, convert them, and upload them again to the destination? But this is almost the same thing, just done in the cloud, where memory consumption can be on-demand but will not be cost-effective.
3- Is there some easy and cost-effective way to upload large videos, transcode them and upload them to AWS S3, without affecting my server's memory? Am I missing some technical architecture here?
4- How do YouTube/Netflix work? I know they do the same thing in a smart way, but can someone help me improve this?
Thanks in advance.