
Media (2)
-
Granite de l’Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
Other articles (67)
-
Taking part in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do this, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
At the moment, MediaSPIP is only available in French and (...) -
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once it has been activated, a preconfiguration is automatically put in place by MediaSPIP init, making the new feature operational straight away. It is therefore not necessary to go through a configuration step for this. -
Authorisations overridden by plugins
27 April 2010
MediaSPIP core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
On other sites (13191)
-
ffmpeg watermark without background [closed]
22 December 2022, by Edwin Pitters
I have a problem: I am trying to add a watermark to my videos with ffmpeg using a GTX 1060 graphics card. The process runs well and very fast, but the watermark appears with a black background, even though the image has no background (it is transparent). The problem only happens when I use the NVIDIA graphics card: if I do the same thing with my CPU, the watermark is placed correctly, as expected. So I am sure it is a problem with my configuration when running ffmpeg.


Here is the command I am using:


.\ffmpeg.exe -y -hide_banner -init_hw_device cuda=cuda -filter_hw_device cuda -hwaccel cuda -hwaccel_output_format cuda -i test.mp4 -i watermark.png -filter_complex "[1:v]colorchannelmixer=aa=0.3,scale=iw*0.6:-1,format=nv12,hwupload[img];[0:v][img]overlay_cuda=x='if(lt(mod(t\,16)\,8)\,W-w-W*10/100\,W*10/100)':y='if(lt(mod(t+4\,16)\,8)\,H-h-H*5/100\,H*5/100)'[out]" -map [out] -c:v h264_nvenc -b:v 6M -an -preset fast out_overlay.mp4
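
A plausible cause I am looking at: format=nv12 converts the scaled PNG to a pixel format with no alpha plane before hwupload, so the transparency may already be gone by the time overlay_cuda sees the image. Assuming the overlay_cuda in my build accepts an overlay with an alpha plane such as yuva420p (I have not been able to confirm this), a variant keeping the alpha would look like the sketch below, with only format=nv12 swapped for format=yuva420p:

.\ffmpeg.exe -y -hide_banner -init_hw_device cuda=cuda -filter_hw_device cuda -hwaccel cuda -hwaccel_output_format cuda -i test.mp4 -i watermark.png -filter_complex "[1:v]colorchannelmixer=aa=0.3,scale=iw*0.6:-1,format=yuva420p,hwupload[img];[0:v][img]overlay_cuda=x='if(lt(mod(t\,16)\,8)\,W-w-W*10/100\,W*10/100)':y='if(lt(mod(t+4\,16)\,8)\,H-h-H*5/100\,H*5/100)'[out]" -map [out] -c:v h264_nvenc -b:v 6M -an -preset fast out_overlay.mp4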



If I use my CPU with the following command, the watermark is added without a black background, as expected:


for %%a in ("*.m*") do ffmpeg -y -hide_banner -threads 4 -i "%%a" -preset ultrafast -vcodec libx264 -b:v 4000k -minrate 4000k -maxrate 4000k -bufsize 4000k -c:a aac -b:a 64k -pass 1 -f mp4 NUL && ffmpeg -y -hide_banner -threads 8 -i "%%a" -i watermark.png -preset ultrafast -vcodec libx264 -b:v 4000k -minrate 4000k -maxrate 4000k -bufsize 4000k -filter_complex "[1]colorchannelmixer=aa=0.3,scale=iw*0.8:-1[a];[0][a]overlay=x='if(lt(mod(t\,8)\,4)\,W-w-W*10/100\,W*10/100)':y='if(lt(mod(t+2\,8)\,4)\,H-h-H*10/100\,H*10/100)'" -c:a copy -tune film -movflags +faststart -pass 2 "watermark/%%a"
pause



I also tried replacing colorchannelmixer=aa=0.3 with lut=a=val*0.3, but that filter does not seem to have any effect either.


I have also checked the image itself to rule it out; in fact I tried other images, also with transparent backgrounds, and got the same result: the watermark is drawn, but on a black background.


-
Feeding raw image bytes into ffmpeg rawvideo fails with Invalid buffer size on linux only
13 February 2021, by cherouvim
I have a Node.js program which generates raw (rgb24) images, which I then pipe into ffmpeg so it saves them as PNG or MP4. My code looks like this:


const fs = require("fs");
// ...
const outputBuffer = Buffer.alloc(outputPngWidth * 3 * outputPngHeight);
// ... write data into outputBuffer
fs.writeSync(process.stdout.fd, outputBuffer);



I then run the following on the command line:


node generate | ffmpeg -f rawvideo -pixel_format rgb24 -video_size 1000x1000 -i - test.png



Alternatively, if my program generates lots of images, I do this to produce the video file:


node generate | ffmpeg -f rawvideo -pixel_format rgb24 -video_size 1000x1000 -r 60 -i - -codec:v libx265 test.mp4



On Windows this works flawlessly. On Linux (either an Ubuntu 20 VM or Ubuntu 20 installed directly on a physical machine), it consistently fails with:


pipe:: corrupt input packet in stream 0
[rawvideo @ 0x55f5256c8040] Invalid buffer size, packet size 65536 < expected frame_size 3000000
Error while decoding stream #0:0: Invalid argument



If I split this into two phases like so, then it works perfectly on Linux as well:


node generate > test.raw
cat test.raw | ffmpeg -f rawvideo -pixel_format rgb24 -video_size 1000x1000 -i - test.png



Looking at the error "packet size 65536 < expected frame_size 3000000", it seems that node's fs.writeSync only sends 65536 bytes at a time, while ffmpeg expects 3000000 bytes (that is, 1000 width * 1000 height * 3 channels).

If I reduce my image size to something small, e.g. 50x50 or 100x100, it works. As soon as x * y * 3 exceeds 65536, it fails (e.g. 160x160 fails with "packet size 65536 < expected frame_size 76800", because 160 * 160 * 3 = 76800).
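
For reference, fs.writeSync returns the number of bytes it actually wrote, so a loop like the sketch below (reusing outputBuffer from the snippet above) at least confirms that only 65536 bytes go through per call when stdout is a pipe. Since the pipe appears to be non-blocking on Linux, a follow-up call can still throw EAGAIN when the pipe buffer is full, so I am not counting this as a fix:

let offset = 0;
while (offset < outputBuffer.length) {
  // writeSync returns how many bytes were actually written; on a pipe this
  // can be less than requested (at most the 64 KiB pipe buffer here)
  const written = fs.writeSync(
    process.stdout.fd,
    outputBuffer,
    offset,
    outputBuffer.length - offset
  );
  process.stderr.write(`wrote ${written} of ${outputBuffer.length} bytes\n`);
  offset += written;
}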

What I've tried so far to solve the issue, without luck:


- Force node to spit out the whole buffer at once:

  fs.writeSync(process.stdout.fd, outputBuffer, 0, outputBuffer.length);

- All suggestions of "Add a big buffer to a pipe between two commands", with various Linux commands to buffer between node and ffmpeg.
- Use Ubuntu 18 instead of 20.
- Use node 12 instead of 15.
- Figure out a way to change the chunk size in https://nodejs.org/api/fs.html










Is there a way to overcome this?


-
FFmpeg JNI in Android?
4 February 2014, by Whyhow
I have built the FFmpeg executables and libraries as provided by Bambuser (http://bambuser.com/opensource), so I managed to build the Android executables and libraries. How can I link these libs in my Eclipse project and invoke the FFmpeg functions from Java? The open source code includes the C header files.
I am new to native coding for Android, and I could not find an easy answer for this. Basically: given a bunch of Android-compatible libraries and some C header files, what do I have to do to reuse those libraries' functionality from Java (plus the Android SDK)?
Any help would be appreciated.
Kind regards,
WhyHow