
Other articles (27)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out. -
Emballe Médias: a simple way to put documents online
29 October 2010. The emballe médias plugin was developed primarily for the MediaSPIP distribution, but it is also used in other related projects such as géodiversité. Required and compatible plugins
For this plugin to work, the following plugins must be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui.
Other plugins can be used alongside it to extend its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...) -
Encoding and processing into web-friendly formats
13 April 2011. MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for indexing by search engines, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
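The conversion step described above amounts to one ffmpeg invocation per target format. A minimal sketch of how such a step could build its commands; the codec and container choices below are illustrative assumptions, not MediaSPIP's actual settings:

```python
# Sketch of ffmpeg commands for a "convert to web-friendly formats" step.
# Codec choices are assumptions for illustration, not MediaSPIP's settings.
VIDEO_TARGETS = {
    "mp4":  ["-c:v", "libx264", "-c:a", "aac"],
    "ogv":  ["-c:v", "libtheora", "-c:a", "libvorbis"],
    "webm": ["-c:v", "libvpx", "-c:a", "libvorbis"],
}

def encode_commands(src):
    """Build one ffmpeg command per web-friendly target format."""
    base = src.rsplit(".", 1)[0]
    return [
        ["ffmpeg", "-i", src, *codec_args, f"{base}.{ext}"]
        for ext, codec_args in VIDEO_TARGETS.items()
    ]

for cmd in encode_commands("upload.avi"):
    print(" ".join(cmd))
    # run with: subprocess.run(cmd, check=True)
```

Audio-only uploads would get the analogous MP3/Ogg pair with `libmp3lame` and `libvorbis`.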
On other sites (5319)
-
FFMPEG output truncated when adding watermark
17 June 2020, by user1452030. I'm using the following command to generate a watermarked 320 kbps MP3 preview of a WAV file:



ffmpeg -i /path/input.wav -y -filter_complex "amovie=/path/wm_padded.wav:loop=0,asetpts=N/SR/TB,adelay=3000|3000[beep];[0][beep]amix=duration=shortest,volume=2" -b:a 320k /path/preview.mp3




(wm_padded.wav is the watermark file, padded to 10 seconds, and I'm running this on a Mac. The command was structured based on this post.)



While this works as expected at times, at other times it produces a short, garbled preview. Any help in debugging this issue would be greatly appreciated. I've verified the input file and it seems to be fine; here's the FFmpeg command output:



$ ffmpeg -i /path/input.wav -y -filter_complex "amovie=/path/wm_padded.wav:loop=0,asetpts=N/SR/TB,adelay=3000|3000[beep];[0][beep]amix=duration=shortest,volume=2" -b:a 320k -vsync 2 /path/test-Preview.mp3
ffmpeg version 4.2.1 Copyright (c) 2000-2019 the FFmpeg developers
 built with Apple clang version 11.0.0 (clang-1100.0.33.8)
 configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.1_2 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/include/darwin -fno-stack-check' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
[wav @ 0x7ff2ba801a00] Discarding ID3 tags because more suitable tags were found.
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from '/path/input.wav':
 Metadata:
 comment : motion graphics, motion, textures
 time_reference : 0
 coding_history : 
 Duration: 00:00:01.75, bitrate: 1520 kb/s
 Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
 Stream #0:1: Video: mjpeg (Baseline), yuvj420p(pc, bt470bg/unknown/unknown), 288x288 [SAR 72:72 DAR 1:1], 90k tbr, 90k tbn, 90k tbc (attached pic)
 Metadata:
 comment : Cover (front)
[Parsed_amovie_0 @ 0x7ff2b9d0c040] Channel layout is not set in output stream 0, guessed channel layout is 'stereo'
Stream mapping:
 Stream #0:0 (pcm_s16le) -> amix:input0 (graph 0)
 volume (graph 0) -> Stream #0:0 (libmp3lame)
 Stream #0:1 -> #0:1 (mjpeg (native) -> png (native))
Press [q] to stop, [?] for help
[swscaler @ 0x108d00000] deprecated pixel format used, make sure you did set range correctly
[Parsed_amovie_0 @ 0x7ff2b9e09840] Channel layout is not set in output stream 0, guessed channel layout is 'stereo'
Output #0, mp3, to '/path/preview.mp3':
 Metadata:
 comment : motion graphics, motion, textures
 time_reference : 0
 coding_history : 
 TSSE : Lavf58.29.100
 Stream #0:0: Audio: mp3 (libmp3lame), 44100 Hz, stereo, fltp, 320 kb/s
 Metadata:
 encoder : Lavc58.54.100 libmp3lame
 Stream #0:1: Video: png, rgb24(progressive), 288x288 [SAR 1:1 DAR 1:1], q=2-31, 200 kb/s, 90k fps, 90k tbn, 90k tbc (attached pic)
 Metadata:
 comment : Cover (front)
 encoder : Lavc58.54.100 png
frame= 1 fps=0.0 q=-0.0 Lsize= 206kB time=00:00:00.10 bitrate=15994.8kbits/s speed=3.62x 
video:200kB audio:5kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.598223%




Thanks in advance!
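One detail worth noting in the log above: the WAV carries an attached cover image (Stream #0:1, mjpeg), which ffmpeg re-encodes to PNG and muxes into the MP3 (the summary shows video:200kB versus audio:5kB). A hedged sketch of the same command with the video stream dropped via -vn, which is one plausible thing to try while debugging; whether it cures the intermittent truncation is not certain, and the paths are placeholders from the question:

```python
# Variant of the poster's command with "-vn" added so the attached
# cover art is not re-encoded and muxed into the MP3 output.
filter_graph = (
    "amovie=/path/wm_padded.wav:loop=0,asetpts=N/SR/TB,"
    "adelay=3000|3000[beep];"
    "[0][beep]amix=duration=shortest,volume=2"
)
cmd = [
    "ffmpeg", "-i", "/path/input.wav", "-y",
    "-filter_complex", filter_graph,
    "-vn",              # drop Stream #0:1 (the attached picture)
    "-b:a", "320k",
    "/path/preview.mp3",
]
print(" ".join(cmd))
# run with: subprocess.run(cmd, check=True)
```

Passing the command as an argument list also sidesteps shell quoting of the filter graph.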


-
python for loop result in ffmpeg command [duplicate]
6 August 2020, by madiha bouthi. I'm trying to insert for-loop results into an ffmpeg command. I tried some code, but all I get is the first text, with no sound, and it glitches and freezes. I've tried many modifications without success; any ideas would be appreciated. The text imported from the API should be displayed only for the duration of the audio file. Here is my code:



import requests
import json
import os


response = requests.get('http://api.quran.com:3000/api/v3/chapters/2/verses?text_type=image&language=ar&recitation=10')

payload = json.loads(response.content)  # avoid shadowing the json module

data = payload['verses']
#print(data)

for i in data:
 input_text = i['text_madani'] 
 input_duration = i['audio']['duration'] 
 input_mp3 = i['audio']['url']
 input_img = i['image']['url']
 # print(input_duration)
 command = f'''ffmpeg -re -stream_loop -1 \
 -i img2.jpeg \
 -i {input_mp3} \
 -vf 'pad=ceil(iw/2)*2:ceil(ih/2)*2,drawtext=enable='between(t, n, {input_duration})':text={input_text}:fontsize=90:x=50:y=50:fontcolor=black@0.8' \
 -c:v libx264 -preset veryfast -b:v 3000k \
 -maxrate 3000k -bufsize 6000k -pix_fmt yuv420p -g 50 \
 -c:a aac -b:a 160k \
 -ac 2 -ar 44100 \
 -y -f flv rtmp://localhost/hls/test''' 
 print(command)
 os.system(command)
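Two likely problems in the code above are the nested single quotes inside the -vf argument, which break the shell quoting, and the unescaped API text embedded in the drawtext filter. A sketch of a safer construction that passes an argument list (no shell) and feeds the verse via drawtext's textfile option; the enable window between(t,0,duration) is an assumption about the intended behavior:

```python
import subprocess
import tempfile

def build_ffmpeg_args(input_img, input_mp3, input_text, input_duration):
    """Build the ffmpeg invocation as an argument list (no shell quoting).

    Writing the verse to a temp file and using drawtext=textfile=... avoids
    escaping Arabic text inside the filter string.
    """
    tf = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False,
                                     encoding="utf-8")
    tf.write(input_text)
    tf.close()
    vf = (
        "pad=ceil(iw/2)*2:ceil(ih/2)*2,"
        f"drawtext=enable='between(t,0,{input_duration})':"
        f"textfile={tf.name}:fontsize=90:x=50:y=50:fontcolor=black@0.8"
    )
    return [
        "ffmpeg", "-re", "-stream_loop", "-1",
        "-i", input_img, "-i", input_mp3,
        "-vf", vf,
        "-c:v", "libx264", "-preset", "veryfast", "-b:v", "3000k",
        "-maxrate", "3000k", "-bufsize", "6000k", "-pix_fmt", "yuv420p",
        "-g", "50",
        "-c:a", "aac", "-b:a", "160k", "-ac", "2", "-ar", "44100",
        "-y", "-f", "flv", "rtmp://localhost/hls/test",
    ]

# Per verse: subprocess.run(build_ffmpeg_args("img2.jpeg", url, text, dur))
```

Since -stream_loop -1 loops the image forever, adding -shortest so each run ends with its audio may also match the intent, though that is a guess.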



-
How do I use FFmpeg to fetch audio from a local network and decode it to PCM?
26 May 2020, by Yousef Alaqra. Currently, I have a Node.js server connected to a specific IP address on the local network (the source of the audio) to receive the audio using the VBAN protocol. VBAN basically uses UDP to send audio over the local network.



Node.js implementation:



http.listen(3000, () => {
 console.log("Server running on port 3000");
});

let PORT = 6980;
let HOST = "192.168.1.244";

io.on("connection", (socket) => {
 console.log("a user connected");
 socket.on("disconnect", () => {
 console.log("user disconnected");
 });
});

io.on("connection", () => {

 let dgram = require("dgram");
 let server = dgram.createSocket("udp4");

 server.on("listening", () => {
 let address = server.address();
 console.log("server host", address.address);
 console.log("server port", address.port);
 });

 server.on("message", function (message, remote) {
 let audioData = vban.ProcessPacket(message);
 io.emit("audio", audioData); // // console.log(`Received packet: ${remote.address}:${remote.port}`)
 });
 server.bind({
 address: "192.168.1.230",
 port: PORT,
 exclusive: false,
 });
});




Once the server receives a packet from the local network, it processes it and then emits the processed data to the client using socket.io.



An example of the processed audio data emitted from the socket and received on the Angular side:



audio {
 format: {
 channels: 2,
 sampleRate: 44100,
 interleaved: true,
 float: false,
 signed: true,
 bitDepth: 16,
 byteOrder: 'LE'
 },
 sampleRate: 44100,
 buffer: <Buffer 2e 00 ce ff 3d bd 44 b6 48 c3 32 d3 31 d4 30 dd 38 34 e5 1d c6 25 ... 974 more bytes>,
 channels: 2,
}



On the client side (Angular), after receiving a packet using socket.io-client, an AudioContext is used to decode the audio and play it:



playAudio(audioData) {
 let audioCtx = new AudioContext();
 let count = 0;
 let offset = 0;
 let msInput = 1000;
 let msToBuffer = Math.max(50, msInput);
 let bufferX = 0;
 let audioBuffer;
 let prevFormat = {};
 let source;

 if (!audioBuffer || Object.keys(audioData.format).some((key) => prevFormat[key] !== audioData.format[key])) {
 prevFormat = audioData.format;
 bufferX = Math.ceil(((msToBuffer / 1000) * audioData.sampleRate) / audioData.samples);
 if (bufferX < 3) {
 bufferX = 3;
 }
 audioBuffer = audioCtx.createBuffer(audioData.channels, audioData.samples * bufferX, audioData.sampleRate);
 if (source) {
 source.disconnect();
 }
 source = audioCtx.createBufferSource();
 console.log("source", source);
 source.connect(audioCtx.destination);
 source.loop = true;
 source.buffer = audioBuffer;
 source.start();
 }
 }




Even leaving aside that the audio isn't playing on the client side, there is something wrong: this isn't the correct implementation.



Brad mentioned in the comments below that I could implement this correctly, and with less complexity, using an FFmpeg child process. I'm very interested to know how to fetch the audio locally using FFmpeg.
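A hedged sketch of that child-process idea: spawn ffmpeg reading the UDP stream and writing raw PCM to stdout, then forward the chunks over socket.io instead of decoding in the server. One caveat: VBAN prepends its own header to every UDP datagram, so ffmpeg cannot treat the stream as bare PCM unless that header is stripped first. The command below assumes a plain PCM-over-UDP stream, with the address, port, and sample format taken from the question:

```python
import subprocess

# ffmpeg child process that reads a raw PCM stream over UDP and emits
# 16-bit little-endian stereo PCM on stdout. This assumes the datagrams
# are bare PCM; a real VBAN stream carries a header per packet that
# would have to be stripped before ffmpeg sees the data.
PCM_OVER_UDP = [
    "ffmpeg",
    "-f", "s16le", "-ar", "44100", "-ac", "2",   # input format assumption
    "-i", "udp://192.168.1.244:6980",            # address/port from the question
    "-f", "s16le", "-ar", "44100", "-ac", "2",   # output: raw PCM on stdout
    "pipe:1",
]

def spawn_ffmpeg(cmd=PCM_OVER_UDP):
    """Start ffmpeg; read PCM chunks from the returned proc.stdout."""
    return subprocess.Popen(cmd, stdout=subprocess.PIPE)

print(" ".join(PCM_OVER_UDP))
```

The same invocation translates directly to Node's child_process.spawn, with proc.stdout piped into the socket.io emit loop.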