
Media (1)
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (46)
-
Customising categories
21 June 2013, by
Category creation form
For those who know SPIP well, a category can be thought of as a section (rubrique).
For a document of the "category" type, the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of the "media" type, the fields not shown by default are: Descriptif rapide (short description)
Moreover, it is in this configuration section that you can specify the (...)
-
Adding notes and captions to images
7 February 2011, by
To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area in order to change the rights for creating, modifying and deleting notes. By default, only the site administrators can add notes to images.
Changes when adding a media file
When adding a media file of the "image" type, a new button appears above the preview (...)
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable MediaSPIP release.
Its official release date is June 21, 2013, and it is announced here.
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
On other sites (7671)
-
(FFmpeg) How to play live audio in the browser from received UDP packets using FFmpeg?
26 October 2022, by Yousef Alaqra
I have a .NET Core console application which acts as both a UDP server and a UDP client:



- a UDP client, by receiving audio packets;
- a UDP server, by sending each received packet.

Here's a sample of the console app's code:



static UdpClient udpListener = new UdpClient();
static IPEndPoint endPoint = new IPEndPoint(IPAddress.Parse("192.168.1.230"), 6980);
static IAudioSender audioSender = new UdpAudioSender(new IPEndPoint(IPAddress.Parse("192.168.1.230"), 65535));

static void Main(string[] args)
{
    udpListener.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
    udpListener.Client.Bind(endPoint);

    try
    {
        udpListener.BeginReceive(new AsyncCallback(recv), null);
    }
    catch (Exception)
    {
        throw; // rethrow without resetting the stack trace
    }

    Console.WriteLine("Press enter to dispose the running service");
    Console.ReadLine();
}

private async static void recv(IAsyncResult res)
{
    byte[] received = udpListener.EndReceive(res, ref endPoint);
    OnAudioCaptured(received);
    udpListener.BeginReceive(new AsyncCallback(recv), null); // keep listening
}




On the other side, I have a Node.js API application, which is supposed to execute an FFmpeg command as a child process and do the following:



- receive the audio packets as input from the console app's UDP server;
- convert the received bytes into WebM;
- pipe the result out into the response.



Finally, on the client side, I should have an audio element whose source is set to http://localhost:3000.



For now, I can only execute this FFmpeg command:



ffmpeg -f s16le -ar 48000 -ac 2 -i 'udp://192.168.1.230:65535' output.wav




which does the following:



- receives UDP packets as input;
- converts the received bytes into the output.wav audio file.





How would I execute a child process on the Node.js server which receives the UDP packets and pipes the result out into the response as WebM?
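The pipeline the question describes can be sketched as follows. This is a stand-in for the Node.js server logic, written in Python, and a sketch under assumptions: the UDP address, the Opus-in-WebM codec choice, and the chunk size are illustrative, not taken from a tested setup. The idea is to spawn FFmpeg as a child process with the UDP socket as input and a WebM stream on stdout, then forward stdout chunk by chunk into the HTTP response. The piping pattern itself is demonstrated with a harmless stand-in command so the snippet runs anywhere:

```python
import subprocess
import sys

# FFmpeg arguments for the pipeline described above: read raw s16le PCM from
# the UDP socket and mux it as WebM (Opus) to stdout, so the output can be
# piped straight into the HTTP response.
FFMPEG_ARGS = [
    "ffmpeg",
    "-f", "s16le", "-ar", "48000", "-ac", "2",  # raw PCM input format
    "-i", "udp://192.168.1.230:65535",          # the console app's UDP server
    "-f", "webm", "-c:a", "libopus",            # WebM container, Opus audio
    "pipe:1",                                   # write the muxed stream to stdout
]

def stream(cmd, write_chunk):
    """Spawn `cmd` as a child process and forward its stdout chunk by chunk
    (e.g. into an HTTP response). Returns the child's exit code."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    while True:
        chunk = proc.stdout.read(4096)
        if not chunk:  # EOF: the child closed stdout
            break
        write_chunk(chunk)
    return proc.wait()

# Demonstrate the piping pattern with a harmless stand-in command; a real
# server would call stream(FFMPEG_ARGS, response.write) instead.
chunks = []
rc = stream([sys.executable, "-c", "print('webm bytes')"], chunks.append)
```

In Node.js the same shape would be `child_process.spawn("ffmpeg", args)` followed by `child.stdout.pipe(res)`, with the `Content-Type` of the response set to the WebM MIME type.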


-
Why is the YUV data read back with ffmpeg different from the original input YUV?
13 May 2021, by sianyi Huang
I use ffmpeg to make an HDR test video. My approach is to write an image, convert the image to yuv420p, and then use ffmpeg to make the HDR test video.



But I found that the YUV data read back from the mp4 is different from the original input.
I was stuck here for a while; does anyone know how to read the correct YUV data from the mp4?



import subprocess
from time import sleep

import cv2 as cv
import imageio
import numpy as np

# ffmpeg encode command (note: ffmpeg expects the size as WxH, e.g. 100x100)
ffmpeg_encode_mp4 = \
"ffmpeg -y -s 100x100 -pix_fmt yuv420p -threads 4 -r 1 -stream_loop -1 -f rawvideo -i write_yuv.yuv -vf \
scale=out_h_chr_pos=0:out_v_chr_pos=0,format=yuv420p10le \
-c:v libx265 -tag:v hvc1 -t 10 -pix_fmt yuv420p10le -preset medium -x265-params \
crf=12:colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=\"G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)\":max-cll=\"1000,400\" \
-an test.mp4"

# ffmpeg command to read the yuv back out of the mp4
ffmpeg_extract_yuv = "ffmpeg -i test.mp4 -vframes 1 -c:v rawvideo -pix_fmt yuv420p read_yuv.yuv"


# make a 100x100 raw yuv frame
w, h = 100, 100
test_gray = 255
test = np.full((h, w, 3), test_gray, dtype=np.uint8)
yuv_cv = cv.cvtColor(test, cv.COLOR_RGB2YUV_I420)
yuv_cv.tofile("write_yuv.yuv")

# encode the raw yuv to mp4 with HDR metadata
print(ffmpeg_encode_mp4)
result = subprocess.check_output(ffmpeg_encode_mp4, shell=True)
print(result)
sleep(0.5)

# extract the yuv from the mp4
kill_existing_file("read_yuv.yuv")  # helper defined elsewhere
print(ffmpeg_extract_yuv)
result = subprocess.check_output(ffmpeg_extract_yuv, shell=True)
print(result)
sleep(0.5)

write_yuv = np.fromfile("write_yuv.yuv", dtype='uint8')
read_yuv = np.fromfile("read_yuv.yuv", dtype='uint8')

print("input gray:", test_gray)
print("write_yuv", write_yuv[:10])
print("read_yuv", read_yuv[:10])

reader = imageio.get_reader("test.mp4")
img = reader.get_data(0)
print("imageio read:", img[50, 50])

'''
output:
input gray: 255
write_yuv [235 235 235 235 235 235 235 235 235 235]
read_yuv [234 235 234 235 234 235 234 235 234 235]
imageio read: [253 253 253]
'''




I have no idea how to validate that the video I made is correct.
Any feedback will be very appreciated!
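One way to read the numbers above: OpenCV's RGB-to-I420 conversion produces limited-range YUV, so white (255) maps to the limited-range luma maximum 235, and x265 at crf=12 is still lossy, so decoded samples can land one code value away (hence the alternating 234/235); imageio then converts that slightly lossy limited-range luma back to full-range RGB, which is why its value is close to, but not exactly, 255. A reasonable way to validate the round trip is therefore to compare against the expected luma with a small tolerance instead of byte-for-byte equality. The BT.601 formula and the tolerance of 1 below are my assumptions for a near-lossless CRF, not something stated in the original post:

```python
# BT.601 limited-range luma: Y = 16 + 219 * (0.299*R + 0.587*G + 0.114*B) / 255
def bt601_luma(r, g, b):
    return int(round(16 + 219 * (0.299 * r + 0.587 * g + 0.114 * b) / 255))

# White should map to the limited-range luma maximum, matching write_yuv above.
y_white = bt601_luma(255, 255, 255)

# Validate the round trip with a tolerance instead of exact bytes
# (sample values taken from the printed output above):
write_yuv = [235] * 10            # what was written before encoding
read_yuv = [234, 235] * 5         # what came back after the x265 round trip
max_diff = max(abs(w - r) for w, r in zip(write_yuv, read_yuv))
ok = max_diff <= 1                # one code value of error is expected at crf=12
```

If `max_diff` stays within the tolerance and the expected luma matches the formula, the pipeline is behaving as a lossy encoder is expected to; an exact byte comparison will essentially never pass.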


-
Why does libyuv::I420ToARGB run so slowly?
14 April 2020, by Ajax
I am using FFmpeg to decode H264 data. Why does libyuv::I420ToARGB run so slowly? Converting a frame takes 4.5 ms (40,000-450,000 microseconds), while FFmpeg's native sws_scale conversion takes about 3,000 microseconds. This is my code:



if (av_read_frame(pFormatCtx, vPacket) >= 0) {
    if ((vPacket)->stream_index != videoindex) {
        continue;
    }
    avcodec_send_packet(pCodecCtx, vPacket);
    int ret = avcodec_receive_frame(pCodecCtx, vFrame);
    if (ret < 0 && ret != AVERROR_EOF) {
        av_packet_unref(vPacket);
        continue;
    }
    long thisTime = getTime();
    // sws_scale(img_convert_ctx, vFrame->data, vFrame->linesize,
    //           0,
    //           pCodecCtx->height,
    //           pFrameRGBA->data, pFrameRGBA->linesize);
    // Note: the V plane (data[2]) is passed in the U position and vice versa.
    int re = libyuv::I420ToARGB(vFrame->data[0], vFrame->linesize[0],
                                vFrame->data[2], vFrame->linesize[2],
                                vFrame->data[1], vFrame->linesize[1],
                                pFrameRGBA->data[0], pFrameRGBA->linesize[0],
                                pCodecCtx->width, pCodecCtx->height);
    long newTime = getTime();
    LOGE("running Time:%ld", newTime - thisTime);
    if (ANativeWindow_lock(nativeWindow, &windowBuffer, nullptr) < 0) {
    } else {
        av_image_fill_arrays(pFrameRGBA->data, pFrameRGBA->linesize,
                             (const uint8_t *) windowBuffer.bits, AV_PIX_FMT_RGBA,
                             pCodecCtx->width, pCodecCtx->height, 1);
        ANativeWindow_unlockAndPost(nativeWindow);
    }
}