
Media (91)
-
Valkaama DVD Cover Outside
4 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011, by
Updated: February 2013
Language: English
Type: Image
-
Valkaama DVD Cover Inside
4 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
1,000,000
27 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Demon Seed
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
The Four of Us are Dying
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (67)
-
Farm-mode installation
4 February 2011, by
Farm mode lets you host several MediaSPIP sites while installing its functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
To begin with, you must have installed the same files as the installation (...) -
Customizable form
21 June 2013, by
This page presents the fields available in the form used to publish a media item and lists the different fields that can be added. Media creation form
For a media-type document, the default fields are: Text, Enable/disable the forum (the invitation to comment can be disabled for each article), Licence, Add/remove authors, Tags
This form can be modified in the section:
Administration > Configuration des masques de formulaire. (...) -
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
On other sites (9079)
-
How to use audio frames after decoding an mp3 file using pyav, ffmpeg, python
2 January 2021, by Long Tran Dai
I am using Python with PyAV and FFmpeg to decode an mp3 in memory. I know there are other ways to do it, such as piping an ffmpeg command, but I would like to explore the PyAV and FFmpeg APIs. So I have the following code. It works, but the sound is very noisy, although audible:


import numpy as np
import av       # to convert mp3 to wav using ffmpeg
import pyaudio  # to play music

mp3_path = 'D:/MyProg/python/SauTimThiepHong.mp3'

def decodeStream(mp3_path):
    # Run NOT OK (noisy output)

    container = av.open(mp3_path)
    stream = next(s for s in container.streams if s.type == 'audio')
    frame_count = 0
    data = bytearray()
    for packet in container.demux(stream):
        # We need to skip the "flushing" packets that `demux` generates.
        #if frame_count == 5000: break
        if packet.dts is None:
            continue
        for frame in packet.decode():
            # frame.samples = 1152: number of audio samples (per channel)
            # each frame has size = 1152 (samples) * 2 (channels) * 4 (bytes per sample) = 9216 bytes
            # 11021 frames in total
            #arr = frame.to_ndarray()  # arr.nbytes = 9216

            #channels = []
            channels = frame.to_ndarray().astype("float16")
            #for plane in frame.planes:
            #    channels.append(plane.to_bytes())  # a plane has 4 bytes per sample, but the audio has only 2 bytes per sample
            #    channels.append(np.frombuffer(plane, dtype=np.single).astype("float16"))
            #    channels.append(np.frombuffer(plane, dtype=np.single))  # np.single is 4 bytes
            if not frame.is_corrupt:
                #data.extend(np.frombuffer(frame.planes[0], dtype=np.single).astype("float16"))  # 1 channel: noisy
                frame_count += 1
                #print('>>>> %04d' % frame_count, frame)
                #if frame_count == 5000: break
                # mix channels:
                for i in range(frame.samples):
                    for ch in channels:  # dec_ctx->channels
                        data.extend(ch[i])  # noisy
                        #fwrite(frame->data[ch] + data_size*i, 1, data_size, outfile)
    return bytes(data)


I use the ffmpeg pipe to get decoded data to compare, and find that they are different:


def RunFFMPEG(mp3_path, target_fs="44100"):
    # Run OK
    import subprocess
    # build the ffmpeg command
    ffmpeg_command = ["ffmpeg", "-i", mp3_path,
                      "-ab", "128k", "-acodec", "pcm_s16le", "-ac", "0", "-ar", target_fs, "-map",
                      "0:a", "-map_metadata", "-1", "-sn", "-vn", "-y",
                      "-f", "wav", "pipe:1"]
    # execute the ffmpeg command
    pipe = subprocess.run(ffmpeg_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=10**8)
    # debug
    #print(pipe.stdout, pipe.stderr)
    # read the signal as a numpy array and assign the sampling rate
    #audio_np = np.frombuffer(buffer=pipe.stdout, dtype=np.uint16, offset=44)
    #audio_np = np.frombuffer(buffer=pipe.stdout, dtype=np.uint16)
    #sig, fs = audio_np, target_fs
    #return audio_np
    return pipe.stdout[78:]



Then I use pyaudio to play the data and find it very noisy:


p = pyaudio.PyAudio()
streamOut = p.open(format=pyaudio.paInt16, channels=2, rate= 44100, output=True)
#streamOut = p.open(format=pyaudio.paInt16, channels=1, rate= 44100, output=True)

mydata = decodeStream(mp3_path)
print("bytes of mydata = ", len(mydata))
#print("bytes of mydata = ", mydata.nbytes)

ffMpegdata = RunFFMPEG(mp3_path)
print("bytes of ffMpegdata = ", len(ffMpegdata)) 
#print("bytes of ffMpegdata = ", ffMpegdata.nbytes)

minlen = min(len(mydata), len(ffMpegdata))
print("mydata == ffMpegdata", mydata[:minlen] == ffMpegdata[:minlen]) # ffMpegdata.tobytes()[:minlen] )

#bytes of mydata = 50784768
#bytes of ffMpegdata = 50784768
#mydata == ffMpegdata False

streamOut.write(mydata)
streamOut.write(ffMpegdata)
streamOut.stop_stream()
streamOut.close()
p.terminate()



Please help me understand the decoded frames of the PyAV API (after for frame in packet.decode():). Should they be processed further, or do I have an error somewhere?


It has been driving me crazy for 3 days. I cannot figure out where to go from here.


Thank you very much.
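One plausible explanation is a sample-format mismatch: MP3 decodes to planar float frames, while pyaudio.paInt16 and the pcm_s16le pipe expect interleaved signed 16-bit samples, so casting to float16 and interleaving by hand cannot produce the same bytes. Below is a minimal, untested sketch (the exact return type of resample() depends on the installed PyAV version) that converts each decoded frame with av.AudioResampler before collecting the bytes:


import av

def decode_to_s16(mp3_path, rate=44100):
    # Sketch: decode an mp3 to interleaved 16-bit PCM bytes.
    container = av.open(mp3_path)
    stream = next(s for s in container.streams if s.type == 'audio')
    # MP3 frames come out as planar float ('fltp'); convert them to packed
    # signed 16-bit so the bytes match pcm_s16le / pyaudio.paInt16.
    resampler = av.AudioResampler(format='s16', layout='stereo', rate=rate)
    data = bytearray()
    for frame in container.decode(stream):
        out = resampler.resample(frame)
        # Newer PyAV versions return a list of frames, older ones a single frame.
        for f in (out if isinstance(out, list) else [out]):
            if f is not None:
                data.extend(bytes(f.planes[0]))  # packed s16 has a single plane
    return bytes(data)


The resulting bytes can then be written to the same paInt16 / stereo / 44100 Hz pyaudio stream as above.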


-
Merge image, audio, video with no audio, video with audio, with ffmpeg
17 February 2021, by Basj
Similarly to Merge videos and images using ffmpeg (which is not a duplicate, for the reasons explained below), I'd like to merge multiple inputs, which can be any of:


- image only,
- audio only,
- video with audio,
- video without audio
into one output video, with stereo audio.


Note: if multiple audio channels are playing at the same time, they should be mixed; likewise for video: the images from multiple sources should overlap.


I tried this (comments added here):


ffmpeg 
 -i tmp/%04d.png # [0]
 -f lavfi -t 0.1 -i anullsrc # [1], if needed for inputs without sound?
 -i a.mp3 # [2], we keep 1 sec. from it; should start at 0'05" in output video
 -i b.mp3 # [3], we keep 2 sec. from it; should start at 0'06" in output video
 -i with_sound.mp4 # [4], we keep 3 sec. from it; should start at 0'07" in output video
 -i without_sound.mp4 # [5], we keep 4 sec. from it; should start at 0'08" in output video
 -filter_complex 
 [2]atrim=start=0:duration=1.0,asetpts=PTS-STARTPTS[s2];[s2]adelay=5000|5000[t2];
 [3]atrim=start=0:duration=2.0,asetpts=PTS-STARTPTS[s3];[s3]adelay=6000|6000[t3];
 [4]atrim=start=0:duration=3.0,asetpts=PTS-STARTPTS[s4];[s4]adelay=7000|7000[t4];
 [5]atrim=start=0:duration=4.0,asetpts=PTS-STARTPTS[s5];[s5]adelay=8000|8000[t5];
 [0][1][t2][t3][t4][t5]concat=n=6:a=1:v=1:unsafe=1[outv][outa]
 -map [outv] -map [outa] out.mp4



I tried with various values concat=n=5, n=6, etc. and added unsafe=1, but I always get similar errors:



[Parsed_adelay_2 @ 00000000006e8140] Media type mismatch between the 'Parsed_adelay_2' filter output pad 0 (audio) and the 'Parsed_concat_6' filter input pad 2 (video)

[AVFilterGraph @ 00000000006923c0] Cannot create the link adelay:0 -> concat:2



or, on the occasions when I got it nearly working, the videos were appended one after another rather than merged/mixed.


Also, I'm looking for a syntax that works even if I don't know in advance whether the input videos have audio (I'm writing a script and don't know in advance whether the videos have audio channels).



TL;DR:


Question: how to mix/merge multiple inputs (image, audio, video with or without sound) with ffmpeg, with a precise starting timestamp for each, into a single video output?
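One possible direction, as a minimal and untested sketch rather than a full answer: concat joins inputs end to end, whereas overlapping playback generally calls for overlay on the video side and adelay + amix on the audio side. The sketch below uses only three placeholder inputs; the file names, the 1 s / 5 s offsets and the codecs are illustrative, not the six-input command from the question, and it is written as a Python subprocess call:


import subprocess

cmd = [
    "ffmpeg",
    "-i", "with_sound.mp4",     # [0] video + audio (base layer)
    "-i", "without_sound.mp4",  # [1] video only, should appear at t = 1 s
    "-i", "a.mp3",              # [2] audio only, should start at t = 5 s
    "-filter_complex",
    # shift the second video by 1 s and overlay it on the base video
    "[1:v]setpts=PTS-STARTPTS+1/TB[v1];"
    "[0:v][v1]overlay=eof_action=pass:enable='gte(t,1)'[outv];"
    # delay the mp3 by 5 s, then mix it with the base audio instead of concatenating
    "[2:a]adelay=5000|5000[a2];"
    "[0:a][a2]amix=inputs=2:duration=longest[outa]",
    "-map", "[outv]", "-map", "[outa]",
    "-c:v", "libx264", "-c:a", "aac", "-y", "out.mp4",
]
subprocess.run(cmd, check=True)


For inputs whose audio presence is unknown in advance, one common trick is to give every input a silent anullsrc track (as with input [1] in the question) so that the number of audio pads fed to amix stays fixed.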

-
How to receive a UDP stream with OpenCV?
17 February 2021, by Legion
I need to receive a stream from my Jetson Nano in my OpenCV program on my PC (Windows 10).


OK, I stream the camera from my device (Jetson Nano) using:


cv::VideoWriter gst_udpsink("appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw, format=BGRx ! nvvidconv ! nvv4l2h264enc insert-vui=1 ! video/x-h264, stream-format=byte-stream ! h264parse ! rtph264pay pt=96 config-interval=1 ! udpsink host=224.1.1.1 port=5000 auto-multicast=true", cv::CAP_GSTREAMER, 0, fps, cv::Size (width, height));



I installed OpenCV with GStreamer (following that) and tried this command:


c:\gstreamer\1.0\msvc_x86_64\bin\gst-launch-1.0.exe udpsrc uri=udp://224.1.1.1:5000 auto-multicast=true ! application/x-rtp, media=video, encoding-name=H264 ! rtpjitterbuffer latency=300 ! rtph264depay ! decodebin ! d3dvideosink



It works; unfortunately, no matter what latency I set, I still get quite a big lag.
When I try to use OpenCV:


cv::VideoCapture cap("udpsrc uri=udp://224.1.1.1:5000 auto-multicast=true ! application/x-rtp, media=video, encoding-name=H264 ! rtpjitterbuffer latency=300 ! rtph264depay ! decodebin ! videoconvert ! video/x-raw, format=BGR ! appsink", cv::CAP_GSTREAMER);



I get


[ WARN:0] global F:\Code\opencv_4.5.1\opencv-4.5.1\modules\videoio\src\cap_gstreamer.cpp (734) cv::GStreamerCapture::open OpenCV | GStreamer warning: Error opening bin: no element "udpsrc"
[ WARN:0] global F:\Code\opencv_4.5.1\opencv-4.5.1\modules\videoio\src\cap_gstreamer.cpp (501) cv::GStreamerCapture::isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created



And .isOpened() gives me false.
I don't know why; did I install something wrong?


I added everything to my PATH as instructed




I also tried to use FFmpeg:


setenv ("OPENCV_FFMPEG_CAPTURE_OPTIONS", "protocol_whitelist;file,rtp,udp", 1);
cap = cv::VideoCapture("test.sdp", cv::CAP_FFMPEG);



I get:


[rtp @ 0000014dc1f83bc0] Protocol 'rtp' not on whitelist 'file,crypto,data'!



I have no setenv(), so I tried this, and it seems that's the problem. Any idea?


Shell equivalent


ffplay myFile.sdp -protocol_whitelist file,udp,rtp -fflags nobuffer



It works successfully (with a delay, but successfully).


I'm willing to change anything to make it work! If it's possible with FFmpeg/GStreamer/vlclib, I can change the Jetson side as well. Thanks for any help!
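A note on the whitelist error only (not on the GStreamer plugin problem): OPENCV_FFMPEG_CAPTURE_OPTIONS is read from the environment by OpenCV's FFmpeg backend, so it has to be set before the capture is created; on Windows, _putenv_s can stand in for the missing setenv() in C++. A minimal, untested Python sketch of the same idea, reusing the test.sdp file and the option string from the question:


import os

# Set before the FFmpeg-backed capture is created, so the backend picks it up.
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "protocol_whitelist;file,rtp,udp"

import cv2

cap = cv2.VideoCapture("test.sdp", cv2.CAP_FFMPEG)
print("opened:", cap.isOpened())

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("udp stream", frame)
    if cv2.waitKey(1) == 27:  # press Esc to stop
        break

cap.release()
cv2.destroyAllWindows()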