
Advanced search
Media (16)
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (93)
-
Use it, talk about it, criticize it
10 April 2011
The first attitude to adopt is to talk about it, either directly with the people involved in its development, or with those around you, to convince new people to use it.
The larger the community, the faster the software will evolve...
A discussion list is available for all exchanges between users.
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
On other sites (10828)
-
How to detect silence in a pyAV AudioFrame?
29 January 2024, by Sachin Dole
I want to process streaming audio (coming in from a person speaking on the peer of a WebRTC peer connection) to detect when the person is done talking. I have the audio track and access to individual frames. I see that each frame can be converted to an ndarray using Frame.to_ndarray. I can also see values in the ndarray changing depending on what the person is saying, at what pitch and volume, etc. Now I want to detect silence on the stream. My question is: what is in the ndarray, and how can I make sense of the data?


while True:
    try:
        # Receive the next audio frame from the WebRTC track.
        frame: AudioFrame = await track.recv()
        # Convert the frame's samples to a NumPy ndarray.
        frame_nd_array = frame.to_ndarray()
    except Exception:
        break



Where can I learn what is in the frame_nd_array?
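As for what the array contains: to_ndarray() returns the frame's raw PCM samples as a NumPy array (for the common packed s16 format, one row of interleaved int16 values). Given that, silence can be detected by thresholding the frame's RMS energy. A minimal sketch, assuming s16 samples; the threshold value is a made-up starting point to tune against real input:

import numpy as np

SILENCE_RMS_THRESHOLD = 500.0  # hypothetical cutoff for int16 samples; tune empirically

def is_silent(frame_nd_array: np.ndarray) -> bool:
    # Cast to float so squaring int16 samples cannot overflow, then
    # compute the root-mean-square energy over the whole frame.
    samples = frame_nd_array.astype(np.float64)
    rms = np.sqrt(np.mean(samples ** 2))
    return rms < SILENCE_RMS_THRESHOLD

Counting how many consecutive frames come back silent (enough to span a second or two, given the frame duration) then gives a workable "done talking" signal.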


-
Is it possible to pipe an ffmpeg output with multiple files (HLS or DASH)?
29 August 2020, by New Dev
I'm using FFmpeg to generate fragmented MP4s in DASH and HLS formats.


ffmpeg -i input.mov -f dash -seg_duration 6 -hls_playlist true output.mpd



The above (simplified) command outputs multiple files in addition to output.mpd (e.g. the actual segments, master.m3u8, etc.).

Is there a way to get each produced file into its own separate output stream?



Broader context:


I'm trying to build a transcoder in Node.js running on Google Cloud, with the idea that it writes directly to Google Storage through a writable stream. I can create one writable stream per file, but since the number of files is dynamic, I'm not sure how to obtain a stream for each file.
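One technique that may fit: ffmpeg's dash muxer can write its outputs over HTTP, so pointing the output URL at a local server with -method PUT makes every produced file (manifest, playlists, segments) arrive as its own request body. A minimal sketch of the receiving side, here in Python with aiohttp; the port, route pattern and chunk size are illustrative assumptions:

# Receives each file ffmpeg produces as a separate PUT request.
# Run ffmpeg against it with something like:
#   ffmpeg -i input.mov -f dash -seg_duration 6 -hls_playlist true \
#          -method PUT http://localhost:8080/output.mpd
from aiohttp import web

async def handle_put(request: web.Request) -> web.Response:
    filename = request.match_info["name"]
    # request.content is a streaming reader: forward these chunks to
    # a cloud-storage writable stream instead of printing them.
    async for chunk in request.content.iter_chunked(64 * 1024):
        print(f"{filename}: received {len(chunk)} bytes")
    return web.Response(status=201)

app = web.Application()
app.add_routes([web.put("/{name}", handle_put)])

if __name__ == "__main__":
    web.run_app(app, port=8080)

The same idea ports to Node.js, where each incoming request is already a readable stream that can be piped into a storage writable stream.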


-
stream a video file using srt protocol [closed]
13 November 2022, by programmer
I want to stream a file over the network from a server. I will need to send the AVFormatContext and individual AVPackets over the network. For this reason I use the libav libraries (FFmpeg) and the SRT protocol to send to a specific port, but I cannot find a function that sends AVPackets over SRT. I found the srt_sendmsg() function, but it is not usable in this case. Is there any solution?
Thanks in advance.


As I said, I want to design an SRT server that streams a specific file: it should read the video file and send it on a specific port that clients can connect to in order to receive a live stream. I want to use C++, the libav library and the SRT protocol.
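A note on the usual approach: if the FFmpeg build includes SRT support (--enable-libsrt), libavformat can carry the packets itself through an srt:// output URL, so there is no need to call srt_sendmsg() by hand. In C++ that means opening an output context on the srt:// URL (e.g. via avformat_alloc_output_context2() with the mpegts muxer) and muxing packets into it. A minimal sketch of the same idea in Python with PyAV, which wraps the same libav* libraries; the URL, port and options are illustrative assumptions:

import av  # PyAV wraps the same libav* libraries

input_container = av.open("input.mp4")
in_stream = input_container.streams.video[0]

# Open an MPEG-TS muxer over SRT in listener mode: clients connect
# to port 9000 and receive the stream. Requires libsrt in FFmpeg.
output_container = av.open("srt://0.0.0.0:9000?mode=listener",
                           mode="w", format="mpegts")
out_stream = output_container.add_stream(template=in_stream)

for packet in input_container.demux(in_stream):
    if packet.dts is None:
        continue  # skip demuxer flush packets
    packet.stream = out_stream
    output_container.mux(packet)

output_container.close()
input_container.close()

For a live-paced stream rather than a burst, the packets would additionally need to be sent at roughly their timestamp intervals (FFmpeg's -re flag does this on the command line).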