
Media (3)
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
-
GetID3 - Additional buttons
9 April 2013, by
Updated: April 2013
Language: French
Type: Image
-
Collections - Quick creation form
19 February 2013, by
Updated: February 2013
Language: French
Type: Image
Other articles (96)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site and around MediaSPIP in general aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
-
Updating from version 0.1 to 0.2
24 June 2013, by
Explanation of the notable changes made when moving from MediaSPIP version 0.1 to version 0.3. What are the new features?
Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
The farm's regular Cron tasks
1 December 2010, by
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance in the shared hosting farm on a regular basis. Combined with a system Cron on the central site of the farm, this makes it possible to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)
On other sites (11178)
-
Efficient real-time video stream processing and forwarding with RTMP servers
19 May 2023, by dumbQuestions
I have a scenario where I need to retrieve a video stream from an RTMP server, apply image processing (specifically, adding blur to frames), and then forward the processed stream to another RTMP server (in this case, Twitch).


Currently, I'm using ffmpeg in conjunction with cv2 to retrieve and process the stream. However, this approach introduces significant lag when applying the blur. I'm seeking an alternative method that can achieve the desired result more efficiently. I did attempt to solely rely on ffmpeg for the entire process, but I couldn't find a way to selectively process frames based on a given condition and subsequently transmit only those processed frames.


Is there a more efficient approach or alternative solution that can address this issue and enable real-time video stream processing with minimal lag?


Thanks in advance!


import subprocess

import cv2


def forward_stream(server_url, stream_key, twitch_stream_key):
    get_ffmpeg_command = [...]

    send_ffmpeg_command = [...]

    # Start the FFmpeg process that pulls the incoming RTMP stream
    read_process = subprocess.Popen(get_ffmpeg_command, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    # Start the FFmpeg process that pushes the processed stream to Twitch
    send_process = subprocess.Popen(send_ffmpeg_command, stdin=subprocess.PIPE, stderr=subprocess.DEVNULL)

    # Open video capture
    cap = cv2.VideoCapture(f'{server_url}')

    while True:
        # Read the next frame
        ret, frame = cap.read()
        if ret:
            # Apply machine learning algorithm (defined elsewhere)
            should_blur = machine_learning_algorithm(frame)

            # Apply blur if necessary
            if should_blur:
                frame = cv2.blur(frame, (25, 25))

            # Write the frame to the sending FFmpeg process
            send_process.stdin.write(frame.tobytes())
        else:
            break

    # Release resources
    cap.release()
    send_process.stdin.close()
    send_process.wait()
    read_process.wait()




-
Transcoding WAV audio to AAC in an MP4 container using FFmpeg C API
7 November 2022, by vstrom coder
I'm trying to compose source video and audio into a final MP4 video.
I have a problem with the WAV audio. After decoding and filtering, I'm getting an error from the output encoder:
[aac @ 0x145e04c40] more samples than frame size


I initially used the following filter graph (minimal reproducible example):


abuffer -> aformat -> abuffersink



At this point I was getting the error mentioned above.


Then, I tried to insert an aresample filter into the graph:

abuffer -> aresample -> aformat -> abuffersink



But still getting the same error.
This was based on the fact that the ffmpeg CLI uses this filter when converting WAV to MP4:


Command:


ffmpeg -i source.wav output.mp4 -loglevel debug



Output contains:


[graph_0_in_0_0 @ 0x138f06200] Setting 'time_base' to value '1/44100'
 [graph_0_in_0_0 @ 0x138f06200] Setting 'sample_rate' to value '44100'
 [graph_0_in_0_0 @ 0x138f06200] Setting 'sample_fmt' to value 's16'
 [graph_0_in_0_0 @ 0x138f06200] Setting 'channel_layout' to value 'mono'
 [graph_0_in_0_0 @ 0x138f06200] tb:1/44100 samplefmt:s16 samplerate:44100 chlayout:mono
 [format_out_0_0 @ 0x138f06620] Setting 'sample_fmts' to value 'fltp'
 [format_out_0_0 @ 0x138f06620] Setting 'sample_rates' to value '96000|88200|64000|48000|44100|32000|24000|22050|16000|12000|11025|8000|7350'
 [format_out_0_0 @ 0x138f06620] auto-inserting filter 'auto_aresample_0' between the filter 'Parsed_anull_0' and the filter 'format_out_0_0'
 [AVFilterGraph @ 0x138f060f0] query_formats: 4 queried, 6 merged, 3 already done, 0 delayed
 [auto_aresample_0 @ 0x138f06c30] [SWR @ 0x120098000] Using s16p internally between filters
 [auto_aresample_0 @ 0x138f06c30] ch:1 chl:mono fmt:s16 r:44100Hz -> ch:1 chl:mono fmt:fltp r:44100Hz
 Output #0, mp4, to 'output.mp4':
 Metadata:
 encoder : Lavf59.27.100
 Stream #0:0, 0, 1/44100: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, delay 1024, 69 kb/s
 Metadata:
 encoder : Lavc59.37.100 aac



I'm trying to figure out whether I should use the SWR library directly as exemplified in the transcode_aac example.
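For what it's worth, one detail that seems relevant here (a sketch under assumptions, not a verified fix: buffersink_ctx and enc_ctx below stand for the abuffersink filter context and the opened AAC encoder context from the setup described above): the AAC encoder only accepts frames of exactly frame_size samples (1024) unless it advertises AV_CODEC_CAP_VARIABLE_FRAME_SIZE, and the buffersink can be told to return frames of exactly that size. This is roughly what the ffmpeg CLI does after configuring its own filter graph.

#include <libavcodec/avcodec.h>
#include <libavfilter/buffersink.h>

/* Sketch: call once after avfilter_graph_config() has succeeded and the AAC
   encoder (enc_ctx) has been opened. The sink will then deliver frames of
   exactly enc_ctx->frame_size samples, so the encoder never receives a whole
   decoded WAV packet at once ("more samples than frame size"). */
static void constrain_sink_to_encoder(AVFilterContext *buffersink_ctx,
                                      const AVCodecContext *enc_ctx)
{
    if (!(enc_ctx->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE))
        av_buffersink_set_frame_size(buffersink_ctx, enc_ctx->frame_size);
}

The other route, which is the one the transcode_aac example takes, is to leave the sink alone and stage decoded samples in an AVAudioFifo, pulling exactly frame_size samples per call to the encoder; libswresample is then only needed for the actual sample-format conversion.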


-
SDL2 won't play with more than 6 audio channels
13 June 2020, by Hiko Haieto
I am trying to stream (raw) video and audio from a capture device as part of my home media setup (with my PC acting similarly to a receiver in a typical home theatre setup), but the biggest problem I haven't been able to get past is that I cannot get ffplay (using SDL2 as its audio backend) to work with all 8 channels in 7.1 streams - two simply get dropped, despite it recognising 8-channel input and despite my specifying a 7.1 layout.



I have been able to confirm that all 8 channels are present in the source by first using ffmpeg to save the output of a speaker test to a file and playing that back with both mplayer (which works) and ffplay (which doesn't). I also wrote some minimal code to play the audio directly through SDL's API with the same result, so it's not the fault of ffplay. I might simply use mplayer if it weren't for the fact that piping output from ffmpeg adds too much latency for real-time use. I am using libSDL version 2.0.12 and ffplay 4.2.3, both of which are the latest at the time of writing and are ostensibly supposed to support 7.1 audio.



Using output recorded from speaker-test -c 8, I am using the following to play it back in mplayer:


mplayer -channels 8 -rawaudio channels=8 -format s16le -demuxer rawaudio speaker-test.pcm




and the following to play it back in ffplay:



ffplay -f s16le -ac 8 -af 'channelmap=channel_layout=7.1' speaker-test.pcm




No matter what I try, the two side channels get dropped. I couldn't figure out how to play raw PCM in SDL, so I repeated the same tests with WAV output and used the following code to play it back:



#include <SDL2/SDL.h>

int main(int argc, char **argv) {
    SDL_Init(SDL_INIT_AUDIO);
    SDL_AudioSpec wavSpec;
    Uint32 wavLength;
    Uint8 *wavBuffer;
    SDL_LoadWAV("speaker-test.wav", &wavSpec, &wavBuffer, &wavLength);
    SDL_AudioDeviceID deviceID = SDL_OpenAudioDevice(NULL, 0, &wavSpec, NULL, 0);
    SDL_QueueAudio(deviceID, wavBuffer, wavLength);
    SDL_PauseAudioDevice(deviceID, 0);
    SDL_Delay(30000);
    SDL_CloseAudioDevice(deviceID);
    SDL_FreeWAV(wavBuffer);
    SDL_Quit();
    return 0;
}




The above code exhibits the same behaviour of dropping the two additional side channels, despite it being the latest version of SDL that should have supported 7.1 for many releases now. Why might this be happening, and how might I fix it?
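One way to narrow down where the channels disappear (a diagnostic sketch rather than a known fix; it assumes the default output device is the one in use) is to request 8 channels explicitly and let SDL report the spec it actually negotiated with the audio backend:

#include <SDL2/SDL.h>
#include <stdio.h>

int main(void) {
    if (SDL_Init(SDL_INIT_AUDIO) != 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    SDL_AudioSpec want, have;
    SDL_zero(want);
    want.freq = 44100;
    want.format = AUDIO_S16SYS;
    want.channels = 8;           /* request full 7.1 */
    want.samples = 4096;

    /* Allow SDL to change the channel count instead of converting silently,
       so 'have.channels' reveals what the backend really offers. */
    SDL_AudioDeviceID dev = SDL_OpenAudioDevice(NULL, 0, &want, &have,
                                                SDL_AUDIO_ALLOW_CHANNELS_CHANGE);
    if (dev == 0) {
        fprintf(stderr, "SDL_OpenAudioDevice failed: %s\n", SDL_GetError());
    } else {
        printf("requested %d channels, got %d\n", want.channels, have.channels);
        SDL_CloseAudioDevice(dev);
    }

    SDL_Quit();
    return 0;
}

If the obtained spec comes back with 6 channels, the side pair is being dropped at the device/driver negotiation stage rather than by SDL_LoadWAV or the queueing code; if it reports 8, the loss is happening later in SDL's conversion of the queued WAV data, which would make it worth testing against a newer libSDL build.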