
Media (91)
-
Spitfire Parade - Crisis
15 May 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011, by
Updated: February 2012
Language: English
Type: Video
-
Video d’abeille en portrait
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011, by
Updated: February 2012
Language: English
Type: Video
-
Carte de Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
-
Publier une image simplement
13 April 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (66)
-
Managing creation and editing rights for objects
8 February 2011, by — By default, many features are restricted to administrators, but the minimum status required to use them can be configured independently, in particular: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;
-
Keeping control of your media in your hands
13 April 2011, by — The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
-
HTML5 audio and video support
13 April 2011, by — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (10139)
-
[ffmpeg][asyncio] main process is held by ffmpeg command
5 October 2024, by Michael Lopez — I created a Python program for handling my Arlo camera. To do that, I have been using the pyaarlo library (https://github.com/twrecked/pyaarlo) to catch the camera's events.
The goal is to monitor whether there is an active stream on the cameras, get the RTSP stream URL, and restream it to an HLS playlist for local usage.

Here is the Python code:


import asyncio
from decouple import config
import logging
from my_pyaarlo import PyArlo
import urllib.parse
from queue import Queue
import signal

# Read config from ENV (unchanged)
ARLO_USER = config('ARLO_USER')
ARLO_PASS = config('ARLO_PASS')
IMAP_HOST = config('IMAP_HOST')
IMAP_USER = config('IMAP_USER')
IMAP_PASS = config('IMAP_PASS')
DEBUG = config('DEBUG', default=False, cast=bool)
PYAARLO_BACKEND = config('PYAARLO_BACKEND', default=None)
PYAARLO_REFRESH_DEVICES = config('PYAARLO_REFRESH_DEVICES', default=0, cast=int)
PYAARLO_STREAM_TIMEOUT = config('PYAARLO_STREAM_TIMEOUT', default=0, cast=int)
PYAARLO_STORAGE_DIR = config('PYAARLO_STORAGE_DIR', default=None)
PYAARLO_ECDH_CURVE = config('PYAARLO_ECDH_CURVE', default=None)

# Initialize logging
logging.basicConfig(
    level=logging.DEBUG if DEBUG else logging.INFO,
    format='%(asctime)s [%(levelname)s] %(name)s: %(message)s'
)
logger = logging.getLogger(__name__)

ffmpeg_processes = {}
event_queue = Queue()
shutdown_event = asyncio.Event()

async def handle_idle_event(camera):
    logger.info(f"Idle event detected for camera: {camera.name}")
    await stop_ffmpeg_stream(camera.name)

async def get_stream_url(camera):
    try:
        # Attempt to get the stream URL (the blocking call runs in a worker thread)
        stream_url = await asyncio.to_thread(camera.get_stream)
        if stream_url:
            return stream_url
        else:
            logger.warning(f"Unable to get stream URL for {camera.name}. Stream might not be active.")
            return None
    except Exception as e:
        logger.error(f"Error getting stream URL for {camera.name}: {e}")
        return None

async def handle_user_stream_active_event(camera):
    logger.info(f"User stream active event detected for camera: {camera.name}")

    # Get the stream URL
    stream_url = await get_stream_url(camera)
    if stream_url:
        logger.info(f"Stream URL for {camera.name}: {stream_url}")
        await start_ffmpeg_stream(camera.name, stream_url)
    else:
        logger.warning(f"No stream URL available for {camera.name}")

async def event_handler(device, attr, value):
    logger.debug(f"Event: {device.name}, Attribute: {attr}, Value: {value}")
    if attr == 'activityState':
        if value == 'idle':
            await handle_idle_event(device)
        elif value in ['userStreamActive']:
            await handle_user_stream_active_event(device)
    elif attr == 'mediaUploadNotification':
        logger.info(f"Media uploaded for camera: {device.name}")

def sync_event_handler(device, attr, value):
    # This function will be called by PyArlo's synchronous callbacks
    event_queue.put((device, attr, value))

async def process_event_queue():
    while not shutdown_event.is_set():
        try:
            if not event_queue.empty():
                device, attr, value = event_queue.get()
                await event_handler(device, attr, value)
            await asyncio.sleep(0.1)  # Small delay to prevent busy-waiting
        except asyncio.CancelledError:
            break
        except Exception as e:
            logger.error(f"Error processing event: {e}")

async def display_status(arlo):
    while not shutdown_event.is_set():
        print("\n--- Camera Statuses ---")
        for camera in arlo.cameras:
            print(f"{camera.name}: {camera.state}")
        print("------------------------")
        await asyncio.sleep(5)

async def start_ffmpeg_stream(camera_name, stream_url):
    if camera_name not in ffmpeg_processes:
        output_hls = f"/tmp/{camera_name}.m3u8"

        try:
            new_url = urllib.parse.quote(stream_url.encode(), safe=':/?&=')
            logger.info(f"NEW_URL: {new_url}")

            ffmpeg_cmd = [
                "ffmpeg", "-hide_banner", "-loglevel", "quiet", "-nostats", "-nostdin", "-y", "-re",
                "-i", new_url,
                "-c:v", "libx264", "-preset", "veryfast",
                "-an", "-sn",
                "-f", "hls", "-hls_time", "4", "-hls_list_size", "10",
                "-hls_flags", "delete_segments", output_hls,
            ]
            logger.info(f"Starting FFmpeg command: {ffmpeg_cmd}")

            process = await asyncio.create_subprocess_exec(
                *ffmpeg_cmd,
                stdout=asyncio.subprocess.DEVNULL,
                stderr=asyncio.subprocess.DEVNULL
            )
            ffmpeg_processes[camera_name] = process
            logger.info(f"Started ffmpeg process with PID: {process.pid}")

        except Exception as e:
            logger.error(f"Error starting FFmpeg for {camera_name}: {e}")

async def stop_ffmpeg_stream(camera_name):
    logger.info(f"Stopping ffmpeg process for {camera_name}")
    ffmpeg_process = ffmpeg_processes.pop(camera_name, None)
    if ffmpeg_process:
        ffmpeg_process.terminate()

        try:
            await ffmpeg_process.wait()
            logger.info(f"{camera_name} stopped successfully")
        except Exception as e:
            print(f"FFMPEG Process didn't stop in time, forcefully terminating: {e}")
            ffmpeg_process.kill()
    else:
        logger.info(f"FFmpeg process for {camera_name} already stopped")

async def shutdown(signal, loop):
    logger.info(f"Received exit signal {signal.name}...")
    shutdown_event.set()
    tasks = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
    [task.cancel() for task in tasks]
    logger.info(f"Cancelling {len(tasks)} outstanding tasks")
    await asyncio.gather(*tasks, return_exceptions=True)
    loop.stop()

async def main():
    # Initialize PyArlo
    arlo_args = {
        'username': ARLO_USER,
        'password': ARLO_PASS,
        'tfa_source': 'imap',
        'tfa_type': 'email',
        'tfa_host': IMAP_HOST,
        'tfa_username': IMAP_USER,
        'tfa_password': IMAP_PASS,
        'save_session': True,
        'verbose_debug': DEBUG
    }

    # Add optional arguments
    for arg, value in [
        ('refresh_devices_every', PYAARLO_REFRESH_DEVICES),
        ('stream_timeout', PYAARLO_STREAM_TIMEOUT),
        ('backend', PYAARLO_BACKEND),
        ('storage_dir', PYAARLO_STORAGE_DIR),
        ('ecdh_curve', PYAARLO_ECDH_CURVE)
    ]:
        if value:
            arlo_args[arg] = value

    try:
        arlo = await asyncio.to_thread(PyArlo, **arlo_args)
    except Exception as e:
        logger.error(f"Failed to initialize PyArlo: {e}")
        return

    logger.info("Connected to Arlo. Monitoring events...")

    # Register event handlers for each camera
    for camera in arlo.cameras:
        camera.add_attr_callback('*', sync_event_handler)

    # Start the status display task
    status_task = asyncio.create_task(display_status(arlo))

    # Start the event processing task
    event_processing_task = asyncio.create_task(process_event_queue())

    # Set up signal handlers
    loop = asyncio.get_running_loop()
    for s in (signal.SIGHUP, signal.SIGTERM, signal.SIGINT):
        loop.add_signal_handler(
            s, lambda s=s: asyncio.create_task(shutdown(s, loop)))

    try:
        # Keep the main coroutine running
        while not shutdown_event.is_set():
            try:
                await asyncio.sleep(1)
            except asyncio.CancelledError:
                break
    except Exception as e:
        logger.error(f"Unexpected error in main loop: {e}")
    finally:
        logger.info("Shutting down...")
        for camera_name in list(ffmpeg_processes.keys()):
            await stop_ffmpeg_stream(camera_name)

        # Cancel and wait for all tasks
        tasks = [status_task, event_processing_task]
        for task in tasks:
            if not task.done():
                task.cancel()
        await asyncio.gather(*tasks, return_exceptions=True)

        logger.info("Program terminated.")

if __name__ == "__main__":
 try:
 asyncio.run(main())
 except KeyboardInterrupt:
 logger.info("Keyboard interrupt received. Exiting.")
 except Exception as e:
 logger.error(f"Unhandled exception: {e}")
 finally:
 logger.info("Program exit complete.")



My issue is that the ffmpeg command holds the main process (or the event loop) while it runs, blocking the events coming from the pyaarlo library. The camera state display, however, keeps showing the correct information.


I have tried a lot of things: without asyncio, with multiprocessing, with subprocess, ... the behavior is always the same. In some cases, I received the idle event only after the keyboard interrupt.
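One way to check whether the event loop itself is blocked (as opposed to, say, pyaarlo's callback thread) is a small heartbeat coroutine that measures how late asyncio.sleep wakes up. This is just a diagnostic sketch, independent of the program above; heartbeat is a made-up name:

import asyncio
import time

async def heartbeat(interval: float = 1.0):
    # If the drift grows while ffmpeg runs, the loop really is blocked;
    # if it stays near zero, the stall is happening somewhere else.
    while True:
        start = time.monotonic()
        await asyncio.sleep(interval)
        drift = time.monotonic() - start - interval
        print(f"heartbeat drift: {drift * 1000:.1f} ms")

# Usage, alongside the other tasks in main():
#     asyncio.create_task(heartbeat())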


Some additional information:

- When I stop the active stream, the event is not received; but when I start the stream again just after, that event is received.
- When I run the same ffmpeg command with a local long video file instead, everything is OK. That is why I suspect that the ffmpeg command is impacting the main process.






I succeeded in running the ffmpeg command with an RTSP stream URL, but without the event-monitoring loop:


import asyncio
import signal
import sys
import os

async def run_infinite_command():
    # Start the ffmpeg restream as our "infinite" command
    url = "rtsp://localhost:8554/camera1/stream"  # it is a fake url
    ffmpeg_cmd = [
        "ffmpeg", "-re", "-i", url,
        "-c:v", "libx264", "-preset", "veryfast",
        "-c:a", "copy",
        "-f", "hls", "-hls_time", "4", "-hls_list_size", "10",
        "-hls_flags", "delete_segments", "/tmp/output.m3u8"
    ]

    process = await asyncio.create_subprocess_exec(
        *ffmpeg_cmd,
        stdout=asyncio.subprocess.DEVNULL,
        stderr=asyncio.subprocess.DEVNULL
    )

    print(f"Started ffmpeg process with PID: {process.pid}")
    return process

async def main():
    # Start the infinite command
    process = await run_infinite_command()

    # Run the main loop for a few seconds
    for i in range(10):
        print(f"Main loop iteration {i+1}")
        await asyncio.sleep(1)

    # Stop the infinite command
    print("Stopping the ffmpeg process...")
    if sys.platform == "win32":
        # On Windows, we need to use CTRL_C_EVENT
        os.kill(process.pid, signal.CTRL_C_EVENT)
    else:
        # On Unix-like systems, we can use SIGTERM
        process.send_signal(signal.SIGTERM)

    # Wait for the process to finish
    try:
        await asyncio.wait_for(process.wait(), timeout=5.0)
        print("ffmpeg process stopped successfully")
    except asyncio.TimeoutError:
        print("ffmpeg process didn't stop in time, forcefully terminating")
        process.kill()

    print("Program finished")

if __name__ == "__main__":
    asyncio.run(main())



With this script, the ffmpeg command is correctly launched and terminated after the for loop.


Could you help?


-
ffmpeg crash when updating overlay file (c# console app)
24 December 2024, by Vlad Stefan — I am trying to develop a console app that records the screen and stamps the machine's CPU/RAM/GPU usage onto the recording. The problem I am facing is that after a while (2-3 hours) the recording stops, because ffmpeg tries to read the text file at the same moment my C# code tries to update it. I found out that I should write to a temp file and replace the original instead of rewriting the whole file, but with that approach I run into another problem: ffmpeg may try to read the file while it is being replaced, or even in the split second when it is considered deleted. Any ideas what I should do? How should the temp-file approach be managed, or how can I make the method that updates the same file stable? I was even thinking of increasing the drawtext reload interval, since it might narrow the chances of crashing, but it is not a 100% crash-proof solution.


Error message received from ffmpeg when updating only the text in the file:


Error: [Parsed_drawtext_0 @ 000001e68fd95dc0] [FILE @ 0000009fb7ffee70] Error occurred in CreateFileMapping()
Error: [Parsed_drawtext_0 @ 000001e68fd95dc0] The text file 'OverlayFiles/OverlayFile_MyPC.txt' could not be read or is empty
Error: [vf#0:0 @ 000001e68fd48300] Error while filtering: Operation not permitted
Error: [vf#0:0 @ 000001e68fd48300] Task finished with error code: -1 (Operation not permitted)



Error message received from ffmpeg when using a temp file that replaces the original file:


Error: [Parsed_drawtext_0 @ 0000014c815e6200] [FILE @ 000000253d7fee70] Cannot read file 'OverlayFiles/OverlayFile_MyPC.txt': Permission denied
Error: [Parsed_drawtext_0 @ 0000014c815e6200] The text file 'OverlayFiles/OverlayFile_MyPC.txt' could not be read or is empty
Error: [vf#0:0 @ 0000014c81597280] Error while filtering: Permission denied
Error: [vf#0:0 @ 0000014c81597280] Task finished with error code: -13 (Permission denied)
Error: [vf#0:0 @ 0000014c81597280] Terminating thread with return code -13 (Permission denied)



ffmpeg arguments:


string arguments = $"-video_size 1920x1080 -framerate 30 -f gdigrab -i desktop -c:v libx264rgb -crf 0 -preset ultrafast -color_range 2 " +
 $"-vf \"drawtext=fontfile=C\\\\:/Windows/fonts/consola.ttf:fontsize=30:fontcolor='white':textfile={overlayFilePath_}:boxcolor=0x00000080:box=1:x=10:y=H-210:reload=1\" \"" + 
 outputFile + "\"";



The code that updates the overlay file (1st version):


public void UpdateOverlayText(string filePath)
{
    string usage = GetSystemUsage(filePath); // Get the system usage data

    try
    {
        // Open the file with FileShare.ReadWrite to allow other processes to read it while writing
        using (var fileStream = new FileStream(filePath, FileMode.Create, FileAccess.Write, FileShare.ReadWrite))
        {
            using (var writer = new StreamWriter(fileStream))
            {
                writer.Write(usage); // Write the system usage data to the file
            }
        }

        // Ensure file permissions are set correctly after writing
        SetFilePermissions(filePath);
    }
    catch (IOException ex)
    {
        Console.WriteLine($"Error updating overlay file: {ex.Message}");
    }
}



(UPDATE) Code that updates the overlay file using a MemoryStream:


public void UpdateOverlayText(string filePath)
{
    string usage = GetSystemUsage(filePath);

    try
    {
        using (var memoryStream = new MemoryStream())
        {
            using (var writer = new StreamWriter(memoryStream))
            {
                writer.Write(usage);
                writer.Flush();

                memoryStream.Position = 0;

                File.WriteAllBytes(filePath, memoryStream.ToArray());
            }
        }

        SetFilePermissions(filePath);
    }
    catch (IOException ex)
    {
        Console.WriteLine($"Error updating overlay file: {ex.Message}");
    }
}



Code tested using a MemoryStream and a temp file:


public void UpdateOverlayText(string filePath)
{
    string usage = GetSystemUsage(filePath);
    string tempFilePath = filePath + ".tmp";
    try
    {
        // Write to a temporary file first
        using (var memoryStream = new MemoryStream())
        {
            using (var writer = new StreamWriter(memoryStream))
            {
                writer.Write(usage);
                writer.Flush();

                memoryStream.Position = 0;
                File.WriteAllBytes(tempFilePath, memoryStream.ToArray());
            }
        }
        File.Replace(tempFilePath, filePath, null);
    }
    catch (IOException ex)
    {
        Console.WriteLine($"Error updating overlay file: {ex.Message}");
    }
    finally
    {
        if (File.Exists(tempFilePath))
        {
            File.Delete(tempFilePath);
        }
    }
}



Thanks in advance for your help, and sorry for any typos or wrong phrasing I may have used.


-
Computer crashing when using python tools in same script
5 February 2023, by SL1997 — I am attempting to use the speech recognition toolkit VOSK and the speaker diarization package Resemblyzer to transcribe audio and then identify the speakers in the audio.


Tools:


https://github.com/alphacep/vosk-api

https://github.com/resemble-ai/Resemblyzer

I can do both things individually, but I run into issues when trying to do them in the same Python script.


I used the following guide when setting up the diarization system:




Computer specs are as follows:


Intel(R) Core(TM) i3-7100 CPU @ 3.90GHz, 3912 Mhz, 2 Core(s), 4 Logical Processor(s)

32GB RAM

The following is my code. I am not too sure whether using threading is appropriate, or whether I even implemented it correctly. How can I best optimize this code so that it achieves the results I am looking for without crashing?


from vosk import Model, KaldiRecognizer
from pydub import AudioSegment
import json
import sys
import os
import subprocess
import datetime
from resemblyzer import preprocess_wav, VoiceEncoder
from pathlib import Path
from resemblyzer.hparams import sampling_rate
from spectralcluster import SpectralClusterer
import threading
import queue
import gc



def recognition(queue, audio, FRAME_RATE):

    model = Model("Vosk_Models/vosk-model-small-en-us-0.15")

    rec = KaldiRecognizer(model, FRAME_RATE)
    rec.SetWords(True)

    rec.AcceptWaveform(audio.raw_data)
    result = rec.Result()

    transcript = json.loads(result)  # ["text"]

    # return transcript
    queue.put(transcript)



def diarization(queue, audio):

    wav = preprocess_wav(audio)
    encoder = VoiceEncoder("cpu")
    _, cont_embeds, wav_splits = encoder.embed_utterance(wav, return_partials=True, rate=16)
    print(cont_embeds.shape)

    clusterer = SpectralClusterer(
        min_clusters=2,
        max_clusters=100,
        p_percentile=0.90,
        gaussian_blur_sigma=1)

    labels = clusterer.predict(cont_embeds)

    def create_labelling(labels, wav_splits):

        times = [((s.start + s.stop) / 2) / sampling_rate for s in wav_splits]
        labelling = []
        start_time = 0

        for i, time in enumerate(times):
            if i > 0 and labels[i] != labels[i - 1]:
                temp = [str(labels[i - 1]), start_time, time]
                labelling.append(tuple(temp))
                start_time = time
            if i == len(times) - 1:
                temp = [str(labels[i]), start_time, time]
                labelling.append(tuple(temp))

        return labelling

    # return
    labelling = create_labelling(labels, wav_splits)
    queue.put(labelling)



def identify_speaker(queue1, queue2):

    transcript = queue1.get()
    labelling = queue2.get()

    for speaker in labelling:

        speakerID = speaker[0]
        speakerStart = speaker[1]
        speakerEnd = speaker[2]

        result = transcript['result']
        words = [r['word'] for r in result if speakerStart < r['start'] < speakerEnd]
        # return
        print("Speaker", speakerID, ":", ' '.join(words), "\n")





def main():

    queue1 = queue.Queue()
    queue2 = queue.Queue()

    FRAME_RATE = 16000
    CHANNELS = 1

    podcast = AudioSegment.from_mp3("Podcast_Audio/Film-Release-Clip.mp3")
    podcast = podcast.set_channels(CHANNELS)
    podcast = podcast.set_frame_rate(FRAME_RATE)

    first_thread = threading.Thread(target=recognition, args=(queue1, podcast, FRAME_RATE))
    second_thread = threading.Thread(target=diarization, args=(queue2, podcast))
    third_thread = threading.Thread(target=identify_speaker, args=(queue1, queue2))

    first_thread.start()
    first_thread.join()
    gc.collect()

    second_thread.start()
    second_thread.join()
    gc.collect()

    third_thread.start()
    third_thread.join()
    gc.collect()

    # transcript = recognition(podcast,FRAME_RATE)
    #
    # labelling = diarization(podcast)
    #
    # print(identify_speaker(transcript, labelling))


if __name__ == '__main__':
    main()



When I say crash, I mean everything freezes: I have to hold down the power button on the desktop and turn it back on again. No blue/blank screen, just frozen in my IDE looking at my code. Any help in resolving this issue would be greatly appreciated.
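For what it's worth, since each thread above is started and join()ed immediately, the three stages already run one after another; the commented-out lines at the bottom of main() hint at the same thing. A minimal sketch of that sequential flow, reusing the functions above unchanged (only the threads are dropped), would look like this:

import queue

def main_sequential():
    # Same preprocessing as the threaded version.
    FRAME_RATE = 16000
    podcast = AudioSegment.from_mp3("Podcast_Audio/Film-Release-Clip.mp3")
    podcast = podcast.set_channels(1).set_frame_rate(FRAME_RATE)

    # Each call runs to completion before the next one starts.
    q1, q2 = queue.Queue(), queue.Queue()
    recognition(q1, podcast, FRAME_RATE)   # puts the transcript on q1
    diarization(q2, podcast)               # puts the labelling on q2
    identify_speaker(q1, q2)               # consumes both and prints the result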