
On other sites (4655)
-
[ffmpeg][asyncio] main process is held by ffmpeg command
5 October 2024, by Michael Lopez

I created a Python program to handle my Arlo cameras. To do that I have been using the pyaarlo library (https://github.com/twrecked/pyaarlo) to catch camera events.
The goal is to monitor whether there is an active stream on the cameras, get the RTSP stream URL, and restream it to an HLS playlist for local usage.
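Concretely, the restream step amounts to one ffmpeg invocation per camera. As a minimal sketch (the helper name, the RTSP URL, and the playlist path below are placeholders of mine, not part of the program), the argument list can be assembled like this:

```python
# Hypothetical helper: assembles the kind of ffmpeg argument list used for
# the RTSP-to-HLS restream. The URL and playlist path are placeholders.
def build_hls_cmd(rtsp_url: str, playlist_path: str) -> list:
    return [
        "ffmpeg", "-hide_banner", "-loglevel", "error",
        "-i", rtsp_url,
        "-c:v", "libx264", "-preset", "veryfast",
        "-an",                        # drop audio for local monitoring
        "-f", "hls",
        "-hls_time", "4",             # target segment length in seconds
        "-hls_list_size", "10",       # keep the last 10 segments in the playlist
        "-hls_flags", "delete_segments",
        playlist_path,
    ]

cmd = build_hls_cmd("rtsp://example.local/stream", "/tmp/camera1.m3u8")
print(" ".join(cmd))
```

The rolling playlist (10 segments of about 4 seconds, old segments deleted) keeps disk usage bounded for local viewing.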

Here is the Python code:


import asyncio
from decouple import config
import logging
from my_pyaarlo import PyArlo
import urllib.parse
from queue import Queue
import signal

# Read config from ENV (unchanged)
ARLO_USER = config('ARLO_USER')
ARLO_PASS = config('ARLO_PASS')
IMAP_HOST = config('IMAP_HOST')
IMAP_USER = config('IMAP_USER')
IMAP_PASS = config('IMAP_PASS')
DEBUG = config('DEBUG', default=False, cast=bool)
PYAARLO_BACKEND = config('PYAARLO_BACKEND', default=None)
PYAARLO_REFRESH_DEVICES = config('PYAARLO_REFRESH_DEVICES', default=0, cast=int)
PYAARLO_STREAM_TIMEOUT = config('PYAARLO_STREAM_TIMEOUT', default=0, cast=int)
PYAARLO_STORAGE_DIR = config('PYAARLO_STORAGE_DIR', default=None)
PYAARLO_ECDH_CURVE = config('PYAARLO_ECDH_CURVE', default=None)

# Initialize logging
logging.basicConfig(
    level=logging.DEBUG if DEBUG else logging.INFO,
    format='%(asctime)s [%(levelname)s] %(name)s: %(message)s'
)
logger = logging.getLogger(__name__)

ffmpeg_processes = {}
event_queue = Queue()
shutdown_event = asyncio.Event()

async def handle_idle_event(camera):
    logger.info(f"Idle event detected for camera: {camera.name}")
    await stop_ffmpeg_stream(camera.name)

async def get_stream_url(camera):
    try:
        # Attempt to get the stream URL; pass the bound method itself to
        # to_thread so the blocking call runs off the event loop.
        stream_url = await asyncio.to_thread(camera.get_stream)
        if stream_url:
            return stream_url
        else:
            logger.warning(f"Unable to get stream URL for {camera.name}. Stream might not be active.")
            return None
    except Exception as e:
        logger.error(f"Error getting stream URL for {camera.name}: {e}")
        return None

async def handle_user_stream_active_event(camera):
    logger.info(f"User stream active event detected for camera: {camera.name}")

    # Get the stream URL
    stream_url = await get_stream_url(camera)
    if stream_url:
        logger.info(f"Stream URL for {camera.name}: {stream_url}")
        await start_ffmpeg_stream(camera.name, stream_url)
    else:
        logger.warning(f"No stream URL available for {camera.name}")

async def event_handler(device, attr, value):
    logger.debug(f"Event: {device.name}, Attribute: {attr}, Value: {value}")
    if attr == 'activityState':
        if value == 'idle':
            await handle_idle_event(device)
        elif value in ['userStreamActive']:
            await handle_user_stream_active_event(device)
    elif attr == 'mediaUploadNotification':
        logger.info(f"Media uploaded for camera: {device.name}")

def sync_event_handler(device, attr, value):
    # This function will be called by PyArlo's synchronous callbacks
    event_queue.put((device, attr, value))

async def process_event_queue():
    while not shutdown_event.is_set():
        try:
            if not event_queue.empty():
                device, attr, value = event_queue.get()
                await event_handler(device, attr, value)
            await asyncio.sleep(0.1)  # Small delay to prevent busy-waiting
        except asyncio.CancelledError:
            break
        except Exception as e:
            logger.error(f"Error processing event: {e}")

async def display_status(arlo):
    while not shutdown_event.is_set():
        print("\n--- Camera Statuses ---")
        for camera in arlo.cameras:
            print(f"{camera.name}: {camera.state}")
        print("------------------------")
        await asyncio.sleep(5)

async def start_ffmpeg_stream(camera_name, stream_url):
    if camera_name not in ffmpeg_processes:
        output_hls = f"/tmp/{camera_name}.m3u8"

        try:
            new_url = urllib.parse.quote(stream_url.encode(), safe=':/?&=')
            logger.info(f"NEW_URL: {new_url}")

            ffmpeg_cmd = [
                "ffmpeg", "-hide_banner", "-loglevel", "quiet", "-nostats", "-nostdin", "-y", "-re",
                "-i", new_url,
                "-c:v", "libx264", "-preset", "veryfast",
                "-an", "-sn",
                "-f", "hls", "-hls_time", "4", "-hls_list_size", "10",
                "-hls_flags", "delete_segments", output_hls,
            ]
            logger.info(f"Starting FFmpeg command: {ffmpeg_cmd}")

            process = await asyncio.create_subprocess_exec(
                *ffmpeg_cmd,
                stdout=asyncio.subprocess.DEVNULL,
                stderr=asyncio.subprocess.DEVNULL
            )
            ffmpeg_processes[camera_name] = process
            logger.info(f"Started ffmpeg process with PID: {process.pid}")

        except Exception as e:
            logger.error(f"Error starting FFmpeg for {camera_name}: {e}")

async def stop_ffmpeg_stream(camera_name):
    logger.info(f"Stopping ffmpeg process for {camera_name}")
    ffmpeg_process = ffmpeg_processes.pop(camera_name, None)
    if ffmpeg_process:
        ffmpeg_process.terminate()
        try:
            # wait() on its own never times out; bound it so a stuck
            # process can be killed.
            await asyncio.wait_for(ffmpeg_process.wait(), timeout=5.0)
            logger.info(f"{camera_name} stopped successfully")
        except asyncio.TimeoutError:
            logger.warning("FFmpeg process didn't stop in time, forcefully terminating")
            ffmpeg_process.kill()
    else:
        logger.info(f"FFmpeg process for {camera_name} already stopped")

async def shutdown(signal, loop):
    logger.info(f"Received exit signal {signal.name}...")
    shutdown_event.set()
    tasks = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
    for task in tasks:
        task.cancel()
    logger.info(f"Cancelling {len(tasks)} outstanding tasks")
    await asyncio.gather(*tasks, return_exceptions=True)
    loop.stop()

async def main():
    # Initialize PyArlo
    arlo_args = {
        'username': ARLO_USER,
        'password': ARLO_PASS,
        'tfa_source': 'imap',
        'tfa_type': 'email',
        'tfa_host': IMAP_HOST,
        'tfa_username': IMAP_USER,
        'tfa_password': IMAP_PASS,
        'save_session': True,
        'verbose_debug': DEBUG
    }

    # Add optional arguments
    for arg, value in [
        ('refresh_devices_every', PYAARLO_REFRESH_DEVICES),
        ('stream_timeout', PYAARLO_STREAM_TIMEOUT),
        ('backend', PYAARLO_BACKEND),
        ('storage_dir', PYAARLO_STORAGE_DIR),
        ('ecdh_curve', PYAARLO_ECDH_CURVE)
    ]:
        if value:
            arlo_args[arg] = value

    try:
        arlo = await asyncio.to_thread(PyArlo, **arlo_args)
    except Exception as e:
        logger.error(f"Failed to initialize PyArlo: {e}")
        return

    logger.info("Connected to Arlo. Monitoring events...")

    # Register event handlers for each camera
    for camera in arlo.cameras:
        camera.add_attr_callback('*', sync_event_handler)

    # Start the status display task
    status_task = asyncio.create_task(display_status(arlo))

    # Start the event processing task
    event_processing_task = asyncio.create_task(process_event_queue())

    # Set up signal handlers
    loop = asyncio.get_running_loop()
    for s in (signal.SIGHUP, signal.SIGTERM, signal.SIGINT):
        loop.add_signal_handler(
            s, lambda s=s: asyncio.create_task(shutdown(s, loop)))

    try:
        # Keep the main coroutine running
        while not shutdown_event.is_set():
            try:
                await asyncio.sleep(1)
            except asyncio.CancelledError:
                break
            except Exception as e:
                logger.error(f"Unexpected error in main loop: {e}")
    finally:
        logger.info("Shutting down...")
        for camera_name in list(ffmpeg_processes.keys()):
            await stop_ffmpeg_stream(camera_name)

        # Cancel and wait for all tasks
        tasks = [status_task, event_processing_task]
        for task in tasks:
            if not task.done():
                task.cancel()
        await asyncio.gather(*tasks, return_exceptions=True)

        logger.info("Program terminated.")

if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        logger.info("Keyboard interrupt received. Exiting.")
    except Exception as e:
        logger.error(f"Unhandled exception: {e}")
    finally:
        logger.info("Program exit complete.")



My issue is with the ffmpeg command, which holds the main process (or the event loop) while it runs, blocking the events coming from the pyaarlo library. The camera state itself continues to update with the correct information.


I tried a lot of things: without asyncio, with multiprocessing, with subprocess, ... and the behavior is always the same. In some cases, I received the idle event only after the keyboard interrupt.
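For what it's worth, the difference between running something blocking directly on the event loop and pushing it to a worker thread can be shown in isolation. This sketch (independent of pyaarlo and ffmpeg, names are mine) counts how many ticks a background task manages while a blocking call runs:

```python
import asyncio
import time

async def ticker(ticks: list) -> None:
    # Appends a timestamp every 50 ms; if the event loop is blocked,
    # no ticks can accumulate.
    for _ in range(10):
        ticks.append(time.monotonic())
        await asyncio.sleep(0.05)

async def demo() -> tuple:
    # Case 1: a blocking call made directly on the loop starves the ticker.
    blocked_ticks: list = []
    task = asyncio.create_task(ticker(blocked_ticks))
    await asyncio.sleep(0)        # let the ticker run once
    time.sleep(0.5)               # blocks the whole event loop
    task.cancel()

    # Case 2: the same blocking call pushed to a worker thread with
    # asyncio.to_thread lets the ticker keep running.
    free_ticks: list = []
    task = asyncio.create_task(ticker(free_ticks))
    await asyncio.to_thread(time.sleep, 0.5)
    task.cancel()
    return len(blocked_ticks), len(free_ticks)

blocked, free = asyncio.run(demo())
print(blocked, free)  # the blocked case records far fewer ticks
```

If events stop arriving only while ffmpeg is active, some call on the loop is behaving like case 1.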


Some additional information:

- When I stop the active stream, the event is not received; but when I start the stream again just after, that event is received.
- When I run the same ffmpeg command with a long local video file, everything is OK. That is why I guess the ffmpeg command is impacting the main process.
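As an aside on the event path: instead of polling a thread-safe Queue every 100 ms, a common pattern for handing events from a library-owned thread to asyncio is loop.call_soon_threadsafe feeding an asyncio.Queue. A self-contained sketch (the callback signature mimics the one in the program above; this is not the pyaarlo API):

```python
import asyncio
import threading

async def demo() -> list:
    loop = asyncio.get_running_loop()
    events: asyncio.Queue = asyncio.Queue()

    # Plays the role of a synchronous callback fired on a library-owned
    # thread; call_soon_threadsafe is the supported way to reach the loop
    # from another thread.
    def sync_callback(device, attr, value):
        loop.call_soon_threadsafe(events.put_nowait, (device, attr, value))

    # Simulate the library thread emitting two events.
    emitter = threading.Thread(target=lambda: (
        sync_callback("camera1", "activityState", "userStreamActive"),
        sync_callback("camera1", "activityState", "idle"),
    ))
    emitter.start()
    emitter.join()

    received = []
    for _ in range(2):
        received.append(await events.get())   # no polling, no sleep loop
    return received

received = asyncio.run(demo())
print(received)
```

This delivers events in order and wakes the consumer immediately, instead of on the next 100 ms poll.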






I succeeded in running the ffmpeg command with an RTSP stream URL, but without an event-monitoring loop:


import asyncio
import signal
import sys
import os

async def run_infinite_command():
    # Start ffmpeg on an RTSP stream as our "infinite" command
    url = "rtsp://localhost:8554/camera1/stream"  # a fake URL
    ffmpeg_cmd = [
        "ffmpeg", "-re", "-i", url,
        "-c:v", "libx264", "-preset", "veryfast",
        "-c:a", "copy",
        "-f", "hls", "-hls_time", "4", "-hls_list_size", "10",
        "-hls_flags", "delete_segments", "/tmp/output.m3u8"
    ]

    process = await asyncio.create_subprocess_exec(
        *ffmpeg_cmd,
        stdout=asyncio.subprocess.DEVNULL,
        stderr=asyncio.subprocess.DEVNULL
    )

    print(f"Started ffmpeg with PID: {process.pid}")
    return process

async def main():
    # Start the infinite command
    process = await run_infinite_command()

    # Run the main loop for a few seconds
    for i in range(10):
        print(f"Main loop iteration {i+1}")
        await asyncio.sleep(1)

    # Stop the infinite command
    print("Stopping ffmpeg...")
    if sys.platform == "win32":
        # On Windows, we need to use CTRL_C_EVENT
        os.kill(process.pid, signal.CTRL_C_EVENT)
    else:
        # On Unix-like systems, we can use SIGTERM
        process.send_signal(signal.SIGTERM)

    # Wait for the process to finish
    try:
        await asyncio.wait_for(process.wait(), timeout=5.0)
        print("ffmpeg stopped successfully")
    except asyncio.TimeoutError:
        print("ffmpeg didn't stop in time, forcefully terminating")
        process.kill()

    print("Program finished")

if __name__ == "__main__":
    asyncio.run(main())



With this script, the ffmpeg command is correctly launched and terminated after the for loop.
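The terminate-then-kill pattern from that script can be exercised against a plain `sleep` process, so it runs anywhere without ffmpeg or a camera (the 60-second `sleep` is just a stand-in for the long-running encoder):

```python
import asyncio
import signal

async def stop_process(process, timeout: float = 5.0) -> str:
    """Terminate a subprocess, escalating to SIGKILL if SIGTERM is ignored."""
    process.terminate()                          # polite request: SIGTERM
    try:
        await asyncio.wait_for(process.wait(), timeout=timeout)
        return "terminated"
    except asyncio.TimeoutError:
        process.kill()                           # force: SIGKILL
        await process.wait()
        return "killed"

async def demo() -> tuple:
    # A plain `sleep` stands in for the long-running ffmpeg process.
    proc = await asyncio.create_subprocess_exec("sleep", "60")
    outcome = await stop_process(proc, timeout=2.0)
    return outcome, proc.returncode

outcome, returncode = asyncio.run(demo())
print(outcome, returncode)
```

On Unix, a process that exits due to SIGTERM reports a negative return code (-signal.SIGTERM).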


Could you help?


-
ffmpeg video slides vertically after 'Invalid buffer size, packet size expected frame_size' error (vsync screen tearing glitch)?
7 July 2020, by GlabbichRulz

I have a video which I want to cut and crop using OpenCV and ffmpeg.




I want the output to be H.265, so I am using an ffmpeg subprocess (writing frame bytes to its stdin) as explained here. This is a minimal version of my code that leads to the error:


import os, shlex, subprocess, cv2, imutils

VIDEO_DIR = '../SoExample'  # should contain a file 'in.mpg'
TIMESPAN = (3827, 3927)  # cut to this timespan (frame numbers)
CROP = dict(min_x=560, max_x=731, min_y=232, max_y=418)  # crop to this area

# calculate output video size
size = (CROP['max_x']-CROP['min_x'], CROP['max_y']-CROP['min_y'])  # (w,h)
# ffmpeg throws an error on odd dimensions that are not divisible by 2,
# so i just add a pixel to the size and stretch the original image by 1 pixel later.
size_rounded = (size[0]+1 if size[0] % 2 != 0 else size[0],
                size[1]+1 if size[1] % 2 != 0 else size[1])

# read input video
vid_path_in = os.path.join(VIDEO_DIR, 'in.mpg')
cap = cv2.VideoCapture(vid_path_in)
fps = int(cap.get(cv2.CAP_PROP_FPS))

# generate and run ffmpeg command
ffmpeg_cmd = (f'/usr/bin/ffmpeg -y -s {size_rounded[0]}x{size_rounded[1]} -pixel_format'
              + f' bgr24 -f rawvideo -r {fps} -re -i pipe: -vcodec libx265 -pix_fmt yuv420p'
              + f' -crf 24 -x265-params "ctu=64" "{os.path.join(VIDEO_DIR, "out.mp4")}"')
print("using cmd", ffmpeg_cmd)
process = subprocess.Popen(shlex.split(ffmpeg_cmd), stdin=subprocess.PIPE)

# seek to the beginning of the cutting timespan and loop through frames of input video
cap.set(cv2.CAP_PROP_POS_FRAMES, TIMESPAN[0])
frame_returned = True
while cap.isOpened() and frame_returned:
    frame_returned, frame = cap.read()
    frame_number = cap.get(cv2.CAP_PROP_POS_FRAMES) - 1

    # check if timespan end is not reached yet
    if frame_number < TIMESPAN[1]:

        # crop to relevant image area
        frame_cropped = frame[CROP['min_y']:CROP['max_y'],
                              CROP['min_x']:CROP['max_x']]

        # resize to even frame size if needed:
        if size != size_rounded:
            frame_cropped = imutils.resize(frame_cropped, width=size_rounded[0],
                                           height=size_rounded[1])

        # Show processed image using opencv: I see no errors here.
        cv2.imshow('Frame', frame_cropped)

        # Write cropped video frame to input stream of ffmpeg sub-process.
        process.stdin.write(frame_cropped.tobytes())
    else:
        break

    # Press Q on keyboard to exit earlier
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

process.stdin.close()  # Close and flush stdin
process.wait()  # Wait for sub-process to finish
process.terminate()  # Terminate the sub-process

print("Done!")



Unfortunately, my output looks like this:




The output should not include this vertical sliding glitch. Does anyone know how to fix it?


My console output for the above script shows:


using cmd /usr/bin/ffmpeg -y -s 172x186 -pixel_format bgr24 -f rawvideo -r 23 -i pipe: -vcodec libx265 -pix_fmt yuv420p -crf 24 -x265-params "ctu=64" "../SoExample/out.mp4"
ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers
 built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
 configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
 WARNING: library configuration mismatch
 avcodec configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared --enable-version3 --disable-doc --disable-programs --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libtesseract --enable-libvo_amrwbenc
 libavutil 55. 78.100 / 55. 78.100
 libavcodec 57.107.100 / 57.107.100
 libavformat 57. 83.100 / 57. 83.100
 libavdevice 57. 10.100 / 57. 10.100
 libavfilter 6.107.100 / 6.107.100
 libavresample 3. 7. 0 / 3. 7. 0
 libswscale 4. 8.100 / 4. 8.100
 libswresample 2. 9.100 / 2. 9.100
 libpostproc 54. 7.100 / 54. 7.100
Input #0, rawvideo, from 'pipe:':
 Duration: N/A, start: 0.000000, bitrate: 17659 kb/s
 Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 172x186, 17659 kb/s, 23 tbr, 23 tbn, 23 tbc
Stream mapping:
 Stream #0:0 -> #0:0 (rawvideo (native) -> hevc (libx265))
x265 [info]: HEVC encoder version 2.6
x265 [info]: build info [Linux][GCC 7.2.0][64 bit] 8bit+10bit+12bit
x265 [info]: using cpu capabilities: MMX2 SSE2Fast LZCNT SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
x265 [info]: Main profile, Level-2 (Main tier)
x265 [info]: Thread pool created using 4 threads
x265 [info]: Slices : 1
x265 [info]: frame threads / pool features : 2 / wpp(3 rows)
x265 [warning]: Source height < 720p; disabling lookahead-slices
x265 [info]: Coding QT: max CU size, min CU size : 64 / 8
x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
x265 [info]: ME / range / subpel / merge : hex / 57 / 2 / 2
x265 [info]: Keyframe min / max / scenecut / bias: 23 / 250 / 40 / 5.00
x265 [info]: Lookahead / bframes / badapt : 20 / 4 / 2
x265 [info]: b-pyramid / weightp / weightb : 1 / 1 / 0
x265 [info]: References / ref-limit cu / depth : 3 / on / on
x265 [info]: AQ: mode / str / qg-size / cu-tree : 1 / 1.0 / 32 / 1
x265 [info]: Rate Control / qCompress : CRF-24.0 / 0.60
x265 [info]: tools: rd=3 psy-rd=2.00 rskip signhide tmvp strong-intra-smoothing
x265 [info]: tools: deblock sao
Output #0, mp4, to '../SoExample/out.mp4':
 Metadata:
 encoder : Lavf57.83.100
 Stream #0:0: Video: hevc (libx265) (hev1 / 0x31766568), yuv420p, 172x186, q=2-31, 23 fps, 11776 tbn, 23 tbc
 Metadata:
 encoder : Lavc57.107.100 libx265
[rawvideo @ 0x564ebd221aa0] Invalid buffer size, packet size 51600 < expected frame_size 95976 
Error while decoding stream #0:0: Invalid argument
frame= 100 fps= 30 q=-0.0 Lsize= 36kB time=00:00:04.21 bitrate= 69.1kbits/s speed=1.25x 
video:32kB audio:0kB subtitle:0kB other streams:0kB global headers:2kB muxing overhead: 12.141185%
x265 [info]: frame I: 1, Avg QP:22.44 kb/s: 179.77 
x265 [info]: frame P: 29, Avg QP:24.20 kb/s: 130.12 
x265 [info]: frame B: 70, Avg QP:29.99 kb/s: 27.82 
x265 [info]: Weighted P-Frames: Y:0.0% UV:0.0%
x265 [info]: consecutive B-frames: 20.0% 3.3% 16.7% 43.3% 16.7% 

encoded 100 frames in 3.35s (29.83 fps), 59.01 kb/s, Avg QP:28.23
Done!



As you can see, there is an error: "Invalid buffer size, packet size 51600 < expected frame_size 95976", followed by "Error while decoding stream #0:0: Invalid argument". Do you think this could be the cause of the problem shown? I am not sure, as in the end it reports that all 100 frames were encoded.
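For reference, the numbers in that error line follow from the declared rawvideo geometry: with -pixel_format bgr24, ffmpeg expects exactly width × height × 3 bytes per frame. A quick check (treating the short 51600-byte packet as a 172-pixel-wide frame is my interpretation, not something the log states):

```python
# With -pixel_format bgr24, ffmpeg's rawvideo demuxer expects exactly
# width * height * 3 bytes per frame.
def bgr24_frame_bytes(width: int, height: int) -> int:
    return width * height * 3

expected = bgr24_frame_bytes(172, 186)  # the size declared with -s 172x186
print(expected)  # 95976, matching "expected frame_size 95976" in the log

# Reading the 51600-byte packet as a 172-pixel-wide bgr24 frame gives its
# implied height:
implied_rows = 51600 // (172 * 3)
print(implied_rows)  # 100
```

So at least one frame written to the pipe did not have the 172x186 geometry declared on the command line, which would desynchronize the byte stream and could produce exactly this kind of sliding artifact.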

In case you want to reproduce this on the exact same video, you can find actions1.mpg in the UCF Aerial Action Dataset.

I would greatly appreciate any help, as I am really stuck on this error.


-
difference between using SDL and using the media player class, that is, VideoView
21 January 2014, by Whoami

I have been surfing the net for some time to get a basic understanding of the media framework in Android. As part of this: to display video we have the MediaPlayer class or the VideoView component, which can easily display video. When such a solution is provided by the framework itself, why are there components available like SDL (Simple DirectMedia Layer), which claims the same functionality as VideoView?
How are the two different?
Kindly bear with me if the question is very basic.