
Media (1)
-
Bee video in portrait orientation
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (104)
-
The farm's regular Cron tasks
1 December 2010, by
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of all the farm's instances on a regular basis. Combined with a system Cron on the central site of the farm, this makes it possible to generate regular visits to the various sites and to prevent the tasks of rarely visited sites from being too (...)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)
-
Customizing a site by adding a logo, a banner or a background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
On other sites (13518)
-
H.264 video file size from camera is much bigger than x264 output
10 août 2020, par Lawrence songI have a UVC camera which supports h264 protocol. we can see the h264 listed below when we list all formats supported.


msm8909:/data # ./ffmpeg -f v4l2 -list_formats all -i /dev/video1
ffmpeg version N-53546-g5eb4405fc5-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2020 the FFmpeg developers
 built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
 configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libfribidi --enable-libass --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libxml2 --enable-libxvid --enable-libzimg
 libavutil 56. 56.100 / 56. 56.100
 libavcodec 58. 97.100 / 58. 97.100
 libavformat 58. 49.100 / 58. 49.100
 libavdevice 58. 11.101 / 58. 11.101
 libavfilter 7. 87.100 / 7. 87.100
 libswscale 5. 8.100 / 5. 8.100
 libswresample 3. 8.100 / 3. 8.100
 libpostproc 55. 8.100 / 55. 8.100
[video4linux2,v4l2 @ 0x4649140] Compressed: h264 : H.264 : 1920x1080 1280x720 640x480 320x240
[video4linux2,v4l2 @ 0x4649140] Compressed: mjpeg : MJPEG : 1920x1080 1280x720 640x480 320x240



I am running the following ffmpeg command to record the UVC camera video to a local file.


ffmpeg -f v4l2 -input_format h264 -framerate 30 -video_size 1280*720 -i /dev/video1 -c copy /sdcard/Movies/output.mkv



The resulting file is much bigger than the one produced by the command below:


ffmpeg -f v4l2 -input_format mjpeg -framerate 30 -video_size 1280*720 -i /dev/video1 -c:v libx264 -vf format=yuv420p /sdcard/Movies/output.mp4



Since the camera already outputs H.264, I assumed I would not need to re-encode and could simply copy the stream. However, the size of the copied recording does not look like what I would expect from an H.264-encoded video.
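
A quick way to see where the size difference comes from is to compare the overall bitrate each recording actually carries, for example with ffprobe from the same static build (paths as in the commands above):


ffprobe -v error -show_entries format=duration,size,bit_rate -of default=noprint_wrappers=1 /sdcard/Movies/output.mkv
ffprobe -v error -show_entries format=duration,size,bit_rate -of default=noprint_wrappers=1 /sdcard/Movies/output.mp4


Many UVC cameras encode their onboard H.264 at a fixed, fairly high bitrate, while the second command re-encodes with libx264 at its defaults (CRF 23), which usually settles at a much lower bitrate for the same content. Since file size is essentially bitrate multiplied by duration, a large gap between the two recordings is plausible even though both are H.264.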


-
How can ffmpeg concat MP3s with full metadata incl. cover art?
13 December 2022, by TEN
Audio books inconveniently split into dozens of MP3s (with spaces in their names) should be merged into one MP3 in a subdirectory (in which ffmpeg version 4.2.7-0ubuntu0.1 is invoked), without time-consuming and possibly degrading conversions, while reliably preserving all metadata incl. cover art (present and similar in all MP3s of a title; their differences are significant only in lengths and track numbers).


However, rather than picking it up from the first input MP3, the concat protocol (https://trac.ffmpeg.org/wiki/Concatenate#protocol) loses the cover art, and the concat demuxer (https://trac.ffmpeg.org/wiki/Concatenate#demuxer), documented as more flexible, even loses all metadata:


ffmpeg -v verbose -f concat -safe 0 -i <(printf "file '$PWD/%s'\n" ../in\ track*.mp3) -c copy "out.mp3"
...
Input #0, concat, from '/dev/fd/63':
Duration: N/A, start: 0.000000, bitrate: 192 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 192 kb/s
Stream #0:1: Video: png, 1 reference frame, rgba(pc), 300x300, 90k tbr, 90k tbn, 90k tbc
Metadata:
title : 12ae3b8152eaf255ae0315c59400c540.png
comment : Cover (front)
...
Output #0, mp3, to 'out.mp3':
Metadata:
TSSE : Lavf58.29.100
Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 192 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (copy)
...
[AVIOContext @ 0x561459f3dac0] Statistics: 1958050 bytes read, 0 seeks
[mp3 @ 0x561459f3f900] Skipping 0 bytes of junk at 110334.
[mp3 @ 0x561459f3f900] Estimating duration from bitrate, this may be inaccurate
No more output streams to write to, finishing.
size= 75793kB time=00:53:03.12 bitrate= 195.1kbits/s speed= 636x
video:0kB audio:75793kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000865%
Input file #0 (/dev/fd/63):
Input stream #0:0 (audio): 121847 packets read (77611658 bytes);
Input stream #0:1 (video): 40 packets read (4358440 bytes);
Total: 121887 packets (81970098 bytes) demuxed
Output file #0 (out.mp3):
Output stream #0:0 (audio): 121847 packets muxed (77611658 bytes);
Total: 121847 packets (77611658 bytes) muxed
[AVIOContext @ 0x561459ef6700] Statistics: 2 seeks, 298 writeouts
[AVIOContext @ 0x561459f39e40] Statistics: 2006324 bytes read, 0 seeks
[AVIOContext @ 0x561459ee0300] Statistics: 5040 bytes read, 0 seek



The metadata, including the cover PNG detected as a single-frame "video" stream, should end up in the output MP3, but it doesn't (even when adding -movflags use_metadata_tags, which is possibly intended for other formats).


-metadata track="1/1" (or without the /1?) may be required, as the first input MP3 sometimes wrongly starts at a higher number.


How do I make sure that no metadata (incl. the image), other than track numbers, is lost when concatenating MP3s (by protocol or demuxer, from a set of input files with spaces in their names and a wildcard to match across track numbers)?
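
One workaround to try (untested here; FIRST.mp3 below is a placeholder for whichever input file should supply the tags and the cover) is to concatenate only the audio stream with the demuxer, and then copy the global metadata and the attached picture from that file in a second stream-copy pass:


ffmpeg -f concat -safe 0 -i <(printf "file '$PWD/%s'\n" ../in\ track*.mp3) -map 0:a -c copy joined.mp3
ffmpeg -i joined.mp3 -i "FIRST.mp3" -map 0:a -map 1:v -c copy -map_metadata 1 -id3v2_version 3 -metadata track="1/1" out.mp3   # FIRST.mp3 is a placeholder


Here -map_metadata 1 takes the ID3 tags from the second input, -map 1:v carries its cover image over as an attached picture, and -id3v2_version 3 keeps the tags readable by most players; whether this preserves every tag with ffmpeg 4.2.7 would still need to be verified.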


-
OpenCV & RTSP - Python errors
10 February, by Midhun M
I’m working on a Python script that reads multiple RTSP streams using OpenCV, detects motion, and saves frames when motion is detected. Initially, I had issues with ash-colored frames caused by the H.265 codec, which OpenCV doesn’t support by default. After switching the camera codecs to H.264, the ash-colored frames issue was resolved. However, I’m now encountering decoding errors and glitching frames.


System Specifications:


Processor: Intel Core i3-6100 CPU @ 3.70GHz
RAM: 8 GB

Resource Usage:
CPU: 35-45%
RAM: 1 GB max
Network Speed: 7.4 Mb/s
Disk Usage: 2.3 MB/s


Here’s the Python script I’m using:


import cv2
import os
import datetime
import threading
import argparse
from cryptography.fernet import Fernet

def encrypt_text(text):
    return text

class MotionDetector:
    def __init__(self, base_dir="motion_frames"):
        self.base_dir = base_dir
        self.output_dirs = [os.path.join(self.base_dir, str(i)) for i in range(1, 4)]
        for dir_path in self.output_dirs:
            os.makedirs(dir_path, exist_ok=True)
        self.fgbg_dict = {}

    def initialize_fgbg(self, camera_name):
        if camera_name not in self.fgbg_dict:
            self.fgbg_dict[camera_name] = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=35, detectShadows=True)

    def detect_motion(self, frame, camera_name):
        self.initialize_fgbg(camera_name)
        fgmask = self.fgbg_dict[camera_name].apply(frame)
        thresh = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.dilate(thresh, None, iterations=2)
        contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        motion_detected = any(cv2.contourArea(contour) > 500 for contour in contours)
        return motion_detected

    def save_frame(self, frame, camera_name, count):
        folder_index = (count - 1) % 3  # This will rotate between 0, 1, and 2
        output_dir = self.output_dirs[folder_index]
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        filename = f"{camera_name}_Entireframe_object_{count}_{timestamp}.jpg"
        name = encrypt_text(filename)
        pathname = os.path.join(output_dir, f'{name}.jpg')
        cv2.imwrite(pathname, frame)
        # print(f"Motion detected: {pathname}")

def process_camera_stream(rtsp_url, camera_name, detector, stop_event):
    cap = cv2.VideoCapture(rtsp_url, cv2.CAP_FFMPEG)
    print("Started Camera : ", camera_name)
    count = 0

    while not stop_event.is_set():
        ret, frame = cap.read()
        if not ret:
            print(f"Connection lost: {camera_name}. Reconnecting...")
            cap.release()
            cap = cv2.VideoCapture(rtsp_url)
            continue

        if detector.detect_motion(frame, camera_name):
            count += 1
            detector.save_frame(frame, camera_name, count)

    cap.release()

def main():
    parser = argparse.ArgumentParser(description='RTSP Motion Detection')
    parser.add_argument('--output', type=str, default="motion_frames", help='Output directory')
    args = parser.parse_args()

    rtsp_urls = {
        "Camera1": "rtsp://admin:J**@884@192.168.1.103:554/cam/realmonitor?channel=1&subtype=1&protocol=TCP",
        "Camera2": "rtsp://admin:J***@884@192.168.1.105:554/cam/realmonitor?channel=1&subtype=1&protocol=TCP",
        "Camera3": "rtsp://admin:J***@884@192.168.1.104:554/cam/realmonitor?channel=1&subtype=1&protocol=TCP",
        "Camera4": "rtsp://admin:J@884@192.168.1.101:554/cam/realmonitor?channel=1&subtype=1&protocol=TCP",

        "Camera5": "rtsp://admin:admin123@192.168.1.33:554/Streaming/Channels/301&protocol=TCP",
        "Camera6": "rtsp://admin:admin123@192.168.1.33:554/Streaming/Channels/401&protocol=TCP",
        "Camera7": "rtsp://admin:admin123@192.168.1.33:554/Streaming/Channels/701&protocol=TCP",
    }

    detector = MotionDetector(base_dir=args.output)
    stop_event = threading.Event()
    threads = []

    try:
        for camera_name, url in rtsp_urls.items():
            thread = threading.Thread(target=process_camera_stream, args=(url, camera_name, detector, stop_event))
            thread.start()
            threads.append(thread)

        while True:
            pass
    except KeyboardInterrupt:
        print("Stopping...")
        stop_event.set()
        for thread in threads:
            thread.join()


if __name__ == "__main__":
    main()




Some of the saved images have glitches; I am attaching some example images. When running the script, I’m getting the following decoding errors in the terminal:


[h264 @ 00000185da541a80] error while decoding MB 26 1, bytestream -29
[h264 @ 00000185d8fefb00] error while decoding MB 23 31, bytestream -5
[h264 @ 00000185cedcc140] error while decoding MB 36 35, bytestream -7
[h264 @ 00000185d8ae73c0] cabac decode of qscale diff failed at 40 35
[h264 @ 00000185d8ae73c0] error while decoding MB 40 35, bytestream -5
[h264 @ 00000185da541a80] error while decoding MB 32 30, bytestream -11
[h264 @ 00000185e15f8500] error while decoding MB 16 34, bytestream -11
[h264 @ 00000185e15f9700] error while decoding MB 9 33, bytestream -9
[h264 @ 00000185e15fb680] error while decoding MB 6 32, bytestream -5
[h264 @ 00000185e15f8500] error while decoding MB 23 23, bytestream -7
[h264 @ 00000185e15fb680] error while decoding MB 28 23, bytestream -5
[h264 @ 00000185e15fa000] error while decoding MB 27 19, bytestream -37
[h264 @ 00000185e15fa900] error while decoding MB 6 27, bytestream -7
[h264 @ 00000185e15f8500] error while decoding MB 14 12, bytestream -5
[h264 @ 00000185e15f9280] error while decoding MB 22 35, bytestream -7
[h264 @ 00000185d8fefb00] error while decoding MB 31 32, bytestream -7
[h264 @ 00000185e15fb680] error while decoding MB 5 24, bytestream -5
[h264 @ 00000185d8ae7b00] error while decoding MB 29 26, bytestream -7



I have both IP cameras and analog cameras; the errors from the IP cameras are far more frequent than from the analog cameras.


Questions:


- What could be causing these decoding errors and glitching frames?
- Are there any specific settings or configurations I need to adjust in OpenCV or FFmpeg to handle H.264 streams more reliably?
- Could this be related to network latency, hardware limitations, or OpenCV’s handling of RTSP streams?
- Are there any alternative approaches or libraries I can use to improve the stability of RTSP stream processing?


The glitches mostly appear on people or moving objects, and I have no idea whether the decoding errors and the glitches are related.


What I’ve Tried:


- Switched the camera codecs from H.265 to H.264 (this resolved the ash-colored frames issue).
- Tested the script with a single camera using a different object detection script, but the same errors occurred.






I want the frames to be saved without these glitches.
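
The "error while decoding MB ... bytestream" messages usually mean that part of the H.264 bitstream never reached the decoder, which is consistent with RTP packets being lost when RTSP runs over UDP; the smearing on moving objects is then just the decoder concealing the damaged regions, so the errors and the glitches are very likely related. A low-effort experiment, assuming an OpenCV build that uses the FFmpeg backend, is to force TCP transport through an environment variable before launching the script (the script file name below is a placeholder):


export OPENCV_FFMPEG_CAPTURE_OPTIONS="rtsp_transport;tcp"
python3 your_script.py --output motion_frames   # your_script.py stands for the script above


If the errors persist over TCP, the next suspect is frames being read more slowly than the cameras deliver them (decoding several H.264 streams plus MOG2 background subtraction on a dual-core i3 can fall behind, after which receive buffers overflow and data is dropped); lowering the sub-stream resolution or frame rate on the IP cameras is a cheap way to test that hypothesis.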