
Media (1)
-
Bee video in portrait
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (98)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP described as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
APPENDIX: Plugins used specifically for the farm
5 March 2010, by
The central/master site of the farm needs several plugins in addition to those of the channels in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (6549)
-
Revision 104067: add a "champ" class to avoid certain block overlaps ...
11 June 2018, by chankalan@… — Log: add a "champ" class to avoid certain block overlaps in some cases: https://www.mail-archive.com/spip-zone@rezo.net/msg43339.html
-
Anomalie #4363 (New): an empty password blocks the password reset form (...)
1 August 2019, by cy_altern - Typical scenario:
- an administrator creates accounts for future users, filling in name, email and login but WITHOUT a password
- he emails these users the URL of the "I lost my password" page (.../spip.php?page=spip_pass) so that they can generate their own password
- the user enters their email in the form of that page (squelettes-dist/formulaires/oubli.html) and gets back "Erreur : vous n’avez plus accès à ce site." ("Error: you no longer have access to this site.")
For now, the only workaround is to set an arbitrary password when creating the user's account.
Is there a valid reason for this restriction on empty passwords at reset time?
If not, the patch looks quick: on line 85 of squelettes-dist/formulaires/oubli.php, replace
} elseif ($r[1]['statut'] == '5poubelle' or $r[1]['pass'] == '') {
with
} elseif ($r[1]['statut'] == '5poubelle') {
-
FFMPEG Issue: the video breaks up a lot, so the real-time feed gets distorted and slightly delayed while streaming on a webpage from the drone
26 March 2022, by ashiyaa nunhuck
I am trying to detect a face from my drone camera in real time. The video streams successfully, but it is delayed and breaks up a lot. Is there any solution to this problem? How can I get smooth video streaming with little delay and no breaking, so that face detection succeeds? Your help will be much appreciated.
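A minimal sketch of one direction to explore, assuming the delay comes mostly from input buffering in the ffmpeg decode step; the exact flag choice below is an assumption, not something taken from this setup:

# Hypothetical low-latency variant of the decode command used further down.
# -analyzeduration 0, -fflags nobuffer and -flags low_delay reduce how much
# input ffmpeg buffers before it starts emitting raw frames.
FRAME_X, FRAME_Y = 320, 240
CMD_FFMPEG_LOW_LATENCY = (
    f'ffmpeg -probesize 32 -analyzeduration 0 -fflags nobuffer -flags low_delay '
    f'-i pipe:0 -pix_fmt bgr24 -s {FRAME_X}x{FRAME_Y} -f rawvideo pipe:1'
)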
Also, this is printed while my code is running:




INFO:werkzeug:127.0.0.1 - - [27/May/2021 15:16:14] "GET /video/streaming HTTP/1.1" 200 -
INFO:drone_face_recognition_and_tracking.controllers.server: 'action': 'command', 'cmd': 'takeOff'
INFO:drone_face_recognition_and_tracking.models.manage_drone: 'action': 'send_command', 'command': 'takeoff'
[h264 @ 0x55aa924a2e40] error while decoding MB 45 38, bytestream -6
[h264 @ 0x55aa924a2e40] concealing 424 DC, 424 AC, 424 MV errors in I frame
[h264 @ 0x55aa922a9a00] concealing 687 DC, 687 AC, 687 MV errors in P frame
[h264 @ 0x55aa923f79c0] left block unavailable for requested intra mode
[h264 @ 0x55aa923f79c0] error while decoding MB 0 34, bytestream 1347
[h264 @ 0x55aa923f79c0] concealing 709 DC, 709 AC, 709 MV errors in P frame
INFO:werkzeug:127.0.0.1 - - [27/May/2021 15:16:17] "POST /api/command/ HTTP/1.1" 200 -
pipe:0: corrupt decoded frame in stream 0
    Last message repeated 2 times
[h264 @ 0x55aa922a9a00] error while decoding MB 49 30, bytestream -6
[h264 @ 0x55aa922a9a00] concealing 900 DC, 900 AC, 900 MV errors in P frame
pipe:0: corrupt decoded frame in stream 0
INFO:drone_face_recognition_and_tracking.models.manage_drone: 'action': 'receive_response', 'response': b'ok'
INFO:drone_face_recognition_and_tracking.controllers.server: 'action': 'command', 'cmd': 'faceDetectAndTrack'
INFO:werkzeug:127.0.0.1 - - [27/May/2021 15:16:21] "POST /api/command/ HTTP/1.1" 200 -
[h264 @ 0x55aa924a2e40] left block unavailable for requested intra4x4 mode -1
[h264 @ 0x55aa924a2e40] error while decoding MB 0 30, bytestream 1624
[h264 @ 0x55aa924a2e40] concealing 949 DC, 949 AC, 949 MV errors in I frame
pipe:0: corrupt decoded frame in stream 0
[h264 @ 0x55aa9244d400] left block unavailable for requested intra mode
[h264 @ 0x55aa9244d400] error while decoding MB 0 12, bytestream 2936
[h264 @ 0x55aa9244d400] concealing 2029 DC, 2029 AC, 2029 MV errors in I frame
pipe:0: corrupt decoded frame in stream 0
[h264 @ 0x55aa924bf700] concealing 1632 DC, 1632 AC, 1632 MV errors in P frame
pipe:0: corrupt decoded frame in stream 0
[h264 @ 0x55aa92414280] concealing 1571 DC, 1571 AC, 1571 MV errors in P frame




Here is my code:


import logging
import contextlib
import os
import socket
import subprocess
import threading
import time
import cv2 as cv
import numpy as np

from drone_face_recognition_and_tracking.models.base import Singleton

logger = logging.getLogger(__name__)

DEFAULT_DISTANCE = 0.30
DEFAULT_SPEED = 10
DEFAULT_DEGREE = 10

FRAME_X = int(320) # try 640
FRAME_Y = int(240) # try 480
FRAME_AREA = FRAME_X * FRAME_Y

FRAME_SIZE = FRAME_AREA * 3
FRAME_CENTER_X = FRAME_X / 2
FRAME_CENTER_Y = FRAME_Y / 2

CMD_FFMPEG = (f'ffmpeg -probesize 32 -hwaccel auto -hwaccel_device opencl -i pipe:0 '
              f'-pix_fmt bgr24 -s {FRAME_X}x{FRAME_Y} -f rawvideo pipe:1')

FACE_DETECT_XML_FILE = './drone_face_recognition_and_tracking/models/haarcascade_frontalface_default.xml'


def receive_video(stop_event, pipe_in, host_ip, video_port):
    # Receive the raw H.264 stream from the drone over UDP and feed it to ffmpeg's stdin.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock_video:
        sock_video.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock_video.settimeout(.5)
        sock_video.bind((host_ip, video_port))
        data = bytearray(2048)
        while not stop_event.is_set():
            try:
                size, addr = sock_video.recvfrom_into(data)
                # logger.info({'action': 'receive_video', 'data': data})
            except socket.timeout as ex:
                logger.warning({'action': 'receive_video', 'ex': ex})
                time.sleep(0.5)
                continue
            except socket.error as ex:
                logger.error({'action': 'receive_video', 'ex': ex})
                break

            try:
                pipe_in.write(data[:size])
                pipe_in.flush()
            except Exception as ex:
                logger.error({'action': 'receive_video', 'ex': ex})
                break


class Tello_Drone(metaclass=Singleton):
    def __init__(self, host_ip='192.168.10.2', host_port=8889,
                 drone_ip='192.168.10.1', drone_port=8889,
                 is_imperial=False, speed=DEFAULT_SPEED):
        self.host_ip = host_ip
        self.host_port = host_port
        self.drone_ip = drone_ip
        self.drone_port = drone_port
        self.drone_address = (drone_ip, drone_port)
        self.is_imperial = is_imperial
        self.speed = speed
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.socket.bind((self.host_ip, self.host_port))

        self.response = None
        self.stop_event = threading.Event()
        self._response_thread = threading.Thread(target=self.receive_response, args=(self.stop_event, ))
        self._response_thread.start()

        self.proc = subprocess.Popen(CMD_FFMPEG.split(' '),
                                     stdin=subprocess.PIPE,
                                     stdout=subprocess.PIPE)
        self.proc_stdin = self.proc.stdin
        self.proc_stdout = self.proc.stdout

        self.video_port = 11111

        self._receive_video_thread = threading.Thread(
            target=receive_video,
            args=(self.stop_event, self.proc_stdin,
                  self.host_ip, self.video_port,))
        self._receive_video_thread.start()

        self.face_cascade = cv.CascadeClassifier(FACE_DETECT_XML_FILE)
        self._is_enable_face_detect = False

        self.send_command('command')
        self.send_command('streamon')
        self.set_speed(self.speed)

    def receive_response(self, stop_event):
        while not stop_event.is_set():
            try:
                self.response, ip = self.socket.recvfrom(3000)
                logger.info({'action': 'receive_response',
                             'response': self.response})
            except socket.error as ex:
                logger.error({'action': 'receive_response',
                              'ex': ex})
                break

    def __del__(self):
        self.stop()

    def stop(self):
        self.stop_event.set()
        retry = 0
        while self._response_thread.is_alive():
            time.sleep(0.3)
            if retry > 30:
                break
            retry += 1
        self.socket.close()
        os.kill(self.proc.pid, 9)

    def send_command(self, command):
        logger.info({'action': 'send_command', 'command': command})
        self.socket.sendto(command.encode('utf-8'), self.drone_address)

        # Wait briefly for the response thread to set self.response.
        retry = 0
        while self.response is None:
            time.sleep(0.3)
            if retry > 3:
                break
            retry += 1

        if self.response is None:
            response = None
        else:
            response = self.response.decode('utf-8')
        self.response = None
        return response

    def takeoff(self):
        return self.send_command('takeoff')

    def land(self):
        return self.send_command('land')

    def move(self, direction, distance):
        distance = float(distance)
        if self.is_imperial:
            distance = int(round(distance * 30.48))
        else:
            distance = int(round(distance * 100))
        return self.send_command(f'{direction} {distance}')

    def up(self, distance=DEFAULT_DISTANCE):
        return self.move('up', distance)

    def down(self, distance=DEFAULT_DISTANCE):
        return self.move('down', distance)

    def left(self, distance=DEFAULT_DISTANCE):
        return self.move('left', distance)

    def right(self, distance=DEFAULT_DISTANCE):
        return self.move('right', distance)

    def forward(self, distance=DEFAULT_DISTANCE):
        return self.move('forward', distance)

    def back(self, distance=DEFAULT_DISTANCE):
        return self.move('back', distance)

    def set_speed(self, speed):
        return self.send_command(f'speed {speed}')

    def clockwise(self, degree=DEFAULT_DEGREE):
        return self.send_command(f'cw {degree}')

    def counter_clockwise(self, degree=DEFAULT_DEGREE):
        return self.send_command(f'ccw {degree}')

    def video_binary_generator(self):
        # Read fixed-size raw BGR frames from ffmpeg's stdout.
        while True:
            try:
                frame = self.proc_stdout.read(FRAME_SIZE)
            except Exception as ex:
                logger.error({'action': 'video_binary_generator', 'ex': ex})
                continue

            if not frame:
                continue

            # np.frombuffer replaces the deprecated np.fromstring; copy() keeps
            # the frame writable so rectangles can be drawn on it later.
            frame = np.frombuffer(frame, np.uint8).reshape(FRAME_Y, FRAME_X, 3).copy()
            yield frame

    def enable_face_detect(self):
        self._is_enable_face_detect = True

    def disable_face_detect(self):
        self._is_enable_face_detect = False

    def video_jpeg_generator(self):
        for frame in self.video_binary_generator():
            if self._is_enable_face_detect:
                gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
                faces = self.face_cascade.detectMultiScale(gray, 1.2, 4)
                for (x, y, w, h) in faces:
                    cv.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
                    break

            _, jpeg = cv.imencode('.jpg', frame)
            jpeg_binary = jpeg.tobytes()
            yield jpeg_binary
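
The webpage side of the stream is not included above; the werkzeug lines in the log suggest a Flask (or other werkzeug) app serving a /video/streaming route. A minimal sketch of how video_jpeg_generator could be exposed as an MJPEG stream, assuming Flask and assuming the class above lives in drone_face_recognition_and_tracking.models.manage_drone as the logger names indicate:

from flask import Flask, Response

# Assumed import path, taken from the logger names in the output above.
from drone_face_recognition_and_tracking.models.manage_drone import Tello_Drone

app = Flask(__name__)
drone = Tello_Drone()


def mjpeg_stream():
    # Wrap each JPEG frame in a multipart part so the browser keeps replacing
    # the displayed image, which is how the live view ends up on the webpage.
    for jpeg_binary in drone.video_jpeg_generator():
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + jpeg_binary + b'\r\n')


@app.route('/video/streaming')
def video_streaming():
    return Response(mjpeg_stream(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

The multipart/x-mixed-replace response is what lets the browser show a continuously updating image without a video element.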