
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (105)
-
APPENDIX: The plugins used specifically for the farm
5 March 2010, by
The central/master site of the farm needs several additional plugins, beyond those of the channels, for it to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
-
Customising categories
21 June 2013, by
Category creation form
For those who know SPIP well, a category can be thought of as a section (rubrique).
For a document of type category, the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
It is also in this configuration section that you can specify the (...)
-
Frequent problems
10 March 2010, by
PHP with safe_mode enabled
One of the main sources of problems is the PHP configuration, in particular having safe_mode enabled.
The solution would be either to disable safe_mode, or to place the script in a directory that Apache can access for the site.
On other sites (9081)
-
Produce waveform video from audio using FFMPEG
27 April 2017, by RhythmicDevil
I am trying to create a waveform video from audio. My goal is to produce a video that looks something like this:
For my test I have an mp3 that plays a short, clipped sound: 4 bars of 1/4 notes and 4 bars of 1/8 notes played at 120 bpm. I am having some trouble coming up with the right combination of preprocessing and filtering to produce a video that looks like the image. The colors don't have to be exact; I am more concerned with the shape of the beats. I tried a couple of different approaches using showwaves and showspectrum. I can't quite wrap my head around why, when using showwaves, the beats go past so quickly, while showspectrum produces a video where I can see each individual beat.
ShowWaves
ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showwaves=s=1280x100:mode=cline:rate=25:scale=sqrt,format=yuv420p[v]" -map "[v]" -map 0:a output_wav.mp4
This link will download the output of that command.
ShowSpectrum
ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showspectrum=s=1280x100:mode=combined:color=intensity:saturation=5:slide=1:scale=cbrt,format=yuv420p[v]" -map "[v]" -an -map 0:a output_spec.mp4
This link will download the output of that command.
I posted the simple examples because I didn’t want to confuse the issue by adding all the variations I have tried.
In practice I suppose I can get away with the output from showspectrum but I’d like to understand where/how I am thinking about this incorrectly. Thanks for any advice.
Here is a link to the source audio file.
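Why showwaves seems so fast can be sanity-checked with quick arithmetic. This is a sketch under assumptions: it assumes a 44.1 kHz source (typical for mp3), and my reading of the filter documentation that showwaves emits a video frame each time it has consumed sample_rate/rate audio samples, so each frame spans only 1/rate seconds and keeps no history on screen, whereas showspectrum with slide=1 scrolls old columns across the frame.

```python
# Back-of-the-envelope check on what one showwaves frame covers,
# assuming a 44.1 kHz source and the rate=25 used in the command above.
sample_rate = 44100               # Hz (assumed, not stated in the post)
video_rate = 25                   # the showwaves rate= parameter

samples_per_frame = sample_rate // video_rate
seconds_per_frame = 1 / video_rate
print(samples_per_frame)          # 1764 samples per video frame
print(seconds_per_frame)          # 0.04 s of audio per video frame

# At 120 bpm a quarter note lasts 0.5 s, so one beat is smeared across
# roughly 0.5 / 0.04 = 12.5 consecutive frames rather than holding still,
# which matches the "beats go past so quickly" observation.
frames_per_quarter_note = (60 / 120) / seconds_per_frame
```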
-
DNN OpenCV Python using RTSP always crashes after a few minutes
1 July 2022, by renaldyks
Description:


I want to create a people counter using a DNN. The model I'm using is MobileNetSSD. The camera I use is an IPCam from Hikvision. Python communicates with the IPCam over the RTSP protocol.


The program I made works well and has no bugs: when running it on the sample video it does its job. But when I switched it to the IPCam, an unknown error appeared.


Error:


Sometimes the error is:


[h264 @ 000001949f7adfc0] error while decoding MB 13 4, bytestream -6
[h264 @ 000001949f825ac0] left block unavailable for requested intra4x4 mode -1
[h264 @ 000001949f825ac0] error while decoding MB 0 17, bytestream 762



Sometimes the error does not appear and the program is killed.



Updated error


After revising the code, I caught the error. The error found is


[h264 @ 0000019289b3fa80] error while decoding MB 4 5, bytestream -25



Now I don't know what to do, because I can't find this error anywhere on Google.
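For what it's worth, "error while decoding MB ..., bytestream ..." messages from the h264 decoder usually indicate corrupted or dropped packets, and RTSP is often carried over UDP, which drops packets on a flaky link. One commonly suggested mitigation (an assumption here, not a confirmed fix for this particular camera) is to force TCP transport in OpenCV's FFmpeg capture backend:

```python
import os

# Ask OpenCV's FFmpeg backend to use TCP for RTSP instead of UDP.
# This environment variable must be set BEFORE `import cv2`.
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;tcp"

# import cv2  # imported only after the variable is set
# cap = cv2.VideoCapture("rtsp://...", cv2.CAP_FFMPEG)
```

TCP retransmits lost packets, so the decoder sees a clean bytestream at the cost of some extra latency.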


Source Code:


Old Code


This is my very earliest code, before I got suggestions in the comments.


import time
import cv2
import numpy as np
import math
import threading

print("Load MobileNeteSSD model")

prototxt = "MobileNetSSD_deploy.prototxt"
model = "MobileNetSSD_deploy.caffemodel"

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
           "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
           "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
           "sofa", "train", "tvmonitor"]

net = cv2.dnn.readNetFromCaffe(prototxt, model)

pos_line = 0
offset = 50
car = 0
detected = False
check = 0
prev_frame_time = 0


def detect():
    global check, car, detected
    check = 0
    if detected == False:
        car += 1
        detected = True


def center_object(x, y, w, h):
    cx = x + int(w / 2)
    cy = y + int(h / 2)
    return cx, cy


def process_frame_MobileNetSSD(next_frame):
    global car, check, detected

    rgb = cv2.cvtColor(next_frame, cv2.COLOR_BGR2RGB)
    (H, W) = next_frame.shape[:2]

    blob = cv2.dnn.blobFromImage(next_frame, size=(300, 300), ddepth=cv2.CV_8U)
    net.setInput(blob, scalefactor=1.0/127.5, mean=[127.5, 127.5, 127.5])
    detections = net.forward()

    for i in np.arange(0, detections.shape[2]):
        confidence = detections[0, 0, i, 2]

        if confidence > 0.5:
            idx = int(detections[0, 0, i, 1])
            if CLASSES[idx] != "person":
                continue

            label = CLASSES[idx]

            box = detections[0, 0, i, 3:7] * np.array([W, H, W, H])
            (startX, startY, endX, endY) = box.astype("int")

            center_ob = center_object(startX, startY, endX-startX, endY-startY)
            cv2.circle(next_frame, center_ob, 4, (0, 0, 255), -1)

            if center_ob[0] < (pos_line+offset) and center_ob[0] > (pos_line-offset):
                # car += 1
                detect()
            else:
                check += 1
                if check >= 5:
                    detected = False

            cv2.putText(next_frame, label+' '+str(round(confidence, 2)),
                        (startX, startY-10), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
            cv2.rectangle(next_frame, (startX, startY),
                          (endX, endY), (0, 255, 0), 3)

    return next_frame


def PersonDetection_UsingMobileNetSSD():
    cap = cv2.VideoCapture()
    cap.open("rtsp://admin:Admin12345@192.168.100.20:554/Streaming/channels/2/")

    global car, pos_line, prev_frame_time

    frame_count = 0

    while True:
        try:
            time.sleep(0.1)
            new_frame_time = time.time()
            fps = int(1/(new_frame_time-prev_frame_time))
            prev_frame_time = new_frame_time

            ret, next_frame = cap.read()
            w_video = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
            h_video = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
            pos_line = int(h_video/2)-50

            if ret == False:
                break

            frame_count += 1
            cv2.line(next_frame, (int(h_video/2), 0),
                     (int(h_video/2), int(h_video)), (255, 127, 0), 3)
            next_frame = process_frame_MobileNetSSD(next_frame)

            cv2.rectangle(next_frame, (248, 22), (342, 8), (0, 0, 0), -1)
            cv2.putText(next_frame, "Counter : "+str(car), (250, 20),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
            cv2.putText(next_frame, "FPS : "+str(fps), (0, int(h_video)-10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
            cv2.imshow("Video Original", next_frame)
            # print(car)

        except Exception as e:
            print(str(e))

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    print("/MobileNetSSD Person Detector")

    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    t1 = threading.Thread(PersonDetection_UsingMobileNetSSD())
    t1.start()
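One side note on the last two lines above, separate from the decoding errors: threading.Thread(PersonDetection_UsingMobileNetSSD()) calls the function immediately, in the main thread, and passes its return value (None) as Thread's first positional argument (group), so no worker thread ever runs the detector. The callable itself should be passed via target=. A minimal illustration of the difference:

```python
import threading

ran_in = []

def work():
    # Record whether this call executed on the main thread.
    ran_in.append(threading.current_thread() is threading.main_thread())

# Buggy pattern: work() runs right here, on the main thread; its return
# value None happens to be a valid `group` argument, so no error is raised.
threading.Thread(work())

# Correct pattern: hand Thread the callable itself.
t = threading.Thread(target=work)
t.start()
t.join()

print(ran_in)  # [True, False]: only the second call ran on a worker thread
```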



New Code


I have revised my code and the program still stops taking frames. I only revised the PersonDetection_UsingMobileNetSSD() function, and I have also removed the multithreading I was using. The code ran for about 30 minutes, but after one broken frame it never re-executes the if ret == True block.

def PersonDetection_UsingMobileNetSSD():
    cap = cv2.VideoCapture()
    cap.open("rtsp://admin:Admin12345@192.168.100.20:554/Streaming/channels/2/")

    global car, pos_line, prev_frame_time

    frame_count = 0

    while True:
        try:
            if cap.isOpened():
                ret, next_frame = cap.read()
                if ret:
                    new_frame_time = time.time()
                    fps = int(1/(new_frame_time-prev_frame_time))
                    prev_frame_time = new_frame_time
                    w_video = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
                    h_video = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
                    pos_line = int(h_video/2)-50

                    # next_frame = cv2.resize(next_frame, (720, 480), fx=0, fy=0, interpolation=cv2.INTER_CUBIC)

                    if ret == False:
                        break

                    frame_count += 1
                    cv2.line(next_frame, (int(h_video/2), 0),
                             (int(h_video/2), int(h_video)), (255, 127, 0), 3)
                    next_frame = process_frame_MobileNetSSD(next_frame)

                    cv2.rectangle(next_frame, (248, 22), (342, 8), (0, 0, 0), -1)
                    cv2.putText(next_frame, "Counter : "+str(car), (250, 20),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
                    cv2.putText(next_frame, "FPS : "+str(fps), (0, int(h_video)-10),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
                    cv2.imshow("Video Original", next_frame)
                    # print(car)
                else:
                    print("Crashed Frame")
            else:
                print("Cap is not open")

        except Exception as e:
            print(str(e))

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    print("/MobileNetSSD Person Detector")

    cap.release()
    cv2.destroyAllWindows()
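A pattern that often helps when a stream never recovers after a decode error is to release and reopen the capture after a run of consecutive failed reads, instead of only printing "Crashed Frame". This is a sketch under assumptions: robust_read_loop, its parameter names, and its thresholds are illustrative choices of mine, not OpenCV API. Keeping the reconnect logic separate from cv2 also makes it easy to test:

```python
import time

def robust_read_loop(open_capture, handle_frame,
                     max_consecutive_failures=25,
                     reconnect_delay=2.0,
                     max_reconnects=5):
    """Process frames from a capture, reopening it when reads keep failing.

    open_capture: zero-arg callable returning an object with read() and
    release() (e.g. lambda: cv2.VideoCapture(RTSP_URL)).
    handle_frame: called with each frame; return False to stop.
    Returns the number of frames handled.
    """
    processed = 0
    reconnects = 0
    failures = 0
    cap = open_capture()
    try:
        while True:
            ret, frame = cap.read()
            if ret:
                failures = 0
                processed += 1
                if handle_frame(frame) is False:
                    break
            else:
                failures += 1
                if failures >= max_consecutive_failures:
                    # The decoder is stuck (e.g. after "error while decoding
                    # MB ..."); drop the connection and dial in again.
                    cap.release()
                    if reconnects >= max_reconnects:
                        break
                    reconnects += 1
                    time.sleep(reconnect_delay)
                    cap = open_capture()
                    failures = 0
    finally:
        cap.release()
    return processed
```

In the script above, open_capture would be a lambda opening the rtsp:// URL and handle_frame would wrap process_frame_MobileNetSSD plus the imshow/overlay code.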



Requirements:


Hardware : Intel i5-1035G1, RAM 8 GB, NVIDIA GeForce MX330


Software : Python 3.6.2 , OpenCV 4.5.1, Numpy 1.16.0


Questions:

- What should I do to fix this error?
- What causes this to happen?






Best Regards,



Thanks