
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (4)
-
The farm's regular Cron tasks
1 December 2010
Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance of the shared hosting on a regular basis. Combined with a system Cron on the central site, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...) -
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites that publish documents of all types.
It creates "media" items, meaning: a "media" is a SPIP article created automatically when a document is uploaded, whether audio, video, image or text; a single document can only be linked to one "media" article; -
The plugin: Podcasts.
14 July 2010
The problem of podcasting is, once again, a problem that reveals the standardization of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, strongly tied to iTunes, whose SPEC is here; the "Media RSS Module" format, which is more "free" and backed notably by Yahoo and the Miro software.
File types supported in feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)
Sur d’autres sites (1676)
-
How to use ffmpeg.js in a ReactJS project
1 September 2021, by L3M0L
My React app is using ffmpeg.wasm (https://github.com/ffmpegwasm/ffmpeg.wasm), but because of the "recent" issues with SharedArrayBuffer I have to move the project to ffmpeg.js (https://github.com/Kagami/ffmpeg.js).

Here is my problem. I installed the library (npm i ffmpeg.js) and tried the simple code provided on the GitHub page for the web workers, to test if it's working:

const worker = new Worker("ffmpeg-worker-webm.js");
worker.onmessage = function(e) {
  const msg = e.data;
  switch (msg.type) {
    case "ready":
      worker.postMessage({type: "run", arguments: ["-version"]});
      break;
    case "stdout":
      console.log(msg.data);
      break;
    case "stderr":
      console.log(msg.data);
      break;
    case "done":
      console.log(msg.data);
      break;
  }
};



but the onmessage handler never gets called; instead I get

GET https://localhost:3000/ffmpeg-worker-webm.js 403 (Forbidden)



I'm new to the web worker topic and I could not find enough articles about this problem to wrap my head around it (in fact, most of the articles use the exact same code as I do, and apparently it works for them). Is the problem localhost-specific, or is it a ReactJS problem that means I can't use the ffmpeg.js library at all? Can someone guide me on how to solve this issue?


-
FFMPEG hanging on a frame while streaming to YouTube (streaming incomplete video) - no errors
12 January 2024, by ThePrince
I have a Flask application that runs an FFMPEG command.


- I extract frames from a local video file (7 seconds long, 235 frames) and process each frame into JPEG format. I then restream the images one by one to YouTube through the pipe (stdin).
- Simultaneously, I read the original file as a second input and use -c:a copy so the audio is taken from it unchanged.
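The two steps above can be sketched roughly like this (a minimal sketch of the described setup, not my exact code: build_stream_command and stream_frames are illustrative names, the audio source and output URL are placeholders, and ffmpeg must be on the PATH):

```python
import subprocess

def build_stream_command(audio_source, output_url):
    """Build an ffmpeg command that reads MJPEG frames from stdin
    and muxes them with the audio track of audio_source."""
    return [
        'ffmpeg',
        '-f', 'image2pipe', '-c:v', 'mjpeg', '-i', '-',  # video from the pipe
        '-i', audio_source,                              # audio from the file
        '-map', '0:v', '-map', '1:a',
        '-c:v', 'libx264', '-c:a', 'copy',
        '-f', 'flv', output_url,
    ]

def stream_frames(jpeg_frames, audio_source, output_url):
    """Write each JPEG byte string to ffmpeg's stdin, then signal EOF."""
    proc = subprocess.Popen(build_stream_command(audio_source, output_url),
                            stdin=subprocess.PIPE)
    for frame in jpeg_frames:
        proc.stdin.write(frame)
    proc.stdin.close()   # EOF so ffmpeg can flush and finish muxing
    return proc.wait()
```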






Here is the command I run in Python:


command = [
    'ffmpeg',
    '-loglevel', 'trace',  # detailed log level

    # VIDEO STREAM
    '-f', 'image2pipe',
    '-c:v', 'mjpeg',
    '-i', '-',

    # AUDIO STREAM
    '-i', f"{input_folder}(unknown)",

    # MAPPING
    '-map', '0:v',  # video from the pipe
    '-map', '1:a',  # audio from the original file

    # OUTPUT STREAM
    '-c:v', 'libx264',  # video codec
    '-c:a', 'copy',     # copy audio from the original

    '-f', 'flv',
    f'{youtube_url}{stream_key}'
]



I get all the debugging info in the console.
It stops on frame 179 of 235 and just hangs there.


frame= 179 fps=4.2 q=28.0 size= 1020kB time=00:00:07.08 bitrate=1180.8kbits/s speed=0.165x



Ignore the FPS and speed since these are just the effect of it hanging. The FPS and speed will decrease gradually each second that passes.


After I Ctrl+C out of it, I see that all 235 frames were encoded but only 180 were muxed.


I increased the buffer size in case that was the issue and it seemed to mux all of them, but the content was still cut short.


To be clear, in my YouTube stream I only got the first 5 out of 7 seconds of video before cutting off.


When I increased the buffer, it hung only on the very last frame and showed no errors, but the output video, which should have been 7 seconds, was still cut short.
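One avenue I looked at (a sketch only; finish_stream is an illustrative name, not confirmed as the fix): making sure the Python side closes ffmpeg's stdin once all frames are written, since an unclosed pipe makes the muxer wait for more input instead of flushing its buffered tail.

```python
import subprocess

def finish_stream(proc: subprocess.Popen, timeout: float = 30.0) -> int:
    """Close ffmpeg's stdin so it sees EOF, then wait for it to flush
    the muxer and exit cleanly. Returns ffmpeg's exit code."""
    if proc.stdin is not None:
        proc.stdin.close()  # EOF lets ffmpeg mux the buffered tail frames
    return proc.wait(timeout=timeout)
```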


A typical output from the log is this:


[libx264 @ 0000021e913b67c0] frame= 233 QP=25.95 NAL=2 Slice:P Poc:106 I:21 P:310 SKIP:1289 size=3403 bytes
[mjpeg @ 0000021e913f9240] marker=d8 avail_size_in_buf=68061
[mjpeg @ 0000021e913f9240] marker parser used 0 bytes (0 bits)
[mjpeg @ 0000021e913f9240] marker=e0 avail_size_in_buf=68059
[mjpeg @ 0000021e913f9240] marker parser used 16 bytes (128 bits)
[mjpeg @ 0000021e913f9240] marker=db avail_size_in_buf=68041
[mjpeg @ 0000021e913f9240] index=0
[mjpeg @ 0000021e913f9240] qscale[0]: 3
[mjpeg @ 0000021e913f9240] marker parser used 67 bytes (536 bits)
[mjpeg @ 0000021e913f9240] marker=db avail_size_in_buf=67972
[mjpeg @ 0000021e913f9240] index=1
[mjpeg @ 0000021e913f9240] qscale[1]: 6
[mjpeg @ 0000021e913f9240] marker parser used 67 bytes (536 bits)
[mjpeg @ 0000021e913f9240] marker=c0 avail_size_in_buf=67903
[mjpeg @ 0000021e913f9240] sof0: picture: 852x480
[mjpeg @ 0000021e913f9240] component 0 2:2 id: 1 quant:0
[mjpeg @ 0000021e913f9240] component 1 1:1 id: 2 quant:1
[mjpeg @ 0000021e913f9240] component 2 1:1 id: 3 quant:1
[mjpeg @ 0000021e913f9240] pix fmt id 22111100
[mjpeg @ 0000021e913f9240] marker parser used 17 bytes (136 bits)
[mjpeg @ 0000021e913f9240] marker=c4 avail_size_in_buf=67884
[mjpeg @ 0000021e913f9240] class=0 index=0 nb_codes=12
[mjpeg @ 0000021e913f9240] marker parser used 31 bytes (248 bits)
[mjpeg @ 0000021e913f9240] marker=c4 avail_size_in_buf=67851
[mjpeg @ 0000021e913f9240] class=1 index=0 nb_codes=162
[mjpeg @ 0000021e913f9240] marker parser used 181 bytes (1448 bits)
[mjpeg @ 0000021e913f9240] marker=c4 avail_size_in_buf=67668
[mjpeg @ 0000021e913f9240] class=0 index=1 nb_codes=12
[mjpeg @ 0000021e913f9240] marker parser used 31 bytes (248 bits)
[mjpeg @ 0000021e913f9240] marker=c4 avail_size_in_buf=67635
[mjpeg @ 0000021e913f9240] class=1 index=1 nb_codes=162
[mjpeg @ 0000021e913f9240] marker parser used 181 bytes (1448 bits)
[mjpeg @ 0000021e913f9240] escaping removed 740 bytes
[mjpeg @ 0000021e913f9240] marker=da avail_size_in_buf=67452
[mjpeg @ 0000021e913f9240] component: 1
[mjpeg @ 0000021e913f9240] component: 2
[mjpeg @ 0000021e913f9240] component: 3
[mjpeg @ 0000021e913f9240] marker parser used 66711 bytes (533681 bits)
[mjpeg @ 0000021e913f9240] marker=d9 avail_size_in_buf=0
[mjpeg @ 0000021e913f9240] decode frame unused 0 bytes
[libx264 @ 0000021e913b67c0] frame= 234 QP=26.34 NAL=2 Slice:P Poc:108 I:23 P:368 SKIP:1229 size=3545 bytes



There really are no errors, other than ffmpeg complaining that it can't find the end of file when it hangs and I Ctrl+C.


I have tested the following:


- I decided to loop frame #1 repeatedly to make sure it wasn't the actual frame content that was the issue (the problem remained).
- I adjusted the buffer size, -re, and the FPS on both sources and the target.
- I switched to another video and it still hung.
- When I tried a higher-definition video it hung at a later frame (since there were more frames to work with), but it still hung close to the 5-second mark of a 7-second video.
- I changed the audio to a silent stream and the same issue happened.
- I even changed the output target to an actual mp4 file instead of a YouTube stream, and the output video was still cut off!

Note: I notice the default FPS for the piped video stream is 25, yet the original video was 30 FPS, so I don't know if this mismatch might be causing the issue.
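If that mismatch matters, one hedged tweak (something to try, not a confirmed fix) is declaring the rate of the piped input explicitly, since -framerate is an input option and must come before the corresponding -i. The fragment below only shows the start of the command; the rest is unchanged:

```python
command = [
    'ffmpeg',
    '-loglevel', 'trace',
    # declare the rate of the piped frames explicitly (ffmpeg's default is 25)
    '-f', 'image2pipe',
    '-framerate', '30',
    '-c:v', 'mjpeg',
    '-i', '-',
    # ... the rest of the command is unchanged
]
```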


Guys, I'm ready to throw in the towel here. I read the ffmpeg documentation in detail and nothing helped (the online chatrooms for ffmpeg don't work either). I looked at video tutorials and learned a lot about MPEG. I'm at a loss for what to do and ready to move on to another tool.


-
DNN OpenCV Python using RTSP always crashes after a few minutes
1 July 2022, by renaldyks
Description:


I want to create a people counter using DNN. The model I'm using is MobileNetSSD. The camera I use is an IPCam from Hikvision. Python communicates with the IPCam using the RTSP protocol.


The program itself works fine: when running on a sample video file it does its job well. But when I switched to the IPCam stream, an unknown error appeared.


Error:


Sometimes the error is:


[h264 @ 000001949f7adfc0] error while decoding MB 13 4, bytestream -6
[h264 @ 000001949f825ac0] left block unavailable for requested intra4x4 mode -1
[h264 @ 000001949f825ac0] error while decoding MB 0 17, bytestream 762



Sometimes no error appears and the program is simply killed.



Update on the error


After revising the code, I caught the error. The error found is


[h264 @ 0000019289b3fa80] error while decoding MB 4 5, bytestream -25



Now I don't know what to do, because I can't find this error anywhere on Google.


Source Code:


Old Code


This is my very earliest code, before the suggestions from the comments.


import time
import cv2
import numpy as np
import math
import threading

print("Load MobileNetSSD model")

prototxt = "MobileNetSSD_deploy.prototxt"
model = "MobileNetSSD_deploy.caffemodel"

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
           "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
           "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
           "sofa", "train", "tvmonitor"]

net = cv2.dnn.readNetFromCaffe(prototxt, model)

pos_line = 0
offset = 50
car = 0
detected = False
check = 0
prev_frame_time = 0


def detect():
    global check, car, detected
    check = 0
    if detected == False:
        car += 1
        detected = True


def center_object(x, y, w, h):
    cx = x + int(w / 2)
    cy = y + int(h / 2)
    return cx, cy


def process_frame_MobileNetSSD(next_frame):
    global car, check, detected

    rgb = cv2.cvtColor(next_frame, cv2.COLOR_BGR2RGB)
    (H, W) = next_frame.shape[:2]

    blob = cv2.dnn.blobFromImage(next_frame, size=(300, 300), ddepth=cv2.CV_8U)
    net.setInput(blob, scalefactor=1.0/127.5, mean=[127.5, 127.5, 127.5])
    detections = net.forward()

    for i in np.arange(0, detections.shape[2]):
        confidence = detections[0, 0, i, 2]

        if confidence > 0.5:

            idx = int(detections[0, 0, i, 1])
            if CLASSES[idx] != "person":
                continue

            label = CLASSES[idx]

            box = detections[0, 0, i, 3:7] * np.array([W, H, W, H])
            (startX, startY, endX, endY) = box.astype("int")

            center_ob = center_object(startX, startY, endX-startX, endY-startY)
            cv2.circle(next_frame, center_ob, 4, (0, 0, 255), -1)

            if center_ob[0] < (pos_line+offset) and center_ob[0] > (pos_line-offset):
                # car += 1
                detect()
            else:
                check += 1
                if check >= 5:
                    detected = False

            cv2.putText(next_frame, label+' '+str(round(confidence, 2)),
                        (startX, startY-10), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
            cv2.rectangle(next_frame, (startX, startY),
                          (endX, endY), (0, 255, 0), 3)

    return next_frame


def PersonDetection_UsingMobileNetSSD():
    cap = cv2.VideoCapture()
    cap.open("rtsp://admin:Admin12345@192.168.100.20:554/Streaming/channels/2/")

    global car, pos_line, prev_frame_time

    frame_count = 0

    while True:
        try:
            time.sleep(0.1)
            new_frame_time = time.time()
            fps = int(1/(new_frame_time-prev_frame_time))
            prev_frame_time = new_frame_time

            ret, next_frame = cap.read()
            w_video = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
            h_video = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
            pos_line = int(h_video/2)-50

            if ret == False:
                break

            frame_count += 1
            cv2.line(next_frame, (int(h_video/2), 0),
                     (int(h_video/2), int(h_video)), (255, 127, 0), 3)
            next_frame = process_frame_MobileNetSSD(next_frame)

            cv2.rectangle(next_frame, (248, 22), (342, 8), (0, 0, 0), -1)
            cv2.putText(next_frame, "Counter : "+str(car), (250, 20),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
            cv2.putText(next_frame, "FPS : "+str(fps), (0, int(h_video)-10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
            cv2.imshow("Video Original", next_frame)
            # print(car)

        except Exception as e:
            print(str(e))

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    print("/MobileNetSSD Person Detector")

    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    # target= passes the function itself instead of calling it immediately
    t1 = threading.Thread(target=PersonDetection_UsingMobileNetSSD)
    t1.start()



New Code


I have revised my code and the program still stops taking frames. I only revised the PersonDetection_UsingMobileNetSSD() function, and I've also removed the multithreading I was using. The code ran for about 30 minutes, but after a broken frame it never re-executes the if ret: block.

def PersonDetection_UsingMobileNetSSD():
    cap = cv2.VideoCapture()
    cap.open("rtsp://admin:Admin12345@192.168.100.20:554/Streaming/channels/2/")

    global car, pos_line, prev_frame_time

    frame_count = 0

    while True:
        try:
            if cap.isOpened():
                ret, next_frame = cap.read()
                if ret:
                    new_frame_time = time.time()
                    fps = int(1/(new_frame_time-prev_frame_time))
                    prev_frame_time = new_frame_time
                    w_video = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
                    h_video = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
                    pos_line = int(h_video/2)-50

                    # next_frame = cv2.resize(next_frame, (720, 480), fx=0, fy=0, interpolation=cv2.INTER_CUBIC)

                    frame_count += 1
                    cv2.line(next_frame, (int(h_video/2), 0),
                             (int(h_video/2), int(h_video)), (255, 127, 0), 3)
                    next_frame = process_frame_MobileNetSSD(next_frame)

                    cv2.rectangle(next_frame, (248, 22), (342, 8), (0, 0, 0), -1)
                    cv2.putText(next_frame, "Counter : "+str(car), (250, 20),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
                    cv2.putText(next_frame, "FPS : "+str(fps), (0, int(h_video)-10),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
                    cv2.imshow("Video Original", next_frame)
                    # print(car)
                else:
                    print("Crashed Frame")
            else:
                print("Cap is not open")

        except Exception as e:
            print(str(e))

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    print("/MobileNetSSD Person Detector")

    cap.release()
    cv2.destroyAllWindows()
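Since the capture never recovers after a broken frame, one pattern I could try is re-opening the capture on a failed read instead of looping on a dead handle. A minimal sketch (read_with_reconnect is an illustrative name; in my case open_capture would be lambda: cv2.VideoCapture(rtsp_url)):

```python
import time

def read_with_reconnect(open_capture, max_retries=5, delay=2.0):
    """Yield frames from a capture, re-opening it whenever a read
    fails, and give up after max_retries consecutive failures."""
    retries = 0
    cap = open_capture()
    while retries <= max_retries:
        ret, frame = cap.read()
        if ret:
            retries = 0          # a good frame resets the failure count
            yield frame
        else:
            retries += 1
            cap.release()        # drop the broken handle
            time.sleep(delay)    # give the camera a moment to recover
            cap = open_capture() # reconnect
    cap.release()
```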



Requirements:

Hardware: Intel i5-1035G1, 8 GB RAM, NVIDIA GeForce MX330

Software: Python 3.6.2, OpenCV 4.5.1, NumPy 1.16.0


Questions:


- What should I do to fix this error?
- What causes this to happen?






Best Regards,



Thanks