
Other articles (25)
-
The accepted formats
28 January 2010. The following commands report the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
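As an aside, the same queries can be scripted; a minimal Python sketch (assuming ffmpeg is on the PATH, and with the h264 check purely illustrative):

import subprocess

# Ask the local ffmpeg build for its known container formats and codecs.
formats = subprocess.run(["ffmpeg", "-hide_banner", "-formats"],
                         capture_output=True, text=True).stdout
codecs = subprocess.run(["ffmpeg", "-hide_banner", "-codecs"],
                        capture_output=True, text=True).stdout

# Crude check for a codec entry; each codec is listed on its own line.
print("h264 listed:", any(" h264 " in line for line in codecs.splitlines()))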
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
To begin with, we (...)
-
Libraries and binaries specific to video and audio processing
31 January 2010. The following software and libraries are used by SPIPmotion in one way or another.
Required binaries
FFmpeg: the main encoder; it can transcode almost every type of video and audio file into formats playable on the web. See this tutorial for its installation;
Oggz-tools: tools for inspecting Ogg files;
MediaInfo: retrieves information from most video and audio formats (a small probing sketch follows this excerpt);
Complementary, optional binaries
flvtool2: (...)
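As an illustration of how MediaInfo can be queried from a script (a hedged sketch: "clip.mp4" is a placeholder file, and the --Inform template syntax assumes a reasonably recent MediaInfo CLI):

import subprocess

# Pull a single field (container duration, in milliseconds) from a file
# using MediaInfo's template output instead of parsing the full report.
duration_ms = subprocess.run(
    ["mediainfo", "--Inform=General;%Duration%", "clip.mp4"],
    capture_output=True, text=True,
).stdout.strip()
print("duration (ms):", duration_ms)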
-
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flash player Flowplayer is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (2264)
-
How to get a real-time video stream from the iPhone camera and send it to a server?
7 November 2016, by Ron
I am using AVCaptureSession to capture video and get real-time frames from the iPhone camera, but how can I send them to a server, multiplexing the frames and sound, and how can I use ffmpeg to accomplish this task? If anyone has a tutorial about ffmpeg or any example, please share it here.
-
OpenCV returns an Empty Frame on video.read
20 March 2019, by Ikechukwu Anude. Below is the relevant code:
import cv2 as cv
import numpy as np

video = cv.VideoCapture(0)  # tells obj to use built-in camera

# create a face cascade object
face_cascade = cv.CascadeClassifier(r"C:\Users\xxxxxxx\AppData\Roaming\Python\Python36\site-packages\cv2\data\haarcascade_frontalcatface.xml")

a = 1
# create loop to display a video
while True:
    a = a + 1
    check, frame = video.read()
    print(frame)

    # converts to a grayscale img
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)

    # detect the faces
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)
    for (x, y, w, h) in faces:
        print(x, y, w, h)

    # show the image
    cv.imshow('capturing', gray)

    key = cv.waitKey(1)  # generate a new frame every 1 ms
    if key == ord('q'):  # once you enter 'q' the loop will be exited
        break

print(a)  # this will print the number of frames captured
video.release()  # release the webcam
cv.destroyAllWindows()  # close all windows

The code displays a video captured from my webcam. Despite that, OpenCV doesn't seem to be processing any frames, as every frame looks like this:
[[0 0 0]
[0 0 0]
[0 0 0]
...
[0 0 0]
[0 0 0]
[0 0 0]]]

which I assume means that they are empty.
This, I believe, is preventing the algorithm from detecting my face in the frame. I have a feeling the issue lies in the ffmpeg codec, but I'm not entirely sure how to proceed even if that is the case.
OS: Windows 10
Language: Python
Why is the frame empty, and how can I get OpenCV to detect my face in the frame?
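Not part of the original question, but before blaming a codec it is worth confirming that the capture device actually opens and that read() succeeds; on Windows, explicitly selecting the DirectShow backend is a common remedy for all-black frames. A minimal diagnostic sketch (the CAP_DSHOW choice is an assumption, not a confirmed fix):

import cv2 as cv

# Open camera 0 while forcing the DirectShow backend (Windows-specific).
video = cv.VideoCapture(0, cv.CAP_DSHOW)
if not video.isOpened():
    raise RuntimeError("camera 0 could not be opened")

check, frame = video.read()
if not check or frame is None:
    raise RuntimeError("read() returned no frame")

# An all-zero frame renders as pure black; frame.any() is False in that case.
print("all-black frame" if not frame.any() else "frame contains image data")
video.release()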
-
FFmpeg Not Transcoding In Real Time
2 December 2018, by Nimble
I've been setting up a recording build for a friend to resemble mine, but I can't seem to get things to work in real time despite (as far as I can tell) sufficient hardware.
System specs: 8600K, GTX 1050 Ti, 16 GB RAM, 1 TB 860 EVO
Test command:
ffmpeg -y -hide_banner -thread_queue_size 9999 -indexmem 9999 -guess_layout_max 0 -f dshow -rtbufsize 2147.48M `
-video_size 1920x1080 -framerate 60 `
-i video="@device_pnp_\\?\usb#vid_07ca&pid_0570&mi_00#7&3886ab1a&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global":audio="@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{0A494693-5F33-4304-88D7-394757E09648}" `
-thread_queue_size 9999 -indexmem 9999 -guess_layout_max 0 -f dshow -rtbufsize 2147.48M `
-video_size 1920x1080 -framerate 60 `
-i video="@device_pnp_\\?\usb#vid_07ca&pid_0570&mi_00#7&24df76f&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global":audio="@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{0E6F0DEF-2B29-4117-8D30-13F01160AC5B}" `
-map 0:0,0:1 -map 0:1 -c:v h264_nvenc -r 60 -rc-lookahead 120 -forced-idr 1 -strict_gop 1 -sc_threshold 0 -flags +cgop `
-force_key_frames "expr:gte(t,n_forced*2)" -preset: llhp -pix_fmt nv12 -b:v 250M -minrate 250M -maxrate 250M `
-bufsize 250M -c:a aac -ar 44100 -b:a 384k -ac 2 -af "aresample=async=250" -vsync 1 -ss 00:00:00.000 `
-max_muxing_queue_size 9999 -f segment -segment_time 600 -segment_wrap 9 -reset_timestamps 1 `
-segment_format_options max_delay=0 C:\Users\Jordan\Videos\FFmpeg\Left\Left%02d.ts `
-map 1:0,1:1 -map 1:1 -c:v h264_nvenc -r 60 -rc-lookahead 120 -forced-idr 1 -strict_gop 1 -sc_threshold 0 -flags +cgop `
-force_key_frames "expr:gte(t,n_forced*2)" -preset: llhp -pix_fmt nv12 -b:v 250M -minrate 250M -maxrate 250M `
-bufsize 250M -c:a aac -ar 44100 -b:a 384k -ac 2 -af "aresample=async=250" -vsync 1 -ss 00:00:00.000 `
-max_muxing_queue_size 9999 -f segment -segment_time 600 -segment_wrap 9 -reset_timestamps 1 `
-segment_format_options max_delay=0 C:\Users\Jordan\Videos\FFmpeg\Right\Right%02d.ts

For one reason or another this command does not transcode video in real time, which is a big issue when you're trying to record, not simply convert a file. When I omit one of the two outputs, or halve the resolution of each input/output, everything runs in real time. That suggests a bottleneck somewhere, but when monitoring everything in Task Manager nothing is close to capping out (GPU encoder, CPU, RAM, and SSD all below 30% usage).
Furthermore, when I record both streams as one 4K60 video via OBS, things work perfectly fine, i.e. in real time. So I don't understand how transcoding two 1080p60 streams in FFmpeg could be any more intensive than one 4K60 stream in OBS. Looking at the GPU's encoder usage, it shows about 30% for the two 1080p60 streams in FFmpeg and 80% for the single 4K60 stream in OBS, yet the OBS recording runs in real time while the FFmpeg recording runs at 0.6x speed.
Lastly, at home I run a very similar command, but with two 4K60 streams, a 1080p60 stream, and another two audio streams instead, without issue. I do have a GTX 1080 there, however; still, when monitoring the 1050 Ti used in this situation, nothing points to a GPU bottleneck.
Any insight would be greatly appreciated; I've been at this for hours now with no clue as to what could be causing it.
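One way to quantify how far behind the encode is falling (a hedged sketch, not from the post): ffmpeg's -progress output emits key=value lines, including a speed field, which a small wrapper can watch. The io_args list below is a placeholder for the full set of inputs and outputs shown above:

import subprocess

# Placeholder for the full set of input/output arguments shown above.
io_args = ["-i", "input_placeholder", "output_placeholder.ts"]

# Launch ffmpeg with machine-readable progress on stdout and watch "speed=".
proc = subprocess.Popen(
    ["ffmpeg", "-hide_banner", "-progress", "pipe:1"] + io_args,
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    key, _, value = line.strip().partition("=")
    if key == "speed":  # e.g. "0.6x" when falling behind real time
        print("current speed:", value)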