
Other articles (54)
-
Support for all media types
10 April 2011
Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (HTML, CSS), LaTeX, Google Earth) (...)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.
On other sites (6685)
-
Stream RAW8 video from camera using OpenCV and Python [Windows]
14 November 2022, by awin
We have a camera that streams RAW8 video at 1920x1080. The GUID used is GREY. We are able to stream video from this camera using ffmpeg on Windows with the command below:


ffmpeg -f dshow -pix_fmt gray -video_size 1920x1080 -i video="CAM0" -f nut - | ffplay -



We are now trying to grab images from this camera with OpenCV, using the code snippet below, but it is unable to grab any frame (frame_grabbed is always False).


import cv2
import numpy as np

# reading the video from CAM0
source = cv2.VideoCapture(1)

height = 1920
width = 1080

source.set(cv2.CAP_PROP_FRAME_WIDTH, width)
source.set(cv2.CAP_PROP_FRAME_HEIGHT, height)

image = np.zeros([height, width, 3], np.uint8)

while True:
    # Extracting the frames
    frame_grabbed, image = source.read()

    if frame_grabbed:
        colour1 = cv2.cvtColor(image, cv2.COLOR_BayerRG2BGR)
        cv2.imshow("Demosaiced image", colour1)
    else:
        print("No images grabbed")

    # Exit on q
    key = cv2.waitKey(1)
    if key == ord("q"):
        break

# closing the window
cv2.destroyAllWindows()
source.release()




Are we missing something here?
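As a side note for anyone reading along: in the snippet above, height is set to 1920 and width to 1080, the reverse of the 1920x1080 stream, and OpenCV's default DirectShow path converts everything to BGR, which can fail outright for a RAW8/GREY source. A minimal sketch that forces the DirectShow backend, requests the correct size and disables the automatic RGB conversion could look like the following; the device index 1 and the GREY FourCC are assumptions carried over from the post, and whether the driver honours them still needs to be verified:

import cv2

# Sketch: force the DirectShow backend and ask for the raw single-channel format.
source = cv2.VideoCapture(1, cv2.CAP_DSHOW)
source.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)   # width is 1920, height is 1080
source.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
source.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"GREY"))  # assumed FourCC
source.set(cv2.CAP_PROP_CONVERT_RGB, 0)      # hand back the raw 8-bit frame untouched

while True:
    frame_grabbed, frame = source.read()
    if frame_grabbed:
        # frame should be a single-channel 8-bit image here;
        # demosaic only if the sensor actually delivers Bayer data
        cv2.imshow("RAW8 frame", frame)
    if cv2.waitKey(1) == ord("q"):
        break

source.release()
cv2.destroyAllWindows()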


We then came across this post about piping ffmpeg output to Python (link). However, when we pass the command as below:


command = [ 'ffmpeg.exe',
 '-f', 'dshow',
 '-i', 'video="CAM0"',
 '-pix_fmt', 'gray',
 '-video_size','1920x1080'
 '-f', 'nut', '-']



it throws the following error:




Could not find video device with name ["CAM0"] among source devices
of type video. video="CAM0" : I/O error
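This error is consistent with the argument list above: because the command is passed as a Python list, no shell is involved, so the literal double quotes in 'video="CAM0"' become part of the device name and dshow looks for a device literally called "CAM0" with quotes. The -pix_fmt and -video_size options are also placed after -i here, whereas in the working command line they are input options. A corrected sketch of the list (also note the missing comma after '1920x1080' in the snippet above, which makes Python silently concatenate it with '-f'):

command = ['ffmpeg.exe',
           '-f', 'dshow',
           '-pix_fmt', 'gray',
           '-video_size', '1920x1080',
           '-i', 'video=CAM0',
           '-f', 'nut', '-']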




I have verified that the camera is present using the command below:


command = [ 'ffmpeg.exe',
 '-list_devices', 'true',
 '-f', 'dshow',
 '-i', 'dummy']



This detects CAM0 as shown below:


ffmpeg version 5.0.1-full_build-www.gyan.dev Copyright (c) 2000-2022 the FFmpeg developers
 built with gcc 11.2.0 (Rev7, Built by MSYS2 project)
 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab 
--enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
 libavutil 57. 17.100 / 57. 17.100
 libavcodec 59. 18.100 / 59. 18.100
 libavformat 59. 16.100 / 59. 16.100
 libavdevice 59. 4.100 / 59. 4.100
 libavfilter 8. 24.100 / 8. 24.100
 libswscale 6. 4.100 / 6. 4.100
 libswresample 4. 3.100 / 4. 3.100
 libpostproc 56. 3.100 / 56. 3.100
[dshow @ 000001ea39e40600] "HP HD Camera" (video)
[dshow @ 000001ea39e40600] Alternative name "@device_pnp_\\?\usb#vid_04f2&pid_b6bf&mi_00#6&1737142c&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"
[dshow @ 000001ea39e40600] "CAM0" (video)
[dshow @ 000001ea39e40600] Alternative name "@device_pnp_\\?\usb#vid_0400&pid_0011&mi_00#7&1affbd5b&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"



In short, we are able to capture video using the ffmpeg command line, but unable to grab any frame using OpenCV VideoCapture or ffmpeg piped into OpenCV. Any pointers?
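If the end goal is just numpy frames in Python, one option (a sketch rather than the approach from the post, and it assumes the device really is exposed as CAM0) is to skip cv2.VideoCapture entirely and read fixed-size gray frames from an ffmpeg pipe, using rawvideo instead of nut so each frame is exactly width*height bytes:

import subprocess
import numpy as np
import cv2

WIDTH, HEIGHT = 1920, 1080  # RAW8 / gray: one byte per pixel

command = ['ffmpeg.exe',
           '-f', 'dshow',
           '-pix_fmt', 'gray',
           '-video_size', f'{WIDTH}x{HEIGHT}',
           '-i', 'video=CAM0',
           '-f', 'rawvideo', '-pix_fmt', 'gray', '-']

proc = subprocess.Popen(command, stdout=subprocess.PIPE, bufsize=WIDTH * HEIGHT * 10)

while True:
    raw = proc.stdout.read(WIDTH * HEIGHT)
    if len(raw) < WIDTH * HEIGHT:
        break  # stream ended or ffmpeg exited
    frame = np.frombuffer(raw, dtype=np.uint8).reshape(HEIGHT, WIDTH)
    cv2.imshow("RAW8 via ffmpeg pipe", frame)
    if cv2.waitKey(1) == ord("q"):
        break

proc.stdout.close()
proc.terminate()
cv2.destroyAllWindows()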


Thanks!


-
Combining png and mp3: why does using -loop 1 on the image and the -shortest flag not cut the output length when the audio ends?
9 November 2022, by Alexander
I have several image and audio files, and I want to combine each set into a separate video file where the image is stretched to the length of the audio file. For example, a lion image should be displayed for as long as the lion sound is playing. The output file should end when the lion audio ends.


I am also trying to normalize the resolution of the output clips, since the image files don't all have the same width/height.


Here is the command I'm currently running. The command is on one line, but I added some line breaks here for readability.


-loop 1 
-i "C:\Temp\screenshot\clip_1.png"
-i "C:\Temp\audio\clip_1.mp3" 
-shortest 
-filter_complex "[0:v]scale=3000:-2:force_original_aspect_ratio=decrease" 
-c:v h264_nvenc 
-c:a aac 
-b:a 192k 
-y 
C:\Temp\clips\clip_1.mp4



To my understanding, the -loop 1 before the image input should make it loop indefinitely, and -shortest should end the video when the shortest input (the audio in this case) ends. In the following example, the audio file is 6 seconds long, but the output is always 20 seconds.
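For reference, -shortest only stops feeding new frames to the encoder; with a looped still image the mp4 muxer can still flush video it has already buffered in its interleaving queue, which is the usual explanation for this kind of overshoot. A commonly suggested workaround, untested here and therefore only a sketch, is to also set the format-level shortest flag and cap the interleaving delta (or simply pass -t with the known audio duration):

-loop 1
-i "C:\Temp\screenshot\clip_1.png"
-i "C:\Temp\audio\clip_1.mp3"
-filter_complex "[0:v]scale=3000:-2:force_original_aspect_ratio=decrease"
-c:v h264_nvenc
-c:a aac
-b:a 192k
-shortest
-fflags +shortest
-max_interleave_delta 200M
-y
C:\Temp\clips\clip_1.mp4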

Here is the full output of ffmpeg:


ffmpeg version 2022-11-03-git-5ccd4d3060-full_build-www.gyan.dev Copyright (c) 2000-2022 the FFmpeg developers
 built with gcc 12.1.0 (Rev2, Built by MSYS2 project)
 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
 libavutil 57. 40.100 / 57. 40.100
 libavcodec 59. 51.101 / 59. 51.101
 libavformat 59. 34.101 / 59. 34.101
 libavdevice 59. 8.101 / 59. 8.101
 libavfilter 8. 49.101 / 8. 49.101
 libswscale 6. 8.112 / 6. 8.112
 libswresample 4. 9.100 / 4. 9.100
 libpostproc 56. 7.100 / 56. 7.100
Input #0, png_pipe, from 'C:\Temp\screenshot\clip_1.png':
 Duration: N/A, bitrate: N/A
 Stream #0:0: Video: png, rgba(pc), 640x148, 25 fps, 25 tbr, 25 tbn
[mp3 @ 0000016c26784a80] Estimating duration from bitrate, this may be inaccurate
Input #1, mp3, from 'C:\Temp\audio\clip_1.mp3':
 Duration: 00:00:06.10, start: 0.000000, bitrate: 32 kb/s
 Stream #1:0: Audio: mp3, 24000 Hz, mono, fltp, 32 kb/s
Stream mapping:
 Stream #0:0 (png) -> scale:default (graph 0)
 scale:default (graph 0) -> Stream #0:0 (h264_nvenc)
 Stream #1:0 -> #0:1 (mp3 (mp3float) -> aac (native))
Press [q] to stop, [?] for help
[aac @ 0000016c267ab180] Too many bits 8192.000000 > 6144 per frame requested, clamping to max
Output #0, mp4, to 'C:\Temp\clips\clip_1.mp4':
 Metadata:
 encoder : Lavf59.34.101
 Stream #0:0: Video: h264 (Main) (avc1 / 0x31637661), rgba(pc, gbr/unknown/unknown, progressive), 3000x694, q=2-31, 2000 kb/s, 25 fps, 12800 tbn
 Metadata:
 encoder : Lavc59.51.101 h264_nvenc
 Side data:
 cpb: bitrate max/min/avg: 0/0/2000000 buffer size: 4000000 vbv_delay: N/A
 Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 24000 Hz, mono, fltp, 144 kb/s
 Metadata:
 encoder : Lavc59.51.101 aac
frame= 504 fps=149 q=11.0 Lsize= 925kB time=00:00:20.00 bitrate= 378.8kbits/s speed=5.91x
video:852kB audio:67kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.645776%
[aac @ 0000016c267ab180] Qavg: 65496.695



-
ffmpeg connection timeout for rtsp streams
24 December 2022, by Bods1994
I tried to record RTSP streams using ffmpeg with this command:

ffmpeg -hide_banner -loglevel error -y -rtsp_transport tcp -i rtsp://192.168.0.147:554/live/ch0 -t 600 -acodec copy -vcodec copy -metadata title="CAM1" -timeout 3 -f segment -reset_timestamps 1 -segment_time 60 -segment_atclocktime 1 -segment_format mkv -strftime 1 C:\temp\%Y%m%d_%H%M%S.mkv


If the connection is lost during recording, ffmpeg exits within a few milliseconds.
But if the camera is not available when connecting, it hangs forever.


I read several posts suggesting "-stimeout", but my builds reject it: Unrecognized option 'stimeout'. Error splitting the argument list: Option not found


For my tests I used the Windows versions downloaded from the two mirrors listed at https://ffmpeg.org/download.html#build-windows.
Each build is from 2022-12-22.


Does anyone have an idea how to detect connection problems and retry after a few seconds?
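One observation, offered as a sketch rather than a verified fix: in recent FFmpeg builds the RTSP socket timeout option appears to have been renamed from stimeout to timeout (still in microseconds), and it has to be placed before -i so that it applies to the RTSP input rather than to the output; the -timeout 3 in the command above sits on the output side. Retrying could then be handled by a small wrapper script, for example:

import subprocess
import time

# Sketch of a retry wrapper. Assumptions: the renamed -timeout input option
# (in microseconds) is accepted by this build, and a 10 s pause between
# attempts is acceptable.
command = [
    "ffmpeg", "-hide_banner", "-loglevel", "error", "-y",
    "-rtsp_transport", "tcp",
    "-timeout", "5000000",              # give up connecting/reading after 5 s
    "-i", "rtsp://192.168.0.147:554/live/ch0",
    "-t", "600", "-acodec", "copy", "-vcodec", "copy",
    "-metadata", "title=CAM1",
    "-f", "segment", "-reset_timestamps", "1",
    "-segment_time", "60", "-segment_atclocktime", "1",
    "-segment_format", "mkv", "-strftime", "1",
    r"C:\temp\%Y%m%d_%H%M%S.mkv",
]

while True:
    result = subprocess.run(command)
    print(f"ffmpeg exited with code {result.returncode}, retrying in 10 s")
    time.sleep(10)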