Media (1)
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (69)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page. -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...) -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (9411)
-
Is it possible to pipe ffmpeg output with multiple files (HLS or DASH)?
29 August 2020, by New Dev
I'm using FFmpeg to generate fragmented MP4s in DASH and HLS formats.


ffmpeg -i input.mov -f dash -seg_duration 6 -hls_playlist true output.mpd


The above (simplified) command outputs multiple files in addition to output.mpd (e.g. the actual segments, master.m3u8, etc.).

Is there a way to get each produced file into its own separate output stream?



Broader context:


I'm trying to build a transcoder in Node.js running on Google Cloud, the idea being that it writes directly to Google Cloud Storage through a writable stream. I can only create one stream per file, and since the number of files is dynamic, I'm not sure how to obtain a stream for each produced file.
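
One possible workaround, sketched here in Python under stated assumptions rather than as a definitive answer: since ffmpeg's dash muxer writes a dynamic set of files into a directory, you can poll that directory while ffmpeg runs and upload each file to Google Cloud Storage as it appears. The bucket name, scratch directory, and polling interval below are illustrative assumptions, and a real implementation would also re-upload the manifests, which ffmpeg rewrites as segments are added.

import os
import subprocess
import time

from google.cloud import storage  # assumes the google-cloud-storage package

OUT_DIR = "/tmp/dash_out"  # scratch directory ffmpeg writes into (assumption)
os.makedirs(OUT_DIR, exist_ok=True)
bucket = storage.Client().bucket("my-bucket")  # hypothetical bucket name

# Same command as above, pointed at the scratch directory.
proc = subprocess.Popen([
    "ffmpeg", "-i", "input.mov", "-f", "dash",
    "-seg_duration", "6", "-hls_playlist", "true",
    os.path.join(OUT_DIR, "output.mpd"),
])

uploaded = set()

def sweep():
    # Upload any file we haven't pushed yet. A robust version would
    # verify ffmpeg has finished writing a segment before shipping it.
    for name in os.listdir(OUT_DIR):
        if name not in uploaded:
            bucket.blob(name).upload_from_filename(os.path.join(OUT_DIR, name))
            uploaded.add(name)

while proc.poll() is None:  # while ffmpeg is still running
    sweep()
    time.sleep(1)
sweep()  # final pass for files written just before ffmpeg exited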


-
How to use FFmpeg on Python/Windows 10 with a pipe for screen recording?
20 September 2020, by Trmotta
I'd like to record the screen with ffmpeg, as it seems to be the only tool that can record a region of the screen along with the mouse cursor.


The following code was adapted from "i want to display mouse pointer in my recording", but it doesn't work on a Windows 10 (x64) setup (using Python 3.6).


#!/usr/bin/env python3

# ffmpeg -y -pix_fmt bgr0 -f avfoundation -r 20 -t 10 -i 1 -vf scale=w=3840:h=2160 -f rawvideo /dev/null

import sys
import cv2
import time
import subprocess
import numpy as np

w, h = 100, 100

def ffmpegGrab():
    """Generator that yields raw BGR frames read from the ffmpeg subprocess."""

    # ffmpeg -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 640x480 -show_region 1 -i desktop output.mkv  # code that actually works with the ffmpeg CLI

    cmd = 'D:/Downloads/ffmpeg-20200831-4a11a6f-win64-static/ffmpeg-20200831-4a11a6f-win64-static/bin/ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -show_region 1 -i desktop -f image2pipe, -pix_fmt bgr24 -vcodec rawvideo -an -sn'

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
    # out, err = proc.communicate()
    while True:
        # one bgr24 frame is w*h*3 bytes
        frame = proc.stdout.read(w * h * 3)
        yield np.frombuffer(frame, dtype=np.uint8).reshape((h, w, 3))

# Get frame generator
gen = ffmpegGrab()

# Get start time
start = time.time()

# Read video frames from ffmpeg in loop
nFrames = 0
while True:
    # Read next frame from ffmpeg
    frame = next(gen)
    nFrames += 1

    cv2.imshow('screenshot', frame)

    if cv2.waitKey(1) == ord("q"):
        break

    fps = nFrames / (time.time() - start)
    print(f'FPS: {fps}')

cv2.destroyAllWindows()


By using 'cmd' as stated above, I get the following error:


b"ffmpeg version git-2020-08-31-4a11a6f Copyright (c) 2000-2020 the FFmpeg developers\r\n built with gcc 10.2.1 (GCC) 20200805\r\n configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e --enable-libsvtav1 --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf\r\n libavutil 56. 58.100 / 56. 58.100\r\n libavcodec 58.101.101 / 58.101.101\r\n libavformat 58. 51.101 / 58. 51.101\r\n libavdevice 58. 11.101 / 58. 11.101\r\n libavfilter 7. 87.100 / 7. 87.100\r\n libswscale 5. 8.100 / 5. 8.100\r\n libswresample 3. 8.100 / 3. 8.100\r\n libpostproc 55. 8.100 / 55. 8.100\r\nTrailing option(s) found in the command: may be ignored.\r\n[gdigrab @ 0000017ab857f100] Capturing whole desktop as 100x100x32 at (10,20)\r\nInput #0, gdigrab, from 'desktop':\r\n Duration: N/A, start: 1599021857.538752, bitrate: 9612 kb/s\r\n Stream #0:0: Video: bmp, bgra, 100x100, 9612 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc\r\n**At least one output file must be specified**\r\n"


This is the content of proc's output (and also of proc.communicate()). The program crashes right after, when trying to reshape this message into a 100x100 image.


I do not want an output file. I need to use a Python subprocess with a pipe in order to deliver those screen frames directly to my Python code, with no file I/O at all.


If I try the following:


cmd = 'D:/Downloads/ffmpeg-20200831-4a11a6f-win64-static/ffmpeg-20200831-4a11a6f-win64-static/bin/ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -i desktop -pix_fmt bgr24 -vcodec rawvideo -an -sn -f image2pipe'


proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)


Then 'frame', inside the 'while True' loop, is just filled with b''.


I tried the following libraries with no success, as I couldn't find how to capture the mouse cursor, or how to capture the screen at all: https://github.com/abhiTronix/vidgear, https://github.com/kkroening/ffmpeg-python


What am I missing?
Thank you.
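
For comparison, a minimal sketch of the pipe set up the way ffmpeg expects it. The error log above says "At least one output file must be specified", and both commands end without an output URL (the first also has a stray comma after image2pipe), so ffmpeg never writes frame bytes to stdout. Adding - as the output, and keeping stderr out of the frame stream, should give non-empty reads. This assumes ffmpeg.exe is on PATH; paths and sizes are illustrative.

import subprocess

import cv2
import numpy as np

w, h = 100, 100

# '-' is the explicit output URL that was missing from both commands above.
cmd = [
    'ffmpeg',
    '-f', 'gdigrab', '-framerate', '30',
    '-offset_x', '10', '-offset_y', '20',
    '-video_size', f'{w}x{h}', '-show_region', '1',
    '-i', 'desktop',
    '-f', 'rawvideo', '-pix_fmt', 'bgr24', '-an', '-sn',
    '-',  # write raw frames to stdout
]
# Keep stderr separate: with stderr=subprocess.STDOUT the banner and log
# lines would be interleaved with the raw frame bytes.
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

while True:
    buf = proc.stdout.read(w * h * 3)  # exactly one bgr24 frame
    if len(buf) < w * h * 3:  # ffmpeg exited or the pipe closed
        break
    frame = np.frombuffer(buf, dtype=np.uint8).reshape((h, w, 3))
    cv2.imshow('screenshot', frame)
    if cv2.waitKey(1) == ord('q'):
        break

proc.kill()
cv2.destroyAllWindows()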


-
How to pipe multiple images, being created in parallel with an index, to ffmpeg so that it can match the speed of image creation?
23 September 2020, by vishwas.mittal
We have a system that spews out 4-channel png images frame-by-frame (we control the output format of these images too, so we can use something else as long as it supports transparency). Right now we wait for all the images and then encode them with ffmpeg into a webm video file with vp8 (the libvpx encoder). But we now want to pipe these images to FFmpeg so it encodes the WebM video while the images are being produced, so that we don't wait for ffmpeg to encode them all afterwards.

This is the current command, in Python syntax:


['/usr/bin/ffmpeg', '-hide_banner', '-y', '-loglevel', 'info', '-f', 'rawvideo', '-pix_fmt', 'bgra', '-s', '1573x900', '-framerate', '30', '-i', '-', '-i', 'audio.wav', '-c:v', 'libvpx', '-b:v', '0', '-crf', '30', '-tile-columns', '2', '-quality', 'good', '-speed', '4', '-threads', '16', '-auto-alt-ref', '0', '-g', '300000', '-map', '0:v:0', '-map', '1:a:0', '-shortest', 'video.webm']
# for ease of read:
# /usr/bin/ffmpeg -hide_banner -y -loglevel info -f rawvideo -pix_fmt bgra -s 1573x900 -framerate 30 -i - -i audio.wav -c:v libvpx -b:v 0 -crf 30 -tile-columns 2 -quality good -speed 4 -threads 16 -auto-alt-ref 0 -g 300000 -map 0:v:0 -map 1:a:0 -shortest video.webm

proc = subprocess.Popen(args, stdin=subprocess.PIPE)


Here is an example of passing each image to the FFmpeg process's stdin:


import os
import time

import cv2
import numpy as np

# wait for the next frame to get ready
for frame_path in frame_path_list:
    while not os.path.exists(frame_path):
        time.sleep(0.25)
    frame = cv2.imread(frame_path, cv2.IMREAD_UNCHANGED)

    # put the frame into stdin so that it gets encoded
    proc.stdin.write(frame.astype(np.uint8).tobytes())


The current speed of this process is 0.135x, which is a huge bottleneck for us. Earlier, when we took input as -pattern_type glob -i images/*.png, we were getting around 1x-1.2x on a single core. So our conclusion is that we're bottlenecked by stdin, and we're looking for ways to pass input through multiple sources, or to somehow help ffmpeg parallelize this effort. A few options we're thinking of:

- Somehow feed it to different pipes and make ffmpeg read from them.
- Append a new image to ffmpeg without re-encoding the whole video, but we didn't find a way to do this by giving it input images directly.


But we haven't been able to get either of these working, and we're open to any other solutions as well. We'd really appreciate help on this. Thanks!
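
A hedged sketch in the spirit of the first idea: decouple image production from encoding with a bounded queue and a background writer thread, so that waiting on the filesystem and decoding the next image no longer stall the pipe write. Here proc and frame_path_list are assumed to be the Popen object and path list from the snippets above; the queue size and sleep interval are arbitrary, and (as in the original snippet) an image is assumed to be fully written once its file exists.

import os
import threading
import time
import queue

import cv2
import numpy as np

frames = queue.Queue(maxsize=64)  # bounded so decoded frames don't pile up in memory

def feeder():
    # Drain the queue into ffmpeg's stdin; None is the end-of-stream sentinel.
    while True:
        frame = frames.get()
        if frame is None:
            break
        proc.stdin.write(frame.astype(np.uint8).tobytes())
    proc.stdin.close()

t = threading.Thread(target=feeder, daemon=True)
t.start()

# Producer: wait for each image, decode it, and hand it off without
# blocking on the pipe write.
for frame_path in frame_path_list:
    while not os.path.exists(frame_path):
        time.sleep(0.05)
    frames.put(cv2.imread(frame_path, cv2.IMREAD_UNCHANGED))

frames.put(None)  # signal end of stream
t.join()
proc.wait()

This only overlaps image decoding with the pipe write; if raw-frame bandwidth on a single stdin is still the limit, piping the PNG bytes themselves (image2pipe with a png codec) might shrink what ffmpeg has to read, at the cost of moving the decode work into ffmpeg.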