
Media (91)
-
Richard Stallman et le logiciel libre
19 October 2011
Updated: May 2013
Language: French
Type: Text
-
Stereo master soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (102)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
Contribute to a better visual interface
13 avril 2011MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (9220)
-
How to use FFMPEG on Python/Windows 10 with Pipe for screen recording?
20 September 2020, by Trmotta. I'd like to record the screen with ffmpeg, as it seems to be the only tool out there that can record a region of the screen along with the mouse cursor.


The following code was adapted from "i want to display mouse pointer in my recording", but it doesn't work on a Windows 10 (x64) setup (using Python 3.6).


#!/usr/bin/env python3

# ffmpeg -y -pix_fmt bgr0 -f avfoundation -r 20 -t 10 -i 1 -vf scale=w=3840:h=2160 -f rawvideo /dev/null

import sys
import cv2
import time
import subprocess
import numpy as np

w, h = 100, 100

def ffmpegGrab():
    """Generator to read frames from ffmpeg subprocess"""

    # ffmpeg -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 640x480 -show_region 1 -i desktop output.mkv  # CODE THAT ACTUALLY WORKS WITH FFMPEG CLI

    cmd = 'D:/Downloads/ffmpeg-20200831-4a11a6f-win64-static/ffmpeg-20200831-4a11a6f-win64-static/bin/ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -show_region 1 -i desktop -f image2pipe, -pix_fmt bgr24 -vcodec rawvideo -an -sn'

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
    # out, err = proc.communicate()
    while True:
        frame = proc.stdout.read(w*h*3)
        yield np.frombuffer(frame, dtype=np.uint8).reshape((h, w, 3))

# Get frame generator
gen = ffmpegGrab()

# Get start time
start = time.time()

# Read video frames from ffmpeg in loop
nFrames = 0
while True:
    # Read next frame from ffmpeg
    frame = next(gen)
    nFrames += 1

    cv2.imshow('screenshot', frame)

    if cv2.waitKey(1) == ord("q"):
        break

    fps = nFrames / (time.time() - start)
    print(f'FPS: {fps}')

cv2.destroyAllWindows()
out.release()



Using 'cmd' as stated above, I get the following error:


b"ffmpeg version git-2020-08-31-4a11a6f Copyright (c) 2000-2020 the FFmpeg developers\r\n built with gcc 10.2.1 (GCC) 20200805\r\n configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e --enable-libsvtav1 --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf\r\n libavutil 56. 58.100 / 56. 58.100\r\n libavcodec 58.101.101 / 58.101.101\r\n libavformat 58. 51.101 / 58. 51.101\r\n libavdevice 58. 11.101 / 58. 11.101\r\n libavfilter 7. 87.100 / 7. 87.100\r\n libswscale 5. 8.100 / 5. 8.100\r\n libswresample 3. 8.100 / 3. 8.100\r\n libpostproc 55. 8.100 / 55. 8.100\r\nTrailing option(s) found in the command: may be ignored.\r\n[gdigrab @ 0000017ab857f100] Capturing whole desktop as 100x100x32 at (10,20)\r\nInput #0, gdigrab, from 'desktop':\r\n Duration: N/A, start: 1599021857.538752, bitrate: 9612 kb/s\r\n Stream #0:0: Video: bmp, bgra, 100x100, 9612 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc\r\n**At least one output file must be specified**\r\n"



This is the content of proc.stdout (and also what proc.communicate() returns). The program crashes right after, when it tries to reshape this message into a 100x100 image.


I do not want an output file. I need to use Python's subprocess with a pipe, so that the screen frames are delivered directly to my Python code, with no file I/O at all.


If I try the following:


cmd = 'D:/Downloads/ffmpeg-20200831-4a11a6f-win64-static/ffmpeg-20200831-4a11a6f-win64-static/bin/ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -i desktop -pix_fmt bgr24 -vcodec rawvideo -an -sn -f image2pipe'


proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)



Then frame, inside the while True loop, is filled with b''.


I tried the following libraries with no success, as I could not find how to capture the mouse cursor, or how to capture the screen at all: https://github.com/abhiTronix/vidgear, https://github.com/kkroening/ffmpeg-python


What am I missing?
Thank you.
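
For reference, a minimal sketch (untested, not the asker's code) of how this kind of pipe is usually set up. The error log points at the core problem: the command never specifies an output URL, which is exactly what "At least one output file must be specified" means, so the trailing options after -i desktop are left dangling; the comma after image2pipe also looks like a typo, and merging stderr into stdout mixes ffmpeg's log into the frame bytes. The sketch below swaps in the rawvideo muxer and keeps the capture geometry from the question:

# minimal sketch, assuming the same 100x100 capture region as the question
import subprocess
import numpy as np

w, h = 100, 100
cmd = [
    'ffmpeg',  # or the full path to ffmpeg.exe from the question
    '-f', 'gdigrab', '-framerate', '30',
    '-offset_x', '10', '-offset_y', '20',
    '-video_size', f'{w}x{h}', '-show_region', '1',
    '-i', 'desktop',
    '-pix_fmt', 'bgr24', '-vcodec', 'rawvideo',
    '-an', '-sn', '-f', 'rawvideo',
    'pipe:1',  # an output URL is mandatory; pipe:1 is stdout
]
# keep stderr out of stdout so the log cannot pollute the frame data
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

frame_size = w * h * 3  # bgr24 = 3 bytes per pixel
while True:
    raw = proc.stdout.read(frame_size)
    if len(raw) < frame_size:
        break  # ffmpeg exited or the pipe was closed
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((h, w, 3))
    # ...pass `frame` to cv2.imshow() or further processing...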


-
Is it possible to pipe an ffmpeg output with multiple files (HLS or DASH)?
29 August 2020, by New Dev. I'm using FFmpeg to generate fragmented MP4s in DASH and HLS formats.


ffmpeg -i input.mov -f dash -seg_duration 6 -hls_playlist true output.mpd



The above (simplified) command outputs multiple files in addition to output.mpd (e.g. the actual segments, master.m3u8, etc.).

Is there a way to get each produced file into its own separate output stream?



Broader context:


I'm trying to build a transcoder in Node.js running on Google Cloud, the idea being that it writes directly to Google Cloud Storage through writable streams. I can create one stream per file, but since the number of files is dynamic, I'm not sure how to obtain a stream for each file.
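
One direction worth sketching (untested, and assuming the dash muxer's method option and the http protocol's chunked_post option behave as documented): the dash and hls muxers can deliver their outputs over HTTP instead of writing to disk, sending one request per produced file, e.g.

ffmpeg -i input.mov -f dash -seg_duration 6 -hls_playlist true -method PUT -chunked_post 0 http://127.0.0.1:8080/output.mpd

A small ingest server then receives one request per file and can forward each body to its own writable stream. The sketch below uses the Python stdlib rather than Node.js, purely to illustrate the shape; the port and paths are made up:

# minimal sketch: one HTTP PUT per produced HLS/DASH file
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class IngestHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        # assumes ffmpeg runs with -chunked_post 0 so Content-Length is set
        length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(length)
        # self.path is the name ffmpeg used, e.g. /output.mpd or a segment
        print(f'received {self.path}: {len(body)} bytes')
        # ...open a per-file writable stream (e.g. to Cloud Storage) here...
        self.send_response(201)
        self.end_headers()

    do_POST = do_PUT  # accept either verb

ThreadingHTTPServer(('127.0.0.1', 8080), IngestHandler).serve_forever()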


-
FFMPEG bottleneck in relaying data from a dshow camera to stdout PIPE without any processing or conversion
19 August 2020, by koonyook. I have a USB camera (FSCAM_CU135) that can encode video to MJPEG internally, and it supports DirectShow. My goal is to retrieve the binary stream of the encoded video as-is (without decoding or preview) and send it to my program for further processing.


I chose to use FFMPEG to read the MJPEG stream and pipe it to stdout, so that I can read it using Python's subprocess.Popen.


ffmpeg -y -f dshow -vsync 2 -rtbufsize 1000M -video_size 1920x1440 -vcodec mjpeg -i video="FSCAM_CU135" -vcodec copy -f mjpeg pipe:1
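
For context, a minimal sketch (untested) of how the piped stream can be consumed on the Python side: the -f mjpeg output is just JPEG images back to back, so frames can be split on the JPEG start/end markers without any decoding.

# minimal sketch: split ffmpeg's piped MJPEG stream into individual JPEG frames
import subprocess

cmd = [
    'ffmpeg', '-y', '-f', 'dshow', '-vsync', '2', '-rtbufsize', '1000M',
    '-video_size', '1920x1440', '-vcodec', 'mjpeg',
    '-i', 'video=FSCAM_CU135',
    '-vcodec', 'copy', '-f', 'mjpeg', 'pipe:1',
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)

buf = b''
while True:
    chunk = proc.stdout.read(65536)
    if not chunk:
        break  # ffmpeg exited
    buf += chunk
    while True:
        start = buf.find(b'\xff\xd8')           # JPEG start-of-image
        end = buf.find(b'\xff\xd9', start + 2)  # JPEG end-of-image
        if start < 0 or end < 0:
            break  # no complete frame buffered yet
        jpeg = buf[start:end + 2]  # one complete encoded frame
        buf = buf[end + 2:]
        # ...process `jpeg` without decoding, as the question intends...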



At this resolution, the camera can capture and transmit at 60 fps.
In this case, I expect FFMPEG to pass the data along as fast as possible, with no computation.
From FFMPEG's console output, I can tell how fast it moves the data from the rtbuffer to the output pipe.


With just one camera, FFMPEG works without problems and moves the data at 60 fps.
However, when I run 2 cameras simultaneously, the cameras still generate data at 60 fps, but FFMPEG can only move the data at around 55 fps. This means I am unable to consume the video in real time, and the buffer usage grows over time.


I guess that FFMPEG doesn't simply move the data, but does some processing, such as searching for the beginning, the end, and the timestamp of each video frame, so that it can count frames and report progress.
Is there a way to force FFMPEG not to do those things, and to focus only on passing the data through, to make it faster?


If I use the DirectShow API directly, without FFMPEG, can it be faster?