
Media (2)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
-
Carte de Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (68)
-
Websites made with MediaSPIP
2 May 2011, by
This page lists some websites based on MediaSPIP.
-
Possibility of deployment as a farm
12 April 2011, by
MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This makes it possible, for example: to share set-up costs between several projects/individuals; to deploy a large number of unique sites quickly; and to avoid having to put all creations into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)
-
Adding user-specific information and other author-related behaviour changes
12 April 2011, by
The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to change certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras".
On other sites (9490)
-
How to force a specific AVInputFormat in code (FFmpeg)?
18 February 2020, by kugipark
Please understand that some words or sentences may not be correct English.
I'm a novice programmer developing a video player app for Android that can play 5ch .avi video over HTTP. It is based on ffplay.c from the FFmpeg library.
Currently I have the two problems below. They occur only on Android devices.
1) Format detection takes too long.
- Opening and finding stream info takes more than a minute when I try to open the 5ch video source, whereas a normal .mp4 (H.264) source opens almost immediately.
2) Demuxing is too slow for large videos.
- As the video resolution increases, the displayed frame rate gets slower and slower, even though there are enough memory, network and CPU resources.
To resolve this, I tried to force the format and decoders, but I could not find information on how to specify the input format at code level.
The official documentation only says of that parameter: "If non-NULL, this parameter forces a specific input format. Otherwise the format is autodetected." So I have no clue how to set the number of streams, the decoders, the decoders' private options, and so on (nor which parameters I should manage). If someone knows how to set the options (like AVDictionary) and pass them to the av functions, please give me an example (one possible approach is sketched after the stream dump below). The source contains 2 video streams, 1 audio stream, and 2 extra streams (for custom data such as GPS). The stream information of the video is below; I printed it manually, and this is the auto-detected information.

---------- File format information ----------
flags=2097152
video_codec_id=0 (NONE)
audio_codec_id=0 (NONE)
ctx_flags=0
data_codec_id=0 (NONE)
format_whitelist=(null)
iformat=0xa19d0d2c
---------- Stream information ----------
stream 1 of 5:
----- common ----------
bit_rate: 11383235
bits_per_coded_sample: 24
bits_per_raw_sample: 0
codec_id: 0x1C (H264)
codec_tag: 875967048
extradata_size: 0
level: -99
profile: -99
sample_rate: 0
----- Video Stream ----------
chroma_location: 0
color_primaries: 2
color_space: 2
color_trc: 2
field_order: 0
format: -1 (NONE)
height: 1080
width: 1920
sample_aspect_ratio.den: 1
sample_aspect_ratio.num: 0
video_delay: 0
----------------------------------------
stream 2 of 5:
----- common ----------
bit_rate: 6185438
bits_per_coded_sample: 24
bits_per_raw_sample: 0
codec_id: 0x1C (H264)
codec_tag: 875967048
extradata_size: 0
level: -99
profile: -99
sample_rate: 0
----- Video Stream ----------
chroma_location: 0
color_primaries: 2
color_space: 2
color_trc: 2
field_order: 0
format: -1 (NONE)
height: 720
width: 1280
sample_aspect_ratio.den: 1
sample_aspect_ratio.num: 0
video_delay: 0
----------------------------------------
stream 3 of 5:
----- common ----------
bit_rate: 352800
bits_per_coded_sample: 16
bits_per_raw_sample: 0
codec_id: 0x10000 (PCM_S16LE)
codec_tag: 1
extradata_size: 0
level: -99
profile: -99
sample_rate: 22050
----- Audio Stream ----------
block_align: 2
channels: 1
channel_layout: 0
color_range: 0
frame_size: 0
initial_padding: 0
seek_preroll: 0
trailing_padding: 0
----------------------------------------
stream 4 of 5:
----- common ----------
bit_rate: 15625
bits_per_coded_sample: 0
bits_per_raw_sample: 0
codec_id: 0x0 (NONE)
codec_tag: 0
extradata_size: 0
level: -99
profile: -99
sample_rate: 0
----- Subtitle Stream ----------
----------------------------------------
stream 5 of 5:
----- common ----------
bit_rate: 33862
bits_per_coded_sample: 0
bits_per_raw_sample: 0
codec_id: 0x0 (NONE)
codec_tag: 0
extradata_size: 0
level: -99
profile: -99
sample_rate: 0
----- Subtitle Stream ----------
----------------------------------------
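Not an answer from the thread itself, but here is a minimal sketch of how an input format and demuxer options are typically forced with the libavformat C API (the library ffplay.c builds on). The "avi" demuxer name and the probesize/analyzeduration values are assumptions for illustration only, not the asker's actual settings, and the const qualifiers follow recent FFmpeg headers (drop them for older 4.x headers):

#include <libavformat/avformat.h>

/* Open `url` while forcing the AVI demuxer instead of letting
 * libavformat probe the input. Returns 0 on success or a negative
 * AVERROR code on failure. */
static int open_forced_avi(const char *url, AVFormatContext **out_ctx)
{
    AVFormatContext *fmt_ctx = NULL;
    AVDictionary *opts = NULL;
    int ret;

    /* Look up the demuxer by name; "avi" is assumed here. */
    const AVInputFormat *iformat = av_find_input_format("avi");
    if (!iformat)
        return AVERROR_DEMUXER_NOT_FOUND;

    /* Format-context and demuxer options are passed as an AVDictionary.
     * These two limit how much data is probed before
     * avformat_find_stream_info() gives up; the values are examples. */
    av_dict_set(&opts, "probesize", "1000000", 0);
    av_dict_set(&opts, "analyzeduration", "1000000", 0);

    ret = avformat_open_input(&fmt_ctx, url, iformat, &opts);
    av_dict_free(&opts); /* frees any options the demuxer did not consume */
    if (ret < 0)
        return ret;

    ret = avformat_find_stream_info(fmt_ctx, NULL);
    if (ret < 0) {
        avformat_close_input(&fmt_ctx);
        return ret;
    }

    *out_ctx = fmt_ctx;
    return 0;
}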
-
examples: Add a VA-API encode example.
6 November 2017, by Jun Zhao
Supports only raw NV12 input.
Example use:
./vaapi_encode 1920 1080 test.yuv test.h264
Signed-off-by: Jun Zhao <jun.zhao@intel.com>
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
Signed-off-by: Mark Thompson <sw@jkqxz.net>
-
Getting video properties with Python without calling external software
24 July 2019, by ullix
[Update:] Yes, it is possible, now some 20 months later. See UPDATE 3 below! [/update]
Is that really impossible? All I could find were variants of calling FFmpeg (or other software). My current solution is shown below, but what I would really like for portability is a Python-only solution that doesn't require users to install additional software.
After all, I can easily play videos using PyQt's Phonon, yet I can't simply get things like the dimensions or duration of the video?
My solution uses ffmpy (http://ffmpy.readthedocs.io/en/latest/ffmpy.html), which is a wrapper for FFmpeg and FFprobe (http://trac.ffmpeg.org/wiki/FFprobeTips). It is smoother than other offerings, yet it still requires an additional FFmpeg installation.
import ffmpy, subprocess, json
ffprobe = ffmpy.FFprobe(global_options="-loglevel quiet -sexagesimal -of json -show_entries stream=width,height,duration -show_entries format=duration -select_streams v:0", inputs={"myvideo.mp4": None})
print("ffprobe.cmd:", ffprobe.cmd) # printout the resulting ffprobe shell command
stdout, stderr = ffprobe.run(stderr=subprocess.PIPE, stdout=subprocess.PIPE)
# std* is byte sequence, but json in Python 3.5.2 requires str
ff0string = str(stdout,'utf-8')
ffinfo = json.loads(ff0string)
print(json.dumps(ffinfo, indent=4)) # pretty print
print("Video Dimensions: {}x{}".format(ffinfo["streams"][0]["width"], ffinfo["streams"][0]["height"]))
print("Streams Duration:", ffinfo["streams"][0]["duration"])
print("Format Duration: ", ffinfo["format"]["duration"])Results in output :
ffprobe.cmd: ffprobe -loglevel quiet -sexagesimal -of json -show_entries stream=width,height,duration -show_entries format=duration -select_streams v:0 -i myvideo.mp4
{
"streams": [
{
"duration": "0:00:32.033333",
"width": 1920,
"height": 1080
}
],
"programs": [],
"format": {
"duration": "0:00:32.064000"
}
}
Video Dimensions: 1920x1080
Streams Duration: 0:00:32.033333
Format Duration:  0:00:32.064000

UPDATE after several days of experimentation: The hachoir solution proposed by Nick below does work, but it will give you a lot of headaches, as the hachoir responses are too unpredictable. Not my choice.
With OpenCV, coding couldn't be any easier:
import cv2
vid = cv2.VideoCapture( picfilename)
height = vid.get(cv2.CAP_PROP_FRAME_HEIGHT) # always 0 in Linux python3
width = vid.get(cv2.CAP_PROP_FRAME_WIDTH) # always 0 in Linux python3
print ("opencv: height:{} width:{}".format( height, width))The problem is that it works well on Python2 but not on Py3. Quote : "IMPORTANT NOTE : MacOS and Linux packages do not support video related functionality (not compiled with FFmpeg)" (https://pypi.python.org/pypi/opencv-python).
On top of this, it seems that OpenCV needs the FFmpeg binary packages to be present at runtime (https://docs.opencv.org/3.3.1/d0/da7/videoio_overview.html).
Well, if I need an installation of FFmpeg anyway, I can stick to my original ffmpy example shown above :-/
Thanks for the help.
UPDATE 2: master_q (see below) proposed MediaInfo. While this failed to work on my Linux system (see my comments), the alternative of using pymediainfo, a Python wrapper for MediaInfo, did work. It is simple to use, but it takes 4 times longer than my initial ffprobe approach to obtain duration, width and height, and it still needs external software, i.e. MediaInfo:
from pymediainfo import MediaInfo
media_info = MediaInfo.parse("myvideofile")
for track in media_info.tracks:
    if track.track_type == 'Video':
        print("duration (millisec):", track.duration)
        print("width, height:", track.width, track.height)

UPDATE 3: OpenCV is finally available for Python 3, and is claimed to run on Linux, Windows, and Mac! It makes things really easy, and I verified that external software, in particular FFmpeg, is NOT needed!
First install OpenCV via pip:
pip install opencv-python
Run it in Python:
import cv2
cv2video = cv2.VideoCapture( videofilename)
height = cv2video.get(cv2.CAP_PROP_FRAME_HEIGHT)
width = cv2video.get(cv2.CAP_PROP_FRAME_WIDTH)
print ("Video Dimension: height:{} width:{}".format( height, width))
framecount = cv2video.get(cv2.CAP_PROP_FRAME_COUNT )
frames_per_sec = cv2video.get(cv2.CAP_PROP_FPS)
print("Video duration (sec):", framecount / frames_per_sec)
# equally easy to get this info from images
cv2image = cv2.imread(imagefilename, flags=cv2.IMREAD_COLOR )
height, width, channel = cv2image.shape
print ("Image Dimension: height:{} width:{}".format( height, width))I also needed the first frame of a video as an image, and used ffmpeg for this to save the image in the file system. This also is easier with OpenCV :
hasFrames, cv2image = cv2video.read() # reads 1st frame
cv2.imwrite("myfilename.png", cv2image) # extension defines image type

But even better, since I need the image only in memory for use in the PyQt5 toolkit, I can read the cv2 image directly into a Qt image:
bytesPerLine = 3 * width
# my_qt_image = QImage(cv2image, width, height, bytesPerLine, QImage.Format_RGB888) # may give false colors!
my_qt_image = QImage(cv2image.data, width, height, bytesPerLine, QImage.Format_RGB888).rgbSwapped() # correct colors on my systems

As OpenCV is a huge package, I was concerned about timing. It turned out that OpenCV was never behind the alternatives. It takes some 100 ms to read a slide; everything else combined never takes more than 10 ms.
I tested this successfully on Ubuntu Mate 16.04, 18.04, and 19.04, and on two different installations of Windows 10 Pro. (I did not have a Mac available.) I am really delighted with OpenCV!
You can see it in action in my SlideSorter program, which lets you sort images and videos, preserve the sort order, and present them as a slideshow. Available here: https://sourceforge.net/projects/slidesorter/