
Other articles (65)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...) -
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
Improvements to the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the following two images for a comparison.
To do so, simply activate the Chosen plugin (General site configuration > Plugin management), then configure it (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to improve, for example select[multiple] for multiple-selection lists (...)
On other sites (8499)
-
ffmpeg stream chrome kiosk mode ubuntu 16.04 server
21 December 2016, by Raul
I have a weird out-of-sync issue while using ffmpeg to live-stream a Chrome browser to YouTube from an Ubuntu 16.04 server.
Issue: the video streamed to YouTube has audio/video out of sync, sometimes by as much as 3 s
Current flow :
1) start pulseaudio - we use something like this to start it:
pulseaudio --start -vvv --disallow-exit --log-target=syslog --high-priority --exit-idle-time=-1 --daemonize
2) start Xvfb
Xvfb :0 -ac -screen 0 1920x1080x24
3) start Chrome in kiosk mode
google-chrome --kiosk --disable-gpu --incognito --no-first-run --disable-java --disable-plugins --disable-translate --disk-cache-size=$((1024 * 1024)) --disk-cache-dir=/tmp/chrome/ --user-data-dir=/tmp/chrome/ --force-device-scale-factor=1 --window-size=1920,1080 --window-position=0,0 LOCATION_URL
4) start ffmpeg
ffmpeg -y \
-thread_queue_size 8192 -rtbufsize 250M -f x11grab -video_size 1920x1080 -framerate 24 -i :0 \
-thread_queue_size 8192 -channel_layout stereo -f alsa -i pulse \
-c:v libx264 -pix_fmt yuv420p -c:v libx264 -g 48 -crf 24 -filter:v fps=24 -preset ultrafast -tune zerolatency \
-c:a aac -strict -2 -channel_layout stereo -ab 96k -ac 2 -flags +global_header \
-f flv YOUTUBE_LIVE_STREAMING_RTMP

Note: this is running on an Amazon EC2 instance, meaning there is no soundcard, so ALSA and pulseaudio create a dummy audio card. However, the latency does not come from there. Logs:
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Adjust latency mode enabled, configuring sink latency to half of overall latency.
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Requested latency=23.22 ms, Received latency=23.22 ms
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Final latency 69.66 ms = 23.22 ms + 2*11.61 ms + 23.22 ms

At this point, here's what we observed:
-
if we start ffmpeg immediately after issuing the command to start Chrome, we see DTS errors from ffmpeg. Audio is out of sync with the video, running 3-5 seconds AHEAD. We also noticed that the offset stays the same for the full duration of the stream
-
if we start ffmpeg after around 10 seconds, audio and video are almost in sync. We then manually added -itsoffset -0.125 to the ffmpeg command and everything is perfect (see the sketch below for one way to automate the wait).
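One way to avoid a fixed wait would be to poll PulseAudio until Chrome has actually opened an audio stream before launching ffmpeg. This is only a rough sketch under that assumption; CHROME_CMD and FFMPEG_CMD are placeholders for the exact commands from steps 3) and 4) above and are not part of the original post.

# Rough sketch (assumption, not from the original post): instead of sleeping a
# fixed time, poll PulseAudio until Chrome has opened a sink input, then start ffmpeg.
# CHROME_CMD and FFMPEG_CMD stand for the exact commands shown in steps 3) and 4).
$CHROME_CMD &

# Wait up to 30 s for Chrome to register an audio stream with PulseAudio.
for i in $(seq 1 30); do
    pactl list sink-inputs | grep -qi chrom && break
    sleep 1
done

$FFMPEG_CMD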
Questions :
- Why would ffmpeg have so much lag if it's started right after Chrome?
- Is starting ffmpeg after 10 s (or X seconds) the expected behavior? That is, is this because the system needs to wait for the audio/video signals to be "ready" or something?
- Is there a way to reliably know when Chrome is fully ready and then start ffmpeg? We found it sometimes takes 5 s, sometimes 10, depending on the URL we load.
- Besides the DTS errors that ffmpeg throws, is there any other way to know if audio/video is out of sync? Sometimes we have a delay of between 0.5 and 1 s, but ffmpeg does not report anything, and a restart is required to "re-balance" the audio/video inputs and get them back in sync.
- Can pulseaudio be the problem in this scenario?
Thank you
UPDATE Dec 20
We were able to do some tricks to force Chrome to start the audio on page load, which forces it to connect to pulseaudio. Doing that, plus adding a 3 s delay before starting ffmpeg, there is no more delay when ffmpeg starts.
However, our app is a WebRTC app, and we have an even stranger thing happening: if we start the page with no webcam/audio, then once the webcam/audio is enabled, ffmpeg (while showing no errors) has a delay of about 2 s. As we keep talking, within at most 30 s or so, that delay is GONE.

So the new questions are:
- Besides the DTS errors that ffmpeg throws, is there any other way to know if audio/video is out of sync? Sometimes we have a delay of between 0.5 and 1 s, but ffmpeg does not report anything (one possible offline check is sketched below).
- What could cause the initial audio/video out-of-sync issue and then the catching up?
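One rough offline check, sketched here as an assumption rather than something the post describes, is to record a short test file (test.flv is a hypothetical local recording) and compare the first packet timestamps of the two streams with ffprobe:

# Sketch (assumption): estimate the initial A/V offset of a short test recording
# by comparing the first video and audio packet timestamps.
ffprobe -v error -select_streams v:0 -show_entries packet=pts_time -of csv=p=0 test.flv | head -n 1
ffprobe -v error -select_streams a:0 -show_entries packet=pts_time -of csv=p=0 test.flv | head -n 1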
-
-
How to record video from an old webcam using ffmpeg on Linux?
5 February 2015, by Moomin
After a couple of hours I finally accepted my absolute lack of knowledge and decided to post this question here.
I'm trying to record video from an old external USB webcam (Media-Tech MT400 -> 0c45:6029) with ffmpeg, but without much success. The first thing I tried was to run something that works for the built-in webcam:
ffmpeg -f v4l2 -i /dev/video1 test.avi
But that returned the following error:
Cannot find a proper format for codec 'none' (id 0), pixel format 'none' (id -1)
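As a side note, ffmpeg itself can list the raw formats and frame sizes it can negotiate from a v4l2 device; this is a generic diagnostic sketch, not something shown in the original post:

# Sketch: ask ffmpeg which raw pixel formats and sizes the v4l2 device exposes.
ffmpeg -f v4l2 -list_formats all -i /dev/video1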
Here is what I get from
v4l2-ctl -d /dev/video1/ --all --list-formats-ext
Driver Info (not using libv4l2):
Driver name : sonixb
Card type : USB camera
Bus info : usb-0000:04:00.0-1
Driver version: 3.10.17
Capabilities : 0x85000001
Video Capture
Read/Write
Streaming
Device Capabilities
Device Caps : 0x05000001
Video Capture
Read/Write
Streaming
Priority: 2
Video input : 0 (sonixb: ok)
Format Video Capture:
Width/Height : 352/288
Pixel Format : 'S910'
Field : None
Bytes per Line: 352
Size Image : 126720
Colorspace : SRGB
Streaming Parameters Video Capture:
Frames per second: invalid (0/0)
Read buffers : 2
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'S910'
Name : S910
Size: Discrete 160x120
Size: Discrete 176x144
Size: Discrete 320x240
Size: Discrete 352x288
Index : 1
Type : Video Capture
Pixel Format: 'BA81'
Name : BA81
Size: Discrete 160x120
Size: Discrete 176x144

Unfortunately that was not of much help to me, but I tried capturing the webcam output with VLC and recording it, which.... WORKED! I tried ffmpeg -i vlc-record[...].avi on the VLC recording, which returned the following information:

Input #0, avi, from 'vlc-record-2015-02-05-05h22m52s-v4l2____dev_video1-.avi':
Duration: 00:00:01.58, start: 0.000000, bitrate: 26937 kb/s
Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 352x288, 27671 kb/s, 22.10 tbr, 22.10 tbn, 22.10 tbc

I thought that was enough information (rawvideo and yuv420p) and ran ffmpeg with the following arguments (I found yuv4 using ffmpeg -formats):

ffmpeg -f rawvideo -vcodec yuv4 -i /dev/video1 test.avi
The result was only slightly better than the one I saw after my first attempt:
[IMGUTILS @ 0x7fff497d3920] Picture size 0x0 is invalid
[rawvideo @ 0x9c4720] Could not find codec parameters for stream 0 (Video: yuv4, yuv420p, -4 kb/s): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
/dev/video1: could not find codec parameters

Again, after some searching and several tries, I added the -video_size cif option and.... it WORKED!!!!
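The post doesn't show the resulting command, but presumably it looked something like the following reconstruction (an assumption based on the description above):

# Reconstructed command (assumption): the previous rawvideo invocation with the
# frame size stated explicitly, since a raw stream carries no size information.
ffmpeg -f rawvideo -vcodec yuv4 -video_size cif -i /dev/video1 test.avi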
Well, not really...
Input #0, rawvideo, from '/dev/video1':
Duration: N/A, start: 0.000000, bitrate: 30412 kb/s
Stream #0:0: Video: yuv4, yuv420p, 352x288, 30412 kb/s, 25 tbr, 25 tbn, 25 tbc
Output #0, avi, to 'test.avi':
Metadata:
ISFT : Lavf55.19.104
Stream #0:0: Video: mpeg4 (FMP4 / 0x34504D46), yuv420p, 352x288, q=2-31, 200 kb/s, 25 tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (yuv4 -> mpeg4)
Press [q] to stop, [?] for help
frame= 13 fps=6.3 q=24.8 Lsize= 1064kB time=00:00:00.52 bitrate=16759.3kbits/s
video:1058kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.554114%

The LED on the webcam turned on, just as during video capture with VLC, but all that was recorded was TV-like noise.

So here I am after 7 hours - still without a working solution - asking for your help.
Thanks in advance.
-
Burn subtitles in a stop motion video, with hardware acceleration
9 February 2020, by Belinde
I'm trying to make a year-long stop-motion video with images taken from a webcam. I've created an input.txt file with this format inside:

ffconcat version 1.0
file 'amianthe201909031230.jpg'
duration 0.093034825870647
file 'amianthe201909031330.jpg'
duration 0.093034825870647

The command I've crafted (mostly taken from the example in the official ffmpeg documentation) is:
ffmpeg \
-y \
-hwaccel vaapi \
-hwaccel_device /dev/dri/renderD128 \
-hwaccel_output_format vaapi \
-f concat \
-i input.txt \
-vf 'hwmap=mode=read+write+direct,format=nv12,ass=subtitles.ass,hwmap' \
-c:v h264_vaapi \
~/amianthe.mp4

But it badly fails with this output:
ffmpeg version 4.1.4-1build2 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 9 (Ubuntu 9.2.1-4ubuntu1)
configuration: --prefix=/usr --extra-version=1build2 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
Input #0, concat, from 'input.txt':
Duration: 00:00:18.70, start: 0.000000, bitrate: 5 kb/s
Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 1920x1080 [SAR 1:1 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (mjpeg (native) -> h264 (h264_vaapi))
Press [q] to stop, [?] for help
[Parsed_ass_2 @ 0x5557f4ec4ac0] Shaper: FriBidi 0.19.7 (SIMPLE) HarfBuzz-ng 2.6.1 (COMPLEX)
[Parsed_ass_2 @ 0x5557f4ec4ac0] Using font provider fontconfig
[Parsed_ass_2 @ 0x5557f4ec4ac0] Added subtitle file: 'subtitles.ass' (4 styles, 2 events)
[Parsed_hwmap_3 @ 0x5557f5381ec0] Unsupported formats for hwmap: from nv12 (vaapi_vld) to vaapi_vld.
[Parsed_hwmap_3 @ 0x5557f5381ec0] Failed to configure output pad on Parsed_hwmap_3
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #0:0
Conversion failed!

I'm honestly lost: from my understanding, all the processing should be done directly in video card memory, so I don't understand why it's converting the surface format. Is there some other parameter I must put in format? What am I missing?
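For comparison, ffmpeg's VAAPI documentation also describes an encode-only setup in which decoding and the ass filter run in software and frames are uploaded to the GPU only for encoding. A sketch of that variant, as an assumption rather than the poster's actual command:

# Sketch (assumption, not from the post): software decode and subtitle burn-in,
# with frames uploaded to VAAPI only for h264_vaapi encoding.
ffmpeg -y \
    -vaapi_device /dev/dri/renderD128 \
    -f concat \
    -i input.txt \
    -vf 'ass=subtitles.ass,format=nv12,hwupload' \
    -c:v h264_vaapi \
    ~/amianthe.mp4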