
Other articles (111)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
MediaSPIP version 0.1 Beta
16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm" mode, you will also need to make other modifications (...) -
Improving the base version
13 September 2013. Nicer multiple selection
The Chosen plugin improves the usability of multiple-select fields. See the following two images to compare.
To do this, simply activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)
On other sites (10172)
-
FFMPEG - Converting TS frame to PNG creating invalid images
20 July 2015, by spelley. I am attempting to snag a single frame from a TS live stream and generate a PNG image. The command I've been using is:
ffmpeg -y -i "filenamehere" -f image2 -vframes 1 -s 250x250 ./snapshot-250x250.png
When doing this on my local Windows machine using the FFMPEG executable, it generates the image as expected. When using Ubuntu, after doing an apt-get install ffmpeg and using the same command, I am generating grey squares with sporadic noise even while targeting the same file with the exact same command.
Here is the (anonymized) output:
[h264 @ 0xa256a0] no frame!
[h264 @ 0xa256a0] non-existing SPS 0 referenced in buffering period
[h264 @ 0xa256a0] non-existing PPS referenced
[h264 @ 0xa256a0] non-existing SPS 0 referenced in buffering period
[h264 @ 0xa256a0] non-existing PPS 0 referenced
[h264 @ 0xa256a0] decode_slice_header error
[h264 @ 0xa256a0] non-existing PPS 0 referenced
[h264 @ 0xa256a0] decode_slice_header error
[h264 @ 0xa256a0] non-existing PPS 0 referenced
[h264 @ 0xa256a0] decode_slice_header error
[h264 @ 0xa256a0] non-existing PPS 0 referenced
[h264 @ 0xa256a0] decode_slice_header error
[h264 @ 0xa256a0] no frame!
[mpegts @ 0xa1d1e0] max_analyze_duration reached
[NULL @ 0xa28080] start time is not set in estimate_timings_from_pts
Seems stream 0 codec frame rate differs from container frame rate: 59.94 (60000/1001) -> 1000.00 (1000/1)
Input #0, mpegts, from '<file here>':
Duration: 00:00:09.07, start: 35968.174711, bitrate: 697 kb/s
Program 1
Stream #0.0[0x100]: Video: h264 (High), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 44.52 fps, 1k tbr, 90k tbn, 59.94 tbc
Stream #0.1[0x101]: Audio: aac, 48000 Hz, stereo, s16, 96 kb/s
Stream #0.2[0x102]: Data: [21][0][0][0] / 0x0015
Incompatible pixel format 'yuv420p' for codec 'png', auto-selecting format 'rgb24'
[buffer @ 0xa43480] w:1280 h:720 pixfmt:yuv420p
[avsink @ 0xa28780] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
[scale @ 0xa26520] w:1280 h:720 fmt:yuv420p -> w:1280 h:720 fmt:rgb24 flags:0x4
Output #0, image2, to './snapshot-250x250.png':
Metadata:
encoder : Lavf53.21.1
Stream #0.0: Video: png, rgb24, 1280x720 [PAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 90k tbn, 1k tbc
Stream mapping:
Stream #0.0 -> #0.0
Press ctrl-c to stop encoding
frame= 1 fps= 0 q=0.0 Lsize= -0kB time=0.01 bitrate= -17.6kbits/s dup=31 drop=0
video:8kB audio:0kB global headers:0kB muxing overhead -100.261749%
The initial "no frame!" errors are, I believe, just it seeking to the first valid keyframe (and thus no cause for concern); however, the only difference from my Windows output is the inclusion of this line:
Incompatible pixel format 'yuv420p' for codec 'png', auto-selecting format 'rgb24'
Windows reports an encoder of Lavc56.46.100 instead of Lavf53.21.1, and the stream mapping line on Windows has more information:
Stream #0:0 -> #0:0 (h264 (native) -> png (native))
I've attempted to manually seek to frames and to change the output format from PNG to GIF and JPEG, but just can't seem to find the issue; the same command has no problems on my local Windows machine. Any advice on where to look would be greatly appreciated!
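For what it's worth, the repeated "non-existing SPS/PPS referenced" messages suggest the decoder is starting on packets that arrive before the H.264 parameter sets, and the very old Lavf53.21.1 build from the Ubuntu package may simply emit that first grey frame instead of waiting for a decodable one. A hedged sketch of one thing to try (not a verified fix) is to let the demuxer probe longer before decoding starts, and/or to upgrade the Ubuntu machine to a current ffmpeg build like the one used on Windows:

# -analyzeduration is in microseconds and -probesize in bytes; both are standard input options
ffmpeg -y -analyzeduration 10000000 -probesize 10000000 -i "filenamehere" -f image2 -vframes 1 -s 250x250 ./snapshot-250x250.png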
-
avcodec/mpeg12enc: Disallow using MPEG-2 intra VLC table for mpeg1video
24 November 2020, by Andreas Rheinhardt
avcodec/mpeg12enc: Disallow using MPEG-2 intra VLC table for mpeg1video
Using MPEG-2 intra VLC tables is spec-incompliant for MPEG-1, and given that an MPEG-1 bitstream can't signal whether MPEG-2 intra VLC tables have been used, the output is broken. Therefore this option is removed immediately, without any deprecation period.
Reviewed-by: James Almer <jamrial@gmail.com>
Reviewed-by: Marton Balint <cus@passwd.hu>
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
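In practice this means the intra VLC switch can now only be combined with the MPEG-2 encoder. A hedged illustration, assuming the mpeg12enc private option is spelled intra_vlc (the exact command lines below are illustrative, not taken from the commit):

# still accepted: MPEG-2 video may legitimately use the MPEG-2 intra VLC table
ffmpeg -i input.mkv -c:v mpeg2video -intra_vlc 1 output.m2v
# now rejected for MPEG-1: the bitstream cannot signal the table, so the output would be broken
ffmpeg -i input.mkv -c:v mpeg1video -intra_vlc 1 output.m1v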
-
Using ffmpeg to encode web dash and audio drift
1 October 2016, by John Joske. I'm trying to use ffmpeg to create WebM DASH encoded files from a USB web cam, adapted from this example: http://wiki.webmproject.org/adaptive-streaming/instructions-to-do-webm-live-streaming-via-dash
I have managed to get something that nearly works with the following command:
ffmpeg -f video4linux2 -framerate 30 -s 640x360 -i /dev/video0 \
  -thread_queue_size 512 -f alsa -ar 44100 -ac 2 -i hw:2 \
  -map 0:0 -pix_fmt yuv420p -c:v libvpx-vp9 -s 640x360 -keyint_min 60 -g 60 -speed 6 \
  -tile-columns 4 -frame-parallel 1 -threads 8 -static-thresh 0 -max-intra-rate 300 \
  -deadline realtime -lag-in-frames 0 -error-resilient 1 \
  -f webm_chunk -header video_360.hdr -chunk_start_index 1 video_360_%d.chk \
  -map 1:0 -c:a libvorbis -b:a 128k -ar 44100 \
  -f webm_chunk -copytb 1 -audio_chunk_duration 2000 -header video_171.hdr -chunk_start_index 1 video_171_%d.chk
However, over a period of time the audio chunks slowly fall behind the video chunks; can anyone suggest a way to keep them in sync?
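No answer is reproduced in this listing, but one approach sometimes suggested for slow capture-time drift (a sketch under assumptions, not a verified fix) is to timestamp both capture inputs from the wall clock and let the audio be gently resampled toward those timestamps with the aresample filter's async option, leaving the rest of the command as above:

ffmpeg -f video4linux2 -use_wallclock_as_timestamps 1 -framerate 30 -s 640x360 -i /dev/video0 \
  -thread_queue_size 512 -f alsa -use_wallclock_as_timestamps 1 -ar 44100 -ac 2 -i hw:2 \
  [video options and the webm_chunk video output unchanged] \
  -map 1:0 -af aresample=async=1000 -c:a libvorbis -b:a 128k -ar 44100 \
  -f webm_chunk -copytb 1 -audio_chunk_duration 2000 -header video_171.hdr -chunk_start_index 1 video_171_%d.chk

The async value is the maximum correction in samples per second; whether this actually removes the drift would need testing against the real capture devices.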