
Media (1)
- Richard Stallman and free software
19 October 2011
Updated: May 2013
Language: French
Type: Text
Other articles (111)
- XMP PHP
13 May 2011 — As Wikipedia puts it, XMP means:
Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001 as part of version 5.0 of Adobe Acrobat.
Being based on XML, it handles a set of dynamic tags for use within the Semantic Web.
XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)
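To make that description concrete, here is a minimal sketch of what an XMP packet looks like and how it can be read with Python's standard library; the property values (title, author, dates) are invented for the example, and only the namespaces and RDF structure come from the XMP specification:

# Minimal, illustrative XMP packet (property values are invented).
import xml.etree.ElementTree as ET

XMP_PACKET = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:dc="http://purl.org/dc/elements/1.1/"
           xmlns:xmp="http://ns.adobe.com/xap/1.0/">
    <rdf:Description rdf:about=""
                     xmp:CreatorTool="MediaSPIP"
                     xmp:CreateDate="2011-05-13T10:00:00Z">
      <dc:title>
        <rdf:Alt><rdf:li xml:lang="x-default">Example title</rdf:li></rdf:Alt>
      </dc:title>
      <dc:creator>
        <rdf:Seq><rdf:li>Example author</rdf:li></rdf:Seq>
      </dc:creator>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>"""

NS = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "dc": "http://purl.org/dc/elements/1.1/",
}

root = ET.fromstring(XMP_PACKET)
# dc:title is stored as a language alternative; take the default entry.
title = root.find(".//dc:title/rdf:Alt/rdf:li", NS)
print(title.text)  # -> Example title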
- Apache-specific configuration
4 February 2011 — Specific modules
When configuring Apache, it is advisable to enable certain modules that are not specific to MediaSPIP but improve performance: mod_deflate and mod_headers, so that Apache compresses pages automatically (see this tutorial); mod_expires, to handle cache expiry correctly (see this tutorial);
It is also advisable to add Apache support for the WebM MIME type, as described in this tutorial.
Creating a (...)
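As a quick way to check that this kind of configuration is actually in effect, the following small Python sketch requests a page and a WebM file and inspects the response headers; the URLs are placeholders for your own MediaSPIP site, not part of the original article:

# Rough check that mod_deflate and the WebM MIME type are served.
# The URLs below are placeholders -- point them at your own site.
import urllib.request

def check(url, header, expected):
    req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        value = resp.headers.get(header, "")
    ok = expected in value
    print(f"{url}: {header} = {value!r} -> {'OK' if ok else 'missing'}")

check("https://example.org/spip.php?page=sommaire", "Content-Encoding", "gzip")
check("https://example.org/IMG/webm/sample.webm", "Content-Type", "video/webm")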
- Automatic installation script for MediaSPIP
25 April 2011 — To work around the installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
To use it, you need SSH access to your server and a "root" account, which will allow the dependencies to be installed. Contact your hosting provider if you do not have these.
Documentation on using the installation script (...)
On other sites (2226)
- ffmpeg 4.4 problem with image2 combined with stream loop -1 and overlay
14 May 2021, by codeSam — I have a Python app, and my code to start the ffmpeg stream is:


import subprocess

cmd = [
    'ffmpeg',
    '-thread_queue_size', '1024',
    '-i', 'rtsp://...',          # main video: RTSP camera stream
    '-f', 'image2',
    '-stream_loop', '-1',        # loop the single image forever
    '-i', 'image.png',           # overlay image, rewritten periodically
    '-filter_complex', 'overlay=(main_w-overlay_w)/2:main_h*0.1-overlay_h',
    '-acodec', 'aac',
    '-ar', '44100',
    '-ab', '128k',
    '-f', 'flv',
    '-g', '30',
    '-vcodec', 'libx264',
    '-preset', 'ultrafast',
    '-crf', '30',
    'rtmp://...'                 # output RTMP endpoint
]
subprocess.run(cmd)



This works fine on ffmpeg 4.3.2, but after ffmpeg was updated to 4.4 the stream doesn't start at all. If I change -stream_loop -1 to -loop 1 the stream starts, but since I want to update image.png roughly every 10 seconds, the overlay stops being updated on the stream. That is probably because the new image.png is being saved at the same moment it is being read; as I understand it, -stream_loop doesn't mind this.


Also, if I delay the start of image.png with -ss -5, the stream starts with the main video from rtsp://..., but it stops when image.png starts being read.


Also, if I remove -f image2 from the command, the stream starts fine, but updates to image.png are not reflected in the stream.


It would be easy to downgrade ffmpeg to the older 4.3.2 version, but that is not an option because I want this to run on an Android device in Termux, and Termux only offers the latest ffmpeg 4.4.


Any ideas on how to make this work on ffmpeg 4.4?
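As an aside on the suspected write/read race mentioned above (this does not address the 4.4 regression itself): the updater can write the new overlay to a temporary file and then rename it into place, which is atomic on POSIX filesystems, so ffmpeg never sees a half-written PNG. A minimal sketch, where render_overlay_png() is a placeholder for however the new image is actually produced:

import os
import tempfile

def update_overlay(path='image.png'):
    # Write the new overlay next to the old one, then swap it in atomically.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or '.', suffix='.png')
    with os.fdopen(fd, 'wb') as tmp:
        tmp.write(render_overlay_png())   # placeholder: returns the new PNG bytes
    os.replace(tmp_path, path)            # atomic rename; readers never see a partial file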


Here is what is printed when I run the command above, with Termux on an Android device; the stream does not start. The stream does not start with ffmpeg 4.4 on my MacBook Pro either, whereas 4.3.2 is fine there.


ffmpeg version 4.4 Copyright (c) 2000-2021 the FFmpeg developers
 built with Android (6454773 based on r365631c2) clang version 9.0.8 
(https://android.googlesource.com/toolchain/llvm-project 98c855489587874b2a325e7a516b99d838599c6f) (based on LLVM 9.0.8svn)
 configuration: --arch=aarch64 --as=aarch64-linux-android-clang --cc=aarch64-linux-android-clang
 --cxx=aarch64-linux-android-clang++ --cross-prefix=aarch64-linux-android- --disable-indevs 
 --disable-outdevs --enable-indev=lavfi --disable-static --disable-symver --enable-cross-compile 
 --enable-gnutls --enable-gpl --enable-libass --enable-libdav1d --enable-libmp3lame 
 --enable-libfreetype --enable-libvorbis --enable-libopus --enable-libx264 --enable-libx265 
 --enable-libxvid --enable-libvpx --enable-shared --enable-libsoxr --enable-libvidstab 
 --enable-libwebp --prefix=/data/data/com.termux/files/usr --target-os=android 
 --extra-libs=-landroid-glob --enable-neon
 libavutil 56. 70.100 / 56. 70.100
 libavcodec 58.134.100 / 58.134.100
 libavformat 58. 76.100 / 58. 76.100
 libavdevice 58. 13.100 / 58. 13.100
 libavfilter 7.110.100 / 7.110.100
 libswscale 5. 9.100 / 5. 9.100
 libswresample 3. 9.100 / 3. 9.100
 libpostproc 55. 9.100 / 55. 9.100
[udp @ 0x7ebe823840] 'circular_buffer_size' option was set but it is not supported on this build (pthread support is required)
[udp @ 0x7ebe8238e0] 'circular_buffer_size' option was set but it is not supported on this build (pthread support is required)
[udp @ 0x7ebe823a20] 'circular_buffer_size' option was set but it is not supported on this build (pthread support is required)
[udp @ 0x7ebe823ac0] 'circular_buffer_size' option was set but it is not supported on this build (pthread support is required)

Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://...':
 Metadata:
 title : Session streamed by "TP-LINK RTSP Server"
 comment : stream1
 Duration: N/A, start: 0.000000, bitrate: N/A
 Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1920x1080, 15 fps, 15 tbr, 90k tbn, 30 tbc
 Stream #0:1: Audio: pcm_alaw, 8000 Hz, mono, s16, 64 kb/s
Input #1, image2, from 'image.png':
 Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
 Stream #1:0: Video: png, rgba(pc), 1152x87 [SAR 8504:8504 DAR 384:29], 25 fps, 25 tbr, 25 tbn, 25 tbc
Stream mapping:
 Stream #0:0 (h264) -> overlay:main (graph 0)
 Stream #1:0 (png) -> overlay:overlay (graph 0)
 overlay (graph 0) -> Stream #0:0 (libx264)
 Stream #0:1 -> #0:1 (pcm_alaw (native) -> aac (native))
Press [q] to stop, [?] for help



- Can ffmpeg write the encoder metadata tag when transcoding an alac/flac audio file to aac?
11 June 2022, by David I — I have a collection of alac and flac files from Bandcamp and an ffmpeg instance compiled with libfdk_aac (https://trac.ffmpeg.org/wiki/CompilationGuide/Centos#libfdk_aac), and I am trying to convert these to lossy aac audio files for non-critical listening.


With
ffmpeg -i Liholesie\ -\ Shamanic\ Twilight\ -\ 09\ Gray\ Wings.m4a -c:a libfdk_aac -vbr 4 -c:v copy 09_Gray_wings_vbr4.m4a
an aac .m4a audio file is produced as expected, album art included; it works well. There is one slight detail missing:

During the conversion process, ffmpeg says:


Output #0, ipod, to '09_Gray_wings_vbr4.m4a':
 Metadata:
 major_brand : M4A 
 minor_version : 512
 compatible_brands: M4A isomiso2
 title : Gray Wings
 artist : Liholesie
 album_artist : Liholesie
 album : Shamanic Twilight
 comment : Visit https://liholesie.bandcamp.com
 date : 2021
 track : 9
 encoder : Lavf59.24.100
 Stream #0:0: Video: mjpeg (Baseline), yuvj444p(pc, bt470bg/unknown/unknown), 700x700 [SAR 72:72 DAR 1:1], q=2-31, 90k tbr, 90k tbn (attached pic)
 Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, s16 (default)
 Metadata:
 handler_name : SoundHandler
 vendor_id : [0][0][0][0]
 encoder : Lavc59.33.100 libfdk_aac 
..





and the file produced looks like that when ffprobed, except that the stream-level Metadata encoder field is missing:


Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '09_Gray_wings_vbr4.m4a':
 Metadata:
 major_brand : M4A 
 minor_version : 512
 compatible_brands: M4A isomiso2
 title : Gray Wings
 artist : Liholesie
 album_artist : Liholesie
 album : Shamanic Twilight
 date : 2021
 encoder : Lavf59.24.100
 comment : Visit https://liholesie.bandcamp.com
 track : 9
 Duration: 00:06:57.78, start: 0.000000, bitrate: 155 kb/s
 Stream #0:0[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 152 kb/s (default)
 Metadata:
 handler_name : SoundHandler
 vendor_id : [0][0][0][0]
 Stream #0:1[0x0]: Video: mjpeg (Baseline), yuvj444p(pc, bt470bg/unknown/unknown), 700x700 [SAR 72:72 DAR 1:1], 90k tbr, 90k tbn (attached pic)



Is there a way to write the encoder field in the stream's Metadata section when transcoding? (Or is "encoder" not supported for aac m4a? That would be odd, given what ffmpeg prints for the output during transcoding.)


Any hints on how to write self-defined text to said tag during transcoding are also welcome.
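For what it's worth, ffmpeg does accept per-stream metadata via -metadata with a stream specifier; whether the ipod/mp4 muxer actually preserves a user-supplied encoder tag on the audio stream is exactly what is in question here, so treat the following as a sketch to experiment with. The file names come from the example above, and the tag value itself is arbitrary self-defined text:

# Sketch: set a per-stream "encoder" tag on the first audio stream
# (-metadata:s:a:0). Whether the ipod muxer keeps it needs testing.
import subprocess

subprocess.run([
    'ffmpeg',
    '-i', 'Liholesie - Shamanic Twilight - 09 Gray Wings.m4a',
    '-c:a', 'libfdk_aac', '-vbr', '4',
    '-c:v', 'copy',
    '-metadata:s:a:0', 'encoder=libfdk_aac vbr 4',   # arbitrary self-defined text
    '09_Gray_wings_vbr4.m4a'
])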


- Dreamcast Archival
24 May 2011, by Multimedia Mike — Sega Dreamcast
Console homebrew communities have always had a precarious relationship with console pirates. The same knowledge and skills useful for creating homebrew programs can usually be parlayed into ripping games and cajoling a console into honoring ripped copies. For this reason, the Dreamcast homebrew community tried hard to distance itself from pirates, rippers, and other unsavory characters.
Funny how times change. While I toed the same line while I was marginally a part of the community back in the day, now I think I’m performing a service for video game archivists and historians by openly publishing the same information. I know of at least one solution already. But I think it’s possible to do much better.
Pre-existing Art
Famed Japanese game hacker BERO (FFmpeg contributors should recognize his name from a number of Dreamcast-related multimedia contributions including CRI ADX and SH-4 optimizations) crafted a program called dreamrip based on KOS's precursor, called libdream. This is the program I used to extract 4XM multimedia files from Alone in the Dark: The New Nightmare.
Fun facts: The Sega Dreamcast used special optical discs called GD-ROMs. The GD stands for "GigaDisc", which implied that they could hold roughly a gigabyte of data. How long do you think it takes to transfer that much data over a serial cable operating at 115,200 bits/second (on the order of 11 kbytes/sec)? I seem to recall entire discs requiring on the order of 27-28 hours to archive.
If only I possessed some expertise in data compression which might expedite this process.
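The back-of-the-envelope numbers line up with that recollection. A quick check, assuming 8N1 framing on the serial line (10 bits per byte) and a rounded 1 GB per disc:

# Rough transfer-time estimate for a full GD-ROM over the serial coder's cable.
baud = 115200                 # bits per second on the line
bytes_per_sec = baud / 10     # 8N1 framing: 8 data bits + start + stop bit
disc_bytes = 1.0e9            # ~1 GB of GD-ROM data (rounded assumption)
hours = disc_bytes / bytes_per_sec / 3600
print(f"{bytes_per_sec/1024:.1f} KiB/s, about {hours:.1f} hours per disc")
# -> roughly 11.2 KiB/s and about 24 hours, before any protocol overhead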
KallistiOS’ Unwitting Help
The KallistiOS (KOS) console-oriented RTOS provides all the software infrastructure necessary for archiving (that's what we'll call it in this post) Dreamcast games. KOS exposes the optical disc's filesystem via the /cd mount point on the VFS. From there, KOS provides functions for communicating with a host computer via ethernet (broadband adapter) or serial line (DC coder's cable). To this end, KOS exposes another mount point on the VFS named /pc which allows direct access to the host PC's filesystem. Thus, it's pretty straightforward to use KOS to access the files (or raw sectors) of the Dreamcast disc and then send them over the communication line to the host PC. Simple.
Compressing Before Transfer
Right away, I wonder about compiling 3 different compression libraries: libz, libbz2, and liblzma. The latter 2 are exceptionally CPU-intensive when compressing. Then again, it doesn't really matter how long the compressor takes to do its job as long as it can average better than 11 kbytes/sec on a 200 MHz Hitachi SH-4 CPU. KOS can be set up in a preemptive threading mode, which means it should be possible to read sectors and compress them while keeping the UART operating at full tilt.
A 4th compression algorithm should be in play here as well: FLAC. Since some of these discs contain Red Book CD audio tracks that need archiving, lossless audio compression should be useful.
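To illustrate the read-compress-send pipeline being proposed: the real thing would be C against KOS on the SH-4, but this host-side Python sketch (with made-up paths standing in for the /cd and /pc mount points, and zlib standing in for whichever codec wins) shows the shape of the loop and how to judge whether the compressor keeps up with the ~11 kbytes/sec serial line:

# Shape of the proposed pipeline: read sector-sized chunks, compress, send to the host.
# Paths and the 2048-byte chunk size are illustrative stand-ins for the KOS VFS.
import time
import zlib

SECTOR = 2048
LINE_RATE = 11_520            # serial line throughput in bytes/sec (115200 bps, 8N1)

def archive(src='/cd/track03.iso', dst='/pc/dump/track03.iso.z'):
    comp = zlib.compressobj(level=9)
    in_bytes = out_bytes = 0
    start = time.time()
    with open(src, 'rb') as disc, open(dst, 'wb') as host:
        while True:
            sector = disc.read(SECTOR)
            if not sector:
                break
            chunk = comp.compress(sector)
            host.write(chunk)
            in_bytes += len(sector)
            out_bytes += len(chunk)
        host.write(comp.flush())
    elapsed = time.time() - start
    # The compressor "wins" if it digests input faster than the line can drain output.
    print(f"in {in_bytes} B, out {out_bytes} B, "
          f"compressor rate {in_bytes/elapsed:.0f} B/s vs line {LINE_RATE} B/s")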
This post serves as a rough overview of possible future experiments. Readers might have further brainstorms.