
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (87)
-
Creating farms of unique websites
13 April 2011, by
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
Organising by category
17 May 2013, by
In MediaSPIP, a section has two names: category and rubrique.
The various documents stored in MediaSPIP can be filed in different categories. A category can be created by clicking on "publier une catégorie" (publish a category) in the publish menu at the top right (after logging in). A category can itself be placed inside another category, so it is possible to build a tree of categories.
The next time a document is published, the newly created category will be offered (...)
-
Retrieving information from the master site when installing an instance
26 November 2010, by
Purpose
On the main site, a shared-hosting instance is defined by several things: the data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the shared-hosting instance;
It can therefore be quite sensible to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)
On other sites (6660)
-
How do I use find and ffmpeg to batch convert a bunch of .flac files to .mp3?
30 April 2019, by Keith
I have a directory with a bunch of .flac files that I need to convert to .mp3. I plan to use ffmpeg from the command line to do the conversions and I'd like to avoid doing this manually for every file. I'm familiar with the find command, but I'm having difficulty using it with ffmpeg, which requires both input and output filenames. I imagine using something like
find . -name "*.flac" -exec ffmpeg -i {}.flac {}.mp3 +
But of course this doesn't work. For one thing, it fails to strip prefixes and suffixes from the filename being passed to ffmpeg.
Please also note that the filenames include whitespace, so the solution has to handle whitespace correctly. I'm also on OS X, having built ffmpeg with Homebrew.
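One hedged way to do this (a minimal sketch, not verified on this exact setup): have find run a small shell snippet once per file, so each filename is passed as a quoted argument (whitespace-safe) and the .flac suffix can be stripped with shell parameter expansion:

  find . -name '*.flac' -exec sh -c '
    ffmpeg -i "$1" -codec:a libmp3lame -q:a 2 "${1%.flac}.mp3"
  ' _ {} \;

The _ placeholder fills $0 so each filename arrives as $1, and -q:a 2 is just an example libmp3lame quality setting.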
-
Why can't I get a manually modified MPEG-4 extended box (chunk) size to work?
15 April 2019, by Moshe Rubin
Overview
As part of a project to write an MPEG-4 (MP4) file parser, I need to understand how an extended box (or chunk) size is processed within an MP4 file. When I tried to manually simulate an MP4 file with an extended box size, media players reported that the file was invalid.
Technical Information
Paraphrasing the MPEG-4 specification:
An MP4 file is formed as a series of objects called 'boxes'. All data is contained in boxes; there is no other data within the file.
Here is a screen capture of Section 4.2: Object Structure, which describes the box header and its size and type fields:
Most MP4 box headers contain two fields: a 32-bit compact box size and a 32-bit box type. The compact box size supports box data up to 4 GB. Occasionally an MP4 box may hold more data than that (e.g., a large video file). In this case, the compact box size is set to 1, and eight (8) octets are added immediately following the box type. This 64-bit number is known as the 'extended box size', and supports a box size of up to 2^64 bytes.
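As a quick, hedged illustration of that layout on a real file (input.mp4 here is just a placeholder name), the header fields can be inspected with xxd:

  # first 8 bytes of the file: 32-bit big-endian compact size + 32-bit type
  xxd -l 8 input.mp4
  # only if the compact size above reads 00000001: the 8 bytes immediately
  # after the type hold the 64-bit big-endian extended size
  xxd -s 8 -l 8 input.mp4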
To understand the extended box size better, I took a simple MP4 file and wanted to modify the moov/trak/mdia box to use the extended box size rather than the compact size. Here is what the MP4 file looks like before modifying it. The three box headers are highlighted in RED:
My plan was as follows (a command-line sketch of these steps follows the list):
- Modify the moov/trak/mdia box
  - In the moov/trak/mdia, insert eight (8) octets immediately following the box type ('mdia'). This will eventually be our extended box size.
  - Copy the compact box size to the newly-inserted extended box size, adding 8 to the size to compensate for the newly inserted octets. The size is inserted in big-endian order.
  - Set the compact size to 1.
- Modify the moov/trak box
  - Add 8 to the existing compact box size (to compensate for the eight octets added to mdia).
- Modify the moov box
  - Add 8 to the existing compact box size (again, to compensate for the eight octets in mdia).
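Here is a rough shell sketch of those steps. The offset and size are hypothetical inputs you would read from a hex dump first, and the moov and trak compact sizes still have to be rewritten separately; this only sketches the mechanics, it is not a verified tool.

  #!/bin/sh
  # usage: ./extend_mdia.sh in.mp4 out.mp4 MDIA_OFF MDIA_SIZE
  #   MDIA_OFF  - byte offset of the mdia box's compact-size field (hypothetical)
  #   MDIA_SIZE - the original 32-bit compact size stored there (hypothetical)
  src=$1; dst=$2; off=$3; size=$4
  ext=$((size + 8))                               # new 64-bit extended size

  head -c $((off + 8)) "$src" > "$dst"            # keep bytes up to and including the 'mdia' type
  printf '%016x' "$ext" | xxd -r -p >> "$dst"     # insert the 8-byte big-endian extended size
  tail -c +$((off + 9)) "$src" >> "$dst"          # append the remainder of the original file
  printf '\000\000\000\001' | dd of="$dst" bs=1 seek="$off" conv=notrunc 2>/dev/null  # compact size := 1
  # still to do: add 8 to the moov and trak compact sizes in the same way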
Here's what the MP4 file looks like now, with the modified octets in RED:
What have we done?
We have told the MP4 parser/player to take the moov/trak/mdia box size from the extended field rather than the compact size field, and have increased all parent box sizes by eight (8) to compensate for the newly-inserted extended box size in the mdia box.
What's the problem?
When I attempt to play the modified MP4 file, I receive error messages from different media players:
Why do the media players see the modified file as an invalid MP4?
- Did I need to alter any other fields?
- Does the extended box size have to be greater than 2^32?
- Can it be that only specific box types support extended box size (e.g., Media Data)?
-
FFMPEG fails with only two input frames
3 March 2019, by JeffThompson
I'd like to use ffmpeg's great frame interpolation to blend two images. I get great results when testing with about a dozen frames, but when only using two it finishes immediately and I get a video file that can't be opened.
My command:
ffmpeg -y -r 24 -pattern_type glob -i "TestFrames/*[0-1].png" -pix_fmt yuv420p -filter:v "minterpolate='mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1:fps=1024'" -vsync 2 8.mp4
Output from ffmpeg:
ffmpeg version 4.1.1 Copyright (c) 2000-2019 the FFmpeg developers
built with Apple LLVM version 10.0.0 (clang-1000.11.45.5)
configuration: --prefix=/usr/local/Cellar/ffmpeg/4.1.1 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/openjdk-11.0.2.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/openjdk-11.0.2.jdk/Contents/Home/include/darwin' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-videotoolbox --disable-libjack --disable-indev=jack --enable-libaom --enable-libsoxr
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
Input #0, image2, from 'TestFrames/*.png':
Duration: 00:00:00.08, start: 0.000000, bitrate: N/A
Stream #0:0: Video: png, rgba(pc), 1080x1080, 25 tbr, 25 tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x7fa5c2813200] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7fa5c2813200] profile High, level 6.1
[libx264 @ 0x7fa5c2813200] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to '8.mp4':
Metadata:
encoder : Lavf58.20.100
Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1080x1080, q=-1--1, 1024 fps, 16384 tbn, 1024 tbc
Metadata:
encoder : Lavc58.35.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
frame= 0 fps=0.0 q=0.0 Lsize= 0kB time=00:00:00.00 bitrate=N/A speed= 0x
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
(Why fps=1024? I wanted to generate a bunch of frames between the two images, so I plan to later separate the resulting video into separate images.)
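For that last step, once a playable video does come out of the interpolation, splitting it back into numbered images is a one-liner (a sketch; 8.mp4 is the output name from the command above and the frame filename pattern is arbitrary):

  ffmpeg -i 8.mp4 interpolated_%05d.png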