
Other articles (48)
- Websites made with MediaSPIP
  2 May 2011 — This page lists some websites based on MediaSPIP.
- Creating farms of unique websites
  13 April 2011 — MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
  This allows (among other things):
  - implementation costs to be shared between several different projects / individuals
  - rapid deployment of multiple unique sites
  - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
- Personalising categories (Personnaliser les catégories)
  21 June 2013 — The category creation form
  For those who know SPIP well, a category can be likened to a rubrique (section).
  For a document of the "category" type, the fields offered by default are: Texte.
  This form can be modified under:
  Administration > Configuration des masques de formulaire.
  For a document of the "media" type, the fields not displayed by default are: Descriptif rapide.
  It is also in this configuration area that one can specify the (...)
On other sites (6228)
- FFMPEG fails with only two input frames
  3 March 2019, by JeffThompson — I’d like to use ffmpeg’s great frame interpolation to blend two images. I get great results when testing with about a dozen frames, but when using only two it finishes immediately and I get a video file that can’t be opened.
  My command:
ffmpeg -y -r 24 -pattern_type glob -i "TestFrames/*[0-1].png" -pix_fmt yuv420p -filter:v "minterpolate='mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1:fps=1024'" -vsync 2 8.mp4
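One thing worth checking (a guess based on the symptoms, not a confirmed fix): with -r 24 on a two-image input, each source frame lasts only 1/24 s, so the clip is extremely short before interpolation even starts. A variant that stretches the two frames out first might look like this (same filter options; -framerate is the image2 input option, and the fps=24 target is illustrative):

ffmpeg -y -framerate 1 -pattern_type glob -i "TestFrames/*[0-1].png" -pix_fmt yuv420p -filter:v "minterpolate='mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1:fps=24'" -vsync 2 8.mp4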
Output from ffmpeg:
ffmpeg version 4.1.1 Copyright (c) 2000-2019 the FFmpeg developers
built with Apple LLVM version 10.0.0 (clang-1000.11.45.5)
configuration: --prefix=/usr/local/Cellar/ffmpeg/4.1.1 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/openjdk-11.0.2.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/openjdk-11.0.2.jdk/Contents/Home/include/darwin' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-videotoolbox --disable-libjack --disable-indev=jack --enable-libaom --enable-libsoxr
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
Input #0, image2, from 'TestFrames/*.png':
Duration: 00:00:00.08, start: 0.000000, bitrate: N/A
Stream #0:0: Video: png, rgba(pc), 1080x1080, 25 tbr, 25 tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x7fa5c2813200] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7fa5c2813200] profile High, level 6.1
[libx264 @ 0x7fa5c2813200] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to '8.mp4':
Metadata:
encoder : Lavf58.20.100
Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1080x1080, q=-1--1, 1024 fps, 16384 tbn, 1024 tbc
Metadata:
encoder : Lavc58.35.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
frame= 0 fps=0.0 q=0.0 Lsize= 0kB time=00:00:00.00 bitrate=N/A speed= 0x
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
(Why fps=1024? I wanted to generate a bunch of frames between the two images, so I plan to later separate the resulting video into separate images.)
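For that later step, the usual image2 output pattern should do the splitting; a minimal sketch (the output filename pattern is just an example):

ffmpeg -i 8.mp4 out%04d.png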
- Why can't I get a manually modified MPEG-4 extended box (chunk) size to work?
  15 April 2019, by Moshe Rubin
  Overview
As part of a project to write an MPEG-4 (MP4) file parser, I need to understand how an extended box (or chunk) size is processed within an MP4 file. When I tried to manually simulate an MP4 file with an extended box size, media players reported that the file was invalid.
Technical Information
Paraphrasing the MPEG-4 specification:
An MP4 file is formed as a series of objects called ’boxes’. All data is contained in boxes; there is no other data within the file.
(The original post includes a screen capture of Section 4.2, "Object Structure", which describes the box header and its size and type fields.)
Most MP4 box headers contain two fields: a 32-bit compact box size and a 32-bit box type. The compact box size supports boxes of up to 4 GB. Occasionally an MP4 box may hold more data than that (e.g., a large video file). In that case, the compact box size is set to 1, and eight (8) octets are inserted immediately following the box type. This 64-bit number is known as the ’extended box size’, and supports box sizes of up to 2^64.
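In code, reading a box header therefore looks roughly like the sketch below (a minimal illustration of the rule just described; the names are mine, not from any library, and size == 0, meaning "box extends to end of file", is returned but not handled further):

#include <stdint.h>
#include <stdio.h>

/* Read nbytes as a big-endian integer, as MP4 stores all sizes. */
static uint64_t read_be(FILE *f, int nbytes) {
    uint64_t v = 0;
    for (int i = 0; i < nbytes; i++)
        v = (v << 8) | (uint8_t)fgetc(f);
    return v;
}

/* Returns the total box size, header included; 0 means "to end of file". */
static uint64_t read_box_header(FILE *f, char type[5]) {
    uint64_t size = read_be(f, 4);      /* 32-bit compact size */
    for (int i = 0; i < 4; i++)
        type[i] = (char)fgetc(f);       /* 4-character box type */
    type[4] = '\0';
    if (size == 1)                      /* compact size of 1 means a      */
        size = read_be(f, 8);           /* 64-bit extended size follows   */
    return size;
}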
To understand the extended box size better, I took a simple MP4 file and wanted to modify the moov/trak/mdia box to use the extended box size rather than the compact size. Here is what the MP4 file looks like before modifying it; the three box headers are highlighted in RED in the original post’s screen capture.
My plan was as follows (a byte-level C sketch follows the list):
- Modify the moov/trak/mdia box:
  - In the moov/trak/mdia box, insert eight (8) octets immediately following the box type (’mdia’). This will eventually be our extended box size.
  - Copy the compact box size to the newly-inserted extended box size, adding 8 to the size to compensate for the newly inserted octets. The size is inserted in big-endian order.
  - Set the compact size to 1.
- Modify the moov/trak box:
  - Add 8 to the existing compact box size (to compensate for the eight octets added to mdia).
- Modify the moov box:
  - Add 8 to the existing compact box size (again, to compensate for the eight octets in mdia).
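As a concrete illustration of those three steps, here is a rough in-memory sketch (my own code, not from the post; it assumes the whole file is loaded into a buffer with at least 8 spare bytes of capacity and that the three box offsets are already known):

#include <stdint.h>
#include <string.h>

static uint32_t get32(const uint8_t *p) {
    return ((uint32_t)p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3];
}
static void put32(uint8_t *p, uint32_t v) {
    p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = (uint8_t)v;
}
static void put64(uint8_t *p, uint64_t v) {
    for (int i = 0; i < 8; i++)
        p[i] = (uint8_t)(v >> (56 - 8 * i));        /* big-endian order */
}

/* buf/len: whole file in memory; *_off: offsets of the three box headers.
 * Returns the new file length (old length + 8). */
static size_t widen_mdia(uint8_t *buf, size_t len,
                         size_t moov_off, size_t trak_off, size_t mdia_off) {
    uint32_t old_size = get32(buf + mdia_off);

    /* step 1: open an 8-octet gap right after the 4-byte size + 4-byte type */
    memmove(buf + mdia_off + 16, buf + mdia_off + 8, len - (mdia_off + 8));
    put64(buf + mdia_off + 8, (uint64_t)old_size + 8);  /* extended size    */
    put32(buf + mdia_off, 1);                           /* compact size = 1 */

    /* steps 2 and 3: grow both parent boxes by the 8 inserted octets */
    put32(buf + trak_off, get32(buf + trak_off) + 8);
    put32(buf + moov_off, get32(buf + moov_off) + 8);
    return len + 8;
}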
Here’s what the MP4 file looks like now; the modified octets are shown in RED in the original post’s screen capture.
What have we done?
We have told the MP4 parser/player to take the moov/trak/mdia box size from the extended field rather than the compact size field, and have increased all parent box sizes by eight (8) to compensate for the newly-inserted extended box size in the mdia box.
What’s the problem?
When I attempt to play the modified MP4 file, I receive error messages from different media players.
Why do the media players see the modified file as an invalid MP4?
- Did I need to alter any other fields?
- Does the extended box size have to be greater than 2^32?
- Can it be that only specific box types support the extended box size (e.g., Media Data)?
- How to convert images to video using FFMpeg for embedded applications?
  19 April 2019, by zthatch56 — I’m encoding images as video using FFmpeg with custom C code, rather than Linux commands, because I am developing the code for an embedded system.
I am currently working through the first dranger tutorial and the code provided in the following question.
I have also found some "less abstract" code at the following GitHub location, and I plan to use it as well:
https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/encode_video.c
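For orientation, the heart of that upstream example is the send/receive encoding loop; here is a condensed sketch (error handling trimmed relative to the original):

#include <stdio.h>
#include <libavcodec/avcodec.h>

/* Condensed from doc/examples/encode_video.c: feed one raw frame to the
 * encoder, then drain whatever packets it has ready. Passing frame == NULL
 * flushes the encoder at end of stream. */
static void encode(AVCodecContext *ctx, AVFrame *frame,
                   AVPacket *pkt, FILE *out) {
    if (avcodec_send_frame(ctx, frame) < 0)
        return;                               /* real code should report this */
    while (avcodec_receive_packet(ctx, pkt) == 0) {
        fwrite(pkt->data, 1, pkt->size, out); /* raw elementary stream */
        av_packet_unref(pkt);
    }
}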
My end goal is simply to save video on an embedded system using embedded C source code, and I am coming up the curve too slowly. So, in summary, my question is: does it seem like I am following the correct path here? I know that my system does not come with hardware for video codec conversion, which means I need to do it in software, but I am unsure whether FFmpeg is even a feasible option for embedded work because I have yet to compile it.
The biggest red flag for me thus far is that FFmpeg uses dynamic memory allocation. I am unfamiliar with how to assess the amount of dynamic memory it uses. This is very important information to me, and if anyone is familiar with the amount of memory used, or with how to assess it before compiling, I would greatly appreciate the input.
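One generic first-pass technique (not FFmpeg-specific, and it assumes a GNU toolchain with ld’s --wrap option and a static link) is to interpose malloc at link time and keep a running total:

/* Build with: gcc ... -Wl,--wrap=malloc
 * The linker then routes every malloc call through __wrap_malloc, so the
 * total requested heap can be tallied. Note that FFmpeg's av_malloc may be
 * configured to use posix_memalign or memalign instead of malloc, so those
 * may need the same treatment; treat this as a starting point only. */
#include <stddef.h>
#include <stdio.h>

void *__real_malloc(size_t size);   /* resolved by the linker */

static size_t total_requested;      /* running byte count */

void *__wrap_malloc(size_t size) {
    total_requested += size;
    return __real_malloc(size);
}

void report_heap_usage(void) {
    fprintf(stderr, "heap bytes requested so far: %zu\n", total_requested);
}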