
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
Other articles (82)
-
Encoding and processing into web-friendly formats
13 April 2011. MediaSPIP automatically converts uploaded files to web-compatible formats.
Video files are encoded to MP4, OGV and WebM (playable via HTML5), with MP4 also playable via Flash.
Audio files are encoded to MP3 and Ogg (playable via HTML5), with MP3 also playable via Flash.
Where possible, text documents are analyzed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
-
MediaSPIP Player: potential problems
22 February 2011. The player does not work on Internet Explorer.
On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may lie in the configuration of Apache's mod_deflate module.
If the configuration of this Apache module contains a line resembling the following, try removing or commenting it out to see whether the player then works correctly (the original snippet is truncated; a hypothetical example is sketched after this list): (...)
-
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
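Note: the Apache snippet in the "MediaSPIP Player" item above was swallowed by the page's GeSHi syntax highlighter and cannot be recovered. As a hypothetical stand-in (an assumption, not the original line), a mod_deflate directive of the kind being described typically looks like this:

# Hypothetical example only, not the lost original snippet. Compressing
# Flash (SWF) responses with mod_deflate is a known way to break Flash players.
AddOutputFilterByType DEFLATE text/html text/css application/x-shockwave-flash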
On other sites (5882)
-
Is it possible to merge 3 videos with ffmpeg by -f concat and crossfade without video content
10 August 2019, by user3792705. I want to merge 3 .mov files quickly without losing any resolution, and I want to be able to distinguish the 3 pieces of video after the merge.
"ffmpeg -f concat" is quick and loses no resolution, but it offers no crossfade.
As a result, I can't tell the 3 videos apart.
As far as I know, an ffmpeg filter can be used to add a crossfade, but it has to use the video content at the start/end of each clip to do the merge, which involves transcoding. That won't be fast compared with 'concat', which does no transcoding but simply copies the streams.
Here is the output of ffmpeg -i video.mov for one of the 3 videos:
ffmpeg version 4.1.3 Copyright (c) 2000-2019 the FFmpeg developers
built with Apple LLVM version 10.0.1 (clang-1001.0.46.3)
configuration: --prefix=/usr/local/Cellar/ffmpeg/4.1.3 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/openjdk-12.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/openjdk-12.jdk/Contents/Home/include/darwin' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-videotoolbox --disable-libjack --disable-indev=jack --enable-libaom --enable-libsoxr
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '..../(edited)/VMEK8375.MOV':
Metadata:
major_brand : qt
minor_version : 0
compatible_brands: qt
creation_time : 2019-06-30T01:28:04.000000Z
com.apple.quicktime.model: iPhone
com.apple.quicktime.software: ZHIYUN
com.apple.quicktime.creationdate: 2019-06-30T09:28:04Z
Duration: 00:00:07.61, start: 0.000000, bitrate: 4386 kb/s
Stream #0:0(und): Video: hevc (Main) (hvc1 / 0x31637668), yuv420p(tv, smpte170m/bt709/bt709), 1280x720, 4329 kb/s, 30.01 fps, 30 tbr, 600 tbn, 600 tbc (default)
Metadata:
creation_time : 2019-06-30T01:28:04.000000Z
handler_name : Core Media Video
encoder : HEVC
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 94 kb/s (default)
Metadata:
creation_time : 2019-06-30T01:28:04.000000Z
handler_name : Core Media Audio

If I don't care about a crossfade using the video content, just a 'nice' black screen in between (ideally with some text on it, such as the date and time) is good enough for me. Is it possible to do 'concat' plus this simple transition without using the video 'content'?
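One approach that fits these constraints would be to encode only short black "title cards" that match the source streams, then stream-copy everything with the concat demuxer. A hedged sketch, assuming a libx265-enabled build; the file names are placeholders, and the card parameters mirror the probe above (HEVC 1280x720, 30 fps, yuv420p; mono 44.1 kHz AAC):

# Encode a 2-second black card with a date/time caption; only this card is
# transcoded, the camera clips themselves are never re-encoded.
ffmpeg -f lavfi -i "color=c=black:s=1280x720:r=30:d=2" \
       -f lavfi -i "anullsrc=channel_layout=mono:sample_rate=44100" \
       -vf "drawtext=text='2019-06-30 09\:28':fontcolor=white:fontsize=48:x=(w-text_w)/2:y=(h-text_h)/2" \
       -t 2 -c:v libx265 -tag:v hvc1 -pix_fmt yuv420p -c:a aac black.mov

# Concatenate by stream copy (no transcoding of the camera files).
printf "file 'a.mov'\nfile 'black.mov'\nfile 'b.mov'\nfile 'black.mov'\nfile 'c.mov'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy merged.mov

Whether the copy-concat plays back cleanly depends on how closely the card's streams match the camera files; if playback glitches, the card (not the camera footage) is the thing to re-tune. A true crossfade, by contrast, inherently needs the frame content and therefore transcoding.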
-
H.264 conversion with FFmpeg (from a RTP stream)
12 July 2014, by Toby

Environment:
I have an IP camera which is capable of streaming its data over RTP in an H.264-encoded format. This raw stream is recorded from the Ethernet; that recorded data is what I have to work with.
Goal:
In the end I want to have a *.mp4 file which I can play with common media players (like VLC or Windows Media Player).
What I have done so far:
I take the raw stream data and parse it. Since the data has been transmitted via RTP, I need to take care of the NAL bytes, the SPS and the PPS.
1. Write a raw file
First I determine the type of each frame received over Ethernet. To do so, I parse the first two bytes of every RTP payload, which give me the NAL unit bits, the fragment type bits, and the Start, End and Reserved bits. In the payload they are arranged like this:
Byte 1: [3 NAL Unit Bits | 5 Fragment Type Bits]
Byte 2: [Start Bit | End Bit | Reserved Bit | 5 NAL Unit Type Bits]
From this I can determine:
- the start and end of a video frame -> Start Bit and End Bit
- the type of the payload -> 5 fragment type bits
- the NAL unit byte
The fragment types that matter in my case are:
Fragment Type 7 = SPS
Fragment Type 8 = PPS
Fragment Type 28 = Video Fragment
The NAL byte is created by putting the NAL unit bits from bytes 1 and 2 together.
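A minimal sketch of this bit extraction (illustrative C, not the poster's code; it assumes FU-A packetization as in RFC 6184, and the example byte values are made up):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t byte1 = 0x7C; /* example FU indicator: F=0, NRI=3, type=28 (FU-A) */
    uint8_t byte2 = 0x85; /* example FU header: Start=1, End=0, NAL type=5    */

    int fragment_type = byte1 & 0x1F;      /* 5 fragment type bits           */
    int start_bit     = (byte2 >> 7) & 1;  /* first fragment of a NAL unit   */
    int end_bit       = (byte2 >> 6) & 1;  /* last fragment of a NAL unit    */

    /* The NAL unit byte: the top 3 bits come from byte 1, the 5 type bits
     * from byte 2 - "putting the NAL unit bits from bytes 1 and 2 together". */
    uint8_t nal_unit_byte = (byte1 & 0xE0) | (byte2 & 0x1F);

    printf("type=%d start=%d end=%d nal=0x%02X\n",
           fragment_type, start_bit, end_bit, nal_unit_byte);
    return 0;
}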
Now, depending on the fragmentation type, I do the following:
SPS/PPS:
- Write the NAL prefix (0x00 0x00 0x01) and then the SPS or PPS data
Fragment with the Start Bit set:
- Write the NAL prefix
- Write the NAL unit byte
- Write the remaining raw data
Fragment without the Start Bit set:
- Write the raw data
This means my raw file looks something like this:
[NAL Prefix][SPS][NAL Prefix][PPS][NAL Prefix][NAL Unit Byte][Raw Video Data][Raw Video Data]....[NAL Prefix][NAL Unit Byte][Raw Video Data]...
For every SPS and PPS I find in my stream data, I just write a NAL prefix (0x00 0x00 0x01) and then the SPS/PPS itself.
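Put together, the per-packet write logic just described might look roughly like this (an illustrative sketch, not the original code; p/len are the raw RTP payload and its length, and error handling is omitted):

#include <stdint.h>
#include <stdio.h>

static const uint8_t NAL_PREFIX[3] = { 0x00, 0x00, 0x01 };

/* Append one RTP payload to the raw Annex B file, following the rules above. */
static void write_payload(FILE *out, const uint8_t *p, size_t len) {
    int fragment_type = p[0] & 0x1F;
    if (fragment_type == 7 || fragment_type == 8) {
        /* SPS/PPS: NAL prefix, then the data as-is */
        fwrite(NAL_PREFIX, 1, 3, out);
        fwrite(p, 1, len, out);
    } else if (fragment_type == 28) {      /* fragmented video NAL (FU-A)   */
        if ((p[1] >> 7) & 1) {             /* Start Bit set                 */
            uint8_t nal_unit_byte = (p[0] & 0xE0) | (p[1] & 0x1F);
            fwrite(NAL_PREFIX, 1, 3, out);
            fwrite(&nal_unit_byte, 1, 1, out);
        }
        fwrite(p + 2, 1, len - 2, out);    /* raw data after the 2 FU bytes */
    }
}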
I can't play this raw file with any media player, which leads me to:
2. Convert the file
Since I wanted to avoid working with codecs too much, I simply used an existing application, FFmpeg, which I call with these parameters:
ffmpeg.exe -f h264 -i <rawinputfile> -vcodec copy -r 25 <outputfilename>.mp4
-f h264: this should tell FFmpeg that I have an H.264-coded stream.
-vcodec copy: quote from the man page: "Force video codec to codec. Use the 'copy' special value to tell that the raw codec data must be copied as is."
-r 25: sets the framerate to 25 FPS.
When I call FFmpeg with those parameters I get an .mp4 file which I can play with VLC and Windows Media Player, so it actually works. But the file now looks a bit different from my raw file.
This leads me to my question:
What did I actually do?
My problem is not that something is broken. I just want/need to know what calling FFmpeg actually did. I had a raw H.264 file which I could not play; after running it through FFmpeg I can play it.
There are the following differences between the original raw file (which I wrote) and the one written by FFmpeg:
- Header: the FFmpeg file has roughly 0x30 bytes of header
- Footer: the FFmpeg file also has a footer
- Changed prefix and 2 new bytes: while a new video frame in the raw file started with
[NAL Prefix][NAL Unit Byte][Raw Video Data]
in the new file it looks like this:
[0x00 0x00][2 "Random" Bytes][NAL Unit Byte][Raw Video Data].....[0x00 0x00][2 other "Random" Bytes][NAL Unit Byte][Raw Video Data]...
I understand that the video stream needs a container format (correct me if I am wrong, but I assume the new header and footer provide that). But why does it actually change some bytes in the raw data? It can't be decoding, since the stream itself should be decoded by the player, not by FFmpeg.
As you can see, I don't so much need a new solution to my problem as an explanation (so I can explain it myself). What does FFmpeg actually do? And why does it change some bytes within the video data?
-
avcodec/flac_parser : Fix off-by-one error
6 October 2019, by Andreas Rheinhardt

avcodec/flac_parser: Fix off-by-one error
The FLAC parser uses a fifo to buffer its data. Consequently, when searching for sync codes of FLAC packets, one needs to take care of the possibility of wraparound. This is done by using an optimized start code search that works on each of the contiguous buffers separately and by explicitly checking whether the last pre-wrap byte and the first post-wrap byte constitute a valid sync code.

Moreover, the last MAX_FRAME_HEADER_SIZE - 1 bytes ought not to be searched for (the start of) a sync code, because a header found in this region might not be completely available yet. These bytes ought to be searched later on, when more data is available or when flushing.

Unfortunately, there was an off-by-one error in the calculation of the length to search in the post-wrap buffer: it was too large, because the calculation was based on the number of bytes available in the fifo from the last pre-wrap byte onwards. This meant that a header might be parsed twice (once prematurely and once regularly when more data is available); it could also mean that an invalid header is treated as valid (namely if the length of said invalid header is MAX_FRAME_HEADER_SIZE and the invalid byte that is treated as the last byte of this potential header happens to be the right CRC-8).

Should a header be parsed twice, the second instance becomes the best child of the first instance; the first instance's score will be FLAC_HEADER_BASE_SCORE - FLAC_HEADER_CHANGED_PENALTY (= 3) higher than the second instance's score. So the frame belonging to the first instance will be output, and it will be output as a zero-length frame (the difference between the header's offset and the child's offset). This has serious consequences when flushing, as returning a zero-length buffer signals to the caller that no more data will be output; consequently the last frames not yet output are dropped.

Furthermore, a "sample/frame number mismatch in adjacent frames" warning was output when returning the zero-length frame belonging to the first header, because the child's sample/frame number of course didn't match the expected sample/frame number given its parent.

filter/hdcd-mix.flac from the FATE suite was affected by this (the last frame was omitted), which is why several FATE tests needed to be updated.

Fixes ticket #5937.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
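To make the wraparound logic above concrete, here is a minimal illustrative sketch (not FFmpeg's actual implementation; MAX_FRAME_HEADER_SIZE is given a placeholder value) of searching a ring buffer for FLAC sync codes while excluding the tail whose headers could still be incomplete:

#include <stddef.h>
#include <stdint.h>

#define MAX_FRAME_HEADER_SIZE 16  /* placeholder value for this sketch */

/* FLAC frame sync code: the 14 bits 0b11111111111110 across two bytes. */
static int is_sync(uint8_t a, uint8_t b) {
    return a == 0xFF && (b & 0xFC) == 0xF8;
}

/* buf[0..bufsize-1] is the ring storage, start the read position, avail the
 * number of logically contiguous buffered bytes. report() receives the
 * logical offset of each candidate sync code whose header is fully buffered. */
static void search_sync(const uint8_t *buf, size_t bufsize,
                        size_t start, size_t avail,
                        void (*report)(size_t pos)) {
    if (avail < MAX_FRAME_HEADER_SIZE)
        return;
    /* Exclude the last MAX_FRAME_HEADER_SIZE - 1 bytes: a header starting
     * there might not be completely available yet. Deriving this bound from
     * the wrong base (e.g. the bytes left after the last pre-wrap byte) is
     * exactly the kind of off-by-one the commit above fixes. */
    size_t end = avail - (MAX_FRAME_HEADER_SIZE - 1);
    for (size_t i = 0; i < end; i++) {
        /* Modulo indexing makes the pre-/post-wrap seam transparent here;
         * the real parser searches the two linear halves separately for
         * speed and checks the seam pair explicitly. */
        uint8_t a = buf[(start + i) % bufsize];
        uint8_t b = buf[(start + i + 1) % bufsize];
        if (is_sync(a, b))
            report(i);
    }
}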