
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (59)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that lead to the problem; and a link to the site/page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
On other sites (4364)
-
How to map frames extracted with ffmpeg to the subtitles of a video? (frame accuracy problem)
14 November 2019, by Abitbol
I would like to generate text files for frames extracted with ffmpeg, each containing the subtitle shown in that frame (if any), for a video into which I have also burned the subtitles using ffmpeg.
I use a Python script with pysrt to open the SubRip file and generate the text files.
ffmpeg names each extracted frame with its frame number, and since frames are extracted at a constant rate, I can easily retrieve the time position of a frame using the formula t1 = fnum/fps, where fnum is the frame number taken from the filename and fps is the rate passed to ffmpeg for the frame extraction.
Even though I am using the same subtitle file to retrieve the text positions in the timeline as the one that was burned into the video, I still get accuracy errors: some text files are missing and some are present that shouldn't be.
Because time is not really continuous when talking about frames, I have tried recalibrating t using the fps of the video with the hardcoded subtitles; let's call it vfps for video fps (I have ensured that the video fps is the same before and after subtitle burning). This gives the formula: t2 = int(t1*vfps)/vfps.
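For reference, here is a minimal sketch of the lookup described above, assuming the subtitle file is named sub.srt and frames were extracted at fps=4 from a 30 fps video; the helper subtitle_at_time() is hypothetical:
import pysrt

fps = 4     # extraction rate passed to ffmpeg
vfps = 30   # frame rate of the video with hardcoded subtitles
subs = pysrt.open("sub.srt")

def subtitle_at_time(t_seconds):
    """Return the subtitle text displayed at t_seconds, or None."""
    ms = int(t_seconds * 1000)
    for sub in subs:
        if sub.start.ordinal <= ms <= sub.end.ordinal:
            return sub.text
    return None

fnum = 166                  # frame number parsed from the filename
t1 = fnum / fps             # naive timestamp
t2 = int(t1 * vfps) / vfps  # timestamp snapped to the video frame grid
print(subtitle_at_time(t1), subtitle_at_time(t2))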
Even so, this is still not 100% accurate.
For example, my video is at 30 fps (vfps=30) and I extracted frames at 4 fps (fps=4).
The extracted frame 166 (fnum=166) shows no subtitle. In the SubRip file, the previous subtitle ends at t_prev=41.330 and the next one begins at t_next=41.400, which means that t_sub should satisfy t_prev < t_sub and t_sub < t_next, but I can't make this happen.
Formulas I have tried:
t1 = fnum/fps                        # 41.5 > t_next
t2 = int(fnum*vfps/fps)/vfps         # 41.5 > t_next
# is it because of an indexing problem? No:
t3 = (fnum-1)/fps                    # 41.25 < t_prev
t4 = int((fnum-1)*vfps/fps)/vfps     # 41.23333333 < t_prev
t5 = int(fnum*vfps/fps - 1)/vfps     # 41.466666 > t_next
t6 = int((fnum-1)*vfps/fps + 1)/vfps # 41.26666 < t_prev

Command used:
# burning subtitles
# (previously)
# ffmpeg -r 25 -i nosub.mp4 -vf subtitles=sub.srt withsub.mp4
# now:
ffmpeg -i nosub.mp4 -vf subtitles=sub.srt withsub.mp4

# frames extraction
ffmpeg -i withsub.mp4 -vf fps=4 extracted/%05d.bmp -hide_banner

Why does this happen and how can I solve this?
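As a side note on the arithmetic above, a quick sketch (using only the numbers from the example, and assuming that extracted frame fnum covers roughly the span [(fnum-1)/fps, fnum/fps) rather than a single instant) shows that the span of frame 166 overlaps both neighbouring subtitles as well as the gap between them, which is consistent with the point formulas landing on either side of the gap:
fps = 4
fnum = 166
t_prev_end = 41.330    # end of the previous subtitle
t_next_start = 41.400  # start of the next subtitle

frame_start = (fnum - 1) / fps   # 41.25
frame_end = fnum / fps           # 41.5

def overlaps(a_start, a_end, b_start, b_end):
    """True if the half-open intervals [a_start, a_end) and [b_start, b_end) intersect."""
    return a_start < b_end and b_start < a_end

print(overlaps(frame_start, frame_end, 0.0, t_prev_end))    # True: span reaches into the previous subtitle
print(overlaps(frame_start, frame_end, t_next_start, 1e9))  # True: span reaches into the next subtitle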
One thing I have noticed: if I extract frames from both the original video and the subtitled one and compute the difference of the frames, the result is not only the subtitles; there are also variations in the background (which shouldn't happen). If I do the same experiment using the same video twice, the difference is null, which means the frame extraction is consistent.
Code for the difference:
ffmpeg -i withsub.mp4 -vf fps=4 extracted/%05d.bmp -hide_banner
ffmpeg -i no_sub.mp4 -vf fps=4 extracted_no_sub/%05d.bmp -hide_banner
for img in extracted_no_sub/*.bmp; do
    convert extracted/${img##*/} $img -compose minus -composite diff/${img##*/}
done

Thanks.
-
Proprietary codecs on Linux. What is legal?
17 October 2016, by George Eco
So, assuming we have a distribution without proprietary codecs installed. Let's take Linux Mint for example. I want to store and play back wav and ogg sounds, either using my own software or another developer's software. So far so good, right?
Now imagine the following scenario: for some reason, I want to play back a file that is an mp4, mp3, mpeg or any other format made with proprietary codecs. I will immediately need a codec for these formats.
I read somewhere that Fluendo sells solutions for "legal codec usage" for Linux distros.
URL of Fluendo: http://www.fluendo.com/en/
So here come the questions:
Using VLC and ffmpeg is enough for me to convert a file to ogg or ogv so I can play back a song or a video using an open format. You can also play back files made with proprietary formats. But are VLC and ffmpeg legal to use for playing back such files made with proprietary codecs? For example, are VLC's codecs okay to use without paying anyone for mp4 playback? Is it okay to convert a file from mp4 to ogv? (See the conversion sketch after this question.)
If not, are there any legal, open-source and free (as in freedom) codecs around that can solve the issue, or does someone have to pay the developers of the proprietary codecs for a product in order to be ethically correct?
Note that I am not asking about Windows, since codec licenses are included in the price of the operating system. I am asking exclusively about a free Linux distribution.
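As a purely technical illustration of the conversion mentioned in the question above (not legal advice), here is a minimal sketch that shells out to ffmpeg with the free Theora and Vorbis encoders; the file names are placeholders:
import subprocess

# Convert an mp4 encoded with proprietary codecs into an Ogg container using free codecs.
subprocess.run(
    [
        "ffmpeg",
        "-i", "input.mp4",    # source file (placeholder name)
        "-c:v", "libtheora",  # free video codec
        "-q:v", "7",          # Theora quality, scale 0-10
        "-c:a", "libvorbis",  # free audio codec
        "-q:a", "5",          # Vorbis quality
        "output.ogv",         # output file (placeholder name)
    ],
    check=True,
)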
-
avformat/dashdec, hls: Update correct pointer to AVDictionary
7 September 2020, by Andreas Rheinhardt
avformat/dashdec, hls: Update correct pointer to AVDictionary
open_url() in the DASH as well as in the HLS demuxer share a common bug: they modify an AVDictionary (i.e. set a new entry) given to them as AVDictionary *, yet if this new entry leads to reallocation and relocation of the AVDictionary, the caller's pointer will become dangling, leading to use-after-frees. So pass an AVDictionary ** instead.
(With the current implementation of AVDictionary, the above can only happen if the AVDictionary was empty initially (in which case the new AVDictionary leaks); furthermore, if the I/O is ordinary (i.e. opened by avio_open2() or ffio_open_whitelist()), the dict is never empty (it contains an rw_timeout entry from save_avio_options()). So this issue could only happen if the caller sets a non-default io_open callback but no AVIOContext (the AVFMT_FLAG_CUSTOM_IO flag won't be set in this case). In the case of the HLS demuxer, it was also necessary that setting the "seekable" entry failed. Yet one should simply not rely on internals of the AVDict API.)
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>