
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (103)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.
-
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in MP4, OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
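Purely as an illustration (not MediaSPIP's actual code), a minimal Python sketch of such a conversion step could call a locally installed ffmpeg through subprocess; the file names and the libvpx/libx264/aac encoder choices are assumptions:

import subprocess

def encode_for_web(source, base):
    # Illustrative targets: WebM and MP4 for HTML5 playback; availability of
    # these encoders depends on the local ffmpeg build.
    jobs = [
        ["ffmpeg", "-y", "-i", source, "-c:v", "libvpx", "-c:a", "libvorbis", base + ".webm"],
        ["ffmpeg", "-y", "-i", source, "-c:v", "libx264", "-c:a", "aac", base + ".mp4"],
    ]
    for cmd in jobs:
        subprocess.run(cmd, check=True)

encode_for_web("upload.avi", "upload_web")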
-
Accepted formats
28 January 2010, by
The following commands give information about the formats and codecs handled by the local ffmpeg installation (a short Python sketch of checking this follows this excerpt):
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
To begin with, we (...)
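Not part of the original article, just a minimal sketch of how one might look for a given codec in that listing from Python; the h264 query is only an example and the sketch assumes ffmpeg is on the PATH:

import subprocess

def ffmpeg_supports_codec(name):
    # List codecs with "ffmpeg -codecs" and look for the name as a token.
    out = subprocess.run(["ffmpeg", "-hide_banner", "-codecs"],
                         capture_output=True, text=True, check=True)
    return any(name in line.split() for line in out.stdout.splitlines())

print(ffmpeg_supports_codec("h264"))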
On other sites (8341)
-
Is it possible to catch ffmpeg errors with Python?
4 April 2019, by Elros Romeo
Hi, I'm trying to make a video converter for Django with Python. I forked the django-ffmpeg module, which does almost everything I want, except that it doesn't catch the error if the conversion fails.
Basically the module passes the ffmpeg command for the conversion to the command-line interface, like this:
/usr/bin/ffmpeg -hide_banner -nostats -i %(input_file)s -target film-dvd %(output_file)
The module uses this method to pass the ffmpeg command to the CLI and get the output:
def _cli(self, cmd, without_output=False):
    # Python 2 code: the 'commands' module and the print statement only exist there.
    print 'cli'
    if os.name == 'posix':
        import commands
        return commands.getoutput(cmd)
    else:
        import subprocess
        if without_output:
            DEVNULL = open(os.devnull, 'wb')
            subprocess.Popen(cmd, stdout=DEVNULL, stderr=DEVNULL)
        else:
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
            return p.stdout.read()
But for example, if you upload a corrupted video file, it only returns the ffmpeg message printed on the CLI; nothing is triggered to indicate that something failed.
This is a sample ffmpeg output when the conversion failed:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x237d500] Format mov,mp4,m4a,3gp,3g2,mj2 detected only with low score of 1, misdetection possible!
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x237d500] moov atom not found
/home/user/PycharmProjects/videotest/media/videos/orig/270f412927f3405aba041265725cdf6b.mp4: Invalid data found when processing input
I was wondering if there's any way to turn that into an exception, and how, so I can handle it easily.
The only option that came to mind is to search for "Invalid data found when processing input" in the CLI output string, but I'm not sure this is the best approach. Can anyone help and guide me with this, please?
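One common way to surface such failures, sketched here as an illustration rather than as django-ffmpeg's actual fix, is to run ffmpeg with subprocess.run, capture stderr, and raise on a non-zero exit code; the ConversionError class and the file names are assumptions:

import subprocess

class ConversionError(Exception):
    # Hypothetical exception type, just for this sketch.
    pass

def convert(cmd):
    # Run ffmpeg, capture its stderr, and raise when the exit code is non-zero.
    result = subprocess.run(cmd, stderr=subprocess.PIPE, text=True)
    if result.returncode != 0:
        raise ConversionError(result.stderr.strip())
    return result

convert(["/usr/bin/ffmpeg", "-hide_banner", "-nostats",
         "-i", "input.mp4", "-target", "film-dvd", "output.mpg"])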
-
vulkan_decode: use a single execution pool
3 December 2024, by Lynne
vulkan_decode: use a single execution pool
Originally, the decoder had a single execution pool, with one execution context per thread. Execution pools were always intended to be thread-safe, as long as there were enough execution contexts in the pool to satisfy all threads.
Due to synchronization issues, the threading part was removed at some point, and, for decoding, each thread had its own execution pool. Having a single execution pool per context is hacky, not to mention wasteful.
Most importantly, we *cannot* associate single shaders across multiple execution pools for a single application. This means that we cannot use shaders to either apply film grain, or use this framework for software-defined decoders.
The recent commits added threading capabilities back to the execution pool, and the number of contexts in each pool was increased. This was done with the assumption that the execution pool was singular, which it was not. This led to increased parallelism and number of frames in flight, which is taxing on memory.
This commit finally restores proper threading behaviour.
The validation layer has issues that are reported and addressed in the earlier commit.
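As a loose analogy only, and not FFmpeg's Vulkan code, the idea of one shared pool of execution contexts handed out to many threads (rather than one pool per thread) might be sketched like this; every name here is hypothetical:

import threading

class ExecutionPool:
    # Hypothetical sketch: a single shared pool holding enough contexts for
    # all threads, handed out and returned under a lock.
    def __init__(self, num_contexts):
        self._free = list(range(num_contexts))
        self._lock = threading.Lock()
        self._available = threading.Condition(self._lock)

    def acquire(self):
        with self._available:
            while not self._free:
                self._available.wait()
            return self._free.pop()

    def release(self, ctx):
        with self._available:
            self._free.append(ctx)
            self._available.notify()

# One pool shared by every decoding thread, instead of one pool per thread.
pool = ExecutionPool(num_contexts=8)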
-
ffmpeg: concat videos and images
23 May 2016, by Yosko
I have 2 video files (same resolution, same encoding) that I want to concat, and I want to insert some text between them for 3 seconds, as a splitter. I'm doing this with ffmpeg on Windows.
Optional ideas that I would be interested in:
- avoiding re-encoding the videos in the process
- having a fade in / fade out at the junction of each part
For now, I made the text as an image (but I am open to other suggestions). Let's say I have:
- video1.mp4: 6:33
- splitter.png (same resolution as video1.mp4)
- video2.mp4: 16:44
I have tried a few things, but I always end up with the same problem: the video is 23:20 (video1 + 3 seconds + video2), but the 3-second gap is just the last video1 frame frozen instead of my image/text...
Any idea what I did wrong or how I should achieve this?
Here is what I have tried so far:
Method 1: image to video
Turn the image into a 3-second mp4 film, then concat (demuxer) it with the others:
ffmpeg -loop 1 -f image2 -i splitter.png -r 30 -t 3 splitter.mp4
ffmpeg -f concat -i input.txt -codec copy output.mp4
Where input.txt looks like:
file 'E:\video1.mp4'
file 'E:\splitter.mp4'
file 'E:\video2.mp4'
The content of splitter.png is visible in splitter.mp4, but not in output.mp4. Also, I'm not entirely sure splitter.mp4 respects the exact same encoding as the 2 videos, and I don't know how to verify that.
Method 2: insert image frames
Directly list the image 90 times (30fps -> 3 seconds) in the concat (demuxer) input:
ffmpeg -f concat -i input.txt -codec copy output.mp4
Where input.txt looks like:
file 'E:\video1.mp4'
file 'E:\splitter.png'
...
file 'E:\splitter.png'
file 'E:\video2.mp4'
Edit: possible solution?
Since all I'm doing is screencasting, I might as well screencast my splitter image. This way I would be sure of the audio and video encoding, wouldn't have any problem merging, and wouldn't need any re-encoding... I know it might sound dumb, but it would probably do the trick...
Note: I didn't try it, since I had already done the work through Openshot.
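Not a verified answer to the question above, just a sketch of the direction one might take: a common cause of the frozen frame with the concat demuxer and -codec copy is that splitter.mp4 does not match the other clips' stream parameters (for instance, it has no audio track). One might therefore generate the splitter with explicitly matching settings before concatenating; the H.264/AAC, 30 fps, yuv420p choices and the simplified file names below are assumptions about what the screencasts use:

import subprocess

# Render splitter.png into a 3-second clip with a silent audio track and
# video settings intended to match the screencasts (assumed H.264/AAC, 30 fps).
subprocess.run([
    "ffmpeg", "-y",
    "-loop", "1", "-i", "splitter.png",
    "-f", "lavfi", "-i", "anullsrc=channel_layout=stereo:sample_rate=44100",
    "-t", "3", "-r", "30",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "-c:a", "aac",
    "splitter.mp4",
], check=True)

# Write the concat list and run the demuxer as in the question.
with open("input.txt", "w") as f:
    f.write("file 'video1.mp4'\nfile 'splitter.mp4'\nfile 'video2.mp4'\n")
subprocess.run(["ffmpeg", "-y", "-f", "concat", "-i", "input.txt",
                "-codec", "copy", "output.mp4"], check=True)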