
Other articles (64)
-
Updating from version 0.1 to 0.2
24 June 2013 — An explanation of the notable changes made when moving MediaSPIP from version 0.1 to version 0.3. What's new?
Regarding software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
Customizing by adding your own logo, banner or background image
5 September 2013 — Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013 — Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: for a document of the news type, the fields offered by default are: publication date (customize the publication date) (...)
On other sites (10139)
-
How do I use ffmpeg with Python by passing File Objects (instead of locations to files on disk)
1 May 2012, by Lyle Pratt — I'm trying to use ffmpeg with Python's subprocess module to convert some audio files. I grab the audio files from a URL and would like to be able to pass the Python file objects to ffmpeg directly, instead of first saving them to disk. It would also be very nice if I could just get back a file stream instead of having ffmpeg save the output to a file.
For reference, this is what I'm doing now:
import requests
import subprocess

tmp = "/dev/shm/"
audio_wav_file = requests.get(audio_url)
## ## ##
## This is what I don't want to have to do ##
wavfile = open(tmp + filename, 'wb')
wavfile.write(audio_wav_file.content)
wavfile.close()
## ## ##
conversion = subprocess.Popen('ffmpeg -i "' + tmp + filename + '" -y "' + tmp + filename_noext + '.flac" 2>&1',
                              shell=True, stdout=subprocess.PIPE).stdout.read()

Does anyone know how to do this?
Thanks!
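One way to avoid the temporary file entirely, sketched below rather than taken from the question, is to let ffmpeg read from stdin and write to stdout through its pipe: protocol. The audio_url value here is a placeholder, and -f flac must be given explicitly because ffmpeg cannot infer the output container without a filename.

import subprocess
import requests

audio_url = "http://example.com/input.wav"   # placeholder URL, assumed for illustration
wav_bytes = requests.get(audio_url).content

# "pipe:0" makes ffmpeg read the WAV data from stdin; "pipe:1" sends the
# converted result to stdout, so nothing touches the disk.
proc = subprocess.Popen(
    ["ffmpeg", "-i", "pipe:0", "-f", "flac", "pipe:1"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
flac_bytes, _ = proc.communicate(input=wav_bytes)

The FLAC data then comes back as an in-memory bytes object instead of a file on disk.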
-
ffmpeg SDP file for Darwin Streaming Server
10 September 2012, by SP Sandhu — I am making a streaming server to view the live video feed from my webcam on my mobile device.
I considered using ffmpeg, VLC and DSS, and put together the following setup, which worked somewhat, although frames were skipped:
video4linux2 > ffserver > VLC transcoding > DSS
(RAW to ffserver) > (outputs to SDP link) > (SDP link to SDP file) > (SDP file to live streaming to mobile)
Later, on testing, I found VLC to be very inefficient and slow on my netbook (Intel Atom N480), as it skips a lot of frames.
DSS can stream an SDP file from its default /usr/local/movies directory.
At the same time, ffmpeg's ffserver module can stream a live feed to an SDP link (not an SDP file).
My requirement is to create an SDP file in DSS's /usr/local/movies directory so that it can be handed to DSS for streaming.
So, how do I create an SDP file with ffmpeg, or turn an SDP link into an SDP file, without using VLC's transcoding?
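For reference, ffmpeg can emit the session description itself: when the output is an RTP URL it prints the SDP on standard output (prefixed with an "SDP:" marker line in many builds), and newer builds also accept an -sdp_file option to write it straight to a file. The line below is only a rough sketch under assumptions: the v4l2 device, RTP address and port, and output path are placeholders, not taken from the question.

# Encode the webcam to H.264, stream it over RTP, and redirect stdout so the
# SDP text ffmpeg prints lands where DSS looks for it (strip the leading
# "SDP:" line if your build adds one).
ffmpeg -f video4linux2 -i /dev/video0 -an \
       -c:v libx264 -preset ultrafast -tune zerolatency \
       -f rtp rtp://127.0.0.1:5004 > /usr/local/movies/webcam.sdp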
-
FFmpeg screencast recording: which codecs to use?
24 April 2013, by mkaito — I've been experimenting with recording screencasts using FFmpeg's x11grab module, which has worked more or less fine so far. I understand that a/v encoding is a complex process with many fine details, but I'm doing my best to learn.
I'd like to do "lightweight" recording of a video stream, that puts as little strain as possible on the system while the stream is being recorded. I record two audio streams separately with pacat and sox. Later, the whole thing is filtered, normalized, encoded, and combined into a Matroska container.
Right now, I'm having ffmpeg record a rawvideo stream to be fed to x264's yuv4 demuxer. I experimented with ffv1 and straight x264 recording before. My system can't handle real-time encoding with x264 at the settings I want for the final stream, so I have to recompress separately once the recording is done. I've found that ffv1 gives me terrible frame dropping, and yuv4 does too, but less so. I suspect this is due to hard-drive speed, even though I'm writing to a SATA3 Caviar Black that's used exclusively to hold the recorded data.
The question is, which combination of video codecs should I look at? Record straight to x264 and recompress to "better" x264 later? Raw video, then compress? How would I go about pinpointing issues such as the frame drops I've been experiencing?
EDIT: This is the ffmpeg line I currently use.
ffmpeg -v warning -f x11grab -s 1920x1080 -r 30000/1001 -i :0.0\
-vcodec rawvideo -pix_fmt yuv420p -s 1280x720\
-threads 0\
recvideo.y4m
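One common answer to the "record cheaply now, compress properly later" part, sketched here with assumed file names and quality settings rather than anything from the question, is to let libx264 do a lossless ultrafast pass at capture time (far less disk traffic than raw video) and recompress offline afterwards.

# Capture pass: lossless x264 (-qp 0) with the ultrafast preset, so CPU and
# disk load stay low while nothing is thrown away before the second pass.
ffmpeg -f x11grab -s 1920x1080 -r 30000/1001 -i :0.0 \
       -c:v libx264 -preset ultrafast -qp 0 capture.mkv

# Offline pass: recompress with the final size and quality settings.
ffmpeg -i capture.mkv -s 1280x720 -c:v libx264 -preset slower -crf 18 final.mkv

Comparing how each capture run behaves with the same ffmpeg invocation also helps narrow down whether the frame drops come from encoding CPU load or from disk bandwidth.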