
Media (1)
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (48)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.
-
Accepted formats
28 January 2010
The following commands give information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv: (...)
Possible output video formats
At first, we (...)
-
Adding notes and captions to images
7 February 2011
To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
On other sites (7384)
-
Is it possible to combine audio and video from ffmpeg-python without writing to files first?
22 January 2021, by nullUser
I'm using the ffmpeg-python library.


I have used the example code (https://github.com/kkroening/ffmpeg-python/tree/master/examples) to asynchronously read in and process audio and video streams. The processing is custom and not something a built-in ffmpeg command can achieve (imagine something like TensorFlow deep dreaming on both the audio and the video). I then want to recombine the audio and video streams that I have created. Currently, the only way I can see to do it is to write both streams out to separate files (as is done, e.g., in this answer: How to combine the video and audio files in ffmpeg-python), then use ffmpeg to combine them afterwards. This has the major disadvantage that the result cannot be streamed, i.e. the audio and video must be completely done processing before you can start playing the combined audio/video. Is there any way to combine them without going through files as an intermediate step?


Technically, the fact that the streams were initially read in from ffmpeg is irrelevant. You may as well assume that I'm in the following situation:


def audio_stream():
    for i in range(10):
        # one second of audio: 44.1 kHz sample rate, 2 channels, s32le (4 bytes per sample)
        yield bytes(44100 * 2 * 4)

def video_stream():
    for i in range(10):
        # one second of video: 60 fps, 1920x1080, rgb24 (3 bytes per pixel)
        yield bytes(60 * 1080 * 1920 * 3)

# How to write both streams of bytes to a file without writing each one separately to a file first?



I would like to use ffmpeg.concat, but this requires ffmpeg.input streams, which only accept filenames as inputs. Is there any other way? Here are the docs: https://kkroening.github.io/ffmpeg-python/.
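
One possible direction (a sketch of my own, not from the original post): start ffmpeg as a subprocess, feed the raw video on stdin and the raw audio through a named pipe (FIFO), and let ffmpeg mux the two without any intermediate regular files. This assumes a POSIX system (for os.mkfifo), the stream parameters from the generators above, and an arbitrary output name out.mp4:

import os
import subprocess
import tempfile
import threading

fifo = os.path.join(tempfile.mkdtemp(), 'audio.fifo')
os.mkfifo(fifo)  # named pipe; POSIX only

cmd = [
    'ffmpeg', '-y',
    # raw video arrives on stdin
    '-f', 'rawvideo', '-pix_fmt', 'rgb24', '-s', '1920x1080', '-r', '60', '-i', 'pipe:0',
    # raw audio arrives through the named pipe
    '-f', 's32le', '-ar', '44100', '-ac', '2', '-i', fifo,
    '-c:v', 'libx264', '-c:a', 'aac',
    'out.mp4',
]
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)

def feed_audio():
    # open() blocks until ffmpeg opens the FIFO for reading
    with open(fifo, 'wb') as f:
        for chunk in audio_stream():
            f.write(chunk)

audio_thread = threading.Thread(target=feed_audio)
audio_thread.start()
for frame in video_stream():
    proc.stdin.write(frame)
proc.stdin.close()
audio_thread.join()
proc.wait()
os.remove(fifo)

Because nothing is buffered to disk here, the output could also be a streamable container written to stdout (for example -f mpegts pipe:1 instead of out.mp4), which would address the streaming concern: ffmpeg begins muxing as soon as both inputs deliver data.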

-
avdevice/avdevice: Deprecate AVDevice Capabilities API
24 January 2021, by Andreas Rheinhardt
avdevice/avdevice: Deprecate AVDevice Capabilities API

It has been added in 6db42a2b6b22e6f1928fafcf3faa67ed78201004, yet since then none of the necessary create/free_device_capabilities functions has been implemented, making this API completely useless.

Because of this, one can already simplify avdevice_capabilities_free/create and remove the function pointers at the next major bump; given that the documentation explicitly states that av_device_capabilities is not to be used by a user, its options can already be removed (save for the sentinel).

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
-
How to create a spectrogram image from an audio file in Python just like how FFMPEG does?
2 May 2020, by hamandishe Mk
My code:



import matplotlib.pyplot as plt
import librosa
import librosa.display
import numpy as np
import io
from PIL import Image

samples, sample_rate = librosa.load('thabo.wav')

# Figure without axes, ticks or frame, so only the spectrogram is drawn
fig = plt.figure(figsize=[4, 4])
ax = fig.add_subplot(111)
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
ax.set_frame_on(False)

# Mel-scaled power spectrogram, converted to dB for display
S = librosa.feature.melspectrogram(y=samples, sr=sample_rate)
librosa.display.specshow(librosa.power_to_db(S, ref=np.max))

# Render into an in-memory buffer and open it as a PIL image
buf = io.BytesIO()
plt.savefig(buf, bbox_inches='tight', pad_inches=0)
# plt.close('all')
buf.seek(0)
im = Image.open(buf)
# im = Image.open(buf).convert('L')
im.show()
buf.close()




Spectrogram produced (image omitted)
Using FFmpeg:

ffmpeg -i thabo.wav -lavfi showspectrumpic=s=224x224:mode=separate:legend=disabled spectrogram.png
Spectrogram produced (image omitted)
Please help; I want a spectrogram that is exactly the same as the one produced by FFmpeg, for use with a speech recognition model exported from Google's Teachable Machine (offline recognition).
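
One direction that may help (a sketch of my own, not from the original post): ffmpeg's showspectrumpic draws a linear-frequency STFT spectrogram, while the code above draws a mel-scaled one, so the two can never match exactly. The sketch below assumes the same thabo.wav and renders a linear-frequency spectrogram at exactly 224x224 pixels; the output name spectrogram_py.png is arbitrary:

import numpy as np
import librosa
import matplotlib.pyplot as plt

samples, sample_rate = librosa.load('thabo.wav')

# Linear-frequency magnitude spectrogram, closer to showspectrumpic than mel
D = np.abs(librosa.stft(samples))
S_db = librosa.amplitude_to_db(D, ref=np.max)

# figsize (inches) * dpi = pixels, so 2.24 in * 100 dpi = exactly 224 px
fig = plt.figure(figsize=(224 / 100, 224 / 100), dpi=100)
ax = fig.add_axes([0, 0, 1, 1])  # axes fill the whole canvas, no margins
ax.axis('off')
ax.imshow(S_db, origin='lower', aspect='auto')
fig.savefig('spectrogram_py.png', dpi=100)
plt.close(fig)

Even then, the colour map and amplitude scaling will still differ from ffmpeg's; if the Teachable Machine model was trained on ffmpeg-generated spectrograms, the most reliable option is to generate the inference spectrograms with the same ffmpeg command.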