
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (52)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match the chosen theme.
These technologies make it possible to deliver video and audio both on conventional computers (...)
On other sites (11358)
-
Using Flutter FFMPEG_KIT to convert a sequence of RGBA images into an MP4 video
30 April 2023, by jdevp2
I'm trying to convert a sequence of RGBA byte images into an MP4 video using the Flutter FFmpegKit package. The following code snippet gives me an error, and I'm not sure which options I should use to convert a set of raw rgba images to a video. I appreciate any help.

static Future<void> videoEncoder() async {
  appTempDir = '${(await getTemporaryDirectory()).path}/workPath';

  FFmpegKit.execute(
          '-hide_banner -y -f rawvideo -pix_fmt rgba -i appTempDir/input%d.rgba -c:v mpeg4 -r 12 appTempDir/output.mp4')
      .then((session) async {
    final returnCode = await session.getReturnCode();
    if (ReturnCode.isSuccess(returnCode)) {
      print('SUCCESS');
    } else if (ReturnCode.isCancel(returnCode)) {
      print('CANCEL');
    } else {
      print('ERROR');
    }
  });
}


The image rgba data is saved via the following code snippet.

Utils.getBuffer(renderKey).then((value) async {
  ui.Image buffer = value;
  final data = await buffer.toByteData(format: ui.ImageByteFormat.rawRgba);
  Uint8List uData = data!.buffer.asUint8List();
  VideoUtil.saveImageFileToDirectory(uData, 'input$frameNum.rgba');
  frameNum++;
});
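
One likely issue with the execute() call above: raw RGBA frames carry no header, so ffmpeg has to be told the frame geometry; the %d file pattern is expanded by the image2 demuxer, not by -f rawvideo; and appTempDir is never interpolated into the command string. Below is a minimal, unverified sketch of how the call inside videoEncoder() might look, where the frame size is an assumed placeholder that must match the captured ui.Image dimensions.

// Sketch only: w and h are assumed placeholders, not values from the post;
// they must equal buffer.width / buffer.height of the saved frames.
const w = 720, h = 1280;
final cmd = '-hide_banner -y '
    // image2 expands the %d pattern; the forced rawvideo decoder interprets the bytes
    '-f image2 -c:v rawvideo -pixel_format rgba -video_size ${w}x$h -framerate 12 '
    '-i $appTempDir/input%d.rgba '
    '-c:v mpeg4 -pix_fmt yuv420p $appTempDir/output.mp4';
await FFmpegKit.execute(cmd);

Alternatively, appending all frames into a single .rgba file keeps -f rawvideo usable, with the same -pixel_format, -video_size and -framerate input options.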



-
Using HEVC with Alpha to Compose Moviepy Video
3 April, by James Grace
I'm using moviepy, PIL and numpy and trying to compose a video from 3 components: a background image that is a PNG with no transparency, an overlay video that is HEVC with alpha, and a primary clip produced from a collection of PNG images with transparency.


The video is composed as background + overlay video + main video.


The problem I'm having is that the overlay video has a black background, so the background image is covered completely. Moviepy is able to import the HEVC video successfully, but it seems as though the alpha channel is lost on import.


Any ideas?


Here's my code:


from PIL import Image
import moviepy.editor as mpe
import numpy as np

def CompileVideo():

    frames = ["list_of_png_files_with_transparency"]
    fps = 30.0
    clips = [mpe.ImageClip(np.asarray(Image.open(frame))).set_duration(1 / int(fps)) for frame in frames]
    ad_clip = mpe.concatenate_videoclips(clips, method="compose")
    bg_clip = mpe.ImageClip(np.asarray(Image.open("path_to_background_file_no_transparency"))).set_duration(ad_clip.duration)

    overlay_clip = mpe.VideoFileClip("path_to_HEVC_with_Alpha.mov")

    comp = [bg_clip, overlay_clip, ad_clip]

    final = mpe.CompositeVideoClip(comp).set_duration(ad_clip.duration)
    final.write_videofile("output.mp4", fps=fps)
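
One thing worth checking, not stated in the post: by default VideoFileClip discards any alpha in the source, so transparent regions decode as black. moviepy can be asked to read the alpha channel as the clip's mask with has_mask=True, assuming the underlying ffmpeg build actually decodes the HEVC alpha layer; if it does not, re-exporting the overlay as ProRes 4444 or a PNG sequence is a common workaround. A minimal sketch, reusing bg_clip, ad_clip and fps from the code above:

import moviepy.editor as mpe

# Sketch only: has_mask=True makes moviepy request RGBA frames from ffmpeg and
# use the fourth channel as the clip's mask, instead of discarding it.
overlay_clip = mpe.VideoFileClip("path_to_HEVC_with_Alpha.mov", has_mask=True)

final = mpe.CompositeVideoClip([bg_clip, overlay_clip, ad_clip]).set_duration(ad_clip.duration)
final.write_videofile("output.mp4", fps=fps)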



-
JavaCV FFmpegFrameGrabber preload audio
1 July 2015, by James
I have an application that streams video data to an RTMP server using javacv's FFmpegFrameRecorder. I want to add some audio to this stream from a separate file: a short sound clip that I want to play on repeat.
Given that the sound clip is very short, I want to preload the audio data into memory and just loop over it, so I can avoid excessive IO, etc.
I've attempted to add audio to the stream using javacv's FFmpegFrameGrabber, as described in multiple tutorials.
The addition of audio works perfectly if I don't attempt to preload/cache any of the audio data, for example:
private FFmpegFrameRecorder frameRecorder;
private FFmpegFrameGrabber frameGrabber;
...
// frameRecorder and frameGrabber setup during initialization
...
public void record(IplImage image) {
    try {
        frameRecorder.record(image);
        Frame frame = frameGrabber.grabFrame();
        if (frame == null) {
            frameGrabber = new FFmpegFrameGrabber("audioFileHere.wav");
            frameGrabber.start();
            frame = frameGrabber.grabFrame();
        }
        frameRecorder.record(frame);
    } catch (FrameRecorder.Exception e) {
        log.error(getMarker(FATAL), "Can't record frame!", e);
    } catch (FrameGrabber.Exception e) {
        log.error(getMarker(FATAL), "Can't record frame!", e);
    }
}

However, if I try to preload the audio data I get garbage sound being played:
private FFmpegFrameRecorder frameRecorder;
private List<FrameData> audioData;

private static final class FrameData {
    public final Buffer[] samples;
    public final Integer sampleRate;
    public final Integer audioChannels;
    // Constructors, getters and setters here
}
...
// frameRecorder setup during initialization
audioData = new ArrayList<>();
FFmpegFrameGrabber audioGrabber = new FFmpegFrameGrabber("audioFileHere.wav");
try {
    audioGrabber.start();
    Frame frame;
    while ((frame = audioGrabber.grabFrame()) != null) {
        Buffer[] buffers = frame.samples;
        Buffer[] copiedBuffers = new Buffer[buffers.length];
        for (int i = 0; i < buffers.length; i++) {
            copiedBuffers[i] = ((ShortBuffer) buffers[i]).duplicate();
        }
        FrameData frameData = new FrameData(copiedBuffers, frame.sampleRate, frame.audioChannels);
        audioData.add(frameData);
    }
} catch (FrameGrabber.Exception e) {
    e.printStackTrace();
}
...
private int frameCount = 0;

public void record(IplImage image) {
    frameCount++;
    try {
        FrameData frameData = audioData.get(frameCount % audioData.size());
        frameRecorder.record(image);
        frameRecorder.record(frameData.sampleRate, frameData.audioChannels, frameData.samples);
    } catch (FrameRecorder.Exception e) {
        log.error(getMarker(FATAL), "Can't record frame!", e);
    }
}
NOTE: I have to deep copy the Frame object because FFmpegFrameGrabber.grabFrame() recycles a single Frame object.

Can someone explain why this doesn't work and/or how I could achieve the desired result?
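
A plausible explanation, not confirmed in the post: ShortBuffer.duplicate() creates a new buffer object that still shares the original backing memory, and since grabFrame() recycles a single Frame (and its underlying sample buffers), every cached FrameData ends up pointing at whatever was decoded last, which would produce exactly this kind of garbage audio. A hedged sketch of a true deep copy, assuming the samples are java.nio.ShortBuffers as in the WAV case above:

// Sketch only: allocate independent storage and copy the sample data itself,
// so the cached audio survives the grabber reusing its internal buffers.
private static Buffer[] deepCopySamples(Buffer[] samples) {
    Buffer[] copies = new Buffer[samples.length];
    for (int i = 0; i < samples.length; i++) {
        ShortBuffer src = (ShortBuffer) samples[i];
        ShortBuffer dst = ShortBuffer.allocate(src.remaining()); // new backing array
        dst.put(src.duplicate()); // bulk-copy without disturbing src's position
        dst.flip();               // rewind so the data can be read when recorded
        copies[i] = dst;
    }
    return copies;
}

// In the caching loop, instead of duplicate():
// audioData.add(new FrameData(deepCopySamples(frame.samples), frame.sampleRate, frame.audioChannels));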