
Other articles (61)
-
Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
To enable support for new languages, you need to go to the "Administer" section of the site.
From there, the navigation menu gives access to a "Language management" section where support for additional languages can be activated.
Each newly added language can still be deactivated as long as no object has been created in that language; once one exists, the language is greyed out in the configuration and (...)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own via the form at the bottom of the page.
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used as a fallback.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and audio both to conventional computers (...)
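As a minimal illustration of how such a player typically chooses between the HTML5 and Flash paths (a generic sketch, not MediaSPIP's actual player code; the fallback hook is hypothetical):

// Generic HTML5-video capability check; falls back to Flash when unsupported.
var video = document.createElement('video');
var hasHtml5Video = !!(video.canPlayType && video.canPlayType('video/mp4; codecs="avc1.42E01E"'));
if (!hasHtml5Video) {
    // Embed a Flash player such as Flowplayer here instead of a <video> element.
}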
On other sites (4825)
-
Precise method of segmenting & transcoding video+audio (via FFmpeg) into an on-demand HLS stream?
17 November 2019, by Felix
Recently I've been messing around with FFmpeg and streams through Node.js. My ultimate goal is to serve a transcoded video stream - from any input filetype - via HTTP, generated in real time as it's needed, in segments.
I’m currently attempting to handle this using HLS. I pre-generate a dummy m3u8 manifest using the known duration of the input video. It contains a bunch of URLs that point to individual constant-duration segments. Then, once the client player starts requesting the individual URLs, I use the requested path to determine which time range of video the client needs. Then I transcode the video and stream that segment back to them.
Now for the problem: this approach mostly works, but has a small audio bug. Currently, with most test input files, my code produces a video that - while playable - has a very small (< 0.25 second) audio skip at the start of each segment.
I think this may be an issue with time-based splitting in FFmpeg, where the audio stream possibly cannot be sliced at exactly the same point as the video frames. So far, I've been unable to figure out a solution to this problem.
If anybody has any direction they can steer me in - or even a pre-existing library/server that solves this use case - I'd appreciate the guidance. My knowledge of video encoding is fairly limited.
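(For context on that hypothesis, here is a rough, untested sketch of the kind of one-pass alternative I've been weighing: forcing keyframes on segment boundaries and letting FFmpeg's segment muxer do the splitting, so audio and video stay on one continuous timeline. It gives up the on-demand property, and the output path is illustrative only.)

// Untested sketch, not my server code below: one-pass segmenting via
// FFmpeg's segment muxer ('segments/out%03d.ts' is an illustrative path).
const ffmpeg = require('fluent-ffmpeg');
new ffmpeg({source: 'C:\\input-file.mp4'})
    .outputOptions('-force_key_frames expr:gte(t,n_forced*5)') // Keyframe every 5 s.
    .outputOptions('-f segment')
    .outputOptions('-segment_time 5')
    .outputOptions('-segment_format mpegts')
    .output('segments/out%03d.ts')
    .run();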
I'll include an example of my relevant current code below, so others can see where I'm stuck. You should be able to run this as a Node.js Express server, then point any HLS player at localhost:8080/master to load the manifest and begin playback. See the
transcode.get('/segment/:seg.ts'
line at the end for the relevant transcoding bit.

'use strict';
const express = require('express');
const ffmpeg = require('fluent-ffmpeg');
let PORT = 8080;
let HOST = 'localhost';
const transcode = express();
/*
* This file demonstrates an Express-based server, which transcodes & streams a video file.
* All transcoding is handled in memory, in chunks, as needed by the player.
*
* It works by generating a fake manifest file for an HLS stream, at the endpoint "/m3u8".
* This manifest contains links to each "segment" video clip, which browser-side HLS players will load as-needed.
*
* The "/segment/:seg.ts" endpoint is the request destination for each clip,
 * and uses FFmpeg to generate each segment on the fly, based on which segment is requested.
*/
const pathToMovie = 'C:\\input-file.mp4'; // The input file to stream as HLS.
const segmentDur = 5; // Controls the duration (in seconds) that the file will be chopped into.
const getMetadata = (file) => {
    return new Promise((resolve, reject) => {
        ffmpeg.ffprobe(file, function(err, metadata) {
            if (err) return reject(err); // Propagate probe failures instead of resolving undefined.
            resolve(metadata);
        });
    });
};
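// For reference: ffprobe reports metadata.format.duration as a float in
// seconds (e.g. 123.456); the /m3u8 handler below divides it into
// fixed-length segments.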
// Generate a "master" m3u8 file, which the player should point to:
transcode.get('/master', async(req, res) => {
res.set({"Content-Disposition":"attachment; filename=\"m3u8.m3u8\""});
res.send(`#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=150000
/m3u8?num=1
#EXT-X-STREAM-INF:BANDWIDTH=240000
/m3u8?num=2`)
});
// Generate an m3u8 file to emulate a premade video manifest. Guesses segment count based on duration.
transcode.get('/m3u8', async(req, res) => {
    let met = await getMetadata(pathToMovie);
    let duration = met.format.duration;
    let out = '#EXTM3U\n' +
        '#EXT-X-VERSION:3\n' +
        `#EXT-X-TARGETDURATION:${segmentDur}\n` +
        '#EXT-X-MEDIA-SEQUENCE:0\n' +
        '#EXT-X-PLAYLIST-TYPE:VOD\n';
    // Math.ceil, not Math.max (a no-op with a single argument), so a
    // trailing partial segment still gets its own playlist entry:
    let splits = Math.ceil(duration / segmentDur);
    for (let i = 0; i < splits; i++) {
        out += `#EXTINF:${segmentDur},\n/segment/${i}.ts\n`;
    }
    out += '#EXT-X-ENDLIST\n';
    res.set({"Content-Disposition": "attachment; filename=\"m3u8.m3u8\""});
    res.send(out);
});
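// For a 12-second input with segmentDur = 5, the handler above would serve:
//   #EXTM3U
//   #EXT-X-VERSION:3
//   #EXT-X-TARGETDURATION:5
//   #EXT-X-MEDIA-SEQUENCE:0
//   #EXT-X-PLAYLIST-TYPE:VOD
//   #EXTINF:5,
//   /segment/0.ts
//   #EXTINF:5,
//   /segment/1.ts
//   #EXTINF:5,
//   /segment/2.ts
//   #EXT-X-ENDLIST
// (The last segment is really only 2 seconds long; the fixed EXTINF value is
// part of the "dummy manifest" approximation described above.)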
// Transcode the input video file into segments, using the given segment number as time offset:
transcode.get('/segment/:seg.ts', async(req, res) => {
    const segment = parseInt(req.params.seg, 10); // Numeric index from the route param.
    const time = segment * segmentDur;
    let proc = new ffmpeg({source: pathToMovie})
        .seekInput(time)
        .duration(segmentDur)
        .outputOptions('-preset faster')
        .outputOptions('-g 50')
        .outputOptions('-profile:v main')
        .withAudioCodec('aac')
        .outputOptions('-ar 48000')
        .withAudioBitrate('155k')
        .withVideoBitrate('1000k')
        .outputOptions('-c:v h264')
        // Shift this segment's timestamps so it lands at the right spot on the playlist timeline:
        .outputOptions(`-output_ts_offset ${time}`)
        .format('mpegts')
        .on('error', function(err, st, ste) {
            console.log('an error happened:', err, st, ste);
        })
        .on('progress', function(progress) {
            console.log(progress);
        })
        .pipe(res, {end: true});
});
transcode.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
-
aarch64: Add NEON optimizations for 10 and 12 bit vp9 loop filter
5 January 2017, by Martin Storsjö
This work is sponsored by, and copyright, Google.
This is similar to the ARM version, but due to the larger registers
on AArch64, we can do 8 pixels at a time for all filter sizes (eight of the
16-bit pixels used for 10/12 bpp content fill one 128-bit NEON register).

Examples of runtimes vs the 32 bit version, on a Cortex A53:
                                             ARM    AArch64
vp9_loop_filter_h_4_8_10bpp_neon:          213.2      172.6
vp9_loop_filter_h_8_8_10bpp_neon:          281.2      244.2
vp9_loop_filter_h_16_8_10bpp_neon:         657.0      444.5
vp9_loop_filter_h_16_16_10bpp_neon:       1280.4      877.7
vp9_loop_filter_mix2_h_44_16_10bpp_neon:   397.7      358.0
vp9_loop_filter_mix2_h_48_16_10bpp_neon:   465.7      429.0
vp9_loop_filter_mix2_h_84_16_10bpp_neon:   465.7      428.0
vp9_loop_filter_mix2_h_88_16_10bpp_neon:   533.7      499.0
vp9_loop_filter_mix2_v_44_16_10bpp_neon:   271.5      244.0
vp9_loop_filter_mix2_v_48_16_10bpp_neon:   330.0      305.0
vp9_loop_filter_mix2_v_84_16_10bpp_neon:   329.0      306.0
vp9_loop_filter_mix2_v_88_16_10bpp_neon:   386.0      365.0
vp9_loop_filter_v_4_8_10bpp_neon:          150.0      115.2
vp9_loop_filter_v_8_8_10bpp_neon:          209.0      175.5
vp9_loop_filter_v_16_8_10bpp_neon:         492.7      345.2
vp9_loop_filter_v_16_16_10bpp_neon:        951.0      682.7

This is significantly faster than the ARM version in almost
all cases except for the mix2 functions.

Based on START_TIMER/STOP_TIMER wrapping around a few individual
functions, the speedup vs C code is around 2-3x.

Signed-off-by: Martin Storsjö <martin@martin.st>
-
Moviepy has issues when concatenating ImageClips of different dimensions
22 March 2021, by Lysander Cox
Example of the issue: https://drive.google.com/file/d/1WxfYtDTD0kc_4WQzzvB6QXkZWo-e2Vuk/view?usp=sharing

Here's the code that led to the issue:
# Imports implied by the snippet (not shown in the original post);
# fragmentGen is defined elsewhere in the project.
import os

import moviepy.editor as mpy
from natsort import natsorted


def fragmentConcat(comment, filePrefix):
    finalClips = []
    dirName = filePrefix + comment['id']
    vidClips = [mpy.VideoFileClip(dirName + '/' + file)
                for file in natsorted(os.listdir(dirName))]

    finalClip = mpy.concatenate_videoclips(vidClips, method="compose")
    finalClips.append(finalClip)

    if 'replies' in comment:
        for reply in comment['replies']:
            finalClips += fragmentConcat(reply, filePrefix)

    return finalClips


def finalVideoMaker(thread):
    fragmentGen(thread)
    filePrefix = thread['id'] + '/videos/'

    # Clips of comments and their children being read aloud.
    commentClips = []

    for comment in thread['comments']:
        commentClipFrags = fragmentConcat(comment, filePrefix)
        commentClip = mpy.concatenate_videoclips(commentClipFrags, method="compose")
        commentClips.append(commentClip)

        # 1 second of static to separate clips.
        staticVid = mpy.VideoFileClip('assets/static.mp4')
        commentClips.append(staticVid)

    finalVid = mpy.concatenate_videoclips(commentClips)
    finalVid.write_videofile(thread['id'] + '/final.mp4')
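# Note on method="compose": MoviePy composes clips of different sizes onto a
# canvas the size of the largest clip, centering smaller clips rather than
# scaling them. A possible workaround (untested here) would be resizing every
# clip to one target resolution before concatenating.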



I'm certain that these issues appear somewhere in here, because the individual video "fragments" (which are concatenated here) do not exhibit the issue with the clip I showed.


I have tried adding and removing the
method = "compose"
parameter. It does not seem to have an effect. How can I resolve this? Thanks.