
Media (29)
-
#7 Ambience
October 16, 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
October 16, 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
October 16, 2011
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
October 16, 2011
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
October 15, 2011
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
October 15, 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (35)
-
The SPIPmotion queue
November 28, 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document is to be attached automatically; objet, the type of object to which (...)
Customizing by adding your logo, banner or background image
September 5, 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
June 21, 2013
Present the changes on your MédiaSPIP, or news about your projects, using the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news-item creation form.
News-item creation form: for a document of the "news item" type, the default fields are: publication date (customize the publication date) (...)
On other sites (5843)
-
ffmpeg and ffserver, rc buffer underflow?
February 25, 2018, by Dove Devic
I am attempting to write a simple streaming server for a project. I have an AWS Linux machine that will be running ffserver. Currently, as it stands, my config file looks like the following:
#Server Configs
HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 1000
CustomLog -
#Create a Status Page
<stream>
Format status
ACL allow localhost
ACL allow 255.255.255.255 #Allow everyone to view status, for now
</stream>
#Creates feed, only allow from self
<feed>
File /tmp/feed1.ffm
FileMaxSize 50M
ACL allow 127.0.0.1
ACL allow
</feed>
#Creates stream, allow everyone
<stream>
Format mpeg
Feed feed1.ffm
VideoFrameRate 30
VideoSize 640x480
AudioSampleRate 44100
</stream>I then am capturing my Webcam and sending it up to the server using the following command :
ffmpeg -f dshow
-i video="Webcam C170":audio="Microphone (Webcam C170)"
-b:v 1400k
-maxrate 2400k
-bufsize 1200k
-ab 64k
-s 640x480
-ac 1
-ar 44100
-y http://:8090/feed1.ffm
When I run this, however, I get the following output from my console:
Guessed Channel Layout for Input Stream #0.1 : stereo
Input #0, dshow, from 'video=Webcam C170:audio=Microphone (Webcam C170)':
Duration: N/A, start: 12547.408000, bitrate: N/A
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 30 tbr, 10000k tbn, 30 tbc
Stream #0:1: Audio: pcm_s16le, 44100 Hz, 2 channels, s16, 1411 kb/s
Output #0, ffm, to ':8090/feed1.ffm':
Metadata:
creation_time : 2017-04-26 14:55:27
encoder : Lavf57.25.100
Stream #0:0: Audio: mp2, 44100 Hz, mono, s16, 64 kb/s
Metadata:
encoder : Lavc57.24.102 mp2
Stream #0:1: Video: mpeg1video, yuv420p, 640x480, q=2-31, 64 kb/s, 30 fps, 1000k tbn, 30 tbc
Metadata:
encoder : Lavc57.24.102 mpeg1video
Side data:
unknown side data type 10 (24 bytes)
Stream mapping:
Stream #0:1 -> #0:0 (pcm_s16le (native) -> mp2 (native))
Stream #0:0 -> #0:1 (rawvideo (native) -> mpeg1video (native))
Press [q] to stop, [?] for help
[mpeg1video @ 02e95180] rc buffer underflow
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video @ 02e95180] rc buffer underflow
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video @ 02e95180] rc buffer underflow
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video @ 02e95180] rc buffer underflow
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video @ 02e95180] rc buffer underflow
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video @ 02e95180] rc buffer underflow
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video @ 02e95180] rc buffer underflow
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video @ 02e95180] rc buffer underflow
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video @ 02e95180] rc buffer underflowtime=00:00:01.13 bitrate= 404.8kbits/s dup=13 drop=0 speed=2.22x
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video @ 02e95180] rc buffer underflow
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video @ 02e95180] rc buffer underflowtime=00:00:01.63 bitrate= 361.1kbits/s dup=13 drop=0 speed=1.61x
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video @ 02e95180] rc buffer underflowtime=00:00:02.13 bitrate= 368.6kbits/s dup=13 drop=0 speed= 1.4x
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video @ 02e95180] rc buffer underflowtime=00:00:02.66 bitrate= 344.1kbits/s dup=13 drop=0 speed=1.32x
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video @ 02e95180] rc buffer underflowtime=00:00:03.16 bitrate= 331.1kbits/s dup=13 drop=0 speed=1.25x
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video @ 02e95180] rc buffer underflow
[mpeg1video @ 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
frame= 117 fps= 36 q=31.0 Lsize= 156kB time=00:00:03.86 bitrate= 330.5kbits/s dup=13 drop=0 speed= 1.2x
video:118kB audio:27kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 7.659440%
Exiting normally, received signal 2.
And on my viewer, I just get a black screen.
Is there something I'm missing? Searching led to nothing on "increasing qmax" or anything similar to what ffmpeg complained about. There have been questions asked here, but nothing has been done/answered.
Thanks in advance.
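One detail worth noting from the log above: the video going into the feed is re-encoded at ffserver's default 64 kb/s (see the "Stream #0:1: Video: mpeg1video, ... 64 kb/s" line), because the stream section never sets a video bitrate, and the -b:v/-maxrate/-bufsize flags given to ffmpeg do not carry over to an ffm feed. Below is a hedged sketch of a stream section with an explicit bitrate and rate-control buffer; the stream name and the exact values are illustrative and untested:
<Stream live.mpg>
Format mpeg
Feed feed1.ffm
VideoFrameRate 30
VideoSize 640x480
VideoBitRate 1400 # kbits/s; roughly match what the capture side sends
VideoBufferSize 2000 # rate-control buffer, in KB
VideoQMin 3
VideoQMax 31
AudioBitRate 64
AudioSampleRate 44100
</Stream>
On the capture side, -bufsize is also usually set at least as large as -maxrate (for example -bufsize 2400k), but with an ffm feed the final encoding parameters come from the server configuration, which is consistent with the 64 kb/s video shown in the log.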
-
Duration of wav file saved in S3 using AWS Lambda
June 3, 2021, by Salim Shamim
Objective


To calculate the duration of a wav file saved in S3, from AWS Lambda using Node.js. I had to add the ffmpeg and ffprobe executables inside a Lambda layer (I downloaded the linux-64 version from here). These files can be found in the /opt folder on the Lambda file system.

What I have tried


I have been trying to use ffprobe in numerous ways, but I get Invalid Data as the error.
Here's one example:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const fs = require('fs');
const ffmpeg = require('fluent-ffmpeg');

exports.handler = async function(event) {
    let path = await load();
    console.log(`Saved Path ${path}`);

    ffmpeg.setFfmpegPath('/opt/ffmpeg');
    ffmpeg.setFfprobePath("/opt/ffprobe");

    let dur = await duration(path).catch(err => {
        console.log(err);
    })
    console.log(dur);
}


function duration(path) {
    return new Promise((resolve, reject) => {
        ffmpeg(path).ffprobe(path, function(err, metadata) {
            //console.dir(metadata); // all metadata
            if (err) {
                reject(err);
            }
            else {
                resolve(metadata.format.duration);
            }
        });
    })
}

async function listFiles(path) {
    console.log('list files');
    return new Promise((resolve, reject) => {
        fs.readdir(path, (err, files) => {
            if (err) {
                console.error('Error in readdir');
                reject(err);
            }
            else {
                console.log('recieved files');
                resolve(files);
            }
        });
    });
}

async function load() {
    return new Promise((resolve, reject) => {
        let params = {
            Key: 'Fanfare60.wav',
            Bucket: 'samplevideosshamim'
        };
        console.log(`Getting s3 object : ${JSON.stringify(params)}`);
        s3.getObject(params, (err, data) => {
            if (err) {
                console.error(err);
                reject(err);
            }
            else if (data) {
                console.log('Recieved Data');
                let path = `/tmp/${params.Key}`;
                console.log('Path: ' + path);
                fs.writeFileSync(path, data.body);
                resolve(path);
            }
        });
    });
}



Error:


Error: ffprobe exited with code 1
ffprobe version 4.2.1-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2007-2019 the FFmpeg developers
 built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
 configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
/tmp/Fanfare60.wav: Invalid data found when processing input

 at ChildProcess.<anonymous> (/var/task/node_modules/fluent-ffmpeg/lib/ffprobe.js:233:22)
 at ChildProcess.emit (events.js:314:20)
 at ChildProcess.EventEmitter.emit (domain.js:483:12)
 at Process.ChildProcess._handle.onexit (internal/child_process.js:276:12)


I am guessing it doesn't support the wav format, but internet searches provide no proof of that.

A point to note: I was able to get the duration of a local file when I ran this code on my local machine, but I have a Windows machine, so perhaps only the Linux executable of ffprobe has an issue?


Possible Solutions I am looking for


- Is there a way to specify the format?
- Can I use a different library (code example for the same)?
- Any possible way to get the duration of a wav file in the mentioned scenario (AWS Lambda, Node.js, and a private S3 file)?
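On the first two points, ffprobe can be forced to treat the input as WAV with -f wav, and it can be called directly from Node with child_process instead of going through fluent-ffmpeg. Below is a minimal sketch, untested in Lambda, that reuses the /opt/ffprobe path and /tmp file from the question; note also that with the v2 AWS SDK the object bytes come back as data.Body (capital B), so it is worth double-checking what was actually written to /tmp before probing it:

const { execFile } = require('child_process');

// Probe a local file with the ffprobe binary from the Lambda layer.
// -f wav forces the WAV demuxer; -show_entries format=duration limits the
// JSON output to the single field we need.
function wavDuration(path) {
    return new Promise((resolve, reject) => {
        execFile('/opt/ffprobe', [
            '-v', 'error',
            '-f', 'wav',
            '-show_entries', 'format=duration',
            '-of', 'json',
            path
        ], (err, stdout, stderr) => {
            if (err) {
                return reject(new Error(stderr || err.message));
            }
            resolve(parseFloat(JSON.parse(stdout).format.duration));
        });
    });
}

// e.g. inside the handler, after the S3 object has been written out:
// const seconds = await wavDuration('/tmp/Fanfare60.wav');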








-
How to Simply Remove Duplicate Frames from a Video using ffmpeg
January 29, 2017, by Skeeve
First of all, I'd preface this by saying I'm NO EXPERT with video manipulation, although I've been fiddling with ffmpeg for years (in a fairly limited way). Hence, I'm not too flash with all the language folk often use... and how it affects what I'm trying to do in my manipulations... but I'll have a go with this anyway.
I've checked a few links here, for example:
ffmpeg - remove sequentially duplicate frames
...but the content didn't really help me.
I have some hundreds of video clips that have been created under both Windows and Linux using both ffmpeg and other similar applications. However, they have some problems with times in the video where the display is ’motionless’.
As an example, let’s say we have some web site that streams a live video into, say, a Flash video player/plugin in a web browser. In this case, we’re talking about a traffic camera video stream, for example.
There’s an instance of ffmpeg running that is capturing a region of the (Windows) desktop into a video file, viz :-
ffmpeg -hide_banner -y -f dshow ^
-i video="screen-capture-recorder" ^
-vf "setpts=1.00*PTS,crop=448:336:620:360" ^
-an -r 25 -vcodec libx264 -crf 0 -qp 0 ^
-preset ultrafast SAMPLE.flv
Let's say the actual 'display' that is being captured looks like this :-
123456789 XXXXX 1234567 XXXXXXXXXXX 123456789 XXXXXXX
^---a---^ ^-P-^ ^--b--^ ^----Q----^ ^---c---^ ^--R--^
...where each character position represents a (sequence of) frame(s). Owing to a poor internet connection, a "single frame" can be displayed for an extended period (the 'X' characters being an (almost) exact copy of the immediately previous frame). So this means we have segments of the captured video where the image doesn't change at all (to the naked eye, anyway).
How can we deal with the duplicate frames?... and how does our approach change if the 'duplicates' are NOT the same to ffmpeg but LOOK more-or-less the same to the viewer?
If we simply remove the duplicate frames, the ’pacing’ of the video is lost, and what used to take, maybe, 5 seconds to display, now takes a fraction of a second, giving a very jerky, unnatural motion, although there are no duplicate images in the video. This seems to be achievable using ffmpeg with an ’mp_decimate’ option, viz :-
ffmpeg -i SAMPLE.flv ^ ... (i)
-r 25 ^
-vf mpdecimate,setpts=N/FRAME_RATE/TB DEC_SAMPLE.mp4
That reference I quoted uses a command that shows which frames 'mp_decimate' will remove when it considers them to be 'the same', viz :-
ffmpeg -i SAMPLE.flv ^ ... (ii)
-vf mpdecimate ^
-loglevel debug -f null -
...but knowing that (complicatedly formatted) information, how can we re-organize the video without executing multiple runs of ffmpeg to extract 'slices' of video for re-combining later?
In that case, I'm guessing we'd have to run something like the following (a rough sketch of the repeated steps is given after the list) :-
- user specifies a 'threshold duration' for the duplicates (maybe run for 1 sec only)
- determine & save main video information (fps, etc - assuming constant frame rate)
- map the (frame/time where duplicates start) -> no. of frames/duration of duplicates
- if the duration of duplicates is less than the user threshold, don't consider this period as a 'series of duplicate frames' and move on
- extract the 'non-duplicate' video segments (a, b & c in the diagram above)
- create 'new video' (empty) with original video's specs
- for each video segment:
  extract the last frame of the segment
  create a short video clip with repeated frames of the frame just extracted (duration = user spec. = 1 sec)
  append (current video segment + short clip) to 'new video'
  and repeat
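A rough, untested sketch of the repeated steps for a single segment, with placeholder file names and times, assuming the duplicate runs have already been mapped from the mpdecimate debug output:
REM cut one 'motion' segment (start/length are placeholders taken from the mapping step)
ffmpeg -ss 0 -t 9 -i SAMPLE.flv -c:v libx264 -crf 0 -preset ultrafast seg_a.mp4
REM grab its last frame and hold it for the user-specified 1 second
ffmpeg -sseof -0.1 -i seg_a.mp4 -frames:v 1 last_a.png
ffmpeg -loop 1 -framerate 25 -i last_a.png -t 1 -c:v libx264 -crf 0 -preset ultrafast -pix_fmt yuv420p hold_a.mp4
REM once all segments and holds exist, list them in order in pieces.txt
REM (file 'seg_a.mp4' / file 'hold_a.mp4' / file 'seg_b.mp4' / ...)
REM and join them with the concat demuxer, re-encoding once at the end
ffmpeg -f concat -safe 0 -i pieces.txt -c:v libx264 -crf 18 -r 25 REBUILT.mp4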
...but in my case, a lot of the captured videos might be 30 minutes long and have hundreds of 10 sec long pauses, so the ’rebuilding’ of the videos will take a long time using this method.
This is why I'm hoping there's some "reliable" and "more intelligent" way to use ffmpeg (with/without the 'mp_decimate' filter) to do the 'decimate' function in only a couple of passes or so... Maybe there's a way that the required segments could even be specified (in a text file, for example) and, as ffmpeg runs, it will stop/restart its transcoding at specified times/frame numbers?
Short of this, is there another application (for use on Windows or Linux) that could do what I'm looking for, without having to manually set start/stop points and extract/combine video segments by hand...?
I've been trying to do all this with ffmpeg N-79824-gcaee88d under Win7-SP1 and (a different version I don't currently remember) under Puppy Linux Slacko 5.6.4.
Thanks a heap for any clues.
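For the case where the duplicates only LOOK the same rather than being bit-identical, mpdecimate has hi/lo/frac thresholds that can be loosened so near-duplicates are dropped as well; this stays a single pass, but, as noted above, it removes the pauses rather than shortening them. A sketch to experiment with (the threshold values are guesses, not tested against these captures):
ffmpeg -i SAMPLE.flv ^
 -vf "mpdecimate=hi=64*24:lo=64*12:frac=0.5,setpts=N/FRAME_RATE/TB" ^
 -vsync vfr -an DEC2_SAMPLE.mp4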