
Other articles (57)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors can edit their own information on the authors page -
Videos
21 April 2011 — Like "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 video tag.
One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one), and each browser natively supports only certain video formats.
Its main advantage, on the other hand, is native browser support for video, which removes the need for Flash and (...) -
Using and configuring the script
19 January 2011 — Information specific to the Debian distribution
If you use this distribution, you will need to enable the "debian-multimedia" repositories as explained here:
Since version 0.3.1 of the script, the repository can be enabled automatically in answer to a prompt.
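For reference, enabling that repository usually comes down to adding a single APT source line. This is only a sketch: the project referred to as "debian-multimedia" is now hosted at deb-multimedia.org, the "stable" suite name is an assumption, and the file path is one common convention.

```shell
# Add the deb-multimedia (formerly debian-multimedia) repository to
# APT's sources, then refresh the package lists.
# "stable" is an assumed suite name; adjust for your Debian release.
echo 'deb http://www.deb-multimedia.org stable main non-free' | \
  sudo tee /etc/apt/sources.list.d/deb-multimedia.list
sudo apt-get update
```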
Getting the script
The installation script can be obtained in two different ways.
Via svn, using this command to fetch the up-to-date source code:
svn co (...)
On other sites (6953)
-
iOS multiple video display
28 March 2017, by Cuevas — I'm currently working on an iOS project that uses IJKPlayer, which is based on FFmpeg and SDL, to display an RTSP feed from a certain source. I have no problem displaying a single video feed, but my project requires me to display multiple streams on the screen simultaneously. The problem I'm facing right now is how to separate each of the streams and display each one on n instances of the player.
RTSP -> stream 0, stream 1, stream 2, stream 4 -> display
Here is a sample output I want to achieve. Each color represents a single stream. Thanks!
Edit: If this is not possible with IJKPlayer, can someone recommend a different approach to implementing this?
-
If multiple channels, merge, then take a sample length from the audio file and save it to S3
18 May 2017, by khinester — I am using Transloadit to extract the audio from a video file, which is then saved to S3.
This works great, but I wanted to know how to:
check if the file has multiple channels and then squash it into one, as per https://transloadit.com/demos/audio-encoding/merging-multiple-audio-streams/ - do I need to check for this, or do I default to using this robot?
-
extract a small sample from the audio file and save it as a separate file.
For example, I have a 2-hour audio file from which I want to take 5% of the length and save it as sample.mp3.
In ffmpeg, I can cut:
ffmpeg -ss 0 -t 30 -i original.mp3 sample.mp3
but I am unsure how to chain this workflow; here is what I have so far:
const opts = {
  params: {
    notify_url: `${ process.env.SELF }/services/trans/${ jwToken }`,
    steps: {
      import: {
        robot: '/s3/import',
        use: ':original',
        bucket: process.env.S3_INGEST,
        path: ingest.key,
        key: process.env.AWS_ID,
        secret: process.env.AWS_SECRET,
      },
      encode: {
        robot: '/audio/encode',
        use: 'import',
        ffmpeg_stack: 'v2.2.3',
        preset: 'aac',
        ffmpeg: {
          ab: '128k',
        },
      },
      export: {
        robot: '/s3/store',
        use: 'encode',
        bucket: s3Export,
        path: `${ prefix }/${ token }.m4a`,
        headers: {
          'Content-Type': 'audio/mp4',
          'x-amz-server-side-encryption': 'AES256',
        },
        key: process.env.AWS_ID,
        secret: process.env.AWS_SECRET,
      },
    },
  },
};
In the docs (https://transloadit.com/docs/conversion-robots/) I can't see how to do this.
Any advice is much appreciated.
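The missing piece in the question is computing 5% of the source duration before cutting. Outside of Transloadit, the whole chain can be sketched with plain ffprobe/ffmpeg; the filenames are hypothetical, and both tools are assumed to be on the PATH.

```shell
# 1. Probe the duration of the source audio, in seconds.
duration=$(ffprobe -v error -show_entries format=duration \
  -of default=noprint_wrappers=1:nokey=1 original.mp3)

# 2. Compute 5% of that duration (e.g. 2 h = 7200 s gives 360 s).
sample_len=$(awk -v d="$duration" 'BEGIN { printf "%.2f", d * 0.05 }')

# 3. Downmix any multi-channel audio to a single channel (-ac 1),
#    then cut the first 5% into sample.mp3.
ffmpeg -i original.mp3 -ac 1 -ss 0 -t "$sample_len" sample.mp3
```

This only shows the underlying ffmpeg logic; replicating it as a Transloadit assembly would still require a probe step whose result feeds the encode step.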
-
ffmpeg doesn't seem to be working with multiple audio streams correctly
21 June 2017, by Caius Jard — I'm having an issue with ffmpeg 3.2.2; ordinarily I ask it to make an MP4 video file with two audio streams. The command line looks like this:
ffmpeg.exe
-rtbufsize 256M
-f dshow -i video="screen-capture-recorder" -thread_queue_size 512
-f dshow -i audio="Line 2 (Virtual Audio Cable)"
-f dshow -i audio="Line 3 (Virtual Audio Cable)"
-map 0:v -map 1:a -map 2:a
-af silencedetect=n=-50dB:d=60 -pix_fmt yuv420p -y "c:\temp\2channelvideo.mp4"
I've wrapped it for legibility. This once worked fine, but something is wrong lately: it doesn't seem to record any audio, even though I can use other tools like Audacity to record audio from these devices just fine.
I'm trying to do some diagnosis on it by dropping the video component and asking ffmpeg to record the two audio devices to two separate files:
ffmpeg.exe
-f dshow -i audio="Line 2 (Virtual Audio Cable)" "c:\temp\line2.mp3"
-f dshow -i audio="Line 3 (Virtual Audio Cable)" "c:\temp\line3.mp3"
ffmpeg's console output looks like:
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, dshow, from 'audio=Line 2 (Virtual Audio Cable)':
  Duration: N/A, start: 5935.810000, bitrate: 1411 kb/s
    Stream #0:0: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, dshow, from 'audio=Line 3 (Virtual Audio Cable)':
  Duration: N/A, start: 5936.329000, bitrate: 1411 kb/s
    Stream #1:0: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
Output #0, mp3, to 'c:\temp\line2.mp3':
  Metadata:
    TSSE            : Lavf57.56.100
    Stream #0:0: Audio: mp3 (libmp3lame), 44100 Hz, stereo, s16p
    Metadata:
      encoder         : Lavc57.64.101 libmp3lame
Output #1, mp3, to 'c:\temp\line3.mp3':
  Metadata:
    TSSE            : Lavf57.56.100
    Stream #1:0: Audio: mp3 (libmp3lame), 44100 Hz, stereo, s16p
    Metadata:
      encoder         : Lavc57.64.101 libmp3lame
Stream mapping:
  Stream #0:0 -> #0:0 (pcm_s16le (native) -> mp3 (libmp3lame))
  Stream #0:0 -> #1:0 (pcm_s16le (native) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
The problem I'm currently having is that the produced MP3s are identical copies of Line 2 only; Line 3 audio is not recorded. The last line is of concern: it seems to say that stream #0:0 is being mapped to both output 0 and output 1. Do I need a map command for each file as well? I thought it would be implicit from the way I specified the arguments.
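That reading of the log is consistent with ffmpeg's documented default stream selection: with no -map options, each output file gets the single "best" audio stream chosen across all inputs, so both MP3s receive input 0. A minimal sketch of the diagnostic command with one explicit -map per output (device names as in the question; requires Windows DirectShow):

```shell
# One explicit -map per output file: input 0 feeds line2.mp3 and
# input 1 feeds line3.mp3. Without these, ffmpeg's default stream
# selection routes the same "best" audio stream to every output.
ffmpeg \
  -f dshow -i audio="Line 2 (Virtual Audio Cable)" \
  -f dshow -i audio="Line 3 (Virtual Audio Cable)" \
  -map 0:a "c:\temp\line2.mp3" \
  -map 1:a "c:\temp\line3.mp3"
```

The same reasoning suggests the original three-input MP4 command was already correct to carry -map 0:v -map 1:a -map 2:a, so its silent output likely has a different cause.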