
Media (1)
-
1 000 000 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (55)
-
Contributing to its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
At the moment MediaSPIP is only available in French and (...)
-
Libraries and binaries specific to video and audio processing
31 January 2010
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries: FFMpeg: the main encoder; it can transcode almost all types of video and audio files into formats playable on the Internet. See this tutorial for its installation. Oggz-tools: inspection tools for ogg files. Mediainfo: retrieves information from most video and audio formats.
Additional, optional binaries: flvtool2: (...)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and audio both to conventional computers (...)
On other sites (8822)
-
Kdialog unable to cancel job with ffmpeg from progress bar, with other commands it works :S
2 January 2015, by user1088530
I wrote a script to convert my files with ffmpeg.
It does the following:
1. create a progress bar with a cancel button
2. loop over the files to convert
3. set the progress value
It works fine, but it doesn't stop ffmpeg when I click Cancel. The strange thing is that when I use another program such as echo, it works as it should, so it seems to be an ffmpeg issue. Can anyone help me figure out the trick?
listoffile="$HOME/ffmpeglist.lst"
numberoffile=$(wc -l < "$listoffile")
ffmpegpath="/usr/bin/ffmpeg"
a=0    # counter of files already converted (was left uninitialised)

# create the progress dialog and enable its cancel button
mystufvariabletouseonmyscript=$(kdialog --progressbar "hello this is a progress bar with 100 steps" 100)
sleep 2
qdbus $mystufvariabletouseonmyscript showCancelButton true

until test "true" = "$(qdbus $mystufvariabletouseonmyscript wasCancelled)" ; do
    while read line ; do
        qdbus $mystufvariabletouseonmyscript org.kde.kdialog.ProgressDialog.setLabelText "Starting Conversion...processing file $line"
        $ffmpegpath -i "$line" "${line%%.*}.mp3"    # was ".$mp3": an undefined variable
        value=$(( a * 100 / numberoffile ))
        qdbus $mystufvariabletouseonmyscript Set org.kde.kdialog.ProgressDialog value $value
        a=$(( a + 1 ))
    done < "$listoffile"
done
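One plausible reading of this behaviour (my own diagnosis, not something stated in the question) is that the cancel state is only tested by the outer until loop, i.e. after the whole file list has already been processed, and that ffmpeg reads from standard input by default, so inside the while read loop it also swallows the rest of the file list. A minimal cancel-aware sketch along those lines, reusing the same kdialog/qdbus calls as above, could look like this:

#!/bin/bash
# Sketch only: check the cancel state before every file and keep ffmpeg away from stdin.
listoffile="$HOME/ffmpeglist.lst"
numberoffile=$(wc -l < "$listoffile")
a=0

dbusref=$(kdialog --progressbar "Converting files" 100)
qdbus $dbusref showCancelButton true

while read -r line ; do
    # skip the remaining files as soon as the user presses Cancel
    [ "$(qdbus $dbusref wasCancelled)" = "true" ] && break

    qdbus $dbusref org.kde.kdialog.ProgressDialog.setLabelText "Converting $line"
    # -nostdin keeps ffmpeg from reading (and eating) the file list on stdin
    ffmpeg -nostdin -i "$line" "${line%%.*}.mp3" < /dev/null
    a=$(( a + 1 ))
    qdbus $dbusref Set org.kde.kdialog.ProgressDialog value $(( a * 100 / numberoffile ))
done < "$listoffile"

qdbus $dbusref close

This only skips the files that have not started yet; interrupting a conversion that is already running would additionally require launching ffmpeg in the background and killing it once wasCancelled turns true.
-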
Transcoding Mp4 video on Android with FFMPEG
18 October 2012, by geeknizer
I've compiled the FFMPEG binary from the Bambuser project (see the referenced question) and have pushed the binary and .so files to the application directory.
But when I try running FFMPEG with an input file, it always complains that the file is not found.
1|root@android:/data/data/com.bambuser.broadcaster # ls
cache libavcodec.so libavfilter.so libswscale.so
ffmpeg libavcore.so libavformat.so tutorial.mp4
lib libavdevice.so libavutil.so
1|root@android:/data/data/com.bambuser.broadcaster # ./ffmpeg -i tutorial.mp4 out.mp4
FFmpeg version UNKNOWN, Copyright (c) 2000-2010 the FFmpeg developers
built on May 8 2012 10:11:37 with gcc 4.4.3
configuration: --target-os=linux --cross-prefix=arm-linux-androideabi- --arch=arm --sysroot=/home/tarandeep/tools/android-ndk/platforms/android-3/arch-arm --soname-prefix=/data/data/com.bambuser.broadcaster/lib/ --enable-shared --disable-symver --enable-small --optimization-flags=-O2 --disable-everything --enable-encoder=mpeg2video --enable-encoder=nellymoser --prefix=../build/ffmpeg/armeabi-v7a --extra-cflags='-march=armv7-a -mfloat-abi=softfp' --extra-ldflags=
libavutil 50.34. 0 / 50.34. 0
libavcore 0.16. 0 / 0.16. 0
libavcodec 52.99. 1 / 52.99. 1
libavformat 52.88. 0 / 52.88. 0
libavdevice 52. 2. 2 / 52. 2. 2
libavfilter 1.69. 0 / 1.69. 0
libswscale 0.12. 0 / 0.12. 0
tutorial.mp4: No such file or directory
I've tried placing the file in /sdcard and other locations; I always get the same output.
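One thing worth checking here (my own assumption; it is not confirmed anywhere in the post) is the build configuration rather than the file path: --disable-everything strips out every demuxer, muxer, decoder, parser and protocol, and the configure line in the banner only re-enables the mpeg2video and nellymoser encoders, so this build has no way to open an MP4 at all. Older libavformat versions reported a missing protocol with a generic ENOENT, which would show up as exactly this "No such file or directory" message. A sketch of the extra configure switches such a build would need (component names are guesses for an H.264/AAC input and an MP4 output; the Android cross-compile flags from the banner are abridged):

# Hypothetical additions to the Bambuser configure line quoted in the banner above.
./configure \
    --target-os=linux --cross-prefix=arm-linux-androideabi- --arch=arm \
    --enable-shared --enable-small --disable-everything \
    --enable-protocol=file \
    --enable-demuxer=mov \
    --enable-muxer=mp4 \
    --enable-decoder=h264 --enable-decoder=aac \
    --enable-parser=h264 --enable-parser=aac \
    --enable-encoder=mpeg4 --enable-encoder=aac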
-
How To Write An Oscilloscope
I’m trying to figure out how to write a software oscilloscope audio visualization. It’s made more frustrating by the knowledge that I am certain that I have accomplished this task before.
In this context, the oscilloscope is used to draw the time-domain samples of an audio wave form. I have written such a plugin as part of the xine project. However, for that project, I didn’t have to write the full playback pipeline— my plugin was just handed some PCM data and drew some graphical data in response. Now I’m trying to write the entire engine in a standalone program and I’m wondering how to get it just right.
This is an SDL-based oscilloscope visualizer and audio player for the Game Music Emu library. My approach is to have an audio buffer that holds a second of audio (44100 stereo 16-bit samples). The player updates the visualization at 30 frames per second. The o-scope is 512 pixels wide. So, at every 1/30th-second interval, the player dips into the audio buffer at position ((frame_number % 30) * 44100 / 30) and takes the first 512 stereo frames for plotting on the graph.
It seems to be working okay, I guess. The only problem is that the A/V sync seems to be slightly misaligned. I am just wondering if this is the correct approach. Perhaps the player should be performing some slightly more complicated calculation over those (44100/30) audio frames during each update in order to obtain a more accurate graph? I described my process to an electrical engineer friend of mine and he insisted that I needed to apply something called hysteresis to the output or I would never get accurate A/V sync in this scenario.
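For concreteness, here is a quick back-of-the-envelope check of those numbers (plain shell arithmetic; all the values are the ones quoted above, nothing new):

#!/bin/sh
# Back-of-the-envelope check of the sampling scheme described above.
rate=44100      # samples per second per channel
fps=30          # visualization updates per second
width=512       # o-scope width in pixels, i.e. frames plotted per update

step=$(( rate / fps ))    # 1470: how far the audio advances per update
echo "audio advance per update : $step frames"
echo "frames actually plotted  : $width of $step (~$(( width * 1000 / rate )) ms of a $(( 1000 / fps )) ms window)"

# buffer offset used for a given frame_number, wrapping once per second
frame_number=45
echo "offset for frame $frame_number : $(( (frame_number % fps) * rate / fps ))"

In other words, each update advances 1470 samples but only the first 512 of them (about 11.6 ms out of each 33.3 ms window) are drawn; whether that, or simply being one frame off into the one-second buffer, explains the perceived misalignment is exactly the open question here.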
Further, I know that some schools of thought on these matters require that the dots in those graphs be connected; scattered points simply won't do. I guess it's a stylistic choice.
Still, I think I have a reasonable, workable approach here. I might just be starting the visualization 1/30th of a second too late.