
Media (91)
-
Spitfire Parade - Crisis
15 May 2011
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011
Updated: February 2012
Language: English
Type: Video
-
Portrait video of a bee
14 May 2011
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
-
Map of Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (69)
-
Managing creation and editing rights for objects
8 February 2011
By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable through the form-template management; adding notes to articles; adding captions and annotations to images;
-
Submitting improvements and additional plugins
10 April 2011
If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know, and its integration into the official distribution will be considered.
You can use the development mailing list to announce it or to ask for help with building the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player was created specifically for MediaSPIP and can easily be adapted to fit a given theme.
For older browsers, a Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (10765)
-
How to record audio with ffmpeg on Linux?
7 December 2014, by Conor Patrick
I'd like to record audio from my microphone. My OS is Ubuntu. I've tried the following and got errors:
$ ffmpeg -f alsa -ac 2 -i hw:1,0 -itsoffset 00:00:00.5 -f video4linux2 -s 320x240 -r 25 -i /dev/video0 out.mpg
ffmpeg version 0.8.8-4:0.8.8-0ubuntu0.12.04.1, Copyright (c) 2000-2013 the Libav developers
built on Oct 22 2013 12:31:55 with gcc 4.6.3
*** THIS PROGRAM IS DEPRECATED ***
This program is only provided for compatibility and will be removed in a future release.
Please use avconv instead.
ALSA lib conf.c:3314:(snd_config_hooks_call) Cannot open shared library libasound_module_conf_pulse.so
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM hw:1,0
[alsa @ 0xbda7a0] cannot open audio device hw:1,0 (No such file or directory)
hw:1,0: Input/output error
Then I tried:
$ ffmpeg -f oss -i /dev/dsp audio.mp3
ffmpeg version 0.8.8-4:0.8.8-0ubuntu0.12.04.1, Copyright (c) 2000-2013 the Libav developers
built on Oct 22 2013 12:31:55 with gcc 4.6.3
*** THIS PROGRAM IS DEPRECATED ***
This program is only provided for compatibility and will be removed in a future release.
Please use avconv instead.
[oss @ 0x1ba57a0] /dev/dsp: No such file or directory
/dev/dsp: Input/output error
I haven't been able to get ffmpeg to find my microphone. How can I tell ffmpeg to record from my microphone?
It seems the 'Deprecated' message can be ignored, judging by this topic.
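For what it's worth, a common way to find and address an ALSA capture device is sketched below (assuming the alsa-utils package is installed; card and device numbers vary per machine):
$ arecord -l                              # list ALSA capture devices with their card/device numbers
$ ffmpeg -f alsa -ac 2 -i hw:0,0 out.wav  # record from card 0, device 0
$ ffmpeg -f alsa -i default out.wav       # or let ALSA pick the default capture device
On this Ubuntu build, the same lines should also work with avconv in place of ffmpeg, which is what the deprecation notice is pointing at.
-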
Create a demultiplexer for MPEG-2 TS in Android
17 November 2014, by anz
I have a requirement where I need to extract ID3 tags from an MPEG-2 TS (HLS stream). MPEG-2 has limited playback support in Android, but my concern is only extracting the ID3 tags (playing the file is not necessary), so I am not concerned with the codecs (encoding and decoding).
I have explored the following options:
libstagefright and OpenMAX: the playback engine implemented by Google since Android 2.0.
Its MediaExtractor is responsible for retrieving track data and the corresponding metadata from the underlying file system or HTTP stream. But according to this post, Adding video codec to Android, I would need to build my own firmware or my own media player. I am hoping I don't have to go down that path. More info on Stagefright and OpenMAX can be found here:
An overview of Stagefright player
Android’s Stagefright Media Player Architecture
Custom Wrapper Codec Integration into Android
How to integrate a decoder to multimedia framework
Compiling and using FFmpeg: a complete, cross-platform solution to record, convert and stream audio and video. We can demultiplex TS files with this library, as mentioned here:
FFmpeg - Extracting video and audio from transport stream file (.ts).
But I am not sure whether I will be able to extract the ID3 tags from the HLS stream. libavformat might be able to do this, but I would still need a mechanism for signalling the metadata it reads back to my application.
Compiling VLC for Android: I have compiled VLC for Android and made some modifications inside the transport module of the demux component to extract the tags, but it is not able to play all the streams I am supplying to it.
After looking through these options, I am still at a loss as to how to achieve this. I don't want to create a media player, as I will not be playing the files, nor do I want to build my own firmware. Using FFmpeg seems to be the most viable option, but I want to try this without using any third-party or open-source library. My questions are:
Is it even possible to create a demultiplexer from scratch that will work on Android?
If so, how do I go about it?
Are there any options I have missed?
I am new to this. Any help would be greatly appreciated. Thanks
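On the "from scratch" question: the TS packet layer itself is simple enough that a minimal scanner is feasible without any library. Below is a sketch in C (not a full demuxer); it assumes 188-byte-aligned packets and that the PID carrying the timed ID3 metadata is already known, for example by first parsing the PAT/PMT and looking for stream_type 0x15 (metadata carried in PES). The function name and the printf reporting are illustrative only.
#include <stdio.h>
#include <string.h>

#define TS_PACKET_SIZE 188

/* Scan a transport stream for packets on id3_pid and report payloads
 * containing an "ID3" tag header. A real demuxer would also resync on
 * lost alignment and reassemble PES packets spanning several TS packets. */
static void scan_ts_for_id3(FILE *f, int id3_pid)
{
    unsigned char pkt[TS_PACKET_SIZE];

    while (fread(pkt, 1, TS_PACKET_SIZE, f) == TS_PACKET_SIZE) {
        if (pkt[0] != 0x47)                    /* sync byte */
            continue;
        int pid = ((pkt[1] & 0x1f) << 8) | pkt[2];
        if (pid != id3_pid)
            continue;
        int afc = (pkt[3] >> 4) & 0x3;         /* adaptation_field_control */
        int off = 4;
        if (afc & 0x2)                         /* adaptation field present */
            off += 1 + pkt[4];
        if (!(afc & 0x1) || off >= TS_PACKET_SIZE)
            continue;                          /* no payload in this packet */
        for (int i = off; i + 3 <= TS_PACKET_SIZE; i++) {
            if (!memcmp(pkt + i, "ID3", 3)) {
                printf("possible ID3 tag at payload offset %d\n", i);
                break;
            }
        }
    }
}
On Android this could sit behind a thin JNI wrapper, or the same bit manipulation can be transliterated to Java over a byte[]; either way no third-party library is involved.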
-
FFmpeg muxing theora/vorbis unable to flush?
11 November 2013, by user2979732
I'm pretty new to FFmpeg and it's confusing. I'm working on a basic muxer and have been spending over a week on this. I don't normally post, as I solve 98% of my issues with Google, but I have been unable to crack this one so far.
My source is based on FFmpeg's own muxing.c example. When I force it to use libvorbis for audio and create "test.ogg", it demonstrates the same issues I'm having in my own derivation of muxing.c. The problem is with ogg/theora/vorbis. I'm forcing the audio codec like this:
audio_st = add_stream(oc, &audio_codec, avcodec_find_encoder_by_name("libvorbis")->id);
It seems the problem is that the muxing.c sample does not set the audio pts. There is general confusion about this; nobody apart from this guy addressed what I am looking for: http://webcache.googleusercontent.com/search?q=cache:6ml82RMN3YYJ:ffmpeg.org/pipermail/libav-user/2013-April/004304.html+&cd=4&hl=en&ct=clnk&gl=cz
Naturally, I couldn't find any answers to that, like: why don't they set the audio pts? Laziness? Not needed? Do they believe all encoders will produce the pts for them (not true, as seen below)?
Anyway, when you try muxing.c with mp4/libx264/forced libmp3lame, all is fine, though the encoder warns "encoder did not produce valid pts, making some up." With ogg/theora/vorbis, however, it is silent, as if there were valid pts (?), yet no audio packets end up in the stream (!), at least from what I saw using ffprobe. As a result the video won't even play until you remove the empty audio stream; then it plays, which shows the video stream is fine.
Coming to my original issue: I tried setting the pts on the audio frames I send to the encoder to fix that problem (this already sucks). I was unable to find a definitive answer on how to set the pts properly; that's the other big issue, as I'm trying things I'm not sure work. Anyway, in the end, setting "some" pts results in an ogg with sound:
if (frame->pts == AV_NOPTS_VALUE)
    frame->pts = audio_sync_opts;
audio_sync_opts = frame->pts + frame->nb_samples;
I'm aware I should probably use rescaling to adjust for the container time base, etc. If this were present and explained in FFmpeg's own sample, I wouldn't have to guess now (I'm still not 100% sure about the time base relationship between container and codec; I think the container time base somehow takes precedence over the codec one).
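For reference, the usual muxing.c-era pattern is to count the audio pts in samples (the codec time base is 1/sample_rate) and rescale the encoded packet's timestamps from the codec time base to the stream time base before writing. A sketch, assuming the 2013-era encode API and hypothetical names c (the audio AVCodecContext), st (the audio AVStream), and samples_count (a running counter, not in the original):
frame->pts = samples_count;              /* pts expressed in samples */
samples_count += frame->nb_samples;

/* ... after avcodec_encode_audio2() has filled pkt ... */
if (pkt.pts != AV_NOPTS_VALUE)           /* rescaling AV_NOPTS_VALUE is meaningless */
    pkt.pts = av_rescale_q(pkt.pts, c->time_base, st->time_base);
if (pkt.dts != AV_NOPTS_VALUE)
    pkt.dts = av_rescale_q(pkt.dts, c->time_base, st->time_base);
pkt.duration = av_rescale_q(pkt.duration, c->time_base, st->time_base);
pkt.stream_index = st->index;
av_interleaved_write_frame(oc, &pkt);
This is also the answer to the codec-vs-container time base question: the encoder stamps packets in the codec time base, while the muxer expects them in the stream time base, so the rescale bridges the two.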
My other problem is flushing, but that might have something to do with the screwed-up pts, so I'd rather not get into it in detail. The basic problem is that when I send a finite number of audio frames, say 20, I get only 2 packets back, for example. From my understanding, I need to flush the rest of the audio after all the encoding/muxing is done, which I managed to do with mp4/libx264/libmp3lame, but with ogg/theora/vorbis it doesn't flush. Why not, I have no idea.
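For comparison, the usual flush pattern with that API is to keep handing the encoder a NULL frame until it stops returning packets; vorbis buffers heavily (hence 2 packets out of 20 frames), so this drain step is normally where the missing audio comes out. A sketch, using oc and audio_st as named in muxing.c:
/* Drain packets buffered inside the encoder after the last real frame.
 * Only meaningful for encoders with CODEC_CAP_DELAY, which libvorbis has. */
int got_packet = 1;
while (got_packet) {
    AVPacket pkt = { 0 };
    av_init_packet(&pkt);
    if (avcodec_encode_audio2(audio_st->codec, &pkt, NULL, &got_packet) < 0)
        break;
    if (!got_packet)
        break;
    if (pkt.pts != AV_NOPTS_VALUE)
        pkt.pts = av_rescale_q(pkt.pts, audio_st->codec->time_base, audio_st->time_base);
    if (pkt.dts != AV_NOPTS_VALUE)
        pkt.dts = av_rescale_q(pkt.dts, audio_st->codec->time_base, audio_st->time_base);
    pkt.stream_index = audio_st->index;
    av_interleaved_write_frame(oc, &pkt);   /* takes ownership of pkt's data */
}
If the encoder was never producing packets in the first place because of bad pts, fixing the pts as above has to come first; flushing only returns what was successfully buffered.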
If someone could rework muxing.c to send a finite number of audio/video frames, i.e. not until duration > X, but until it has sent, say, 20 video and 100 audio frames (so that the number of frames is what matters, not the video time I end up with), and then encode/mux all the frames with proper video/audio pts, working with theora/ogg and flushing where needed, that would probably solve all of my issues. I'm sure that for an expert FFmpeg'er, modifying muxing.c to address all those things would be a pretty quick exercise, and it could help more than one confused person.
Thanks!