
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
Other articles (70)
-
The regular Cron tasks of the farm
1 December 2010
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance of the farm at regular intervals. Combined with a system Cron on the farm's central site, this makes it easy to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)
-
Permissions overridden by plugins
27 April 2010
Mediaspip core
autoriser_auteur_modifier(), so that visitors are able to edit their information on the authors page
-
Libraries and binaries specific to video and audio processing
31 January 2010
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries: FFMpeg: the main encoder, able to transcode almost every type of video and audio file into formats readable on the Internet (see this tutorial for installing it); Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
Complementary and optional binaries: flvtool2: (...)
On other sites (8830)
-
Revision 23340: PHP 7.1 compatibility: minipipe() uses call_user_func(), which cannot...
18 January 2017, by marcimat@rezo.net
"call_user_func() and call_user_func_array() will now always throw errors when trying to call a method whose signature has references. Previously this sometimes worked if call_user_func() was used outside a namespace."
-
How can I read a live webstream in Java using Xuggle? (I can do it in ffmpeg, just not Xuggle)
25 February 2013, by Grammin
So if I run:
ffmpeg -t 10 -re -i "rtmp://170.93.143.150/rtplive/ app=rtplive/ playpath=e000990f025f0075004d823633235daa swfUrl=http://www.chart.state.md.us/video/mediaplayer/player.swf pageUrl=http://www.chart.state.md.us/video/video.asp?feed=e000990f025f0075004d823633235daa stop=5000 flashver=`LNX 11,2,202,262` live=true" test.flv -report
It gives me a 5-second snapshot of video from that webstream, which gets put into test.flv.
Now I would like to do the same thing in Java using Xuggle, except every time I try to open the container it errors out on me and sets x to -1:
public IMediaReader grabStream(IMediaReader reader) throws IOException
{
    String rtmp = "rtmp://170.93.143.150/rtplive/";
    rtmp = rtmp + " app=rtplive/";
    rtmp = rtmp + " playpath=e000990f025f0075004d823633235daa";
    rtmp = rtmp + " swfUrl=http://www.chart.state.md.us/video/mediaplayer/player.swf";
    rtmp = rtmp + " pageUrl=http://www.chart.state.md.us/video/video.asp?feed=e000990f025f0075004d823633235daa";
    rtmp = rtmp + " flashver=`LNX 11,2,202,262`";
    rtmp = rtmp + " live=true";
    IContainer container = IContainer.make();
    IMediaReader newReader = ToolFactory.makeReader(container);
    int x = container.open(rtmp, IContainer.Type.READ, null, true, false);
    if (x < 0)
    {
        IError ie = IError.make(x);
        System.out.println("Open error: " + ie.getType().toString());
        throw new RuntimeException("failed to open with error " + x);
    }
    return newReader;
}
Maybe the best way to do it is to stream in ffmpeg to a Xuggle container using an InputStream somehow? Or maybe there is another way to stream in a webstream to Java?
-
How do I create a stereo MP3 file with the latest version of ffmpeg?
17 June 2016, by Sean
I'm updating my code from the older version of ffmpeg (53) to the newer (54/55). Code that did work has now been deprecated or removed, so I'm having problems updating it.
Previously I could create a stereo MP3 file using a sample format called:
SAMPLE_FMT_S16
That matched up perfectly with my source stream. This has now been replaced with
AV_SAMPLE_FMT_S16
Which works fine for mono recordings, but when I try to create a stereo MP3 file it bugs out at avcodec_open2 with:
"Specified sample_fmt is not supported."
Through trial and error I’ve found that using
AV_SAMPLE_FMT_S16P
...is accepted by avcodec_open2, but when I get through and create the MP3 file the sound is very distorted - it sounds about 2 octaves lower than usual with a massive hum in the background - here's an example recording:
http://hosting.ispyconnect.com/example.mp3
I've been told by the ffmpeg guys that this is because I now need to manually deinterleave my byte stream before calling:
avcodec_fill_audio_frame
How do I do that? I've tried using the swresample library without success, and I've tried manually feeding L/R data into avcodec_fill_audio_frame, but the results I'm getting sound exactly the same as without deinterleaving.
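For reference, the deinterleave itself can be as simple as the sketch below (plain C; the function name and buffers are placeholders, not variables from the code that follows). For AV_SAMPLE_FMT_S16P, avcodec_fill_audio_frame expects one contiguous buffer holding all the left samples followed by all the right samples, so each chunk of interleaved L R L R ... data has to be rearranged first:
#include <stdint.h>

/* Sketch: turn nb_samples frames of interleaved S16 stereo (L0 R0 L1 R1 ...)
   into the packed-planar layout used for AV_SAMPLE_FMT_S16P: all left
   samples first, then all right samples, in one buffer that can then be
   handed to avcodec_fill_audio_frame(). */
static void deinterleave_s16_stereo(const int16_t *in, int16_t *out, int nb_samples)
{
    for (int i = 0; i < nb_samples; i++) {
        out[i]              = in[2 * i];      /* left plane  */
        out[nb_samples + i] = in[2 * i + 1];  /* right plane */
    }
}
In principle, swr_convert() from libswresample performs the same AV_SAMPLE_FMT_S16 to AV_SAMPLE_FMT_S16P conversion.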
Here is my code for encoding:
void add_audio_sample(AudioWriterPrivateData^ data, BYTE* soundBuffer, int soundBufferSize)
{
    libffmpeg::AVCodecContext* c = data->AudioStream->codec;
    memcpy(data->AudioBuffer + data->AudioBufferSizeCurrent, soundBuffer, soundBufferSize);
    data->AudioBufferSizeCurrent += soundBufferSize;
    uint8_t* pSoundBuffer = (uint8_t *)data->AudioBuffer;
    DWORD nCurrentSize = data->AudioBufferSizeCurrent;
    libffmpeg::AVFrame *frame;
    int got_packet;
    int ret;
    int size = libffmpeg::av_samples_get_buffer_size(NULL, c->channels,
                                                     data->AudioInputSampleSize,
                                                     c->sample_fmt, 1);
    while (nCurrentSize >= size) {
        frame = libffmpeg::avcodec_alloc_frame();
        libffmpeg::avcodec_get_frame_defaults(frame);
        frame->nb_samples = data->AudioInputSampleSize;
        ret = libffmpeg::avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt, pSoundBuffer, size, 1);
        if (ret < 0)
        {
            throw gcnew System::IO::IOException("error filling audio");
        }
        //audio_pts = (double)audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;
        libffmpeg::AVPacket pkt = { 0 };
        libffmpeg::av_init_packet(&pkt);
        ret = libffmpeg::avcodec_encode_audio2(c, &pkt, frame, &got_packet);
        if (ret < 0)
            throw gcnew System::IO::IOException("error encoding audio");
        if (got_packet) {
            pkt.stream_index = data->AudioStream->index;
            if (pkt.pts != AV_NOPTS_VALUE)
                pkt.pts = libffmpeg::av_rescale_q(pkt.pts, c->time_base, c->time_base);
            if (pkt.duration > 0)
                pkt.duration = av_rescale_q(pkt.duration, c->time_base, c->time_base);
            pkt.flags |= AV_PKT_FLAG_KEY;
            if (libffmpeg::av_interleaved_write_frame(data->FormatContext, &pkt) != 0)
                throw gcnew System::IO::IOException("unable to write audio frame.");
        }
        nCurrentSize -= size;
        pSoundBuffer += size;
    }
    memcpy(data->AudioBuffer, data->AudioBuffer + data->AudioBufferSizeCurrent - nCurrentSize, nCurrentSize);
    data->AudioBufferSizeCurrent = nCurrentSize;
}
Would love to hear any ideas - I've been trying to get this working for 3 days now :(