
Media (1)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
Other articles (75)
-
Videos
21 April 2011, by
Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 "video" tag.
One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one), and that each browser natively supports only certain video formats.
Its main advantage is native browser support for video, which removes the need for Flash and (...)
-
Support for all media types
10 April 2011
Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether of type: images (png, gif, jpg, bmp and others); audio (MP3, Ogg, Wav and others); video (Avi, MP4, Ogv, mpg, mov, wmv and others); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
Authorizations overridden by plugins
27 April 2010, by
MediaSPIP core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page
On other sites (8558)
-
FFmpeg memory growing in avcodec_decode_video2
21 July 2015, by Wendy Pan
I am using C# to call the FFmpeg library.
I use the example code to decode a video in MP4 format. However, I found that memory grows a lot (about 10 MB) when I call the avcodec_decode_video2 function.
The memory keeps growing until I call the avcodec_close(&CodecContext) function.
I suppose the reason may be that the AVCodecContext allocates about 10 MB of data during the call to avcodec_decode_video2, and that this memory is not released until the AVCodecContext is closed.
I do not know if I am right.
The main code follows.
private bool readNextBuffer(int index)
{
    // Read frames into the buffer
    //m_codecContext->refcounted_frames = 0;
    int frameNumber = 0;
    FrameBuffer[index].Clear();
    GC.Collect();
    while (frameNumber < m_bufferSize)
    {
        AVPacket Packet; // video packet
        AVPacket* pPacket = &Packet;
        //AVFrame* pDecodedFrame = FFmpegInvoke.avcodec_alloc_frame();
        AVFrame* pDecodedFrame = FFmpegInvoke.av_frame_alloc();
        if (FFmpegInvoke.av_read_frame(m_pFormatContext, pPacket) < 0)
        {
            Console.WriteLine("The end of the video.");
            break;
        }
        if (pPacket->stream_index == pStream->index)
        {
            // Decode a video frame
            int gotPicture = 0;
            int size = FFmpegInvoke.avcodec_decode_video2(m_codecContext, pDecodedFrame, &gotPicture, pPacket);
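            // Editor's note: avcodec_decode_video2 also keeps decoder-internal
            // state (reference frames and other buffers) alive inside
            // m_codecContext; that memory is only released by avcodec_close,
            // which matches the growth described above.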
            if (size < 0)
            {
                Console.WriteLine("End of the video.");
                //throw new Exception(string.Format("Error while decoding frame {0}", frameNumber));
                break;
            }
            if (gotPicture == 1)
            {
                // Convert the decoded frame into the frame buffer
                if (convertFrame(pDecodedFrame, index))
                {
                    frameNumber++;
                }
                else
                {
                    Console.WriteLine("Error: convert failed.");
                }
            }
        }
        FFmpegInvoke.av_frame_free(&pDecodedFrame);
        FFmpegInvoke.av_free_packet(pPacket);
        GC.Collect();
    }
    nowFrameIndex = 0;
    return FrameBuffer.Count > 0;
}
-
Is there a way to use ffmpeg audio filters to automatically synchronize 2 streams with similar content
29 May 2015, by user3741412
I have a situation where I capture HD content via HDMI, with audio from a sound board that goes through an impedance drop into the microphone input of a camcorder. The same signal is split at line level to a "line in" jack on the same computer that is capturing the HDMI. Alternatively, I can capture the audio via USB from the sound board, which is probably the best plan, but it carries the same issue.
The point is that the line-in or USB capture will be much higher quality than the HDMI one, because the line out -> impedance change -> mic in path produces inferior quality: simply brushing the mic jack on the camera while trying to change the zoom (at close proximity) can cause noise on the recording.
So I can do this today:
- Take the good sound and the camera-captured sound, load each into Audacity, and fairly quickly use the time-shift tool to fit the good audio exactly against the questionable audio from the HDMI capture, then cut the good audio to the exact length of the video. Then I can use ffmpeg or other video-editing software to replace the questionable audio with the better audio.
But while somewhat quick and easy, this always involves a bit of human error and time, and I'd like to automate it if possible, as the process is repeated at least weekly throughout the year.
Does anyone have a suggestion as to whether any of these ideas have merit, or could suggest another approach?
-
I suspect, but have yet to confirm, that the system timestamp of the start time may be recorded both in audio captured with something like Audacity (or with the USB capture tool from the sound board) and in the HDMI MPEG-2 video. I tried ffprobe on a couple of Audacity-captured .wav files but didn't see anything in the results about such a time code; perhaps other audio formats or other probing tools include this information. Can anyone advise whether this is common with any particular capture tools or file formats?
- If so, I think I could get the best results by extracting this information and then using simple adelay and atrim filters in ffmpeg to sync the two sources reliably in one ffmpeg call (a sample call is sketched after this list). This is all theoretical for me right now; I've never tried either of these filters. I'm just trying to avoid blind alleys by asking for advice up front.
-
If such timestamps are not embedded, I could possibly use the file-system timestamp for the same idea expressed in 1a, but I suspect the file-open step of the two capture tools may have different inherent delays. Possibly these delays will prove nearly constant, so the approach could work with a built-in constant anticipation delay, but that sounds messy and less reliable than idea 1. Still, I'd take it if it turned out to be reasonably reliable.
-
Are there any ffmpeg or general digital-audio experts out there who know of filters that can be applied to the actual data to look for similarities? For example: normalize the peak amplitudes (or normalize both streams to some RMS value), then step through a short 10-second snippet of audio, repeatedly shifting one time stream 0.01 s against the other, subtracting the two, and looking for a minimum (a rough sketch of this search appears at the end of this question). It sounds like it could take a while, but if it could finish in under a minute and be reliable, I suspect it could work. I have only rudimentary knowledge of audio streams, so perhaps what I suggest is simply not plausible, but since each stream starts from the same source I think there should be a chance. I am way out of my depth as to how to go down this road, so if someone knows such magic, or can throw me the names of filters and example calls, I can explore whether I can make it work.
-
Any hardware-level suggestions for taking a line-level output down to a mic-level input without the problems I am seeing with a simple in-line impedance-drop module, so that I can simply rely on the audio from the HDMI?
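For ideas 1 and 1a, here are sketches of both steps, with hypothetical filenames, offset (1270 ms) and duration (3600 s); whether a creation_time tag is present at all depends on the capture tool. First, checking a capture for an embedded start timestamp:

    ffprobe -v error -show_entries format_tags=creation_time -of default=noprint_wrappers=1 capture.wav

Then the one-call sync: delay the good audio, trim it to the video length, and mux it in place of the camera audio:

    ffmpeg -i hdmi_capture.mp4 -i good_audio.wav \
        -filter_complex "[1:a]adelay=1270|1270,atrim=duration=3600[a]" \
        -map 0:v -map "[a]" -c:v copy synced.mp4

adelay takes one millisecond delay per channel; if the good audio instead started before the video, its head would be cut with atrim=start= rather than delayed.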
Thanks in advance for any pointers or suggestions!
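What idea 3 describes is essentially a brute-force cross-correlation. Below is a minimal sketch of the search loop in Java, with hypothetical names, assuming both tracks are already decoded to mono float PCM at the same sample rate and level-normalized as suggested; it looks for a maximum of the product rather than a minimum of the difference, which finds the same alignment under that normalization.

// Slide a short window of the good audio across the HDMI audio in
// 0.01 s hops (hop = sampleRate / 100) and keep the best-matching shift.
static int bestShift(float[] hdmiTrack, float[] goodWindow, int hop) {
    int best = 0;
    double bestScore = Double.NEGATIVE_INFINITY;
    for (int shift = 0; shift + goodWindow.length <= hdmiTrack.length; shift += hop) {
        double score = 0.0;
        for (int i = 0; i < goodWindow.length; i++) {
            score += (double) hdmiTrack[shift + i] * goodWindow[i];
        }
        if (score > bestScore) {
            bestScore = score;
            best = shift;
        }
    }
    return best; // offset in samples; divide by the sample rate for seconds
}

A 10-second window at 48 kHz checked against a minute of track is roughly 5,000 shifts of 480,000 multiply-adds each, comfortably inside the under-a-minute budget; the resulting offset feeds directly into the adelay call sketched above.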
-
FFmpegMediaPlayer : findLibrary returned null
8 August 2015, by IceJOKER
I use https://github.com/wseemann/FFmpegMediaPlayer in my application, but some Android devices throw an exception:
java.lang.ExceptionInInitializerError
    at ru.mypackage.PlayService.initPlayer(PlayService.java:74)
    at ru.mypackage.PlayService.onCreate(PlayService.java:68)
    at android.app.ActivityThread.handleCreateService(ActivityThread.java:1949)
    at android.app.ActivityThread.access$2500(ActivityThread.java:117)
    at android.app.ActivityThread$H.handleMessage(ActivityThread.java:989)
    at android.os.Handler.dispatchMessage(Handler.java:99)
    at android.os.Looper.loop(Looper.java:130)
    at android.app.ActivityThread.main(ActivityThread.java:3687)
    at java.lang.reflect.Method.invokeNative(Native Method)
    at java.lang.reflect.Method.invoke(Method.java:507)
    at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:867)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:625)
    at dalvik.system.NativeStart.main(Native Method)
Caused by: java.lang.UnsatisfiedLinkError: Couldn't load avutil: findLibrary returned null
    at java.lang.Runtime.loadLibrary(Runtime.java:429)
    at java.lang.System.loadLibrary(System.java:554)
    at wseemann.media.FFmpegMediaPlayer.<clinit>(FFmpegMediaPlayer.java:620)
    ... 13 more
Can somebody explain to me what's wrong there?
On my device and some other devices the app works fine, but on some devices (for example the Galaxy Ace (GT-S5830i), Android 2.3.3 - 2.3.7) it throws the exception.
P.S. I already understand the "lib" prefix (http://developer.android.com/intl/ru/reference/java/lang/System.html#mapLibraryName(java.lang.String)).
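findLibrary returning null usually means the APK contains no libavutil.so built for that device's ABI; the Galaxy Ace is an ARMv6 (armeabi) device, and prebuilt FFmpegMediaPlayer binaries may only cover newer ABIs. A minimal guard, as a sketch (the fallback behaviour is the app's choice), would catch the error instead of crashing in the static initializer:

// Probe the first native library the player loads (per the stack trace)
// before constructing wseemann.media.FFmpegMediaPlayer.
private static boolean ffmpegAvailable() {
    try {
        System.loadLibrary("avutil");
        return true;
    } catch (UnsatisfiedLinkError e) {
        // No .so bundled for this device's ABI (e.g. ARMv6/armeabi).
        return false;
    }
}

// initPlayer() can then fall back to android.media.MediaPlayer
// when ffmpegAvailable() returns false.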