
Media (91)
-
Head down (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Echoplex (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Discipline (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Letting you (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
1 000 000 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
999 999 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (92)
-
Updating from version 0.1 to 0.2
24 June 2013 — An explanation of the notable changes involved in moving MediaSPIP from version 0.1 to version 0.3. What's new?
Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for retrieving metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer (...)
-
Customising by adding your logo, banner or background image
5 September 2013 — Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Websites made with MediaSPIP
2 May 2011 — This page lists some websites based on MediaSPIP.
On other sites (11444)
-
Android editing video files
25 November 2013, by Kernald — I'm working on an Android application in which the user should be able to edit a video (cut and reorder some parts of a video taken with the phone).
I have no idea how to do this: the media package doesn't seem to contain anything related to this. The videoeditor package could probably help me, but it's not public in the SDK. ffmpeg could maybe be a solution, but I'd like to avoid using the NDK if I can.
Any directions on how to do it?
-
How to make a live RTSP player using ffmpeg?
20 April 2018, by geeeek — I am creating an RTSP live viewer using ffmpeg and have a question about video delay.
When live RTSP playback starts, the video delay is about 2 seconds in typical media players (VLC, GOM). However, when the playback speed is increased, the delay drops to about 300 ms.
I do not know how to speed up playback, or otherwise reduce this delay, in my ffmpeg-based player.
It seems like I could use something like av_seek_frame, but it does not work.
How can I reduce live RTSP video delay with ffmpeg? In summary, I'd like av_read_frame to return the latest frame of the live RTSP stream.
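As a hedged illustration only (not part of the question or any answer on the original thread): the options -rtsp_transport tcp, -fflags nobuffer, -flags low_delay, -probesize 32 and -analyzeduration 0 are standard ffmpeg/ffplay flags commonly used to trim RTSP start-up buffering. The C# sketch below simply launches ffplay with them; the stream URL and class name are placeholders.
using System.Diagnostics;
class LowLatencyRtspViewer
{
    static void Main()
    {
        // Placeholder URL; replace with the actual RTSP source.
        string url = "rtsp://example.com/stream";
        var ffplay = new Process
        {
            StartInfo = new ProcessStartInfo
            {
                FileName = "ffplay",
                // TCP transport avoids packet reordering; nobuffer/low_delay disable
                // input buffering; small probesize/analyzeduration shorten stream probing.
                Arguments = "-rtsp_transport tcp -fflags nobuffer -flags low_delay " +
                            "-probesize 32 -analyzeduration 0 \"" + url + "\"",
                UseShellExecute = false,
                CreateNoWindow = true
            }
        };
        ffplay.Start();
        ffplay.WaitForExit();
    }
}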
-
How do I get FFMPEG to build a video using the same timing as my input?
15 April 2016, by Forest J. Handford — I'm trying to create a video of the screen actions a user takes by piping screenshots to FFMPEG from a C# console application. I'm sending 10 frames per second. The final video has exactly as many frames as I sent (i.e. a 10-second video has 100 frames). The duration of the video, however, does not match: with the code below I get 7m 47s of video from 490751 ms of input. I've found that PTS gets me a little closer, but it feels like I'm doing something wrong.
private const int VID_FRAME_FPS = 10;
private const double PTS = 2.4444;
/// <summary>
/// Generates the Videos by gathering frames and processing via FFMPEG.
/// Deletes the generated Frame images after successfully compiling the video.
/// </summary>
public static void RecordScreen(string pathToOutput)
{
Logger.log.Info("Launching FFMPEG ....");
String arg = "-f image2pipe -i pipe:.bmp -filter:v \"setpts = " + PTS + " * PTS\" -r " + VID_FRAME_FPS + " -pix_fmt yuv420p -qscale:v 5 -vcodec libvpx -bufsize 30000k -y \"" + pathToOutput + "\\VidOut.webm\"";
//String arg = "-f image2pipe -i pipe:.bmp -filter:v \"setpts = " + PTS + " * PTS\" -r " + VID_FRAME_FPS + " -pix_fmt yuv420p -qscale:v 5 -vcodec libx264 -bufsize 30000k -y \"" + pathToOutput + "\\VidOut.mp4\"";
Process launchingFFMPEG = new Process
{
StartInfo = new ProcessStartInfo
{
FileName = "ffmpeg",
Arguments = arg,
UseShellExecute = false,
CreateNoWindow = true,
RedirectStandardInput = true
}
};
launchingFFMPEG.Start();
System.Drawing.Image img;
Stopwatch stopWatch = Stopwatch.StartNew(); // creates and starts the Stopwatch instance
int sleep;
Stopwatch vidTime = Stopwatch.StartNew();
do
{
img = Capture.GetScreen();
img.Save(launchingFFMPEG.StandardInput.BaseStream, System.Drawing.Imaging.ImageFormat.Bmp);
img.Dispose();
sleep = 10 * VID_FRAME_FPS - (int)stopWatch.ElapsedMilliseconds;
if (sleep > 0)
{
Logger.log.Info("Captured frame, sleeping " + sleep + " milliseconds.");
Thread.Sleep(sleep);
}
stopWatch.Restart();
} while (workerThread.IsAlive);
Logger.log.Debug("Video Time: " + vidTime.ElapsedMilliseconds);
launchingFFMPEG.StandardInput.Flush();
launchingFFMPEG.StandardInput.Close();
launchingFFMPEG.Close();
}
Is there a way to do this without PTS? If I need PTS, what is the correct value? It seems that a PTS of 2.565656 is close to correct.
All the related documentation points to just using -r (the frame rate option), but that doesn't work (at least the way I'm using it).
Note: I'm only using H.264 for debugging with ffprobe; I plan to switch back to WebM when this is resolved. I'm trying to avoid H.264 and MP4 patents.
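A hedged aside, not drawn from the question or an answer: the piped BMP stream carries no timestamps, so ffmpeg times it at the image2pipe demuxer's default input rate rather than at the 10 fps actually being captured, which is one common cause of this kind of duration mismatch. Declaring the capture rate as an input option (-framerate before -i) is the usual way to avoid a setpts factor entirely. A minimal sketch reusing VID_FRAME_FPS and pathToOutput from the code above:
// Sketch only: declare the rate of the incoming piped frames on the input side
// instead of rescaling their timestamps afterwards with setpts.
String arg = "-framerate " + VID_FRAME_FPS + " -f image2pipe -i pipe:.bmp" +
             " -r " + VID_FRAME_FPS + " -pix_fmt yuv420p -qscale:v 5" +
             " -vcodec libvpx -bufsize 30000k -y \"" + pathToOutput + "\\VidOut.webm\"";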