
Other articles (59)
-
Sites built with MediaSPIP
2 May 2011. This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
-
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
From upload to the final video [standalone version]
31 January 2010. The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions beyond the normal behavior are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (6046)
-
How do I get FFMPEG to build a video using the same timing as my input?
15 April 2016, by Forest J. Handford. I'm trying to create a video of the screen actions a user takes by piping screenshots to FFMPEG from a C# console application. I'm sending 10 frames per second. The final video has exactly as many frames as I sent (i.e. a 10-second video has 100 frames). The duration of the video, however, does not match. With the below code I get 7m 47s worth of video from 490751 ms of input. I've found that PTS gets me a little closer, but it feels like I'm doing something wrong.
private const int VID_FRAME_FPS = 10;
private const double PTS = 2.4444;

/// <summary>
/// Generates the Videos by gathering frames and processing via FFMPEG.
/// Deletes the generated Frame images after successfully compiling the video.
/// </summary>
public static void RecordScreen(string pathToOutput)
{
    Logger.log.Info("Launching FFMPEG ....");
    String arg = "-f image2pipe -i pipe:.bmp -filter:v \"setpts = " + PTS + " * PTS\" -r " + VID_FRAME_FPS + " -pix_fmt yuv420p -qscale:v 5 -vcodec libvpx -bufsize 30000k -y \"" + pathToOutput + "\\VidOut.webm\"";
    //String arg = "-f image2pipe -i pipe:.bmp -filter:v \"setpts = " + PTS + " * PTS\" -r " + VID_FRAME_FPS + " -pix_fmt yuv420p -qscale:v 5 -vcodec libx264 -bufsize 30000k -y \"" + pathToOutput + "\\VidOut.mp4\"";
    Process launchingFFMPEG = new Process
    {
        StartInfo = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = arg,
            UseShellExecute = false,
            CreateNoWindow = true,
            RedirectStandardInput = true
        }
    };
    launchingFFMPEG.Start();
    System.Drawing.Image img;
    Stopwatch stopWatch = Stopwatch.StartNew(); // creates and starts the Stopwatch instance
    int sleep;
    Stopwatch vidTime = Stopwatch.StartNew();
    do
    {
        img = Capture.GetScreen();
        img.Save(launchingFFMPEG.StandardInput.BaseStream, System.Drawing.Imaging.ImageFormat.Bmp);
        img.Dispose();
        sleep = 10 * VID_FRAME_FPS - (int)stopWatch.ElapsedMilliseconds; // 100 ms per frame at 10 fps
        if (sleep > 0)
        {
            Logger.log.Info("Captured frame, sleeping " + sleep + " milliseconds.");
            Thread.Sleep(sleep);
        }
        stopWatch.Restart();
    } while (workerThread.IsAlive);
    Logger.log.Debug("Video Time: " + vidTime.ElapsedMilliseconds);
    launchingFFMPEG.StandardInput.Flush();
    launchingFFMPEG.StandardInput.Close();
    launchingFFMPEG.Close();
}

Is there a way to do this without PTS? If I need PTS, what is the correct value? It seems that a PTS of 2.565656 is close to correct.
All the related documentation points to just using -r (the frame rate option), but that doesn't work (the way I'm using it).
Note: I'm only using H.264 for debugging with ffprobe; I plan to switch back to WebM when this is resolved. I'm trying to avoid the H.264 and MP4 patents.
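A plausible explanation, inferred from the numbers above rather than stated in the question: the image2pipe demuxer assumes a default input rate of 25 fps when none is declared, so the roughly 4900 frames captured over 490751 ms play back in about 3m 16s, and the setpts multiplier then stretches that toward the observed 7m 47s. Under that assumption the exact multiplier would be 25/10 = 2.5, which also fits the observation that 2.565656 is close. The usual fix is to declare the real input rate with -framerate (or -r placed before -i, where it acts on the input) and drop both the setpts filter and the output -r. A sketch of the adjusted command, keeping the question's other options unchanged:

ffmpeg -f image2pipe -framerate 10 -i pipe:.bmp -pix_fmt yuv420p -qscale:v 5 -vcodec libvpx -bufsize 30000k -y VidOut.webm

In the C# code this just means moving the rate into the argument string before -i and deleting the -filter:v "setpts=..." and -r parts; frames then pass through one-to-one and the duration should match the wall-clock capture time.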
-
imagemagick gradient mask file creation
6 April 2016, by lang2. I'm playing with this creative script: http://www.fmwconcepts.com/imagemagick/transitions/. The plan is to mimic what the script does with ffmpeg and generate videos with transition effects between pictures. My current understanding is this:
- I have two pictures, A and B.
- I need a number of in-between pictures (say 15) that are partially A and partially B.
- To do that I use the command composite -compose src-over A.jpg B.jpg mask-n.jpg out.jpg.
- During the process, the mask-n.jpg files are generated automatically, gradually changing from all black to all white.
- Depending on the mathematical equations used, the transition effect looks different.
In one of the examples, Fred, the author, gave this:
convert -size 128x128 gradient: maskfile.jpg
This will generate an image that is partially black and partially white (a gradient running from white to black). For the transition to work, I'll need an all-white one, an all-black one, and a couple of others in between. What's the magical command to do that?
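One way to produce the endpoints and the in-betweens, assuming a plain cross-dissolve is the goal (the size and file names below simply follow the question): a solid gray canvas acts as a uniform blend mask, so stepping the gray level from 0% to 100% across the 15 intermediate frames walks from all black to all white, for example:

convert -size 128x128 xc:black mask-0.jpg
convert -size 128x128 xc:"gray(33%)" mask-5.jpg
convert -size 128x128 xc:"gray(67%)" mask-10.jpg
convert -size 128x128 xc:white mask-15.jpg

For wipe-style transitions the same progression can instead be driven from the gradient image, e.g. with -threshold n% to slide a hard edge across the frame, or -level to slide a soft one.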
-
Creating a video from the data of a server-side script
8 March 2016, by Peter Leupold. My plan is to display the data that a server-side script generates as a video on my web page. So far my approach is the following:
- A three-dimensional integer array in a C program is used to accumulate the image data. Its dimensions are width, height, and color (R, G, and B).
- The array is written to a ppm file.
- The next picture is accumulated and written, and so on.
- A script merges the ppm files into an mp4 video with ffmpeg.
Basically this works, but faster would of course be nicer. I would appreciate proposals for fundamentally different approaches, as well as help on the following details:
- Is there a file format as simple as ppm that uses hex codes for colors instead of triplets? This would reduce the size of my array as well as the number of write operations.
- Do I lose much time if I print every single value to the file with an fprintf call instead of accumulating lines into a string? Or do compilers optimize this kind of sequence of writes?
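On the two detail questions, a sketch under the assumption that the real goal is fewer and smaller writes: binary PPM (magic number P6 instead of the ASCII P3) keeps the same trivial header but stores each sample as one raw byte, so a whole frame is a single fwrite instead of thousands of fprintf calls, and the file is roughly a quarter the size of its ASCII-triplet equivalent. The dimensions and the test pattern here are placeholders:

#include <stdio.h>
#include <stdlib.h>

enum { W = 640, H = 480 };

/* Write one frame as binary PPM (P6): a short text header, then raw RGB bytes. */
static int write_ppm_p6(const char *path, const unsigned char *rgb)
{
    FILE *f = fopen(path, "wb");
    if (f == NULL)
        return -1;
    fprintf(f, "P6\n%d %d\n255\n", W, H); /* magic, width height, maxval */
    fwrite(rgb, 3, (size_t)W * H, f);     /* one write for all W*H pixels */
    return fclose(f);
}

int main(void)
{
    /* A flat byte buffer replaces the 3-D int array: 3 bytes per pixel. */
    unsigned char *rgb = malloc((size_t)3 * W * H);
    if (rgb == NULL)
        return EXIT_FAILURE;
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            unsigned char *px = rgb + 3 * ((size_t)y * W + x);
            px[0] = (unsigned char)(x % 256); /* R: placeholder test pattern */
            px[1] = (unsigned char)(y % 256); /* G */
            px[2] = 128;                      /* B */
        }
    }
    int rc = write_ppm_p6("frame-0000.ppm", rgb);
    free(rgb);
    return rc == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}

As for fprintf: stdio buffers output, so per-value calls do not mean one system call per value, but each call still pays for format-string parsing, which writing the raw buffer at once avoids. ffmpeg can also read a stream of P6 images straight from a pipe (-f image2pipe -i -), which would skip the intermediate files entirely.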