
Other articles (15)
-
Installation in farm mode
4 February 2011
Farm mode makes it possible to host several MediaSPIP-type sites while installing its functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
To begin with, you must have installed the same files as the installation (...)
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Definable image and logo sizes
9 February 2011
In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can vary from one theme to another, they can be defined directly in the theme, which saves the user from having to configure them manually after changing the site's appearance.
These image sizes are also available in the specific configuration of MediaSPIP Core. The maximum size of the site logo in pixels, allowing (...)
On other sites (4525)
-
Architecture of a video-based service for mobile phones
27 June 2015, by David Azar
I guess this is more of a conceptual question than a technical one.
I'm trying to figure out the best way to upload short videos to a server and also be able to download them and watch them on both Android and iOS.
Let's focus on Android for the moment.
I've done some experiments, and my results have been:
-
I'm able to compress a 12-14 MB video down to 500 KB using the FFMPEG lib, with pretty good quality, but it takes about 12 seconds (a generic example command is sketched after this list).
-
Next, I'm uploading those videos to my Parse backend as a ParseFile to store them.
-
Finally, I can download them and watch them with no problem using a VideoView widget.
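For reference, a generic ffmpeg command for this kind of compression might look like the one below; the file names, CRF value, and preset are illustrative assumptions rather than the exact settings used in the experiment above.
ffmpeg -i input.mp4 -c:v libx264 -preset veryfast -crf 28 -c:a aac -b:a 96k output.mp4
The preset mainly trades encoding time against compression efficiency, while the CRF value trades quality against file size.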
For the tests I've been running, these are great results, but I want to see if there is a better way to manage and scale all of this.
My questions are:
-
Is there a better, lighter way to compress video?
-
Is Parse the right way to go?
-
How can I stream videos instead of downloading them and storing them on local storage before playing them? I know downloading will make my app use significant disk space, and I don't want that.
-
How do big companies handle these kinds of tasks?
I've heard Amazon S3 is a good fit for projects like this one, and so is Google Cloud Platform. I want to understand the best approach before building everything, so I can do it the right way and also provide the best possible user experience for watching these videos.
-
-
Why, when using ffmpeg to create an AVI video file from images in real time, does the AVI file play back with purple, noisy colors?
30 June 2015, by Brubaker Haim
This is my Ffmpeg class, which I wrote some time ago:
using System;
using System.Windows.Forms;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Drawing;
using System.IO.Pipes;
using System.Runtime.InteropServices;
using System.Diagnostics;
using System.IO;
using DannyGeneral;
namespace Manager
{
class Ffmpeg
{
NamedPipeServerStream p;
String pipename = "mytestpipe";
System.Diagnostics.Process process;
string ffmpegFileName = "ffmpeg.exe";
string workingDirectory;
public Ffmpeg()
{
workingDirectory = Path.GetDirectoryName(Application.ExecutablePath);
Logger.Write("workingDirectory: " + workingDirectory);
if (!Directory.Exists(workingDirectory))
{
Directory.CreateDirectory(workingDirectory);
}
ffmpegFileName = Path.Combine(workingDirectory, ffmpegFileName);
Logger.Write("FfmpegFilename: " + ffmpegFileName);
}
public void Start(string pathFileName, int BitmapRate)
{
try
{
string outPath = pathFileName;
p = new NamedPipeServerStream(pipename, PipeDirection.Out, 1, PipeTransmissionMode.Byte);
ProcessStartInfo psi = new ProcessStartInfo();
psi.WindowStyle = ProcessWindowStyle.Hidden;
psi.UseShellExecute = false;
psi.CreateNoWindow = false;
psi.FileName = ffmpegFileName;
psi.WorkingDirectory = workingDirectory;
psi.Arguments = @"-f rawvideo -pix_fmt yuv420p -video_size 1920x1080 -i \\.\pipe\mytestpipe -map 0 -c:v mpeg4 -r " + BitmapRate + " " + outPath;
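// The arguments above tell ffmpeg to read raw, headerless yuv420p frames of
// 1920x1080 pixels from the named pipe and encode them with the mpeg4 encoder
// at BitmapRate frames per second into the output file.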
process = Process.Start(psi);
process.EnableRaisingEvents = false;
psi.RedirectStandardError = true;
p.WaitForConnection();
}
catch (Exception err)
{
Logger.Write("Exception Error: " + err.ToString());
}
}
public void PushFrame(Bitmap bmp)
{
try
{
int length;
// Lock the bitmap's bits.
//bmp = new Bitmap(1920, 1080);
Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
//Rectangle rect = new Rectangle(0, 0, 1280, 720);
System.Drawing.Imaging.BitmapData bmpData =
bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadOnly,
bmp.PixelFormat);
int absStride = Math.Abs(bmpData.Stride);
// Get the address of the first line.
IntPtr ptr = bmpData.Scan0;
// Declare an array to hold the bytes of the bitmap.
//length = 3 * bmp.Width * bmp.Height;
length = absStride * bmpData.Height;
byte[] rgbValues = new byte[length];
//Marshal.Copy(ptr, rgbValues, 0, length);
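// Copy the bitmap into the managed buffer one row at a time, walking from the
// bottom row up; each source row j is copied to the same row index in rgbValues.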
int j = bmp.Height - 1;
for (int i = 0; i < bmp.Height; i++)
{
IntPtr pointer = new IntPtr(bmpData.Scan0.ToInt32() + (bmpData.Stride * j));
System.Runtime.InteropServices.Marshal.Copy(pointer, rgbValues, absStride * (bmp.Height - i - 1), absStride);
j--;
}
p.Write(rgbValues, 0, length);
bmp.UnlockBits(bmpData);
}
catch(Exception err)
{
Logger.Write("Error: " + err.ToString());
}
}
public void Close()
{
p.Close();
}
}
}
And I'm using it in Form1, in a button click event:
private void button1_Click(object sender, EventArgs e)
{
timer1.Start();
}
The screenshots directory is where I'm taking a screenshot every 100 ms, in the timer1 Tick event:
ScreenShot shot = new ScreenShot();
public static int counter = 0;
private void timer1_Tick(object sender, EventArgs e)
{
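// The timer fires every 100 ms (per the description above), so roughly ten
// screenshots are captured per second; after 1200 ticks (about two minutes)
// the capture stops.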
counter++;
shot.GetScreenShot(@"e:\screenshots\", "screenshot");
if (counter == 1200)
{
timer1.Stop();
}
}
I'm calling the PushFrame method from inside the ScreenShot class, where I save the screenshots.
Ffmpeg fmpeg;
Then:
fmpeg = new Ffmpeg();
fmpeg.Start(@"e:\screenshots\test.avi", 25);
And:
public Bitmap GetScreenShot(string folder, string name)
{
_screenShot = new Bitmap(GetScreen());
System.GC.Collect();
System.GC.WaitForPendingFinalizers();
string ingName = folder + name + Elgato_Video_Capture.counter.ToString("D6") + ".bmp";
_screenShot.Save(ingName);
fmpeg.PushFrame(_screenShot);
_screenShot.Dispose();
return _screenShot;
}
All the images on the hard disk are fine; I can open and edit them without any problems.
They are also all the same size. The end result is one big AVI file, 1.08 GB in size.
But when I play it, I see many windows running inside it very fast, all painted with a noisy purple color. Here is a screenshot from the video file while it is playing:
I think the problem is somewhere in the Ffmpeg class, where I pass the parameters to ffmpeg.exe:
psi.Arguments = @"-f rawvideo -pix_fmt yuv420p -video_size 1920x1080 -i \\.\pipe\mytestpipe -map 0 -c:v mpeg4 -r " + BitmapRate + " " + outPath;
I'm not sure what makes the AVI file look like that.
This is the resulting video file I got: https://www.youtube.com/watch?v=fdxPus-Xv1k&feature=youtu.be
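One possible direction, sketched under an assumption: the pipe is being fed raw GDI+ bitmap bytes, which are packed BGR/BGRA data, while the arguments declare the input as planar yuv420p, so the declared pixel format may not match the bytes actually written. If the captured bitmaps are 32 bits per pixel (an assumption, not confirmed above), the argument string would need to describe that layout instead, roughly along these lines:
psi.Arguments = @"-f rawvideo -pix_fmt bgra -video_size 1920x1080 -r " + BitmapRate + @" -i \\.\pipe\mytestpipe -c:v mpeg4 -q:v 5 " + outPath;  // -pix_fmt bgra and -q:v 5 are illustrative assumptions
Whether bgra, bgr24, or another value is correct depends on the actual PixelFormat of the bitmaps being pushed.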
-
Why am I getting a strange video file when using ffmpeg and pipes to create a video file in real time?
30 June 2015, by Brubaker Haim
The goal here is to create a compressed mp4 video file in real time.
I'm saving screenshots as bitmaps on my hard disk, and I want to create the mp4 file and compress it in real time. The problem is that the video file I end up with looks very strange.
This is the result: Strange Video File
This is the class where I'm using ffmpeg with its arguments:
using System;
using System.Windows.Forms;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Drawing;
using System.IO.Pipes;
using System.Runtime.InteropServices;
using System.Diagnostics;
using System.IO;
using DannyGeneral;
namespace Youtube_Manager
{
class Ffmpeg
{
NamedPipeServerStream p;
String pipename = "mytestpipe";
System.Diagnostics.Process process;
string ffmpegFileName = "ffmpeg.exe";
string workingDirectory;
public Ffmpeg()
{
workingDirectory = Path.GetDirectoryName(Application.ExecutablePath);
Logger.Write("workingDirectory: " + workingDirectory);
if (!Directory.Exists(workingDirectory))
{
Directory.CreateDirectory(workingDirectory);
}
ffmpegFileName = Path.Combine(workingDirectory, ffmpegFileName);
Logger.Write("FfmpegFilename: " + ffmpegFileName);
}
public void Start(string pathFileName, int BitmapRate)
{
try
{
string outPath = pathFileName;
p = new NamedPipeServerStream(pipename, PipeDirection.Out, 1, PipeTransmissionMode.Byte);
ProcessStartInfo psi = new ProcessStartInfo();
psi.WindowStyle = ProcessWindowStyle.Hidden;
psi.UseShellExecute = false;
psi.CreateNoWindow = false;
psi.FileName = ffmpegFileName;
psi.WorkingDirectory = workingDirectory;
psi.Arguments = @"-f rawvideo -pix_fmt yuv420p -video_size 1920x1080 -i \\.\pipe\mytestpipe -map 0 -c:v mpeg4 -r " + BitmapRate + " " + outPath;
//psi.Arguments = @"-framerate 1/5 -i -c:v libx264 -r 30 -pix_fmt yuv420p \\.\pipe\mytestpipe -map 0 -c:v mpeg4 -r" + BitmapRate + " " + outPath;
process = Process.Start(psi);
process.EnableRaisingEvents = false;
psi.RedirectStandardError = true;
p.WaitForConnection();
}
catch (Exception err)
{
Logger.Write("Exception Error: " + err.ToString());
}
}
public void PushFrame(Bitmap bmp)
{
try
{
int length;
// Lock the bitmap's bits.
//bmp = new Bitmap(1920, 1080);
Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
//Rectangle rect = new Rectangle(0, 0, 1280, 720);
System.Drawing.Imaging.BitmapData bmpData =
bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadOnly,
bmp.PixelFormat);
int absStride = Math.Abs(bmpData.Stride);
// Get the address of the first line.
IntPtr ptr = bmpData.Scan0;
// Declare an array to hold the bytes of the bitmap.
//length = 3 * bmp.Width * bmp.Height;
length = absStride * bmpData.Height;
byte[] rgbValues = new byte[length];
//Marshal.Copy(ptr, rgbValues, 0, length);
int j = bmp.Height - 1;
for (int i = 0; i < bmp.Height; i++)
{
IntPtr pointer = new IntPtr(bmpData.Scan0.ToInt32() + (bmpData.Stride * j));
System.Runtime.InteropServices.Marshal.Copy(pointer, rgbValues, absStride * (bmp.Height - i - 1), absStride);
j--;
}
p.Write(rgbValues, 0, length);
bmp.UnlockBits(bmpData);
}
catch(Exception err)
{
Logger.Write("Error: " + err.ToString());
}
}
public void Close()
{
p.Close();
}
}
}
Then I'm using the PushFrame method here, in this class:
public Bitmap GetScreenShot(string folder, string name)
{
_screenShot = new Bitmap(GetScreen());
System.GC.Collect();
System.GC.WaitForPendingFinalizers();
string ingName = folder + name + images.counter.ToString("D6") + ".bmp";
_screenShot.Save(ingName,System.Drawing.Imaging.ImageFormat.Bmp);
fmpeg.PushFrame(_screenShot);
_screenShot.Dispose();
return _screenShot;
}
The bitmaps on the hard disk are fine; I can open and edit them.
When I use the command prompt and type the ffmpeg command manually, it does compress and create an mp4 video file. But when I use my Ffmpeg class with the PushFrame method, it creates this strange video file.
This is a link to my OneDrive with 10 screenshot image files for testing:
Here is a sample screenshot from the video file while it is playing:
It looks very choppy. The bitmaps on the hard disk are each 1920x1080 with a bit depth of 32, but it doesn't look like that in the video file:
These are the arguments I'm using:
psi.Arguments = @"-f rawvideo -vcodec rawvideo -pix_fmt rgb24 -video_size 1920x1080 -i \\.\pipe\mytestpipe -map 0 -c:v mpeg4 -r " + BitmapRate + " " + outPath;
The video file is very small: only 1.24 MB.
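As a quick sanity check on the numbers (a sketch, assuming the 1920x1080 size and the 32-bit depth mentioned above):
// 1920x1080 at rgb24, which is what the arguments declare: 3 bytes per pixel.
int declaredFrameSize = 1920 * 1080 * 3;   // 6,220,800 bytes per frame
// 1920x1080 at 32 bits per pixel, which is what the screenshots are saved as: 4 bytes per pixel.
int actualFrameSize = 1920 * 1080 * 4;     // 8,294,400 bytes per frame
Every frame written to the pipe would then be a third larger than what ffmpeg expects, so ffmpeg keeps reading across frame boundaries, which is consistent with the broken picture described above.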