
Other articles (62)
-
General document management
13 May 2011
MédiaSPIP never modifies the original document uploaded to the site.
For each uploaded document it performs two successive operations: it creates an additional version that can easily be viewed online, while leaving the original available for download in case it cannot be read in a web browser; and it retrieves the original document's metadata to describe the file textually.
The tables below explain what MédiaSPIP can do (...)
-
Customizing by adding your logo, banner or background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013
Present the changes to your MédiaSPIP site, or news about your projects hosted on it, using the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: for a document of type news item, the fields offered by default are: publication date (customize the publication date) (...)
On other sites (8639)
-
Why does a video recorded using the ffmpeg command line play back very fast?
31 May 2013, by Revuen Ben Dror
This is my code in Form1:
private void StartRecording_Click(object sender, EventArgs e)
{
ffmp.Start("test.avi", 25);
timer1.Enabled = true;
}
The timer tick event:
private void timer1_Tick(object sender, EventArgs e)
{
using (bitmap = (Bitmap)ScreenCapture.CaptureScreen(true))
{
ffmp.PushFrame(bitmap);
}
}
Then this is the code in the Ffmpeg class:
namespace ScreenVideoRecorder
{
class Ffmpeg
{
NamedPipeServerStream p;
String pipename = "mytestpipe";
byte[] b;
//int i, j;
System.Diagnostics.Process process;
//IAsyncResult ar;
public Ffmpeg()
{
}
public void Start(string FileName, int BitmapRate)
{
p = new NamedPipeServerStream(pipename, PipeDirection.Out, 1, PipeTransmissionMode.Byte);
b = new byte[1920 * 1080 * 3]; // buffer for the R, G and B bytes of one 1920x1080 frame
process = new System.Diagnostics.Process();
//process.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
process.StartInfo.FileName = @"D:\pipetest\pipetest\ffmpegx86\ffmpeg.exe";
process.EnableRaisingEvents = false;
process.StartInfo.WorkingDirectory = @"D:\pipetest\pipetest\ffmpegx86";
process.StartInfo.Arguments = @"-f rawvideo -pix_fmt bgr0 -video_size 1920x1080 -i \\.\pipe\mytestpipe -map 0 -c:v libx264 -r " + BitmapRate + " " + FileName;
process.Start();
process.StartInfo.UseShellExecute = false;
process.StartInfo.CreateNoWindow = false;
//ar = p.BeginWaitForConnection(EndWait,null);
p.WaitForConnection();
}
public void PushFrame(Bitmap bmp)
{
int length;
// Lock the bitmap's bits.
//bmp = new Bitmap(1920, 1080);
Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
//Rectangle rect = new Rectangle(0, 0, 1280, 720);
System.Drawing.Imaging.BitmapData bmpData =
bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadOnly,
bmp.PixelFormat);
int absStride = Math.Abs(bmpData.Stride);
// Get the address of the first line.
IntPtr ptr = bmpData.Scan0;
// Declare an array to hold the bytes of the bitmap.
//length = 3 * bmp.Width * bmp.Height;
length = absStride * bmpData.Height;
byte[] rgbValues = new byte[length];
//Marshal.Copy(ptr, rgbValues, 0, length);
int j = bmp.Height - 1;
for (int i = 0; i < bmp.Height; i++)
{
IntPtr pointer = new IntPtr(bmpData.Scan0.ToInt32() + (bmpData.Stride * j));
System.Runtime.InteropServices.Marshal.Copy(pointer, rgbValues, absStride * (bmp.Height - i - 1), absStride);
j--;
}
p.Write(rgbValues, 0, length);
bmp.UnlockBits(bmpData);The ffmpeg command line is :
process.StartInfo.Arguments = @"-f rawvideo -pix_fmt bgr0 -video_size 1920x1080 -i \\.\pipe\mytestpipe -map 0 -c:v libx264 -r " + BitmapRate + " " + FileName;
I checked with a breakpoint and saw that the variable BitmapRate is 25.
The timer interval is 40 ms, so it should correspond to 25 frames per second.
But when I play the video file with MPC (Media Player Classic), everything in the video runs very fast. The file is 2,326 KB and the video length is 16 seconds, and everything in it seems to move very fast: if I opened a new window or dragged a window somewhere on the screen, it moves far faster than it did when I actually recorded it.
The video file details report: length 16 seconds, 1920x1080, 25 frames per second.
So where is the problem? I can't figure it out.
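For reference, a sketch of the equivalent stand-alone command with the input rate written out explicitly: the rawvideo demuxer stamps piped frames at whatever its -framerate option says (25 fps by default), so if the timer actually delivers frames more slowly than that, the encoded video plays back sped up. The 25 below is only the intended rate from the question and would need to match the real capture rate:
./ffmpeg -f rawvideo -pix_fmt bgr0 -video_size 1920x1080 -framerate 25 -i \\.\pipe\mytestpipe -map 0 -c:v libx264 -r 25 test.avi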
-
Video decoding using ffms2 (ffmpegsource)
21 June 2013, by praks411
I'm using ffms2 (aka FFmpegSource) to decode video frames and display them in a wxWidgets-based UI.
My player works fine for low-resolution video (320x240, 640x480), but for higher resolutions (1080) it is very slow; I can't reach the desired frame rate for high-resolution video.
After timing analysis I found that the FFMS_GetFrame() function takes much longer for high-resolution frames.
Here are the results:
1. 320x240: FFMS_GetFrame takes 4-6 ms
2. 640x480: FFMS_GetFrame takes >20 ms
3. 1080x720: FFMS_GetFrame takes >40 ms
Which means I'll never meet the 30 fps requirement for 1080p frames with FFMS2, but I'm not sure that this is really the case.
Please suggest what could be going wrong.
void SetPosition(int64 pos)
{
uint8_t* data_ptr = NULL;
/*check if position is valid*/
if (!m_track || pos < 0 && pos > m_videoProp->NumFrames - 1)
return; // ERR_POS;
wxMilliClock_t start_wx_t = wxGetLocalTimeMillis();
long long start_t = start_wx_t.GetValue();
m_frameId = pos;
if(m_video)
{
m_frameProp = FFMS_GetFrame(m_video, m_frameId, &m_errInfo);
if(!m_frameProp) return;
if(m_frameProp)
{
m_width_ffms2 = m_frameProp->EncodedWidth;
m_height_ffms2 = m_frameProp->EncodedHeight;
}
wxMilliClock_t end_wx_t = wxGetLocalTimeMillis();
long long end_t = end_wx_t.GetValue();
long long diff_t = end_t - start_t;
wxLogDebug(wxString(wxT("Frame Grabe Millisec") + ToString(diff_t)));
//m_frameInfo = FFMS_GetFrameInfo(m_track, FFMS_TYPE_VIDEO);
/* If you want to change the output colorspace or resize the output frame size, now is the time to do it.
IMPORTANT: This step is also required to prevent resolution and colorspace changes midstream. You can
always tell a frame's original properties by examining the Encoded properties in FFMS_Frame. */
/* A -1 terminated list of the acceptable output formats (see pixfmt.h for the list of pixel formats/colorspaces).
To get the name of a given pixel format, strip the leading PIX_FMT_ and convert to lowercase. For example,
PIX_FMT_YUV420P becomes "yuv420p". */
#if 0
int pixfmt[2];
pixfmt[0] = FFMS_GetPixFmt("bgr24");
pixfmt[1] = -1;
#endif
// FFMS_SetOutputFormatV2 returns 0 on success. It Returns non-0 and sets ErrorMsg on failure.
int failure = FFMS_SetOutputFormatV2(m_video, pixfmt, m_width_ffms2, m_height_ffms2, FFMS_RESIZER_BICUBIC, &m_errInfo);
if (failure)
{
//FFMS_DestroyVideoSource(m_video);
//m_video = NULL;
return; //return ERR_POS;
}
data_ptr = m_frameProp->Data[0];
}
else
{
m_width_ffms2 = 320;
m_height_ffms2 = 240;
}
if(data_ptr)
{
memcpy(m_buf, data_ptr, 3*m_height_ffms2 * m_width_ffms2);
}
else
{
memset(m_buf, 0, 3*m_height_ffms2 * m_width_ffms2);
}
}
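For what it's worth, a minimal C++ sketch of how the FFMS2 calls are often split so that the per-frame path only decodes: FFMS_SetOutputFormatV2 is called once up front rather than inside every seek, so FFMS_GetFrame does not renegotiate the colorspace conversion on each call. The function names SetupOutput and GrabFrame are illustrative, not part of the original player:
#include <cstring>
#include <ffms.h>
// One-time setup: ask FFMS2 for packed BGR24 at the encoded size.
// Doing this once keeps FFMS_GetFrame from setting up a new conversion
// on every call; afterwards it only decodes and converts.
bool SetupOutput(FFMS_VideoSource *video, FFMS_ErrorInfo *err)
{
    const FFMS_Frame *first = FFMS_GetFrame(video, 0, err);
    if (!first)
        return false;
    int pixfmts[2];
    pixfmts[0] = FFMS_GetPixFmt("bgr24"); // format the UI blits
    pixfmts[1] = -1;                      // -1-terminated list
    return FFMS_SetOutputFormatV2(video, pixfmts,
                                  first->EncodedWidth, first->EncodedHeight,
                                  FFMS_RESIZER_BICUBIC, err) == 0;
}
// Per-frame path: decode and copy only. buf is assumed to hold at least
// width * height * 3 bytes of packed BGR24.
bool GrabFrame(FFMS_VideoSource *video, int frameId,
               unsigned char *buf, FFMS_ErrorInfo *err)
{
    const FFMS_Frame *frame = FFMS_GetFrame(video, frameId, err);
    if (!frame)
        return false;
    int width  = frame->ScaledWidth  > 0 ? frame->ScaledWidth  : frame->EncodedWidth;
    int height = frame->ScaledHeight > 0 ? frame->ScaledHeight : frame->EncodedHeight;
    // Copy row by row in case the decoded stride is padded.
    for (int y = 0; y < height; ++y)
        std::memcpy(buf + y * width * 3,
                    frame->Data[0] + y * frame->Linesize[0],
                    width * 3);
    return true;
}
If the UI does not need the full resolution, passing a smaller width and height to the same FFMS_SetOutputFormatV2 call moves the downscaling into FFMS2, which tends to shrink the per-frame conversion cost.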
-
ffmpeg commands to concatenate videos of different types and resolutions into one video that can be played on Android
26 October 2015, by Aalap
I want to concatenate 4 different videos, of 4 different resolutions and types, into 1 video that can be played on Android. I am using ffmpeg ported to Android via https://github.com/guardianproject/android-ffmpeg
So I have these 4 different types of videos:
1) ./ffmpeg -i 1.mp4
Video: h264 (High), yuv420p, 1920x1080, 16959 kb/s, 29.85 fps, 90k tbr, 90k tbn, 180k tbc
Audio: aac, 48000 Hz, stereo, s16, 106 kb/s
2) ./ffmpeg -i 2.mp4
Video: h264 (Constrained Baseline), yuv420p, 640x480, 3102 kb/s, 29.99 fps, 90k tbr, 90k tbn, 180k tbc
Audio: aac, 48000 Hz, stereo, s16, 93 kb/s
3) ./ffmpeg -i 3.3gp
Video: h263, yuv420p, 1408x1152 [PAR 12:11 DAR 4:3], 2920 kb/s, 15 fps, 15 tbr, 15360 tbn, 29.97 tbc
Audio: amrnb, 8000 Hz, 1 channels, flt, 12 kb/s
4) ./ffmpeg -i 4.3gp
Video: h264 (High), yuv420p, 352x288 [PAR 12:11 DAR 4:3], 216 kb/s, 24 fps, 24 tbr, 24 tbn, 48 tbc
Audio: aac, 44100 Hz, stereo, s16, 92 kb/s
So I am converting them to MPEG-TS using the following commands:
./ffmpeg -i 1.mp4 -c:v libx264 -vf scale=1920:1080 -r 60 -c:a aac -ar 48000 -b:a 160k -strict experimental -f mpegts 1.ts
./ffmpeg -i 2.mp4 -c:v libx264 -vf scale=1920:1080 -r 60 -c:a aac -ar 48000 -b:a 160k -strict experimental -f mpegts 2.ts
./ffmpeg -i 3.3gp -c:v libx264 -vf scale=1920:1080 -r 60 -c:a aac -ar 48000 -b:a 160k -strict experimental -f mpegts 3.ts
./ffmpeg -i 4.3gp -c:v libx264 -vf scale=1920:1080 -r 60 -c:a aac -ar 48000 -b:a 160k -strict experimental -f mpegts 4.ts
Then I concatenate the .ts files into f.ts and create a final .mp4 file from it using:
cat 1.ts 2.ts 3.ts 4.ts > f.ts
./ffmpeg -i f.ts -c copy -bsf:a aac_adtstoasc output.mp4
But my f.ts does not seem to play correctly in VLC on Linux: it plays the first two mp4s' video + audio and then only the last .3gp's audio (the same goes for output.mp4). Could you please help me figure out what I am missing?
Thanks in advance.
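For comparison, a sketch of the same join step done with ffmpeg's concat demuxer instead of cat; it reads a small list file and offsets the timestamps of each segment so it follows on from the previous one. The list file name list.txt is just an example, and this assumes the four .ts files come from the conversion commands above:
file '1.ts'
file '2.ts'
file '3.ts'
file '4.ts'
With that saved as list.txt, the final file is then produced with:
./ffmpeg -f concat -i list.txt -c copy -bsf:a aac_adtstoasc output.mp4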