
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (64)
-
Keeping control of your media in your hands
13 April 2011. The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
Participating in its translation
10 April 2011. You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
At the moment MediaSPIP is only available in French and (...) -
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
On other sites (11541)
-
Converting a .3gp file into an .mp4 file in Android using ffmpeg
8 August 2013, by user2171513. I want to convert a .3gp file into an .mp4 file, with the resolution changed, on Android using ffmpeg.
I want to increase the resolution of the video from its original resolution to 1920x1080. So far I have been successful in:
1) extracting the .h264 video stream from the .3gp file and increasing its resolution
2) extracting the .aac audio stream from the .3gp file. Now I want to combine them back into an .mp4 file. The commands that I have used to extract the .h264 and .aac streams are:
./ffmpeg -i 1.3gp -vbsf h264_mp4toannexb -s 1920x1080 1.h264
./ffmpeg -i 1.3gp -ab 160k -ac 2 -ar 48000 -vn -strict -2 1.aac
The command that I have tried to merge them back is:
./ffmpeg -i 1.h264 -i 1.aac -map 0:0 -map 1:0 -strict -2 1.mp4
The 1.mp4 that gets generated basically has audio only at a few sync frames of the video. (That is my impression, because the audio is present only at specific intervals within the video.)
Can anyone please help me figure out what I am missing here?
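A minimal remux sketch (not part of the original post): a raw Annex B .h264 stream carries no container timestamps, so ffmpeg needs to be told the frame rate when muxing it back, otherwise audio/video alignment can drift in the way described. The 30 fps value below is an assumption; the real rate can be read from ffmpeg -i 1.3gp.
# assumes 1.h264 and 1.aac were produced by the extraction commands above; 30 fps is a guess
./ffmpeg -r 30 -i 1.h264 -i 1.aac -map 0:0 -map 1:0 -c:v copy -c:a copy -bsf:a aac_adtstoasc 1.mp4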
EDIT:
So basically I want to concatenate 4 different videos of 4 different resolutions and types.
1)
./ffmpeg -i 1.mp4
Video: h264 (High), yuv420p, 1920x1080, 16959 kb/s, 29.85 fps, 90k tbr, 90k tbn, 180k tbc
Audio: aac, 48000 Hz, stereo, s16, 106 kb/s
2)
ffmpeg -i 2.mp4
Video: h264 (Constrained Baseline), yuv420p, 640x480, 3102 kb/s, 29.99 fps, 90k tbr, 90k tbn, 180k tbc
Audio: aac, 48000 Hz, stereo, s16, 93 kb/s
3)
ffmpeg -i 3.3gp
Video: h263, yuv420p, 1408x1152 [PAR 12:11 DAR 4:3], 2920 kb/s, 15 fps, 15 tbr, 15360 tbn, 29.97 tbc
Audio: amrnb, 8000 Hz, 1 channels, flt, 12 kb/s
4)
ffmpeg -i 4.3gp
Video: h264 (High), yuv420p, 352x288 [PAR 12:11 DAR 4:3], 216 kb/s, 24 fps, 24 tbr, 24 tbn, 48 tbc
Audio: aac, 44100 Hz, stereo, s16, 92 kb/s
So I am converting them to mpegts using the following commands:
./ffmpeg -i 1.mp4 -c:v libx264 -vf scale=1920:1080 -r 60 -c:a aac -ar 48000 -b:a 160k -strict experimental -f mpegts 1.ts
./ffmpeg -i 2.mp4 -c:v libx264 -vf scale=1920:1080 -r 60 -c:a aac -ar 48000 -b:a 160k -strict experimental -f mpegts 2.ts
./ffmpeg -i 3.3gp -c:v libx264 -vf scale=1920:1080 -r 60 -c:a aac -ar 48000 -b:a 160k -strict experimental -f mpegts 3.ts
./ffmpeg -i 4.3gp -c:v libx264 -vf scale=1920:1080 -r 60 -c:a aac -ar 48000 -b:a 160k -strict experimental -f mpegts 4.ts
Then I concatenate the .ts files into f.ts and create a final .mp4 file from it using:
cat 1.ts 2.ts 3.ts 4.ts > f.ts
./ffmpeg -i f.ts -c copy -bsf:a aac_adtstoasc output.mp4
But my f.ts does not seem to play correctly in VLC on Linux: it plays the video + audio of the first two mp4s, and only the audio of the last .3gp. (The same goes for output.mp4.) Could you please help me figure out what I am missing?
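A minimal alternative sketch, assuming the four .ts segments produced above share the same codecs and parameters (they should, after the identical re-encoding commands): ffmpeg's concat demuxer joins the segments with consistent timestamps instead of a raw byte-level cat, which is one common way to avoid this kind of partial-playback problem.
# list.txt, one segment per line in playback order
file '1.ts'
file '2.ts'
file '3.ts'
file '4.ts'
# join without re-encoding, then fix the ADTS AAC headers for the MP4 container
./ffmpeg -f concat -i list.txt -c copy -bsf:a aac_adtstoasc output.mp4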
Thanks
-
Select frame size for video with logo in FFmpeg
24 June 2016, by Ahmad Ali Mukashaty. I'm using this command to stream video with a 1920x1080 frame size:
ffmpeg -re -i test.mp4 -vf scale=1920*1080 -f mpegts udp://127.0.0.1:port
but when I want to stream images along with the video, like this:
ffmpeg -re -i test.mp4 -vf scale=1920*1080 -i logo.png -ignore_loop 0 -i
test6.gif -filter_complex "[0][1]overlay=10:10[a];[a][2]overlay=90:90" -f
mpegts udp://127.0.0.1:port
the command line displays this error here. How can I choose the frame size when I stream video together with images?
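A possible fix, sketched without seeing the linked error text, so this is only a guess: ffmpeg does not allow a simple -vf filter and a -filter_complex graph on the same video stream, so the scale step can be folded into the complex graph instead. Input names and the port placeholder are taken from the command above.
ffmpeg -re -i test.mp4 -i logo.png -ignore_loop 0 -i test6.gif \
  -filter_complex "[0:v]scale=1920:1080[base];[base][1]overlay=10:10[a];[a][2]overlay=90:90" \
  -f mpegts udp://127.0.0.1:port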
-
Video decoding using ffms2 (ffmpegsource)
21 June 2013, by praks411. I'm using ffms2 (aka FFmpegSource) to decode video frames and display them in a UI based on wxWidgets.
My player works fine for low-resolution video (320*240, 640*480), but for higher resolutions (1080p) it is very slow. I'm not able to reach the desired frame rate for high-resolution video.
After timing analysis I found that the FFMS_GetFrame() function takes much longer for high-resolution frames.
Here are the results:
1. 320*240: FFMS_GetFrame takes 4-6 ms
2. 640*480: FFMS_GetFrame takes >20 ms
3. 1080*720: FFMS_GetFrame takes >40 ms
Which means that I'll never meet the 30 fps requirement for 1080p frames with FFMS2. But I'm not sure if this is the case.
Please suggest what could be going wrong.

void SetPosition(int64 pos)
{
    uint8_t* data_ptr = NULL;

    /* Check that the requested position is valid. */
    if (!m_track || pos < 0 || pos > m_videoProp->NumFrames - 1)
        return; // ERR_POS;

    wxMilliClock_t start_wx_t = wxGetLocalTimeMillis();
    long long start_t = start_wx_t.GetValue();
    m_frameId = pos;

    if (m_video)
    {
        m_frameProp = FFMS_GetFrame(m_video, m_frameId, &m_errInfo);
        if (!m_frameProp)
            return;

        m_width_ffms2 = m_frameProp->EncodedWidth;
        m_height_ffms2 = m_frameProp->EncodedHeight;

        wxMilliClock_t end_wx_t = wxGetLocalTimeMillis();
        long long end_t = end_wx_t.GetValue();
        long long diff_t = end_t - start_t;
        wxLogDebug(wxString(wxT("Frame grab milliseconds: ") + ToString(diff_t)));
        //m_frameInfo = FFMS_GetFrameInfo(m_track, FFMS_TYPE_VIDEO);

        /* If you want to change the output colorspace or resize the output frame size, now is the time to do it.
           IMPORTANT: This step is also required to prevent resolution and colorspace changes midstream. You can
           always tell a frame's original properties by examining the Encoded properties in FFMS_Frame. */

        /* A -1 terminated list of the acceptable output formats (see pixfmt.h for the list of pixel formats/colorspaces).
           To get the name of a given pixel format, strip the leading PIX_FMT_ and convert to lowercase. For example,
           PIX_FMT_YUV420P becomes "yuv420p". */
        int pixfmt[2];
        pixfmt[0] = FFMS_GetPixFmt("bgr24");
        pixfmt[1] = -1;

        // FFMS_SetOutputFormatV2 returns 0 on success. It returns non-0 and sets ErrorMsg on failure.
        int failure = FFMS_SetOutputFormatV2(m_video, pixfmt, m_width_ffms2, m_height_ffms2, FFMS_RESIZER_BICUBIC, &m_errInfo);
        if (failure)
        {
            //FFMS_DestroyVideoSource(m_video);
            //m_video = NULL;
            return; // ERR_POS;
        }

        data_ptr = m_frameProp->Data[0];
    }
    else
    {
        m_width_ffms2 = 320;
        m_height_ffms2 = 240;
    }

    if (data_ptr)
    {
        memcpy(m_buf, data_ptr, 3 * m_height_ffms2 * m_width_ffms2);
    }
    else
    {
        memset(m_buf, 0, 3 * m_height_ffms2 * m_width_ffms2);
    }
}
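One detail worth sketching separately: FFMS_SetOutputFormatV2 configures the colorspace conversion and bicubic resize, and the format it sets persists for subsequent FFMS_GetFrame calls, so it only needs to be called once after the video source is created rather than on every SetPosition. A minimal sketch of that split, using the same FFMS2 calls as above; the helper names are hypothetical and the fixed bgr24 output size is an assumption carried over from the snippet.

#include <ffms.h>
#include <cstdint>
#include <cstring>

// Hypothetical one-time setup: ask FFMS2 to return every frame as packed BGR24
// at a fixed size. Called once after FFMS_CreateVideoSource, not per frame.
bool InitOutputFormat(FFMS_VideoSource* video, int width, int height, FFMS_ErrorInfo* err)
{
    int pixfmts[2];
    pixfmts[0] = FFMS_GetPixFmt("bgr24");
    pixfmts[1] = -1; // the list must be -1 terminated
    return FFMS_SetOutputFormatV2(video, pixfmts, width, height,
                                  FFMS_RESIZER_BICUBIC, err) == 0;
}

// Hypothetical per-frame path: only decode and copy, no format setup.
bool GrabFrame(FFMS_VideoSource* video, int frameId, uint8_t* dst,
               int width, int height, FFMS_ErrorInfo* err)
{
    const FFMS_Frame* frame = FFMS_GetFrame(video, frameId, err);
    if (!frame)
        return false;
    // Copy row by row so a padded Linesize[0] does not corrupt the destination.
    for (int y = 0; y < height; ++y)
        std::memcpy(dst + y * width * 3,
                    frame->Data[0] + y * frame->Linesize[0],
                    width * 3);
    return true;
}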