
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (61)
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer flash fallback is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and audio both to conventional computers (...)
-
List of compatible distributions
26 April 2011, by
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution name    Version name             Version number
Debian               Squeeze                  6.x.x
Debian               Wheezy                   7.x.x
Debian               Jessie                   8.x.x
Ubuntu               The Precise Pangolin     12.04 LTS
Ubuntu               The Trusty Tahr          14.04
If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send the fixes needed to add (...)
On other sites (8375)
-
Setting bit rates when creating a video from images in ffmpeg not working
2 May 2014, by mast kalandar
I have an HQ video of one second.
Some information about this video is below:
Dimensions: 1920 x 1080
Codec: H.264
Framerate: 30 frames per second
Size: 684.7 kB (684,673 bytes)
Bit rate: 5458 kbps
I have extracted the frames from the video:
ffmpeg -i f1.mp4 f%d.jpg
All images are 1920 x 1020 pixels; by default 30 frames are generated (f7_1.jpg, f7_2.jpg, ..., f7_30.jpg).
I have added some text and objects to these images (without changing the dimensions of any image; all 30 images are still 1920 x 1020 pixels).
Now I am trying to merge all these images to create a single video (of 1 second).
I referred to this official document and ran the command below:
ffmpeg -f image2 -i f7_%d.jpg -r 30 -b:v 5458k foo_5458_2.mp4
The created video is also one second long; the thing is that its bit rate is higher than the original's. The new video has a bit rate of 6091 kbps, while I expected only 5458 kbps.
Because of the higher bit rate, it finishes much more quickly than the original video in the video player.
Is there anything I am missing?
Also, I don't know the exact meaning and job of the
-f image2
option; when I run the command without this option, I get the same video.
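A side note on the two points above: -f image2 forces the image-sequence demuxer, which ffmpeg already infers from the %d filename pattern, so omitting it gives the same result. As for the bit rate, single-pass encoding with -b:v only approximates the target; one possible direction, sketched here under the assumption that libx264 is the encoder in use (the output name foo_5458_2pass.mp4 is just an example), is two-pass encoding with the input rate stated explicitly, since the image2 demuxer otherwise assumes 25 fps:
# Untested sketch: pass 1 gathers statistics, pass 2 encodes towards the target bit rate.
ffmpeg -framerate 30 -f image2 -i f7_%d.jpg -c:v libx264 -b:v 5458k -pass 1 -an -f null /dev/null
ffmpeg -framerate 30 -f image2 -i f7_%d.jpg -c:v libx264 -b:v 5458k -pass 2 -an foo_5458_2pass.mp4
Even with two passes the final figure rarely lands exactly on 5458 kbps, so a deviation such as 6091 kbps is not by itself a sign that something is wrong.
-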
libx264 encoder video plays too fast
23 April 2014, by Nick
I'm trying to write a program that uses libx264 to encode video frames. I've wrapped this code in a small class (see below). I have frames in YUV420 format. libx264 encodes the frames and I save them to a file. I can play the file back in VLC and all of the frames are there, but it plays back at several hundred times the actual frame rate. Currently I am capturing frames at 2.5 FPS, but they play back as if they had been recorded at 250 FPS or more. I've tried to change the frame rate with no luck.
I’ve also tried to set
_param.b_vfr_input = 1
and then set the time bases appropriately, but that causes my program to crash. Any ideas? My encoding code is shown below, followed by the output of ffprobe -show_frames.
Wrapper class:
x264wrapper::x264wrapper(int width, int height, int fps, int timeBaseNum, int timeBaseDen, int vfr)
{
x264_param_default_preset(&_param, "veryfast", "zerolatency");
_param.i_threads = 1;
_param.i_width = width;
_param.i_height = height;
_param.i_fps_num = fps;
_param.i_fps_den = 1;
// Intra refresh:
_param.i_keyint_max = fps;
_param.b_intra_refresh = 1;
//Rate control:
_param.rc.i_rc_method = X264_RC_CRF;
//_param.rc.i_rc_method = X264_RC_CQP;
_param.rc.f_rf_constant = 25;
_param.rc.f_rf_constant_max = 35;
//For streaming:
_param.b_repeat_headers = 1;
_param.b_annexb = 1;
// misc
_param.b_vfr_input = vfr;
_param.i_timebase_num = timeBaseNum;
_param.i_timebase_den = timeBaseDen;
_param.i_log_level = X264_LOG_DEBUG;
_encoder = x264_encoder_open(&_param);
cout << "Timebase " << _param.i_timebase_num << "/" << _param.i_timebase_den << endl;
cout << "fps " << _param.i_fps_num << "/" << _param.i_fps_den << endl;
_ticks_per_frame = (int64_t)_param.i_timebase_den * _param.i_fps_den / _param.i_timebase_num / _param.i_fps_num;
cout << "ticks_per_frame " << _ticks_per_frame << endl;
int result = x264_picture_alloc(&_pic_in, X264_CSP_I420, width, height);
if (result != 0)
{
cout << "Failed to allocate picture" << endl;
throw(1);
}
_ofs = new ofstream("output.h264", ofstream::out | ofstream::binary);
_pts = 0;
}
x264wrapper::~x264wrapper(void)
{
_ofs->close();
}
void x264wrapper::encode(uint8_t * buf)
{
x264_nal_t* nals;
int i_nals;
convertFromBalserToX264(buf);
_pts += _ticks_per_frame;
_pic_in.i_pts = _pts;
x264_picture_t pic_out;
int frame_size = x264_encoder_encode(_encoder, &nals, &i_nals, &_pic_in, &pic_out);
if (frame_size >= 0)
{
_ofs->write((char*)nals[0].p_payload, frame_size);
}
else
{
cout << "error: x264_encoder_encode failed" << endl;
}
}
Output of ffprobe -show_frames:
[FRAME]
media_type=video
key_frame=1
pkt_pts=N/A
pkt_pts_time=N/A
pkt_dts=N/A
pkt_dts_time=N/A
pkt_duration=48000
pkt_duration_time=0.040000
pkt_pos=0
width=1920
height=1080
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=I
coded_picture_number=0
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
reference=0
[/FRAME]
[FRAME]
media_type=video
key_frame=0
pkt_pts=N/A
pkt_pts_time=N/A
pkt_dts=N/A
pkt_dts_time=N/A
pkt_duration=N/A
pkt_duration_time=N/A
pkt_pos=54947
width=1920
height=1080
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=P
coded_picture_number=1
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
reference=0
[/FRAME]
[FRAME]
media_type=video
key_frame=0
pkt_pts=N/A
pkt_pts_time=N/A
pkt_dts=N/A
pkt_dts_time=N/A
pkt_duration=N/A
pkt_duration_time=N/A
pkt_pos=57899
width=1920
height=1080
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=P
coded_picture_number=2
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
reference=0
[/FRAME]
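One thing worth noting: a bare Annex B .h264 dump such as output.h264 carries no container timestamps, so many players fall back on a nominal frame rate when playing it directly. A quick way to test whether the encoder timing is really at fault, sketched here without having been verified against this exact stream (the output name output_wrapped.mp4 is just an example), is to mux the raw stream into a container while stating the capture rate explicitly:
# Untested sketch: wrap the raw elementary stream in MP4 at the intended 2.5 fps without re-encoding.
ffmpeg -framerate 2.5 -i output.h264 -c copy output_wrapped.mp4
If the wrapped file plays at the expected speed, the encoded stream itself is fine and only the missing container timing explains the fast playback.
-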
How do I keep black areas black with ffmpeg?
11 January 2018, by Aurelius Schnitzler
When encoding GoPro videos with ffmpeg using
ffmpeg -i Goprovideo.mp4 -pix_fmt yuv420p -vf scale=1920:-1,crop=1920:1080:0:362 Goprovideo-out.mp4
I noticed that the videos not only get much smaller, but in some cases black areas also lose intensity, so what was black is now grey. Not to an extreme, but it is slightly noticeable.
How do I keep black areas black with ffmpeg?
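One frequent cause of washed-out blacks after re-encoding is a full-range versus limited-range mismatch: GoPro footage is often flagged as full range, while players assume limited range unless the output says otherwise. A sketch of how to make the range explicit in both the scale filter and the stream metadata, under the assumption that the source really is full range (if it is limited range, the same idea applies with out_range=limited and -color_range tv):
# Untested sketch: keep full-range levels through the scaler and tag the output accordingly.
ffmpeg -i Goprovideo.mp4 -vf "scale=1920:-1:in_range=auto:out_range=full,crop=1920:1080:0:362" -pix_fmt yuv420p -color_range pc Goprovideo-out.mp4
Checking the source first with ffprobe (the color_range field in the stream info) shows which case applies.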