
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (101)
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows media playback on major mobile platforms with the above (...)
-
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, allowing it to spread to new linguistic communities.
To do this, we use the SPIP translation interface, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
-
Prerequisites for installation
31 January 2010, by
Preamble
This article is not meant to detail how to install these programs, but rather to give information about their specific configuration.
First of all, SPIPMotion, like MediaSPIP, is designed to run on Debian-type Linux distributions or derivatives (Ubuntu...). The documentation on this site therefore refers to these distributions. It can also be used on other Linux distributions, but correct operation cannot be guaranteed.
It (...)
On other sites (6660)
-
How to play sequentially received video clips
21 November 2017, by Ye Li
I'm working on a prototype project in which some small video clips are sent sequentially over a network. These clips are chunked (using ffmpeg) from a complete video file, and each clip is separately playable (it contains exactly one GOP). At the destination, I need to play the sequentially received files smoothly, as if I were streaming the video, i.e. play the files one by one at the receiver side in a single process. Is this doable? And if so, what is the best practice for this task?
The network transmission/receiving parts are written in Python, but I'm open to using other languages/tools for the task.
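One way to approach this, as a minimal sketch (assuming a POSIX system, that ffplay is installed on the receiver, and that the clips use a container that tolerates plain byte concatenation, such as MPEG-TS; plain MP4 fragments would need remuxing first), is to keep a single player process running and append each received clip to its stdin:

#include <stdio.h>

/* Feed received clip files, in order, into one long-running ffplay process
 * so the player sees a single continuous stream. */
int play_clips(const char *paths[], int count)
{
    /* -fflags nobuffer trims player-side buffering; drop it if unsupported. */
    FILE *player = popen("ffplay -fflags nobuffer -i -", "w");
    if (!player)
        return -1;

    char buf[4096];
    for (int i = 0; i < count; i++) {
        FILE *clip = fopen(paths[i], "rb");
        if (!clip)
            continue;
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, clip)) > 0)
            fwrite(buf, 1, n, player);
        fclose(clip);
    }
    return pclose(player);
}

The same idea works with the network receiver writing directly into the pipe instead of going through intermediate files, which avoids any gap between clips.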
-
X264 Encoder API
16 April 2014, by user1884325
I'm studying the X264 API for encoding images.
So far I’ve built the X264 library and the following code snippet shows how far I am :
int frame_size;
x264_t* encoder;
x264_picture_t pic_in, pic_out;
x264_param_t x264Param;
int fps = 20;
int width = 1280;
int height = 720;
x264_nal_t* nals;
int i_nals;
x264_param_default_preset(&x264Param, "veryfast", "zerolatency");
x264Param.i_threads = 1;
x264Param.i_width = 1280;
x264Param.i_height = 720;
x264Param.i_fps_num = fps;
x264Param.i_fps_den = 1;
x264Param.i_keyint_max = fps;
x264Param.b_intra_refresh = 1;
x264Param.rc.i_rc_method = X264_RC_CRF;
x264Param.rc.f_rf_constant = 25;
x264Param.rc.f_rf_constant_max = 35;
x264Param.b_repeat_headers = 1;
x264Param.b_annexb = 1;
x264_param_apply_profile(&x264Param, "baseline");
encoder = x264_encoder_open(&x264Param);
x264_picture_alloc(&pic_in, X264_CSP_BGR, width, height);
/* How to fill in bitmap data? */
frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);
if (frame_size >= 0)
{
printf("OK\n");
}

So I'm trying to encode a 24-bit BGR bitmap image. However, the x264 header file doesn't show any API function for writing the bitmap image to the encoder. How is this done?
EDIT
This code snippet seems to work. I would appreciate a review and some comments. Thanks.
int frame_size;
int accum_frame_size;
x264_t* encoder;
x264_picture_t pic_in, pic_out;
x264_param_t x264Param;
int fps = 20;
int width = 1280;
int height = 720;
x264_nal_t* nals;
int i_nals;
int64_t frameCount = 0;
int k;
static uint8_t bgr[1280 * 720 * 3]; /* packed 24-bit BGR frame buffer */
for (k = 0; k < (1280*3*720); k++)
{
bgr[k] = rand();
}
x264_param_default_preset(&x264Param, "veryfast", "zerolatency");
x264Param.i_threads = 1;
x264Param.i_width = 1280;
x264Param.i_height = 720;
x264Param.i_fps_num = fps;
x264Param.i_fps_den = 1;
x264Param.i_keyint_max = fps;
x264Param.b_intra_refresh = 1;
x264Param.rc.i_rc_method = X264_RC_CRF;
x264Param.i_csp = X264_CSP_BGR;
x264Param.rc.f_rf_constant = 25;
x264Param.rc.f_rf_constant_max = 35;
x264Param.b_repeat_headers = 1;
x264Param.b_annexb = 1;
x264_param_apply_profile(&x264Param, "baseline");
encoder = x264_encoder_open(&x264Param);
x264_picture_alloc(&pic_in, X264_CSP_BGR, width, height);
/* Load 24-bit BGR bitmap */
pic_in.img.i_csp = X264_CSP_BGR;
pic_in.img.i_plane = 1;
pic_in.img.i_stride[0] = 3 * 1280;
pic_in.img.plane[0] = bgr;
pic_in.i_pts = frameCount;
pic_in.i_type = X264_TYPE_AUTO;
pic_out.i_pts = frameCount;
/* Returns a frame size of 912 for first frame in this case */
frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);
printf("Decoder returned frame size = %d \n", frame_size);
printf("Decoder returned %d NAL units \n", i_nals);
if (frame_size >= 0)
{
int i;
int j;
accum_frame_size = 0;
for (i = 0; i < i_nals; i++)
{
printf("******************* NAL %d (%d bytes) *******************\n", i, nals[i].i_payload);
for (j = 0; j < nals[i].i_payload; j++)
{
if (j == 0) printf("First 10 bytes: ");
if (j < 10) printf("%02X |", nals[i].p_payload[j]);
accum_frame_size++;
}
printf("\n");
}
}
printf("Verified frame size = %d \n", accum_frame_size);EDIT #2
The encoder outputs this:

x264 [error]: baseline profile doesn't support 4:4:4
x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
x264 [info]: profile High 4:4:4 Predictive, level 3.1, 4:4:4 8-bit
Decoder returned frame size = 1467194
Decoder returned 4 NAL units
******************* NAL 0 (31 bytes) *******************
First 10 bytes: 00 |00 |00 |01 |67 |F4 |00 |1F |91 |89 |
******************* NAL 1 (8 bytes) *******************
First 10 bytes: 00 |00 |00 |01 |68 |EF |1F |2C |
******************* NAL 2 (595 bytes) *******************
First 10 bytes: 00 |00 |01 |06 |05 |FF |FF |4C |DC |45 |
******************* NAL 3 (1466560 bytes) *******************
First 10 bytes: 00 |00 |01 |65 |88 |82 |0A |FF |F5 |B0 |
Verified frame size = 1467194

Isn't each NAL unit supposed to start with 0x00 0x00 0x00 0x01?
szatmary: I appreciate your valuable feedback. So you're saying that each NAL unit does not necessarily start with 0,0,0,1. However, I'm a bit unclear on your answer. Are you implying that with a certain configuration the NAL units will start with 0,0,0,1? If so, which configuration is that? I need to make sure that each NAL unit I transmit over the network to a remote receiver starts with 0,0,0,1. Prior to exploring the x264 library, I was using the x264 executable, piping BMP data in and encoded data out of the x264 process. I then parsed the encoder output and looked for NAL units by searching for 0,0,0,1. How do I accomplish the same with the x264 library?
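One reading of the dump above: with b_annexb = 1, the start code is already included in each p_payload, and both the 4-byte (00 00 00 01) and 3-byte (00 00 01) forms are legal Annex B start codes, which matches the SPS/PPS versus SEI/slice units shown. A minimal sketch (the helper write_nals_4byte is hypothetical, not from the original post) that walks the returned NAL array and normalizes every unit to a 4-byte start code before it goes on the wire:

#include <stdint.h>
#include <string.h>
#include <x264.h>

/* Re-emit every NAL with a uniform 4-byte start code.
 * x264 (with b_annexb = 1) already prepends a start code to p_payload,
 * so strip whatever is there and write 00 00 00 01 explicitly. */
static size_t write_nals_4byte(const x264_nal_t *nals, int i_nals,
                               uint8_t *out, size_t out_size)
{
    static const uint8_t start4[4] = { 0, 0, 0, 1 };
    size_t written = 0;

    for (int i = 0; i < i_nals; i++) {
        const uint8_t *p = nals[i].p_payload;
        int len = nals[i].i_payload;

        /* Skip the existing 3- or 4-byte start code, if present. */
        if (len >= 4 && p[0] == 0 && p[1] == 0 && p[2] == 0 && p[3] == 1) {
            p += 4; len -= 4;
        } else if (len >= 3 && p[0] == 0 && p[1] == 0 && p[2] == 1) {
            p += 3; len -= 3;
        }

        if (written + 4 + (size_t)len > out_size)
            return 0; /* output buffer too small */

        memcpy(out + written, start4, 4);
        memcpy(out + written + 4, p, len);
        written += 4 + (size_t)len;
    }
    return written;
}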
Regarding libswscale:
I downloaded the ffmpeg source and ran configure and make in MinGW. After the process completed, I couldn't find anything but a number of .exe files. How do I build actual static libraries (.lib) that I can use in a Visual Studio project?
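On the colour-space side of the problem above (the "4:4:4 with baseline" conflict comes from feeding BGR straight into the encoder), here is a minimal sketch, assuming libswscale is linked and the source frame is packed 24-bit BGR, of converting to I420 before encoding so that the baseline profile can apply (the helper name bgr_to_i420 is hypothetical):

#include <libswscale/swscale.h>
#include <libavutil/pixfmt.h>
#include <x264.h>

/* Convert one packed BGR24 frame into the I420 planes of an x264 picture.
 * pic must have been allocated with x264_picture_alloc(pic, X264_CSP_I420, ...). */
static int bgr_to_i420(const uint8_t *bgr, int width, int height,
                       x264_picture_t *pic)
{
    struct SwsContext *sws = sws_getContext(width, height, AV_PIX_FMT_BGR24,
                                            width, height, AV_PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws)
        return -1;

    const uint8_t *src[1]   = { bgr };
    const int src_stride[1] = { 3 * width };

    sws_scale(sws, src, src_stride, 0, height,
              pic->img.plane, pic->img.i_stride);
    sws_freeContext(sws);
    return 0;
}

Creating the SwsContext once and reusing it for every frame of the same resolution avoids the per-frame setup cost.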
-
OpenCV FFmpeg dquant error
17 September 2017, by Karan Maverick
I am using OpenCV to process video. When trying to process an RTSP video stream, I get what appears to be an FFmpeg-related error (dquant out of range).
Is there a way around this? For example, are there any settings/parameters in OpenCV's VideoCapture functionality that could be changed to deal with the problem, or perhaps a way to use a different receiver/decoder?
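A common cause of this kind of bitstream corruption is UDP packet loss, and a common workaround is to force the RTSP session over TCP. A minimal libavformat sketch, assuming the stream is opened with FFmpeg directly rather than through VideoCapture (the function open_rtsp_tcp is hypothetical, and whether this helps depends on where the corruption actually occurs):

#include <libavformat/avformat.h>
#include <libavutil/dict.h>

/* Open the RTSP stream over TCP instead of the default UDP transport.
 * With FFmpeg releases before 4.0, call av_register_all() once at startup. */
static AVFormatContext *open_rtsp_tcp(const char *url)
{
    AVFormatContext *fmt = NULL;
    AVDictionary *opts = NULL;

    av_dict_set(&opts, "rtsp_transport", "tcp", 0);

    if (avformat_open_input(&fmt, url, NULL, &opts) < 0)
        fmt = NULL;

    av_dict_free(&opts);
    return fmt;
}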