
Media (1)
-
Ogg detection bug
22 March 2013
Updated: April 2013
Language: French
Type: Video
Other articles (45)
-
Sites built with MediaSPIP
2 May 2011
This page presents a few of the sites running MediaSPIP.
You can of course add your own via the form at the bottom of the page.
-
Changing the graphic theme
22 February 2011
The graphic theme does not affect the actual layout of elements on the page; it only changes their appearance.
Placement can in fact be modified, but that modification is purely visual and does not change the semantic representation of the page.
Changing the graphic theme in use
To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
Then simply go to the configuration area of the (...)
-
Farm deployment option
12 April 2011
MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by many different sites.
This makes it possible, for example: to share setup costs between several projects or individuals; to deploy many unique sites quickly; and to avoid piling all creations into one digital catch-all, as is the case with the big general-public platforms scattered across the (...)
On other sites (4419)
-
LibVLC: Retrieving current frame number
2 April 2015, by Solidus
I am doing a project that involves a bit of video recording and editing, and I am struggling to find a good C++ library to use. I am using Qt as my framework, but its video player is not working properly for me (seeking sometimes crashes, for example). I also need to record video and audio from my camera, and QCamera does not support recording on Windows.
In my program the user can draw on top of the video, and I need to store the start frame and the end frame of those drawings.
Right now I have been testing LibVLC, which almost does what I want. From what I can see, though, there is no way to jump straight to a given frame; seeking can only be done by time or position.
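For reference, these are the only seeking calls I can find (a minimal sketch; player stands for the libvlc_media_player_t* in my code):

// LibVLC seeks by absolute time (in milliseconds) or by relative
// position (a fraction of the media length); there is no frame-based call.
libvlc_media_player_set_time(player, 1500);     /* jump to t = 1.5 s */
libvlc_media_player_set_position(player, 0.5f); /* jump to the middle */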
The first solution I came up with was to capture the time-changed event and then calculate the frame number from the FPS. The problem is that, as far as I can tell, this event only fires about every 250 ms, which for a 15 fps video is almost 4 frames.
So the second solution was to use libvlc_video_set_callbacks to supply my own "lock, unlock and display" callbacks and count the frames there. This works for recording from the camera, since there is no going back and the frames simply run from 0 until the video stops. The problem is playback: since there is no timestamp, as far as I can tell, there is no way for me to know which frame number I am at (the user can be seeking, for example). My "hacky" solution was to keep a lastTime and a numTimes field on the struct I pass into these callbacks, and do the following:
lastTime holds the last new time received and numTimes counts how many times that same time has been received in a row:

current_time = get_the_current_time();
frameNum = calculate_frame_num_with_fps(current_time);
if (current_time == lastTime) {
    frameNum += numTimes;  // same timestamp again: assume one more frame elapsed
    numTimes++;
} else {
    lastTime = current_time;
    numTimes = 1;
}

This kinda works, but I hate the solution. I am not sure whether the time value even changes when a seek moves it by less than 250 ms. That would probably be hard for a user to hit, but I would prefer not to implement it that way.
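For completeness, here is roughly how I wire the counting callbacks up (simplified; the RV32 format, the frame_ctx struct, and hook_player are placeholders of mine, not my exact code):

#include <stdint.h>
#include <vlc/vlc.h>

struct frame_ctx {
    void    *pixels;       // width * height * 4 bytes for RV32
    int64_t  frame_count;  // incremented once per displayed frame
};

static void *lock_cb(void *opaque, void **planes)
{
    struct frame_ctx *ctx = opaque;
    planes[0] = ctx->pixels;  // buffer libvlc decodes into
    return NULL;              // picture identifier, unused here
}

static void display_cb(void *opaque, void *picture)
{
    struct frame_ctx *ctx = opaque;
    ctx->frame_count++;       // one display call per rendered frame
}

static void hook_player(libvlc_media_player_t *player, struct frame_ctx *ctx,
                        unsigned width, unsigned height)
{
    // RV32 = packed 32-bit RGB; the last argument is the pitch in bytes
    libvlc_video_set_format(player, "RV32", width, height, width * 4);
    libvlc_video_set_callbacks(player, lock_cb, NULL, display_cb, ctx);
}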
So my question is: is there another solution for this? If not, are there any libraries that could help me here? I know about FFmpeg, which it seems would solve this problem, since it is more low-level and I could implement the counting myself. The problem is that my deadline is approaching and that would still take me some time (learning the library and doing all the work), so I was thinking of it as a last resort.
Thank you for your time.
-
How to convert an ffmpeg video frame to YUV444?
21 October 2019, by Edward Severinsen
I have been following a tutorial on how to use FFmpeg and SDL to make a simple video player with no audio (yet). While working through the tutorial I realized it was out of date, and many of the functions it used, for both FFmpeg and SDL, were deprecated. So I searched for an up-to-date solution and found a Stack Overflow answer that completed what the tutorial was missing.
However, it uses YUV420, which is of lower quality. I want to implement YUV444, and after studying chroma subsampling for a bit and looking at the different YUV formats, I am confused about how to implement it. From what I understand, YUV420 carries a quarter of the chroma information that YUV444 does: in YUV444 every pixel has its own chroma sample, so the image is more detailed, while in YUV420 pixels are grouped together and share a chroma sample, so it is less detailed.
And from what I understand, the different YUV formats (420, 422, 444) also differ in how they order the Y, U and V samples. All of this is a bit overwhelming because I have not done much with codecs, conversions, etc. Any help would be much appreciated, and if additional info is needed please let me know before downvoting.
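To make the size difference concrete (my own arithmetic, assuming 8-bit planar formats):

// Per-frame plane sizes for an 8-bit W x H video (planar formats):
//   YUV444P: Y = W*H, U = W*H,   V = W*H    -> 3.0 * W*H bytes total
//   YUV420P: Y = W*H, U = W*H/4, V = W*H/4  -> 1.5 * W*H bytes total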
Here is the code from the answer I mentioned concerning the conversion to YUV420:
texture = SDL_CreateTexture(
    renderer,
    SDL_PIXELFORMAT_YV12,
    SDL_TEXTUREACCESS_STREAMING,
    pCodecCtx->width,
    pCodecCtx->height
);
if (!texture) {
    fprintf(stderr, "SDL: could not create texture - exiting\n");
    exit(1);
}

// initialize SWS context for software scaling
sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
        pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
        AV_PIX_FMT_YUV420P,
        SWS_BILINEAR,
        NULL,
        NULL,
        NULL);

// set up YV12 pixel array (12 bits per pixel)
yPlaneSz = pCodecCtx->width * pCodecCtx->height;
uvPlaneSz = pCodecCtx->width * pCodecCtx->height / 4;
yPlane = (Uint8*)malloc(yPlaneSz);
uPlane = (Uint8*)malloc(uvPlaneSz);
vPlane = (Uint8*)malloc(uvPlaneSz);
if (!yPlane || !uPlane || !vPlane) {
    fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
    exit(1);
}

uvPitch = pCodecCtx->width / 2;
while (av_read_frame(pFormatCtx, &packet) >= 0) {
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream) {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        // Did we get a video frame?
        if (frameFinished) {
            AVPicture pict;
            pict.data[0] = yPlane;
            pict.data[1] = uPlane;
            pict.data[2] = vPlane;
            pict.linesize[0] = pCodecCtx->width;
            pict.linesize[1] = uvPitch;
            pict.linesize[2] = uvPitch;

            // Convert the image into YUV format that SDL uses
            sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                    pFrame->linesize, 0, pCodecCtx->height, pict.data,
                    pict.linesize);

            SDL_UpdateYUVTexture(
                texture,
                NULL,
                yPlane,
                pCodecCtx->width,
                uPlane,
                uvPitch,
                vPlane,
                uvPitch
            );

            SDL_RenderClear(renderer);
            SDL_RenderCopy(renderer, texture, NULL, NULL);
            SDL_RenderPresent(renderer);
        }
    }

    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
    SDL_PollEvent(&event);
    switch (event.type) {
    case SDL_QUIT:
        SDL_DestroyTexture(texture);
        SDL_DestroyRenderer(renderer);
        SDL_DestroyWindow(screen);
        SDL_Quit();
        exit(0);
        break;
    default:
        break;
    }
}

// Free the YUV frame
av_frame_free(&pFrame);
free(yPlane);
free(uPlane);
free(vPlane);

// Close the codec
avcodec_close(pCodecCtx);
avcodec_close(pCodecCtxOrig);

// Close the video file
avformat_close_input(&pFormatCtx);

EDIT:
After more research I learned that YUV420 is stored with all the Y bytes first, followed by the U and V bytes one after another, as illustrated by this image:
[planar YUV420 layout image omitted; source: wikimedia.org]
However, I also learned that YUV444 is stored in the order U, Y, V, repeating, as this picture shows:
[packed U-Y-V layout image omitted]
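In text form, the two layouts I mean are roughly this (my own sketch of the two images):

// Planar YUV420P, 4x2 frame: all Y samples first, then U, then V
//   Y Y Y Y
//   Y Y Y Y
//   U U        (one U per 2x2 block of pixels)
//   V V        (one V per 2x2 block of pixels)
//
// Packed 4:4:4 in U-Y-V order: one full triplet per pixel
//   U0 Y0 V0   U1 Y1 V1   U2 Y2 V2 ...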
I tried changing some things around in the code:

// I changed SDL_PIXELFORMAT_YV12 to SDL_PIXELFORMAT_UYVY
// to reflect the order of YUV444
texture = SDL_CreateTexture(
    renderer,
    SDL_PIXELFORMAT_UYVY,
    SDL_TEXTUREACCESS_STREAMING,
    pCodecCtx->width,
    pCodecCtx->height
);
if (!texture) {
    fprintf(stderr, "SDL: could not create texture - exiting\n");
    exit(1);
}

// Changed AV_PIX_FMT_YUV420P to AV_PIX_FMT_YUV444P
// for rather obvious reasons
sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
        pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
        AV_PIX_FMT_YUV444P,
        SWS_BILINEAR,
        NULL,
        NULL,
        NULL);

// There are as many Y, U and V bytes as pixels, so I just
// made yPlaneSz and uvPlaneSz equal to the number of pixels
yPlaneSz = pCodecCtx->width * pCodecCtx->height;
uvPlaneSz = pCodecCtx->width * pCodecCtx->height;
yPlane = (Uint8*)malloc(yPlaneSz);
uPlane = (Uint8*)malloc(uvPlaneSz);
vPlane = (Uint8*)malloc(uvPlaneSz);
if (!yPlane || !uPlane || !vPlane) {
    fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
    exit(1);
}

uvPitch = pCodecCtx->width * 2;
while (av_read_frame(pFormatCtx, &packet) >= 0) {
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream) {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        // Rearranged the order of the planes to reflect UYV order,
        // then set linesize to the number of Y, U and V bytes per row
        if (frameFinished) {
            AVPicture pict;
            pict.data[0] = uPlane;
            pict.data[1] = yPlane;
            pict.data[2] = vPlane;
            pict.linesize[0] = pCodecCtx->width;
            pict.linesize[1] = pCodecCtx->width;
            pict.linesize[2] = pCodecCtx->width;

            // Convert the image into YUV format that SDL uses
            sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                    pFrame->linesize, 0, pCodecCtx->height, pict.data,
                    pict.linesize);

            SDL_UpdateYUVTexture(
                texture,
                NULL,
                yPlane,
                1,
                uPlane,
                uvPitch,
                vPlane,
                uvPitch
            );
            //.................................................

But now I get an access violation at the call to SDL_UpdateYUVTexture. I'm honestly not sure what's wrong. I think it may have to do with setting AVPicture pict's members data and linesize improperly, but I'm not positive.
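For what it's worth, my current guess (an assumption on my part, not something I have confirmed): SDL_UpdateYUVTexture is documented for planar YV12/IYUV textures, while SDL_PIXELFORMAT_UYVY is a packed 4:2:2 format, so the three plane pointers and the Y pitch of 1 would not match what SDL tries to read. A fallback I am considering, which keeps full-resolution chroma by side-stepping YUV textures entirely and letting swscale produce packed RGB (sketch only, reusing the names from the code above):

// create a plain RGB texture instead of a YUV one
texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGB24,
        SDL_TEXTUREACCESS_STREAMING,
        pCodecCtx->width, pCodecCtx->height);

// convert decoded frames straight to packed RGB (no chroma subsampling)
sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
        pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
        AV_PIX_FMT_RGB24, SWS_BILINEAR, NULL, NULL, NULL);

// one packed RGB buffer replaces the three planes
uint8_t *rgb = (uint8_t *)malloc(pCodecCtx->width * pCodecCtx->height * 3);
uint8_t *dst_data[1] = { rgb };
int dst_linesize[1] = { pCodecCtx->width * 3 };

// inside the decode loop, after frameFinished:
sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data, pFrame->linesize,
        0, pCodecCtx->height, dst_data, dst_linesize);
SDL_UpdateTexture(texture, NULL, rgb, dst_linesize[0]);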