
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (104)
-
XMP PHP
13 May 2011, by — Quoting Wikipedia, XMP means:
Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it handles a set of dynamic tags for use in the context of the Semantic Web.
XMP makes it possible to record, in the form of an XML document, information about a file: title, author, history (...) -
Use, discuss, criticize
13 April 2011, by — Talk to people directly involved in MediaSPIP's development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users. -
Installation in farm mode
4 February 2011, by — Farm mode makes it possible to host several MediaSPIP-type sites while installing their functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
First of all, you must have installed the same files as the installation (...)
On other sites (9101)
-
LibVLC: Retrieving current frame number
2 April 2015, by Solidus — I am doing a project which involves a bit of video recording and editing, and I am struggling to find a good C++ library to use. I am using Qt as my framework, and its video player is not working properly for me (seeking sometimes crashes, for example). I also need to record video and audio from my camera, and QCamera does not work on Windows (for recording).
In my program the user can draw on top of the video, and I need to store the start frame and the end frame of those drawings.
Right now I've been testing libVLC, which almost does what I want. From what I can see, there is no way to jump straight to a certain frame; seeking can only be done by time or by position.
The first solution I came up with was to catch the time-changed event and then calculate the frame number using the FPS. The problem is that, as far as I can tell, this event only fires about every 250 ms, which for a 15 fps video is almost 4 frames.
So the second solution was to use libvlc_video_set_callbacks to supply my own "lock, unlock and display" callbacks and count the frames there. This works for recording from the camera, as there is no going back and the frames simply run from 0 until the video stops. The problem is when playing a video: since, as far as I can tell, there is no timestamp available, there is no way for me to know which frame number I am on (the user can be seeking, for example). My "hacky" solution is to keep a "lastTime" and a "numTimes" in the struct I pass into these callbacks, and do the following:
lastTime represents the "last new time" received and numTimes represents the number of times lastTime was received.
get_the_current_time
calculate_frame_num_with_fps
if current_time is equal to lastTime:
    frameNum += numTimes
    numTimes++
else:
    lastTime = current_time
    numTimes = 1

This kind of works, but I hate the solution. I'm also not sure whether, during a seek, the time still changes when the difference is less than 250 ms. That would be hard for a user to hit, but I'd prefer not to implement it like that.
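For reference, one possible reading of that heuristic as a small C++ sketch; the struct and the names below are made up for illustration, and the time and FPS values are assumed to come from calls such as libvlc_media_player_get_time() and libvlc_media_player_get_fps():

// Illustrative helper (not part of libVLC): estimate the current frame number
// from the player time, counting extra displayed frames while the reported
// time has not advanced yet.
#include <cstdint>

struct FrameCounter {
    int64_t lastTime = -1;   // last distinct time value seen (ms)
    int64_t numTimes = 0;    // frames displayed since lastTime changed
    int64_t frameNum = 0;    // current frame estimate

    void onDisplay(int64_t currentTimeMs, float fps) {
        // Base estimate from the player clock (time * fps).
        int64_t fromTime = static_cast<int64_t>(currentTimeMs * fps / 1000.0);
        if (currentTimeMs == lastTime) {
            // Same reported time as before: assume one more frame was shown.
            frameNum = fromTime + numTimes;
            ++numTimes;
        } else {
            // New reported time: restart counting from it.
            frameNum = fromTime;
            lastTime = currentTimeMs;
            numTimes = 1;
        }
    }
};

Whether this stays accurate across seeks still depends on how often the time value is refreshed, which is exactly the uncertainty described above.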
So my question is: is there another solution for this? If not, are there any libraries that could help me with it? I know about FFmpeg, which would seemingly solve this problem since it is more low level and I could implement this myself. The problem is that my deadline is approaching and that would still take me some time (learning the library and doing all the work), so I was thinking of it as a last resort.
Thank you for your time.
-
How to convert ffmpeg video frame to YUV444?
21 October 2019, by Edward Severinsen — I have been following a tutorial on how to use FFmpeg and SDL to make a simple video player with no audio (yet). While working through the tutorial I realized it was out of date and many of the functions it used, for both FFmpeg and SDL, were deprecated, so I searched for an up-to-date solution and found a Stack Overflow answer that completed what the tutorial was missing.
However, it uses YUV420, which is lower quality. I want to implement YUV444, and after studying chroma subsampling for a bit and looking at the different YUV formats, I am confused about how to implement it. From what I understand, YUV420 carries a quarter of the chroma information that YUV444 does: in YUV444 every pixel has its own chroma sample and so is more detailed, while in YUV420 pixels are grouped together and share a chroma sample, and so it is less detailed.
And from what I understand, the different YUV formats (420, 422, 444) also differ in how they order the Y, U and V samples. All of this is a bit overwhelming because I haven't done much with codecs, conversions, etc. Any help would be much appreciated, and if additional info is needed please let me know before downvoting.
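To make the sampling difference concrete, here is a rough C++ sketch (hypothetical frame size, assuming planar storage with even width and height) of how large each plane is in YUV420P versus YUV444P:

#include <cstdio>

int main() {
    const int w = 1280, h = 720;          // example dimensions (made up)

    // YUV420P: full-resolution Y plane; U and V hold one sample per 2x2
    // block of pixels, so each chroma plane is a quarter of the Y plane.
    const int y420  = w * h;
    const int uv420 = (w / 2) * (h / 2);  // per chroma plane

    // YUV444P: every pixel has its own U and V sample, so all three planes
    // are the same size as the Y plane.
    const int y444  = w * h;
    const int uv444 = w * h;              // per chroma plane

    std::printf("YUV420P: %d bytes (1.5 per pixel)\n", y420 + 2 * uv420);
    std::printf("YUV444P: %d bytes (3 per pixel)\n", y444 + 2 * uv444);
    return 0;
}

In other words, both formats keep the same luma plane; YUV420 simply keeps a quarter as many chroma samples as YUV444.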
Here is the code from the answer I mentioned, concerning the conversion to YUV420:
texture = SDL_CreateTexture(
    renderer,
    SDL_PIXELFORMAT_YV12,
    SDL_TEXTUREACCESS_STREAMING,
    pCodecCtx->width,
    pCodecCtx->height
    );
if (!texture) {
    fprintf(stderr, "SDL: could not create texture - exiting\n");
    exit(1);
}

// initialize SWS context for software scaling
sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
    pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
    AV_PIX_FMT_YUV420P,
    SWS_BILINEAR,
    NULL,
    NULL,
    NULL);

// set up YV12 pixel array (12 bits per pixel)
yPlaneSz = pCodecCtx->width * pCodecCtx->height;
uvPlaneSz = pCodecCtx->width * pCodecCtx->height / 4;
yPlane = (Uint8*)malloc(yPlaneSz);
uPlane = (Uint8*)malloc(uvPlaneSz);
vPlane = (Uint8*)malloc(uvPlaneSz);
if (!yPlane || !uPlane || !vPlane) {
    fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
    exit(1);
}

uvPitch = pCodecCtx->width / 2;

while (av_read_frame(pFormatCtx, &packet) >= 0) {
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream) {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

        // Did we get a video frame?
        if (frameFinished) {
            AVPicture pict;
            pict.data[0] = yPlane;
            pict.data[1] = uPlane;
            pict.data[2] = vPlane;
            pict.linesize[0] = pCodecCtx->width;
            pict.linesize[1] = uvPitch;
            pict.linesize[2] = uvPitch;

            // Convert the image into YUV format that SDL uses
            sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                pFrame->linesize, 0, pCodecCtx->height, pict.data,
                pict.linesize);

            SDL_UpdateYUVTexture(
                texture,
                NULL,
                yPlane,
                pCodecCtx->width,
                uPlane,
                uvPitch,
                vPlane,
                uvPitch
                );

            SDL_RenderClear(renderer);
            SDL_RenderCopy(renderer, texture, NULL, NULL);
            SDL_RenderPresent(renderer);
        }
    }

    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);

    SDL_PollEvent(&event);
    switch (event.type) {
        case SDL_QUIT:
            SDL_DestroyTexture(texture);
            SDL_DestroyRenderer(renderer);
            SDL_DestroyWindow(screen);
            SDL_Quit();
            exit(0);
            break;
        default:
            break;
    }
}

// Free the YUV frame
av_frame_free(&pFrame);
free(yPlane);
free(uPlane);
free(vPlane);

// Close the codec
avcodec_close(pCodecCtx);
avcodec_close(pCodecCtxOrig);

// Close the video file
avformat_close_input(&pFormatCtx);

EDIT:
After more research I learned that YUV420 is stored with all the Y bytes first, then the U and V bytes one after another, as illustrated by this image:
[image: YUV420 byte layout (source: wikimedia.org)]
However, I also learned that YUV444 is stored in the order U, Y, V, repeating, as this picture shows:
[image: YUV444 byte layout]
I tried changing some things around in the code:
// I changed SDL_PIXELFORMAT_YV12 to SDL_PIXELFORMAT_UYVY
// as to reflect the order of YUV444
texture = SDL_CreateTexture(
    renderer,
    SDL_PIXELFORMAT_UYVY,
    SDL_TEXTUREACCESS_STREAMING,
    pCodecCtx->width,
    pCodecCtx->height
    );
if (!texture) {
    fprintf(stderr, "SDL: could not create texture - exiting\n");
    exit(1);
}

// Changed AV_PIX_FMT_YUV420P to AV_PIX_FMT_YUV444P
// for rather obvious reasons
sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
    pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
    AV_PIX_FMT_YUV444P,
    SWS_BILINEAR,
    NULL,
    NULL,
    NULL);

// There are as many Y, U and V bytes as pixels I just
// made yPlaneSz and uvPlaneSz equal to the number of pixels
yPlaneSz = pCodecCtx->width * pCodecCtx->height;
uvPlaneSz = pCodecCtx->width * pCodecCtx->height;
yPlane = (Uint8*)malloc(yPlaneSz);
uPlane = (Uint8*)malloc(uvPlaneSz);
vPlane = (Uint8*)malloc(uvPlaneSz);
if (!yPlane || !uPlane || !vPlane) {
    fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
    exit(1);
}

uvPitch = pCodecCtx->width * 2;

while (av_read_frame(pFormatCtx, &packet) >= 0) {
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream) {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

        // Rearranged the order of the planes to reflect UYV order
        // then set linesize to the number of Y, U and V bytes
        // per row
        if (frameFinished) {
            AVPicture pict;
            pict.data[0] = uPlane;
            pict.data[1] = yPlane;
            pict.data[2] = vPlane;
            pict.linesize[0] = pCodecCtx->width;
            pict.linesize[1] = pCodecCtx->width;
            pict.linesize[2] = pCodecCtx->width;

            // Convert the image into YUV format that SDL uses
            sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                pFrame->linesize, 0, pCodecCtx->height, pict.data,
                pict.linesize);

            SDL_UpdateYUVTexture(
                texture,
                NULL,
                yPlane,
                1,
                uPlane,
                uvPitch,
                vPlane,
                uvPitch
                );
// ...

But now I get an access violation at the call to SDL_UpdateYUVTexture. I'm honestly not sure what's wrong. I think it may have to do with setting AVPicture pict's members data and linesize improperly, but I'm not positive.
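For reference, a small sketch (not taken from the original post) of how the pitch arguments to SDL_UpdateYUVTexture are normally derived for planar data: each pitch is the byte stride of one row of the corresponding plane, so it depends on that plane's width rather than being a constant. SDL documents this call for its planar YV12/IYUV textures, whereas UYVY is a packed 4:2:2 format. Using the planar YUV420 setup from the first listing as the example:

// Pitches are per-plane row strides in bytes (planar YV12/IYUV case).
int yPitch  = pCodecCtx->width;      // Y rows are width bytes long
int uvPitch = pCodecCtx->width / 2;  // U/V rows are half-width in 4:2:0

SDL_UpdateYUVTexture(texture, NULL,
                     yPlane, yPitch,
                     uPlane, uvPitch,
                     vPlane, uvPitch);

-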
Saving scatterplot animations with matplotlib produces blank video file
1 April 2013, by user2175850 — I am having a very similar problem to this question, but the suggested solution doesn't work for me.
I have set up an animated scatter plot using the matplotlib animation module, and it works fine when displayed live. I would like to save it to an AVI file or something similar. The code I have written to do this does not error out, but the video it produces just shows a blank set of axes or a black screen. I've done several checks: the data is being generated and the figure is being updated; it's just not getting saved to the video...
I tried removing "animated=True" and "blit=True" as suggested in this question but that did not fix the problem.
I have placed the relevant code below but can provide more if necessary. Could anyone suggest what I should do to get this working?
def initAnimation(self):
    rs, cfgs = next(self.jumpingDataStreamIterator)
    #self.scat = self.axAnimation.scatter(rs[0], rs[1], c=cfgs[0], marker='o')
    self.scat = self.axAnimation.scatter(rs[0], rs[1], c=cfgs[0], marker='o', animated=True)
    return self.scat,

def updateAnimation(self, i):
    """Update the scatter plot."""
    rs, cfgs = next(self.jumpingDataStreamIterator)
    # Set x and y data...
    self.scat.set_offsets(rs[:2,].transpose())
    #self.scat = self.axAnimation.scatter(rs[0], rs[1], c=cfgs[0], animated=True)
    # Set sizes...
    #self.scat._sizes = 300 * abs(data[2])**1.5 + 100
    # Set colors..
    #self.scat.set_array(cfgs[0])
    # We need to return the updated artist for FuncAnimation to draw..
    # Note that it expects a sequence of artists, thus the trailing comma.
    matplotlib.pyplot.draw()
    return self.scat,

def animate2d(self, steps=None, showEvery=50, size=25):
    self.figAnimation, self.axAnimation = matplotlib.pyplot.subplots()
    self.axAnimation.set_aspect("equal")
    self.axAnimation.axis([-size, size, -size, size])
    self.jumpingDataStreamIterator = self.jumpingDataStream(showEvery)
    self.univeseAnimation = matplotlib.animation.FuncAnimation(self.figAnimation,
        self.updateAnimation, init_func=self.initAnimation,
        blit=True)
    matplotlib.pyplot.show()

def animate2dVideo(self, fileName=None, steps=10000, showEvery=50, size=25):
    self.figAnimation, self.axAnimation = matplotlib.pyplot.subplots()
    self.axAnimation.set_aspect("equal")
    self.axAnimation.axis([-size, size, -size, size])
    self.Writer = matplotlib.animation.writers['ffmpeg']
    self.writer = self.Writer(fps=1, metadata=dict(artist='Universe Simulation'))
    self.jumpingDataStreamIterator = self.jumpingDataStream(showEvery)
    self.universeAnimation = matplotlib.animation.FuncAnimation(self.figAnimation,
        self.updateAnimation, scipy.arange(1, 25), init_func=self.initAnimation)
    self.universeAnimation.save('C:/universeAnimation.mp4', writer=self.writer)