
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (102)
-
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded to Ogv, WebM and MP4 (supported by HTML5); MP4 is also supported by Flash.
Audio files are encoded to Ogg and MP3 (supported by HTML5); MP3 is also supported by Flash.
Where possible, text documents are analyzed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
MediaSPIP Init and Diogène: MediaSPIP publication types
11 November 2010, by
When a MediaSPIP site is installed, the MediaSPIP Init plugin performs several operations, the main one being to create four main sections on the site and five form templates for Diogène.
These four main sections (also called sectors) are: Medias; Sites; Editos; Actualités.
For each of these sections, a specific form template of the same name is created. For the "Medias" section, a second "category" template is created, allowing you to add (...) -
MediaSPIP Player: potential problems
22 February 2011, by
The player does not work on Internet Explorer
On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate.
If the configuration of this Apache module contains a line resembling the following, try removing or commenting it out to see whether the player then works correctly (the original example is truncated here; an illustrative directive is sketched below) (...)
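The exact directive from the original article is not preserved above; purely as an assumption about what such a line typically looks like, a common mod_deflate directive has this shape (your configuration may differ):

    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/javascript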
On other sites (12363)
-
MAINTAINERS: update myself for dvdvideo, rcwtdec, rcwtenc
26 September 2024, by Marth64
I plan to look after and test them for the foreseeable future.
I am not a committer but do care for these muxers/demuxers.
Signed-off-by: Marth64 <marth64@proxyid.net>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc> -
avcodec: add avcodec_get_supported_config()
3 April 2024, by Niklas Haas
This replaces the myriad of existing lists in AVCodec by a unified API call, allowing us to (ultimately) trim down the sizeof(AVCodec) quite substantially, while also making this more trivially extensible.
In addition to the already covered lists, add two new entries for color space and color range, mirroring the newly added negotiable fields in libavfilter.
Once the deprecation period passes for the existing public fields, the rough plan is to move the commonly used fields (such as pix_fmt/sample_fmt) into FFCodec, possibly as a union of audio and video configuration types, and then implement the rarely used fields with custom callbacks.
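As an illustration of the call described above (a sketch based only on this commit message; the exact prototype and enum names are assumed from the libavcodec headers of that period and may differ), querying a codec's supported pixel formats could look like this:

    #include <libavcodec/avcodec.h>
    #include <libavutil/pixdesc.h>
    #include <stdio.h>

    /* Sketch: list the pixel formats a codec reports through the unified
       query instead of reading the deprecated AVCodec::pix_fmts field. */
    static void print_supported_pix_fmts(const AVCodec *codec)
    {
        const enum AVPixelFormat *fmts = NULL;
        int nb_fmts = 0;

        if (avcodec_get_supported_config(NULL, codec, AV_CODEC_CONFIG_PIX_FORMAT,
                                         0, (const void **)&fmts, &nb_fmts) < 0)
            return;

        /* A NULL list means every value of this config type is supported. */
        for (int i = 0; fmts && i < nb_fmts; i++)
            printf("%s\n", av_get_pix_fmt_name(fmts[i]));
    }
-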
Cannot display a decoded video frame on Raylib
20 December 2024, by gabriel_tiso
I'm trying to explore libav and raylib just to understand how audio and video work, and also to learn how to build nice interfaces with the raylib project. I've implemented a simple struct capable of decoding audio and video frames. When a video frame appears, I convert it to the RGBA format, which packs the values into 32 bpp. This is the setup:

// Allocate a tightly packed (align = 1) RGBA destination image
if (av_image_alloc((uint8_t **)media->dst_frame->data,
                   media->dst_frame->linesize, media->ctxs[0]->width,
                   media->ctxs[0]->height, AV_PIX_FMT_RGBA, 1) < 0) {
    fprintf(stderr, "Failed to setup dest image\n");
    return -1;
}

// Scaler converting decoded frames to RGBA at the same resolution
media->sws_ctx = sws_getContext(
    media->ctxs[0]->width, media->ctxs[0]->height, media->ctxs[0]->pix_fmt,
    media->ctxs[0]->width, media->ctxs[0]->height, AV_PIX_FMT_RGBA,
    SWS_BILINEAR, NULL, NULL, NULL);

// Later on, in the decode function:
int ret = sws_scale(media->sws_ctx,
                    (const uint8_t *const *)media->frame->data,
                    media->frame->linesize, 0, media->frame->height,
                    media->dst_frame->data, media->dst_frame->linesize);




In the main file, I initialize raylib and set up the steps needed to load the texture (here I'm trying to fetch the first video frame to show the user a preview of the video; later on I plan to reset the stream to allow a proper playback routine). I think the format of the image is right.


Image previewImage =
    GenImageColor(videoArea.width, videoArea.height, BLACK);
// I assume this makes the formats compatible
ImageFormat(&previewImage, PIXELFORMAT_UNCOMPRESSED_R8G8B8A8);

Texture2D videoTexture = LoadTextureFromImage(previewImage);
UnloadImage(previewImage);




if (!state->has_media) {
    DrawText("Drop a video file here!", videoArea.x + 10,
             videoArea.y + 10, 20, GRAY);
} else {
    if (state->first_frame) {
        // Decode packets until the first video frame arrives,
        // then upload its RGBA data to the texture
        do {
            decode_packet(state->media);
        } while (!is_frame_video(state->media));

        UpdateTexture(videoTexture, state->media->dst_frame->data[0]);

        state->first_frame = 0;
    }
}

DrawTexture(videoTexture, videoArea.x, videoArea.y, WHITE);



Anyway, this is what I get when an mp4 file is dropped:



It seems like an alignment issue, maybe? Can someone point me in the right direction to solve this problem correctly?
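For reference, a minimal sketch of what an alignment mismatch would mean here, assuming that UpdateTexture() expects rows packed as width * 4 bytes for an R8G8B8A8 texture created with the video's own dimensions, while dst_frame->linesize[0] may include padding (reusing the question's media and videoTexture names; an illustration rather than a confirmed fix):

    #include <stdlib.h>
    #include <string.h>

    /* Sketch: repack the RGBA frame row by row so the buffer handed to
       raylib is exactly width * 4 bytes per row, without linesize padding.
       The texture must have been created with the same width and height. */
    int width  = media->ctxs[0]->width;
    int height = media->ctxs[0]->height;
    unsigned char *packed = malloc((size_t)width * height * 4);
    if (packed) {
        for (int y = 0; y < height; y++)
            memcpy(packed + (size_t)y * width * 4,
                   media->dst_frame->data[0] + (size_t)y * media->dst_frame->linesize[0],
                   (size_t)width * 4);
        UpdateTexture(videoTexture, packed);
        free(packed);
    }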