
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (14)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out. -
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community. -
Other interesting software
13 April 2011, by
We don't claim to be the only ones doing what we do, and certainly don't claim to be the best at it; we simply try to do it well and to keep improving.
The following list covers software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to do the same things as.
We haven't used or tested them ourselves, but you can take a look.
Videopress
Website: http://videopress.com/
License: GNU/GPL v2
Source code: (...)
On other sites (5204)
-
Text recognition from each frame of long Video Stream [closed]
19 March 2020, by Shashikant Sharma
I am a new Android app developer coming from a ReactJS background, and this is my first Android project. I need to build an Android app that captures all the time intervals at which a particular piece of text appears in a video stream. I have not been able to work out an approach to proceed with the development. I know it may sound naive to post such a question on a portal like this, but I am not asking for any source code or a link to a GitHub repo.
(NOTE: the video stream would be 30+ minutes long, and the text to be recognized appears at a fixed position in the video.) -
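One possible pipeline for this, sketched below in C, is to sample frames at a fixed rate, crop the known text region, OCR each crop with Tesseract's C API, and record the timestamps where the target string appears. The sketch assumes the crops have already been dumped as raw 8-bit grayscale with an ffmpeg command along the lines of ffmpeg -i input.mp4 -vf "fps=1,crop=W:H:X:Y,format=gray" -f rawvideo crops.gray; the crop size, file names and target string are placeholders, and on Android the same idea would normally use an on-device OCR library rather than this desktop code.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <tesseract/capi.h>

#define CROP_W 320   /* width of the fixed text region (placeholder)  */
#define CROP_H 64    /* height of the fixed text region (placeholder) */

int main(void) {
    const char *target = "HELLO";               /* text to look for (placeholder) */
    FILE *f = fopen("crops.gray", "rb");        /* raw grayscale crops, one per sampled second */
    if (!f) { perror("crops.gray"); return 1; }

    TessBaseAPI *ocr = TessBaseAPICreate();
    if (TessBaseAPIInit3(ocr, NULL, "eng") != 0) {
        fprintf(stderr, "could not initialise tesseract\n");
        return 1;
    }

    unsigned char *crop = malloc((size_t)CROP_W * CROP_H);
    long sec = 0;
    /* fps=1 was used for extraction, so frame N corresponds to second N */
    while (fread(crop, 1, (size_t)CROP_W * CROP_H, f) == (size_t)CROP_W * CROP_H) {
        TessBaseAPISetImage(ocr, crop, CROP_W, CROP_H, 1, CROP_W); /* 1 byte per pixel */
        char *text = TessBaseAPIGetUTF8Text(ocr);
        if (text) {
            if (strstr(text, target))
                printf("match at %02ld:%02ld:%02ld\n",
                       sec / 3600, (sec / 60) % 60, sec % 60);
            TessDeleteText(text);
        }
        sec++;
    }

    free(crop);
    fclose(f);
    TessBaseAPIDelete(ocr);
    return 0;
}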
FFMPEG : Mapping YUV data to output buffer of decode function
22 December 2013, by Zax
I am modifying a video decoder in FFmpeg. The decoded output is in YUV420 format, and I have three pointers, one for each plane:
yPtr -> points to the luma plane
uPtr -> points to the chroma Cb plane
vPtr -> points to the chroma Cr plane
However, the output pointer to which I need to map my YUV data is of type void, and there is only one of it:
void *data
is the output pointer to which I need to attach my yPtr, uPtr and vPtr. How should I do this?
One approach I have tried is to allocate a new buffer whose size equals the sum of the Y, U and V data, copy the contents of yPtr, uPtr and vPtr into it, and assign that buffer to the
*data
output pointer. However, this approach is not preferred because it requires a memcpy and has other performance drawbacks.
Can anyone please suggest an alternative? This may not be directly related to FFmpeg, but since I'm modifying decoder code in FFmpeg's libavcodec, I'm tagging it as FFmpeg.
Edit: what I'm trying to do.
My understanding is that if I make the
void *data
pointer of any decoder's decode function point to my frame data and set
*got_frame_ptr
to 1, the framework will take care of dumping this data into the YUV file. Is my understanding right? The function prototype of my custom video decoder (like any video decoder in FFmpeg) is shown below:
static int myCustomDec_decode_frame(AVCodecContext *avctx,
void *data, int *data_size,
uint8_t *buf, int buf_size) {
I'm referring to this post, FFMPEG : Explain parameters of any codecs function pointers, and assuming that I need to point *data to my YUV data and that FFmpeg will take care of the dumping. Please provide suggestions regarding this.
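For what it's worth, with the legacy decode_frame() signature quoted above, the void *data argument is the AVFrame supplied by the caller, so the three plane pointers can be attached to frame->data[] and frame->linesize[] directly, with no memcpy; in this old API, setting *data_size = sizeof(AVFrame) plays the role that *got_frame_ptr = 1 plays in newer ones. A minimal sketch under those assumptions follows; MyDecContext and its fields are hypothetical stand-ins for the real decoder's state.

#include <libavcodec/avcodec.h>

/* Hypothetical private decoder state; stands in for whatever the real
   decoder fills in while decoding the bitstream. */
typedef struct MyDecContext {
    uint8_t *yPtr, *uPtr, *vPtr;     /* decoded Y, Cb and Cr planes   */
    int luma_stride, chroma_stride;  /* bytes per row of each plane   */
} MyDecContext;

static int myCustomDec_decode_frame(AVCodecContext *avctx,
                                    void *data, int *data_size,
                                    uint8_t *buf, int buf_size)
{
    MyDecContext *s = avctx->priv_data;
    AVFrame *frame  = data;          /* the void* is really an AVFrame */

    /* ... actual bitstream decoding fills s->yPtr, s->uPtr, s->vPtr ... */

    frame->data[0]     = s->yPtr;          /* luma plane      */
    frame->data[1]     = s->uPtr;          /* chroma Cb plane */
    frame->data[2]     = s->vPtr;          /* chroma Cr plane */
    frame->linesize[0] = s->luma_stride;
    frame->linesize[1] = s->chroma_stride;
    frame->linesize[2] = s->chroma_stride;

    *data_size = sizeof(AVFrame);    /* signals that one frame was produced */
    return buf_size;                 /* bytes of input consumed             */
}

The pixel data itself is never copied; only the plane pointers and strides are handed to the caller, which then writes the frame out.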
-
Transcode webm audio stream on-the-fly using ffmpeg
5 May 2022, by sailybra
I want to transcode an audio stream from YouTube (webm) to PCM on the fly using a buffer, but ffmpeg can only process the first buffer it receives, because the subsequent buffers lack metadata. Is there any way to make this work? I've thought about attaching metadata to the other chunks but couldn't get it to work. Maybe there's a better approach?
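One common workaround is to keep a single ffmpeg process alive and write every webm chunk into its stdin as one continuous stream, reading the decoded PCM from its stdout, so that the container metadata carried by the first chunk applies to everything that follows. A rough sketch of that idea in C is shown below; the sample rate, channel count and the stand-in chunk source (the program's own stdin) are assumptions, and in a real application the chunks would come from the network.

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int in_pipe[2], out_pipe[2];   /* chunks -> ffmpeg stdin; ffmpeg stdout -> PCM */
    if (pipe(in_pipe) < 0 || pipe(out_pipe) < 0) { perror("pipe"); return 1; }

    pid_t ff = fork();
    if (ff == 0) {                 /* child 1: ffmpeg reading ONE continuous stream */
        dup2(in_pipe[0], STDIN_FILENO);
        dup2(out_pipe[1], STDOUT_FILENO);
        close(in_pipe[0]);  close(in_pipe[1]);
        close(out_pipe[0]); close(out_pipe[1]);
        execlp("ffmpeg", "ffmpeg", "-loglevel", "error",
               "-i", "pipe:0",                            /* webm stream in  */
               "-f", "s16le", "-ar", "48000", "-ac", "2", /* assumed format  */
               "pipe:1",                                  /* raw PCM out     */
               (char *)NULL);
        _exit(127);
    }

    pid_t writer = fork();
    if (writer == 0) {             /* child 2: feeds every chunk to the SAME process */
        close(in_pipe[0]); close(out_pipe[0]); close(out_pipe[1]);
        unsigned char buf[4096];
        ssize_t n;
        /* Stand-in chunk source: relay our own stdin; in the real app these
           would be the webm buffers received from YouTube. */
        while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
            write(in_pipe[1], buf, (size_t)n);
        close(in_pipe[1]);         /* EOF marks the end of the single stream */
        _exit(0);
    }

    /* Parent: consume the decoded PCM as it is produced. */
    close(in_pipe[0]); close(in_pipe[1]); close(out_pipe[1]);
    unsigned char pcm[4096];
    ssize_t n;
    while ((n = read(out_pipe[0], pcm, sizeof pcm)) > 0)
        fwrite(pcm, 1, (size_t)n, stdout);                /* stand-in PCM consumer */
    close(out_pipe[0]);
    waitpid(ff, NULL, 0);
    waitpid(writer, NULL, 0);
    return 0;
}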