
Other articles (58)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is running version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.
-
Libraries and binaries specific to video and audio processing
31 January 2010
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries: FFmpeg: the main encoder, which transcodes almost any type of video or audio file into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting Ogg files; MediaInfo: retrieves information from most video and audio formats;
Additional, optional binaries: flvtool2: (...)
-
Using and configuring the script
19 January 2011
Information specific to the Debian distribution
If you use this distribution, you will need to enable the "debian-multimedia" repositories as explained here:
Since version 0.3.1 of the script, the repository can be enabled automatically in response to a prompt.
Retrieving the script
The installation script can be retrieved in two different ways.
Via svn, using the command to fetch the up-to-date source code:
svn co (...)
On other sites (10937)
-
Can ffmpeg periodically report statistics on a real-time audio stream (rather than a file)?
19 January 2016, by Caius Jard
I currently use ffmpeg to capture the desktop screen and the audio the computer's speakers are playing, something like a screencast. ffmpeg is started by an app that captures its console output, so I can have that app read the output and look for information.
I'd like to know whether there is a set of switches I can supply to ffmpeg so that it periodically outputs audio statistics that directly report, or let me infer, that the audio stream has gone silent.
I see some audio statistics switches/filters, but the help docs for these seem to imply they collect their stats over the whole stream and only report them at the end. I'd prefer something like "the average audio volume over the past 5 seconds", reported every 5 seconds. I could perhaps even deduce it from the encoder's audio bitrate: if it's VBR and the rate consistently falls, it's probably encoding nothing.
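
As a side note (not part of the original question): ffmpeg's audio filters can report measurements while the encode is running rather than only at the end. For example, the silencedetect filter logs silence start/end events as they are detected, and the ebur128 filter continuously prints momentary and short-term loudness. A rough sketch of the kind of switches involved, with INPUT and OUTPUT standing in for the existing capture arguments:

# Log silence events (below -50 dB lasting at least 5 seconds) as they are detected,
# while the normal encode continues:
ffmpeg -i INPUT -af silencedetect=noise=-50dB:d=5 OUTPUT

# Continuously print momentary / short-term loudness during analysis (output discarded):
ffmpeg -nostats -i INPUT -filter_complex ebur128 -f null -

Either log line can be parsed from the console output the wrapping app already captures.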
-
libav sws_scale() fails colorspace conversion on real device, works on emulator
26 August 2020, by chugadie
I'm making a movie player with libav. I have video packet decoding working, reverse playback working, and seeking working. All of this works on an x86 Android emulator, but fails on a real Android phone (arm64-v8a).


The failure is in sws_scale(): it returns 0. The video frames continue to be decoded properly, with no errors.

There are no errors, warnings, or alerts from libav. I have connected an av_log callback:


void log_callback(void *ptr, int level, const char *fmt, va_list vargs) {
    if (level <= AV_LOG_WARNING)
        // A va_list must go through the vprint variant; AV_LOG_* values are not Android priorities.
        __android_log_vprint(ANDROID_LOG_WARN, LOG_TAG, fmt, vargs);
}

uint64_t openMovie( char* path, int rotate, float javaDuration )
{
    av_log_set_level(AV_LOG_WARNING);
    av_log_set_callback(log_callback);



The code to do the sws_scale() is:

int JVM_getBitmapBuffer( JNIEnv* env, jobject thiz, jlong av, jobject bufferAsInt, jbyte transparent ) {
    avblock *block = (avblock *) av;
    if (!block) {
        __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, " avblock is null");
        return AVERROR(EINVAL);
    }
    if (!block->pCodecCtx) {
        __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, " codecctx is null");
        return AVERROR(EINVAL);
    }

    int width = block->pCodecCtx->width;
    int height = block->pCodecCtx->height;

    if (NULL == block->sws) {
        __android_log_print( ANDROID_LOG_ERROR, LOG_TAG, "getBitmapBuffer:\n *** invalid sws context ***" );
    }

    int scaleRet = sws_scale( block->sws,
        block->pFrame->data,
        block->pFrame->linesize,
        0,
        height,
        block->pFrameRGB->data,
        block->pFrameRGB->linesize
    );
    if (scaleRet == 0 ) {
        __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, " scale failed");
        __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, " pframe linesize %d", block->pFrame->linesize[0]);
        __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, " pframergb linesize %d", block->pFrameRGB->linesize[0]);
        __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, " height %d", height);
        return AVERROR(EINVAL);
    }



Setting up the codec and AVFrames:


// I have tried every combination of 1, 8, 16, and 32 for these values
int alignRGB = 32;
int align = 16; 
int width = block->pCodecCtx->width;
int height = block->pCodecCtx->height;
block->pFrame = av_frame_alloc();
block->pFrameRGB = av_frame_alloc();

block->pFrameRGBBuffer = av_malloc(
 (size_t)av_image_get_buffer_size(AV_PIX_FMT_RGB32, width, height, alignRGB) 
);

av_image_fill_arrays(
 block->pFrameRGB->data,
 block->pFrameRGB->linesize,
 block->pFrameRGBBuffer,
 AV_PIX_FMT_RGB32,
 width,
 height,
 alignRGB
);

block->pFrameBuffer = av_malloc(
 (size_t) av_image_get_buffer_size(block->pCodecCtx->pix_fmt,
 width, height, align
 )
);
av_image_fill_arrays(
 block->pFrame->data,
 block->pFrame->linesize,
 block->pFrameBuffer,
 block->pCodecCtx->pix_fmt,
 width, height,
 align
);
block->sws = sws_getContext(
 width, height,
 AV_PIX_FMT_YUV420P,
 width, height,
 AV_PIX_FMT_RGB32,
 SWS_BILINEAR, NULL, NULL, 0
);



Wildcards are that:

- I'm using React Native
- My emulator is x86, Android API 28
- My real device is arm64-v8a AOSP (around API 28, I don't remember exactly)

Other notes:

- The libav .so files are compiled from the mobile-ffmpeg project.
- sws_scale also works on x86_64 Linux, using SDL to display YV12.
- Test video is here: https://github.com/markkimsal/video-thumbnailer/tree/master/fixtures
- block is a simple C struct with pointers to the relevant AV memory structures.
- Using FFmpeg 4.3.2

I'm pretty certain it has something to do with pixel alignment, but documentation on this topic is practically non-existent. It could also be the difference between the RGBA and RGB32 pixel formats, or possibly little-endian vs. big-endian.
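
A hedged sketch (not from the original post) of the kind of changes often suggested for this symptom; it reuses the avblock field names from the question and assumes FFmpeg 4.x: build the SwsContext from the pixel format the decoder actually produced instead of a hardcoded AV_PIX_FMT_YUV420P, let FFmpeg choose the RGB buffer alignment with av_frame_get_buffer(), and target the byte-order-independent AV_PIX_FMT_RGBA rather than the endian-dependent AV_PIX_FMT_RGB32:

// Sketch only: field names (pFrame, pFrameRGB, pCodecCtx, sws) follow the question's avblock.
// 1. Allocate the RGB frame and let FFmpeg pick a safe alignment (align = 0).
block->pFrameRGB = av_frame_alloc();
block->pFrameRGB->format = AV_PIX_FMT_RGBA;    // byte order matches Android ARGB_8888 bitmaps
block->pFrameRGB->width  = block->pCodecCtx->width;
block->pFrameRGB->height = block->pCodecCtx->height;
av_frame_get_buffer(block->pFrameRGB, 0);

// 2. (Re)create the scaler from the frame the decoder actually produced,
//    instead of hardcoding AV_PIX_FMT_YUV420P; sws_getCachedContext() is a
//    no-op when nothing has changed.
block->sws = sws_getCachedContext(block->sws,
    block->pFrame->width, block->pFrame->height,
    (enum AVPixelFormat) block->pFrame->format,
    block->pFrameRGB->width, block->pFrameRGB->height,
    AV_PIX_FMT_RGBA,
    SWS_BILINEAR, NULL, NULL, NULL);

// 3. Scale using the frame's own height and check the return value.
int out_h = sws_scale(block->sws,
    (const uint8_t * const *) block->pFrame->data, block->pFrame->linesize,
    0, block->pFrame->height,
    block->pFrameRGB->data, block->pFrameRGB->linesize);
if (out_h <= 0) {
    __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, "sws_scale returned %d", out_h);
}

If the decode path on the real device reports a different pFrame->format than the emulator does, a scaler built for the wrong input layout is one plausible way to end up with a 0 return and no av_log output.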


-
Sending a video stream from NodeJS to Python in real time [closed]
17 June 2021, by Tristan Delort
I'm using a NodeJS server to catch a video stream through a WebRTC PeerConnection, and I need to send it to a Python script.


I use NodeJS mainly because it's easy to use WebRTC with it: the 'wrtc' package supports RTCVideoSink, and Python's aiortc doesn't.


I was thinking of using a named pipe with ffmpeg to forward the video stream, but three questions arose:


- Should I use Python instead of NodeJS and avoid the stream-through-a-named-pipe part entirely? (This would mean there is a way to extract individual frames from a MediaStreamTrack in Python.)

- If I stick with the "NodeJS - Python" approach, how do I send the stream from one script to the other? A named pipe? Unix domain sockets? And with FFmpeg?

- Finally, for performance purposes I think that sending a stream rather than individual frames is better and simpler, but is this true?

Thanks all!


-