
Other articles (47)
-
Monitoring MediaSPIP farms (and SPIP ones while we're at it)
31 May 2013
When you manage several (or even several dozen) MediaSPIP sites on the same installation, it can be very handy to get certain information at a glance.
This article documents the Munin monitoring scripts developed with the help of Infini.
These scripts are installed automatically by the automatic installation script if a Munin installation is detected.
Description of the scripts
Three Munin scripts have been developed:
1. mediaspip_medias
A script for (...) -
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources, in the standalone version.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
Customising the display of my MediaSPIP
27 May 2013
You can change the template configuration to customise your MediaSPIP. See also more information by following this link.
How do I remove the view count displayed for a media item?
Administer > Template management > Article and media pages. Under "Information not displayed on media pages", tick the items you do not want displayed.
How do I remove my MediaSPIP title from the horizontal banner?
Administer > Template management > (...)
On other sites (4897)
-
How to flatten a VR video to display on a normal screen? [closed]
14 May 2023, by d-b
I am not sure about the terminology here, but I have a VR video that is intended to be shown using a headset with separate screens for each eye. It is not 3D in the sense that you see something different when you turn your head; it is just "2.5D", so you get a sense of depth when looking at it. There are two video channels that are more or less identical; they are just recorded from slightly different angles, similar to how human eyes see the world. I hope this makes it clear what type of video I have; otherwise, please ask for clarification in a comment (and if there is special terminology for this type of video, please let me know).


More details: the original video is 4320x2160, basically two square channels at 2160x2160 each.


I want to show this video undistorted on a regular screen.


I have read the following questions here on SO:


- How to reproject and join these two clips with ffmpeg?
- How to de-warp 180 degree or 360 degree fisheye video with ffmpeg?
- Unwarping 180 VR Footage with FFmpeg v360 Filter

(and probably a few more).


I think I want to extract the two video channels (note that they are in the same video stream, not like in a movie where you can have several separate audio streams for different languages) into separate files and then "undistort" them.


(3) gave me a command for splitting the video into two files:


ffmpeg -i myclip.mp4 -filter_complex "[0]crop=iw/2:ih:0:0[left];[0]crop=iw/2:ih:ow:0[right]" -map "[left]" -map 0:a /tmp/left.mp4 -map "[right]" -map 0:a /tmp/right.mp4



That seemed to work as expected, but then I also need to "undistort" the content, because it was filmed with some fisheye lens or something like that (straight lines that are not at the absolute centre of the image appear more or less curved).


(5) suggested this command:


ffmpeg -i left.mp4 -vf "v360=input=hequirect:output=flat:h_fov=100:v_fov=67.5:w=1280:h=720" leftfixed.mp4



but that produced an output that was 4320x2160 (obviously from only one channel, since the input was just one channel), showing just the centre of the original image; I estimate the content to be roughly the 500x250 px around the midpoint of the original image (upscaled to 4320x2160, so very blocky).


How can I "undistort" this video so it looks good on a 2D screen while the size is preserved?
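For what it's worth, here is a minimal sketch of one approach, assuming each eye is roughly a 180° fisheye capture (the input projection and the field-of-view numbers are assumptions to tune by eye, and myclip.mp4 stands in for the real file): crop one eye, then flatten it with the v360 filter while keeping the output at the per-eye source resolution so nothing is downscaled.

ffmpeg -i myclip.mp4 -vf "crop=iw/2:ih:0:0,v360=input=fisheye:ih_fov=180:iv_fov=180:output=flat:h_fov=110:v_fov=110:w=2160:h=2160" left_flat.mp4

If the source is actually half-equirectangular rather than fisheye, swapping input=fisheye (and its ih_fov/iv_fov) for input=hequirect would be the variant to try; the key difference from the earlier 1280x720 attempt is matching w and h to the per-eye source size.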


-
Is there a way to cut movement "dead air" on a screen recording? [closed]
16 May 2023, by Raelbe
I have got a couple of screen recordings of a painting I've done, and I've managed to concat the files together.


Unfortunately, there is a lot of "dead air" in the video (where I have left my desk, so there is no movement happening on screen). Is there a way to cut out this downtime?


I found an example that another artist uses for his screen recordings, so I plugged it in with my file paths. This is what I used:


.\ffmpeg -f concat -safe 0 -i "merge.txt" -vf npdecimate=hi=64*12:lo=64*5:frac=0.33,seipts=N/30/TB,"setpts=0.25*PTS" -r 30 -crf 30 -an Illu_Test.mp4



I got this error message at the end:


[AVFilterGraph @ 000001cadfe5b1c0] No option name near 'N/30/TB'
[AVFilterGraph @ 000001cadfe5b1c0] Error parsing a filter description around: ,setpts=0.25*PTS
[AVFilterGraph @ 000001cadfe5b1c0] Error parsing filterchain 'npdecimate=hi=64*12:lo=64*5:frac=0.33,seipts=N/30/TB,setpts=0.25*PTS' around: ,setpts=0.25*PTS
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #0:0



So I chopped the command up a bit, and this is what I used to concat the files; it worked perfectly.


.\ffmpeg -f concat -safe 0 -i "merge.txt" -crf 30 -an Illu_Test.mp4
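For reference, the concat demuxer expects merge.txt to be a plain list of file directives; a minimal sketch, with hypothetical file names standing in for the real recordings:

file 'recording_part1.mp4'
file 'recording_part2.mp4'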



Now, I'm looking to cut out the seconds with no movement. I'm unsure what the -crf option does (as stated, I am brand new to this). The OG artist states that:


"This is the tolerance level that determines whether there has been enough change between frames or not to be considered as detected motion."


Any help would be appreciated.


(Apologies if the format of this question is wrong)
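As a side note, the filter names in the failing command look like typos: FFmpeg's motion-based frame-dropping filter is mpdecimate, not npdecimate, and seipts is presumably setpts. A hedged sketch of the corrected command, keeping the artist's thresholds as-is (it is mpdecimate's hi/lo/frac values, not -crf, that set the motion tolerance; -crf only controls encoder quality):

.\ffmpeg -f concat -safe 0 -i "merge.txt" -vf "mpdecimate=hi=64*12:lo=64*5:frac=0.33,setpts=N/30/TB" -r 30 -crf 30 -an Illu_Test.mp4

The setpts=N/30/TB step rebuilds timestamps at 30 fps after frames are dropped; the original chain's extra setpts=0.25*PTS would additionally speed the result up 4x, so append it only if that is wanted.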


-
`free(): invalid next size (normal)` after loading some frames of some videos using ffmpeg libs
17 September 2024, by rakivo
Here is the function:


uint8_t *load_first_frame(const char *file_path, AVFrame **frame)
{
    int ret;
    AVFormatContext *format_ctx = NULL;
    if ((ret = avformat_open_input(&format_ctx, file_path, NULL, NULL)) < 0) {
        eprintf("could not open input file\n");
        return NULL;
    }

    if ((ret = avformat_find_stream_info(format_ctx, NULL)) < 0) {
        eprintf("could not find stream info\n");
        avformat_close_input(&format_ctx);
        return NULL;
    }

    int stream_idx = -1;
    for (unsigned int i = 0; i < format_ctx->nb_streams; i++) {
        if (format_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
            stream_idx = i;
            break;
        }
    }

    if (stream_idx == -1) {
        eprintf("could not find video stream\n");
        avformat_close_input(&format_ctx);
        return NULL;
    }

    AVCodecParameters *codec_par = format_ctx->streams[stream_idx]->codecpar;
    const AVCodec *codec = avcodec_find_decoder(codec_par->codec_id);
    if (!codec) {
        eprintf("could not find decoder\n");
        avformat_close_input(&format_ctx);
        return NULL;
    }

    AVCodecContext *codec_ctx = avcodec_alloc_context3(codec);
    if ((ret = avcodec_parameters_to_context(codec_ctx, codec_par)) < 0) {
        eprintf("could not copy codec parameters to context\n");
        avformat_close_input(&format_ctx);
        avcodec_free_context(&codec_ctx);
        return NULL;
    }

    if ((ret = avcodec_open2(codec_ctx, codec, NULL)) < 0) {
        eprintf("could not open codec\n");
        avformat_close_input(&format_ctx);
        avcodec_free_context(&codec_ctx);
        return NULL;
    }

    AVPacket *packet = av_packet_alloc();
    uint8_t *rgb_buffer = NULL;
    while (av_read_frame(format_ctx, packet) >= 0) {
        if (packet->stream_index != stream_idx) {
            av_packet_unref(packet);
            continue;
        }

        if (avcodec_send_packet(codec_ctx, packet) != 0) {
            av_packet_unref(packet);
            continue;
        }

        ret = avcodec_receive_frame(codec_ctx, *frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
            av_packet_unref(packet);
            continue;
        } else if (ret < 0) {
            eprintf("error receiving frame: %d\n", ret);
            av_packet_unref(packet);
            av_packet_free(&packet);
            avformat_close_input(&format_ctx);
            avcodec_free_context(&codec_ctx);
            return NULL;
        }

        struct SwsContext *sws_ctx = sws_getContext(
            (*frame)->width, (*frame)->height, codec_ctx->pix_fmt,
            (*frame)->width, (*frame)->height, AV_PIX_FMT_RGB24,
            SWS_BILINEAR, NULL, NULL, NULL
        );

        if (!sws_ctx) {
            eprintf("failed to create SwsContext\n");
            av_packet_unref(packet);
            av_packet_free(&packet);
            av_frame_free(frame);
            avformat_close_input(&format_ctx);
            avcodec_free_context(&codec_ctx);
            return NULL;
        }

        int rgb_buffer_size = av_image_get_buffer_size(AV_PIX_FMT_RGB24, (*frame)->width, (*frame)->height, 1);
        if (rgb_buffer_size < 0) {
            eprintf("could not get buffer size\n");
            sws_freeContext(sws_ctx);
            av_packet_unref(packet);
            av_packet_free(&packet);
            av_frame_free(frame);
            avformat_close_input(&format_ctx);
            avcodec_free_context(&codec_ctx);
            return NULL;
        }

        rgb_buffer = (uint8_t *) malloc(rgb_buffer_size);
        if (rgb_buffer == NULL) {
            eprintf("failed to allocate RGB buffer\n");
            sws_freeContext(sws_ctx);
            av_packet_unref(packet);
            av_packet_free(&packet);
            avformat_close_input(&format_ctx);
            av_frame_free(frame);
            avcodec_free_context(&codec_ctx);
            return NULL;
        }

        uint8_t *dst[4] = {rgb_buffer, NULL, NULL, NULL};
        int dst_linesize[4] = {0};
        av_image_fill_linesizes(dst_linesize, AV_PIX_FMT_RGB24, (*frame)->width);

        sws_scale(sws_ctx,
                  (const uint8_t *const *)(*frame)->data,
                  (*frame)->linesize,
                  0,
                  (*frame)->height,
                  dst,
                  dst_linesize);

        sws_freeContext(sws_ctx);
        av_packet_unref(packet);
        break;
    }

    av_packet_unref(packet);
    av_packet_free(&packet);
    avformat_close_input(&format_ctx);
    avcodec_free_context(&codec_ctx);

    return rgb_buffer;
}



It loads the first frame of mp4s. The problem is that it works only the first 7 times; then, on the 8th call of the function shown above, malloc(): corrupted top size happens, specifically when calling avcodec_send_packet in the while loop. Running the program under valgrind outputs this:

==21777== Invalid write of size 8
==21777== at 0x7426956: ??? (in /usr/lib/libswscale.so.8.1.100)
==21777== by 0x18497CBF: ???
==21777== by 0x35E3AD3F: ???
==21777== Address 0x37f563f0 is 0 bytes after a block of size 2,194,560 alloc'd
==21777== at 0x48447A8: malloc (vg_replace_malloc.c:446)
==21777== by 0x1113CB: load_first_frame (main.c:503)
==21777== by 0x111995: draw_preview (main.c:605)
==21777== by 0x111E7F: render_files (main.c:672)
==21777== by 0x11209E: main (main.c:704)
==21777==
==21777== Invalid write of size 8
==21777== at 0x742695B: ??? (in /usr/lib/libswscale.so.8.1.100)
==21777== by 0x18497CBF: ???
==21777== by 0x35E3AD3F: ???
==21777== Address 0x37f563f8 is 8 bytes after a block of size 2,194,560 alloc'd
==21777== at 0x48447A8: malloc (vg_replace_malloc.c:446)
==21777== by 0x1113CB: load_first_frame (main.c:503)
==21777== by 0x111995: draw_preview (main.c:605)
==21777== by 0x111E7F: render_files (main.c:672)
==21777== by 0x11209E: main (main.c:704)
==21777==
==21777== Invalid write of size 8
==21777== at 0x7426956: ??? (in /usr/lib/libswscale.so.8.1.100)
==21777== by 0x183FE37F: ???
==21777== by 0x185D16FF: ???
==21777== Address 0x389b94e0 is 0 bytes after a block of size 943,200 alloc'd
==21777== at 0x48447A8: malloc (vg_replace_malloc.c:446)
==21777== by 0x1113CB: load_first_frame (main.c:503)
==21777== by 0x111995: draw_preview (main.c:605)
==21777== by 0x111E7F: render_files (main.c:672)
==21777== by 0x11209E: main (main.c:704)



Line 503 is:

rgb_buffer = (uint8_t *) malloc(rgb_buffer_size);

Apparently I double-freed something somewhere, but I can't see where exactly. The AVFrame **frame that is passed to the function is allocated and freed properly every time, using av_frame_alloc and av_frame_free. The mp4 file at file_path is always just a normal video, and it always exists.

I've tried loading different mp4 files; the crash happens every time, exactly on the 8th call of the function.
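A hedged sketch of one plausible remedy, assuming the invalid writes come from swscale's vectorized stores running past a destination buffer that was sized with align=1: allocating the RGB plane with av_image_alloc, which pads and aligns each row, keeps those wide stores in bounds. This reuses the question's variable names and is a sketch under that assumption, not a confirmed fix.

#include <libavutil/imgutils.h>

/* Instead of the tight malloc, allocate an aligned, padded image so
 * swscale's SIMD writes cannot overrun the buffer. */
uint8_t *dst[4] = {NULL, NULL, NULL, NULL};
int dst_linesize[4] = {0};
ret = av_image_alloc(dst, dst_linesize,
                     (*frame)->width, (*frame)->height,
                     AV_PIX_FMT_RGB24, 32); /* 32-byte alignment */
if (ret < 0) {
    eprintf("failed to allocate RGB image\n");
    /* ... same cleanup as the other error paths ... */
    return NULL;
}

sws_scale(sws_ctx,
          (const uint8_t *const *)(*frame)->data, (*frame)->linesize,
          0, (*frame)->height,
          dst, dst_linesize);

rgb_buffer = dst[0]; /* must later be freed with av_freep(&rgb_buffer), not free() */

One caveat: with alignment, the row stride is dst_linesize[0] rather than 3 * width, so whatever consumes rgb_buffer has to honour that stride.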