
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (46)
-
(De)Activating features (plugins)
18 February 2011, by
To manage adding and removing extra features (plugins), MediaSPIP uses SVP as of version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To access it, simply go to the configuration area and open the "Plugin management" page.
MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...) -
Activating visitor registration
12 April 2011, by
It is also possible to enable visitor registration, which lets anyone open their own account on the channel in question, for example for open projects.
To do so, go to the site's configuration area and choose the "User management" submenu. The first form shown corresponds to this feature.
By default, MediaSPIP created during its initialization a menu item in the top menu of the page leading (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player in use was created specifically for MediaSPIP: its look is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
On other sites (6735)
-
ffmpeg xfade audio out of sync
1 December 2020, by Richard
Hi, I am using ffmpeg xfade to concatenate two clips. The transition looks fine, but the second clip has no audio in the output. I couldn't find more information about this at https://trac.ffmpeg.org/wiki/Xfade or in the official docs. How can I keep the audio of both clips in sync?


I'm putting the complete log below; it's massive.


ffmpeg -i teacher1.mp4 -i teacher2.mp4 -filter_complex xfade=transition=fadewhite:duration=1:offset=14.5 output.mp4
ffmpeg version 4.3.1-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2020 the FFmpeg developers
 built with gcc 8 (Debian 8.3.0-6)
 configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
 libavutil 56. 51.100 / 56. 51.100
 libavcodec 58. 91.100 / 58. 91.100
 libavformat 58. 45.100 / 58. 45.100
 libavdevice 58. 10.100 / 58. 10.100
 libavfilter 7. 85.100 / 7. 85.100
 libswscale 5. 7.100 / 5. 7.100
 libswresample 3. 7.100 / 3. 7.100
 libpostproc 55. 7.100 / 55. 7.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'teacher1.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Duration: 00:00:15.02, start: 0.000000, bitrate: 2447 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 2315 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
 Metadata:
 handler_name : VideoHandler
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
 Metadata:
 handler_name : SoundHandler
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'teacher2.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Duration: 00:00:15.02, start: 0.000000, bitrate: 1643 kb/s
 Stream #1:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 1509 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
 Metadata:
 handler_name : VideoHandler
 Stream #1:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
 Metadata:
 handler_name : SoundHandler
Stream mapping:
 Stream #0:0 (h264) -> xfade:main (graph 0)
 Stream #1:0 (h264) -> xfade:xfade (graph 0)
 xfade (graph 0) -> Stream #0:0 (libx264)
 Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0x64603c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x64603c0] profile High 4:4:4 Predictive, level 4.0, 4:4:4, 8-bit
[libx264 @ 0x64603c0] 264 - core 161 r3018 db0d417 - H.264/MPEG-4 AVC codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.45.100
 Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv444p, 1920x1080, q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
 Metadata:
 encoder : Lavc58.91.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
 Metadata:
 handler_name : SoundHandler
 encoder : Lavc58.91.100 aac
frame= 738 fps= 25 q=-1.0 Lsize= 7317kB time=00:00:29.40 bitrate=2038.9kbits/s speed=1.01x 
video:7064kB audio:236kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.241368%
[libx264 @ 0x64603c0] frame I:8 Avg QP:16.56 size:100307
[libx264 @ 0x64603c0] frame P:332 Avg QP:19.50 size: 15211
[libx264 @ 0x64603c0] frame B:398 Avg QP:24.35 size: 3467
[libx264 @ 0x64603c0] consecutive B-frames: 6.2% 62.6% 8.9% 22.2%
[libx264 @ 0x64603c0] mb I I16..4: 18.7% 60.5% 20.9%
[libx264 @ 0x64603c0] mb P I16..4: 4.5% 7.6% 0.6% P16..4: 24.1% 5.6% 1.9% 0.0% 0.0% skip:55.7%
[libx264 @ 0x64603c0] mb B I16..4: 0.4% 0.5% 0.1% B16..8: 14.3% 0.9% 0.1% direct: 0.4% skip:83.3% L0:42.9% L1:53.9% BI: 3.3%
[libx264 @ 0x64603c0] 8x8 transform intra:59.2% inter:84.3%
[libx264 @ 0x64603c0] coded y,u,v intra: 25.3% 11.9% 13.0% inter: 3.2% 1.7% 1.8%
[libx264 @ 0x64603c0] i16 v,h,dc,p: 49% 25% 15% 12%
[libx264 @ 0x64603c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 17% 37% 2% 2% 3% 2% 3% 2%
[libx264 @ 0x64603c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 19% 15% 6% 7% 7% 6% 6% 5%
[libx264 @ 0x64603c0] Weighted P-Frames: Y:3.3% UV:1.5%
[libx264 @ 0x64603c0] ref P L0: 66.0% 8.0% 18.5% 7.5% 0.0%
[libx264 @ 0x64603c0] ref B L0: 84.9% 13.1% 2.0%
[libx264 @ 0x64603c0] ref B L1: 98.8% 1.2%
[libx264 @ 0x64603c0] kb/s:1960.04
[aac @ 0x64d1280] Qavg: 171.394
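As the stream mapping in the log shows, only `Stream #0:1` (the first input's audio) was mapped; `xfade` crossfades video streams only. A common approach (a sketch, not part of the original post) is to crossfade the audio separately with the `acrossfade` filter and map both filter outputs explicitly:

```shell
# Crossfade the video with xfade and the audio with acrossfade,
# then map both labelled outputs (duration/offset match the original command).
ffmpeg -i teacher1.mp4 -i teacher2.mp4 -filter_complex \
  "[0:v][1:v]xfade=transition=fadewhite:duration=1:offset=14.5[v]; \
   [0:a][1:a]acrossfade=d=1[a]" \
  -map "[v]" -map "[a]" output.mp4
```

The command is a fragment that needs the two input files from the post; adjust `d=` to taste so the audio fade matches the visual transition length.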



-
How to use the ffmpeg API to overlay a watermark with a filter?
6 September 2022, by Leon Lee
OS: Ubuntu 20.04


FFmpeg : 4.4.0


Test video :


Input #0, hevc, from './videos/akiyo_352x288p25.265':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: hevc (Main), yuv420p(tv), 352x288, 25 fps, 25 tbr, 1200k tbn, 25 tbc


Test watermark :


200*200.png


I copied the official FFmpeg filtering example.


It compiles and runs without errors, but I can't see the watermark in the output.


Here is my code:


#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
int open_input_file(AVFormatContext *fmt, AVCodecContext **codecctx, AVCodec *codec, const char *filename, int index)
{
 int ret = 0;
 char msg[500];
 *codecctx = avcodec_alloc_context3(codec);
 ret = avcodec_parameters_to_context(*codecctx, fmt->streams[index]->codecpar);
 if (ret < 0)
 {
 printf("avcodec_parameters_to_context error,ret:%d\n", ret);
 
 return -1;
 }

 // open the decoder
 ret = avcodec_open2(*codecctx, codec, NULL);
 if (ret < 0)
 {
 printf("avcodec_open2 error,ret:%d\n", ret);
 
 return -2;
 }
 printf("pix:%d\n", (*codecctx)->pix_fmt);
 return ret;
}

int init_filter(AVFilterContext **buffersrc_ctx, AVFilterContext **buffersink_ctx, AVFilterGraph **filter_graph, AVStream *stream, AVCodecContext *codecctx, const char *filter_desc)
{
 int ret = -1;
 char args[512];
 char msg[500];
 const AVFilter *buffersrc = avfilter_get_by_name("buffer");
 const AVFilter *buffersink = avfilter_get_by_name("buffersink");

 AVFilterInOut *input = avfilter_inout_alloc();
 AVFilterInOut *output = avfilter_inout_alloc();

 AVRational time_base = stream->time_base;
 enum AVPixelFormat pix_fmts[] = {AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE};

 if (!output || !input || !filter_graph)
 {
 ret = -1;
 printf("avfilter_graph_alloc/avfilter_inout_alloc error,ret:%d\n", ret);
 
 goto end;
 }
 snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d", codecctx->width, codecctx->height, codecctx->pix_fmt, stream->time_base.num, stream->time_base.den, codecctx->sample_aspect_ratio.num, codecctx->sample_aspect_ratio.den);
 ret = avfilter_graph_create_filter(buffersrc_ctx, buffersrc, "in", args, NULL, *filter_graph);
 if (ret < 0)
 {
 printf("avfilter_graph_create_filter buffersrc error,ret:%d\n", ret);
 
 goto end;
 }

 ret = avfilter_graph_create_filter(buffersink_ctx, buffersink, "out", NULL, NULL, *filter_graph);
 if (ret < 0)
 {
 printf("avfilter_graph_create_filter buffersink error,ret:%d\n", ret);
 
 goto end;
 }
 ret = av_opt_set_int_list(*buffersink_ctx, "pix_fmts", pix_fmts, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
 if (ret < 0)
 {
 printf("av_opt_set_int_list error,ret:%d\n", ret);
 
 goto end;
 }
 /*
 * The buffer source output must be connected to the input pad of
 * the first filter described by filters_descr; since the first
 * filter input label is not specified, it is set to "in" by
 * default.
 */
 output->name = av_strdup("in");
 output->filter_ctx = *buffersrc_ctx;
 output->pad_idx = 0;
 output->next = NULL;

 /*
 * The buffer sink input must be connected to the output pad of
 * the last filter described by filters_descr; since the last
 * filter output label is not specified, it is set to "out" by
 * default.
 */
 input->name = av_strdup("out");
 input->filter_ctx = *buffersink_ctx;
 input->pad_idx = 0;
 input->next = NULL;

 if ((ret = avfilter_graph_parse_ptr(*filter_graph, filter_desc, &input, &output, NULL)) < 0)
 {
 printf("avfilter_graph_parse_ptr error,ret:%d\n", ret);
 
 goto end;
 }

 if ((ret = avfilter_graph_config(*filter_graph, NULL)) < 0)
 {
 printf("avfilter_graph_config error,ret:%d\n", ret);
 
 goto end;
 }
 end:
 avfilter_inout_free(&input);
 avfilter_inout_free(&output);
 return ret;
}

int main(int argc, char **argv)
{
 int ret;
 char msg[500];
 const char *filter_descr = "drawbox=x=100:y=100:w=100:h=100:color=pink@0.5"; // OK
 //const char *filter_descr = "movie=200.png[wm];[in][wm]overlay=10:10[out]"; //Test
 // const char *filter_descr = "scale=640:360,transpose=cclock";
 AVFormatContext *pFormatCtx = NULL;
 AVCodecContext *pCodecCtx;
 AVFilterContext *buffersink_ctx;
 AVFilterContext *buffersrc_ctx;
 AVFilterGraph *filter_graph;
 AVCodec *codec;
 int video_stream_index = -1;

 AVPacket packet;
 AVFrame *pFrame;
 AVFrame *pFrame_out;
 filter_graph = avfilter_graph_alloc();
 FILE *fp_yuv = fopen("test.yuv", "wb+");
 ret = avformat_open_input(&pFormatCtx, argv[1], NULL, NULL);
 if (ret < 0)
 {
 printf("avformat_open_input error,ret:%d\n", ret);
 
 return -1;
 }

 ret = avformat_find_stream_info(pFormatCtx, NULL);
 if (ret < 0)
 {
 printf("avformat_find_stream_info error,ret:%d\n", ret);
 
 return -2;
 }

 ret = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, &codec, 0);
 if (ret < 0)
 {
 printf("av_find_best_stream error,ret:%d\n", ret);
 
 return -3;
 }
 // store the index of the video stream
 video_stream_index = ret;

 av_dump_format(pFormatCtx, 0, argv[1], 0);
 if ((ret = open_input_file(pFormatCtx, &pCodecCtx, codec, argv[1], video_stream_index)) < 0)
 {
 ret = -1;
 printf("open_input_file error,ret:%d\n", ret);
 
 goto end;
 }

 if ((ret = init_filter(&buffersrc_ctx, &buffersink_ctx, &filter_graph, pFormatCtx->streams[video_stream_index], pCodecCtx, filter_descr)) < 0)
 {
 ret = -2;
 printf("init_filter error,ret:%d\n", ret);
 
 goto end;
 }
 pFrame = av_frame_alloc();
 pFrame_out = av_frame_alloc();
 while (1)
 {
 if ((ret = av_read_frame(pFormatCtx, &packet)) < 0)
 break;

 if (packet.stream_index == video_stream_index)
 {
 ret = avcodec_send_packet(pCodecCtx, &packet);
 if (ret < 0)
 {
 printf("avcodec_send_packet error,ret:%d\n", ret);
 
 break;
 }

 while (ret >= 0)
 {
 ret = avcodec_receive_frame(pCodecCtx, pFrame);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 {
 break;
 }
 else if (ret < 0)
 {
 printf("avcodec_receive_frame error,ret:%d\n", ret);
 
 goto end;
 }

 pFrame->pts = pFrame->best_effort_timestamp;

 /* push the decoded frame into the filtergraph */
 ret = av_buffersrc_add_frame_flags(buffersrc_ctx, pFrame, AV_BUFFERSRC_FLAG_KEEP_REF);
 if (ret < 0)
 {
 printf("av_buffersrc_add_frame_flags error,ret:%d\n", ret);
 
 break;
 }

 /* pull filtered frames from the filtergraph */
 while (1)
 {
 ret = av_buffersink_get_frame(buffersink_ctx, pFrame_out);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 break;
 if (ret < 0)
 goto end;
 if (pFrame_out->format == AV_PIX_FMT_YUV420P)
 {
 //Y, U, V
 for (int i = 0; i < pFrame_out->height; i++)
 {
 fwrite(pFrame_out->data[0] + pFrame_out->linesize[0] * i, 1, pFrame_out->width, fp_yuv);
 }
 for (int i = 0; i < pFrame_out->height / 2; i++)
 {
 fwrite(pFrame_out->data[1] + pFrame_out->linesize[1] * i, 1, pFrame_out->width / 2, fp_yuv);
 }
 for (int i = 0; i < pFrame_out->height / 2; i++)
 {
 fwrite(pFrame_out->data[2] + pFrame_out->linesize[2] * i, 1, pFrame_out->width / 2, fp_yuv);
 }
 }
 av_frame_unref(pFrame_out);
 }
 av_frame_unref(pFrame);
 }
 }
 av_packet_unref(&packet);
 }
 end:
 avcodec_free_context(&pCodecCtx);
 fclose(fp_yuv);
}
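Before digging into the C code, it can help to confirm that the overlay filtergraph itself works, using the ffmpeg CLI with the same inputs (a sanity check suggested here, not part of the original post; paths assume the files mentioned above):

```shell
# If this produces a watermarked video, the overlay graph is fine and the
# problem is in the C program (e.g. the working directory the movie= source
# resolves 200.png against, or which filter_descr string is actually active).
ffmpeg -i ./videos/akiyo_352x288p25.265 -i 200.png \
  -filter_complex "[0:v][1:v]overlay=10:10" overlay_test.mp4
```

Note that in the code above only the `drawbox` filter description is active; the `movie=200.png[wm];[in][wm]overlay=10:10[out]` line is commented out, so no watermark image is ever drawn unless that line is enabled.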



-
ffmpeg.wasm - How to do literally anything with a blob url
24 November 2024, by SeriousLee
I'm using ffmpeg.wasm for the first time and I can't get anything working beyond loading it. I have this function that does nothing in particular (I got it from the vite + react example in the docs, slightly modified), and all I want to do is pass it a blob URL like this:
blob:http://localhost:5173/c7a9ea7c-aa26-4f4f-9c80-11b8aef3e81f
and have it run through the function and give me anything back. But instead, it hangs on the ffmpeg.exec
command and never completes. And yes, I've determined that the input blob does work - it's an 8 MB, 12-second mp4 clip.

Here's the function:


const processOutputVideo = async (videoURL) => {
 const ffmpeg = ffmpegRef.current;

 await ffmpeg.writeFile("input.mp4", await fetchFile(videoURL));
 await ffmpeg.exec(["-i", "input.mp4", "output.mp4"]);

 const fileData = await ffmpeg.readFile("output.mp4");
 const blob = new Blob([fileData.buffer], { type: "video/mp4" });
 const blobUrl = URL.createObjectURL(blob);

 return blobUrl;
 };



And here are the ffmpeg logs from my console.


[FFMPEG stderr] ffmpeg version 5.1.4 Copyright (c) 2000-2023 the FFmpeg developers
Post.jsx:35 [FFMPEG stderr] built with emcc (Emscripten gcc/clang-like replacement + linker emulating GNU ld) 3.1.40 (5c27e79dd0a9c4e27ef2326841698cdd4f6b5784)
Post.jsx:35 [FFMPEG stderr] configuration: --target-os=none --arch=x86_32 --enable-cross-compile --disable-asm --disable-stripping --disable-programs --disable-doc --disable-debug --disable-runtime-cpudetect --disable-autodetect --nm=emnm --ar=emar --ranlib=emranlib --cc=emcc --cxx=em++ --objcc=emcc --dep-cc=emcc --extra-cflags='-I/opt/include -O3 -msimd128 -sUSE_PTHREADS -pthread' --extra-cxxflags='-I/opt/include -O3 -msimd128 -sUSE_PTHREADS -pthread' --enable-gpl --enable-libx264 --enable-libx265 --enable-libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libopus --enable-zlib --enable-libwebp --enable-libfreetype --enable-libfribidi --enable-libass --enable-libzimg
Post.jsx:35 [FFMPEG stderr] libavutil 57. 28.100 / 57. 28.100
Post.jsx:35 [FFMPEG stderr] libavcodec 59. 37.100 / 59. 37.100
Post.jsx:35 [FFMPEG stderr] libavformat 59. 27.100 / 59. 27.100
Post.jsx:35 [FFMPEG stderr] libavdevice 59. 7.100 / 59. 7.100
Post.jsx:35 [FFMPEG stderr] libavfilter 8. 44.100 / 8. 44.100
Post.jsx:35 [FFMPEG stderr] libswscale 6. 7.100 / 6. 7.100
Post.jsx:35 [FFMPEG stderr] libswresample 4. 7.100 / 4. 7.100
Post.jsx:35 [FFMPEG stderr] libpostproc 56. 6.100 / 56. 6.100
Post.jsx:35 [FFMPEG stderr] Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4':
Post.jsx:35 [FFMPEG stderr] Metadata:
Post.jsx:35 [FFMPEG stderr] major_brand : mp42
Post.jsx:35 [FFMPEG stderr] minor_version : 0
Post.jsx:35 [FFMPEG stderr] compatible_brands: mp42mp41isomavc1
Post.jsx:35 [FFMPEG stderr] creation_time : 2019-03-15T17:39:05.000000Z
Post.jsx:35 [FFMPEG stderr] Duration: 00:00:12.82, start: 0.000000, bitrate: 5124 kb/s
Post.jsx:35 [FFMPEG stderr] Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1920x1080, 4985 kb/s, 29.97 fps, 29.97 tbr, 30k tbn (default)
Post.jsx:35 [FFMPEG stderr] Metadata:
Post.jsx:35 [FFMPEG stderr] creation_time : 2019-03-15T17:39:05.000000Z
Post.jsx:35 [FFMPEG stderr] handler_name : L-SMASH Video Handler
Post.jsx:35 [FFMPEG stderr] vendor_id : [0][0][0][0]
Post.jsx:35 [FFMPEG stderr] encoder : AVC Coding
Post.jsx:35 [FFMPEG stderr] Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 137 kb/s (default)
Post.jsx:35 [FFMPEG stderr] Metadata:
Post.jsx:35 [FFMPEG stderr] creation_time : 2019-03-15T17:39:05.000000Z
Post.jsx:35 [FFMPEG stderr] handler_name : L-SMASH Audio Handler
Post.jsx:35 [FFMPEG stderr] vendor_id : [0][0][0][0]
Post.jsx:35 [FFMPEG stderr] Stream mapping:
Post.jsx:35 [FFMPEG stderr] Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Post.jsx:35 [FFMPEG stderr] Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0x154e4f0] using cpu capabilities: none!
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0x154e4f0] profile High, level 4.0, 4:2:0, 8-bit
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0x154e4f0] 264 - core 164 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00



And it just hangs there. When I use the example video URL from the official example
(https://raw.githubusercontent.com/ffmpegwasm/testdata/master/video-15s.avi),
it doesn't hang; the function completes and returns me a blob URL in the same format as the first blob URL I showed you. This is what the ffmpeg output looks like in my console in that case:

Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] frame P:160 Avg QP:23.62 size: 1512
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] frame B:385 Avg QP:26.75 size: 589
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] consecutive B-frames: 5.5% 3.6% 0.0% 90.9%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] mb I I16..4: 12.6% 87.4% 0.0%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] mb P I16..4: 3.8% 47.5% 1.6% P16..4: 12.9% 7.4% 5.0% 0.0% 0.0% skip:21.7%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] mb B I16..4: 1.2% 10.3% 0.4% B16..8: 22.3% 6.9% 1.4% direct: 2.7% skip:54.8% L0:46.9% L1:40.2% BI:12.9%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] 8x8 transform intra:88.7% inter:74.7%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] coded y,uvDC,uvAC intra: 68.3% 0.0% 0.0% inter: 11.8% 0.0% 0.0%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] i16 v,h,dc,p: 33% 40% 24% 3%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 15% 26% 52% 2% 1% 1% 1% 1% 3%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 27% 21% 20% 5% 5% 5% 4% 6% 5%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] i8c dc,h,v,p: 100% 0% 0% 0%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] Weighted P-Frames: Y:12.5% UV:0.0%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] ref P L0: 48.9% 12.5% 22.3% 14.7% 1.6%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] ref B L0: 77.5% 15.7% 6.8%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] ref B L1: 90.9% 9.1%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] kb/s:242.65
Post.jsx:35 [FFMPEG stderr] Aborted()



Where am I going wrong, and what should I convert my input blob into? And just FYI, ChatGPT has been completely garbage at helping me solve this lmao.
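One thing worth trying (a sketch under an assumption, not a confirmed fix): the log shows libx264 starting a full re-encode of 1080p video, which in single-threaded WebAssembly can take many minutes and look exactly like a hang. If you only need the file remuxed rather than re-encoded, a stream copy avoids the encode entirely:

```javascript
// Same function as in the post, but with "-c copy" so ffmpeg remuxes the
// streams instead of re-encoding 1080p H.264 with libx264 in the browser.
const processOutputVideo = async (videoURL) => {
  const ffmpeg = ffmpegRef.current;

  await ffmpeg.writeFile("input.mp4", await fetchFile(videoURL));
  await ffmpeg.exec(["-i", "input.mp4", "-c", "copy", "output.mp4"]);

  const fileData = await ffmpeg.readFile("output.mp4");
  const blob = new Blob([fileData.buffer], { type: "video/mp4" });
  return URL.createObjectURL(blob);
};
```

If you do need a real re-encode, attaching a progress listener (`ffmpeg.on("progress", ...)` in the ffmpeg.wasm API) should tell you whether the exec is genuinely stuck or just slow.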