
Media (39)
-
Stereo master soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
ED-ME-5 1-DVD
11 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
1,000,000
27 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Demon Seed
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
The Four of Us are Dying
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Corona Radiata
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (40)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that lead to the problem; and a link to the site/page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
-
Accepted formats
28 January 2010
The following commands provide information about the formats and codecs supported by the local ffmpeg installation (a programmatic sketch of the same listing appears just after this list of articles):
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
As a first step, (...)
-
Initialization of MediaSPIP (preconfiguration)
20 February 2010
When MediaSPIP is installed, it is preconfigured for the most common uses.
This preconfiguration is carried out by a plugin that is enabled by default and cannot be disabled, called MediaSPIP Init.
This plugin correctly preconfigures each MediaSPIP instance. It must therefore be placed in the plugins-dist/ folder of the site or of the farm so that it is installed by default before the site can be used.
First of all, it enables or disables SPIP options which do not (...)
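As mentioned in the "Accepted formats" excerpt above, ffmpeg -codecs and ffmpeg -formats list what the local installation supports. Here is a small sketch of the same listing done programmatically with a recent FFmpeg (4.0 or newer); it is an illustration written for this page, not code taken from the article:

#include <cstdio>

extern "C" {
#include <libavcodec/avcodec.h>
}

// Walk every codec compiled into the linked libavcodec and mark
// decoder/encoder support, roughly what "ffmpeg -codecs" prints.
int main()
{
    void *opaque = nullptr;
    const AVCodec *codec = nullptr;
    while ((codec = av_codec_iterate(&opaque))) {
        std::printf("%c%c %-24s %s\n",
                    av_codec_is_decoder(codec) ? 'D' : '.',
                    av_codec_is_encoder(codec) ? 'E' : '.',
                    codec->name,
                    codec->long_name ? codec->long_name : "");
    }
    return 0;
}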
On other sites (9254)
-
I want to take any audio from a file and encode it as PCM_ALAW. My example converts a .m4a file to a .wav file
22 November 2023, by Clockman
I have been working on this for a while now. While I am generally new to the ffmpeg library, I have studied it a bit. The challenge I have is that, at the point of writing to the file, I get the following exception.


"Exception thrown at 0x00007FFACA8305B3 (avformat-60.dll) in FfmpegPractice.exe : 0xC0000005 : Access violation writing location 0x0000000000000000.". I understand this means am writing to an uninitialized buffer am unable to discover why this is happening. The exception call stack shows the following


avformat-60.dll!avformat_write_header() C
avformat-60.dll!ff_write_chained() C
avformat-60.dll!ff_write_chained() C
avformat-60.dll!av_write_frame() C
FfmpegPractice.exe!main() Line 215 C++



Some things I have tried


This code is part of a larger project built with CMake, but for some reason I could not step into the ffmpeg library while debugging. So I recompiled ffmpeg and ensured debugging was enabled so I could drill down to the root cause, but I still could not step into the ffmpeg library.


I then created a minimal Visual Studio C++ console project and I still could not step into the code.


I have read through many ffmpeg docs and whatever else I could find on the internet, and I still could not solve it.


This is the code


#include <iostream>

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswresample/swresample.h>
#include <libavutil/opt.h>
#include <libavutil/audio_fifo.h>
}

using namespace std;

//in audio file
string filename{ "rapid_caller_test.m4a" };
AVFormatContext* pFormatCtx{};
AVCodecContext* pCodecCtx{};
AVStream* pStream{};

//out audio file
string outFilename{ "output.wav" };
AVFormatContext* pOutFormatCtx{ nullptr };
AVCodecContext* pOutCodecCtx{ nullptr };
AVIOContext* pOutIoContext{ nullptr };
const AVCodec* pOutCodec{ nullptr };
AVStream* pOutStream{ nullptr };
const int OUTPUT_CHANNELS = 1;
const int SAMPLE_RATE = 8000;
const int OUT_BIT_RATE = 64000;
uint8_t** convertedSamplesBuffer{ nullptr };
int64_t dstNmbrSamples{ 0 };
int dstLineSize{ 0 };
static int64_t pts{ 0 };

//conversion context;
SwrContext* swr{};

uint32_t i{ 0 };
int audiostream{ -1 };


void cleanUp() 
{
 avcodec_free_context(&pOutCodecCtx);;
 avio_closep(&(pOutFormatCtx)->pb);
 avformat_free_context(pOutFormatCtx);
 pOutFormatCtx = nullptr;
}

int main()
{

/*
* section to setup input file
*/
if (avformat_open_input(&pFormatCtx, filename.data(), nullptr, nullptr) != 0) {
 cout << "could not open file " << filename << endl;
 return -1;
}
if (avformat_find_stream_info(pFormatCtx, nullptr) < 0) {
 cout << "Could not retrieve stream information from file " << filename << endl;
 return -1;
}
av_dump_format(pFormatCtx, 0, filename.c_str(), 0);

for (i = 0; i < pFormatCtx->nb_streams; i++) {
 if (pFormatCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
 audiostream = i;
 break;
 }
}
if (audiostream == -1) {
 cout << "did not find audio stream" << endl;
 return -1;
}

pStream = pFormatCtx->streams[audiostream];
const AVCodec* pCodec{ avcodec_find_decoder(pStream->codecpar->codec_id) };
pCodecCtx = avcodec_alloc_context3(pCodec);
avcodec_parameters_to_context(pCodecCtx, pStream->codecpar);
if (avcodec_open2(pCodecCtx, pCodec, nullptr)) {
 cout << "could not open codec" << endl;
 return -1;
}

/*
* section to set up output file which is a G711 audio
*/
if (avio_open(&pOutIoContext, outFilename.data(), AVIO_FLAG_WRITE)) {
 cout << "could not open out put file" << endl;
 return -1;
}
if (!(pOutFormatCtx = avformat_alloc_context())) {
 cout << "could not create format conext" << endl;
 cleanUp();
 return -1;
}
pOutFormatCtx->pb = pOutIoContext;
if (!(pOutFormatCtx->oformat = av_guess_format(nullptr, outFilename.data(), nullptr))) {
 cout << "could not find output file format" << endl;
 cleanUp();
 return -1;
}
if (!(pOutFormatCtx->url = av_strdup(outFilename.data()))) {
 cout << "could not allocate file name" << endl;
 cleanUp();
 return -1;
}
if (!(pOutCodec = avcodec_find_encoder(AV_CODEC_ID_PCM_ALAW))) {
 cout << "codec not found" << endl;
 cleanUp();
 return -1;
}
if (!(pOutStream = avformat_new_stream(pOutFormatCtx, nullptr))) {
 cout << "could not create new stream" << endl;
 cleanUp();
 return -1;
}
if (!(pOutCodecCtx = avcodec_alloc_context3(pOutCodec))) {
 cout << "could not allocate codec context" << endl;
 return -1;
}
av_channel_layout_default(&pOutCodecCtx->ch_layout, OUTPUT_CHANNELS);
pOutCodecCtx->sample_rate = SAMPLE_RATE;
pOutCodecCtx->sample_fmt = pOutCodec->sample_fmts[0];
pOutCodecCtx->bit_rate = OUT_BIT_RATE;

//setting sample rate for the container
pOutStream->time_base.den = SAMPLE_RATE;
pOutStream->time_base.num = 1;
if (pOutFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
 pOutCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

if (avcodec_open2(pOutCodecCtx, pOutCodec, nullptr)) {
 cout << "could not open output codec" << endl;
 cleanUp();
 return -1;
}
if ((avcodec_parameters_from_context(pOutStream->codecpar, pOutCodecCtx)) < 0) {
 cout << "could not initialize stream parameters" << endl;
} 

AVPacket* packet = av_packet_alloc();

swr = swr_alloc();
swr_alloc_set_opts2(&swr, &pOutCodecCtx->ch_layout, pOutCodecCtx->sample_fmt, pOutCodecCtx->sample_rate,&pCodecCtx->ch_layout, pCodecCtx->sample_fmt, pCodecCtx->sample_rate, 0, nullptr);
swr_init(swr);

int ret{};
int bSize{};
while (av_read_frame(pFormatCtx, packet) >= 0) {
 AVFrame* pFrame = av_frame_alloc();
 AVFrame* pOutFrame = av_frame_alloc();
 if (packet->stream_index == audiostream) {
 ret = avcodec_send_packet(pCodecCtx, packet);
 while (ret >= 0) {
 ret = avcodec_receive_frame(pCodecCtx, pFrame);
 if (ret == AVERROR(EAGAIN))
 continue;
 else if (ret == AVERROR_EOF)
 break;
 dstNmbrSamples = av_rescale_rnd(swr_get_delay(swr, pCodecCtx->sample_rate) + pFrame->nb_samples, pOutCodecCtx->sample_rate, pCodecCtx->sample_rate, AV_ROUND_UP);
 if ((av_samples_alloc_array_and_samples(&convertedSamplesBuffer, &dstLineSize, pOutCodecCtx->ch_layout.nb_channels,dstNmbrSamples, pOutCodecCtx->sample_fmt, 0)) < 0) {
 cout << "coult not allocate samples array and buffer" << endl;
 }
 int channel_samples_count{ 0 };
 channel_samples_count = swr_convert(swr, convertedSamplesBuffer, dstNmbrSamples, (const uint8_t**)pFrame->data, pFrame->nb_samples);
 bSize = av_samples_get_buffer_size(&dstLineSize, pOutCodecCtx->ch_layout.nb_channels, channel_samples_count, pOutCodecCtx->sample_fmt, 0);
 cout << "no of samples is " << channel_samples_count << " the buffer size " << bSize << endl;
 pOutFrame->nb_samples = channel_samples_count;
 av_channel_layout_copy(&pOutFrame->ch_layout, &pOutCodecCtx->ch_layout);
 pOutFrame->format = pOutCodecCtx->sample_fmt;
 pOutFrame->sample_rate = pOutCodecCtx->sample_rate;
 if ((av_frame_get_buffer(pOutFrame, 0)) < 0) {
 cout << "could not allocate output frame samples " << endl;
 av_frame_free(&pOutFrame);
 }
 
 //populate out frame buffer
 av_frame_make_writable(pOutFrame);
 for (int i{ 0 }; i < bSize; i++) {
 pOutFrame->data[0][i] = convertedSamplesBuffer[0][i];
 cout << pOutFrame->data[0][i];
 }
 if (pOutFrame) {
 pOutFrame->pts = pts;
 pts += pOutFrame->nb_samples;
 }
 int res = avcodec_send_frame(pOutCodecCtx, pOutFrame);
 if (res < 0) {
 cout << "error sending frame to encoder" << endl;
 cleanUp();
 return -1;
 }
 //int er = avformat_write_header(pOutFormatCtx,nullptr);
 AVPacket* pOutPacket = av_packet_alloc();
 pOutPacket->time_base.num = 1;
 pOutPacket->time_base.den = 8000;
 if (pOutPacket == nullptr) {
 cout << "unable to allocate packet" << endl;
 }
 while (res >= 0) {
 res = avcodec_receive_packet(pOutCodecCtx, pOutPacket);
 if (res == AVERROR(EAGAIN))
 continue;
 else if (ret == AVERROR_EOF)
 break;
 av_packet_rescale_ts(pOutPacket, pOutCodecCtx->time_base, pOutFormatCtx->streams[0]->time_base);
 //av_dump_format(pOutFormatCtx, 0, outFilename.c_str(), 1);
 if (av_write_frame(pOutFormatCtx, pOutPacket) < 0) {
 cout << "could not write frame" << endl;
 }
 }
 }
}
 av_frame_free(&pFrame);
 av_frame_free(&pOutFrame);
}
if (av_write_trailer(pOutFormatCtx) < 0) {
 cout << "could not write file trailer" << endl;
}
swr_free(&swr);
avcodec_free_context(&pOutCodecCtx);
av_packet_free(&packet);
}


Error/Exception


The exception is thrown when I call


if (av_write_frame(pOutFormatCtx, pOutPacket) < 0) { cout << "could not write frame" << endl; }

I also called this line

//int er = avformat_write_header(pOutFormatCtx,nullptr);


to see if I would get an exception, but it did not throw any exception.


I have spent weeks on this issue with no success.
My goal is to take any audio from a file and be able to resample it if need be, and transcode it to PCM_ALAW.
I will appreciate any help I can get.
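For reference, and without claiming this is the fix for the crash above, here is a minimal sketch of the write order libavformat expects from a muxer: avformat_write_header() has to succeed before any packet is written, and av_write_trailer() finalizes the file. outFmtCtx and pkt are placeholders for an already prepared output context and an already encoded packet; error handling is trimmed.

extern "C" {
#include <libavformat/avformat.h>
}

// Hedged sketch of the standard muxing sequence.
static int write_one_packet(AVFormatContext *outFmtCtx, AVPacket *pkt)
{
    // Sets up the muxer's private state; writing packets before this call
    // leaves that state uninitialized.
    int ret = avformat_write_header(outFmtCtx, nullptr);
    if (ret < 0)
        return ret;

    // Timestamps are expected in the output stream's time_base here;
    // av_interleaved_write_frame() also handles interleaving across streams.
    ret = av_interleaved_write_frame(outFmtCtx, pkt);
    if (ret < 0)
        return ret;

    // For WAV this also patches the size fields in the header.
    return av_write_trailer(outFmtCtx);
}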


-
Bad src image ptrs converting YUV to RGB after H264 decoding with libav and c++
31 October 2023, by Sebastian DELLING
I am getting "bad src image ptrs" errors when trying to convert my frames to RGB with sws_scale after decoding frames from an H264 file, and I cannot figure out what is going wrong.


I checked what is causing the error and found the check_image_pointers function in swscale.c, which validates that the planes and line sizes needed for the pixel format (av_pix_fmt_desc_get) are present in the given data; this seems not to be the case with my data.

The written pgm files look OK to me, and replaying the file also works.


I printed the corresponding data of my frame. The problem seems to be that planes 1 and 2 have line sizes of 0. All three of them seem to have data. Plane 0's line size is three times the image width, which is also confusing to me.


Here is my output :


Have videoStreamIndex 0 codec id: 27
saving frame 1 C:\\tmp\\output-frame-1.pgm colorspace 2 pix_fmt 0 w: 3840 h: 2160
Required:
plane 0 : 0
plane 1 : 1
plane 2 : 2
plane 3 : 0
Present:
Frame plane 0: 1 , 11520
Frame plane 1: 1 , 0
Frame plane 2: 1 , 0
Frame plane 3: 0 , 0
Frame plane 4: 0 , 0
Frame plane 5: 0 , 0
Frame plane 6: 0 , 0
Frame plane 7: 0 , 0



Here is the whole code of my application; the issue occurs in the method decode:


#include <iostream>
#include <cstring>
#include <cstdio>
#include <cstdint>
#include <string>
#include <iostream>
#include <chrono>

// #include <opencv2/highgui.hpp>
// #include <opencv2/opencv.hpp>

extern "C"
{

#include <libswscale/swscale.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
#include <libavutil/pixdesc.h>
#include <libavutil/display.h>
#include "libavutil/imgutils.h"
}

#define INBUF_SIZE 4096
class H264Decoder
{
public:
 H264Decoder(const std::string &inputFilename, const std::string &outputFilenamePrefix)
 {

 // Open input file
 if (avformat_open_input(&formatContext, inputFilename.c_str(), nullptr, nullptr) != 0)
 {
 throw std::runtime_error("Could not open input file");
 }

 if (avformat_find_stream_info(formatContext, nullptr) < 0)
 {
 throw std::runtime_error("Could not find stream information");
 }

 // Find H.264 video stream
 for (unsigned i = 0; i < formatContext->nb_streams; i++)
 {
 if (formatContext->streams[i]->codecpar->codec_id == AV_CODEC_ID_H264)
 {
 videoStreamIndex = i;
 std::cout << "Have videoStreamIndex " << videoStreamIndex << " codec id: " << formatContext->streams[i]->codecpar->codec_id << std::endl;
 break;
 }
 }

 if (videoStreamIndex == -1)
 {
 throw std::runtime_error("H.264 video stream not found");
 }

 // Initialize codec and codec context
 const AVCodec *codec = avcodec_find_decoder(formatContext->streams[videoStreamIndex]->codecpar->codec_id);
 if (!codec)
 {
 throw std::runtime_error("Codec not found");
 }

 parser = av_parser_init(codec->id);
 if (!parser)
 {
 throw std::runtime_error("parser not found");
 }

 codecContext = avcodec_alloc_context3(codec);
 if (!codecContext)
 {
 throw std::runtime_error("Could not allocate codec context");
 }

 if (avcodec_open2(codecContext, codec, nullptr) < 0)
 {
 throw std::runtime_error("Could not open codec");
 }

 // Initialize frame
 frame = av_frame_alloc();
 frame->format = AV_PIX_FMT_YUV420P;
 if (!frame)
 {
 throw std::runtime_error("Could not allocate frame");
 }

 inputPacket = av_packet_alloc();
 if (!inputPacket)
 {
 throw std::runtime_error("Could not allocate packet");
 }

 inputFilename_ = inputFilename;
 outputFilenamePrefix_ = outputFilenamePrefix;
 }

 void decode()
 {
 char buf[1024];
 int ret;

 ret = avcodec_send_packet(codecContext, inputPacket);
 if (ret < 0)
 {
 fprintf(stderr, "Error sending a packet for decoding\n");
 exit(1);
 }

 while (ret >= 0)
 {
 ret = avcodec_receive_frame(codecContext, frame);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 return;
 else if (ret < 0)
 {
 fprintf(stderr, "Error during decoding\n");
 exit(1);
 }

 /* the picture is allocated by the decoder. no need to
 free it */
 snprintf(buf, sizeof(buf), "%s-%" PRId64 ".pgm", outputFilenamePrefix_.c_str(), codecContext->frame_num);

 std::cout << "saving frame " << codecContext->frame_num << " " << buf << " colorspace " << frame->colorspace << " pix_fmt " << codecContext->pix_fmt << " w: " << frame->width << " h: " << frame->height << std::endl;

 SwsContext *sws_ctx = NULL;

 sws_ctx = sws_getContext(codecContext->width,
 codecContext->height,
 codecContext->pix_fmt,
 codecContext->width,
 codecContext->height,
 AV_PIX_FMT_RGB24,
 SWS_BICUBIC,
 NULL,
 NULL,
 NULL);

 AVFrame *frame2 = av_frame_alloc();
 int num_bytes = av_image_get_buffer_size(AV_PIX_FMT_RGB24, codecContext->width, codecContext->height, 32);
 uint8_t *frame2_buffer = (uint8_t *)av_malloc(num_bytes * sizeof(uint8_t));
 av_image_fill_arrays(frame2->data, frame->linesize, frame2_buffer, AV_PIX_FMT_RGB24, codecContext->width, codecContext->height, 32);

 const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(codecContext->pix_fmt);
 std::cout << "Required:" << std::endl;
 for (int i = 0; i < 4; i++)
 {
 int plane = desc->comp[i].plane;
 std::cout << "plane " << i << " : " << plane << std::endl;
 }
 std::cout << "Present:" << std::endl;
 for (int i = 0; i < AV_NUM_DATA_POINTERS; ++i)
 {
 std::cout << "Frame plane " << i << ": " << static_cast<bool>(frame->data[i]) << " , " << frame->linesize[i] << std::endl;
 }

 sws_scale(sws_ctx, frame->data,
 frame->linesize, 0, codecContext->height,
 frame2->data, frame2->linesize);

 // cv::Mat img(frame2->height, frame2->width, CV_8UC3, frame2->data[0]);
 // cv::imshow("Image", img);

 pgm_save(frame->data[0], frame->linesize[0],
 frame->width, frame->height, buf);
 }
 }

 ~H264Decoder()
 {
 avformat_close_input(&formatContext);
 avformat_free_context(formatContext);
 avcodec_free_context(&codecContext);
 av_frame_free(&frame);
 av_packet_free(&inputPacket);
 }

 void readAndDecode()
 {
 FILE *f;
 uint8_t inbuf[INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
 uint8_t *data;
 size_t data_size;
 int ret;
 int eof;
 f = fopen(inputFilename_.c_str(), "rb");
 auto start = std::chrono::high_resolution_clock::now();
 do
 {
 /* read raw data from the input file */
 data_size = fread(inbuf, 1, INBUF_SIZE, f);
 if (ferror(f))
 break;
 eof = !data_size;

 /* use the parser to split the data into frames */
 data = inbuf;
 while (data_size > 0 || eof)
 {
 ret = av_parser_parse2(parser, codecContext, &inputPacket->data, &inputPacket->size,
 data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
 if (ret < 0)
 {
 fprintf(stderr, "Error while parsing\n");
 exit(1);
 }
 data += ret;
 data_size -= ret;

 if (inputPacket->size)
 {
 decode();
 }
 else if (eof)
 {
 break;
 }
 }
 } while (!eof);
 auto diff = std::chrono::high_resolution_clock::now() - start;
 std::cout << "Decoded " << codecContext->frame_num << " frames in " << std::chrono::duration_cast(diff).count() << " ms" << std::endl;
 }

private:
 AVFormatContext *formatContext = nullptr;
 AVCodecContext *codecContext = nullptr;
 AVCodecParserContext *parser;
 AVFrame *frame = nullptr;
 AVFrame *frameRgb = nullptr;
 AVPacket *inputPacket = nullptr;
 int videoStreamIndex = -1;
 std::string inputFilename_;
 std::string outputFilenamePrefix_;

 static void pgm_save(unsigned char *buf, int wrap, int xsize, int ysize, const char *filename)
 {
 FILE *f = fopen(filename, "wb");
 if (!f)
 {
 std::cout << "Error opening file for saving PGM" << std::endl;
 exit(1);
 }

 fprintf(f, "P5\n%d %d\n%d\n", xsize, ysize, 255);
 for (int i = 0; i < ysize; i++)
 fwrite(buf + i * wrap, 1, xsize, f);

 fclose(f);
 }
};

int main(int argc, char *argv[])
{
 if (argc < 2)
 {
 std::cout << "Please provide input file name as parameter" << std::endl;
 }

 std::string inputFilename = argv[1];
 std::string outputFilenamePrefix = "C:\\tmp\\output-frame";

 try
 {

 H264Decoder decoder(inputFilename, outputFilenamePrefix);
 decoder.readAndDecode();
 }
 catch (const std::exception &e)
 {
 std::cout << "Error: " << e.what() << std::endl;
 return 1;
 }

 return 0;
}
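For comparison only, here is a minimal sketch (not the poster's code) of the conversion step in which the destination frame gets its own buffer via av_frame_get_buffer(), so that sws_scale() receives the RGB frame's data pointers and line sizes; srcFrame is assumed to be a decoded AV_PIX_FMT_YUV420P frame and error handling is omitted.

extern "C" {
#include <libswscale/swscale.h>
#include <libavutil/frame.h>
}

// Allocate an RGB24 frame of the same size and convert into it.
static AVFrame *to_rgb24(const AVFrame *srcFrame)
{
    AVFrame *rgb = av_frame_alloc();
    rgb->format = AV_PIX_FMT_RGB24;
    rgb->width  = srcFrame->width;
    rgb->height = srcFrame->height;
    av_frame_get_buffer(rgb, 0); // fills rgb->data[] and rgb->linesize[]

    SwsContext *sws = sws_getContext(srcFrame->width, srcFrame->height,
                                     (AVPixelFormat)srcFrame->format,
                                     rgb->width, rgb->height, AV_PIX_FMT_RGB24,
                                     SWS_BICUBIC, nullptr, nullptr, nullptr);
    sws_scale(sws, srcFrame->data, srcFrame->linesize, 0, srcFrame->height,
              rgb->data, rgb->linesize); // destination described by the RGB frame itself
    sws_freeContext(sws);
    return rgb;
}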


-
Merge file without data loss using FFmpeg inside of WASM
9 September 2023, by Deji
Edit: I'm rewriting this entire question.


Goal: To reconstruct a video from its pieces/chunks from a network stream inside of an @ffmpeg/ffmpeg worker.

Problems:

- Video chunks/pieces which come after the first piece/chunk are reported by @ffmpeg/ffmpeg to have invalid data, as seen in the log below:

{
 "type": "stderr",
 "message": "video-0_chunk-1.part: Invalid data found when processing input"
}

- How would I merge these chunks/pieces to reconstruct the full video using @ffmpeg/ffmpeg (after solving the first issue above)?




My current code situation:

- For merging the video pieces:




const constructFile = async (chunks: Uint8Array[], queueId: number) => {
 await Promise.all(
 chunks.map(async (chunk, index) => {
 const chunkFile = `video-${queueId}_chunk-${index}`;
 await ffmpeg.writeFile(chunkFile, chunk);

 // Return information about newly created file
 ffmpeg.exec(["-i", chunkFile]);
 })
 );
};



I'm reading the logs/output for ffmpeg.exec(['-i', chunkFile]) using ffmpeg.on('log', (log) => console.log(log)).



- For fetching the videos using streams:




await useFetch(Capacitor.convertFileSrc(file.path), {
 responseType: "stream",

 onResponse: async ({ response }) => {
 if (response.body) {
 const reader = response.body.getReader();

 while (true) {
 const { done, value } = await reader.read();

 if (done) break;
 file.chunks.push(value);
 }
 reader.releaseLock();
 }
 },
});



Note: file.chunks is linked to a reactive value which is passed to constructFile() when initialized.

These are the logs I currently get from the code above:


chunk-4OF65L5M.js:2710 <suspense> is an experimental feature and its API will likely change.
(index):298 native App.addListener (#25407936)
(index):298 native FilePicker.pickVideos (#25407937)
(index):272 result FilePicker.pickVideos (#25407937)
(index):298 native VideoEditor.thumbnail (#25407938)
(index):272 result VideoEditor.thumbnail (#25407938)
Processing.vue:135 {type: 'stderr', message: 'ffmpeg version 5.1.3 Copyright (c) 2000-2022 the FFmpeg developers'}
Processing.vue:135 {type: 'stderr', message: ' built with emcc (Emscripten gcc/clang-like repla…3.1.40 (5c27e79dd0a9c4e27ef2326841698cdd4f6b5784)'}
Processing.vue:135 {type: 'stderr', message: ' configuration: --target-os=none --arch=x86_32 --…e-libfreetype --enable-libfribidi --enable-libass'}
Processing.vue:135 {type: 'stderr', message: ' libavutil 57. 28.100 / 57. 28.100'}
Processing.vue:135 {type: 'stderr', message: ' libavcodec 59. 37.100 / 59. 37.100'}
Processing.vue:135 {type: 'stderr', message: ' libavformat 59. 27.100 / 59. 27.100'}
Processing.vue:135 {type: 'stderr', message: ' libavdevice 59. 7.100 / 59. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libavfilter 8. 44.100 / 8. 44.100'}
Processing.vue:135 {type: 'stderr', message: ' libswscale 6. 7.100 / 6. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libswresample 4. 7.100 / 4. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libpostproc 56. 6.100 / 56. 6.100'}
Processing.vue:135 {type: 'stderr', message: "Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video-0_chunk-0':"}
Processing.vue:135 {type: 'stderr', message: ' Metadata:'}
Processing.vue:135 {type: 'stderr', message: ' major_brand : mp42'}
Processing.vue:135 {type: 'stderr', message: ' minor_version : 0'}
Processing.vue:135 {type: 'stderr', message: ' compatible_brands: isommp42'}
Processing.vue:135 {type: 'stderr', message: ' creation_time : 2022-11-29T14:46:32.000000Z'}
Processing.vue:135 {type: 'stderr', message: ' Duration: 00:00:51.50, start: 0.000000, bitrate: 81 kb/s'}
Processing.vue:135 {type: 'stderr', message: ' Stream #0:0[0x1](und): Video: h264 (High) (avc1 …6], 259 kb/s, 30 fps, 30 tbr, 15360 tbn (default)'}
Processing.vue:135 {type: 'stderr', message: ' Metadata:'}
Processing.vue:135 {type: 'stderr', message: ' creation_time : 2022-11-29T14:46:32.000000Z'}
Processing.vue:135 {type: 'stderr', message: ' handler_name : ISO Media file produced by Google Inc. Created on: 11/29/2022.'}
Processing.vue:135 {type: 'stderr', message: ' vendor_id : [0][0][0][0]'}
Processing.vue:135 {type: 'stderr', message: ' Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0…706D), 44100 Hz, stereo, fltp, 127 kb/s (default)'}
Processing.vue:135 {type: 'stderr', message: ' Metadata:'}
Processing.vue:135 {type: 'stderr', message: ' creation_time : 2022-11-29T14:46:32.000000Z'}
Processing.vue:135 {type: 'stderr', message: ' handler_name : ISO Media file produced by Google Inc. Created on: 11/29/2022.'}
Processing.vue:135 {type: 'stderr', message: ' vendor_id : [0][0][0][0]'}
Processing.vue:135 {type: 'stderr', message: 'At least one output file must be specified'}
Processing.vue:135 {type: 'stderr', message: 'Aborted()'}
Processing.vue:135 {type: 'stderr', message: 'ffmpeg version 5.1.3 Copyright (c) 2000-2022 the FFmpeg developers'}
Processing.vue:135 {type: 'stderr', message: ' built with emcc (Emscripten gcc/clang-like repla…3.1.40 (5c27e79dd0a9c4e27ef2326841698cdd4f6b5784)'}
Processing.vue:135 {type: 'stderr', message: ' configuration: --target-os=none --arch=x86_32 --…e-libfreetype --enable-libfribidi --enable-libass'}
Processing.vue:135 {type: 'stderr', message: ' libavutil 57. 28.100 / 57. 28.100'}
Processing.vue:135 {type: 'stderr', message: ' libavcodec 59. 37.100 / 59. 37.100'}
Processing.vue:135 {type: 'stderr', message: ' libavformat 59. 27.100 / 59. 27.100'}
Processing.vue:135 {type: 'stderr', message: ' libavdevice 59. 7.100 / 59. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libavfilter 8. 44.100 / 8. 44.100'}
Processing.vue:135 {type: 'stderr', message: ' libswscale 6. 7.100 / 6. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libswresample 4. 7.100 / 4. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libpostproc 56. 6.100 / 56. 6.100'}
Processing.vue:135 {type: 'stderr', message: 'video-0_chunk-1: Invalid data found when processing input'}
Processing.vue:135 {type: 'stderr', message: 'Aborted()'}
Processing.vue:135 {type: 'stderr', message: 'ffmpeg version 5.1.3 Copyright (c) 2000-2022 the FFmpeg developers'}
Processing.vue:135 {type: 'stderr', message: ' built with emcc (Emscripten gcc/clang-like repla…3.1.40 (5c27e79dd0a9c4e27ef2326841698cdd4f6b5784)'}
Processing.vue:135 {type: 'stderr', message: ' configuration: --target-os=none --arch=x86_32 --…e-libfreetype --enable-libfribidi --enable-libass'}
Processing.vue:135 {type: 'stderr', message: ' libavutil 57. 28.100 / 57. 28.100'}
Processing.vue:135 {type: 'stderr', message: ' libavcodec 59. 37.100 / 59. 37.100'}
Processing.vue:135 {type: 'stderr', message: ' libavformat 59. 27.100 / 59. 27.100'}
Processing.vue:135 {type: 'stderr', message: ' libavdevice 59. 7.100 / 59. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libavfilter 8. 44.100 / 8. 44.100'}
Processing.vue:135 {type: 'stderr', message: ' libswscale 6. 7.100 / 6. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libswresample 4. 7.100 / 4. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libpostproc 56. 6.100 / 56. 6.100'}
Processing.vue:135 {type: 'stderr', message: 'video-0_chunk-2: Invalid data found when processing input'}
Processing.vue:135 {type: 'stderr', message: 'Aborted()'}
Processing.vue:135 {type: 'stderr', message: 'ffmpeg version 5.1.3 Copyright (c) 2000-2022 the FFmpeg developers'}
Processing.vue:135 {type: 'stderr', message: ' built with emcc (Emscripten gcc/clang-like repla…3.1.40 (5c27e79dd0a9c4e27ef2326841698cdd4f6b5784)'}
Processing.vue:135 {type: 'stderr', message: ' configuration: --target-os=none --arch=x86_32 --…e-libfreetype --enable-libfribidi --enable-libass'}
Processing.vue:135 {type: 'stderr', message: ' libavutil 57. 28.100 / 57. 28.100'}
Processing.vue:135 {type: 'stderr', message: ' libavcodec 59. 37.100 / 59. 37.100'}
Processing.vue:135 {type: 'stderr', message: ' libavformat 59. 27.100 / 59. 27.100'}
Processing.vue:135 {type: 'stderr', message: ' libavdevice 59. 7.100 / 59. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libavfilter 8. 44.100 / 8. 44.100'}
Processing.vue:135 {type: 'stderr', message: ' libswscale 6. 7.100 / 6. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libswresample 4. 7.100 / 4. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libpostproc 56. 6.100 / 56. 6.100'}
Processing.vue:135 {type: 'stderr', message: 'video-0_chunk-3: Invalid data found when processing input'}
Processing.vue:135 {type: 'stderr', message: 'Aborted()'}
Processing.vue:135 {type: 'stderr', message: 'ffmpeg version 5.1.3 Copyright (c) 2000-2022 the FFmpeg developers'}
Processing.vue:135 {type: 'stderr', message: ' built with emcc (Emscripten gcc/clang-like repla…3.1.40 (5c27e79dd0a9c4e27ef2326841698cdd4f6b5784)'}
Processing.vue:135 {type: 'stderr', message: ' configuration: --target-os=none --arch=x86_32 --…e-libfreetype --enable-libfribidi --enable-libass'}
Processing.vue:135 {type: 'stderr', message: ' libavutil 57. 28.100 / 57. 28.100'}
Processing.vue:135 {type: 'stderr', message: ' libavcodec 59. 37.100 / 59. 37.100'}
Processing.vue:135 {type: 'stderr', message: ' libavformat 59. 27.100 / 59. 27.100'}
Processing.vue:135 {type: 'stderr', message: ' libavdevice 59. 7.100 / 59. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libavfilter 8. 44.100 / 8. 44.100'}
Processing.vue:135 {type: 'stderr', message: ' libswscale 6. 7.100 / 6. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libswresample 4. 7.100 / 4. 7.100'}
Processing.vue:135 {type: 'stderr', message: ' libpostproc 56. 6.100 / 56. 6.100'}
Processing.vue:135 {type: 'stderr', message: 'video-0_chunk-4: Invalid data found when processing input'}
Processing.vue:135 {type: 'stderr', message: 'Aborted()'}


Notes:

- The sections which start with Processing.vue come from the logging system I've set up.
- The pieces/chunks gotten from the network were stored in exactly the same order in which they came.
- If you've seen the old question, the ReferenceError happens as a result of HMR by Vite.
- Similar to this, some logs were repeated twice because I was actively changing some things and the component had to rerun from the start.












Summary: If my problem is still not clear, you could provide another way of fetching a large file (video) from a network, loading the file into memory, and passing the file data to @ffmpeg/ffmpeg for further processing.
