
Media (2)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
-
Map of Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
Other articles (111)
-
Upgrading from version 0.1 to 0.2
24 June 2013 - An explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What's new?
Software dependencies: use of the latest FFMpeg releases (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed in (...) -
Supported formats
28 January 2010 - The following commands give information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
To begin with, we (...) -
Use, discuss, criticize
13 April 2011 - Talk to people directly involved in MediaSPIP's development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
On other sites (7642)
-
Muxing H264 packets into an MPEG-TS container using libav*
30 April 2024, by Lucen - I'm writing a C++ program where I need to encode packets in H.264 format and mux them into an MPEG-TS container. For the encoding part, I based my code on the encode_video example (https://ffmpeg.org/doxygen/trunk/encode_video_8c-example.html#a9) provided in the FFmpeg documentation, and it seems to work fine. In particular, I generate a std::vector of packets which I sequentially write to an output .ts file for debugging. That .ts file plays fine with SMPlayer, and an ffprobe command gives


>> ffprobe -print_format json -show_format -show_streams out.ts
Input #0, h264, from 'out.ts':
 Duration: N/A, bitrate: N/A
 Stream #0:0: Video: h264 (Main), yuv420p(progressive), 640x480 [SAR 1:1 DAR 4:3], 25 fps, 25 tbr, 1200k tbn, 50 tbc
 "streams": [
 {
 "index": 0,
 "codec_name": "h264",
 "codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
 "profile": "Main",
 "codec_type": "video",
 "codec_time_base": "1/50",
 "codec_tag_string": "[0][0][0][0]",
 "codec_tag": "0x0000",
 "width": 640,
 "height": 480,
 "coded_width": 640,
 "coded_height": 480,
 "has_b_frames": 1,
 "sample_aspect_ratio": "1:1",
 "display_aspect_ratio": "4:3",
 "pix_fmt": "yuv420p",
 "level": 30,
 "chroma_location": "left",
 "field_order": "progressive",
 "refs": 1,
 "is_avc": "false",
 "nal_length_size": "0",
 "r_frame_rate": "25/1",
 "avg_frame_rate": "25/1",
 "time_base": "1/1200000",
 "bits_per_raw_sample": "8",
 "disposition": {
 "default": 0,
 "dub": 0,
 "original": 0,
 "comment": 0,
 "lyrics": 0,
 "karaoke": 0,
 "forced": 0,
 "hearing_impaired": 0,
 "visual_impaired": 0,
 "clean_effects": 0,
 "attached_pic": 0,
 "timed_thumbnails": 0
 }
 }
 ],
 "format": {
 "filename": "out.ts",
 "nb_streams": 1,
 "nb_programs": 0,
 "format_name": "h264",
 "format_long_name": "raw H.264 video",
 "size": "435443",
 "probe_score": 51
 }
}




The dts and pts timestamps are also set.
However, if I try to mux them into MPEG-TS format, using the mux.c example (https://ffmpeg.org/doxygen/trunk/mux_8c-example.html) as a base, it doesn't work. A shortened version of my muxing code is as follows (the variables ending with "_" are class fields):


int MyProcessing::Mux(const std::string outputFilename) {
 AVFormatContext *muxingContest;
 avformat_alloc_output_context2(&muxingContest, NULL, NULL, m_output.c_str());

 auto outFormat = muxingContest->oformat;
 outFormat->video_codec = AV_CODEC_ID_H264;

 AVStream *outStream;
 const AVCodec *codec;

 Mux_AddStream(&outStream, muxingContest, &codec, outFormat->video_codec);

 AVDictionary *opt = nullptr;
 Mux_OpenVideo(muxingContest, codec, outStream, opt);
 
 if (!(muxingContest->flags & AVFMT_NOFILE)) {
 avio_open(&muxingContest->pb, m_output.c_str(), AVIO_FLAG_WRITE);
 }
 avformat_write_header(muxingContest, &opt);

 auto muxOk = true;
 size_t countMuxedFrames = 0;
 while ((muxOk) && (countMuxedFrames < packets_.size())) {
 muxOk = !MuxPacket(muxingContest, outStream, packets_[countMuxedFrames], &opt);
 countMuxedFrames++;
 }

 av_write_trailer(muxingContest);
 if (!(muxCodecContextPtr_->flags & AVFMT_NOFILE)) avio_closep(&muxingContest->pb);
 
 return 0;
}


int MyProcessing::Mux_AddStream(AVStream **stream, AVFormatContext *format, const AVCodec **codec, enum AVCodecID codecId) {
 *codec = avcodec_find_encoder(codecId);
 muxPacketTmpPtr_ = av_packet_alloc();
 *stream = avformat_new_stream(format, *codec);
 (*stream)->time_base = (AVRational){ 1, STREAM_FRAME_RATE };
 (*stream)->id = format->nb_streams-1;
 (*stream)->index = 0;
 muxCodecContextPtr_ = avcodec_alloc_context3(*codec);
 Mux_FillCodecContext(*muxCodecContextPtr_, codecId, **stream);
 if (format->oformat->flags & AVFMT_GLOBALHEADER)
 muxCodecContextPtr_->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 return 0;
}


void MyProcessing::Mux_FillCodecContext(AVCodecContext &cc, enum AVCodecID codecId, AVStream &stream) {
 cc.codec_id = codecId;
 cc.bit_rate = 400000;
 cc.width = outputWidth_;
 cc.height = outputHeight_;
 cc.time_base = stream.time_base;
 cc.gop_size = 10;
 cc.max_b_frames = 1;
 cc.gop_size = 12;
 cc.pix_fmt = AV_PIX_FMT_YUV420P;
 if (cc.codec_id == AV_CODEC_ID_MPEG2VIDEO) cc.max_b_frames = 2;
 if (cc.codec_id == AV_CODEC_ID_MPEG1VIDEO) cc.mb_decision = 2;
 av_opt_set(&cc, "preset", "slow", 0);
 av_opt_set(&cc, "tune", "zerolatency", 0);
}


int MyProcessing::Mux_OpenVideo(AVFormatContext *format, const AVCodec *codec, AVStream *stream, AVDictionary *opt_arg) {
 AVDictionary *opt = nullptr;
 av_dict_copy(&opt, opt_arg, 0);
 muxCodecContextPtr_->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 avcodec_open2(muxCodecContextPtr_, codec, &opt);
 av_dict_free(&opt);
 avcodec_parameters_from_context(stream->codecpar, muxCodecContextPtr_);
 return 0;
}

int MyProcessing::MuxPacket(AVFormatContext *format, AVStream *stream, AVPacket &pkt, AVDictionary **opt) {
 AVBitStreamFilterContext *bsf = av_bitstream_filter_init("h264_mp4toannexb");
 AVPacket filteredPkt = pkt;
 auto filterResult = av_bitstream_filter_filter(bsf, format->streams[stream->index]->codec, NULL,
 &filteredPkt.data, &filteredPkt.size,
 pkt.data, pkt.size,
 pkt.flags & AV_PKT_FLAG_KEY);

 if (filterResult < 0) return filterResult;
 else {
 filteredPkt.buf = av_buffer_create(filteredPkt.data, filteredPkt.size,
 av_buffer_default_free, NULL, 0);
 }
 av_bitstream_filter_close(bsf);
 filteredPkt.stream_index = stream->index;
 filteredPkt.dts = filteredPkt.pts;
 filteredPkt.duration = ((double)stream->time_base.num / (double)stream->time_base.den) / STREAM_FRAME_RATE;
 av_packet_rescale_ts(&filteredPkt, muxCodecContextPtr_->time_base, stream->time_base); // rescale output packet timestamp values from codec to stream timebase
 auto writePktResult = av_write_frame(format, &filteredPkt);
 // auto writePktResult = av_interleaved_write_frame(format, &filteredPkt);
 return 0;
}




The console error is


[mpegts @ 0x55555736edc0] H.264 bitstream malformed, no startcode found, use the video bitstream filter 'h264_mp4toannexb' to fix it ('-bsf:v h264_mp4toannexb' option with ffmpeg)



It is telling me to apply the h264_mp4toannexb filter. As you can see from the code, I have applied the filter accordingly, but the error message persists (unless I'm applying it in the wrong way).
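
For reference, the av_bitstream_filter_* functions used in MuxPacket() are the older, deprecated bitstream-filter API; a minimal sketch of applying h264_mp4toannexb through the newer AVBSFContext API (illustrative only, not the asker's code, assuming the same format/stream setup as above, with error handling omitted) would look roughly like this:

/* Minimal sketch: filter one packet with h264_mp4toannexb via AVBSFContext
 * and write whatever the filter produces to the muxer. */
#include <libavformat/avformat.h>
#include <libavcodec/bsf.h>   /* declared in avcodec.h in older FFmpeg releases */

static int FilterAndWritePacket(AVFormatContext *format, AVStream *stream, AVPacket *pkt)
{
    const AVBitStreamFilter *filter = av_bsf_get_by_name("h264_mp4toannexb");
    AVBSFContext *bsf = NULL;
    av_bsf_alloc(filter, &bsf);
    avcodec_parameters_copy(bsf->par_in, stream->codecpar);  /* the filter needs the stream parameters */
    bsf->time_base_in = stream->time_base;
    av_bsf_init(bsf);

    av_bsf_send_packet(bsf, pkt);                    /* takes ownership of pkt's reference */
    AVPacket *out = av_packet_alloc();
    while (av_bsf_receive_packet(bsf, out) == 0) {   /* may yield zero or more packets */
        out->stream_index = stream->index;
        av_interleaved_write_frame(format, out);     /* muxer takes ownership of the reference */
    }
    av_packet_free(&out);
    av_bsf_free(&bsf);
    return 0;
}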


In the last lines of the MuxPacket() method, if I uncomment the line with av_interleaved_write_frame() and comment out the previous one, I get the same error, plus a segfault. Inspecting with GDB, the call stack for the segfault is as follows:


#0 __memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:440
#1 0x00007ffff67c7cb6 in av_packet_copy_props () at /lib/x86_64-linux-gnu/libavcodec.so.58
#2 0x00007ffff67c8447 in av_packet_ref () at /lib/x86_64-linux-gnu/libavcodec.so.58
#3 0x00007ffff7e2fa13 in () at /lib/x86_64-linux-gnu/libavformat.so.58
#4 0x00007ffff7e2fb11 in () at /lib/x86_64-linux-gnu/libavformat.so.58
#5 0x00007ffff7e30575 in av_interleaved_write_frame () at /lib/x86_64-linux-gnu/libavformat.so.58



I tried to look for solutions online, but they are either outdated or don't work. Some of the things I tried that didn't work:


- Putting the line
 muxCodecContextPtr_->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 in Mux() after the call to avformat_alloc_output_context2.
- Setting
 packet.flags |= AV_PKT_FLAG_KEY;
 before the call to av_write_frame / av_interleaved_write_frame.
- Trying to write the start code to the file by hand, as described in "Need to convert h264 stream from annex-b format to AVCC format" (see the sketch after this list).
- Playing with the parameters in Mux_FillCodecContext().
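
On the "write the start code by hand" point above: AVCC-style packets carry a big-endian length prefix before each NAL unit where Annex B expects the start code 00 00 00 01. A minimal illustrative sketch of that conversion (assuming 4-byte length prefixes; it does not insert the SPS/PPS from extradata that h264_mp4toannexb would add before keyframes) is:

#include <stdint.h>

/* Illustrative only: overwrite each 4-byte NAL length prefix in place with the
 * Annex B start code 00 00 00 01. Real AVCC data may use 1-4 byte prefixes
 * (see nal_length_size), and SPS/PPS handling is omitted. */
static void AvccToAnnexB(uint8_t *data, int size)
{
    int pos = 0;
    while (pos + 4 <= size) {
        uint32_t nal_size = ((uint32_t)data[pos] << 24) | ((uint32_t)data[pos + 1] << 16) |
                            ((uint32_t)data[pos + 2] << 8) | (uint32_t)data[pos + 3];
        data[pos] = 0; data[pos + 1] = 0; data[pos + 2] = 0; data[pos + 3] = 1;
        pos += 4 + (int)nal_size;
    }
}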








-
(C_UDP Socket Programming) How can I convert a binary file to video format?
30 April 2024, by user24723398 - I am practicing UDP socket programming. My code's functions are described below.


- Connect the server and client and exchange a "hello" message (this works).
- Then the server sends a video file to the client (this is the problem).






Transferring the video file to the client works, but the data arrives as a raw binary file, so I can't open the video.


So I tried to use ffmpeg to convert the video, but it doesn't work.


Is there something wrong in my code? How can I turn the received file into a playable video file?


My environment is macOS.
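
For context, a minimal sketch of the kind of chunked sendto()/recvfrom() transfer described above (illustrative only, not the code below: the function names are made up, error handling is omitted, and the naive "a datagram shorter than BUF_SIZE ends the transfer" convention fails if the file size is an exact multiple of BUF_SIZE or if UDP drops or reorders datagrams):

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

#define BUF_SIZE 256

/* server side: stream an already-open file in BUF_SIZE chunks */
static void send_file(int sock, FILE *file, const struct sockaddr *to, socklen_t tolen)
{
    char buf[BUF_SIZE];
    size_t nread;
    while ((nread = fread(buf, 1, BUF_SIZE, file)) > 0)
        sendto(sock, buf, nread, 0, to, tolen);
}

/* client side: append datagrams to the output file until a short one arrives */
static void recv_file(int sock, FILE *out)
{
    char buf[BUF_SIZE];
    ssize_t n;
    do {
        n = recvfrom(sock, buf, BUF_SIZE, 0, NULL, NULL);
        if (n > 0)
            fwrite(buf, 1, (size_t)n, out);
    } while (n == BUF_SIZE);
}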


Server.c (Server Code):


#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define PORT 8888
#define BUF_SIZE 256

int main(){
 int serv_sock;
 char message[BUF_SIZE];
 char buf[BUF_SIZE];
 int str_len;
 socklen_t clnt_adr_sz;

 struct sockaddr_in serv_adr, clnt_adr;
 
 //create socket
 serv_sock=socket(PF_INET, SOCK_DGRAM, 0);
 if(serv_sock == -1){
 perror("socket() error");
 exit(1);
 }
 
 //socket address
 memset(&serv_adr, 0, sizeof(serv_adr));
 serv_adr.sin_family=AF_INET;
 serv_adr.sin_addr.s_addr=htonl(INADDR_ANY);
 serv_adr.sin_port=htons(PORT);
 //binding socket
 if(bind(serv_sock, (struct sockaddr*)&serv_adr, sizeof(serv_adr)) == -1){
 perror("bind() error");
 exit(1);
 }
 
 while(1){
 clnt_adr_sz=sizeof(clnt_adr);
 str_len=recvfrom(serv_sock, message, BUF_SIZE, 0, (struct sockaddr *)&clnt_adr, &clnt_adr_sz);
 if (str_len < 0) {
 perror("recvfrom error");
 exit(1);
 }
 
 char hello_message[] = "hello i am server";
 if (sendto(serv_sock, hello_message, strlen(hello_message), 0, (struct sockaddr *)&clnt_adr, clnt_adr_sz) < 0) {
 perror("sendto error");
 exit(1);
 }
 
 //print message
 message[str_len] = '\0';
 printf("client say: %s\n", message);
 
 char buf[BUF_SIZE];
 ssize_t bytes_read;
 // sending video file
 printf("sending video file...\n");
 size_t fsize;
 
 //video file
 FILE *file;
 char *filename = "video.mp4";
 // open video file
 file = fopen(filename, "rb");
 if (file == NULL) {
 perror("File opening failed");
 exit(EXIT_FAILURE);
 }
 //calculate video file memory
 fseek(file, 0, SEEK_END);
 fsize = ftell(file);
 fseek(file,0,SEEK_SET);
 
 size_t size = htonl(fsize);
 int nsize =0;
 
 while(nsize!=fsize){
 int fpsize = fread(buf,1, BUF_SIZE, file);
 nsize += fpsize;
 if (sendto(serv_sock, &size, sizeof(size), 0, (struct sockaddr *)&clnt_adr, clnt_adr_sz) < 0) {
 perror("sendto");
 exit(EXIT_FAILURE); 
 }
 fclose(file);
 /*
 while ((bytes_read = fread(buf, 1, BUF_SIZE, file)) > 0) {
 if (sendto(serv_sock, buf, bytes_read, 0,
 (struct sockaddr *)&clnt_adr, clnt_adr_sz) < 0) {
 perror("sendto");
 exit(EXIT_FAILURE);
 } 
 }
 */
 } 
 }
 close(serv_sock);
 return 0;
}



Client.c (Client code)


#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define BUFSIZE 256
#define PORT 8888

int main(){
 int sock;
 char message[BUFSIZE];
 int str_len;
 socklen_t adr_sz;

 struct sockaddr_in serv_addr, client_addr; 
 
 sock = socket(PF_INET, SOCK_DGRAM, 0);
 if(sock == -1){
 printf("socket() error\n");
 exit(1);
 }

 memset(&serv_addr, 0, sizeof(serv_addr));
 serv_addr.sin_family = AF_INET;
 serv_addr.sin_addr.s_addr = inet_addr("127.0.0.1");
 serv_addr.sin_port = htons(PORT);

 char hello_message[] = "hello i am client";
 sendto(sock, hello_message, strlen(hello_message), 0, (struct sockaddr*)&serv_addr, sizeof(serv_addr));
 adr_sz = sizeof(client_addr);
 str_len=recvfrom(sock,message,BUFSIZE,0,(struct sockaddr*)&client_addr,&adr_sz);
 
 message[str_len] = '\0';
 printf("client say: %s\n", message);
 
 /*
 char buf[BUFSIZE];
 ssize_t bytes_received;
 socklen_t serv_len = sizeof(serv_addr);
 while ((bytes_received = recvfrom(sock, buf, BUFSIZE, 0,
 (struct sockaddr *)&serv_addr, &serv_len)) > 0) {
 fwrite(buf, 1, bytes_received, file);
 }
 */
 
 FILE *file = fopen("received_test.mp4", "wb");

 int nbyte = BUFSIZE;
 while(nbyte>= BUFSIZE){
 nbyte = recvfrom(sock, message, BUFSIZE, 0, (struct sockaddr*)&serv_addr, &adr_sz);
 fwrite(message, sizeof(char), nbyte, file);
 }

 if (file == NULL) {
 perror("File opening failed");
 exit(EXIT_FAILURE);
 }

 fclose(file);
 close(sock);
 printf("File received successfully\n");
 
 return 0;
}



I tried to convert the binary file to an .mp4 file using ffmpeg, but it doesn't work:

ffmpeg -i received_test.mp4 output.mp4
ffmpeg version 7.0 Copyright (c) 2000-2024 the FFmpeg developers
 built with Apple clang version 15.0.0 (clang-1500.3.9.4)
 configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/7.0 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags='-Wl,-ld_classic' --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libharfbuzz --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopenvino --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox --enable-neon
 libavutil 59. 8.100 / 59. 8.100
 libavcodec 61. 3.100 / 61. 3.100
 libavformat 61. 1.100 / 61. 1.100
 libavdevice 61. 1.100 / 61. 1.100
 libavfilter 10. 1.100 / 10. 1.100
 libswscale 8. 1.100 / 8. 1.100
 libswresample 5. 1.100 / 5. 1.100
 libpostproc 58. 1.100 / 58. 1.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x12a62bdb0] Format mov,mp4,m4a,3gp,3g2,mj2 detected only with low score of 1, misdetection possible!
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x12a62bdb0] moov atom not found
[in#0 @ 0x12b0043c0] Error opening input: Invalid data found when processing input
Error opening input file received_test.mp4.
Error opening input files: Invalid data found when processing input



-
Error transcoding with FFmpeg: Error: Output format hls is not available
6 May 2024, by asif mohmd - I am using the FFmpeg library to transcode a video file into multiple resolutions and create an HLS (HTTP Live Streaming) master playlist.


It takes a video file as input, but it does not give me the output with the HLS playlist. I get an error saying "Output format hls is not available"; only the output directory is created.


I am using the FFmpeg 7.0 full build and have also tried older versions, the ffmpeg essentials build, and the Chocolatey package.


If I remove the HLS part from this code, it creates 4 videos at different resolutions in my output.


Note: I just tried this same code on my friend's MacBook, changing only the path in ffmpeg.setFfmpegPath("C:\\ffmpeg\\bin\\ffmpeg.exe") to his ffmpeg directory.
It works perfectly on his MacBook.


import "dotenv/config";
import * as fs from "fs";
import * as path from "path";
import ffmpeg from "fluent-ffmpeg";
import crypto from "crypto";

ffmpeg.setFfmpegPath("C:\\ffmpeg\\bin\\ffmpeg.exe");

export const FFmpegTranscoder = async (file: any): Promise<any> => {
 try {
 console.log("Starting script");
 console.time("req_time");

 const randomName = (bytes = 32) =>
 crypto.randomBytes(bytes).toString("hex");
 const fileName = randomName();
 const directoryPath = path.join(__dirname, "..", "..", "input");
 const filePath = path.join(directoryPath, `${fileName}.mp4`);

 if (!fs.existsSync(directoryPath)) {
 fs.mkdirSync(directoryPath, { recursive: true });
 }

 const paths = await new Promise<any>((resolve, reject) => {
 fs.writeFile(filePath, file, async (err) => {
 if (err) {
 console.error("Error saving file:", err);
 throw err;
 }
 console.log("File saved successfully:", filePath);

 try {
 const outputDirectoryPath = await transcodeWithFFmpeg(
 fileName,
 filePath
 );
 resolve({ directoryPath, filePath, fileName, outputDirectoryPath });
 } catch (error) {
 console.error("Error transcoding with FFmpeg:", error);
 }
 });
 });
 return paths;
 } catch (e: any) {
 console.log(e);
 }
};

const transcodeWithFFmpeg = async (fileName: string, filePath: string) => {
 const directoryPath = path.join(
 __dirname,
 "..",
 "..",
 `output/hls/${fileName}`
 );

 if (!fs.existsSync(directoryPath)) {
 fs.mkdirSync(directoryPath, { recursive: true });
 }

 const resolutions = [
 {
 resolution: "256x144",
 videoBitrate: "200k",
 audioBitrate: "64k",
 },
 {
 resolution: "640x360",
 videoBitrate: "800k",
 audioBitrate: "128k",
 },
 {
 resolution: "1280x720",
 videoBitrate: "2500k",
 audioBitrate: "192k",
 },
 {
 resolution: "1920x1080",
 videoBitrate: "5000k",
 audioBitrate: "256k",
 },
 ];

 const variantPlaylists: { resolution: string; outputFileName: string }[] = [];

 for (const { resolution, videoBitrate, audioBitrate } of resolutions) {
 console.log(`HLS conversion starting for ${resolution}`);
 const outputFileName = `${fileName}_${resolution}.m3u8`;
 const segmentFileName = `${fileName}_${resolution}_%03d.ts`;

 await new Promise<void>((resolve, reject) => {
 ffmpeg(filePath)
 .outputOptions([
 `-c:v h264`,
 `-b:v ${videoBitrate}`,
 `-c:a aac`,
 `-b:a ${audioBitrate}`,
 `-vf scale=${resolution}`,
 `-f hls`,
 `-hls_time 10`,
 `-hls_list_size 0`,
 `-hls_segment_filename ${directoryPath}/${segmentFileName}`,
 ])
 .output(`${directoryPath}/${outputFileName}`)
 .on("end", () => resolve())
 .on("error", (err) => reject(err))
 .run();
 });
 const variantPlaylist = {
 resolution,
 outputFileName,
 };
 variantPlaylists.push(variantPlaylist);
 console.log(`HLS conversion done for ${resolution}`);
 }
 console.log(`HLS master m3u8 playlist generating`);

 let masterPlaylist = variantPlaylists
 .map((variantPlaylist) => {
 const { resolution, outputFileName } = variantPlaylist;
 const bandwidth =
 resolution === "256x144"
 ? 264000
 : resolution === "640x360"
 ? 1024000
 : resolution === "1280x720"
 ? 3072000
 : 5500000;
 ``;
 return `#EXT-X-STREAM-INF:BANDWIDTH=${bandwidth},RESOLUTION=${resolution}\n${outputFileName}`;
 })
 .join("\n");
 masterPlaylist = `#EXTM3U\n` + masterPlaylist;

 const masterPlaylistFileName = `${fileName}_master.m3u8`;

 const masterPlaylistPath = `${directoryPath}/${masterPlaylistFileName}`;
 fs.writeFileSync(masterPlaylistPath, masterPlaylist);
 console.log(`HLS master m3u8 playlist generated`);
 return directoryPath;
};


My console.log is:


Starting script
 HLS conversion starting for 256x144
 Error transcoding with FFmpeg: Error: Output format hls is not available
 at C:\Users\asifa\Desktop\Genius Grid\Transcode-service\node_modules\fluent-ffmpeg\lib\capabilities.js:589:21
 at nextTask (C:\Users\asifa\Desktop\Genius Grid\Transcode-service\node_modules\async\dist\async.js:5791:13)
 at next (C:\Users\asifa\Desktop\Genius Grid\Transcode-service\node_modules\async\dist\async.js:5799:13)
 at C:\Users\asifa\Desktop\Genius Grid\Transcode-service\node_modules\async\dist\async.js:329:20
 at C:\Users\asifa\Desktop\Genius Grid\Transcode-service\node_modules\fluent-ffmpeg\lib\capabilities.js:549:7
 at handleExit (C:\Users\asifa\Desktop\Genius Grid\Transcode-service\node_modules\fluent-ffmpeg\lib\processor.js:170:11)
 at ChildProcess.<anonymous> (C:\Users\asifa\Desktop\Genius Grid\Transcode-service\node_modules\fluent-ffmpeg\lib\processor.js:184:11)
 at ChildProcess.emit (node:events:518:28)
 at ChildProcess.emit (node:domain:488:12)
 at Process.ChildProcess._handle.onexit (node:internal/child_process:294:12) 


I am using Windows 11 and FFmpeg 7.0. I repeatedly checked, using CMD commands, that my FFmpeg was installed correctly, confirmed the environment variable path, experimented with various FFmpeg versions, and tried the FFmpeg full-build Chocolatey package.


On the command line it works perfectly:


PS C:\Users\asifa\Desktop\test fmmpeg> ffmpeg -hide_banner -y -i .\SampleVideo_1280x720_30mb.mp4 -vf scale=w=640:h=360:force_original_aspect_ratio=decrease -c:a aac -b:v 800k -c:v h264 -b:a 128k -f hls -hls_time 14 -hls_list_size 0 -hls_segment_filename beach/480p_%03d.ts beach/480p.m3u8
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '.\SampleVideo_1280x720_30mb.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 creation_time : 1970-01-01T00:00:00.000000Z
 encoder : Lavf53.24.2
 Duration: 00:02:50.86, start: 0.000000, bitrate: 1474 kb/s
 Stream #0:0[0x1](und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 1086 kb/s, 25 fps, 25 tbr, 12800 tbn (default)
 Metadata:
 creation_time : 1970-01-01T00:00:00.000000Z
 handler_name : VideoHandler
 vendor_id : [0][0][0][0]
 Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 383 kb/s (default)
 Metadata:
 creation_time : 1970-01-01T00:00:00.000000Z
 handler_name : SoundHandler
 vendor_id : [0][0][0][0]
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
 Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 000001ef1288ec00] using SAR=1/1
[libx264 @ 000001ef1288ec00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 000001ef1288ec00] profile High, level 3.0, 4:2:0, 8-bit
[libx264 @ 000001ef1288ec00] 264 - core 164 r3190 7ed753b - H.264/MPEG-4 AVC codec - Copyleft 2003-2024 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=11 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=800 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, hls, to 'beach/480p.m3u8':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf61.1.100
 Stream #0:0(und): Video: h264, yuv420p(progressive), 640x360 [SAR 1:1 DAR 16:9], q=2-31, 800 kb/s, 25 fps, 90k tbn (default)
 Metadata:
 creation_time : 1970-01-01T00:00:00.000000Z
 handler_name : VideoHandler
 vendor_id : [0][0][0][0]
 encoder : Lavc61.3.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/800000 buffer size: 0 vbv_delay: N/A
 Stream #0:1(und): Audio: aac (LC), 48000 Hz, 5.1, fltp, 128 kb/s (default)
 Metadata:
 creation_time : 1970-01-01T00:00:00.000000Z
 handler_name : SoundHandler
 vendor_id : [0][0][0][0]
 encoder : Lavc61.3.100 aac
[hls @ 000001ef12482040] Opening 'beach/480p_000.ts' for writing speed=15.5x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_001.ts' for writing speed=17.9x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_002.ts' for writing speed=17.3x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_003.ts' for writing speed=19.4x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_004.ts' for writing speed=19.3x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_005.ts' for writing speed=19.2x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_006.ts' for writing
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_007.ts' for writing speed=19.4x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_008.ts' for writing speed=19.5x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_009.ts' for writing speed=19.5x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_010.ts' for writing speed=19.4x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_011.ts' for writing/A =19.4x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[out#0/hls @ 000001ef11d4e880] video:17094KiB audio:2680KiB subtitle:0KiB other streams:0KiB global headers:0KiB muxing overhead: unknown
frame= 4271 fps=485 q=-1.0 Lsize=N/A time=00:02:50.76 bitrate=N/A speed=19.4x
[libx264 @ 000001ef1288ec00] frame I:45 Avg QP:10.29 size: 60418
[libx264 @ 000001ef1288ec00] frame P:1914 Avg QP:14.53 size: 5582
[libx264 @ 000001ef1288ec00] frame B:2312 Avg QP:20.63 size: 1774
[libx264 @ 000001ef1288ec00] consecutive B-frames: 22.9% 11.9% 8.6% 56.6%
[libx264 @ 000001ef1288ec00] mb I I16..4: 15.6% 32.1% 52.2%
[libx264 @ 000001ef1288ec00] mb P I16..4: 0.3% 3.4% 1.2% P16..4: 20.3% 10.0% 13.1% 0.0% 0.0% skip:51.8%
[libx264 @ 000001ef1288ec00] mb B I16..4: 0.1% 0.9% 0.4% B16..8: 17.2% 5.6% 2.8% direct: 2.0% skip:71.0% L0:41.5% L1:44.1% BI:14.4%
[libx264 @ 000001ef1288ec00] final ratefactor: 16.13
[libx264 @ 000001ef1288ec00] 8x8 transform intra:58.4% inter:51.7%
[libx264 @ 000001ef1288ec00] coded y,uvDC,uvAC intra: 86.7% 94.3% 78.8% inter: 12.6% 15.0% 4.5%
[libx264 @ 000001ef1288ec00] i16 v,h,dc,p: 17% 42% 14% 28%
[libx264 @ 000001ef1288ec00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 19% 11% 6% 7% 8% 8% 9% 9%
[libx264 @ 000001ef1288ec00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 18% 12% 6% 9% 9% 8% 8% 7%
[libx264 @ 000001ef1288ec00] i8c dc,h,v,p: 44% 24% 20% 12%
[libx264 @ 000001ef1288ec00] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 000001ef1288ec00] ref P L0: 78.3% 9.7% 8.8% 3.2%
[libx264 @ 000001ef1288ec00] ref B L0: 92.5% 6.0% 1.5%
[libx264 @ 000001ef1288ec00] ref B L1: 97.1% 2.9%
[libx264 @ 000001ef1288ec00] kb/s:819.63
[aac @ 000001ef128f7c80] Qavg: 452.137



When I use .on('start', (cmdline) => console.log(cmdline)) together with the -f hls option, the error "Output format hls is not available" appears, as previously mentioned. But my console.log looks like this if I run my code without the -f hls option:

Without -f hls:


await new Promise<void>((resolve, reject) => {
 ffmpeg(filePath)
 .outputOptions([
 `-c:v h264`,
 `-b:v ${videoBitrate}`,
 `-c:a aac`,
 `-b:a ${audioBitrate}`,
 `-vf scale=${resolution}`,
 
 `-hls_time 10`,
 `-hls_list_size 0`,
 `-hls_segment_filename ${directoryPath}/${segmentFileName}`,
 ])
 .output(`${directoryPath}/${outputFileName}`)
 .on('start', (cmdline) => console.log(cmdline)) 
 .on("end", () => resolve())
 .on("error", (err) => reject(err))
 .run();
});


Console.log is:


Starting script
File saved successfully: C:\Users\asifa\Desktop\Genius Grid\Transcode-service\input\c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1.mp4
HLS conversion starting for 256x144
ffmpeg -i C:\Users\asifa\Desktop\Genius Grid\Transcode-service\input\c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1.mp4 -y -c:v h264 -b:v 200k -c:a aac -b:a 64k -vf scale=256x144 -hls_time 10 -hls_list_size 0 -hls_segment_filename C:\Users\asifa\Desktop\Genius Grid\Transcode-service\output\hls\c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1/c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1_256x144_%03d.ts C:\Users\asifa\Desktop\Genius Grid\Transcode-service\output\hls\c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1/c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1_256x144.m3u8
Error transcoding with FFmpeg: Error: ffmpeg exited with code 2880417800: Unrecognized option 'hls_segment_filename C:\Users\asifa\Desktop\Genius Grid\Transcode-service\output\hls\c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1/c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1_256x144_%03d.ts'.
Error splitting the argument list: Option not found