
Media (2)

Keyword: - Tags - /doc2img

Other articles (65)

  • Updating from version 0.1 to 0.2

    24 June 2013, by

    Explanation of the various notable changes involved in moving from MediaSPIP version 0.1 to version 0.2. What are the new features?
    Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for retrieving metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes take three customisation elements into account: adding a logo; adding a banner; adding a background image.

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013 and it is announced here.
    The zip file available here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

On other sites (10147)

  • Aspect ratio problems at transcoding with ffmpeg [closed]

    19 November 2023, by udippel

    I have a huge collection of videos from the last 20+ years, in all sorts of formats. I use gerbera as an open-source UPnP-AV media server. Our TV handles only a very limited subset of these formats, so I use the transcoding feature of gerbera (I don't want to convert the 2000+ files; this also avoids losing multiple audio tracks, (multiple) subtitles, and so forth).

    


    This is my current unified argument line for ffmpeg:
-c:v mpeg2video -maxrate 20000k -vf setdar=16/9 -r 24000/1001 -qscale:v 4 -top 1 -c:a mp2 -f mpeg -y
It works pretty okay, except for the aspect ratios. Well, I don't understand this fully, because ffprobe for File A states:
Stream #0:0: Video: mpeg4 (Simple Profile) (XVID / 0x44495658), yuv420p, 624x464 [SAR 1:1 DAR 39:29], 1500 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc
This file displays very well.
File B comes as:
Stream #0:0(eng): Video: h264 (High), yuv420p(tv, bt709, progressive), 960x720, SAR 1:1 DAR 4:3, 23.98 fps, 23.98 tbr, 1k tbn, 180k tbc (default)
This file displays horribly squeezed vertically and doesn't fill the screen left and right either, with the same TV settings. Also, when playing this file (and others, naturally) the TV doesn't offer the 14:9 display option, which is available e.g. for the file further up.

    


    Both have the same SAR and almost identical H:V ratios (1.345, 1.333), and thus almost identical DAR.

    


    My questions:

    1. Why, despite almost identical pixel ratios, DAR and SAR, are these files handled so differently in one and the same session on the same TV (Sony), please?

    2. With which method could I instruct ffmpeg to display the second file properly, too, please?
(I have already tried 'scale', but to no avail. Which could have been foreseen, since the ratios are already very close.)
My guess is that the (tv, bt709, progressive) messes things up.
(I have already tried to add yuv420p to the argument line, also to no avail.)

    Appreciate any help,

    


    Uwe

    


    I have already tried force_original_aspect_ratio, but here too nothing improved.
I also played with -aspect, but the aspect ratios as such are okay and would need individual corrections, which I can't and don't want to do for 2000+ files. A simple 16:9 doesn't cut it.
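
    One direction that is often suggested for this kind of mismatch, offered here only as a hedged sketch (the 1280x720 target size is an assumption, not something taken from the question): instead of forcing setdar=16/9 on every file, scale each source into a fixed 16:9 canvas while preserving its aspect ratio, pad the remainder and normalise the sample aspect ratio, for example by replacing setdar=16/9 with
-vf scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1
(quoted as needed for the shell or the gerbera profile). With such a chain, both a 39:29 source like File A and a 4:3 source like File B should come out pillarboxed inside a 16:9 frame rather than stretched; whether the TV then behaves would still need to be verified against the actual files.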

    


  • Error transcoding with FFmpeg: Error: Output format hls is not available

    6 May 2024, by asif mohmd

    I am using the FFmpeg library to transcode a video file into multiple resolutions and create an HLS (HTTP Live Streaming) master playlist.

    


    It takes a video file as input, but it does not give me the HLS playlist output. I get an error saying "Output format hls is not available". Only the output directory is created.

    


    I am using the FFmpeg 7.0 full build and have also tried older versions, the ffmpeg essentials build, and the Chocolatey package.

    


    If I remove the HLS part from this code, it creates 4 videos at different resolutions in my output.

    


    Note: I just tried this same code on my friend's MacBook, only changing ffmpeg.setFfmpegPath("C:\\ffmpeg\\bin\\ffmpeg.exe") to his ffmpeg directory.
It works perfectly on his MacBook.

    


import "dotenv/config";
import * as fs from "fs";
import * as path from "path";
import ffmpeg from "fluent-ffmpeg";
import crypto from "crypto";

ffmpeg.setFfmpegPath("C:\\ffmpeg\\bin\\ffmpeg.exe");

export const FFmpegTranscoder = async (file: any): Promise<any> => {
  try {
    console.log("Starting script");
    console.time("req_time");

    const randomName = (bytes = 32) =>
      crypto.randomBytes(bytes).toString("hex");
    const fileName = randomName();
    const directoryPath = path.join(__dirname, "..", "..", "input");
    const filePath = path.join(directoryPath, `${fileName}.mp4`);

    if (!fs.existsSync(directoryPath)) {
      fs.mkdirSync(directoryPath, { recursive: true });
    }

    const paths = await new Promise<any>((resolve, reject) => {
      fs.writeFile(filePath, file, async (err) => {
        if (err) {
          console.error("Error saving file:", err);
          throw err;
        }
        console.log("File saved successfully:", filePath);

        try {
          const outputDirectoryPath = await transcodeWithFFmpeg(
            fileName,
            filePath
          );
          resolve({ directoryPath, filePath, fileName, outputDirectoryPath });
        } catch (error) {
          console.error("Error transcoding with FFmpeg:", error);
        }
      });
    });
    return paths;
  } catch (e: any) {
    console.log(e);
  }
};

const transcodeWithFFmpeg = async (fileName: string, filePath: string) => {
  const directoryPath = path.join(
    __dirname,
    "..",
    "..",
    `output/hls/${fileName}`
  );

  if (!fs.existsSync(directoryPath)) {
    fs.mkdirSync(directoryPath, { recursive: true });
  }

  const resolutions = [
    {
      resolution: "256x144",
      videoBitrate: "200k",
      audioBitrate: "64k",
    },
    {
      resolution: "640x360",
      videoBitrate: "800k",
      audioBitrate: "128k",
    },
    {
      resolution: "1280x720",
      videoBitrate: "2500k",
      audioBitrate: "192k",
    },
    {
      resolution: "1920x1080",
      videoBitrate: "5000k",
      audioBitrate: "256k",
    },
  ];

  const variantPlaylists: { resolution: string; outputFileName: string }[] = [];

  for (const { resolution, videoBitrate, audioBitrate } of resolutions) {
    console.log(`HLS conversion starting for ${resolution}`);
    const outputFileName = `${fileName}_${resolution}.m3u8`;
    const segmentFileName = `${fileName}_${resolution}_%03d.ts`;

    await new Promise<void>((resolve, reject) => {
      ffmpeg(filePath)
        .outputOptions([
          `-c:v h264`,
          `-b:v ${videoBitrate}`,
          `-c:a aac`,
          `-b:a ${audioBitrate}`,
          `-vf scale=${resolution}`,
          `-f hls`,
          `-hls_time 10`,
          `-hls_list_size 0`,
          `-hls_segment_filename ${directoryPath}/${segmentFileName}`,
        ])
        .output(`${directoryPath}/${outputFileName}`)
        .on("end", () => resolve())
        .on("error", (err) => reject(err))
        .run();
    });
    const variantPlaylist = {
      resolution,
      outputFileName,
    };
    variantPlaylists.push(variantPlaylist);
    console.log(`HLS conversion done for ${resolution}`);
  }
  console.log(`HLS master m3u8 playlist generating`);

  let masterPlaylist = variantPlaylists
    .map((variantPlaylist) => {
      const { resolution, outputFileName } = variantPlaylist;
      const bandwidth =
        resolution === "256x144"
          ? 264000
          : resolution === "640x360"
          ? 1024000
          : resolution === "1280x720"
          ? 3072000
          : 5500000;
      return `#EXT-X-STREAM-INF:BANDWIDTH=${bandwidth},RESOLUTION=${resolution}\n${outputFileName}`;
    })
    .join("\n");
  masterPlaylist = `#EXTM3U\n` + masterPlaylist;

  const masterPlaylistFileName = `${fileName}_master.m3u8`;

  const masterPlaylistPath = `${directoryPath}/${masterPlaylistFileName}`;
  fs.writeFileSync(masterPlaylistPath, masterPlaylist);
  console.log(`HLS master m3u8 playlist generated`);
  return directoryPath;
};

    My console.log is:

Starting script
HLS conversion starting for 256x144
Error transcoding with FFmpeg: Error: Output format hls is not available
    at C:\Users\asifa\Desktop\Genius Grid\Transcode-service\node_modules\fluent-ffmpeg\lib\capabilities.js:589:21
    at nextTask (C:\Users\asifa\Desktop\Genius Grid\Transcode-service\node_modules\async\dist\async.js:5791:13)
    at next (C:\Users\asifa\Desktop\Genius Grid\Transcode-service\node_modules\async\dist\async.js:5799:13)
    at C:\Users\asifa\Desktop\Genius Grid\Transcode-service\node_modules\async\dist\async.js:329:20
    at C:\Users\asifa\Desktop\Genius Grid\Transcode-service\node_modules\fluent-ffmpeg\lib\capabilities.js:549:7
    at handleExit (C:\Users\asifa\Desktop\Genius Grid\Transcode-service\node_modules\fluent-ffmpeg\lib\processor.js:170:11)
    at ChildProcess.<anonymous> (C:\Users\asifa\Desktop\Genius Grid\Transcode-service\node_modules\fluent-ffmpeg\lib\processor.js:184:11)
    at ChildProcess.emit (node:events:518:28)
    at ChildProcess.emit (node:domain:488:12)
    at Process.ChildProcess._handle.onexit (node:internal/child_process:294:12)

    I am using Windows 11 and FFmpeg version 7.0. I repeatedly checked, using CMD commands, that my FFmpeg was installed correctly, confirmed the environment variable path, experimented with various FFmpeg versions, and tried the FFmpeg full-build Chocolatey package.

    In the command line it works perfectly:

PS C:\Users\asifa\Desktop\test fmmpeg> ffmpeg -hide_banner -y -i .\SampleVideo_1280x720_30mb.mp4 -vf scale=w=640:h=360:force_original_aspect_ratio=decrease -c:a aac -b:v 800k -c:v h264 -b:a 128k -f hls -hls_time 14 -hls_list_size 0 -hls_segment_filename beach/480p_%03d.ts beach/480p.m3u8
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '.\SampleVideo_1280x720_30mb.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    creation_time   : 1970-01-01T00:00:00.000000Z
    encoder         : Lavf53.24.2
  Duration: 00:02:50.86, start: 0.000000, bitrate: 1474 kb/s
  Stream #0:0[0x1](und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 1086 kb/s, 25 fps, 25 tbr, 12800 tbn (default)
      Metadata:
        creation_time   : 1970-01-01T00:00:00.000000Z
        handler_name    : VideoHandler
        vendor_id       : [0][0][0][0]
  Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 383 kb/s (default)
      Metadata:
        creation_time   : 1970-01-01T00:00:00.000000Z
        handler_name    : SoundHandler
        vendor_id       : [0][0][0][0]
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 000001ef1288ec00] using SAR=1/1
[libx264 @ 000001ef1288ec00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 000001ef1288ec00] profile High, level 3.0, 4:2:0, 8-bit
[libx264 @ 000001ef1288ec00] 264 - core 164 r3190 7ed753b - H.264/MPEG-4 AVC codec - Copyleft 2003-2024 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=11 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=800 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, hls, to 'beach/480p.m3u8':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf61.1.100
  Stream #0:0(und): Video: h264, yuv420p(progressive), 640x360 [SAR 1:1 DAR 16:9], q=2-31, 800 kb/s, 25 fps, 90k tbn (default)
      Metadata:
        creation_time   : 1970-01-01T00:00:00.000000Z
        handler_name    : VideoHandler
        vendor_id       : [0][0][0][0]
        encoder         : Lavc61.3.100 libx264
      Side data:
        cpb: bitrate max/min/avg: 0/0/800000 buffer size: 0 vbv_delay: N/A
  Stream #0:1(und): Audio: aac (LC), 48000 Hz, 5.1, fltp, 128 kb/s (default)
      Metadata:
        creation_time   : 1970-01-01T00:00:00.000000Z
        handler_name    : SoundHandler
        vendor_id       : [0][0][0][0]
        encoder         : Lavc61.3.100 aac
[hls @ 000001ef12482040] Opening 'beach/480p_000.ts' for writing speed=15.5x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_001.ts' for writing speed=17.9x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_002.ts' for writing speed=17.3x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_003.ts' for writing speed=19.4x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_004.ts' for writing speed=19.3x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_005.ts' for writing speed=19.2x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_006.ts' for writing
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_007.ts' for writing speed=19.4x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_008.ts' for writing speed=19.5x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_009.ts' for writing speed=19.5x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_010.ts' for writing speed=19.4x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[hls @ 000001ef12482040] Opening 'beach/480p_011.ts' for writing speed=19.4x
[hls @ 000001ef12482040] Opening 'beach/480p.m3u8.tmp' for writing
[out#0/hls @ 000001ef11d4e880] video:17094KiB audio:2680KiB subtitle:0KiB other streams:0KiB global headers:0KiB muxing overhead: unknown
frame= 4271 fps=485 q=-1.0 Lsize=N/A time=00:02:50.76 bitrate=N/A speed=19.4x
[libx264 @ 000001ef1288ec00] frame I:45    Avg QP:10.29  size: 60418
[libx264 @ 000001ef1288ec00] frame P:1914  Avg QP:14.53  size:  5582
[libx264 @ 000001ef1288ec00] frame B:2312  Avg QP:20.63  size:  1774
[libx264 @ 000001ef1288ec00] consecutive B-frames: 22.9% 11.9%  8.6% 56.6%
[libx264 @ 000001ef1288ec00] mb I  I16..4: 15.6% 32.1% 52.2%
[libx264 @ 000001ef1288ec00] mb P  I16..4:  0.3%  3.4%  1.2%  P16..4: 20.3% 10.0% 13.1%  0.0%  0.0%    skip:51.8%
[libx264 @ 000001ef1288ec00] mb B  I16..4:  0.1%  0.9%  0.4%  B16..8: 17.2%  5.6%  2.8%  direct: 2.0%  skip:71.0%  L0:41.5% L1:44.1% BI:14.4%
[libx264 @ 000001ef1288ec00] final ratefactor: 16.13
[libx264 @ 000001ef1288ec00] 8x8 transform intra:58.4% inter:51.7%
[libx264 @ 000001ef1288ec00] coded y,uvDC,uvAC intra: 86.7% 94.3% 78.8% inter: 12.6% 15.0% 4.5%
[libx264 @ 000001ef1288ec00] i16 v,h,dc,p: 17% 42% 14% 28%
[libx264 @ 000001ef1288ec00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 19% 11%  6%  7%  8%  8%  9%  9%
[libx264 @ 000001ef1288ec00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 18% 12%  6%  9%  9%  8%  8%  7%
[libx264 @ 000001ef1288ec00] i8c dc,h,v,p: 44% 24% 20% 12%
[libx264 @ 000001ef1288ec00] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 000001ef1288ec00] ref P L0: 78.3%  9.7%  8.8%  3.2%
[libx264 @ 000001ef1288ec00] ref B L0: 92.5%  6.0%  1.5%
[libx264 @ 000001ef1288ec00] ref B L1: 97.1%  2.9%
[libx264 @ 000001ef1288ec00] kb/s:819.63
[aac @ 000001ef128f7c80] Qavg: 452.137

    When I use .on('start', (cmdline) => console.log(cmdline)) together with the -f hls option, the error "Output format hls is not available" appears, as previously mentioned. But my console.log looks like this if I run my code without the -f hls option:

    Without the -f hls option:

await new Promise<void>((resolve, reject) => {
  ffmpeg(filePath)
    .outputOptions([
      `-c:v h264`,
      `-b:v ${videoBitrate}`,
      `-c:a aac`,
      `-b:a ${audioBitrate}`,
      `-vf scale=${resolution}`,
      `-hls_time 10`,
      `-hls_list_size 0`,
      `-hls_segment_filename ${directoryPath}/${segmentFileName}`,
    ])
    .output(`${directoryPath}/${outputFileName}`)
    .on('start', (cmdline) => console.log(cmdline))
    .on("end", () => resolve())
    .on("error", (err) => reject(err))
    .run();
});

    Console.log is:

Starting script
File saved successfully: C:\Users\asifa\Desktop\Genius Grid\Transcode-service\input\c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1.mp4
HLS conversion starting for 256x144
ffmpeg -i C:\Users\asifa\Desktop\Genius Grid\Transcode-service\input\c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1.mp4 -y -c:v h264 -b:v 200k -c:a aac -b:a 64k -vf scale=256x144 -hls_time 10 -hls_list_size 0 -hls_segment_filename C:\Users\asifa\Desktop\Genius Grid\Transcode-service\output\hls\c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1/c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1_256x144_%03d.ts C:\Users\asifa\Desktop\Genius Grid\Transcode-service\output\hls\c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1/c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1_256x144.m3u8
Error transcoding with FFmpeg: Error: ffmpeg exited with code 2880417800: Unrecognized option 'hls_segment_filename C:\Users\asifa\Desktop\Genius Grid\Transcode-service\output\hls\c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1/c9fcf43726e617a295b203d5acb7b81658b5f05f80eafc74cee21b053422fef1_256x144_%03d.ts'.
Error splitting the argument list: Option not found
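
    Two hedged observations from the logs above, offered as pointers rather than a diagnosis: the "Output format hls is not available" stack trace ends inside fluent-ffmpeg's capabilities.js, i.e. in its capability probe rather than in the encode itself, and the second run shows the option name and its value arriving at ffmpeg as one fused argument ('hls_segment_filename C:\Users\asifa\Desktop\Genius Grid\...'), which is worth a closer look given the space in the "Genius Grid" path. A minimal sketch for checking the first point, using fluent-ffmpeg's documented getAvailableFormats() call (the binary path is the one from the question; whether the globally set path is picked up by this probe should be verified):

import ffmpeg from "fluent-ffmpeg";

// Same binary path as in the question.
ffmpeg.setFfmpegPath("C:\\ffmpeg\\bin\\ffmpeg.exe");

// fluent-ffmpeg parses the output of `ffmpeg -formats` and hands back an
// object keyed by format name. If formats["hls"] is undefined here, the
// capability probe is what raises "Output format hls is not available",
// not the transcode itself.
ffmpeg.getAvailableFormats((err, formats) => {
  if (err) {
    console.error("Could not query formats:", err);
    return;
  }
  console.log("hls entry:", formats && formats["hls"]);
});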

  • Grand Unified Theory of Compact Disc

    1 February 2013, by Multimedia Mike — General

    This is something I started writing about a decade ago (and I almost certainly have some of it wrong), back when compact discs still had a fair amount of relevance. Back around 2002, after a few years investigating multimedia technology, I took an interest in compact discs of all sorts. Even though there may seem to be a wide range of CD types, I generally found that they’re all fundamentally the same. I thought I would finally publish something, incomplete though it may be.

    Physical Perspective
    There are a lot of ways to look at a compact disc. First, there’s the physical format, where a laser detects where pits/grooves have disturbed the smooth surface (a.k.a. lands). A lot of technical descriptions claim that these lands and pits on a CD correspond to ones and zeros. That’s not actually true, but you have to decide what level of abstraction you care about, and that abstraction is good enough if you only care about the discs from a software perspective.

    Grand Unified Theory (Software Perspective)
    Looking at a disc from a software perspective, I have generally found it useful to view a CD as a combination of 2 main components:

    • a table of contents (TOC)
    • a long string of sectors, each of which is 2352 bytes long

    I like to believe that’s pretty much all there is to it. All of the information on a CD is stored as a string of sectors that might be chopped up into a series of anywhere from 1-99 individual tracks. The exact sector locations where these individual tracks begin are defined in the TOC.

    Audio CDs (CD-DA / Red Book)
    The initial purpose for the compact disc was to store digital audio. The strange sector size of 2352 bytes is an artifact of this original charter. “CD quality audio”, as any multimedia nerd knows, is formally defined as stereo PCM samples that are each 16 bits wide and played at a frequency of 44100 Hz.

    (44100 audio frames / 1 second) * (2 samples / audio frame) * 
      (16 bits / 1 sample) * (1 byte / 8 bits) = 176,400 bytes / second
    (176,400 bytes / 1 second) / (2352 bytes / 1 sector) = 75
    

    75 is the number of sectors required to store a single second of CD-quality audio. A single sector stores 1/75th of a second, or a ‘frame’ of audio (though I think ‘frame’ gets tossed around at all levels when describing CD formats).
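
    To make the model concrete, here is a short hedged sketch (the type and function names are hypothetical, not from the post) that applies the two figures just derived, 2352 bytes per sector and 75 sectors per second, to a TOC expressed as track start sectors:

// Sketch of the post's model: a TOC of 1-99 track entries plus one long
// string of 2352-byte sectors. Names are made up for illustration.
const BYTES_PER_SECTOR = 2352;
const SECTORS_PER_SECOND = 75;

interface TocEntry {
  track: number;        // 1..99
  startSector: number;  // sector where this track begins, as listed in the TOC
  isAudio: boolean;     // the TOC also notes whether a track is audio or data
}

// Given the TOC and the total sector count of the disc, compute each track's
// length in sectors, bytes and (playback) seconds.
function trackLengths(toc: TocEntry[], totalSectors: number) {
  return toc.map((entry, i) => {
    const end = i + 1 < toc.length ? toc[i + 1].startSector : totalSectors;
    const sectors = end - entry.startSector;
    return {
      track: entry.track,
      sectors,
      bytes: sectors * BYTES_PER_SECTOR,
      seconds: sectors / SECTORS_PER_SECOND,
    };
  });
}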

    The term “red book” is thrown around in relation to audio CDs. There is a series of rainbow books that define various optical disc standards and the red book describes audio CDs.

    Basic Data CD-ROMs (Mode 1 / Yellow Book)
    Somewhere along the line, someone decided that general digital information could be stored on these discs. Hence, the CD-ROM was born. The standard model above still applies: a TOC and a string of 2352-byte sectors. However, it’s generally only useful to have a single track on a CD-ROM. Thus, the TOC only lists a single track. That single track can easily span the entire disc (something that would be unusual for a typical audio CD).

    While the model is mostly the same, the most notable difference between an audio CD and a plain CD-ROM is that, although each sector is 2352 bytes long, only 2048 bytes are used to store the actual data payload. The remaining bytes are used for synchronization and additional error detection/correction.

    At least, the foregoing is true for mode 1 / form 1 CD-ROMs (which are the most common). “Mode 1” CD-ROMs are defined by a publication called the yellow book. There is also mode 1 / form 2. This forgoes the additional error detection and correction afforded by form 1 and dedicates 2336 of the 2352 sector bytes to the data payload.
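
    For mode 1 / form 1, the commonly documented byte budget of a sector fits the figures above (2352 bytes in total, 2048 of payload). The breakdown below is a sketch from memory and worth double-checking against the yellow book before relying on it:

// Approximate byte layout of a mode 1 / form 1 CD-ROM sector.
const MODE1_FORM1_SECTOR = {
  sync: 12,       // synchronization pattern
  header: 4,      // sector address (minute/second/frame) plus mode byte
  userData: 2048, // the payload visible to filesystems
  edc: 4,         // error detection code
  reserved: 8,
  ecc: 276,       // error correction code
}; // 12 + 4 + 2048 + 4 + 8 + 276 = 2352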

    CD-ROM XA (Mode 2 / Green Book)
    From a software perspective, these are similar to mode 1 CD-ROMs. There are also 2 forms here. The first form gives a 2048-byte data payload while the second form yields a 2324-byte data payload.

    Video CD (VCD / White Book)
    These are CD-ROM XA discs that carry MPEG-1 video and audio data.

    Photo CD (Beige Book)
    This is something I have never personally dealt with. But it’s supposed to conform to the CD-ROM XA standard and probably fits into my model. It seems to date back to early in the CD-ROM era when CDs were particularly cost prohibitive.

    Multisession CDs (Blue Book)
    Okay, I admit that this confuses me a bit. Multisession discs allow a user to burn multiple sessions to a single recordable disc. I.e., burn a lump of data, then burn another lump at a later time, and the final result will look like all the lumps were recorded as the same big lump. I remember this being incredibly useful and cost effective back when recordable CDs cost around US$10 each (vs. being able to buy a spindle of 100 CD-Rs for US$10 or less now). Studying the cdrom.h file for the Linux OS, I found a system call named CDROMMULTISESSION that returns the sector address of the start of the last session. If I were to hypothesize about how to make this fit into my model, I might guess that the TOC has some hint that the disc was recorded in multisession (which needs to be decided up front) and the CDROMMULTISESSION call is made to find the last session. Or it could be that a disc read initialization operation always leads off with the CDROMMULTISESSION query in order to determine this.

    I suppose I could figure out how to create a multisession disc with modern software, or possibly dig up a multisession disc from 15+ years ago, and then figure out how it should be read.

    CD-i
    This type puzzles me as well. I do have some CD-i discs and I thought that I could read them just fine (the last time I looked, which was many years ago). But my research for this blog post has me thinking that I might not have been seeing the entire picture when I first studied my CD-i samples. I was able to see some of the data, but sources indicate that only proper CD-i hardware is able to see all of the data on the disc (apparently, the TOC doesn’t show all of the sectors on disc).

    Hybrid CDs (Data + Audio)
    At some point, it became a notable selling point for an audio CD to have a data track with bonus features. Even more common (particularly in the early era of CD-ROMs) were computer and console games that used the first track of a disc for all the game code and assets and the remaining tracks for beautifully rendered game audio that could also be enjoyed outside the game. Same model: the TOC points to the various tracks and also makes notes about which ones are data and which are audio.

    There seem to be 2 distinct things described above. One type is the mixed mode CD which generally has the data in the first track and the audio in tracks 2..n. Then there is the enhanced CD, which apparently used multisession recording and put the data at the end. I think that the reasoning for this is that most audio CD player hardware would only read tracks from the first session and would have no way to see the data track. This was a positive thing. By contrast, when placing a mixed-mode CD into an audio player, the data track would be rendered as nonsense noise.

    Subchannels
    There’s at least one small detail that my model ignores : subchannels. CDs can encode bits of data in subchannels in sectors. This is used for things like CD-Text and CD-G. I may need to revisit this.

    In Summary
    There’s still a lot of ground to cover, like how those sectors might be formatted to show something useful (e.g., filesystems), and how the model applies to other types of optical discs. Sounds like something for another post.