Advanced search

Media (0)

Word: - Tags -/diogene

No media matching your criteria is available on the site.

Other articles (112)

  • Customize by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (14606)

  • What is “interoperable TTML”?

    19 September 2012, by silvia

    I’ve just tried to come to terms with the latest state of TTML, the Timed Text Markup Language.

    TTML has been specified by the W3C Timed Text Working Group and released as a Recommendation v1.0 in November 2010. Since then, several organisations have tried to adopt it as their caption file format, including SMPTE, the EBU (European Broadcasting Union), and Microsoft.

    Both Microsoft and the EBU looked at TTML in detail and decided that, to make it usable for their use cases, its functionality needed to be restricted.

    EBU-TT

    The EBU released EBU-TT, which restricts the set of valid attributes and features. “The EBU-TT format is intended to constrain the features provided by TTML, especially to make EBU-TT more suitable for the use with broadcast video and web video applications.” (see EBU-TT).

    In addition, EBU-specific namespaces were introduced to extend TTML with EBU-specific data types, e.g. ebuttdt:frameRateMultiplierType or ebuttdt:smpteTimingType. Similarly, a bunch of metadata elements were introduced, e.g. ebuttm:documentMetadata, ebuttm:documentEbuttVersion, or ebuttm:documentIdentifier.

    The use of namespaces as an extensibility mechanism ensures that EBU-TT files continue to be valid TTML files. However, any vanilla TTML parser will not know what to do with these custom extensions and will drop them on the floor.

    Simple Delivery Profile

    With the intention of making TTML ready for “internet delivery of Captions originated in the United States”, Microsoft proposed a “Simple Delivery Profile for Closed Captions (US)” (see Simple Profile). The Simple Profile is also a restriction of TTML.

    Unfortunately, the Microsoft profile is not the same as the EBU-TT profile: for example, it contains the “set” element, which is not conformant in EBU-TT. Similarly, the supported style features differ, e.g. the Simple Profile supports “display-region”, while EBU-TT does not. On the other hand, EBU-TT supports monospace, sans-serif and serif fonts, while the Simple Profile does not.

    Thus, files created for the Simple Delivery Profile will not work on players that expect EBU-TT, and vice versa.

    Fortunately, the Simple Delivery Profile does not introduce any new namespaces or features, so at least it is a true subset of TTML and not both a restriction and an extension like EBU-TT.

    SMPTE-TT

    SMPTE also created a version of the TTML standard called SMPTE-TT. SMPTE did not decide on a subset of TTML for its purposes – it simply adopted the complete set. “This Standard provides a framework for timed text to be supported for content delivered via broadband means,…” (see SMPTE-TT).

    However, SMPTE extended TTML in SMPTE-TT with an ability to store a binary blob with captions in another format. This allows using SMPTE-TT as a transport format for any caption format and is deemed to help with “backwards compatibility”.

    Now, instead of specifying a profile, SMPTE decided to define how to convert CEA-608 captions to SMPTE-TT. Even if it’s not called a “profile”, that’s actually what it is. It even has its own namespace: “m608:”.

    Conclusion

    With all these different versions of TTML, I ask myself what a video player that claims support for TTML will do to get something working. The only chance it has is to implement all the extensions defined in all the different profiles. I pity the player that has to deal with a SMPTE-TT file that has a binary blob in it and is expected to be able to decode this.

    Now, what is a caption author supposed to do when creating TTML? They obviously cannot expect all players to be able to play back all TTML versions. Should they create different files depending on what platform they are targeting, i.e. an EBU-TT version, a SMPTE-TT version, a vanilla TTML version, and a Simple Delivery Profile version? Or should they throw all the features of all the versions into one TTML file and hope that players will pick out the right things they require and drop the rest on the floor?

    Maybe the best way to progress would be to make a list of the “safe” features : those features that every TTML profile supports. That may be the best way to get an “interoperable TTML” file. Here’s me hoping that this minimal set of features doesn’t just end up being the usual (starttime, endtime, text) triple.
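    For illustration only (a sketch of mine, not taken from any of the specs), that minimal set would reduce every caption to little more than a list of cues:

// hypothetical lowest-common-denominator cue model: the only fields
// one can currently rely on across all the TTML profiles discussed above
const cues = [
  { start: 0.0, end: 2.5, text: 'Hello world.' },
  { start: 2.5, end: 5.0, text: 'A second caption.' },
];

    Everything beyond this (regions, styling, fonts) is exactly where the profiles diverge.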

    UPDATE:

    I just found out that UltraViolet has its own profile of SMPTE-TT called CFF-TT (see UltraViolet FAQ and spec). It makes some SMPTE-TT fields optional, but introduces a new @forcedDisplayMode attribute under its own namespace “cff:”.

  • How to overlay multiple landscape regions from a single input to a new portrait video? FFmpeg

    27 August 2023, by 3V1LXD

    I have an Electron program that selects multiple regions of a landscape video and lets you rearrange them on a portrait canvas. I'm having trouble building the proper FFmpeg command to create the video. I have it somewhat working: I can export 2 layers, but I can't export if I only have 1 layer or if I have 3 or more layers selected.

    


    2 regions of video selected

    


    layers [
  { top: 0, left: 658, width: 576, height: 1080 },
  { top: 262, left: 0, width: 576, height: 324 }
]
newPositions [
  { top: 0, left: 0, width: 576, height: 1080 },
  { top: 0, left: 0, width: 576, height: 324 }
]
filtergraph [0]crop=576:1080:658:0,scale=576:1080[v0];[0]crop=576:324:0:262,scale=576:324[v1];[v0][v1]overlay=0:0:0:0[out]

No error, export successful


    


    1 region selected

    


    layers [ { top: 0, left: 650, width: 576, height: 1080 } ]
newPositions [ { top: 0, left: 0, width: 576, height: 1080 } ]
filtergraph [0]crop=576:1080:650:0,scale=576:1080[v0];[v0]overlay=0:0[out]

FFmpeg error: [fc#0 @ 000001dd3b6db0c0] Cannot find a matching stream for unlabeled input pad overlay
Error initializing complex filters: Invalid argument


    


    3 regions of video selected

    


    layers [
  { top: 0, left: 641, width: 576, height: 1080 },
  { top: 250, left: 0, width: 576, height: 324 },
  { top: 756, left: 0, width: 576, height: 324 }
]
newPositions [
  { top: 0, left: 0, width: 576, height: 1080 },
  { top: 0, left: 0, width: 576, height: 324 },
  { top: 756, left: 0, width: 576, height: 324 }
]
filtergraph [0]crop=576:1080:641:0,scale=576:1080[v0];[0]crop=576:324:0:250,scale=576:324[v1];[0]crop=576:324:0:756,scale=576:324[v2];[v0][v1][v2]overlay=0:0:0:0:0:756[out]

FFmpeg error: [AVFilterGraph @ 0000018faf2189c0] More input link labels specified for filter 'overlay' than it has inputs: 3 > 2
[AVFilterGraph @ 0000018faf2189c0] Error linking filters

FFmpeg error: Failed to set value '[0]crop=576:1080:698:0,scale=576:1080[v0];[0]crop=576:324:0:264,scale=576:324[v1];[0]crop=576:324:0:756,scale=576:324[v2];[v0][v1][v2]overlay=0:0:0:0:0:0[out]' for option 'filter_complex': Invalid argument
Error parsing global options: Invalid argument


    


    I can't figure out how to construct the proper overlay command. Here is the JS code I'm using in my Electron app.

    


    ipcMain.handle('export-video', async (_event, args) => {
  const { videoFile, outputName, layers, newPositions } = args;
  const ffmpegPath = path.join(__dirname, 'bin', 'ffmpeg');
  const outputDir = checkOutputDir();
  
  // use same video for each layer as input
  // crop, scale, and position each layer
  // overlay each layer on top of each other

  // export video
  console.log('layers', layers);
  console.log('newPositions', newPositions);

  let filtergraph = '';

  for (let i = 0; i < layers.length; i++) {
    const { top, left, width, height } = layers[i];
    const { width: newWidth, height: newHeight } = newPositions[i];
    const filter = `[0]crop=${width}:${height}:${left}:${top},scale=${newWidth}:${newHeight}[v${i}];`;
    filtergraph += filter;
  }

  for (let i = 0; i < layers.length; i++) {
    const filter = `[v${i}]`;
    filtergraph += filter;
  }

  filtergraph += `overlay=`;
  for (let i = 0; i < layers.length; i++) {
    const { top: newTop, left: newLeft } = newPositions[i];
    const overlay = `${newLeft}:${newTop}:`;
    filtergraph += overlay;
  }

  filtergraph = filtergraph.slice(0, -1); // remove the trailing ':'
  filtergraph += `[out]`;
  
  console.log('filtergraph', filtergraph);

  const ffmpeg = spawn(ffmpegPath, [
    '-i', videoFile,
    '-filter_complex', filtergraph,
    '-map', '[out]',
    '-c:v', 'libx264',
    '-preset', 'ultrafast',
    '-crf', '18',
    '-y',
    path.join(outputDir, `${outputName}`)
  ]);  

  ffmpeg.stdout.on('data', (data) => {
    console.log(`FFmpeg output: ${data}`);
  });

  ffmpeg.stderr.on('data', (data) => {
    console.error(`FFmpeg error: ${data}`);
  });

  ffmpeg.on('close', (code) => {
    console.log(`FFmpeg process exited with code ${code}`);
    // event.reply('ffmpeg-export-done'); // Notify the renderer process
  });
});


    


    Any advice would be helpful; the docs are confusing. Thanks.
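    For what it's worth, the two failures above are consistent with overlay being a strictly two-input filter: with one layer, [v0]overlay leaves the filter's second input pad unlabeled, and with three layers, [v0][v1][v2]overlay supplies three input labels to a filter that only has two inputs. Here is a sketch of the usual fix, chaining one two-input overlay per additional layer (the helper name buildFiltergraph is hypothetical, not from the original code):

function buildFiltergraph(layers, newPositions) {
  // one crop+scale chain per layer, all fed from input 0
  let graph = layers.map((l, i) => {
    const p = newPositions[i];
    return `[0]crop=${l.width}:${l.height}:${l.left}:${l.top},scale=${p.width}:${p.height}[v${i}]`;
  }).join(';');

  // a single layer needs no overlay at all; pass it straight through
  if (layers.length === 1) {
    return `${graph};[v0]null[out]`;
  }

  // chain pairwise: [v0][v1]overlay[o1]; [o1][v2]overlay[o2]; ... [out]
  let prev = '[v0]';
  for (let i = 1; i < layers.length; i++) {
    const { left, top } = newPositions[i];
    const label = i === layers.length - 1 ? '[out]' : `[o${i}]`;
    graph += `;${prev}[v${i}]overlay=${left}:${top}${label}`;
    prev = `[o${i}]`;
  }
  return graph;
}

    For two layers this reduces to the same [v0][v1]overlay=...[out] shape as the export that already succeeds, and it stays valid for one or many layers.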

    


    Edit
I'm getting closer with this. Output:

    


    layers [
  { top: 0, left: 677, width: 576, height: 1080 },
  { top: 240, left: 0, width: 576, height: 324 }
]
newPositions [
  { top: 0, left: 0, width: 576, height: 1080 },
  { top: 0, left: 0, width: 576, height: 324 }
]
filtergraph [0]crop=576:1080:677:0,scale=576:1080[v0];[0]crop=576:324:0:240,scale=576:324[v1];[0][v0]overlay=0:0[o0];[o0][v1]overlay=0:0[o1]


    


    ipcMain.handle('export-video', async (_event, args) => {
  const { videoFile, outputName, layers, newPositions } = args;
  const ffmpegPath = path.join(__dirname, 'bin', 'ffmpeg');
  const outputDir = checkOutputDir();
  
  // use same video for each layer as input
  // crop, scale, and position each layer
  // overlay each layer on top of each other

  // export video
  console.log('layers', layers);
  console.log('newPositions', newPositions);

  let filtergraph = '';

  for (let i = 0; i < layers.length; i++) {
    const { top, left, width, height } = layers[i];
    const { width: newWidth, height: newHeight } = newPositions[i];
    const filter = `[0]crop=${width}:${height}:${left}:${top},scale=${newWidth}:${newHeight}[v${i}];`;
    filtergraph += filter;
  }

  for (let i = 0; i < layers.length; i++) {
    if (i === 0) {
      filtergraph += `[0][v${i}]overlay=`;
    } else {
      filtergraph += `[o${i-1}][v${i}]overlay=`;
    }
    const { top: newTop, left: newLeft } = newPositions[i];
    let overlay = '';
    if (i !== layers.length - 1) {
      overlay = `${newLeft}:${newTop}[o${i}];`;
    } else {
      overlay = `${newLeft}:${newTop};`;
    }
    filtergraph += overlay;
  }

  filtergraph = filtergraph.slice(0, -1); // remove the trailing ';'
  filtergraph += `[o${layers.length-1}]`;
  
  console.log('filtergraph', filtergraph);

  const ffmpeg = spawn(ffmpegPath, [
    '-i', videoFile,
    '-filter_complex', filtergraph,
    '-map', `[o${layers.length-1}]`,
    '-c:v', 'libx264',
    '-preset', 'ultrafast',
    '-crf', '18',
    '-y',
    path.join(outputDir, `${outputName}`)
  ]);  

  ffmpeg.stdout.on('data', (data) => {
    console.log(`FFmpeg output: ${data}`);
  });

  ffmpeg.stderr.on('data', (data) => {
    console.error(`FFmpeg error: ${data}`);
  });

  ffmpeg.on('close', (code) => {
    console.log(`FFmpeg process exited with code ${code}`);
    // event.reply('ffmpeg-export-done'); // Notify the renderer process
  });
});


    


    The problem I'm having now is that it's overlaying the regions over the original input and keeping the landscape dimensions instead of making a portrait video.
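    One way to decouple the output size from input 0 (a sketch, assuming a fixed 576x1080 portrait canvas; color is ffmpeg's built-in blank-video source) is to composite the layers onto a generated base instead of onto the original video:

// build the per-layer crop+scale chains as before, then a blank portrait canvas
let graph = layers.map((l, i) => {
  const p = newPositions[i];
  return `[0]crop=${l.width}:${l.height}:${l.left}:${l.top},scale=${p.width}:${p.height}[v${i}]`;
}).join(';');
graph += ';color=c=black:s=576x1080[base]';

// overlay every layer onto the portrait base; shortest=1 makes the
// otherwise endless color source stop when the layers end
let prev = '[base]';
for (let i = 0; i < layers.length; i++) {
  const { top: newTop, left: newLeft } = newPositions[i];
  const label = i === layers.length - 1 ? '[out]' : `[o${i}]`;
  graph += `;${prev}[v${i}]overlay=${newLeft}:${newTop}:shortest=1${label}`;
  prev = `[o${i}]`;
}

    With -map '[out]' unchanged, the output keeps the 576x1080 canvas rather than inheriting the landscape dimensions of the source, and the same loop handles 1, 2, or more layers.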

    


  • I want to take any audio from a file and encode it as PCM_ALAW. My example is a .m4a file to a .wav file

    22 November 2023, by Clockman

    I have been working on this for a while now. While I am generally new to the ffmpeg library, I have studied it a bit. The challenge I have is that at the point of writing to the file I get the following exception.

    


    "Exception thrown at 0x00007FFACA8305B3 (avformat-60.dll) in FfmpegPractice.exe : 0xC0000005 : Access violation writing location 0x0000000000000000.". I understand this means am writing to an uninitialized buffer am unable to discover why this is happening. The exception call stack shows the following

    


    avformat-60.dll!avformat_write_header() C
avformat-60.dll!ff_write_chained()  C
avformat-60.dll!ff_write_chained()  C
avformat-60.dll!av_write_frame()    C
FfmpegPractice.exe!main() Line 215  C++


    


    Some things I have tried

    


    This code is part of a larger project built with CMake, but for some reason I could not step into the ffmpeg library while debugging. So I recompiled ffmpeg and ensured debugging was enabled so I could drill down to the root cause, but I still could not step into the ffmpeg library.

    


    I then created a minimal Visual Studio C++ console project, and I still could not step into the code.

    


    I have read through many ffmpeg docs and whatever I could find on the internet, and I still could not solve it.

    


    This is the code

    


#include <iostream>

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswresample/swresample.h>
#include <libavutil/opt.h>
#include <libavutil/audio_fifo.h>
}

using namespace std;

// in audio file
string filename{ "rapid_caller_test.m4a" };
AVFormatContext* pFormatCtx{};
AVCodecContext* pCodecCtx{};
AVStream* pStream{};

// out audio file
string outFilename{ "output.wav" };
AVFormatContext* pOutFormatCtx{ nullptr };
AVCodecContext* pOutCodecCtx{ nullptr };
AVIOContext* pOutIoContext{ nullptr };
const AVCodec* pOutCodec{ nullptr };
AVStream* pOutStream{ nullptr };
const int OUTPUT_CHANNELS = 1;
const int SAMPLE_RATE = 8000;
const int OUT_BIT_RATE = 64000;
uint8_t** convertedSamplesBuffer{ nullptr };
int64_t dstNmbrSamples{ 0 };
int dstLineSize{ 0 };
static int64_t pts{ 0 };

// conversion context
SwrContext* swr{};

uint32_t i{ 0 };
int audiostream{ -1 };

void cleanUp()
{
  avcodec_free_context(&pOutCodecCtx);
  avio_closep(&(pOutFormatCtx)->pb);
  avformat_free_context(pOutFormatCtx);
  pOutFormatCtx = nullptr;
}

int main()
{
  /*
   * section to set up the input file
   */
  if (avformat_open_input(&pFormatCtx, filename.data(), nullptr, nullptr) != 0) {
    cout << "could not open file " << filename << endl;
    return -1;
  }
  if (avformat_find_stream_info(pFormatCtx, nullptr) < 0) {
    cout << "Could not retrieve stream information from file " << filename << endl;
    return -1;
  }
  av_dump_format(pFormatCtx, 0, filename.c_str(), 0);

  for (i = 0; i < pFormatCtx->nb_streams; i++) {
    if (pFormatCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
      audiostream = i;
      break;
    }
  }
  if (audiostream == -1) {
    cout << "did not find audio stream" << endl;
    return -1;
  }

  pStream = pFormatCtx->streams[audiostream];
  const AVCodec* pCodec{ avcodec_find_decoder(pStream->codecpar->codec_id) };
  pCodecCtx = avcodec_alloc_context3(pCodec);
  avcodec_parameters_to_context(pCodecCtx, pStream->codecpar);
  if (avcodec_open2(pCodecCtx, pCodec, nullptr)) {
    cout << "could not open codec" << endl;
    return -1;
  }

  /*
   * section to set up the output file, which is G711 audio
   */
  if (avio_open(&pOutIoContext, outFilename.data(), AVIO_FLAG_WRITE)) {
    cout << "could not open output file" << endl;
    return -1;
  }
  if (!(pOutFormatCtx = avformat_alloc_context())) {
    cout << "could not create format context" << endl;
    cleanUp();
    return -1;
  }
  pOutFormatCtx->pb = pOutIoContext;
  if (!(pOutFormatCtx->oformat = av_guess_format(nullptr, outFilename.data(), nullptr))) {
    cout << "could not find output file format" << endl;
    cleanUp();
    return -1;
  }
  if (!(pOutFormatCtx->url = av_strdup(outFilename.data()))) {
    cout << "could not allocate file name" << endl;
    cleanUp();
    return -1;
  }
  if (!(pOutCodec = avcodec_find_encoder(AV_CODEC_ID_PCM_ALAW))) {
    cout << "codec not found" << endl;
    cleanUp();
    return -1;
  }
  if (!(pOutStream = avformat_new_stream(pOutFormatCtx, nullptr))) {
    cout << "could not create new stream" << endl;
    cleanUp();
    return -1;
  }
  if (!(pOutCodecCtx = avcodec_alloc_context3(pOutCodec))) {
    cout << "could not allocate codec context" << endl;
    return -1;
  }
  av_channel_layout_default(&pOutCodecCtx->ch_layout, OUTPUT_CHANNELS);
  pOutCodecCtx->sample_rate = SAMPLE_RATE;
  pOutCodecCtx->sample_fmt = pOutCodec->sample_fmts[0];
  pOutCodecCtx->bit_rate = OUT_BIT_RATE;

  // setting sample rate for the container
  pOutStream->time_base.den = SAMPLE_RATE;
  pOutStream->time_base.num = 1;
  if (pOutFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
    pOutCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

  if (avcodec_open2(pOutCodecCtx, pOutCodec, nullptr)) {
    cout << "could not open output codec" << endl;
    cleanUp();
    return -1;
  }
  if ((avcodec_parameters_from_context(pOutStream->codecpar, pOutCodecCtx)) < 0) {
    cout << "could not initialize stream parameters" << endl;
  }

  AVPacket* packet = av_packet_alloc();

  swr = swr_alloc();
  swr_alloc_set_opts2(&swr, &pOutCodecCtx->ch_layout, pOutCodecCtx->sample_fmt, pOutCodecCtx->sample_rate,
                      &pCodecCtx->ch_layout, pCodecCtx->sample_fmt, pCodecCtx->sample_rate, 0, nullptr);
  swr_init(swr);

  int ret{};
  int bSize{};
  while (av_read_frame(pFormatCtx, packet) >= 0) {
    AVFrame* pFrame = av_frame_alloc();
    AVFrame* pOutFrame = av_frame_alloc();
    if (packet->stream_index == audiostream) {
      ret = avcodec_send_packet(pCodecCtx, packet);
      while (ret >= 0) {
        ret = avcodec_receive_frame(pCodecCtx, pFrame);
        if (ret == AVERROR(EAGAIN))
          continue;
        else if (ret == AVERROR_EOF)
          break;
        dstNmbrSamples = av_rescale_rnd(swr_get_delay(swr, pCodecCtx->sample_rate) + pFrame->nb_samples,
                                        pOutCodecCtx->sample_rate, pCodecCtx->sample_rate, AV_ROUND_UP);
        if ((av_samples_alloc_array_and_samples(&convertedSamplesBuffer, &dstLineSize,
             pOutCodecCtx->ch_layout.nb_channels, dstNmbrSamples, pOutCodecCtx->sample_fmt, 0)) < 0) {
          cout << "could not allocate samples array and buffer" << endl;
        }
        int channel_samples_count{ 0 };
        channel_samples_count = swr_convert(swr, convertedSamplesBuffer, dstNmbrSamples,
                                            (const uint8_t**)pFrame->data, pFrame->nb_samples);
        bSize = av_samples_get_buffer_size(&dstLineSize, pOutCodecCtx->ch_layout.nb_channels,
                                           channel_samples_count, pOutCodecCtx->sample_fmt, 0);
        cout << "no of samples is " << channel_samples_count << " the buffer size " << bSize << endl;
        pOutFrame->nb_samples = channel_samples_count;
        av_channel_layout_copy(&pOutFrame->ch_layout, &pOutCodecCtx->ch_layout);
        pOutFrame->format = pOutCodecCtx->sample_fmt;
        pOutFrame->sample_rate = pOutCodecCtx->sample_rate;
        if ((av_frame_get_buffer(pOutFrame, 0)) < 0) {
          cout << "could not allocate output frame samples" << endl;
          av_frame_free(&pOutFrame);
        }

        // populate out frame buffer
        av_frame_make_writable(pOutFrame);
        for (int i{ 0 }; i < bSize; i++) {
          pOutFrame->data[0][i] = convertedSamplesBuffer[0][i];
          cout << pOutFrame->data[0][i];
        }
        if (pOutFrame) {
          pOutFrame->pts = pts;
          pts += pOutFrame->nb_samples;
        }
        int res = avcodec_send_frame(pOutCodecCtx, pOutFrame);
        if (res < 0) {
          cout << "error sending frame to encoder" << endl;
          cleanUp();
          return -1;
        }
        //int er = avformat_write_header(pOutFormatCtx,nullptr);
        AVPacket* pOutPacket = av_packet_alloc();
        pOutPacket->time_base.num = 1;
        pOutPacket->time_base.den = 8000;
        if (pOutPacket == nullptr) {
          cout << "unable to allocate packet" << endl;
        }
        while (res >= 0) {
          res = avcodec_receive_packet(pOutCodecCtx, pOutPacket);
          if (res == AVERROR(EAGAIN))
            continue;
          else if (ret == AVERROR_EOF)
            break;
          av_packet_rescale_ts(pOutPacket, pOutCodecCtx->time_base, pOutFormatCtx->streams[0]->time_base);
          //av_dump_format(pOutFormatCtx, 0, outFilename.c_str(), 1);
          if (av_write_frame(pOutFormatCtx, pOutPacket) < 0) {
            cout << "could not write frame" << endl;
          }
        }
      }
    }
    av_frame_free(&pFrame);
    av_frame_free(&pOutFrame);
  }
  if (av_write_trailer(pOutFormatCtx) < 0) {
    cout << "could not write file trailer" << endl;
  }
  swr_free(&swr);
  avcodec_free_context(&pOutCodecCtx);
  av_packet_free(&packet);
}


    Error/Exception


    The exception is thrown when I call


    if (av_write_frame(pOutFormatCtx, pOutPacket) < 0) { cout << "could not write frame" << endl; }

    I also called this line


    //int er = avformat_write_header(pOutFormatCtx,nullptr);


    to see if I would get an exception there, but it did not throw any.


    I have spent weeks on this issue with no success. My goal is to take any audio from a file, resample it if need be, and transcode it to PCM_ALAW. I would appreciate any help I can get.
