Advanced search

Media (91)

Other articles (16)

  • Customizing by adding your logo, banner, or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form: for a document of type news item, the default fields are: Publication date (customize the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (4065)

  • Saying Goodbye To Old Machines

    1 December 2014, by Multimedia Mike — General, powerpc, via

    I recently sent a few old machines off for recycling. Both had relevance to the early days of the FATE testing effort. As is my custom, I photographed them (poorly, of course).

    First, there’s the PowerPC-based Mac Mini I procured thanks to a Craigslist ad in late 2006. I had plans to develop automated FFmpeg building and testing and was already looking ahead toward testing multiple CPU architectures. Again, this was 2006 and PowerPC wasn’t completely on the outs yet– although Apple’s MacTel transition was in full swing, the entire new generation of video game consoles was based on PowerPC.


    PPC Mac Mini pieces



    I remember trying to find a Mac Mini PPC on Craigslist. Many were to be found, but all asked more than the price of even a new Mac Mini Intel, always because the seller was leaving all of last year’s applications and perhaps including a monitor, neither of which I needed. Fortunately, I found this bare Mac Mini. Also fortunate was the fact that it was far easier to install Linux on it than the first PowerPC machine I owned.

    After FATE operation transitioned away from me, I still kept the machine in service as an edge server and automated backup machine. That is, until the hard drive failed on reboot one day. Thus, when it was finally time to recycle the computer, I felt it necessary to disassemble the machine and remove the hard drive for possible salvage and then for destruction.

    If you’ve ever attempted to upgrade or otherwise service this style of Mac Mini, you will no doubt recognize the pictured paint scraper tool as standard kit. I have had that tool since I first endeavored to upgrade the RAM to 1 GB from the standard 1/2 GB. Performing such activities on a Mac Mini is tedious, but only if you care about putting it back together afterwards.

    The next machine is a bit older. I put it together nearly a decade ago, early in 2005. This machine’s original duty was “download agent”– this would be more specifically called a BitTorrent machine in modern tech parlance. Back then, I placed it on someone else’s woefully underutilized home broadband connection (with their permission, of course) when I was too cheap to upgrade from dialup.


    VIA small form factor front



    This is a small form factor system from VIA that was clearly designed with home theater PC (HTPC) use cases in mind. It has a VIA C3 x86-compatible CPU (according to my notes, Centaur VIA Samuel 2 stepping 03, flags: fpu de tsc msr cx8 mtrr pge mmx 3dnow) and 128 MB of RAM (initially; I upgraded it to 512 MB some years later, just for the sake of doing it). And then there was the 120 GB PATA HD for all that downloaded goodness.


    VIA machine small form factor inside



    I have specific memories of a time when my main computer at home wasn’t working correctly for one reason or another. Instead, I logged into this machine remotely via SSH to make several optimizations and fixes on FFmpeg’s VP3/Theora video decoder, all from the terminal, without being able to see the decoded images with my own eyes (which is why I insist that even blind people could work on video codecs).

    By the time I got my own broadband, I had become inspired to attempt the automated build and test system for FFmpeg. This was the machine I used for prototyping early brainstorms of FATE. By the time I put a basic build/test system into place in early 2008, I had much faster computers that could build and test the project– the obvious limitation of this machine was that it could take at least 1/2 hour to build the entire codebase, and that was the project from 8 years ago.

    So the machine got stuffed in a closet somewhere along the line. The next time I pulled it out was in 2010 when I wanted to toy with Dreamcast programming once more (the machine appears in one of the photos in this post). This was the only machine I still owned which still had an RS-232 serial port (I didn’t know much about USB serial converters yet), plus it still had a bunch of pre-compiled DC homebrew binaries (I was having trouble getting the toolchain to work right).

    The next time I dusted off this machine was late last year when I was trying some experiments with the Microsoft Xbox's IDE drive (a photo in that post also shows the machine; this thing shows up a lot on this blog). The VIA machine was the only machine I still owned which had 40-pin IDE connectors, which was crucial to my experiment.

    At this point, I was trying to make the machine more useful which meant replacing the ancient Gentoo Linux distribution as well as simply interacting with it via a keyboard and mouse. I have a long Evernote entry documenting a comedy of errors revolving around this little box. The interaction troubles were due to the fact that I didn't have any PS/2 keyboards left and I couldn't make a USB keyboard work with it. Diego was able to explain that I needed to flip a bit in the BIOS to address this, which worked. As for upgrading the OS, I tried numerous Linux distributions large and small, mostly focusing on the small. None worked. I eventually learned that, while I was trying to use i686 distributions, this machine did not actually qualify as an i686 CPU; installations usually booted but failed because the default kernel required the cmov instruction. I was advised to try i386 distros instead. My notes don't indicate whether I had any luck on this front before I gave up and moved on.
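    (Aside: the quickest check for cmov on a Linux box is to look for it in the flags line of /proc/cpuinfo. If you would rather ask the CPU directly, here is a minimal sketch, assuming x86 and GCC or Clang's <cpuid.h>; bit 15 of EDX in CPUID leaf 1 is the CMOV feature flag.)

    #include <cpuid.h>
    #include <cstdio>

    // Minimal sketch (x86, GCC/Clang): CPUID leaf 1 reports CMOV support in EDX bit 15.
    int main()
    {
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        {
            std::puts("CPUID leaf 1 not supported");
            return 1;
        }
        std::printf("cmov: %s\n", (edx & (1u << 15)) ? "yes" : "no");
        return 0;
    }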

    I just made the connection that this VIA machine has two 40-pin IDE connectors which means that the thing was technically capable of supporting up to 4 IDE devices. Obviously, the computer couldn't really accommodate that in terms of space or power. When I wanted to try installing a new OS, I needed to take off the top and connect a rather bulky IDE CD-ROM drive. This computer's casing was supposed to be able to support a slimline optical drive (perhaps like the type found in laptops), but I could never quite visualize how that was supposed to work, space-wise. When I disassembled the PowerPC Mac Mini, I realized I might be able to repurpose that machine's optical drive for this computer. Obviously, I thought better of trying since both machines are off to the recycle pile.

    I would still like to work on the Xbox project a bit more, but I procured a different, unused, much more powerful yet still old computer that has a motherboard with 1 PATA connector in addition to 6 SATA connectors. If I ever get around to toying with Linux kernel development, this should be a much more appropriate platform to use.

    I thought about turning this machine into an old Windows XP (and lower, down to Windows 3.1) gaming platform; the capabilities of the machine would probably be perfect for a huge portion of my Windows game collection. But I think the lack of an optical drive renders this idea intractable. External USB drives are likely out of the question since there is very little chance that this motherboard featured USB 2.0 (the specs don't mention 2.0, so the USB ports are probably 1.1).

    So it is with fond memories that I send off both machines, sans hard drives, to the recycle pile. I’m still deciding on an appropriate course of action for failed hard drives, though.

  • Why does the frame time increase over time when decoding video using OpenCV?

    21 February 2024, by ZeunO8

    I have set up OpenCV in my project. I added the OpenCV GitHub repo as a submodule and included it in my CMake dependencies file like so:

    


     set(WITH_FFMPEG ON)
     set(VIDEOIO_PLUGIN_LIST "ffmpeg")
     set(BUILD_PERF_TESTS OFF)
     set(BUILD_TESTS OFF)
     set(INSTALL_TESTS OFF)
     add_subdirectory(${COJE_SRC_DIR}/vendor/opencv build/build_opencv)


    


    I then set up a Video struct inheriting from IEntity (to get it working with my render driver's draw system), and it looks like this:

    


    #pragma once
    #include <opencv2/opencv.hpp>
    #include <coje/interfaces/IEntity.hpp>
    #include <coje/enums/EFileLocation.hpp>
    #include <coje/String.hpp>
    #include <coje/graphics/Texture.hpp>

    namespace coje::entitys
    {
     struct Video : IEntity
     {
      String filePath;
      EFileLocation fileLocation;
      String tempname;
      glm::vec2 size;
      UniquePointer videoCapturePointer;
      cv::Mat frame;
      cv::Mat frameConverted;
      Floating64 fps = 0;
      Floating64 frameCount = 0;
      Integer64 currentFrameIndex = -1;
      Video(const String &filePath, const EFileLocation &fileLocation, const glm::vec2 &size, const glm::vec3 &position, const glm::quat &rotation);
      ~Video();
      void updateTextureWithFrame(const uInteger64 &frameIndex, UniquePointer<Texture> &texturePointer);
      const Boolean resize(const glm::vec2 &size);
      Boolean update(const uInteger64 &elapsedTimeMs);
     };
    }

    The source for Video.cpp is:


    #include <coje/bullet.hpp>
    #include <coje/Common.hpp>
    #include <coje/Entitys/Video.hpp>
    #include <coje/Logger.hpp>
    #include <coje/Timer.hpp>
    #include <cstdio>
    using namespace coje::entitys;
    /*
     */
    Video::Video(const String &filePath, const EFileLocation &fileLocation, const glm::vec2 &size, const glm::vec3 &position, const glm::quat &rotation) : IEntity(EntityType)
    {
     this->position = position;
     this->rotation = rotation;
     File videoFile(filePath, fileLocation, "r");
     auto videoBytes = videoFile.toBytes();
     tempname = std::tmpnam(0);
     {
      File tempFile(tempname, EFileLocation::Relative, "w");
      tempFile & videoBytes;
     }
     videoCapturePointer = {ReleaseType::Delete, new cv::VideoCapture(tempname.c_str(), cv::CAP_FFMPEG), 1};
     auto &videoCapture = *videoCapturePointer.pointer;
     if (!videoCapture.isOpened())
     {
      Logger(LogType::ERROR, "%s\n", "Error opening video stream from memory");
      return;
     }
     // videoCapture.set(cv::CAP_PROP_BUFFERSIZE, 100);
     uInteger64 bufferSize = videoCapture.get(cv::CAP_PROP_BUFFERSIZE);
     Logger(LogType::INFO, "BufferSize: %llu\n", bufferSize);
     fps = videoCapture.get(cv::CAP_PROP_FPS);
     frameCount = videoCapture.get(cv::CAP_PROP_FRAME_COUNT);
     uInteger64 frameWidth = videoCapture.get(cv::CAP_PROP_FRAME_WIDTH),
                frameHeight = videoCapture.get(cv::CAP_PROP_FRAME_HEIGHT);
     resize(size);
     textures.push_back({ReleaseType::Delete, new Texture(frameWidth, frameHeight, ETextureFormat::RGB8, ETextureType::UnsignedByte), 1});
     glm::ivec3 *indices = (glm::ivec3 *)(*this).operator()(IEntity::Quanta::Indice, 2);
     indices[0] = {3, 2, 1}; // front
     indices[1] = {1, 0, 3};
     glm::vec2 *uvs = (glm::vec2 *)(*this).operator()<float>(IEntity::Quanta::UV2, 4);
     auto _uvs = Common::getUVs2DQuad();
     for (int index = 0; index < 4; index++)
     {
      uvs[index] = _uvs._data[index];
     }
     TimerFunctions::addFunction({this, &Video::update}, 0, 1000 / fps);
     return;
    };
    /*
     */
    Video::~Video()
    {
     File tempFile(tempname);
     tempFile.remove();
    };
    /*
     */
    void Video::updateTextureWithFrame(const uInteger64 &frameIndex, UniquePointer<Texture> &texturePointer)
    {
     auto start = std::chrono::high_resolution_clock::now();
     auto &videoCapture = *videoCapturePointer.pointer;
     videoCapture.set(cv::CAP_PROP_POS_FRAMES, frameIndex);
     Boolean frameGrabSuccess = videoCapture.grab();
     if (!frameGrabSuccess)
     {
      Logger(LogType::ERROR, "%s\n", "Failed to grab frame from VideoCapture");
      return;
     }
     Boolean frameRetrieveSuccess = videoCapture.retrieve(frame);
     if (!frameRetrieveSuccess)
     {
      Logger(LogType::ERROR, "%s\n", "Failed to retrieve frame from VideoCapture");
      return;
     }
     auto end = std::chrono::high_resolution_clock::now();
     std::chrono::duration<double, std::milli> elapsed = end - start;
     std::cout << "Video::updateTextureWithFrame took " << elapsed.count() << "ms\n";
     cv::cvtColor(frame, frameConverted, cv::COLOR_BGR2RGB);
     cv::flip(frameConverted, frameConverted, 0);
     if (texturePointer.pointer)
     {
      auto &texture = texturePointer.pointer;
      if (texture->width != frameConverted.cols || texture->height != frameConverted.rows)
      {
       goto _newTexture;
      }
      else
      {
       texture->update(frameConverted.data);
      }
     }
     else
     {
     _newTexture:
      texturePointer = {ReleaseType::Delete, new Texture(frameConverted.cols, frameConverted.rows, frameConverted.data, ETextureFormat::RGB8, ETextureType::UnsignedByte), 1};
     }
    };
    /*
     */
    const Boolean Video::resize(const glm::vec2 &_size)
    {
     size = _size;
     glm::vec3 *vertices = (glm::vec3 *)(*this).operator()<float>(IEntity::Quanta::Vertex, 4);
     glm::vec3 topRight = {size.x / 2, size.y / 2, 0};
     glm::vec3 bottomRight = {size.x / 2, -(size.y / 2), 0};
     glm::vec3 bottomLeft = {-(size.x / 2), -(size.y / 2), 0};
     glm::vec3 topLeft = {-(size.x / 2), size.y / 2, 0};
     vertices[0] = topRight;
     vertices[1] = bottomRight;
     vertices[2] = bottomLeft;
     vertices[3] = topLeft;
     *changedPointer = true;
     return true;
    };
    /*
     */
    Boolean Video::update(const uInteger64 &elapsedTimeMs)
    {
     currentFrameIndex++;
     Logger(LogType::INFO, "Video-elapsedTime: %llums\n", elapsedTimeMs);
     if (currentFrameIndex == frameCount - 1)
     {
      return false;
     }
     auto &texturePointer = textures._data[0];
     updateTextureWithFrame(currentFrameIndex, texturePointer);
     return true;
    };

    When running a simple test video at 1280x720, the updateTextureWithFrame timer starts at around 12ms but gradually increases to over 100ms and beyond, causing video playback to run at a lower rate than the defined frames per second.


    What is causing this gradual increase in updateTextureWithFrame? How can I solve it?
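    For comparison, here is a minimal stand-alone loop (hypothetical, not part of my engine; the file name is made up) that decodes a file strictly in order with cv::VideoCapture::read and never seeks via CAP_PROP_POS_FRAMES before each frame. If its per-frame time stays flat, the per-frame seek in updateTextureWithFrame would be the first thing to rule out:

    #include <opencv2/opencv.hpp>
    #include <chrono>
    #include <iostream>

    int main()
    {
        // Hypothetical stand-alone timing test: no CAP_PROP_POS_FRAMES seek between frames.
        cv::VideoCapture cap("test_1280x720.mp4", cv::CAP_FFMPEG);
        if (!cap.isOpened())
            return 1;
        cv::Mat frame, rgb;
        while (true)
        {
            auto start = std::chrono::high_resolution_clock::now();
            if (!cap.read(frame)) // grab + retrieve the next frame in decode order
                break;
            cv::cvtColor(frame, rgb, cv::COLOR_BGR2RGB);
            cv::flip(rgb, rgb, 0);
            auto end = std::chrono::high_resolution_clock::now();
            std::chrono::duration<double, std::milli> elapsed = end - start;
            std::cout << "sequential read took " << elapsed.count() << "ms\n";
        }
        return 0;
    }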


    Edit:


     uInteger64 bufferSize = videoCapture.get(cv::CAP_PROP_BUFFERSIZE);
     Logger(LogType::INFO, "BufferSize: %llu\n", bufferSize);


    prints BufferSize: 0, indicating that setting CAP_PROP_BUFFERSIZE is not supported for the ffmpeg backend.


    Edit 2:
    Some logs of timings:


    Video::updateTextureWithFrame took 16.5161ms
    Video::updateTextureWithFrame took 21.6109ms
    Video::updateTextureWithFrame took 21.1443ms
    Video::updateTextureWithFrame took 20.4253ms
    Video::updateTextureWithFrame took 23.9015ms
    Video::updateTextureWithFrame took 22.1348ms
    Video::updateTextureWithFrame took 21.3723ms
    Video::updateTextureWithFrame took 21.2186ms
    Video::updateTextureWithFrame took 24.0211ms
    Video::updateTextureWithFrame took 24.5907ms
    Video::updateTextureWithFrame took 23.2134ms
    Video::updateTextureWithFrame took 25.6763ms
    Video::updateTextureWithFrame took 25.416ms
    Video::updateTextureWithFrame took 25.2314ms
    Video::updateTextureWithFrame took 26.3919ms
    Video::updateTextureWithFrame took 24.1883ms
    Video::updateTextureWithFrame took 27.7095ms
    Video::updateTextureWithFrame took 26.5594ms
    Video::updateTextureWithFrame took 26.6618ms
    Video::updateTextureWithFrame took 29.496ms
    Video::updateTextureWithFrame took 27.2731ms
    Video::updateTextureWithFrame took 27.5113ms
    Video::updateTextureWithFrame took 30.2855ms
    Video::updateTextureWithFrame took 27.6773ms
    Video::updateTextureWithFrame took 30.5532ms
    Video::updateTextureWithFrame took 32.6858ms
    Video::updateTextureWithFrame took 32.8735ms
    Video::updateTextureWithFrame took 31.7369ms
    Video::updateTextureWithFrame took 31.2453ms
    Video::updateTextureWithFrame took 30.9424ms
    Video::updateTextureWithFrame took 36.7046ms
    Video::updateTextureWithFrame took 33.6224ms
    Video::updateTextureWithFrame took 32.0368ms
    Video::updateTextureWithFrame took 33.0109ms
    Video::updateTextureWithFrame took 32.2155ms
    Video::updateTextureWithFrame took 33.5314ms
    Video::updateTextureWithFrame took 33.576ms
    Video::updateTextureWithFrame took 37.8993ms
    Video::updateTextureWithFrame took 33.9495ms
    Video::updateTextureWithFrame took 35.776ms
    Video::updateTextureWithFrame took 36.2566ms
    Video::updateTextureWithFrame took 36.5887ms
    Video::updateTextureWithFrame took 40.0839ms
    Video::updateTextureWithFrame took 38.5146ms
    Video::updateTextureWithFrame took 40.72ms
    Video::updateTextureWithFrame took 37.8345ms
    Video::updateTextureWithFrame took 37.9925ms
    Video::updateTextureWithFrame took 39.0402ms
    Video::updateTextureWithFrame took 39.8856ms
    Video::updateTextureWithFrame took 41.3421ms
    Video::updateTextureWithFrame took 41.0703ms
    Video::updateTextureWithFrame took 42.9482ms
    Video::updateTextureWithFrame took 42.9199ms
    Video::updateTextureWithFrame took 44.2593ms
    Video::updateTextureWithFrame took 41.2746ms
    Video::updateTextureWithFrame took 45.7017ms
    Video::updateTextureWithFrame took 46.1854ms
    Video::updateTextureWithFrame took 44.154ms
    Video::updateTextureWithFrame took 42.6004ms
    Video::updateTextureWithFrame took 47.2442ms
    Video::updateTextureWithFrame took 43.4156ms
    Video::updateTextureWithFrame took 47.9288ms
    Video::updateTextureWithFrame took 45.3475ms
    Video::updateTextureWithFrame took 46.9646ms
    Video::updateTextureWithFrame took 48.4978ms
    Video::updateTextureWithFrame took 45.1322ms
    Video::updateTextureWithFrame took 48.1365ms
    Video::updateTextureWithFrame took 49.8857ms
    Video::updateTextureWithFrame took 47.4854ms
    Video::updateTextureWithFrame took 48.2378ms
    Video::updateTextureWithFrame took 50.9174ms
    Video::updateTextureWithFrame took 52.347ms
    Video::updateTextureWithFrame took 51.6252ms
    Video::updateTextureWithFrame took 52.2018ms
    Video::updateTextureWithFrame took 49.2384ms
    Video::updateTextureWithFrame took 50.9491ms
    Video::updateTextureWithFrame took 52.2139ms
    Video::updateTextureWithFrame took 53.3229ms
    Video::updateTextureWithFrame took 56.0199ms
    Video::updateTextureWithFrame took 55.582ms
    Video::updateTextureWithFrame took 55.2675ms
    Video::updateTextureWithFrame took 54.9446ms
    Video::updateTextureWithFrame took 54.7955ms
    Video::updateTextureWithFrame took 54.0296ms
    Video::updateTextureWithFrame took 54.0375ms
    Video::updateTextureWithFrame took 57.0916ms
    Video::updateTextureWithFrame took 55.2474ms
    Video::updateTextureWithFrame took 56.8046ms
    Video::updateTextureWithFrame took 57.562ms
    Video::updateTextureWithFrame took 59.9115ms
    Video::updateTextureWithFrame took 59.3991ms
    Video::updateTextureWithFrame took 60.0536ms
    Video::updateTextureWithFrame took 59.9457ms
    Video::updateTextureWithFrame took 57.5088ms
    Video::updateTextureWithFrame took 59.1255ms
    Video::updateTextureWithFrame took 62.2311ms
    Video::updateTextureWithFrame took 59.0422ms
    Video::updateTextureWithFrame took 62.0419ms
    Video::updateTextureWithFrame took 62.0586ms
    Video::updateTextureWithFrame took 64.0988ms
    Video::updateTextureWithFrame took 64.743ms
    Video::updateTextureWithFrame took 63.008ms
    Video::updateTextureWithFrame took 65.1726ms
    Video::updateTextureWithFrame took 63.3618ms
    Video::updateTextureWithFrame took 65.6431ms
    Video::updateTextureWithFrame took 63.8957ms
    Video::updateTextureWithFrame took 65.1142ms
    Video::updateTextureWithFrame took 67.2243ms
    Video::updateTextureWithFrame took 65.1302ms
    Video::updateTextureWithFrame took 66.4947ms
    Video::updateTextureWithFrame took 66.092ms
    Video::updateTextureWithFrame took 68.6997ms
    Video::updateTextureWithFrame took 70.5683ms
    Video::updateTextureWithFrame took 71.9019ms
    Video::updateTextureWithFrame took 68.6088ms
    Video::updateTextureWithFrame took 70.7946ms
    Video::updateTextureWithFrame took 68.263ms
    Video::updateTextureWithFrame took 66.1565ms
    Video::updateTextureWithFrame took 70.6742ms
    Video::updateTextureWithFrame took 70.7035ms
    Video::updateTextureWithFrame took 73.8002ms
    Video::updateTextureWithFrame took 73.1897ms
    Video::updateTextureWithFrame took 74.006ms
    Video::updateTextureWithFrame took 74.1048ms
    Video::updateTextureWithFrame took 72.9378ms
    Video::updateTextureWithFrame took 75.0651ms
    Video::updateTextureWithFrame took 73.5676ms
    Video::updateTextureWithFrame took 73.7706ms
    Video::updateTextureWithFrame took 74.0839ms
    Video::updateTextureWithFrame took 74.6773ms
    Video::updateTextureWithFrame took 75.8827ms
    Video::updateTextureWithFrame took 74.4724ms
    Video::updateTextureWithFrame took 75.2119ms
    Video::updateTextureWithFrame took 83.4102ms
    Video::updateTextureWithFrame took 77.6811ms
    Video::updateTextureWithFrame took 78.7307ms
    Video::updateTextureWithFrame took 80.1705ms
    Video::updateTextureWithFrame took 78.6064ms
    Video::updateTextureWithFrame took 80.803ms
    Video::updateTextureWithFrame took 80.0117ms
    Video::updateTextureWithFrame took 78.2948ms
    Video::updateTextureWithFrame took 81.0375ms
    Video::updateTextureWithFrame took 78.7389ms
    Video::updateTextureWithFrame took 80.2201ms
    Video::updateTextureWithFrame took 82.8578ms
    Video::updateTextureWithFrame took 84.2388ms
    Video::updateTextureWithFrame took 84.6484ms
    Video::updateTextureWithFrame took 87.6683ms
    Video::updateTextureWithFrame took 82.8939ms
    Video::updateTextureWithFrame took 84.015ms
    Video::updateTextureWithFrame took 88.1832ms
    Video::updateTextureWithFrame took 83.3894ms
    Video::updateTextureWithFrame took 86.9088ms
    Video::updateTextureWithFrame took 87.1049ms
    Video::updateTextureWithFrame took 87.6748ms
    Video::updateTextureWithFrame took 87.178ms
    Video::updateTextureWithFrame took 84.7988ms
    Video::updateTextureWithFrame took 89.528ms
    Video::updateTextureWithFrame took 88.7021ms
    Video::updateTextureWithFrame took 90.0357ms
    Video::updateTextureWithFrame took 90.398ms
    Video::updateTextureWithFrame took 87.8047ms
    Video::updateTextureWithFrame took 90.2447ms
    Video::updateTextureWithFrame took 94.6288ms
    Video::updateTextureWithFrame took 88.9265ms
    Video::updateTextureWithFrame took 89.01ms
    Video::updateTextureWithFrame took 87.6294ms
    Video::updateTextureWithFrame took 90.6988ms
    Video::updateTextureWithFrame took 93.0173ms
    Video::updateTextureWithFrame took 92.1651ms
    Video::updateTextureWithFrame took 92.9234ms
    Video::updateTextureWithFrame took 95.4223ms
    Video::updateTextureWithFrame took 99.0941ms
    Video::updateTextureWithFrame took 97.3014ms
    Video::updateTextureWithFrame took 91.8709ms
    Video::updateTextureWithFrame took 96.8951ms
    Video::updateTextureWithFrame took 95.3506ms
    Video::updateTextureWithFrame took 96.5474ms
    Video::updateTextureWithFrame took 92.4739ms
    Video::updateTextureWithFrame took 95.1857ms
    Video::updateTextureWithFrame took 96.6743ms
    Video::updateTextureWithFrame took 99.0657ms
    Video::updateTextureWithFrame took 105.84ms
    Video::updateTextureWithFrame took 99.3163ms
    Video::updateTextureWithFrame took 127.942ms
    Video::updateTextureWithFrame took 101.378ms
    Video::updateTextureWithFrame took 98.6114ms
    Video::updateTextureWithFrame took 101.161ms
    Video::updateTextureWithFrame took 102.271ms
    Video::updateTextureWithFrame took 100.77ms
    Video::updateTextureWithFrame took 100.825ms
    Video::updateTextureWithFrame took 100.64ms
    Video::updateTextureWithFrame took 99.7002ms
    Video::updateTextureWithFrame took 103.207ms
    Video::updateTextureWithFrame took 107.135ms
    Video::updateTextureWithFrame took 100.766ms
    Video::updateTextureWithFrame took 103.321ms
    Video::updateTextureWithFrame took 107.361ms
    Video::updateTextureWithFrame took 104.086ms
    Video::updateTextureWithFrame took 100.975ms
    Video::updateTextureWithFrame took 105.846ms
    Video::updateTextureWithFrame took 104.755ms
    Video::updateTextureWithFrame took 105.893ms
    Video::updateTextureWithFrame took 105.234ms
    Video::updateTextureWithFrame took 109.415ms
    Video::updateTextureWithFrame took 107.942ms
    Video::updateTextureWithFrame took 109.816ms
    Video::updateTextureWithFrame took 109.268ms
    Video::updateTextureWithFrame took 111.918ms
    Video::updateTextureWithFrame took 110.123ms
    Video::updateTextureWithFrame took 109.975ms
    Video::updateTextureWithFrame took 110.105ms
    Video::updateTextureWithFrame took 115.888ms
    Video::updateTextureWithFrame took 112.443ms
    Video::updateTextureWithFrame took 111.795ms
    Video::updateTextureWithFrame took 112.016ms
    Video::updateTextureWithFrame took 115.857ms
    Video::updateTextureWithFrame took 114.762ms
    Video::updateTextureWithFrame took 112.551ms
    Video::updateTextureWithFrame took 116.05ms
    Video::updateTextureWithFrame took 119.133ms
    Video::updateTextureWithFrame took 114.202ms
    Video::updateTextureWithFrame took 119.864ms
    Video::updateTextureWithFrame took 119.743ms
    Video::updateTextureWithFrame took 119.911ms
    Video::updateTextureWithFrame took 120.957ms
    Video::updateTextureWithFrame took 117.611ms
    Video::updateTextureWithFrame took 116.596ms
    Video::updateTextureWithFrame took 116.859ms
    Video::updateTextureWithFrame took 120.355ms
    Video::updateTextureWithFrame took 121.932ms
    Video::updateTextureWithFrame took 117.56ms
    Video::updateTextureWithFrame took 122.747ms
    Video::updateTextureWithFrame took 120.103ms
    Video::updateTextureWithFrame took 123.497ms
    Video::updateTextureWithFrame took 126.391ms
    Video::updateTextureWithFrame took 123.512ms
    Video::updateTextureWithFrame took 121.612ms
    Video::updateTextureWithFrame took 130.169ms
    Video::updateTextureWithFrame took 126.936ms
    Video::updateTextureWithFrame took 122.812ms
    Video::updateTextureWithFrame took 122.843ms
    Video::updateTextureWithFrame took 124.214ms
    Video::updateTextureWithFrame took 125.563ms
    Video::updateTextureWithFrame took 128.024ms
    Video::updateTextureWithFrame took 129.263ms
    Video::updateTextureWithFrame took 130.028ms
    Video::updateTextureWithFrame took 127.493ms
    Video::updateTextureWithFrame took 129.553ms
    Video::updateTextureWithFrame took 130.538ms
    Video::updateTextureWithFrame took 22.6048ms
    Video::updateTextureWithFrame took 20.2454ms
    Video::updateTextureWithFrame took 19.9947ms
    Video::updateTextureWithFrame took 21.2817ms
    Video::updateTextureWithFrame took 22.6694ms
    Video::updateTextureWithFrame took 25.5187ms
    Video::updateTextureWithFrame took 19.8971ms
    Video::updateTextureWithFrame took 22.2975ms
    Video::updateTextureWithFrame took 21.4979ms
    Video::updateTextureWithFrame took 25.6767ms
    Video::updateTextureWithFrame took 23.4276ms
    Video::updateTextureWithFrame took 25.5657ms
    Video::updateTextureWithFrame took 23.2816ms
    Video::updateTextureWithFrame took 26.8515ms
    Video::updateTextureWithFrame took 24.0271ms
    Video::updateTextureWithFrame took 24.4675ms
    Video::updateTextureWithFrame took 25.6897ms
    Video::updateTextureWithFrame took 28.7489ms
    Video::updateTextureWithFrame took 24.6164ms
    Video::updateTextureWithFrame took 29.6739ms
    Video::updateTextureWithFrame took 27.8118ms
    Video::updateTextureWithFrame took 30.3992ms
    Video::updateTextureWithFrame took 28.2943ms
    Video::updateTextureWithFrame took 29.9693ms
    Video::updateTextureWithFrame took 30.6129ms

  • Why am I getting blips when encoding a sound file using Java JNA?

    21 March 2014, by yonran

    I have implemented a hello world libavcodec program using JNA to generate a wav file containing a pure 440Hz sine wave. But when I actually run the program, the wav file contains annoying clicks and blips (compare to the pure sine wav created from the C program). How am I calling avcodec_encode_audio2 wrong?

    Here is my Java code. All the sources are also at github in case you want to try to compile it.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.IntBuffer;
    import java.util.Objects;

    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.DataLine;
    import javax.sound.sampled.LineUnavailableException;
    import javax.sound.sampled.TargetDataLine;


    public class Sin {
       /**
        * Abstract class that allows you to put the initialization and cleanup
        * code at the same place instead of separated by the big try block.
        */
        public static abstract class SharedPtr<T> implements AutoCloseable {
           public T ptr;
           public SharedPtr(T ptr) {
               this.ptr = ptr;
           }
           /**
            * Abstract override forces method to throw no checked exceptions.
            * Subclasses will call a C function that throws no exceptions.
            */
           @Override public abstract void close();
       }

       /**
        * @param args
        * @throws IOException
        * @throws LineUnavailableException
        */
       public static void main(String[] args) throws IOException, LineUnavailableException {
           final AvcodecLibrary avcodec = AvcodecLibrary.INSTANCE;
           final AvformatLibrary avformat = AvformatLibrary.INSTANCE;
           final AvutilLibrary avutil = AvutilLibrary.INSTANCE;
           avcodec.avcodec_register_all();
           avformat.av_register_all();
           AVOutputFormat.ByReference format = null;
           String format_name = "wav", file_url = "file:sinjava.wav";
           for (AVOutputFormat.ByReference formatIter = avformat.av_oformat_next(null); formatIter != null; formatIter = avformat.av_oformat_next(formatIter)) {
               formatIter.setAutoWrite(false);
               String iterName = formatIter.name;
               if (format_name.equals(iterName)) {
                   format = formatIter;
                   break;
               }
           }
           Objects.requireNonNull(format);
           System.out.format("Found format %s%n", format_name);
           AVCodec codec = avcodec.avcodec_find_encoder(format.audio_codec);  // one of AvcodecLibrary.CodecID
           Objects.requireNonNull(codec);
           codec.setAutoWrite(false);
           try (
            SharedPtr<AVFormatContext> fmtCtxPtr = new SharedPtr<AVFormatContext>(avformat.avformat_alloc_context()) {@Override public void close(){if (null!=ptr) avformat.avformat_free_context(ptr);}};
               ) {
               AVFormatContext fmtCtx = Objects.requireNonNull(fmtCtxPtr.ptr);
               fmtCtx.setAutoWrite(false);
               fmtCtx.setAutoRead(false);
               fmtCtx.oformat = format; fmtCtx.writeField("oformat");

               AVStream st = avformat.avformat_new_stream(fmtCtx, codec);
               if (null == st)
                   throw new IllegalStateException();
               AVCodecContext c = st.codec;
               if (null == c)
                   throw new IllegalStateException();
               st.setAutoWrite(false);
               fmtCtx.readField("nb_streams");
               st.id = fmtCtx.nb_streams - 1; st.writeField("id");
               assert st.id >= 0;
               System.out.format("New stream: id=%d%n", st.id);

            if (0 != (format.flags & AvformatLibrary.AVFMT_GLOBALHEADER)) {
                   c.flags |= AvcodecLibrary.CODEC_FLAG_GLOBAL_HEADER;
               }
               c.writeField("flags");

               c.bit_rate = 64000; c.writeField("bit_rate");
               int bestSampleRate;
               if (null == codec.supported_samplerates) {
                   bestSampleRate = 44100;
               } else {
                   bestSampleRate = 0;
                for (int offset = 0, sample_rate = codec.supported_samplerates.getInt(offset); sample_rate != 0; sample_rate = codec.supported_samplerates.getInt(4 * ++offset)) {
                       bestSampleRate = Math.max(bestSampleRate, sample_rate);
                   }
                   assert bestSampleRate > 0;
               }
               c.sample_rate = bestSampleRate; c.writeField("sample_rate");
               c.channel_layout = AvutilLibrary.AV_CH_LAYOUT_STEREO; c.writeField("channel_layout");
               c.channels = avutil.av_get_channel_layout_nb_channels(c.channel_layout); c.writeField("channels");
               assert 2 == c.channels;
               c.sample_fmt = AvutilLibrary.AVSampleFormat.AV_SAMPLE_FMT_S16; c.writeField("sample_fmt");
               c.time_base.num = 1;
               c.time_base.den = bestSampleRate;
               c.writeField("time_base");
               c.setAutoWrite(false);

               AudioFormat javaSoundFormat = new AudioFormat(bestSampleRate, Short.SIZE, c.channels, true, ByteOrder.nativeOrder() == ByteOrder.BIG_ENDIAN);
               DataLine.Info javaDataLineInfo = new DataLine.Info(TargetDataLine.class, javaSoundFormat);
               if (! AudioSystem.isLineSupported(javaDataLineInfo))
                   throw new IllegalStateException();
               int err;
            if ((err = avcodec.avcodec_open(c, codec)) < 0) {
                   throw new IllegalStateException();
               }
               assert c.channels != 0;

               AVIOContext.ByReference[] ioCtxReference = new AVIOContext.ByReference[1];
               if (0 != (err = avformat.avio_open(ioCtxReference, file_url, AvformatLibrary.AVIO_FLAG_WRITE))) {
                   throw new IllegalStateException("averror " + err);
               }
               try (
                   SharedPtr ioCtxPtr = new SharedPtr(ioCtxReference[0]) {@Override public void close(){if (null!=ptr) avutil.av_free(ptr.getPointer());}}
                   ) {
                   AVIOContext.ByReference ioCtx = Objects.requireNonNull(ioCtxPtr.ptr);
                   fmtCtx.pb = ioCtx; fmtCtx.writeField("pb");
                   int averr = avformat.avformat_write_header(fmtCtx, null);
                    if (averr < 0) {
                       throw new IllegalStateException("" + averr);
                   }
                   st.read();  // it is modified by avformat_write_header
                   System.out.format("Wrote header. fmtCtx->nb_streams=%d, st->time_base=%d/%d; st->avg_frame_rate=%d/%d%n", fmtCtx.nb_streams, st.time_base.num, st.time_base.den, st.avg_frame_rate.num, st.avg_frame_rate.den);
                   avformat.avio_flush(ioCtx);
                   int frame_size = c.frame_size != 0 ? c.frame_size : 4096;
                   int expectedBufferSize = frame_size * c.channels * (Short.SIZE/8);
                    boolean supports_small_last_frame = c.frame_size == 0 ? true : 0 != (codec.capabilities & AvcodecLibrary.CODEC_CAP_SMALL_LAST_FRAME);
                   int bufferSize = avutil.av_samples_get_buffer_size((IntBuffer)null, c.channels, frame_size, c.sample_fmt, 1);
                   assert bufferSize == expectedBufferSize: String.format("expected %d; got %d", expectedBufferSize, bufferSize);
                   ByteBuffer samples = ByteBuffer.allocate(expectedBufferSize);
                   samples.order(ByteOrder.nativeOrder());
                   int audio_time = 0;  // unit: (c.time_base) s = (1/c.sample_rate) s
                   int audio_sample_count = supports_small_last_frame ?
                       3 * c.sample_rate :
                       3 * c.sample_rate / frame_size * frame_size;
                    while (audio_time < audio_sample_count) {
                       int frame_audio_time = audio_time;
                       samples.clear();
                       int nb_samples_in_frame = 0;
                       // encode a single tone sound
                        for (; samples.hasRemaining() && audio_time < audio_sample_count; nb_samples_in_frame++, audio_time++) {
                           double x = 2*Math.PI*440/c.sample_rate * audio_time;
                           double y = 10000 * Math.sin(x);
                           samples.putShort((short) y);
                           samples.putShort((short) y);
                       }
                       samples.flip();
                       try (
                                SharedPtr<AVFrame> framePtr = new SharedPtr<AVFrame>(avcodec.avcodec_alloc_frame()) {@Override public void close() {if (null!=ptr) avutil.av_free(ptr.getPointer());}};
                               ) {
                           AVFrame frame = Objects.requireNonNull(framePtr.ptr);
                           frame.setAutoRead(false);  // will be an in param
                           frame.setAutoWrite(false);
                           frame.nb_samples = nb_samples_in_frame; frame.writeField("nb_samples"); // actually unused during encoding
                           // Presentation time, in AVStream.time_base units.
                           frame.pts = avutil.av_rescale_q(frame_audio_time, c.time_base, st.time_base);  // i * codec_time_base / st_time_base
                           frame.writeField("pts");

                           assert c.channels > 0;
                           int bytesPerSample = avutil.av_get_bytes_per_sample(c.sample_fmt);
                           assert bytesPerSample > 0;
                           if (0 != (err = avcodec.avcodec_fill_audio_frame(frame, c.channels, c.sample_fmt, samples, samples.capacity(), 1))) {
                               throw new IllegalStateException(""+err);
                           }
                           AVPacket packet = new AVPacket();  // one of the few structs from ffmpeg with guaranteed size
                           avcodec.av_init_packet(packet);
                           packet.size = 0;
                           packet.data = null;
                           packet.stream_index = st.index; packet.writeField("stream_index");
                           // encode the samples
                           IntBuffer gotPacket = IntBuffer.allocate(1);
                           if (0 != (err = avcodec.avcodec_encode_audio2(c, packet, frame, gotPacket))) {
                               throw new IllegalStateException("" + err);
                           } else if (0 != gotPacket.get()) {
                               packet.read();
                               averr = avformat.av_write_frame(fmtCtx, packet);
                                if (averr < 0)
                                   throw new IllegalStateException("" + averr);
                           }
                           System.out.format("encoded frame: codec time = %d; pts=%d = av_rescale_q(%d,%d/%d,%d/%d) (%.02fs) contains %d samples (%.02fs); got_packet=%d; packet.size=%d%n",
                                   frame_audio_time,
                                   frame.pts,
                                   frame_audio_time, st.codec.time_base.num,st.codec.time_base.den,st.time_base.num,st.time_base.den,
                                   1.*frame_audio_time/c.sample_rate, frame.nb_samples, 1.*frame.nb_samples/c.sample_rate, gotPacket.array()[0], packet.size);
                       }
                   }
                   if (0 != (err = avformat.av_write_trailer(fmtCtx))) {
                       throw new IllegalStateException();
                   }
                   avformat.avio_flush(ioCtx);
               }
           }
           System.out.println("Done writing");
       }
    }

    I also rewrote it in C, and the C version works fine without any blips. But I can't figure out how I am using the library differently; all the library function calls should be identical!

    //! gcc --std=c99 sin.c $(pkg-config --cflags --libs libavutil libavformat libavcodec) -o sin
    // sudo apt-get install libswscale-dev
     #include <stdio.h>
     #include <stdlib.h>
     #include <string.h>
     #include <stdint.h>
     #include <math.h>

     #include <libavutil/opt.h>
     #include <libavutil/mathematics.h>
     #include <libavformat/avformat.h>
     #include <libswscale/swscale.h>
     #include <libavcodec/avcodec.h>
    int main(int argc, char *argv[]) {
     const char *format_name = "wav", *file_url = "file:sin.wav";
     avcodec_register_all();
     av_register_all();
     AVOutputFormat *format = NULL;
     for (AVOutputFormat *formatIter = av_oformat_next(NULL); formatIter != NULL; formatIter = av_oformat_next(formatIter)) {
       int hasEncoder = NULL != avcodec_find_encoder(formatIter->audio_codec);
       if (0 == strcmp(format_name, formatIter->name)) {
         format = formatIter;
         break;
       }
     }
     printf("Found format %s\n", format->name);
     AVCodec *codec = avcodec_find_encoder(format->audio_codec);
     if (! codec) {
       fprintf(stderr, "Could not find codec %d\n", format->audio_codec);
       exit(1);
     }
     AVFormatContext *fmtCtx = avformat_alloc_context();
     if (! fmtCtx) {
       fprintf(stderr, "error allocating AVFormatContext\n");
       exit(1);
     }
     fmtCtx->oformat = format;
     AVStream *st = avformat_new_stream(fmtCtx, codec);
     if (! st) {
       fprintf(stderr, "error allocating AVStream\n");
       exit(1);
     }
     if (fmtCtx->nb_streams != 1) {
       fprintf(stderr, "avformat_new_stream should have incremented nb_streams, but it&#39;s still %d\n", fmtCtx->nb_streams);
       exit(1);
     }
     AVCodecContext *c = st->codec;
     if (! c) {
       fprintf(stderr, "avformat_new_stream should have allocated a AVCodecContext for my stream\n");
       exit(1);
     }
     st->id = fmtCtx->nb_streams - 1;
     printf("Created stream %d\n", st->id);
      if (0 != (format->flags & AVFMT_GLOBALHEADER)) {
       c->flags |= CODEC_FLAG_GLOBAL_HEADER;
     }
     c->bit_rate = 64000;
     int bestSampleRate;
     if (NULL == codec->supported_samplerates) {
       bestSampleRate = 44100;
       printf("Setting sample rate: %d\n", bestSampleRate);
     } else {
       bestSampleRate = 0;
       for (const int *sample_rate_iter = codec->supported_samplerates; *sample_rate_iter != 0; sample_rate_iter++) {
         if (*sample_rate_iter >= bestSampleRate)
           bestSampleRate = *sample_rate_iter;
       }
       printf("Using best supported sample rate: %d\n", bestSampleRate);
     }
     c->sample_rate = bestSampleRate;
     c->channel_layout = AV_CH_LAYOUT_STEREO;
     c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
     c->time_base.num = 1;
     c->time_base.den = c->sample_rate;
     if (c->channels != 2) {
       fprintf(stderr, "av_get_channel_layout_nb_channels returned %d instead of 2\n", c->channels);
       exit(1);
     }
     c->sample_fmt = AV_SAMPLE_FMT_S16;
     int averr;
      if ((averr = avcodec_open2(c, codec, NULL)) < 0) {
       fprintf(stderr, "avcodec_open2 returned error %d\n", averr);
       exit(1);
     }
     AVIOContext *ioCtx = NULL;
      if (0 != (averr = avio_open(&ioCtx, file_url, AVIO_FLAG_WRITE))) {
       fprintf(stderr, "avio_open returned error %d\n", averr);
       exit(1);
     }
     if (ioCtx == NULL) {
       fprintf(stderr, "AVIOContext should have been set by avio_open\n");
       exit(1);
     }
     fmtCtx->pb = ioCtx;
     if (0 != (averr = avformat_write_header(fmtCtx, NULL))) {
       fprintf(stderr, "avformat_write_header returned error %d\n", averr);
       exit(1);
     }
     printf("Wrote header. fmtCtx->nb_streams=%d, st->time_base=%d/%d; st->avg_frame_rate=%d/%d\n", fmtCtx->nb_streams, st->time_base.num, st->time_base.den, st->avg_frame_rate.num, st->avg_frame_rate.den);
     int align = 1;
     int sample_size = av_get_bytes_per_sample(c->sample_fmt);
     if (sample_size != sizeof(int16_t)) {
       fprintf(stderr, "expected sample size=%zu but got %d\n", sizeof(int16_t), sample_size);
       exit(1);
     }
     int frame_size = c->frame_size != 0 ? c->frame_size : 4096;
     int bufferSize = av_samples_get_buffer_size(NULL, c->channels, frame_size, c->sample_fmt, align);
     int expectedBufferSize = frame_size * c->channels * sample_size;
      int supports_small_last_frame = c->frame_size == 0 ? 1 : 0 != (codec->capabilities & CODEC_CAP_SMALL_LAST_FRAME);
     if (bufferSize != expectedBufferSize) {
       fprintf(stderr, "expected buffer size=%d but got %d\n", expectedBufferSize, bufferSize);
       exit(1);
     }
     int16_t *samples = (int16_t*)malloc(bufferSize);

     uint32_t audio_time = 0;  // unit: (1/c->sample_rate) s
     uint32_t audio_sample_count = supports_small_last_frame ?
       3 * c->sample_rate :
       3 * c->sample_rate / frame_size * frame_size;
      while (audio_time < audio_sample_count) {
       uint32_t frame_audio_time = audio_time; // unit: (1/c->sample_rate) s
       AVFrame *frame = avcodec_alloc_frame();
       if (frame == NULL) {
         fprintf(stderr, "avcodec_alloc_frame failed\n");
         exit(1);
       }
        for (uint32_t i = 0; i != frame_size && audio_time < audio_sample_count; i++, audio_time++) {
         samples[2*i] = samples[2*i + 1] = 10000 * sin(2*M_PI*440/c->sample_rate * audio_time);
         frame->nb_samples = i+1;  // actually unused during encoding
       }
       // frame->format = c->sample_fmt;  // unused during encoding
       frame->pts = av_rescale_q(frame_audio_time, c->time_base, st->time_base);
       if (0 != (averr = avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt, (const uint8_t*)samples, bufferSize, align))) {
         fprintf(stderr, "avcodec_fill_audio_frame returned error %d\n", averr);
         exit(1);
       }
       AVPacket packet;
        av_init_packet(&packet);
       packet.data = NULL;
       packet.size = 0;
       int got_packet;
        if (0 != (averr = avcodec_encode_audio2(c, &packet, frame, &got_packet))) {
         fprintf(stderr, "avcodec_encode_audio2 returned error %d\n", averr);
         exit(1);
       }
       if (got_packet) {
           packet.stream_index = st->index;
          if (0 < (averr = av_write_frame(fmtCtx, &packet))) {
           fprintf(stderr, "av_write_frame returned error %d\n", averr);
           exit(1);
         } else if (averr == 1) {
           // end of stream wanted.
         }
       }
       printf("encoded frame: codec time = %u; format pts=%ld = av_rescale_q(%u,%d/%d,%d/%d) (%.02fs) contains %d samples (%.02fs); got_packet=%d; packet.size=%d\n",
           frame_audio_time,
           frame->pts,
           frame_audio_time, c->time_base.num, c->time_base.den, st->time_base.num, st->time_base.den,
           1.*frame_audio_time/c->sample_rate, frame->nb_samples, 1.*frame->nb_samples/c->sample_rate, got_packet, packet.size);
       av_free(frame);
     }
     free(samples);
     cleanupFile:
     if (0 != (averr = av_write_trailer(fmtCtx))) {
       fprintf(stderr, "av_write_trailer returned error %d\n", averr);
       exit(1);
     }

     avio_flush(ioCtx);
     avio_close(ioCtx);
     avformat_free_context(fmtCtx);
    }