
Media (91)
-
Chuck D with Fine Arts Militia - No Meaning No
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Paul Westerberg - Looking Up in Heaven
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Le Tigre - Fake French
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Thievery Corporation - DC 3000
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Dan the Automator - Relaxation Spa Treatment
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Gilberto Gil - Oslodum
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (16)
-
Customizing by adding your logo, banner or background image
5 September 2013. Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013. Present the changes in your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
In the default MédiaSPIP theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: for a document of the news type, the default fields are: publication date (customize the publication date) (...)
-
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
On other sites (4065)
-
Saying Goodbye To Old Machines
I recently sent a few old machines off for recycling. Both had relevance to the early days of the FATE testing effort. As is my custom, I photographed them (poorly, of course).
First, there’s the PowerPC-based Mac Mini I procured thanks to a Craigslist ad in late 2006. I had plans to develop automated FFmpeg building and testing and was already looking ahead toward testing multiple CPU architectures. Again, this was 2006 and PowerPC wasn’t completely on the outs yet– although Apple’s MacTel transition was in full swing, the entire new generation of video game consoles was based on PowerPC.
I remember trying to find a Mac Mini PPC on Craigslist. Many were to be found, but all asked more than the price of even a new Mac Mini Intel, always because the seller was leaving all of last year’s applications and perhaps including a monitor, neither of which I needed. Fortunately, I found this bare Mac Mini. Also fortunate was the fact that it was far easier to install Linux on it than the first PowerPC machine I owned.
After FATE operation transitioned away from me, I still kept the machine in service as an edge server and automated backup machine. That is, until the hard drive failed on reboot one day. Thus, when it was finally time to recycle the computer, I felt it necessary to disassemble the machine and remove the hard drive for possible salvage and then for destruction.
If you’ve ever attempted to upgrade or otherwise service this style of Mac Mini, you will no doubt recognize the pictured paint scraper tool as standard kit. I have had that tool since I first endeavored to upgrade the RAM to 1 GB from the standard 1/2 GB. Performing such activities on a Mac Mini is tedious, but only if you care about putting it back together afterwards.
The next machine is a bit older. I put it together nearly a decade ago, early in 2005. This machine’s original duty was “download agent”– this would be more specifically called a BitTorrent machine in modern tech parlance. Back then, I placed it on someone else’s woefully underutilized home broadband connection (with their permission, of course) when I was too cheap to upgrade from dialup.
This is a small form factor system from VIA that was clearly designed with home theater PC (HTPC) use cases in mind. It has a VIA C3 x86-compatible CPU (according to my notes, Centaur VIA Samuel 2 stepping 03, flags: fpu de tsc msr cx8 mtrr pge mmx 3dnow) and 128 MB of RAM (initially; I upgraded it to 512 MB some years later, just for the sake of doing it). And then there was the 120 GB PATA HD for all that downloaded goodness.
I have specific memories of a time when my main computer at home wasn’t working correctly for one reason or another. Instead, I logged into this machine remotely via SSH to make several optimizations and fixes on FFmpeg’s VP3/Theora video decoder, all from the terminal, without being able to see the decoded images with my own eyes (which is why I insist that even blind people could work on video codecs).
By the time I got my own broadband, I had become inspired to attempt the automated build and test system for FFmpeg. This was the machine I used for prototyping early brainstorms of FATE. By the time I put a basic build/test system into place in early 2008, I had much faster computers that could build and test the project– the obvious limitation of this machine was that it could take at least half an hour to build the entire codebase, and that was the project as it stood 8 years ago.
So the machine got stuffed in a closet somewhere along the line. The next time I pulled it out was in 2010 when I wanted to toy with Dreamcast programming once more (the machine appears in one of the photos in this post). This was the only machine I still owned which still had an RS-232 serial port (I didn’t know much about USB serial converters yet), plus it still had a bunch of pre-compiled DC homebrew binaries (I was having trouble getting the toolchain to work right).
The next time I dusted off this machine was late last year when I was trying some experiments with the Microsoft Xbox's IDE drive (a photo in that post also shows the machine; this thing shows up a lot on this blog). The VIA machine was the only machine I still owned which had 40-pin IDE connectors, which was crucial to my experiment.
At this point, I was trying to make the machine more useful, which meant replacing the ancient Gentoo Linux distribution as well as simply interacting with it via a keyboard and mouse. I have a long Evernote entry documenting a comedy of errors revolving around this little box. The interaction troubles were due to the fact that I didn't have any PS/2 keyboards left and I couldn't make a USB keyboard work with it. Diego was able to explain that I needed to flip a bit in the BIOS to address this, which worked. As for upgrading the OS, I tried numerous Linux distributions large and small, mostly focusing on the small. None worked. I eventually learned that, while I was trying to use i686 distributions, this machine did not actually qualify as an i686 CPU; installations usually booted but failed because the default kernel required the cmov instruction. I was advised to try i386 distros instead. My notes don't indicate whether I had any luck on this front before I gave up and moved on.
I just made the connection that this VIA machine has two 40-pin IDE connectors, which means that the thing was technically capable of supporting up to 4 IDE devices. Obviously, the computer couldn't really accommodate that in terms of space or power. When I wanted to try installing a new OS, I needed to take off the top and connect a rather bulky IDE CD-ROM drive. This computer's casing was supposed to be able to support a slimline optical drive (perhaps like the type found in laptops), but I could never quite visualize how that was supposed to work, space-wise. When I disassembled the PowerPC Mac Mini, I realized I might be able to repurpose that machine's optical drive for this computer. Obviously, I thought better of trying since both machines are off to the recycle pile.
I would still like to work on the Xbox project a bit more, but I procured a different, unused, much more powerful yet still old computer that has a motherboard with 1 PATA connector in addition to 6 SATA connectors. If I ever get around to toying with Linux kernel development, this should be a much more appropriate platform to use.
I thought about turning this machine into an old Windows XP (and lower, down to Windows 3.1) gaming platform; the capabilities of the machine would probably be perfect for a huge portion of my Windows game collection. But I think the lack of an optical drive renders this idea intractable. External USB drives are likely out of the question since there is very little chance that this motherboard featured USB 2.0 (the specs don't mention 2.0, so the USB ports are probably 1.1).
So it is with fond memories that I send off both machines, sans hard drives, to the recycle pile. I’m still deciding on an appropriate course of action for failed hard drives, though.
-
Why does the frame time increase over time when decoding video using OpenCV?
21 February 2024, by ZeunO8. I have set up OpenCV in my project: I added the OpenCV GitHub repo as a submodule and included it in my CMake dependencies file like so:


set(WITH_FFMPEG ON)
set(VIDEOIO_PLUGIN_LIST "ffmpeg")
set(BUILD_PERF_TESTS OFF)
set(BUILD_TESTS OFF)
set(INSTALL_TESTS OFF)
add_subdirectory(${COJE_SRC_DIR}/vendor/opencv build/build_opencv)



I then set up a Video struct inheriting from IEntity (to get it working with my render driver's draw system), and it looks like:


#pragma once
#include <opencv2/opencv.hpp>
#include <coje/interfaces/IEntity.hpp>
#include <coje/enums/EFileLocation.hpp>
#include <coje/String.hpp>
#include <coje/graphics/Texture.hpp>

namespace coje::entitys
{
 struct Video : IEntity
 {
 String filePath;
 EFileLocation fileLocation;
 String tempname;
 glm::vec2 size;
 UniquePointer<cv::VideoCapture> videoCapturePointer;
 cv::Mat frame;
 cv::Mat frameConverted;
 Floating64 fps = 0;
 Floating64 frameCount = 0;
 Integer64 currentFrameIndex = -1;
 Video(const String &filePath, const EFileLocation &fileLocation, const glm::vec2 &size, const glm::vec3 &position, const glm::quat &rotation);
 ~Video();
 void updateTextureWithFrame(const uInteger64 &frameIndex, UniquePointer<Texture> &texturePointer);
 const Boolean resize(const glm::vec2 &size);
 Boolean update(const uInteger64 &elapsedTimeMs);
 };
}


The source for Video.cpp is :


#include <coje/bullet.hpp>
#include <coje/Common.hpp>
#include <coje/Entitys/Video.hpp>
#include <coje/Logger.hpp>
#include <coje/Timer.hpp>
#include <cstdio>
using namespace coje::entitys;
/*
 */
Video::Video(const String &filePath, const EFileLocation &fileLocation, const glm::vec2 &size, const glm::vec3 &position, const glm::quat &rotation) : IEntity(EntityType)
{
 this->position = position;
 this->rotation = rotation;
 File videoFile(filePath, fileLocation, "r");
 auto videoBytes = videoFile.toBytes();
 tempname = std::tmpnam(0);
 {
 File tempFile(tempname, EFileLocation::Relative, "w");
 tempFile & videoBytes;
 }
 videoCapturePointer = {ReleaseType::Delete, new cv::VideoCapture(tempname.c_str(), cv::CAP_FFMPEG), 1};
 auto &videoCapture = *videoCapturePointer.pointer;
 if (!videoCapture.isOpened())
 {
 Logger(LogType::ERROR, "%s\n", "Error opening video stream from memory");
 return;
 }
 // videoCapture.set(cv::CAP_PROP_BUFFERSIZE, 100);
 uInteger64 bufferSize = videoCapture.get(cv::CAP_PROP_BUFFERSIZE);
 Logger(LogType::INFO, "BufferSize: %llu\n", bufferSize);
 fps = videoCapture.get(cv::CAP_PROP_FPS);
 frameCount = videoCapture.get(cv::CAP_PROP_FRAME_COUNT);
 uInteger64 frameWidth = videoCapture.get(cv::CAP_PROP_FRAME_WIDTH),
 frameHeight = videoCapture.get(cv::CAP_PROP_FRAME_HEIGHT);
 resize(size);
 textures.push_back({ReleaseType::Delete, new Texture(frameWidth, frameHeight, ETextureFormat::RGB8, ETextureType::UnsignedByte), 1});
 glm::ivec3 *indices = (glm::ivec3 *)(*this).operator()(IEntity::Quanta::Indice, 2);
 indices[0] = {3, 2, 1}; // front
 indices[1] = {1, 0, 3};
 glm::vec2 *uvs = (glm::vec2 *)(*this).operator()<float>(IEntity::Quanta::UV2, 4);
 auto _uvs = Common::getUVs2DQuad();
 for (int index = 0; index < 4; index++)
 {
 uvs[index] = _uvs._data[index];
 }
 TimerFunctions::addFunction({this, &Video::update}, 0, 1000 / fps);
 return;
};
/*
 */
Video::~Video()
{
 File tempFile(tempname);
 tempFile.remove();
};
/*
 */
void Video::updateTextureWithFrame(const uInteger64 &frameIndex, UniquePointer<Texture> &texturePointer)
{
 auto start = std::chrono::high_resolution_clock::now();
 auto &videoCapture = *videoCapturePointer.pointer;
 videoCapture.set(cv::CAP_PROP_POS_FRAMES, frameIndex);
 Boolean frameGrabSuccess = videoCapture.grab();
 if (!frameGrabSuccess)
 {
 Logger(LogType::ERROR, "%s\n", "Failed to grab frame from VideoCapture");
 return;
 }
 Boolean frameRetrieveSuccess = videoCapture.retrieve(frame);
 if (!frameRetrieveSuccess)
 {
 Logger(LogType::ERROR, "%s\n", "Failed to retrieve frame from VideoCapture");
 return;
 }
 auto end = std::chrono::high_resolution_clock::now();
 std::chrono::duration elapsed = end - start;
 std::cout << "Video::updateTextureWithFrame took " << elapsed.count() << "ms\n";
 cv::cvtColor(frame, frameConverted, cv::COLOR_BGR2RGB);
 cv::flip(frameConverted, frameConverted, 0);
 if (texturePointer.pointer)
 {
 auto &texture = texturePointer.pointer;
 if (texture->width != frameConverted.cols || texture->height != frameConverted.rows)
 {
 goto _newTexture;
 }
 else
 {
 texture->update(frameConverted.data);
 }
 }
 else
 {
 _newTexture:
 texturePointer = {ReleaseType::Delete, new Texture(frameConverted.cols, frameConverted.rows, frameConverted.data, ETextureFormat::RGB8, ETextureType::UnsignedByte), 1};
 }
};
/*
 */
const Boolean Video::resize(const glm::vec2 &_size)
{
 size = _size;
 glm::vec3 *vertices = (glm::vec3 *)(*this).operator()<float>(IEntity::Quanta::Vertex, 4);
 glm::vec3 topRight = {size.x / 2, size.y / 2, 0};
 glm::vec3 bottomRight = {size.x / 2, -(size.y / 2), 0};
 glm::vec3 bottomLeft = {-(size.x / 2), -(size.y / 2), 0};
 glm::vec3 topLeft = {-(size.x / 2), size.y / 2, 0};
 vertices[0] = topRight;
 vertices[1] = bottomRight;
 vertices[2] = bottomLeft;
 vertices[3] = topLeft;
 *changedPointer = true;
 return true;
};
/*
 */
Boolean Video::update(const uInteger64 &elapsedTimeMs)
{
 currentFrameIndex++;
 Logger(LogType::INFO, "Video-elapsedTime: %llums\n", elapsedTimeMs);
 if (currentFrameIndex == frameCount - 1)
 {
 return false;
 }
 auto &texturePointer = textures._data[0];
 updateTextureWithFrame(currentFrameIndex, texturePointer);
 return true;
};


When running a simple test video at 1280x720, the updateTextureWithFrame timer starts at around 12ms but gradually increases over time to 100ms and beyond, causing video playback to run at lower than the defined frames per second.

What is causing this gradual increase in updateTextureWithFrame? How can I solve it?
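
For comparison, here is a minimal sketch (assuming a hypothetical local video.mp4 and an OpenCV build with the FFmpeg backend) that decodes frames sequentially with cv::VideoCapture::read, i.e. without the per-frame cv::CAP_PROP_POS_FRAMES seek that updateTextureWithFrame performs before every grab:

#include <opencv2/opencv.hpp>
#include <chrono>
#include <iostream>

int main() {
    // Hypothetical test clip; any 1280x720 file opened through the FFmpeg backend behaves the same way.
    cv::VideoCapture cap("video.mp4", cv::CAP_FFMPEG);
    if (!cap.isOpened()) return 1;
    cv::Mat frame, rgb;
    while (true) {
        auto start = std::chrono::steady_clock::now();
        if (!cap.read(frame)) break;               // grab() + retrieve(), no repositioning
        cv::cvtColor(frame, rgb, cv::COLOR_BGR2RGB);
        cv::flip(rgb, rgb, 0);
        auto end = std::chrono::steady_clock::now();
        std::chrono::duration<double, std::milli> elapsed = end - start;
        std::cout << "sequential decode took " << elapsed.count() << "ms\n";
        // rgb.data would be uploaded to the texture here
    }
    return 0;
}

If this sequential loop stays flat while the seek-per-frame version keeps climbing, the repositioning (which, with the FFmpeg backend, generally means seeking to a keyframe and decoding forward) is the part that scales with the frame index, rather than the decode or the color conversion.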

Edit:


uInteger64 bufferSize = videoCapture.get(cv::CAP_PROP_BUFFERSIZE);
Logger(LogType::INFO, "BufferSize: %llu\n", bufferSize);



prints BufferSize: 0, indicating that setting CAP_PROP_BUFFERSIZE is not supported by the FFmpeg backend.
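
As a quick sanity check (a minimal sketch under the same assumptions as above: a hypothetical video.mp4 opened through the FFmpeg backend), cv::VideoCapture::set returns false when the backend does not accept a property, so the unsupported CAP_PROP_BUFFERSIZE can be detected at the call site instead of reading the value back:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap("video.mp4", cv::CAP_FFMPEG); // hypothetical test clip
    if (!cap.isOpened()) return 1;
    // set() reports whether the backend accepted the property
    bool supported = cap.set(cv::CAP_PROP_BUFFERSIZE, 100);
    std::cout << "CAP_PROP_BUFFERSIZE supported: " << std::boolalpha << supported << "\n";
    return 0;
}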


Edit 2:
Some logs of the timings:


Video::updateTextureWithFrame took 16.5161ms
Video::updateTextureWithFrame took 21.6109ms
Video::updateTextureWithFrame took 21.1443ms
Video::updateTextureWithFrame took 20.4253ms
Video::updateTextureWithFrame took 23.9015ms
Video::updateTextureWithFrame took 22.1348ms
Video::updateTextureWithFrame took 21.3723ms
Video::updateTextureWithFrame took 21.2186ms
Video::updateTextureWithFrame took 24.0211ms
Video::updateTextureWithFrame took 24.5907ms
Video::updateTextureWithFrame took 23.2134ms
Video::updateTextureWithFrame took 25.6763ms
Video::updateTextureWithFrame took 25.416ms
Video::updateTextureWithFrame took 25.2314ms
Video::updateTextureWithFrame took 26.3919ms
Video::updateTextureWithFrame took 24.1883ms
Video::updateTextureWithFrame took 27.7095ms
Video::updateTextureWithFrame took 26.5594ms
Video::updateTextureWithFrame took 26.6618ms
Video::updateTextureWithFrame took 29.496ms
Video::updateTextureWithFrame took 27.2731ms
Video::updateTextureWithFrame took 27.5113ms
Video::updateTextureWithFrame took 30.2855ms
Video::updateTextureWithFrame took 27.6773ms
Video::updateTextureWithFrame took 30.5532ms
Video::updateTextureWithFrame took 32.6858ms
Video::updateTextureWithFrame took 32.8735ms
Video::updateTextureWithFrame took 31.7369ms
Video::updateTextureWithFrame took 31.2453ms
Video::updateTextureWithFrame took 30.9424ms
Video::updateTextureWithFrame took 36.7046ms
Video::updateTextureWithFrame took 33.6224ms
Video::updateTextureWithFrame took 32.0368ms
Video::updateTextureWithFrame took 33.0109ms
Video::updateTextureWithFrame took 32.2155ms
Video::updateTextureWithFrame took 33.5314ms
Video::updateTextureWithFrame took 33.576ms
Video::updateTextureWithFrame took 37.8993ms
Video::updateTextureWithFrame took 33.9495ms
Video::updateTextureWithFrame took 35.776ms
Video::updateTextureWithFrame took 36.2566ms
Video::updateTextureWithFrame took 36.5887ms
Video::updateTextureWithFrame took 40.0839ms
Video::updateTextureWithFrame took 38.5146ms
Video::updateTextureWithFrame took 40.72ms
Video::updateTextureWithFrame took 37.8345ms
Video::updateTextureWithFrame took 37.9925ms
Video::updateTextureWithFrame took 39.0402ms
Video::updateTextureWithFrame took 39.8856ms
Video::updateTextureWithFrame took 41.3421ms
Video::updateTextureWithFrame took 41.0703ms
Video::updateTextureWithFrame took 42.9482ms
Video::updateTextureWithFrame took 42.9199ms
Video::updateTextureWithFrame took 44.2593ms
Video::updateTextureWithFrame took 41.2746ms
Video::updateTextureWithFrame took 45.7017ms
Video::updateTextureWithFrame took 46.1854ms
Video::updateTextureWithFrame took 44.154ms
Video::updateTextureWithFrame took 42.6004ms
Video::updateTextureWithFrame took 47.2442ms
Video::updateTextureWithFrame took 43.4156ms
Video::updateTextureWithFrame took 47.9288ms
Video::updateTextureWithFrame took 45.3475ms
Video::updateTextureWithFrame took 46.9646ms
Video::updateTextureWithFrame took 48.4978ms
Video::updateTextureWithFrame took 45.1322ms
Video::updateTextureWithFrame took 48.1365ms
Video::updateTextureWithFrame took 49.8857ms
Video::updateTextureWithFrame took 47.4854ms
Video::updateTextureWithFrame took 48.2378ms
Video::updateTextureWithFrame took 50.9174ms
Video::updateTextureWithFrame took 52.347ms
Video::updateTextureWithFrame took 51.6252ms
Video::updateTextureWithFrame took 52.2018ms
Video::updateTextureWithFrame took 49.2384ms
Video::updateTextureWithFrame took 50.9491ms
Video::updateTextureWithFrame took 52.2139ms
Video::updateTextureWithFrame took 53.3229ms
Video::updateTextureWithFrame took 56.0199ms
Video::updateTextureWithFrame took 55.582ms
Video::updateTextureWithFrame took 55.2675ms
Video::updateTextureWithFrame took 54.9446ms
Video::updateTextureWithFrame took 54.7955ms
Video::updateTextureWithFrame took 54.0296ms
Video::updateTextureWithFrame took 54.0375ms
Video::updateTextureWithFrame took 57.0916ms
Video::updateTextureWithFrame took 55.2474ms
Video::updateTextureWithFrame took 56.8046ms
Video::updateTextureWithFrame took 57.562ms
Video::updateTextureWithFrame took 59.9115ms
Video::updateTextureWithFrame took 59.3991ms
Video::updateTextureWithFrame took 60.0536ms
Video::updateTextureWithFrame took 59.9457ms
Video::updateTextureWithFrame took 57.5088ms
Video::updateTextureWithFrame took 59.1255ms
Video::updateTextureWithFrame took 62.2311ms
Video::updateTextureWithFrame took 59.0422ms
Video::updateTextureWithFrame took 62.0419ms
Video::updateTextureWithFrame took 62.0586ms
Video::updateTextureWithFrame took 64.0988ms
Video::updateTextureWithFrame took 64.743ms
Video::updateTextureWithFrame took 63.008ms
Video::updateTextureWithFrame took 65.1726ms
Video::updateTextureWithFrame took 63.3618ms
Video::updateTextureWithFrame took 65.6431ms
Video::updateTextureWithFrame took 63.8957ms
Video::updateTextureWithFrame took 65.1142ms
Video::updateTextureWithFrame took 67.2243ms
Video::updateTextureWithFrame took 65.1302ms
Video::updateTextureWithFrame took 66.4947ms
Video::updateTextureWithFrame took 66.092ms
Video::updateTextureWithFrame took 68.6997ms
Video::updateTextureWithFrame took 70.5683ms
Video::updateTextureWithFrame took 71.9019ms
Video::updateTextureWithFrame took 68.6088ms
Video::updateTextureWithFrame took 70.7946ms
Video::updateTextureWithFrame took 68.263ms
Video::updateTextureWithFrame took 66.1565ms
Video::updateTextureWithFrame took 70.6742ms
Video::updateTextureWithFrame took 70.7035ms
Video::updateTextureWithFrame took 73.8002ms
Video::updateTextureWithFrame took 73.1897ms
Video::updateTextureWithFrame took 74.006ms
Video::updateTextureWithFrame took 74.1048ms
Video::updateTextureWithFrame took 72.9378ms
Video::updateTextureWithFrame took 75.0651ms
Video::updateTextureWithFrame took 73.5676ms
Video::updateTextureWithFrame took 73.7706ms
Video::updateTextureWithFrame took 74.0839ms
Video::updateTextureWithFrame took 74.6773ms
Video::updateTextureWithFrame took 75.8827ms
Video::updateTextureWithFrame took 74.4724ms
Video::updateTextureWithFrame took 75.2119ms
Video::updateTextureWithFrame took 83.4102ms
Video::updateTextureWithFrame took 77.6811ms
Video::updateTextureWithFrame took 78.7307ms
Video::updateTextureWithFrame took 80.1705ms
Video::updateTextureWithFrame took 78.6064ms
Video::updateTextureWithFrame took 80.803ms
Video::updateTextureWithFrame took 80.0117ms
Video::updateTextureWithFrame took 78.2948ms
Video::updateTextureWithFrame took 81.0375ms
Video::updateTextureWithFrame took 78.7389ms
Video::updateTextureWithFrame took 80.2201ms
Video::updateTextureWithFrame took 82.8578ms
Video::updateTextureWithFrame took 84.2388ms
Video::updateTextureWithFrame took 84.6484ms
Video::updateTextureWithFrame took 87.6683ms
Video::updateTextureWithFrame took 82.8939ms
Video::updateTextureWithFrame took 84.015ms
Video::updateTextureWithFrame took 88.1832ms
Video::updateTextureWithFrame took 83.3894ms
Video::updateTextureWithFrame took 86.9088ms
Video::updateTextureWithFrame took 87.1049ms
Video::updateTextureWithFrame took 87.6748ms
Video::updateTextureWithFrame took 87.178ms
Video::updateTextureWithFrame took 84.7988ms
Video::updateTextureWithFrame took 89.528ms
Video::updateTextureWithFrame took 88.7021ms
Video::updateTextureWithFrame took 90.0357ms
Video::updateTextureWithFrame took 90.398ms
Video::updateTextureWithFrame took 87.8047ms
Video::updateTextureWithFrame took 90.2447ms
Video::updateTextureWithFrame took 94.6288ms
Video::updateTextureWithFrame took 88.9265ms
Video::updateTextureWithFrame took 89.01ms
Video::updateTextureWithFrame took 87.6294ms
Video::updateTextureWithFrame took 90.6988ms
Video::updateTextureWithFrame took 93.0173ms
Video::updateTextureWithFrame took 92.1651ms
Video::updateTextureWithFrame took 92.9234ms
Video::updateTextureWithFrame took 95.4223ms
Video::updateTextureWithFrame took 99.0941ms
Video::updateTextureWithFrame took 97.3014ms
Video::updateTextureWithFrame took 91.8709ms
Video::updateTextureWithFrame took 96.8951ms
Video::updateTextureWithFrame took 95.3506ms
Video::updateTextureWithFrame took 96.5474ms
Video::updateTextureWithFrame took 92.4739ms
Video::updateTextureWithFrame took 95.1857ms
Video::updateTextureWithFrame took 96.6743ms
Video::updateTextureWithFrame took 99.0657ms
Video::updateTextureWithFrame took 105.84ms
Video::updateTextureWithFrame took 99.3163ms
Video::updateTextureWithFrame took 127.942ms
Video::updateTextureWithFrame took 101.378ms
Video::updateTextureWithFrame took 98.6114ms
Video::updateTextureWithFrame took 101.161ms
Video::updateTextureWithFrame took 102.271ms
Video::updateTextureWithFrame took 100.77ms
Video::updateTextureWithFrame took 100.825ms
Video::updateTextureWithFrame took 100.64ms
Video::updateTextureWithFrame took 99.7002ms
Video::updateTextureWithFrame took 103.207ms
Video::updateTextureWithFrame took 107.135ms
Video::updateTextureWithFrame took 100.766ms
Video::updateTextureWithFrame took 103.321ms
Video::updateTextureWithFrame took 107.361ms
Video::updateTextureWithFrame took 104.086ms
Video::updateTextureWithFrame took 100.975ms
Video::updateTextureWithFrame took 105.846ms
Video::updateTextureWithFrame took 104.755ms
Video::updateTextureWithFrame took 105.893ms
Video::updateTextureWithFrame took 105.234ms
Video::updateTextureWithFrame took 109.415ms
Video::updateTextureWithFrame took 107.942ms
Video::updateTextureWithFrame took 109.816ms
Video::updateTextureWithFrame took 109.268ms
Video::updateTextureWithFrame took 111.918ms
Video::updateTextureWithFrame took 110.123ms
Video::updateTextureWithFrame took 109.975ms
Video::updateTextureWithFrame took 110.105ms
Video::updateTextureWithFrame took 115.888ms
Video::updateTextureWithFrame took 112.443ms
Video::updateTextureWithFrame took 111.795ms
Video::updateTextureWithFrame took 112.016ms
Video::updateTextureWithFrame took 115.857ms
Video::updateTextureWithFrame took 114.762ms
Video::updateTextureWithFrame took 112.551ms
Video::updateTextureWithFrame took 116.05ms
Video::updateTextureWithFrame took 119.133ms
Video::updateTextureWithFrame took 114.202ms
Video::updateTextureWithFrame took 119.864ms
Video::updateTextureWithFrame took 119.743ms
Video::updateTextureWithFrame took 119.911ms
Video::updateTextureWithFrame took 120.957ms
Video::updateTextureWithFrame took 117.611ms
Video::updateTextureWithFrame took 116.596ms
Video::updateTextureWithFrame took 116.859ms
Video::updateTextureWithFrame took 120.355ms
Video::updateTextureWithFrame took 121.932ms
Video::updateTextureWithFrame took 117.56ms
Video::updateTextureWithFrame took 122.747ms
Video::updateTextureWithFrame took 120.103ms
Video::updateTextureWithFrame took 123.497ms
Video::updateTextureWithFrame took 126.391ms
Video::updateTextureWithFrame took 123.512ms
Video::updateTextureWithFrame took 121.612ms
Video::updateTextureWithFrame took 130.169ms
Video::updateTextureWithFrame took 126.936ms
Video::updateTextureWithFrame took 122.812ms
Video::updateTextureWithFrame took 122.843ms
Video::updateTextureWithFrame took 124.214ms
Video::updateTextureWithFrame took 125.563ms
Video::updateTextureWithFrame took 128.024ms
Video::updateTextureWithFrame took 129.263ms
Video::updateTextureWithFrame took 130.028ms
Video::updateTextureWithFrame took 127.493ms
Video::updateTextureWithFrame took 129.553ms
Video::updateTextureWithFrame took 130.538ms
Video::updateTextureWithFrame took 22.6048ms
Video::updateTextureWithFrame took 20.2454ms
Video::updateTextureWithFrame took 19.9947ms
Video::updateTextureWithFrame took 21.2817ms
Video::updateTextureWithFrame took 22.6694ms
Video::updateTextureWithFrame took 25.5187ms
Video::updateTextureWithFrame took 19.8971ms
Video::updateTextureWithFrame took 22.2975ms
Video::updateTextureWithFrame took 21.4979ms
Video::updateTextureWithFrame took 25.6767ms
Video::updateTextureWithFrame took 23.4276ms
Video::updateTextureWithFrame took 25.5657ms
Video::updateTextureWithFrame took 23.2816ms
Video::updateTextureWithFrame took 26.8515ms
Video::updateTextureWithFrame took 24.0271ms
Video::updateTextureWithFrame took 24.4675ms
Video::updateTextureWithFrame took 25.6897ms
Video::updateTextureWithFrame took 28.7489ms
Video::updateTextureWithFrame took 24.6164ms
Video::updateTextureWithFrame took 29.6739ms
Video::updateTextureWithFrame took 27.8118ms
Video::updateTextureWithFrame took 30.3992ms
Video::updateTextureWithFrame took 28.2943ms
Video::updateTextureWithFrame took 29.9693ms
Video::updateTextureWithFrame took 30.6129ms



-
Why am I getting blips when encoding a sound file using Java JNA?
21 March 2014, by yonran. I have implemented a hello-world libavcodec program using JNA to generate a wav file containing a pure 440 Hz sine wave. But when I actually run the program, the wav file contains annoying clicks and blips (compare to the pure sine wav created from the C program). How am I calling avcodec_encode_audio2 wrong? Here is my Java code. All the sources are also on GitHub in case you want to try to compile it.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;
import java.util.Objects;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;
public class Sin {
/**
* Abstract class that allows you to put the initialization and cleanup
* code at the same place instead of separated by the big try block.
*/
public static abstract class SharedPtr<T> implements AutoCloseable {
public T ptr;
public SharedPtr(T ptr) {
this.ptr = ptr;
}
/**
* Abstract override forces method to throw no checked exceptions.
* Subclasses will call a C function that throws no exceptions.
*/
@Override public abstract void close();
}
/**
* @param args
* @throws IOException
* @throws LineUnavailableException
*/
public static void main(String[] args) throws IOException, LineUnavailableException {
final AvcodecLibrary avcodec = AvcodecLibrary.INSTANCE;
final AvformatLibrary avformat = AvformatLibrary.INSTANCE;
final AvutilLibrary avutil = AvutilLibrary.INSTANCE;
avcodec.avcodec_register_all();
avformat.av_register_all();
AVOutputFormat.ByReference format = null;
String format_name = "wav", file_url = "file:sinjava.wav";
for (AVOutputFormat.ByReference formatIter = avformat.av_oformat_next(null); formatIter != null; formatIter = avformat.av_oformat_next(formatIter)) {
formatIter.setAutoWrite(false);
String iterName = formatIter.name;
if (format_name.equals(iterName)) {
format = formatIter;
break;
}
}
Objects.requireNonNull(format);
System.out.format("Found format %s%n", format_name);
AVCodec codec = avcodec.avcodec_find_encoder(format.audio_codec); // one of AvcodecLibrary.CodecID
Objects.requireNonNull(codec);
codec.setAutoWrite(false);
try (
SharedPtr<AVFormatContext> fmtCtxPtr = new SharedPtr<AVFormatContext>(avformat.avformat_alloc_context()) {@Override public void close(){if (null!=ptr) avformat.avformat_free_context(ptr);}};
) {
AVFormatContext fmtCtx = Objects.requireNonNull(fmtCtxPtr.ptr);
fmtCtx.setAutoWrite(false);
fmtCtx.setAutoRead(false);
fmtCtx.oformat = format; fmtCtx.writeField("oformat");
AVStream st = avformat.avformat_new_stream(fmtCtx, codec);
if (null == st)
throw new IllegalStateException();
AVCodecContext c = st.codec;
if (null == c)
throw new IllegalStateException();
st.setAutoWrite(false);
fmtCtx.readField("nb_streams");
st.id = fmtCtx.nb_streams - 1; st.writeField("id");
assert st.id >= 0;
System.out.format("New stream: id=%d%n", st.id);
if (0 != (format.flags & AvformatLibrary.AVFMT_GLOBALHEADER)) {
c.flags |= AvcodecLibrary.CODEC_FLAG_GLOBAL_HEADER;
}
c.writeField("flags");
c.bit_rate = 64000; c.writeField("bit_rate");
int bestSampleRate;
if (null == codec.supported_samplerates) {
bestSampleRate = 44100;
} else {
bestSampleRate = 0;
for (int offset = 0, sample_rate = codec.supported_samplerates.getInt(offset); sample_rate != 0; codec.supported_samplerates.getInt(++offset)) {
bestSampleRate = Math.max(bestSampleRate, sample_rate);
}
assert bestSampleRate > 0;
}
c.sample_rate = bestSampleRate; c.writeField("sample_rate");
c.channel_layout = AvutilLibrary.AV_CH_LAYOUT_STEREO; c.writeField("channel_layout");
c.channels = avutil.av_get_channel_layout_nb_channels(c.channel_layout); c.writeField("channels");
assert 2 == c.channels;
c.sample_fmt = AvutilLibrary.AVSampleFormat.AV_SAMPLE_FMT_S16; c.writeField("sample_fmt");
c.time_base.num = 1;
c.time_base.den = bestSampleRate;
c.writeField("time_base");
c.setAutoWrite(false);
AudioFormat javaSoundFormat = new AudioFormat(bestSampleRate, Short.SIZE, c.channels, true, ByteOrder.nativeOrder() == ByteOrder.BIG_ENDIAN);
DataLine.Info javaDataLineInfo = new DataLine.Info(TargetDataLine.class, javaSoundFormat);
if (! AudioSystem.isLineSupported(javaDataLineInfo))
throw new IllegalStateException();
int err;
if ((err = avcodec.avcodec_open(c, codec)) < 0) {
throw new IllegalStateException();
}
assert c.channels != 0;
AVIOContext.ByReference[] ioCtxReference = new AVIOContext.ByReference[1];
if (0 != (err = avformat.avio_open(ioCtxReference, file_url, AvformatLibrary.AVIO_FLAG_WRITE))) {
throw new IllegalStateException("averror " + err);
}
try (
SharedPtr ioCtxPtr = new SharedPtr(ioCtxReference[0]) {@Override public void close(){if (null!=ptr) avutil.av_free(ptr.getPointer());}}
) {
AVIOContext.ByReference ioCtx = Objects.requireNonNull(ioCtxPtr.ptr);
fmtCtx.pb = ioCtx; fmtCtx.writeField("pb");
int averr = avformat.avformat_write_header(fmtCtx, null);
if (averr < 0) {
throw new IllegalStateException("" + averr);
}
st.read(); // it is modified by avformat_write_header
System.out.format("Wrote header. fmtCtx->nb_streams=%d, st->time_base=%d/%d; st->avg_frame_rate=%d/%d%n", fmtCtx.nb_streams, st.time_base.num, st.time_base.den, st.avg_frame_rate.num, st.avg_frame_rate.den);
avformat.avio_flush(ioCtx);
int frame_size = c.frame_size != 0 ? c.frame_size : 4096;
int expectedBufferSize = frame_size * c.channels * (Short.SIZE/8);
boolean supports_small_last_frame = c.frame_size == 0 ? true : 0 != (codec.capabilities & AvcodecLibrary.CODEC_CAP_SMALL_LAST_FRAME);
int bufferSize = avutil.av_samples_get_buffer_size((IntBuffer)null, c.channels, frame_size, c.sample_fmt, 1);
assert bufferSize == expectedBufferSize: String.format("expected %d; got %d", expectedBufferSize, bufferSize);
ByteBuffer samples = ByteBuffer.allocate(expectedBufferSize);
samples.order(ByteOrder.nativeOrder());
int audio_time = 0; // unit: (c.time_base) s = (1/c.sample_rate) s
int audio_sample_count = supports_small_last_frame ?
3 * c.sample_rate :
3 * c.sample_rate / frame_size * frame_size;
while (audio_time < audio_sample_count) {
int frame_audio_time = audio_time;
samples.clear();
int nb_samples_in_frame = 0;
// encode a single tone sound
for (; samples.hasRemaining() && audio_time < audio_sample_count; nb_samples_in_frame++, audio_time++) {
double x = 2*Math.PI*440/c.sample_rate * audio_time;
double y = 10000 * Math.sin(x);
samples.putShort((short) y);
samples.putShort((short) y);
}
samples.flip();
try (
SharedPtr<AVFrame> framePtr = new SharedPtr<AVFrame>(avcodec.avcodec_alloc_frame()) {@Override public void close() {if (null!=ptr) avutil.av_free(ptr.getPointer());}};
) {
AVFrame frame = Objects.requireNonNull(framePtr.ptr);
frame.setAutoRead(false); // will be an in param
frame.setAutoWrite(false);
frame.nb_samples = nb_samples_in_frame; frame.writeField("nb_samples"); // actually unused during encoding
// Presentation time, in AVStream.time_base units.
frame.pts = avutil.av_rescale_q(frame_audio_time, c.time_base, st.time_base); // i * codec_time_base / st_time_base
frame.writeField("pts");
assert c.channels > 0;
int bytesPerSample = avutil.av_get_bytes_per_sample(c.sample_fmt);
assert bytesPerSample > 0;
if (0 != (err = avcodec.avcodec_fill_audio_frame(frame, c.channels, c.sample_fmt, samples, samples.capacity(), 1))) {
throw new IllegalStateException(""+err);
}
AVPacket packet = new AVPacket(); // one of the few structs from ffmpeg with guaranteed size
avcodec.av_init_packet(packet);
packet.size = 0;
packet.data = null;
packet.stream_index = st.index; packet.writeField("stream_index");
// encode the samples
IntBuffer gotPacket = IntBuffer.allocate(1);
if (0 != (err = avcodec.avcodec_encode_audio2(c, packet, frame, gotPacket))) {
throw new IllegalStateException("" + err);
} else if (0 != gotPacket.get()) {
packet.read();
averr = avformat.av_write_frame(fmtCtx, packet);
if (averr < 0)
throw new IllegalStateException("" + averr);
}
System.out.format("encoded frame: codec time = %d; pts=%d = av_rescale_q(%d,%d/%d,%d/%d) (%.02fs) contains %d samples (%.02fs); got_packet=%d; packet.size=%d%n",
frame_audio_time,
frame.pts,
frame_audio_time, st.codec.time_base.num,st.codec.time_base.den,st.time_base.num,st.time_base.den,
1.*frame_audio_time/c.sample_rate, frame.nb_samples, 1.*frame.nb_samples/c.sample_rate, gotPacket.array()[0], packet.size);
}
}
if (0 != (err = avformat.av_write_trailer(fmtCtx))) {
throw new IllegalStateException();
}
avformat.avio_flush(ioCtx);
}
}
System.out.println("Done writing");
}
}
I also rewrote it in C, and the C version works fine without any blips. But I can't figure out how I am using the library differently; all the library function calls should be identical!
//! gcc --std=c99 sin.c $(pkg-config --cflags --libs libavutil libavformat libavcodec) -o sin
// sudo apt-get install libswscale-dev
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <math.h>
#include <libavutil/opt.h>
#include <libavutil/mathematics.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavcodec/avcodec.h>
int main(int argc, char *argv[]) {
const char *format_name = "wav", *file_url = "file:sin.wav";
avcodec_register_all();
av_register_all();
AVOutputFormat *format = NULL;
for (AVOutputFormat *formatIter = av_oformat_next(NULL); formatIter != NULL; formatIter = av_oformat_next(formatIter)) {
int hasEncoder = NULL != avcodec_find_encoder(formatIter->audio_codec);
if (0 == strcmp(format_name, formatIter->name)) {
format = formatIter;
break;
}
}
printf("Found format %s\n", format->name);
AVCodec *codec = avcodec_find_encoder(format->audio_codec);
if (! codec) {
fprintf(stderr, "Could not find codec %d\n", format->audio_codec);
exit(1);
}
AVFormatContext *fmtCtx = avformat_alloc_context();
if (! fmtCtx) {
fprintf(stderr, "error allocating AVFormatContext\n");
exit(1);
}
fmtCtx->oformat = format;
AVStream *st = avformat_new_stream(fmtCtx, codec);
if (! st) {
fprintf(stderr, "error allocating AVStream\n");
exit(1);
}
if (fmtCtx->nb_streams != 1) {
fprintf(stderr, "avformat_new_stream should have incremented nb_streams, but it's still %d\n", fmtCtx->nb_streams);
exit(1);
}
AVCodecContext *c = st->codec;
if (! c) {
fprintf(stderr, "avformat_new_stream should have allocated a AVCodecContext for my stream\n");
exit(1);
}
st->id = fmtCtx->nb_streams - 1;
printf("Created stream %d\n", st->id);
if (0 != (format->flags & AVFMT_GLOBALHEADER)) {
c->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
c->bit_rate = 64000;
int bestSampleRate;
if (NULL == codec->supported_samplerates) {
bestSampleRate = 44100;
printf("Setting sample rate: %d\n", bestSampleRate);
} else {
bestSampleRate = 0;
for (const int *sample_rate_iter = codec->supported_samplerates; *sample_rate_iter != 0; sample_rate_iter++) {
if (*sample_rate_iter >= bestSampleRate)
bestSampleRate = *sample_rate_iter;
}
printf("Using best supported sample rate: %d\n", bestSampleRate);
}
c->sample_rate = bestSampleRate;
c->channel_layout = AV_CH_LAYOUT_STEREO;
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
c->time_base.num = 1;
c->time_base.den = c->sample_rate;
if (c->channels != 2) {
fprintf(stderr, "av_get_channel_layout_nb_channels returned %d instead of 2\n", c->channels);
exit(1);
}
c->sample_fmt = AV_SAMPLE_FMT_S16;
int averr;
if ((averr = avcodec_open2(c, codec, NULL)) < 0) {
fprintf(stderr, "avcodec_open2 returned error %d\n", averr);
exit(1);
}
AVIOContext *ioCtx = NULL;
if (0 != (averr = avio_open(&ioCtx, file_url, AVIO_FLAG_WRITE))) {
fprintf(stderr, "avio_open returned error %d\n", averr);
exit(1);
}
if (ioCtx == NULL) {
fprintf(stderr, "AVIOContext should have been set by avio_open\n");
exit(1);
}
fmtCtx->pb = ioCtx;
if (0 != (averr = avformat_write_header(fmtCtx, NULL))) {
fprintf(stderr, "avformat_write_header returned error %d\n", averr);
exit(1);
}
printf("Wrote header. fmtCtx->nb_streams=%d, st->time_base=%d/%d; st->avg_frame_rate=%d/%d\n", fmtCtx->nb_streams, st->time_base.num, st->time_base.den, st->avg_frame_rate.num, st->avg_frame_rate.den);
int align = 1;
int sample_size = av_get_bytes_per_sample(c->sample_fmt);
if (sample_size != sizeof(int16_t)) {
fprintf(stderr, "expected sample size=%zu but got %d\n", sizeof(int16_t), sample_size);
exit(1);
}
int frame_size = c->frame_size != 0 ? c->frame_size : 4096;
int bufferSize = av_samples_get_buffer_size(NULL, c->channels, frame_size, c->sample_fmt, align);
int expectedBufferSize = frame_size * c->channels * sample_size;
int supports_small_last_frame = c->frame_size == 0 ? 1 : 0 != (codec->capabilities & CODEC_CAP_SMALL_LAST_FRAME);
if (bufferSize != expectedBufferSize) {
fprintf(stderr, "expected buffer size=%d but got %d\n", expectedBufferSize, bufferSize);
exit(1);
}
int16_t *samples = (int16_t*)malloc(bufferSize);
uint32_t audio_time = 0; // unit: (1/c->sample_rate) s
uint32_t audio_sample_count = supports_small_last_frame ?
3 * c->sample_rate :
3 * c->sample_rate / frame_size * frame_size;
while (audio_time < audio_sample_count) {
uint32_t frame_audio_time = audio_time; // unit: (1/c->sample_rate) s
AVFrame *frame = avcodec_alloc_frame();
if (frame == NULL) {
fprintf(stderr, "avcodec_alloc_frame failed\n");
exit(1);
}
for (uint32_t i = 0; i != frame_size && audio_time < audio_sample_count; i++, audio_time++) {
samples[2*i] = samples[2*i + 1] = 10000 * sin(2*M_PI*440/c->sample_rate * audio_time);
frame->nb_samples = i+1; // actually unused during encoding
}
// frame->format = c->sample_fmt; // unused during encoding
frame->pts = av_rescale_q(frame_audio_time, c->time_base, st->time_base);
if (0 != (averr = avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt, (const uint8_t*)samples, bufferSize, align))) {
fprintf(stderr, "avcodec_fill_audio_frame returned error %d\n", averr);
exit(1);
}
AVPacket packet;
av_init_packet(&packet);
packet.data = NULL;
packet.size = 0;
int got_packet;
if (0 != (averr = avcodec_encode_audio2(c, &packet, frame, &got_packet))) {
fprintf(stderr, "avcodec_encode_audio2 returned error %d\n", averr);
exit(1);
}
if (got_packet) {
packet.stream_index = st->index;
if (0 < (averr = av_write_frame(fmtCtx, &packet))) {
fprintf(stderr, "av_write_frame returned error %d\n", averr);
exit(1);
} else if (averr == 1) {
// end of stream wanted.
}
}
printf("encoded frame: codec time = %u; format pts=%ld = av_rescale_q(%u,%d/%d,%d/%d) (%.02fs) contains %d samples (%.02fs); got_packet=%d; packet.size=%d\n",
frame_audio_time,
frame->pts,
frame_audio_time, c->time_base.num, c->time_base.den, st->time_base.num, st->time_base.den,
1.*frame_audio_time/c->sample_rate, frame->nb_samples, 1.*frame->nb_samples/c->sample_rate, got_packet, packet.size);
av_free(frame);
}
free(samples);
cleanupFile:
if (0 != (averr = av_write_trailer(fmtCtx))) {
fprintf(stderr, "av_write_trailer returned error %d\n", averr);
exit(1);
}
avio_flush(ioCtx);
avio_close(ioCtx);
avformat_free_context(fmtCtx);
}