
Other articles (54)
-
MediaSPIP v0.2
21 June 2013 — MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)
The farm's regular Cron tasks
1 December 2010 — Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance of the shared-hosting farm on a regular basis. Coupled with a system Cron on the farm's central site, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)
Making files available
14 April 2011 — By default, on initialization, MediaSPIP does not let visitors download files, whether they are originals or the result of transformation or encoding. It only lets them view the files.
However, it is possible and easy to let visitors access these documents in various forms.
All of this happens in the skeleton configuration page. You need to go to the channel's administration area and choose, in the navigation (...)
On other sites (6244)
-
Why does adding an audio stream to ffmpeg's libavcodec output container cause a crash?
29 March 2021, by Sniggerfardimungus

As it stands, my project correctly uses libavcodec to decode a video, where each frame is manipulated (it doesn't matter how) and output to a new video. I've cobbled this together from examples found online, and it works. The result is a perfect .mp4 of the manipulated frames, minus the audio.

My problem is that when I try to add an audio stream to the output container, I get a crash in mux.c that I can't explain. It's in static int compute_muxer_pkt_fields(AVFormatContext *s, AVStream *st, AVPacket *pkt). Where st->internal->priv_pts->val = pkt->dts; is attempted, priv_pts is nullptr.

I don't recall the version number, but this is from a November 4, 2020 ffmpeg build from git.


My MediaContentMgr is much bigger than what I have here. I'm stripping out everything to do with the frame manipulation, so if I'm missing anything, please let me know and I'll edit.

The code that, when added, triggers the nullptr exception is called out inline.


The .h:


#ifndef _API_EXAMPLE_H
#define _API_EXAMPLE_H

#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include "glm/glm.hpp"

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>
}

#include "shader_s.h"

class MediaContainerMgr {
public:
 MediaContainerMgr(const std::string& infile, const std::string& vert, const std::string& frag, 
 const glm::vec3* extents);
 ~MediaContainerMgr();
 void render();
 bool recording() { return m_recording; }

 // Major thanks to "shi-yan" who helped make this possible:
 // https://github.com/shi-yan/videosamples/blob/master/libavmp4encoding/main.cpp
 bool init_video_output(const std::string& video_file_name, unsigned int width, unsigned int height);
 bool output_video_frame(uint8_t* buf);
 bool finalize_output();

private:
 AVFormatContext* m_format_context;
 AVCodec* m_video_codec;
 AVCodec* m_audio_codec;
 AVCodecParameters* m_video_codec_parameters;
 AVCodecParameters* m_audio_codec_parameters;
 AVCodecContext* m_codec_context;
 AVFrame* m_frame;
 AVPacket* m_packet;
 uint32_t m_video_stream_index;
 uint32_t m_audio_stream_index;
 
 void init_rendering(const glm::vec3* extents);
 int decode_packet();

 // For writing the output video:
 void free_output_assets();
 bool m_recording;
 AVOutputFormat* m_output_format;
 AVFormatContext* m_output_format_context;
 AVCodec* m_output_video_codec;
 AVCodecContext* m_output_video_codec_context;
 AVFrame* m_output_video_frame;
 SwsContext* m_output_scale_context;
 AVStream* m_output_video_stream;
 
 AVCodec* m_output_audio_codec;
 AVStream* m_output_audio_stream;
 AVCodecContext* m_output_audio_codec_context;
};

#endif



And the hellish .cpp:


#include 
#include 
#include 
#include 
#include 

#include "media_container_manager.h"

MediaContainerMgr::MediaContainerMgr(const std::string& infile, const std::string& vert, const std::string& frag,
 const glm::vec3* extents) :
 m_video_stream_index(-1),
 m_audio_stream_index(-1),
 m_recording(false),
 m_output_format(nullptr),
 m_output_format_context(nullptr),
 m_output_video_codec(nullptr),
 m_output_video_codec_context(nullptr),
 m_output_video_frame(nullptr),
 m_output_scale_context(nullptr),
 m_output_video_stream(nullptr)
{
 // AVFormatContext holds header info from the format specified in the container:
 m_format_context = avformat_alloc_context();
 if (!m_format_context) {
 throw "ERROR could not allocate memory for Format Context";
 }
 
 // open the file and read its header. Codecs are not opened here.
 if (avformat_open_input(&m_format_context, infile.c_str(), NULL, NULL) != 0) {
 throw "ERROR could not open input file for reading";
 }

 printf("format %s, duration %lldus, bit_rate %lld\n", m_format_context->iformat->name, m_format_context->duration, m_format_context->bit_rate);
 //read avPackets (?) from the avFormat (?) to get stream info. This populates format_context->streams.
 if (avformat_find_stream_info(m_format_context, NULL) < 0) {
 throw "ERROR could not get stream info";
 }

 for (unsigned int i = 0; i < m_format_context->nb_streams; i++) {
 AVCodecParameters* local_codec_parameters = NULL;
 local_codec_parameters = m_format_context->streams[i]->codecpar;
 printf("AVStream->time base before open coded %d/%d\n", m_format_context->streams[i]->time_base.num, m_format_context->streams[i]->time_base.den);
 printf("AVStream->r_frame_rate before open coded %d/%d\n", m_format_context->streams[i]->r_frame_rate.num, m_format_context->streams[i]->r_frame_rate.den);
 printf("AVStream->start_time %" PRId64 "\n", m_format_context->streams[i]->start_time);
 printf("AVStream->duration %" PRId64 "\n", m_format_context->streams[i]->duration);
 printf("duration(s): %lf\n", (float)m_format_context->streams[i]->duration / m_format_context->streams[i]->time_base.den * m_format_context->streams[i]->time_base.num);
 AVCodec* local_codec = NULL;
 local_codec = avcodec_find_decoder(local_codec_parameters->codec_id);
 if (local_codec == NULL) {
 throw "ERROR unsupported codec!";
 }

 if (local_codec_parameters->codec_type == AVMEDIA_TYPE_VIDEO) {
 if (m_video_stream_index == -1) {
 m_video_stream_index = i;
 m_video_codec = local_codec;
 m_video_codec_parameters = local_codec_parameters;
 }
 m_height = local_codec_parameters->height;
 m_width = local_codec_parameters->width;
 printf("Video Codec: resolution %dx%d\n", m_width, m_height);
 }
 else if (local_codec_parameters->codec_type == AVMEDIA_TYPE_AUDIO) {
 if (m_audio_stream_index == -1) {
 m_audio_stream_index = i;
 m_audio_codec = local_codec;
 m_audio_codec_parameters = local_codec_parameters;
 }
 printf("Audio Codec: %d channels, sample rate %d\n", local_codec_parameters->channels, local_codec_parameters->sample_rate);
 }

 printf("\tCodec %s ID %d bit_rate %lld\n", local_codec->name, local_codec->id, local_codec_parameters->bit_rate);
 }

 m_codec_context = avcodec_alloc_context3(m_video_codec);
 if (!m_codec_context) {
 throw "ERROR failed to allocate memory for AVCodecContext";
 }

 if (avcodec_parameters_to_context(m_codec_context, m_video_codec_parameters) < 0) {
 throw "ERROR failed to copy codec params to codec context";
 }

 if (avcodec_open2(m_codec_context, m_video_codec, NULL) < 0) {
 throw "ERROR avcodec_open2 failed to open codec";
 }

 m_frame = av_frame_alloc();
 if (!m_frame) {
 throw "ERROR failed to allocate AVFrame memory";
 }

 m_packet = av_packet_alloc();
 if (!m_packet) {
 throw "ERROR failed to allocate AVPacket memory";
 }
}

MediaContainerMgr::~MediaContainerMgr() {
 avformat_close_input(&m_format_context);
 av_packet_free(&m_packet);
 av_frame_free(&m_frame);
 avcodec_free_context(&m_codec_context);


 glDeleteVertexArrays(1, &m_VAO);
 glDeleteBuffers(1, &m_VBO);
}


bool MediaContainerMgr::advance_frame() {
 while (true) {
 if (av_read_frame(m_format_context, m_packet) < 0) {
 // Do we actually need to unref the packet if it failed?
 av_packet_unref(m_packet);
 continue;
 //return false;
 }
 else {
 if (m_packet->stream_index == m_video_stream_index) {
 //printf("AVPacket->pts %" PRId64 "\n", m_packet->pts);
 int response = decode_packet();
 av_packet_unref(m_packet);
 if (response != 0) {
 continue;
 //return false;
 }
 return true;
 }
 else {
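 // Non-video packet (typically audio): currently just logged and, aside
 // from the disabled send below, dropped on the floor.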
 printf("m_packet->stream_index: %d\n", m_packet->stream_index);
 printf(" m_packet->pts: %lld\n", m_packet->pts);
 printf(" mpacket->size: %d\n", m_packet->size);
 if (m_recording) {
 int err = 0;
 //err = avcodec_send_packet(m_output_video_codec_context, m_packet);
 printf(" encoding error: %d\n", err);
 }
 }
 }

 // We're done with the packet (it's been unpacked to a frame), so deallocate & reset to defaults:
/*
 if (m_frame == NULL)
 return false;

 if (m_frame->data[0] == NULL || m_frame->data[1] == NULL || m_frame->data[2] == NULL) {
 printf("WARNING: null frame data");
 continue;
 }
*/
 }
}

int MediaContainerMgr::decode_packet() {
 // Supply raw packet data as input to a decoder
 // https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga58bc4bf1e0ac59e27362597e467efff3
 int response = avcodec_send_packet(m_codec_context, m_packet);

 if (response < 0) {
 char buf[256];
 av_strerror(response, buf, 256);
 printf("Error while receiving a frame from the decoder: %s\n", buf);
 return response;
 }

 // Return decoded output data (into a frame) from a decoder
 // https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga11e6542c4e66d3028668788a1a74217c
 response = avcodec_receive_frame(m_codec_context, m_frame);
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
 return response;
 } else if (response < 0) {
 char buf[256];
 av_strerror(response, buf, 256);
 printf("Error while receiving a frame from the decoder: %s\n", buf);
 return response;
 } else {
 printf(
 "Frame %d (type=%c, size=%d bytes) pts %lld key_frame %d [DTS %d]\n",
 m_codec_context->frame_number,
 av_get_picture_type_char(m_frame->pict_type),
 m_frame->pkt_size,
 m_frame->pts,
 m_frame->key_frame,
 m_frame->coded_picture_number
 );
 }
 return 0;
}


bool MediaContainerMgr::init_video_output(const std::string& video_file_name, unsigned int width, unsigned int height) {
 if (m_recording)
 return true;
 m_recording = true;

 advance_to(0L); // I've deleted the implementation. It just seeks to the beginning of the vid. Works fine.

 if (!(m_output_format = av_guess_format(nullptr, video_file_name.c_str(), nullptr))) {
 printf("Cannot guess output format.\n");
 return false;
 }

 int err = avformat_alloc_output_context2(&m_output_format_context, m_output_format, nullptr, video_file_name.c_str());
 if (err < 0) {
 printf("Failed to allocate output context.\n");
 return false;
 }

 //TODO(P0): Break out the video and audio inits into their own methods.
 m_output_video_codec = avcodec_find_encoder(m_output_format->video_codec);
 if (!m_output_video_codec) {
 printf("Failed to create video codec.\n");
 return false;
 }
 m_output_video_stream = avformat_new_stream(m_output_format_context, m_output_video_codec);
 if (!m_output_video_stream) {
 printf("Failed to find video format.\n");
 return false;
 } 
 m_output_video_codec_context = avcodec_alloc_context3(m_output_video_codec);
 if (!m_output_video_codec_context) {
 printf("Failed to create video codec context.\n");
 return(false);
 }
 m_output_video_stream->codecpar->codec_id = m_output_format->video_codec;
 m_output_video_stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 m_output_video_stream->codecpar->width = width;
 m_output_video_stream->codecpar->height = height;
 m_output_video_stream->codecpar->format = AV_PIX_FMT_YUV420P;
 // Use the same bit rate as the input stream.
 m_output_video_stream->codecpar->bit_rate = m_format_context->streams[m_video_stream_index]->codecpar->bit_rate;
 m_output_video_stream->avg_frame_rate = m_format_context->streams[m_video_stream_index]->avg_frame_rate;
 avcodec_parameters_to_context(m_output_video_codec_context, m_output_video_stream->codecpar);
 m_output_video_codec_context->time_base = m_format_context->streams[m_video_stream_index]->time_base;
 
 //TODO(P1): Set these to match the input stream?
 m_output_video_codec_context->max_b_frames = 2;
 m_output_video_codec_context->gop_size = 12;
 m_output_video_codec_context->framerate = m_format_context->streams[m_video_stream_index]->r_frame_rate;
 //m_output_codec_context->refcounted_frames = 0;
 if (m_output_video_stream->codecpar->codec_id == AV_CODEC_ID_H264) {
 av_opt_set(m_output_video_codec_context, "preset", "ultrafast", 0);
 } else if (m_output_video_stream->codecpar->codec_id == AV_CODEC_ID_H265) {
 av_opt_set(m_output_video_codec_context, "preset", "ultrafast", 0);
 } else {
 av_opt_set_int(m_output_video_codec_context, "lossless", 1, 0);
 }
 avcodec_parameters_from_context(m_output_video_stream->codecpar, m_output_video_codec_context);

 m_output_audio_codec = avcodec_find_encoder(m_output_format->audio_codec);
 if (!m_output_audio_codec) {
 printf("Failed to create audio codec.\n");
 return false;
 }



I've commented out all of the audio stream init beyond this next line, because this is where
the trouble begins. Creating this output stream causes the null reference I mentioned. If I
uncomment everything below here, I still get the null deref. If I comment out this line, the
deref exception vanishes. (IOW, I commented out more and more code until I found that this
was the trigger that caused the problem.)


I assume that there's something I'm doing wrong in the rest of the commented-out code that,
when fixed, will fix the nullptr and give me a working audio stream.
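
For comparison, here is a minimal sketch of the audio bring-up pattern from ffmpeg's doc/examples/muxing.c. This is not my current code, and the fallback values (AV_SAMPLE_FMT_FLTP, AV_CH_LAYOUT_STEREO) are my own assumptions. The point of the pattern is that the encoder context is configured and opened first, and only then copied into the stream's codecpar, rather than writing codecpar fields by hand:

// Sketch only, adapted from ffmpeg's muxing example; error checks elided.
AVStream* ast = avformat_new_stream(m_output_format_context, m_output_audio_codec);
AVCodecContext* actx = avcodec_alloc_context3(m_output_audio_codec);
AVCodecParameters* in_par = m_format_context->streams[m_audio_stream_index]->codecpar;

// Pick a sample format the encoder supports; inherit rate/layout from the input.
actx->sample_fmt = m_output_audio_codec->sample_fmts ?
                   m_output_audio_codec->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
actx->sample_rate = in_par->sample_rate;
actx->channel_layout = in_par->channel_layout ? in_par->channel_layout
                                              : AV_CH_LAYOUT_STEREO;
actx->channels = av_get_channel_layout_nb_channels(actx->channel_layout);
actx->bit_rate = in_par->bit_rate;
actx->time_base = AVRational{ 1, actx->sample_rate };
if (m_output_format_context->oformat->flags & AVFMT_GLOBALHEADER)
    actx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

// Open the encoder BEFORE avformat_write_header, then publish its parameters.
avcodec_open2(actx, m_output_audio_codec, nullptr);
avcodec_parameters_from_context(ast->codecpar, actx);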


m_output_audio_stream = avformat_new_stream(m_output_format_context, m_output_audio_codec);
 if (!m_output_audio_stream) {
 printf("Failed to find audio format.\n");
 return false;
 }
 /*
 m_output_audio_codec_context = avcodec_alloc_context3(m_output_audio_codec);
 if (!m_output_audio_codec_context) {
 printf("Failed to create audio codec context.\n");
 return(false);
 }
 m_output_audio_stream->codecpar->codec_id = m_output_format->audio_codec;
 m_output_audio_stream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
 m_output_audio_stream->codecpar->format = m_format_context->streams[m_audio_stream_index]->codecpar->format;
 m_output_audio_stream->codecpar->bit_rate = m_format_context->streams[m_audio_stream_index]->codecpar->bit_rate;
 m_output_audio_stream->avg_frame_rate = m_format_context->streams[m_audio_stream_index]->avg_frame_rate;
 avcodec_parameters_to_context(m_output_audio_codec_context, m_output_audio_stream->codecpar);
 m_output_audio_codec_context->time_base = m_format_context->streams[m_audio_stream_index]->time_base;
 */

 //TODO(P2): Free assets that have been allocated.
 err = avcodec_open2(m_output_video_codec_context, m_output_video_codec, nullptr);
 if (err < 0) {
 printf("Failed to open codec.\n");
 return false;
 }

 if (!(m_output_format->flags & AVFMT_NOFILE)) {
 err = avio_open(&m_output_format_context->pb, video_file_name.c_str(), AVIO_FLAG_WRITE);
 if (err < 0) {
 printf("Failed to open output file.");
 return false;
 }
 }

 err = avformat_write_header(m_output_format_context, NULL);
 if (err < 0) {
 printf("Failed to write header.\n");
 return false;
 }

 av_dump_format(m_output_format_context, 0, video_file_name.c_str(), 1);

 return true;
}


//TODO(P2): make this a member. (Thanks to https://emvlo.wordpress.com/2016/03/10/sws_scale/)
void PrepareFlipFrameJ420(AVFrame* pFrame) {
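 // Flip the image vertically in place: point each plane at its last row and
 // negate the stride so it is walked bottom-up. Chroma planes (i > 0) are
 // half height for YUV420P.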
 for (int i = 0; i < 4; i++) {
 if (i)
 pFrame->data[i] += pFrame->linesize[i] * ((pFrame->height >> 1) - 1);
 else
 pFrame->data[i] += pFrame->linesize[i] * (pFrame->height - 1);
 pFrame->linesize[i] = -pFrame->linesize[i];
 }
}



This is where we take an altered frame and write it to the output container. This works fine
as long as we haven't set up an audio stream in the output container.


bool MediaContainerMgr::output_video_frame(uint8_t* buf) {
 int err;

 if (!m_output_video_frame) {
 m_output_video_frame = av_frame_alloc();
 m_output_video_frame->format = AV_PIX_FMT_YUV420P;
 m_output_video_frame->width = m_output_video_codec_context->width;
 m_output_video_frame->height = m_output_video_codec_context->height;
 err = av_frame_get_buffer(m_output_video_frame, 32);
 if (err < 0) {
 printf("Failed to allocate output frame.\n");
 return false;
 }
 }

 if (!m_output_scale_context) {
 m_output_scale_context = sws_getContext(m_output_video_codec_context->width, m_output_video_codec_context->height, 
 AV_PIX_FMT_RGB24,
 m_output_video_codec_context->width, m_output_video_codec_context->height, 
 AV_PIX_FMT_YUV420P, SWS_BICUBIC, nullptr, nullptr, nullptr);
 }

 int inLinesize[1] = { 3 * m_output_video_codec_context->width };
 sws_scale(m_output_scale_context, (const uint8_t* const*)&buf, inLinesize, 0, m_output_video_codec_context->height,
 m_output_video_frame->data, m_output_video_frame->linesize);
 PrepareFlipFrameJ420(m_output_video_frame);
 //TODO(P0): Switch m_frame to be m_input_video_frame so I don't end up using the presentation timestamp from
 // an audio frame if I threadify the frame reading.
 m_output_video_frame->pts = m_frame->pts;
 printf("Output PTS: %d, time_base: %d/%d\n", m_output_video_frame->pts,
 m_output_video_codec_context->time_base.num, m_output_video_codec_context->time_base.den);
 err = avcodec_send_frame(m_output_video_codec_context, m_output_video_frame);
 if (err < 0) {
 printf(" ERROR sending new video frame output: ");
 switch (err) {
 case AVERROR(EAGAIN):
 printf("AVERROR(EAGAIN): %d\n", err);
 break;
 case AVERROR_EOF:
 printf("AVERROR_EOF: %d\n", err);
 break;
 case AVERROR(EINVAL):
 printf("AVERROR(EINVAL): %d\n", err);
 break;
 case AVERROR(ENOMEM):
 printf("AVERROR(ENOMEM): %d\n", err);
 break;
 }

 return false;
 }

 AVPacket pkt;
 av_init_packet(&pkt);
 pkt.data = nullptr;
 pkt.size = 0;
 pkt.flags |= AV_PKT_FLAG_KEY;
 int ret = 0;
 if ((ret = avcodec_receive_packet(m_output_video_codec_context, &pkt)) == 0) {
 static int counter = 0;
 printf("pkt.key: 0x%08x, pkt.size: %d, counter:\n", pkt.flags & AV_PKT_FLAG_KEY, pkt.size, counter++);
 uint8_t* size = ((uint8_t*)pkt.data);
 printf("sizes: %d %d %d %d %d %d %d %d %d\n", size[0], size[1], size[2], size[2], size[3], size[4], size[5], size[6], size[7]);
 av_interleaved_write_frame(m_output_format_context, &pkt);
 }
 printf("push: %d\n", ret);
 av_packet_unref(&pkt);

 return true;
}

bool MediaContainerMgr::finalize_output() {
 if (!m_recording)
 return true;

 AVPacket pkt;
 av_init_packet(&pkt);
 pkt.data = nullptr;
 pkt.size = 0;

 for (;;) {
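 // Drain the encoder: a NULL frame puts it in flush mode; keep pulling
 // packets until avcodec_receive_packet stops returning 0 (usually AVERROR_EOF).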
 avcodec_send_frame(m_output_video_codec_context, nullptr);
 if (avcodec_receive_packet(m_output_video_codec_context, &pkt) == 0) {
 av_interleaved_write_frame(m_output_format_context, &pkt);
 printf("final push:\n");
 } else {
 break;
 }
 }

 av_packet_unref(&pkt);

 av_write_trailer(m_output_format_context);
 if (!(m_output_format->flags & AVFMT_NOFILE)) {
 int err = avio_close(m_output_format_context->pb);
 if (err < 0) {
 printf("Failed to close file. err: %d\n", err);
 return false;
 }
 }

 return true;
}



EDIT
The call stack on the crash (which I should have included in the original question):


avformat-58.dll!compute_muxer_pkt_fields(AVFormatContext * s, AVStream * st, AVPacket * pkt) Line 630 C
avformat-58.dll!write_packet_common(AVFormatContext * s, AVStream * st, AVPacket * pkt, int interleaved) Line 1122 C
avformat-58.dll!write_packets_common(AVFormatContext * s, AVPacket * pkt, int interleaved) Line 1186 C
avformat-58.dll!av_interleaved_write_frame(AVFormatContext * s, AVPacket * pkt) Line 1241 C
CamBot.exe!MediaContainerMgr::output_video_frame(unsigned char * buf) Line 553 C++
CamBot.exe!main() Line 240 C++



If I move the call to avformat_write_header so it's immediately before the audio stream initialization, I still get a crash, but in a different place. The crash happens on line 6459 of movenc.c, where we have:


/* Non-seekable output is ok if using fragmentation. If ism_lookahead
 * is enabled, we don't support non-seekable output at all. */
if (!(s->pb->seekable & AVIO_SEEKABLE_NORMAL) && // CRASH IS HERE
 (!(mov->flags & FF_MOV_FLAG_FRAGMENT) || mov->ism_lookahead)) {
 av_log(s, AV_LOG_ERROR, "muxer does not support non seekable output\n");
 return AVERROR(EINVAL);
}



The exception is a nullptr exception, where s->pb is NULL. The call stack is:


avformat-58.dll!mov_init(AVFormatContext * s) Line 6459 C
avformat-58.dll!init_muxer(AVFormatContext * s, AVDictionary * * options) Line 407 C
[Inline Frame] avformat-58.dll!avformat_init_output(AVFormatContext *) Line 489 C
avformat-58.dll!avformat_write_header(AVFormatContext * s, AVDictionary * * options) Line 512 C
CamBot.exe!MediaContainerMgr::init_video_output(const std::string & video_file_name, unsigned int width, unsigned int height) Line 424 C++
CamBot.exe!main() Line 183 C++
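

My guess at what this second crash means (an assumption on my part, not verified): moving avformat_write_header above the audio init also moved it above my avio_open call, and mov_init checks s->pb->seekable, so pb was still NULL at that point. The ordering the API seems to expect is roughly:

// Sketch of the assumed required ordering: open the IO context first,
// then write the header.
if (!(m_output_format->flags & AVFMT_NOFILE))
    avio_open(&m_output_format_context->pb, video_file_name.c_str(), AVIO_FLAG_WRITE);
avformat_write_header(m_output_format_context, nullptr);  // s->pb must be non-NULL here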



-
ffmpeg unable to initialize threading in some cases
16 October 2020, by Sudipta Roy

I am posting this again since the earlier question I posted has been closed.


I have a Java service running in WildFly which calls an external ffmpeg binary to convert .au files to .wav files. The actual command being executed is as follows:


ffmpeg -y -i INPUT.au OUTPUT.wav



It is running smoothly, except every once in a while it creates an empty .wav file because of the following error:


Error: ffmpeg version c6710aa Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
configuration: --prefix=/tmp/ffmpeg-static/target --pkg-config-flags=--static
 --extra-cflags=-I/tmp/ffmpeg-static/target/include
 --extra-ldflags=-L/tmp/ffmpeg-static/target/lib --extra-ldexeflags=-static
 --bindir=/tmp/ffmpeg-static/bin --enable-pic --enable-ffplay --enable-ffserver
 --enable-fontconfig --enable-frei0r --enable-gpl --enable-version3
 --enable-libass --enable-libfribidi --enable-libfdk-aac --enable-libfreetype
 --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb
 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsoxr
 --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc
 --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264
 --enable-libx265 --enable-libxvid --enable-libzimg --enable-nonfree --enable-openssl
libavutil 55. 34.101 / 55. 34.101
libavcodec 57. 64.101 / 57. 64.101
libavformat 57. 56.101 / 57. 56.101
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100

Input #0, ogg, from 'INPUT.au'
Duration: 00:00:34.08, start: 0.01500, bitrate: 15kb/s
Stream: #0.0: Audio: speex, 8000Hz, mono, s16, 15kb/s

[AVFilterGraph @ 0x43ec6e0] Error initializing threading.
[AVFilterGraph @ 0x43ec6e0] Error creating filter 'anull'



If I try to manually convert the file from the command line, it works. A brief internet search (this) shows that it might be due to the fact that ffmpeg is unable to create threads for internal use. Can anyone please elaborate?
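
A workaround I've been experimenting with (not verified as a fix) is capping the worker threads ffmpeg spawns. -threads is a standard option; -filter_threads exists in newer builds, so whether my static build honors it is an assumption:

ffmpeg -threads 1 -filter_threads 1 -y -i INPUT.au OUTPUT.wav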

The server where I am facing the problem has a relatively high load. I have seen that WildFly creates close to 1800 threads.


Thanks


P.S. I have managed to recreate the problem. Below is the code:


SystemCommandExecutor.java


import java.io.*;
import java.util.List;

public class SystemCommandExecutor {
 private List<String> commandInformation;
 private String adminPassword;
 private ThreadedStreamHandler inputStreamHandler;
 private ThreadedStreamHandler errorStreamHandler;

 public SystemCommandExecutor(final List<String> commandInformation)
 {
 if (commandInformation==null) throw new NullPointerException("The commandInformation is required.");
 this.commandInformation = commandInformation;
 this.adminPassword = null;
 }

 public int executeCommand()
 throws IOException, InterruptedException
 {
 int exitValue = -99;

 try
 {
 ProcessBuilder pb = new ProcessBuilder(commandInformation);
 Process process = pb.start();
 OutputStream stdOutput = process.getOutputStream();
 InputStream inputStream = process.getInputStream();
 InputStream errorStream = process.getErrorStream();
 inputStreamHandler = new ThreadedStreamHandler(inputStream, stdOutput, adminPassword);
 errorStreamHandler = new ThreadedStreamHandler(errorStream);
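 // Draining stdout and stderr on separate threads keeps the child
 // process from blocking once a pipe buffer fills up.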
 inputStreamHandler.start();
 errorStreamHandler.start();
 exitValue = process.waitFor();
 inputStreamHandler.interrupt();
 errorStreamHandler.interrupt();
 inputStreamHandler.join();
 errorStreamHandler.join();
 }
 catch (IOException e)
 {
 throw e;
 }
 catch (InterruptedException e)
 {
 throw e;
 }
 finally
 {
 return exitValue;
 }
 }

 public StringBuilder getStandardOutputFromCommand()
 {
 return inputStreamHandler.getOutputBuffer();
 }

 public StringBuilder getStandardErrorFromCommand()
 {
 return errorStreamHandler.getOutputBuffer();
 }
}


ThreadedStreamHandler.java


import java.io.*;

class ThreadedStreamHandler extends Thread
{
 InputStream inputStream;
 String adminPassword;
 OutputStream outputStream;
 PrintWriter printWriter;
 StringBuilder outputBuffer = new StringBuilder();
 private boolean sudoIsRequested = false;

 
 ThreadedStreamHandler(InputStream inputStream)
 {
 this.inputStream = inputStream;
 }

 
 ThreadedStreamHandler(InputStream inputStream, OutputStream outputStream, String adminPassword)
 {
 this.inputStream = inputStream;
 this.outputStream = outputStream;
 this.printWriter = new PrintWriter(outputStream);
 this.adminPassword = adminPassword;
 this.sudoIsRequested = true;
 }

 public void run()
 {
 
 if (sudoIsRequested)
 {
 printWriter.println(adminPassword);
 printWriter.flush();
 }

 BufferedReader bufferedReader = null;
 try
 {
 bufferedReader = new BufferedReader(new InputStreamReader(inputStream));
 String line = null;
 while ((line = bufferedReader.readLine()) != null)
 {
 outputBuffer.append(line + "\n");
 }
 }
 catch (IOException ioe)
 {
 ioe.printStackTrace();
 }
 catch (Throwable t)
 {
 t.printStackTrace();
 }
 finally
 {
 try
 {
 bufferedReader.close();
 }
 catch (IOException e)
 {
 // ignore this one
 }
 }
 }

 private void doSleep(long millis)
 {
 try
 {
 Thread.sleep(millis);
 }
 catch (InterruptedException e)
 {
 // ignore
 }
 }

 public StringBuilder getOutputBuffer()
 {
 return outputBuffer;
 }

}



FfmpegRunnable.java


import java.io.IOException;
import java.util.List;

public class FfmpegRunnable implements Runnable {
 private List<String> command;
 SystemCommandExecutor executor;

 public FfmpegRunnable(List<String> command) {
 this.command = command;
 this.executor = new SystemCommandExecutor(command);
 }

 @Override
 public void run() {
 try {
 int id = (int) Thread.currentThread().getId();
 int result = executor.executeCommand();
 if(result != 0) {
 StringBuilder err = executor.getStandardErrorFromCommand();
 System.out.println("[" + id + "]" + "[ERROR] " + err);
 } else {
 System.out.println("[" + id + "]" + "[SUCCESS]");
 }
 } catch (IOException e) {
 e.printStackTrace();
 } catch (InterruptedException e) {
 e.printStackTrace();
 }
 }
}


FfmpegMain.java


import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
public class FfmpegMain {
 public static void main(String[] args) {
 //boolean threading = false;
 System.out.println(args[0]);
 int nrThread = Integer.parseInt(args[0]);
 boolean threading = Boolean.parseBoolean(args[1]);
 System.out.println("nrThread : " + nrThread + ", threading : " + threading);
 if(threading) {
 System.out.println("ffmpeg threading enabled");
 } else {
 System.out.println("ffmpeg threading not enabled");
 }
 ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(nrThread);
 for (int i = 0; i < nrThread; i++) {
 List<String> cmd = new ArrayList<String>();
 String dest = "/tmp/OUTPUT/output_" + (Math.random()*1000) + ".wav";
 String cmdStr = "/tmp/FFMPEG/ffmpeg" + (threading ? " -threads 1 " : " ")
 + "-y -i /tmp/input.au " + dest;
 cmd.add("/bin/sh");
 cmd.add("-c");
 cmd.add(cmdStr);

 executor.submit(new FfmpegRunnable(cmd));
 }
 executor.shutdown();
 }
}


I have created a jar with the class files and run the jar from two separate terminals with the following command:


java -jar JAR.jar 40 true



Here 40 is the number of threads, simulating various users accessing the system. Every once in a while I get the above-mentioned error.