
-
Why does adding an audio stream to a libavcodec output container cause a crash?
19 March 2021, by Sniggerfardimungus

As it stands, my project correctly uses libavcodec to decode a video, where each frame is manipulated (it doesn't matter how) and output to a new video. I've cobbled this together from examples found online, and it works. The result is a perfect .mp4 of the manipulated frames, minus the audio.


My problem is, when I try to add an audio stream to the output container, I get a crash in mux.c that I can't explain. It's in static int compute_muxer_pkt_fields(AVFormatContext *s, AVStream *st, AVPacket *pkt). Where st->internal->priv_pts->val = pkt->dts; is attempted, priv_pts is nullptr.

I don't recall the version number, but this is from a November 4, 2020 ffmpeg build from git.


My MediaContainerMgr is much bigger than what I have here. I'm stripping out everything to do with the frame manipulation, so if I'm missing anything, please let me know and I'll edit.

The code that, when added, triggers the nullptr exception is called out inline.


The .h :


#ifndef _API_EXAMPLE_H
#define _API_EXAMPLE_H

#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include "glm/glm.hpp"

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>
}

#include "shader_s.h"

class MediaContainerMgr {
public:
 MediaContainerMgr(const std::string& infile, const std::string& vert, const std::string& frag, 
 const glm::vec3* extents);
 ~MediaContainerMgr();
 void render();
 bool recording() { return m_recording; }

 // Major thanks to "shi-yan" who helped make this possible:
 // https://github.com/shi-yan/videosamples/blob/master/libavmp4encoding/main.cpp
 bool init_video_output(const std::string& video_file_name, unsigned int width, unsigned int height);
 bool output_video_frame(uint8_t* buf);
 bool finalize_output();

private:
 AVFormatContext* m_format_context;
 AVCodec* m_video_codec;
 AVCodec* m_audio_codec;
 AVCodecParameters* m_video_codec_parameters;
 AVCodecParameters* m_audio_codec_parameters;
 AVCodecContext* m_codec_context;
 AVFrame* m_frame;
 AVPacket* m_packet;
 uint32_t m_video_stream_index;
 uint32_t m_audio_stream_index;
 unsigned int m_width;
 unsigned int m_height;
 GLuint m_VAO; // rendering state (most of the rendering code is stripped out here)
 GLuint m_VBO;
 
 void init_rendering(const glm::vec3* extents);
 int decode_packet();
 bool advance_frame();
 void advance_to(int64_t timestamp); // implementation omitted; seeks to the given position

 // For writing the output video:
 void free_output_assets();
 bool m_recording;
 AVOutputFormat* m_output_format;
 AVFormatContext* m_output_format_context;
 AVCodec* m_output_video_codec;
 AVCodecContext* m_output_video_codec_context;
 AVFrame* m_output_video_frame;
 SwsContext* m_output_scale_context;
 AVStream* m_output_video_stream;
 
 AVCodec* m_output_audio_codec;
 AVStream* m_output_audio_stream;
 AVCodecContext* m_output_audio_codec_context;
};

#endif



And, the hellish .cpp:


#include <cstdio>
#include <cstdint>
#include <cinttypes>
#include <string>

#include "media_container_manager.h"

MediaContainerMgr::MediaContainerMgr(const std::string& infile, const std::string& vert, const std::string& frag,
 const glm::vec3* extents) :
 m_video_stream_index(-1),
 m_audio_stream_index(-1),
 m_recording(false),
 m_output_format(nullptr),
 m_output_format_context(nullptr),
 m_output_video_codec(nullptr),
 m_output_video_codec_context(nullptr),
 m_output_video_frame(nullptr),
 m_output_scale_context(nullptr),
 m_output_video_stream(nullptr)
{
 // AVFormatContext holds header info from the format specified in the container:
 m_format_context = avformat_alloc_context();
 if (!m_format_context) {
 throw "ERROR could not allocate memory for Format Context";
 }
 
 // open the file and read its header. Codecs are not opened here.
 if (avformat_open_input(&m_format_context, infile.c_str(), NULL, NULL) != 0) {
 throw "ERROR could not open input file for reading";
 }

 printf("format %s, duration %lldus, bit_rate %lld\n", m_format_context->iformat->name, m_format_context->duration, m_format_context->bit_rate);
 //read avPackets (?) from the avFormat (?) to get stream info. This populates format_context->streams.
 if (avformat_find_stream_info(m_format_context, NULL) < 0) {
 throw "ERROR could not get stream info";
 }

 for (unsigned int i = 0; i < m_format_context->nb_streams; i++) {
 AVCodecParameters* local_codec_parameters = NULL;
 local_codec_parameters = m_format_context->streams[i]->codecpar;
 printf("AVStream->time base before open coded %d/%d\n", m_format_context->streams[i]->time_base.num, m_format_context->streams[i]->time_base.den);
 printf("AVStream->r_frame_rate before open coded %d/%d\n", m_format_context->streams[i]->r_frame_rate.num, m_format_context->streams[i]->r_frame_rate.den);
 printf("AVStream->start_time %" PRId64 "\n", m_format_context->streams[i]->start_time);
 printf("AVStream->duration %" PRId64 "\n", m_format_context->streams[i]->duration);
 printf("duration(s): %lf\n", (float)m_format_context->streams[i]->duration / m_format_context->streams[i]->time_base.den * m_format_context->streams[i]->time_base.num);
 AVCodec* local_codec = NULL;
 local_codec = avcodec_find_decoder(local_codec_parameters->codec_id);
 if (local_codec == NULL) {
 throw "ERROR unsupported codec!";
 }

 if (local_codec_parameters->codec_type == AVMEDIA_TYPE_VIDEO) {
 if (m_video_stream_index == -1) {
 m_video_stream_index = i;
 m_video_codec = local_codec;
 m_video_codec_parameters = local_codec_parameters;
 }
 m_height = local_codec_parameters->height;
 m_width = local_codec_parameters->width;
 printf("Video Codec: resolution %dx%d\n", m_width, m_height);
 }
 else if (local_codec_parameters->codec_type == AVMEDIA_TYPE_AUDIO) {
 if (m_audio_stream_index == -1) {
 m_audio_stream_index = i;
 m_audio_codec = local_codec;
 m_audio_codec_parameters = local_codec_parameters;
 }
 printf("Audio Codec: %d channels, sample rate %d\n", local_codec_parameters->channels, local_codec_parameters->sample_rate);
 }

 printf("\tCodec %s ID %d bit_rate %lld\n", local_codec->name, local_codec->id, local_codec_parameters->bit_rate);
 }

 m_codec_context = avcodec_alloc_context3(m_video_codec);
 if (!m_codec_context) {
 throw "ERROR failed to allocate memory for AVCodecContext";
 }

 if (avcodec_parameters_to_context(m_codec_context, m_video_codec_parameters) < 0) {
 throw "ERROR failed to copy codec params to codec context";
 }

 if (avcodec_open2(m_codec_context, m_video_codec, NULL) < 0) {
 throw "ERROR avcodec_open2 failed to open codec";
 }

 m_frame = av_frame_alloc();
 if (!m_frame) {
 throw "ERROR failed to allocate AVFrame memory";
 }

 m_packet = av_packet_alloc();
 if (!m_packet) {
 throw "ERROR failed to allocate AVPacket memory";
 }
}

MediaContainerMgr::~MediaContainerMgr() {
 avformat_close_input(&m_format_context);
 av_packet_free(&m_packet);
 av_frame_free(&m_frame);
 avcodec_free_context(&m_codec_context);


 glDeleteVertexArrays(1, &m_VAO);
 glDeleteBuffers(1, &m_VBO);
}


bool MediaContainerMgr::advance_frame() {
 while (true) {
 if (av_read_frame(m_format_context, m_packet) < 0) {
 // Do we actually need to unref the packet if it failed?
 av_packet_unref(m_packet);
 continue;
 //return false;
 }
 else {
 if (m_packet->stream_index == m_video_stream_index) {
 //printf("AVPacket->pts %" PRId64 "\n", m_packet->pts);
 int response = decode_packet();
 av_packet_unref(m_packet);
 if (response != 0) {
 continue;
 //return false;
 }
 return true;
 }
 else {
 printf("m_packet->stream_index: %d\n", m_packet->stream_index);
 printf(" m_packet->pts: %lld\n", m_packet->pts);
 printf(" mpacket->size: %d\n", m_packet->size);
 if (m_recording) {
 int err = 0;
 //err = avcodec_send_packet(m_output_video_codec_context, m_packet);
 printf(" encoding error: %d\n", err);
 }
 }
 }

 // We're done with the packet (it's been unpacked to a frame), so deallocate & reset to defaults:
/*
 if (m_frame == NULL)
 return false;

 if (m_frame->data[0] == NULL || m_frame->data[1] == NULL || m_frame->data[2] == NULL) {
 printf("WARNING: null frame data");
 continue;
 }
*/
 }
}

int MediaContainerMgr::decode_packet() {
 // Supply raw packet data as input to a decoder
 // https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga58bc4bf1e0ac59e27362597e467efff3
 int response = avcodec_send_packet(m_codec_context, m_packet);

 if (response < 0) {
 char buf[256];
 av_strerror(response, buf, 256);
 printf("Error while sending a packet to the decoder: %s\n", buf);
 return response;
 }

 // Return decoded output data (into a frame) from a decoder
 // https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga11e6542c4e66d3028668788a1a74217c
 response = avcodec_receive_frame(m_codec_context, m_frame);
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
 return response;
 } else if (response < 0) {
 char buf[256];
 av_strerror(response, buf, 256);
 printf("Error while receiving a frame from the decoder: %s\n", buf);
 return response;
 } else {
 printf(
 "Frame %d (type=%c, size=%d bytes) pts %lld key_frame %d coded_picture_number %d\n",
 m_codec_context->frame_number,
 av_get_picture_type_char(m_frame->pict_type),
 m_frame->pkt_size,
 m_frame->pts,
 m_frame->key_frame,
 m_frame->coded_picture_number
 );
 }
 return 0;
}


bool MediaContainerMgr::init_video_output(const std::string& video_file_name, unsigned int width, unsigned int height) {
 if (m_recording)
 return true;
 m_recording = true;

 advance_to(0L); // I've deleted the implementation. Just seeks to beginning of vid. Works fine.

 if (!(m_output_format = av_guess_format(nullptr, video_file_name.c_str(), nullptr))) {
 printf("Cannot guess output format.\n");
 return false;
 }

 int err = avformat_alloc_output_context2(&m_output_format_context, m_output_format, nullptr, video_file_name.c_str());
 if (err < 0) {
 printf("Failed to allocate output context.\n");
 return false;
 }

 //TODO(P0): Break out the video and audio inits into their own methods.
 m_output_video_codec = avcodec_find_encoder(m_output_format->video_codec);
 if (!m_output_video_codec) {
 printf("Failed to create video codec.\n");
 return false;
 }
 m_output_video_stream = avformat_new_stream(m_output_format_context, m_output_video_codec);
 if (!m_output_video_stream) {
 printf("Failed to find video format.\n");
 return false;
 } 
 m_output_video_codec_context = avcodec_alloc_context3(m_output_video_codec);
 if (!m_output_video_codec_context) {
 printf("Failed to create video codec context.\n");
 return(false);
 }
 m_output_video_stream->codecpar->codec_id = m_output_format->video_codec;
 m_output_video_stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 m_output_video_stream->codecpar->width = width;
 m_output_video_stream->codecpar->height = height;
 m_output_video_stream->codecpar->format = AV_PIX_FMT_YUV420P;
 // Use the same bit rate as the input stream.
 m_output_video_stream->codecpar->bit_rate = m_format_context->streams[m_video_stream_index]->codecpar->bit_rate;
 m_output_video_stream->avg_frame_rate = m_format_context->streams[m_video_stream_index]->avg_frame_rate;
 avcodec_parameters_to_context(m_output_video_codec_context, m_output_video_stream->codecpar);
 m_output_video_codec_context->time_base = m_format_context->streams[m_video_stream_index]->time_base;
 
 //TODO(P1): Set these to match the input stream?
 m_output_video_codec_context->max_b_frames = 2;
 m_output_video_codec_context->gop_size = 12;
 m_output_video_codec_context->framerate = m_format_context->streams[m_video_stream_index]->r_frame_rate;
 //m_output_codec_context->refcounted_frames = 0;
 if (m_output_video_stream->codecpar->codec_id == AV_CODEC_ID_H264) {
 av_opt_set(m_output_video_codec_context, "preset", "ultrafast", 0);
 } else if (m_output_video_stream->codecpar->codec_id == AV_CODEC_ID_H265) {
 av_opt_set(m_output_video_codec_context, "preset", "ultrafast", 0);
 } else {
 av_opt_set_int(m_output_video_codec_context, "lossless", 1, 0);
 }
 avcodec_parameters_from_context(m_output_video_stream->codecpar, m_output_video_codec_context);

 m_output_audio_codec = avcodec_find_encoder(m_output_format->audio_codec);
 if (!m_output_audio_codec) {
 printf("Failed to create audio codec.\n");
 return false;
 }



I've commented out all of the audio stream init beyond this next line, because this is where the trouble begins. Creating this output stream causes the null reference I mentioned. If I uncomment everything below here, I still get the null deref. If I comment out this line, the deref exception vanishes. (In other words, I commented out more and more code until I found that this was the trigger that caused the problem.)

I assume that there's something I'm doing wrong in the rest of the commented-out code that, when fixed, will fix the nullptr and give me a working audio stream.


m_output_audio_stream = avformat_new_stream(m_output_format_context, m_output_audio_codec);
 if (!m_output_audio_stream) {
 printf("Failed to find audio format.\n");
 return false;
 }
 /*
 m_output_audio_codec_context = avcodec_alloc_context3(m_output_audio_codec);
 if (!m_output_audio_codec_context) {
 printf("Failed to create audio codec context.\n");
 return(false);
 }
 m_output_audio_stream->codecpar->codec_id = m_output_format->audio_codec;
 m_output_audio_stream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
 m_output_audio_stream->codecpar->format = m_format_context->streams[m_audio_stream_index]->codecpar->format;
 m_output_audio_stream->codecpar->bit_rate = m_format_context->streams[m_audio_stream_index]->codecpar->bit_rate;
 m_output_audio_stream->avg_frame_rate = m_format_context->streams[m_audio_stream_index]->avg_frame_rate;
 avcodec_parameters_to_context(m_output_audio_codec_context, m_output_audio_stream->codecpar);
 m_output_audio_codec_context->time_base = m_format_context->streams[m_audio_stream_index]->time_base;
 */

 //TODO(P2): Free assets that have been allocated.
 err = avcodec_open2(m_output_video_codec_context, m_output_video_codec, nullptr);
 if (err < 0) {
 printf("Failed to open codec.\n");
 return false;
 }

 if (!(m_output_format->flags & AVFMT_NOFILE)) {
 err = avio_open(&m_output_format_context->pb, video_file_name.c_str(), AVIO_FLAG_WRITE);
 if (err < 0) {
 printf("Failed to open output file.\n");
 return false;
 }
 }

 err = avformat_write_header(m_output_format_context, NULL);
 if (err < 0) {
 printf("Failed to write header.\n");
 return false;
 }

 av_dump_format(m_output_format_context, 0, video_file_name.c_str(), 1);

 return true;
}


//TODO(P2): make this a member. (Thanks to https://emvlo.wordpress.com/2016/03/10/sws_scale/)
void PrepareFlipFrameJ420(AVFrame* pFrame) {
 for (int i = 0; i < 4; i++) {
 if (i)
 pFrame->data[i] += pFrame->linesize[i] * ((pFrame->height >> 1) - 1);
 else
 pFrame->data[i] += pFrame->linesize[i] * (pFrame->height - 1);
 pFrame->linesize[i] = -pFrame->linesize[i];
 }
}



This is where we take an altered frame and write it to the output container. This works fine
as long as we haven't set up an audio stream in the output container.


bool MediaContainerMgr::output_video_frame(uint8_t* buf) {
 int err;

 if (!m_output_video_frame) {
 m_output_video_frame = av_frame_alloc();
 m_output_video_frame->format = AV_PIX_FMT_YUV420P;
 m_output_video_frame->width = m_output_video_codec_context->width;
 m_output_video_frame->height = m_output_video_codec_context->height;
 err = av_frame_get_buffer(m_output_video_frame, 32);
 if (err < 0) {
 printf("Failed to allocate output frame.\n");
 return false;
 }
 }

 if (!m_output_scale_context) {
 m_output_scale_context = sws_getContext(m_output_video_codec_context->width, m_output_video_codec_context->height, 
 AV_PIX_FMT_RGB24,
 m_output_video_codec_context->width, m_output_video_codec_context->height, 
 AV_PIX_FMT_YUV420P, SWS_BICUBIC, nullptr, nullptr, nullptr);
 }

 int inLinesize[1] = { 3 * m_output_video_codec_context->width };
 sws_scale(m_output_scale_context, (const uint8_t* const*)&buf, inLinesize, 0, m_output_video_codec_context->height,
 m_output_video_frame->data, m_output_video_frame->linesize);
 PrepareFlipFrameJ420(m_output_video_frame);
 //TODO(P0): Switch m_frame to be m_input_video_frame so I don't end up using the presentation timestamp from
 // an audio frame if I threadify the frame reading.
 m_output_video_frame->pts = m_frame->pts;
 printf("Output PTS: %lld, time_base: %d/%d\n", m_output_video_frame->pts,
 m_output_video_codec_context->time_base.num, m_output_video_codec_context->time_base.den);
 err = avcodec_send_frame(m_output_video_codec_context, m_output_video_frame);
 if (err < 0) {
 printf(" ERROR sending new video frame output: ");
 switch (err) {
 case AVERROR(EAGAIN):
 printf("AVERROR(EAGAIN): %d\n", err);
 break;
 case AVERROR_EOF:
 printf("AVERROR_EOF: %d\n", err);
 break;
 case AVERROR(EINVAL):
 printf("AVERROR(EINVAL): %d\n", err);
 break;
 case AVERROR(ENOMEM):
 printf("AVERROR(ENOMEM): %d\n", err);
 break;
 }

 return false;
 }

 AVPacket pkt;
 av_init_packet(&pkt);
 pkt.data = nullptr;
 pkt.size = 0;
 pkt.flags |= AV_PKT_FLAG_KEY;
 int ret = 0;
 if ((ret = avcodec_receive_packet(m_output_video_codec_context, &pkt)) == 0) {
 static int counter = 0;
 printf("pkt.key: 0x%08x, pkt.size: %d, counter: %d\n", pkt.flags & AV_PKT_FLAG_KEY, pkt.size, counter++);
 uint8_t* bytes = (uint8_t*)pkt.data;
 printf("first bytes: %d %d %d %d %d %d %d %d\n", bytes[0], bytes[1], bytes[2], bytes[3], bytes[4], bytes[5], bytes[6], bytes[7]);
 av_interleaved_write_frame(m_output_format_context, &pkt);
 }
 printf("push: %d\n", ret);
 av_packet_unref(&pkt);

 return true;
}

bool MediaContainerMgr::finalize_output() {
 if (!m_recording)
 return true;

 AVPacket pkt;
 av_init_packet(&pkt);
 pkt.data = nullptr;
 pkt.size = 0;

 for (;;) {
 avcodec_send_frame(m_output_video_codec_context, nullptr);
 if (avcodec_receive_packet(m_output_video_codec_context, &pkt) == 0) {
 av_interleaved_write_frame(m_output_format_context, &pkt);
 printf("final push:\n");
 } else {
 break;
 }
 }

 av_packet_unref(&pkt);

 av_write_trailer(m_output_format_context);
 if (!(m_output_format->flags & AVFMT_NOFILE)) {
 int err = avio_close(m_output_format_context->pb);
 if (err < 0) {
 printf("Failed to close file. err: %d\n", err);
 return false;
 }
 }

 return true;
}



-
ffmpeg, download video stream from url
5 July 2021, by samiullah

I am developing an application to save any online webinar (video or audio) given its url, using the ffmpeg library. The url usually contains other content like text and images as well. So my problem is how to separately get the video stream from a url using ffmpeg (or some other better free library). The url may be for any site, not only YouTube, but as an example the link may be like


http://www.youtube.com/watch?v=wnrJJYm7qIw
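
For a direct media URL, the ffmpeg command-line tool can already select and save a single stream with -map. A sketch with a placeholder URL (note the YouTube link above is an HTML page, not a media stream, so it would first need a resolver such as youtube-dl/yt-dlp to obtain the actual stream URL):

```shell
# Keep only the first video stream (-map 0:v:0) and copy it without
# re-encoding (-c copy). The URL here is a placeholder, not a real endpoint.
ffmpeg -i "https://example.com/webinar/stream.m3u8" -map 0:v:0 -c copy video_only.mp4

# Likewise for audio only:
ffmpeg -i "https://example.com/webinar/stream.m3u8" -map 0:a:0 -c copy audio_only.m4a
```

The same stream selection is available through the libavformat API (av_read_frame plus filtering on stream_index), as the first question on this page does.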


-
Unable to play video recorded from OpenAI Environment
10 March 2021, by ipro_ultra

I am developing some Reinforcement Learning algorithms in Python and I am trying to record the environment into a video, but I am unable to play the file once I have recorded it, and it is only 1KB in size. I have tried an example I found online and this didn't work either, so I assume it's a system issue rather than a problem with my code. I have enabled debugging by running
gym.logger.set_level(gym.logger.DEBUG)
but this doesn't seem to throw anything useful back. Below is my code:

def replay_episode(self):
 self._env.reset()
 video_recorder = VideoRecorder(self._env, "{}/episode{}.mp4".format(self._output_directory, episode_number), enabled=True)
 for action_taken in self._current_episode_actions:
 self._env.render()
 video_recorder.capture_frame()
 self._env.step(action_taken)
 video_recorder.close()

for i in range(solution.get_episode_count()):
 # AI Stuff
 episode_number = i + 1
 solution.intialise_episode(episode_number)
 profiler.enable()
 while not solution.current_episode_done():
 solution.run_tick()
 solution.train_model()
 profiler.disable()
 export_profiling_results(profiler, '{}/episode{}.csv'.format(solution._output_directory, episode_number))


 # This is the bit we're interested in
 if solution.current_episode_successful() or episode_number % 50 == 0:
 solution.replay_episode()

 done_time = time.time()
 print("Episode {} Completed in {}s".format(episode_number, done_time-start_time))
 start_time = done_time

solution.close()



Here is the log I get from the environment when the
replay_episode()
method is called

DEBUG: Capturing video frame: path=data/1003083349/episode50.mp4
DEBUG: Starting ffmpeg with "ffmpeg -nostats -loglevel error -y -f rawvideo -s:v 600x400 -pix_fmt rgb24 -framerate 30 -i - -vf scale=trunc(iw/2)*2:trunc(ih/2)*2 -vcodec libx264 -pix_fmt yuv420p 
-r 30 data/1003083349/episode50.mp4"
DEBUG: Capturing video frame: path=data/1003083349/episode50.mp4
[the "Capturing video frame" line repeats once for every captured frame]
DEBUG: Closing video encoder: path=data/1003083349/episode50.mp4



When I attempt to play the video I get the error:
This file isn't playable. That might be because the file type is unsupported, the file extension is incorrect, or the file is corrupt. 0xc10100be


Thank you in advance for any responses
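
A 1KB .mp4 usually means the ffmpeg subprocess that VideoRecorder spawns wrote little beyond a header, often without the moov index that players need in order to open the file. One way to narrow this down is to list which top-level MP4 boxes actually made it into the file: a playable file needs at least ftyp, moov (the index) and mdat (the samples). A small standard-library-only sketch (the sample bytes below are synthetic):

```python
import struct

def mp4_top_level_boxes(data: bytes):
    """List the top-level box (atom) types in an MP4 byte string."""
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        size, = struct.unpack(">I", data[offset:offset + 4])
        box_type = data[offset + 4:offset + 8].decode("ascii", errors="replace")
        if size < 8:  # size 0/1 means "to EOF" / 64-bit length; stop for simplicity
            break
        boxes.append(box_type)
        offset += size
    return boxes

# Synthetic example: a well-formed file exposes ftyp, moov and mdat.
sample = (
    struct.pack(">I", 16) + b"ftyp" + b"isom" + struct.pack(">I", 512)
    + struct.pack(">I", 8) + b"moov"
    + struct.pack(">I", 8) + b"mdat"
)
print(mp4_top_level_boxes(sample))  # ['ftyp', 'moov', 'mdat']
```

Running this over the real episode50.mp4 bytes (open(path, "rb").read()) shows whether moov is missing, which would explain the "file is corrupt" 0xc10100be error.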