
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (47)
-
Multilang: improving the interface for multilingual blocks
18 February 2011, by
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
After it is enabled, MediaSPIP init automatically sets up a preconfiguration so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.
-
Authorizations overridden by plugins
27 April 2010, by
MediaSPIP core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (9023)
-
Why do I get an EXC_BAD_ACCESS error while running a simple sws_scale function? [closed]
3 December 2022, by Vish
All I'm trying to do is decode a video using ffmpeg and convert the format from YUV to RGBA using sws_scale. The code is as follows:




#include "video_reader.hpp"
#include 
#include <iostream>

using namespace std;

// av_err2str returns a temporary array. This doesn't work in gcc.
// This function can be used as a replacement for av_err2str.
static const char* av_make_error(int errnum) {
 static char str[AV_ERROR_MAX_STRING_SIZE];
 memset(str, 0, sizeof(str));
 return av_make_error_string(str, AV_ERROR_MAX_STRING_SIZE, errnum);
}

static AVPixelFormat correct_for_deprecated_pixel_format(AVPixelFormat pix_fmt) {
    // Fix swscaler deprecated pixel format warning
    // (YUVJ has been deprecated, change pixel format to regular YUV)
    switch (pix_fmt) {
        case AV_PIX_FMT_YUVJ420P: return AV_PIX_FMT_YUV420P;
        case AV_PIX_FMT_YUVJ422P: return AV_PIX_FMT_YUV422P;
        case AV_PIX_FMT_YUVJ444P: return AV_PIX_FMT_YUV444P;
        case AV_PIX_FMT_YUVJ440P: return AV_PIX_FMT_YUV440P;
        default: return pix_fmt;
    }
}

bool video_reader_open(VideoReaderState* state, const char* filename) {

    // Unpack members of state
    auto& width = state->width;
    auto& height = state->height;
    auto& time_base = state->time_base;
    auto& av_format_ctx = state->av_format_ctx;
    auto& av_codec_ctx = state->av_codec_ctx;
    auto& video_stream_index = state->video_stream_index;
    auto& av_frame = state->av_frame;
    auto& av_packet = state->av_packet;

    // Open the file using libavformat
    av_format_ctx = avformat_alloc_context();
    if (!av_format_ctx) {
        printf("Couldn't create AVFormatContext\n");
        return false;
    }

    if (avformat_open_input(&av_format_ctx, filename, NULL, NULL) != 0) {
        printf("Couldn't open video file\n");
        return false;
    }

    // Find the first valid video stream inside the file
    video_stream_index = -1;
    AVCodecParameters* av_codec_params;
    AVCodec* av_codec;
    for (int i = 0; i < av_format_ctx->nb_streams; ++i) {
        av_codec_params = av_format_ctx->streams[i]->codecpar;
        if (!avcodec_find_decoder(av_codec_params->codec_id)) {
            continue;
        }
        if (av_codec_params->codec_type == AVMEDIA_TYPE_VIDEO) {
            video_stream_index = i;
            width = av_codec_params->width;
            height = av_codec_params->height;
            time_base = av_format_ctx->streams[i]->time_base;
            break;
        }
    }
    if (video_stream_index == -1) {
        printf("Couldn't find valid video stream inside file\n");
        return false;
    }

    // Set up a codec context for the decoder
    av_codec_ctx = avcodec_alloc_context3(avcodec_find_decoder(av_codec_params->codec_id));
    if (!av_codec_ctx) {
        printf("Couldn't create AVCodecContext\n");
        return false;
    }
    if (avcodec_parameters_to_context(av_codec_ctx, av_codec_params) < 0) {
        printf("Couldn't initialize AVCodecContext\n");
        return false;
    }
    if (avcodec_open2(av_codec_ctx, avcodec_find_decoder(av_codec_params->codec_id), NULL) < 0) {
        printf("Couldn't open codec\n");
        return false;
    }

    av_frame = av_frame_alloc();
    if (!av_frame) {
        printf("Couldn't allocate AVFrame\n");
        return false;
    }
    av_packet = av_packet_alloc();
    if (!av_packet) {
        printf("Couldn't allocate AVPacket\n");
        return false;
    }

    return true;
}

bool video_reader_read_frame(VideoReaderState* state, uint8_t* frame_buffer, int64_t* pts) {

    // Unpack members of state
    auto& width = state->width;
    auto& height = state->height;
    auto& av_format_ctx = state->av_format_ctx;
    auto& av_codec_ctx = state->av_codec_ctx;
    auto& video_stream_index = state->video_stream_index;
    auto& av_frame = state->av_frame;
    auto& av_packet = state->av_packet;
    auto& sws_scaler_ctx = state->sws_scaler_ctx;

    // Decode one frame
    int response;
    while (av_read_frame(av_format_ctx, av_packet) >= 0) {
        if (av_packet->stream_index != video_stream_index) {
            av_packet_unref(av_packet);
            continue;
        }

        response = avcodec_send_packet(av_codec_ctx, av_packet);
        if (response < 0) {
            printf("Failed to decode packet: %s\n", av_make_error(response));
            return false;
        }

        response = avcodec_receive_frame(av_codec_ctx, av_frame);
        if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
            av_packet_unref(av_packet);
            continue;
        } else if (response < 0) {
            printf("Failed to decode packet: %s\n", av_make_error(response));
            return false;
        }

        av_packet_unref(av_packet);
        break;
    }

    *pts = av_frame->pts;

    // Set up sws scaler
    if (!sws_scaler_ctx) {
        auto source_pix_fmt = correct_for_deprecated_pixel_format(av_codec_ctx->pix_fmt);
        sws_scaler_ctx = sws_getContext(width, height, AV_PIX_FMT_YUV420P,
                                        width, height, AV_PIX_FMT_RGB0,
                                        SWS_FAST_BILINEAR, NULL, NULL, NULL);
    }
    if (!sws_scaler_ctx) {
        printf("Couldn't initialize sw scaler\n");
        return false;
    }

    cout << av_codec_ctx->pix_fmt << endl;
    uint8_t* dest[4] = { frame_buffer, NULL, NULL, NULL };
    int dest_linesize[4] = { width * 4, 0, 0, 0 };
    sws_scale(sws_scaler_ctx, av_frame->data, av_frame->linesize, 0, av_frame->height, dest, dest_linesize);

    return true;
}

bool video_reader_seek_frame(VideoReaderState* state, int64_t ts) {

    // Unpack members of state
    auto& av_format_ctx = state->av_format_ctx;
    auto& av_codec_ctx = state->av_codec_ctx;
    auto& video_stream_index = state->video_stream_index;
    auto& av_packet = state->av_packet;
    auto& av_frame = state->av_frame;

    av_seek_frame(av_format_ctx, video_stream_index, ts, AVSEEK_FLAG_BACKWARD);

    // av_seek_frame takes effect after one frame, so I'm decoding one here
    // so that the next call to video_reader_read_frame() will give the correct
    // frame
    int response;
    while (av_read_frame(av_format_ctx, av_packet) >= 0) {
        if (av_packet->stream_index != video_stream_index) {
            av_packet_unref(av_packet);
            continue;
        }

        response = avcodec_send_packet(av_codec_ctx, av_packet);
        if (response < 0) {
            printf("Failed to decode packet: %s\n", av_make_error(response));
            return false;
        }

        response = avcodec_receive_frame(av_codec_ctx, av_frame);
        if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
            av_packet_unref(av_packet);
            continue;
        } else if (response < 0) {
            printf("Failed to decode packet: %s\n", av_make_error(response));
            return false;
        }

        av_packet_unref(av_packet);
        break;
    }

    return true;
}

void video_reader_close(VideoReaderState* state) {
    sws_freeContext(state->sws_scaler_ctx);
    avformat_close_input(&state->av_format_ctx);
    avformat_free_context(state->av_format_ctx);
    av_frame_free(&state->av_frame);
    av_packet_free(&state->av_packet);
    avcodec_free_context(&state->av_codec_ctx);
}



This gives me the following error at the sws_scale step during debugging. Could you please let me know what I could be doing wrong?


EXC_BAD_ACCESS (code=1, address=0x910043e491224463)


I was expecting to get the RGBA array as shown in various tutorials.
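For comparison, here is a minimal sketch of the same scaler setup, reusing the variables from video_reader_read_frame above. It feeds the decoder's detected pixel format into sws_getContext (the posted code computes source_pix_fmt but then hard-codes AV_PIX_FMT_YUV420P as the source format) and assumes frame_buffer was allocated with at least width * height * 4 bytes for the 4-bytes-per-pixel RGB0 output; sws_getContext and sws_scale are the standard FFmpeg libswscale calls.

// Minimal sketch (not the original code): pass the decoder's actual pixel
// format as the scaler source and size the destination for RGB0 (4 bytes/px).
// Assumes frame_buffer points to at least width * height * 4 bytes.
AVPixelFormat source_pix_fmt = correct_for_deprecated_pixel_format(av_codec_ctx->pix_fmt);
SwsContext* scaler = sws_getContext(width, height, source_pix_fmt,
                                    width, height, AV_PIX_FMT_RGB0,
                                    SWS_FAST_BILINEAR, NULL, NULL, NULL);
if (scaler) {
    uint8_t* dest[4] = { frame_buffer, NULL, NULL, NULL };
    int dest_linesize[4] = { width * 4, 0, 0, 0 };
    sws_scale(scaler, av_frame->data, av_frame->linesize, 0,
              av_frame->height, dest, dest_linesize);
    sws_freeContext(scaler);
}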


-
Media Source Extensions - Identifying when there is no more data
8 October 2015, by galbarm
I'm creating a fragmented MP4 of real-time live content at a constant 10 FPS, but occasionally a frame gets dropped before being fed to the MP4 creation process.
The MP4 is transmitted to the web through a web socket. Because of the occasional frame drops, the effective playback speed of the file is slightly greater than 1x, since the player plays at 10 FPS.
Since this is live content, after some time the player catches up to the present and has no data left to play. Now, to the MSE issue:
What seems to happen in Chrome when the player doesn't have enough data to continue playing is that it pauses for 1-2 seconds, then plays very fast, and then this repeats. At that point the user experience becomes very bad.
The issue was discussed here:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28379
My idea for working around this is to identify the state (having no more data), change the playback rate to 0.9 for a few seconds to allow some buffering, and then switch back to 1.0.
The problem is that I couldn't find a way to identify the state.
The readyState of the media element always seems to have the value "HAVE_ENOUGH_DATA", even when the issue starts. Does the MSE API expose a way to identify the state I have described?
-
Core: Focus invalid element when validating a custom set of inputs
16 December 2014, by Maks3w
Core: Focus invalid element when validating a custom set of inputs
The invalid element is not focused when validating a custom set of inputs and the last one is a valid input. The issue is that the element method, via the prepareElement method, via the reset method, resets the errorList state after validating each input, so the global state of the validator is not preserved.
Closes #1327