
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
Other articles (111)
-
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is activated, a preconfiguration is set up automatically by MediaSPIP init so that the new feature is operational right away; no separate configuration step is required.
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
APPENDIX: plugins used specifically for the farm
5 March 2010
The central/master site of the farm needs several additional plugins, beyond those used on the channel sites, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, which handles registrations and requests to create a pooled instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); and the champs extras v2 plugin, required by inscription3 (...)
On other sites (10208)
-
Why do I get an EXC_BAD_ACCESS error while running a simple sws_scale function? [closed]
3 December 2022, by Vish
All I'm trying to do is decode a video using ffmpeg and convert the format from YUV to RGBA using sws_scale. The code is as follows:




#include "video_reader.hpp"
#include 
#include <iostream>

using namespace std;

// av_err2str returns a temporary array. This doesn't work in gcc.
// This function can be used as a replacement for av_err2str.
static const char* av_make_error(int errnum) {
 static char str[AV_ERROR_MAX_STRING_SIZE];
 memset(str, 0, sizeof(str));
 return av_make_error_string(str, AV_ERROR_MAX_STRING_SIZE, errnum);
}

static AVPixelFormat correct_for_deprecated_pixel_format(AVPixelFormat pix_fmt) {
 // Fix swscaler deprecated pixel format warning
 // (YUVJ has been deprecated, change pixel format to regular YUV)
 switch (pix_fmt) {
 case AV_PIX_FMT_YUVJ420P: return AV_PIX_FMT_YUV420P;
 case AV_PIX_FMT_YUVJ422P: return AV_PIX_FMT_YUV422P;
 case AV_PIX_FMT_YUVJ444P: return AV_PIX_FMT_YUV444P;
 case AV_PIX_FMT_YUVJ440P: return AV_PIX_FMT_YUV440P;
 default: return pix_fmt;
 }
}

bool video_reader_open(VideoReaderState* state, const char* filename) {

 // Unpack members of state
 auto& width = state->width;
 auto& height = state->height;
 auto& time_base = state->time_base;
 auto& av_format_ctx = state->av_format_ctx;
 auto& av_codec_ctx = state->av_codec_ctx;
 auto& video_stream_index = state->video_stream_index;
 auto& av_frame = state->av_frame;
 auto& av_packet = state->av_packet;

 // Open the file using libavformat
 av_format_ctx = avformat_alloc_context();
 if (!av_format_ctx) {
 printf("Couldn't created AVFormatContext\n");
 return false;
 }

 if (avformat_open_input(&av_format_ctx, filename, NULL, NULL) != 0) {
 printf("Couldn't open video file\n");
 return false;
 }

 // Find the first valid video stream inside the file
 video_stream_index = -1;
 AVCodecParameters* av_codec_params;
 AVCodec* av_codec;
 for (int i = 0; i < av_format_ctx->nb_streams; ++i) {
 av_codec_params = av_format_ctx->streams[i]->codecpar;
 if (!avcodec_find_decoder(av_codec_params->codec_id)) {
 continue;
 }
 if (av_codec_params->codec_type == AVMEDIA_TYPE_VIDEO) {
 video_stream_index = i;
 width = av_codec_params->width;
 height = av_codec_params->height;
 time_base = av_format_ctx->streams[i]->time_base;
 break;
 }
 }
 if (video_stream_index == -1) {
 printf("Couldn't find valid video stream inside file\n");
 return false;
 }

 // Set up a codec context for the decoder
 av_codec_ctx = avcodec_alloc_context3(avcodec_find_decoder(av_codec_params->codec_id));
 if (!av_codec_ctx) {
 printf("Couldn't create AVCodecContext\n");
 return false;
 }
 if (avcodec_parameters_to_context(av_codec_ctx, av_codec_params) < 0) {
 printf("Couldn't initialize AVCodecContext\n");
 return false;
 }
 if (avcodec_open2(av_codec_ctx, avcodec_find_decoder(av_codec_params->codec_id), NULL) < 0) {
 printf("Couldn't open codec\n");
 return false;
 }

 av_frame = av_frame_alloc();
 if (!av_frame) {
 printf("Couldn't allocate AVFrame\n");
 return false;
 }
 av_packet = av_packet_alloc();
 if (!av_packet) {
 printf("Couldn't allocate AVPacket\n");
 return false;
 }

 return true;
}

bool video_reader_read_frame(VideoReaderState* state, uint8_t* frame_buffer, int64_t* pts) {

 // Unpack members of state
 auto& width = state->width;
 auto& height = state->height;
 auto& av_format_ctx = state->av_format_ctx;
 auto& av_codec_ctx = state->av_codec_ctx;
 auto& video_stream_index = state->video_stream_index;
 auto& av_frame = state->av_frame;
 auto& av_packet = state->av_packet;
 auto& sws_scaler_ctx = state->sws_scaler_ctx;

 // Decode one frame
 int response;
 while (av_read_frame(av_format_ctx, av_packet) >= 0) {
 if (av_packet->stream_index != video_stream_index) {
 av_packet_unref(av_packet);
 continue;
 }

 response = avcodec_send_packet(av_codec_ctx, av_packet);
 if (response < 0) {
 printf("Failed to decode packet: %s\n", av_make_error(response));
 return false;
 }

 response = avcodec_receive_frame(av_codec_ctx, av_frame);
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
 av_packet_unref(av_packet);
 continue;
 } else if (response < 0) {
 printf("Failed to decode packet: %s\n", av_make_error(response));
 return false;
 }

 av_packet_unref(av_packet);
 break;
 }

 *pts = av_frame->pts;
 

 // Set up sws scaler
 if (!sws_scaler_ctx) {
 auto source_pix_fmt = correct_for_deprecated_pixel_format(av_codec_ctx->pix_fmt);
 sws_scaler_ctx = sws_getContext(width, height, AV_PIX_FMT_YUV420P,
 width, height, AV_PIX_FMT_RGB0,
 SWS_FAST_BILINEAR, NULL, NULL, NULL);
 }
 if (!sws_scaler_ctx) {
 printf("Couldn't initialize sw scaler\n");
 return false;
 }

 cout << av_codec_ctx->pix_fmt << endl;
 uint8_t* dest[4] = { frame_buffer, NULL, NULL, NULL };
 int dest_linesize[4] = { width * 4, 0, 0, 0 };
 sws_scale(sws_scaler_ctx, av_frame->data, av_frame->linesize, 0, av_frame->height, dest, dest_linesize);

 return true;
}

bool video_reader_seek_frame(VideoReaderState* state, int64_t ts) {
 
 // Unpack members of state
 auto& av_format_ctx = state->av_format_ctx;
 auto& av_codec_ctx = state->av_codec_ctx;
 auto& video_stream_index = state->video_stream_index;
 auto& av_packet = state->av_packet;
 auto& av_frame = state->av_frame;
 
 av_seek_frame(av_format_ctx, video_stream_index, ts, AVSEEK_FLAG_BACKWARD);

 // av_seek_frame takes effect after one frame, so I'm decoding one here
 // so that the next call to video_reader_read_frame() will give the correct
 // frame
 int response;
 while (av_read_frame(av_format_ctx, av_packet) >= 0) {
 if (av_packet->stream_index != video_stream_index) {
 av_packet_unref(av_packet);
 continue;
 }

 response = avcodec_send_packet(av_codec_ctx, av_packet);
 if (response < 0) {
 printf("Failed to decode packet: %s\n", av_make_error(response));
 return false;
 }

 response = avcodec_receive_frame(av_codec_ctx, av_frame);
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
 av_packet_unref(av_packet);
 continue;
 } else if (response < 0) {
 printf("Failed to decode packet: %s\n", av_make_error(response));
 return false;
 }

 av_packet_unref(av_packet);
 break;
 }

 return true;
}

void video_reader_close(VideoReaderState* state) {
 sws_freeContext(state->sws_scaler_ctx);
 avformat_close_input(&state->av_format_ctx);
 avformat_free_context(state->av_format_ctx);
 av_frame_free(&state->av_frame);
 av_packet_free(&state->av_packet);
 avcodec_free_context(&state->av_codec_ctx);
}



This gives me the following error at the sws_scale step while debugging. Could you please let me know what I could be doing wrong?


EXC_BAD_ACCESS (code=1, address=0x910043e491224463)


I was expecting to get the RGBA array as shown in various tutorials.
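As a point of reference, below is a minimal sketch of just the conversion step, using only libswscale calls that the posted code already relies on. It is not the original author's code: the helper name frame_to_rgba and the destination buffer frame_rgba are hypothetical, AV_PIX_FMT_RGBA is used because the question asks for RGBA output (the posted code requests AV_PIX_FMT_RGB0), and it assumes a frame was actually decoded before the call and that the caller allocated at least width * height * 4 bytes.

// Minimal sketch of the YUV -> RGBA step in isolation (names are hypothetical).
extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}
#include <cstdint>

// Assumes av_frame holds a valid decoded picture and frame_rgba points to a
// caller-owned buffer of at least width * height * 4 bytes.
static bool frame_to_rgba(const AVCodecContext* av_codec_ctx, const AVFrame* av_frame,
                          uint8_t* frame_rgba, int width, int height) {
    // Use the decoder's reported pixel format as the source format so the
    // scaler reads the planes with the layout the frame actually has.
    SwsContext* sws_ctx = sws_getContext(width, height, av_codec_ctx->pix_fmt,
                                         width, height, AV_PIX_FMT_RGBA,
                                         SWS_FAST_BILINEAR, NULL, NULL, NULL);
    if (!sws_ctx) {
        return false;
    }

    // Single packed destination plane, 4 bytes per pixel per row.
    uint8_t* dest[4] = { frame_rgba, NULL, NULL, NULL };
    int dest_linesize[4] = { width * 4, 0, 0, 0 };
    sws_scale(sws_ctx, av_frame->data, av_frame->linesize,
              0, height, dest, dest_linesize);

    sws_freeContext(sws_ctx);
    return true;
}

Given that call shape, the usual suspects for an EXC_BAD_ACCESS at sws_scale are a frame_buffer that was never allocated or is smaller than width * height * 4 bytes, reaching the scaler when the read loop fell through without ever receiving a frame, and a source pixel format handed to sws_getContext that does not match the decoded frame (the posted code computes source_pix_fmt but then hard-codes AV_PIX_FMT_YUV420P); checking those is usually the quickest way to narrow it down.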


-
aarch64/opusdsp: implement NEON accelerated postfilter and deemphasis
15 March 2019, by Lynne
153372 UNITS in postfilter_c, 65536 runs, 0 skips
73164 UNITS in postfilter_neon, 65536 runs, 0 skips -> 2.1x speedup
80591 UNITS in deemphasis_c, 131072 runs, 0 skips
43969 UNITS in deemphasis_neon, 131072 runs, 0 skips -> 1.83x speedup
Total decoder speedup: 15% on a Raspberry Pi 3 (from 28.1x to 33.5x realtime)
Deemphasis SIMD based on the following unrolling:
const float c1 = CELT_EMPH_COEFF, c2 = c1*c1, c3 = c2*c1, c4 = c3*c1;
float state = coeff;
for (int i = 0; i < len; i += 4) {
    y[0] = x[0] + c1*state;
    y[1] = x[1] + c2*state + c1*x[0];
    y[2] = x[2] + c3*state + c1*x[1] + c2*x[0];
    y[3] = x[3] + c4*state + c1*x[2] + c2*x[1] + c3*x[0];
    state = y[3];
    y += 4;
    x += 4;
}
Unlike the x86 version, duplication is used instead of pslldq so
the structure and tables are different.
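For readers following the unrolling quoted above, here is a plain scalar sketch of the recurrence it starts from; the function name is illustrative and CELT_EMPH_COEFF is approximated as 0.85, so neither is taken verbatim from the FFmpeg sources. Substituting each previous output back into y[n] = x[n] + c1*y[n-1] four times yields exactly the c1..c4 terms in the commit message.

// Scalar de-emphasis reference: y[n] = x[n] + c1 * y[n-1], with the previous
// output carried across calls in `state`. Unrolling this four outputs at a
// time (substituting each y back in) produces the c1..c4 expression above.
static float deemphasis_scalar(float *y, const float *x, float state, int len) {
    const float c1 = 0.85f;  // approximately CELT_EMPH_COEFF
    for (int i = 0; i < len; i++) {
        y[i] = x[i] + c1 * state;
        state = y[i];
    }
    return state;            // fed back in as the state for the next block
}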
-
How to use ffmpeg.wasm in Firefox without getting the SharedArrayBuffer?
28 December 2020, by Pedro Henrique
I'm trying to load ffmpeg.wasm in a React app to build a small video converter. The code works fine in Chrome, but in Firefox Developer Edition (83.0b) I get the following error:




ReferenceError: SharedArrayBuffer is not defined




Here's the part of the component where the error is caught (the ready variable never becomes true):


import React, { useState, useEffect } from 'react'
import styles from './App.module.css'
import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg'
const ffmpeg = createFFmpeg({ log: true })

function App() {
 // load state
 const [ready, setReady] = useState(false)
 // files state
 const [video, setVideo] = useState('')
 const [gif, setGif] = useState()
 // UI state
 const [dragOver, setDragOver ] = useState(false)
 const [nOfEnters, setNOfEnters] = useState(0)

 const load = async () => {
 try {
 await ffmpeg.load()
 setReady(true)
 } catch(error) {
 console.log(error)
 }
 }

 useEffect(() => {
 load()
 }, [])



Thanks in advance, let me know if I should've provided any more detail.