
Other articles (13)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. To help us fix it, please provide the following information:
- the browser you are using, including its exact version
- as precise a description of the problem as possible
- if possible, the steps that led to the problem
- a link to the site / page in question
If you think you have fixed the bug, open a ticket and attach a corrective patch to it.
You may also (...) -
Media quality after processing
21 June 2013, by
Properly configuring the software that processes the media is important for striking a balance between the parties involved (the host's bandwidth, media quality for the editor and the visitor, accessibility for the visitor). How should you set the quality of your media?
The higher the media quality, the more bandwidth will be used. A visitor on a low-bandwidth Internet connection will have to wait longer. Conversely, the lower the media quality, the more degraded the media becomes, or even (...) -
What is a form mask?
13 June 2013, by
A form mask is a customization of the publication form for media, sections, news items, editorials, and links to other sites.
Each object publication form can therefore be customized.
To customize the form fields, go to your MediaSPIP administration area and select "Configuration des masques de formulaires".
Then select the form to modify by clicking on its object type. (...)
On other sites (4741)
-
ffmpeg not working for AWS S3 signed urls
19 September 2017, by Ahmad hamza
I'm trying to convert video from mp4 to wav. It works fine with public URLs:
ffmpeg -i https://s3.amazonaws.com/my-fake-bucket/folder1/video.mp4 -vn -acodec pcm_s16le -ar 44100 -ac 2 video.wav
But when I use an AWS signed URL, the conversion fails with:
[https @ 0x1bf6c40] HTTP error 400 Bad Request
Any idea what needs to be done to access signed URLs with ffmpeg?
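One frequent cause of this error, though not confirmed by the question, is the shell splitting the presigned URL at its & characters, so ffmpeg only receives the part before the first query parameter and S3 rejects the truncated request. A minimal sketch, using a hypothetical presigned URL, is simply to quote the whole URL:
ffmpeg -i 'https://s3.amazonaws.com/my-fake-bucket/folder1/video.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=3600&X-Amz-Signature=EXAMPLE' -vn -acodec pcm_s16le -ar 44100 -ac 2 video.wav
If the URL is already quoted, the 400 may instead come from an expired or incorrectly generated signature, which has to be checked on the AWS side.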
-
FFMPEG capture video from Magewell USB Capture Family 90 fps [on hold]
9 September 2017, by Fabrizio
I'm trying to capture video from a Magewell USB Capture Family device at 90 fps (or 60 fps), but FFmpeg exits with the error:
[dshow @ 00000000005d8fe0] Could not set video options video=USB Capture HDMI: I/O error
The FFmpeg command I use is:
ffmpeg -y -f dshow -video_size 1280x720 -rtbufsize 2000M -r 90 -i video="USB Capture HDMI" -codec:v libx264 -pix_fmt yuv420p -vb 20M -preset veryfast -t 15 capture.mp4
According to Magewell's specifications, 60 and 90 fps are supported.
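"Could not set video options" from the dshow input usually means the requested combination of size, frame rate, and pixel format is not one the device advertises in its current mode. As a first diagnostic step (a sketch, not a confirmed fix for this particular device), you can list the modes the capture card exposes and then request one of them explicitly, for example via -framerate and, if a compressed mode is listed, -vcodec mjpeg:
ffmpeg -f dshow -list_options true -i video="USB Capture HDMI"
ffmpeg -y -f dshow -video_size 1280x720 -framerate 60 -vcodec mjpeg -rtbufsize 2000M -i video="USB Capture HDMI" -codec:v libx264 -pix_fmt yuv420p -vb 20M -preset veryfast -t 15 capture.mp4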
-
The C++11 Thread Timer is not working
26 August 2017, by Gathros
I'm trying to make a video player using SDL2 and the FFmpeg API. The video is being decoded and I can display an image on screen. I can also play audio, but I'm not doing that here (I know it works; I've tried it).
My problem is that I can't update the image at the right time. I'm able to get the timestamps and work out the delay, then send it to a thread which should trigger a window update once that time has elapsed. But all that happens is that the images flash on screen with no delay. I have even set the delay to 1 second and the images still flash, after one second of a blank window.
Here is my code:
extern "C"{
//FFmpeg libraries
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
//SDL2 libraries
#include <SDL2/SDL.h>
}
// compatibility with newer API
#if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(55,28,1)
#define av_frame_alloc avcodec_alloc_frame
#define av_frame_free avcodec_free_frame
#endif
//C++ libraries
#include <cstdio>
#include <chrono>
#include <thread>
#include <atomic>
#include <mutex>
#include <condition_variable>
typedef struct PacketQueue {
AVPacketList *first_pkt, *last_pkt;
std::mutex mutex;
std::condition_variable convar;
} PacketQueue;
std::atomic<bool> quitting, decoded;
std::atomic<int64_t> delay;
Uint32 Update_Window;
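// Append a packet to the queue and wake any thread waiting in packet_queue_get().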
int packet_queue_put(PacketQueue *q, AVPacket *pkt){
AVPacketList *pkt1;
if(av_dup_packet(pkt) < 0){
return -1;
}
pkt1 = (AVPacketList*) av_malloc(sizeof(AVPacketList));
if(!pkt1){
return -1;
}
pkt1->pkt = *pkt;
pkt1->next = NULL;
std::lock_guard<std::mutex> lock(q->mutex);
if (!q->last_pkt){
q->first_pkt = pkt1;
}else{
q->last_pkt->next = pkt1;
}
q->last_pkt = pkt1;
q->convar.notify_all();
return 0;
}
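// Pop the next packet; when block is set, wait in 50-microsecond slices until a packet arrives or decoding has finished.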
static int packet_queue_get(PacketQueue *q, AVPacket *pkt, int block){
AVPacketList *pkt1;
int ret;
std::unique_lock<std::mutex> lk(q->mutex);
while(1){
if(quitting){
ret = -1;
break;
}
pkt1 = q->first_pkt;
if(pkt1){
q->first_pkt = pkt1->next;
if(!q->first_pkt){
q->last_pkt = NULL;
}
*pkt = pkt1->pkt;
av_free(pkt1);
ret = 1;
break;
}else if(decoded){
ret = 0;
quitting = true;
break;
}else if(block){
q->convar.wait_for(lk, std::chrono::microseconds(50));
}else {
ret = 0;
break;
}
}
return ret;
}
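// Push the custom Update_Window event so the main loop redraws with the next frame.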
void UpdateEventQueue(){
SDL_Event event;
SDL_zero(event);
event.type = Update_Window;
SDL_PushEvent(&event);
}
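// Timer thread: sleep for the current frame delay (in microseconds), then request a window update.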
void VideoTimerThreadFunc(){
UpdateEventQueue();
while(!quitting){
if(delay == 0){
std::this_thread::sleep_for(std::chrono::milliseconds(1));
}else {
std::this_thread::sleep_for(std::chrono::microseconds(delay));
UpdateEventQueue();
}
}
}
int main(int argc, char *argv[]){
AVFormatContext* FormatCtx = nullptr;
AVCodecContext* CodecCtxOrig = nullptr;
AVCodecContext* CodecCtx = nullptr;
AVCodec* Codec = nullptr;
int videoStream;
AVFrame* Frame = nullptr;
AVPacket packet;
struct SwsContext* SwsCtx = nullptr;
PacketQueue videoq;
int frameFinished;
int64_t last_pts = 0;
const AVRational ms = {1, 1000};
SDL_Event event;
SDL_Window* screen;
SDL_Renderer* renderer;
SDL_Texture* texture;
std::shared_ptr<Uint8> yPlane, uPlane, vPlane;
int uvPitch;
if (argc != 2) {
fprintf(stderr, "Usage: %s <file>\n", argv[0]);
return -1;
}
// Register all formats and codecs
av_register_all();
// Initialise SDL2
if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
fprintf(stderr, "Couldn't initialise SDL - %s\n", SDL_GetError());
return -1;
}
// Setting things up
quitting = false;
decoded = false;
delay = 0;
Update_Window = SDL_RegisterEvents(1);
memset(&videoq, 0, sizeof(PacketQueue));
// Open video file
if(avformat_open_input(&FormatCtx, argv[1], NULL, NULL) != 0){
fprintf(stderr, "Couldn't open file\n");
return -1; // Couldn't open file
}
// Retrieve stream information
if(avformat_find_stream_info(FormatCtx, NULL) < 0){
fprintf(stderr, "Couldn't find stream information\n");
// Close the video file
avformat_close_input(&FormatCtx);
return -1; // Couldn't find stream information
}
// Find the video stream
videoStream = av_find_best_stream(FormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
if(videoStream < 0){
fprintf(stderr, "Couldn't find video stream\n");
// Close the video file
avformat_close_input(&FormatCtx);
return -1; // Didn't find a video stream
}
// Get a pointer to the codec context for the video stream
CodecCtxOrig = FormatCtx->streams[videoStream]->codec;
// Find the decoder for the video stream
Codec = avcodec_find_decoder(CodecCtxOrig->codec_id);
if(Codec == NULL){
fprintf(stderr, "Unsupported codec\n");
// Close the codec
avcodec_close(CodecCtxOrig);
// Close the video file
avformat_close_input(&FormatCtx);
return -1; // Codec not found
}
// Copy context
CodecCtx = avcodec_alloc_context3(Codec);
if(avcodec_copy_context(CodecCtx, CodecCtxOrig) != 0){
fprintf(stderr, "Couldn't copy codec context");
// Close the codec
avcodec_close(CodecCtxOrig);
// Close the video file
avformat_close_input(&FormatCtx);
return -1; // Error copying codec context
}
// Open codec
if(avcodec_open2(CodecCtx, Codec, NULL) < 0){
fprintf(stderr, "Couldn't open codec\n");
// Close the codec
avcodec_close(CodecCtx);
avcodec_close(CodecCtxOrig);
// Close the video file
avformat_close_input(&FormatCtx);
return -1; // Could not open codec
}
// Allocate video frame
Frame = av_frame_alloc();
// Make a screen to put our video
screen = SDL_CreateWindow("Video Player", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, CodecCtx->width, CodecCtx->height, 0);
if(!screen){
fprintf(stderr, "SDL: could not create window - exiting\n");
quitting = true;
// Clean up SDL2
SDL_Quit();
// Free the YUV frame
av_frame_free(&Frame);
// Close the codec
avcodec_close(CodecCtx);
avcodec_close(CodecCtxOrig);
// Close the video file
avformat_close_input(&FormatCtx);
return -1;
}
renderer = SDL_CreateRenderer(screen, -1, 0);
if(!renderer){
fprintf(stderr, "SDL: could not create renderer - exiting\n");
quitting = true;
// Clean up SDL2
SDL_DestroyWindow(screen);
SDL_Quit();
// Free the YUV frame
av_frame_free(&Frame);
// Close the codec
avcodec_close(CodecCtx);
avcodec_close(CodecCtxOrig);
// Close the video file
avformat_close_input(&FormatCtx);
return -1;
}
// Allocate a place to put our YUV image on that screen
texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING, CodecCtx->width, CodecCtx->height);
if(!texture){
fprintf(stderr, "SDL: could not create texture - exiting\n");
quitting = true;
// Clean up SDL2
SDL_DestroyRenderer(renderer);
SDL_DestroyWindow(screen);
SDL_Quit();
// Free the YUV frame
av_frame_free(&Frame);
// Close the codec
avcodec_close(CodecCtx);
avcodec_close(CodecCtxOrig);
// Close the video file
avformat_close_input(&FormatCtx);
return -1;
}
// Initialise SWS context for software scaling
SwsCtx = sws_getContext(CodecCtx->width, CodecCtx->height, CodecCtx->pix_fmt,
CodecCtx->width, CodecCtx->height, PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
if(!SwsCtx){
fprintf(stderr, "Couldn't create sws context\n");
quitting = true;
// Clean up SDL2
SDL_DestroyTexture(texture);
SDL_DestroyRenderer(renderer);
SDL_DestroyWindow(screen);
SDL_Quit();
// Free the YUV frame
av_frame_free(&Frame);
// Close the codec
avcodec_close(CodecCtx);
avcodec_close(CodecCtxOrig);
// Close the video file
avformat_close_input(&FormatCtx);
return -1;
}
// set up YV12 pixel array (12 bits per pixel)
yPlane = std::shared_ptr<Uint8>((Uint8 *)::operator new (CodecCtx->width * CodecCtx->height, std::nothrow));
uPlane = std::shared_ptr<Uint8>((Uint8 *)::operator new (CodecCtx->width * CodecCtx->height / 4, std::nothrow));
vPlane = std::shared_ptr<Uint8>((Uint8 *)::operator new (CodecCtx->width * CodecCtx->height / 4, std::nothrow));
uvPitch = CodecCtx->width / 2;
if (!yPlane || !uPlane || !vPlane) {
fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
quitting = true;
// Clean up SDL2
SDL_DestroyTexture(texture);
SDL_DestroyRenderer(renderer);
SDL_DestroyWindow(screen);
SDL_Quit();
// Free the YUV frame
av_frame_free(&Frame);
// Close the codec
avcodec_close(CodecCtx);
avcodec_close(CodecCtxOrig);
// Close the video file
avformat_close_input(&FormatCtx);
return -1;
}
std::thread VideoTimer (VideoTimerThreadFunc);
while (!quitting) {
// Check for more packets
if(av_read_frame(FormatCtx, &packet) >= 0){
// Check what stream it belongs to
if (packet.stream_index == videoStream) {
packet_queue_put(&videoq, &packet);
}else{
// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
}
}else {
decoded = true;
}
SDL_PollEvent(&event);
if(event.type == Update_Window){
// Getting packet
if(packet_queue_get(&videoq, &packet, 0)){
// Decode video frame
avcodec_decode_video2(CodecCtx, Frame, &frameFinished, &packet);
// Did we get a video frame?
if (frameFinished) {
AVPicture pict;
pict.data[0] = yPlane.get();
pict.data[1] = uPlane.get();
pict.data[2] = vPlane.get();
pict.linesize[0] = CodecCtx->width;
pict.linesize[1] = uvPitch;
pict.linesize[2] = uvPitch;
// Convert the image into YUV format that SDL uses
sws_scale(SwsCtx, (uint8_t const * const *) Frame->data, Frame->linesize, 0, CodecCtx->height, pict.data, pict.linesize);
SDL_UpdateYUVTexture(texture, NULL, yPlane.get(), CodecCtx->width, uPlane.get(), uvPitch, vPlane.get(), uvPitch);
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, texture, NULL, NULL);
SDL_RenderPresent(renderer);
// Calculating delay
delay = av_rescale_q(packet.dts, CodecCtx->time_base, ms) - last_pts;
last_pts = av_rescale_q(packet.dts, CodecCtx->time_base, ms);
}else{
//UpdateEventQueue();
delay = 1;
}
// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
}else{
//UpdateEventQueue();
}
}
switch (event.type) {
case SDL_QUIT:
quitting = true;
break;
default:
break;
}
}
VideoTimer.join();
//SDL2 clean up
SDL_DestroyTexture(texture);
SDL_DestroyRenderer(renderer);
SDL_DestroyWindow(screen);
SDL_Quit();
// Free the YUV frame
av_frame_free(&Frame);
// Free Sws
sws_freeContext(SwsCtx);
// Close the codec
avcodec_close(CodecCtx);
avcodec_close(CodecCtxOrig);
// Close the video file
avformat_close_input(&FormatCtx);
return 0;
}