
Other articles (35)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)
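A minimal sketch of the table described above, issued from Node purely for illustration; the field names come from the article, while the column types, sizes, primary key, and the mysql2 client are assumptions rather than SPIPmotion's actual definition:

// Hypothetical sketch of spip_spipmotion_attentes: field names are from the
// article above; column types and the primary key are assumptions.
const mysql = require('mysql2/promise');

async function createQueueTable(connection) {
  await connection.query(`
    CREATE TABLE IF NOT EXISTS spip_spipmotion_attentes (
      id_spipmotion_attente INT UNSIGNED NOT NULL AUTO_INCREMENT, -- task to process
      id_document           INT UNSIGNED NOT NULL,                -- original document to encode
      id_objet              INT UNSIGNED NOT NULL,                -- object the encoded document attaches to
      objet                 VARCHAR(25)  NOT NULL,                -- type of that object
      PRIMARY KEY (id_spipmotion_attente)
    )
  `);
}
-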
Using and configuring the script
19 January 2011
Information specific to the Debian distribution
If you use this distribution, you will need to enable the "debian-multimedia" repositories, as explained here:
Since version 0.3.1 of the script, the repository can be enabled automatically in response to a prompt.
Retrieving the script
The installation script can be retrieved in two different ways.
Via svn, using this command to fetch the up-to-date source code:
svn co (...)
-
Contribute to documentation
13 April 2011
Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including:
- critiques of existing features and functions
- articles contributed by developers, administrators, content producers and editors
- screenshots to illustrate the above
- translations of existing documentation into other languages
To contribute, register for the project users' mailing (...)
On other sites (5746)
-
google speech to text errors out (grpc invalid deadline NaN)
15 December 2019, by jamescharlesworth
I have an ffmpeg script that cuts an audio file into a short 5-second clip; however, after I cut the file, calling the Google Speech recognize command errors out.
Creating a clip (full code link):
// assumed imports for this excerpt (fluent-ffmpeg API plus a got stream)
const ffmpeg = require('fluent-ffmpeg');
const got = require('got');

const uri = 'http://traffic.libsyn.com/joeroganexp/p1400.mp3?dest-id=19997';
const command = ffmpeg(got.stream(uri));
command
  .seek(0)           // start at 0 seconds
  .duration(5)       // keep 5 seconds
  .audioBitrate(128)
  .format('mp3')
  .save('./clip2.mp3'); // inferred from the prose: the script creates ./clip2.mp3
...which works fine and creates ./clip2.mp3. I then take that file and upload it to the Speech-to-Text API, and it times out (script here). When I pass timeout and maxRetries arguments I can get the actual error:
Error: 2 UNKNOWN: Getting metadata from plugin failed with error: Deadline is too far in the future
at Object.callErrorFromStatus (/Users/jamescharlesworth/Downloads/demo/node_modules/@grpc/grpc-js/build/src/call.js:30:26)
at Http2CallStream.<anonymous> (/Users/jamescharlesworth/Downloads/demo/node_modules/@grpc/grpc-js/build/src/client.js:96:33)
at Http2CallStream.emit (events.js:215:7)
at /Users/jamescharlesworth/Downloads/demo/node_modules/@grpc/grpc-js/build/src/call-stream.js:98:22
at processTicksAndRejections (internal/process/task_queues.js:75:11) {
code: 2,
details: 'Getting metadata from plugin failed with error: Deadline is too far in the future',
metadata: Metadata { internalRepr: Map {}, options: {} },
note: 'Exception occurred in retry method that was not classified as transient'
}
Stepping through the grpc code I see that the deadline is an invalid date. This seems to be causing the issue, but I assume it may come from incorrect params passed into the speechClient.recognize() method (see the sketches after the notes below).
A few other things to note:
- The script works for some audio files, not all
- I can upload my broken clip, clip2.mp3, to the demo app here and it works fine.
- If I change the seek command of my ffmpeg script to start at 0.01, the speech recognize command will work (however, it breaks other audio clips, as it's not the correct starting point). I notice that when I do this, the PNG artwork embedded in the mp3 gets stripped out and the file is much smaller.
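Two sketches may make this concrete. First, the NaN deadline itself is plain JavaScript arithmetic: an absolute deadline computed from the current time plus a missing or misparsed timeout comes out as an invalid date (illustrative only, not grpc-js's actual code):

const timeout = undefined;              // e.g. a CallOptions value that never arrived
const deadline = Date.now() + timeout;  // NaN
console.log(new Date(deadline));        // Invalid Date

Second, a minimal sketch of the recognize call with explicit CallOptions, assuming the @google-cloud/speech Node client; the config values (encoding, sample rate, language) and the transcribe helper are illustrative assumptions, not the asker's actual script:

// Hypothetical sketch: pass gax CallOptions (timeout in ms, maxRetries) as the
// second argument to recognize() so failures surface as errors instead of hanging.
const speech = require('@google-cloud/speech');
const fs = require('fs');

async function transcribe(path) {
  // v1p1beta1 client, assuming MP3 input support is needed
  const client = new speech.v1p1beta1.SpeechClient();
  const request = {
    audio: { content: fs.readFileSync(path).toString('base64') },
    config: {
      encoding: 'MP3',        // assumption: matches the clip produced above
      sampleRateHertz: 44100, // assumption: must match the actual file
      languageCode: 'en-US',
    },
  };
  const [response] = await client.recognize(request, {
    timeout: 60000,  // explicit deadline, as the asker describes
    maxRetries: 2,
  });
  return response.results
    .map((r) => r.alternatives[0].transcript)
    .join('\n');
}

transcribe('./clip2.mp3').then(console.log).catch(console.error);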
-
How to Add PulseAudio Server to quay.io/browser/google-chrome-stable Docker Image for Audio Support with Screen Recording?
17 April, by Ahmed Seddik Bouchiba
I'm trying to set up an environment for recording the screen of a Chrome browser running in a Docker container, and I need to enable audio support. I'm using the quay.io/browser/google-chrome-stable:133.0.6943.98-6 image for the browser and quay.io/aerokube/xvfb:21.1 for the virtual framebuffer to capture the screen.


However, I’m facing an issue where audio is not supported in the Chrome Docker image, which I need for recording. The setup involves using FFmpeg in a separate container to stream the recorded video, but without audio from the browser, this setup isn’t complete.


I'm looking for guidance on how to add a PulseAudio server to the Chrome image to enable audio support. Specifically:


How can I configure the Docker image quay.io/browser/google-chrome-stable:133.0.6943.98-6 to support PulseAudio?

Are there any considerations or best practices when adding PulseAudio to a headless browser Docker container?

Is it possible to run the PulseAudio server in a separate container and link it to the Chrome container, or should it be included directly in the Chrome container?



Any help on adding PulseAudio support to this Chrome Docker image would be greatly appreciated!


Additional Context:


The goal is to run a headless Chrome browser with audio support to record the browser’s activities (both video and audio) and stream it using FFmpeg.

I’m using Docker Compose to orchestrate the containers but haven’t figured out how to integrate PulseAudio into the setup effectively.



Thanks in advance!


-
Saving frames as JPG with FFMPEG (Visual Studio / C++)
10 November 2022, by Diego Satizabal
I am trying to save every frame of an mp4 video to a separate JPG file. I have code that runs and actually writes JPG files, but the files are not recognized as images and nothing displays.


Below is my full code; I am using Visual Studio 2022 on Windows 11 with FFmpeg 5.1. The function that saves the images is save_frame_as_jpeg, which is an adaptation of the code provided here, but replaces the use of avcodec_encode_video2 with avcodec_send_frame/avcodec_receive_packet, as indicated in the documentation.


I am obviously doing something wrong but cannot quite find it. BTW, I know that a simple command (ffmpeg -i input.mp4 -vf fps=1 vid_%d.png) will do this, but I need to do it in code.


Any help is appreciated, thanks in advance!


// FfmpegTests.cpp : This file contains the 'main' function. Program execution begins and ends there.
//
#pragma warning(disable : 4996)
extern "C"
{
 #include "libavformat/avformat.h"
 #include "libavcodec/avcodec.h"
 #include "libavfilter/avfilter.h"
 #include "libavutil/opt.h"
 #include "libavutil/avutil.h"
 #include "libavutil/error.h"
 #include "libavfilter/buffersrc.h"
 #include "libavfilter/buffersink.h"
 #include "libswscale/swscale.h"
}

#pragma comment(lib, "avcodec.lib")
#pragma comment(lib, "avformat.lib")
#pragma comment(lib, "avfilter.lib")
#pragma comment(lib, "avutil.lib")
#pragma comment(lib, "swscale.lib")

#include <cstdio>
#include <iostream>
#include <chrono>
#include <thread>


static AVFormatContext* fmt_ctx;
static AVCodecContext* dec_ctx;
AVFilterGraph* filter_graph;
AVFilterContext* buffersrc_ctx;
AVFilterContext* buffersink_ctx;
static int video_stream_index = -1;

const char* filter_descr = "scale=78:24,transpose=cclock";
static int64_t last_pts = AV_NOPTS_VALUE;

static int open_input_file(const char* filename)
{
 const AVCodec* dec;
 int ret;

 if ((ret = avformat_open_input(&fmt_ctx, filename, NULL, NULL)) < 0) {
 av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
 return ret;
 }

 if ((ret = avformat_find_stream_info(fmt_ctx, NULL)) < 0) {
 av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
 return ret;
 }

 /* select the video stream */
 ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
 if (ret < 0) {
 av_log(NULL, AV_LOG_ERROR, "Cannot find a video stream in the input file\n");
 return ret;
 }
 video_stream_index = ret;

 /* create decoding context */
 dec_ctx = avcodec_alloc_context3(dec);
 if (!dec_ctx)
 return AVERROR(ENOMEM);
 avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[video_stream_index]->codecpar);

 /* init the video decoder */
 if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
 av_log(NULL, AV_LOG_ERROR, "Cannot open video decoder\n");
 return ret;
 }

 return 0;
}

static int init_filters(const char* filters_descr)
{
 char args[512];
 int ret = 0;
 const AVFilter* buffersrc = avfilter_get_by_name("buffer");
 const AVFilter* buffersink = avfilter_get_by_name("buffersink");
 AVFilterInOut* outputs = avfilter_inout_alloc();
 AVFilterInOut* inputs = avfilter_inout_alloc();
 AVRational time_base = fmt_ctx->streams[video_stream_index]->time_base;
 enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE };

 filter_graph = avfilter_graph_alloc();
 if (!outputs || !inputs || !filter_graph) {
 ret = AVERROR(ENOMEM);
 goto end;
 }

 /* buffer video source: the decoded frames from the decoder will be inserted here. */
 snprintf(args, sizeof(args),
 "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
 dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
 time_base.num, time_base.den,
 dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);

 ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
 args, NULL, filter_graph);
 if (ret < 0) {
 av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
 goto end;
 }

 /* buffer video sink: to terminate the filter chain. */
 ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
 NULL, NULL, filter_graph);
 if (ret < 0) {
 av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
 goto end;
 }

 ret = av_opt_set_int_list(buffersink_ctx, "pix_fmts", pix_fmts, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
 if (ret < 0) {
 av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
 goto end;
 }

 outputs->name = av_strdup("in");
 outputs->filter_ctx = buffersrc_ctx;
 outputs->pad_idx = 0;
 outputs->next = NULL;

 inputs->name = av_strdup("out");
 inputs->filter_ctx = buffersink_ctx;
 inputs->pad_idx = 0;
 inputs->next = NULL;

 if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
 &inputs, &outputs, NULL)) < 0)
 goto end;

 if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
 goto end;

end:
 avfilter_inout_free(&inputs);
 avfilter_inout_free(&outputs);

 return ret;
}

static void display_frame(const AVFrame* frame, AVRational time_base)
{
 int x, y;
 uint8_t* p0, * p;
 int64_t delay;

 if (frame->pts != AV_NOPTS_VALUE) {
 if (last_pts != AV_NOPTS_VALUE) {
 /* sleep roughly the right amount of time;
 * usleep is in microseconds, just like AV_TIME_BASE. */
 AVRational timeBaseQ;
 timeBaseQ.num = 1;
 timeBaseQ.den = AV_TIME_BASE;

 delay = av_rescale_q(frame->pts - last_pts, time_base, timeBaseQ);
 if (delay > 0 && delay < 1000000)
 std::this_thread::sleep_for(std::chrono::microseconds(delay));
 }
 last_pts = frame->pts;
 }

 /* Trivial ASCII grayscale display. */
 p0 = frame->data[0];
 puts("\033c");
 for (y = 0; y < frame->height; y++) {
 p = p0;
 for (x = 0; x < frame->width; x++)
 putchar(" .-+#"[*(p++) / 52]);
 putchar('\n');
 p0 += frame->linesize[0];
 }
 fflush(stdout);
}

int save_frame_as_jpeg(AVCodecContext* pCodecCtx, AVFrame* pFrame, int FrameNo) {
 int ret = 0;

 const AVCodec* jpegCodec = avcodec_find_encoder(AV_CODEC_ID_JPEG2000);
 if (!jpegCodec) {
 return -1;
 }
 AVCodecContext* jpegContext = avcodec_alloc_context3(jpegCodec);
 if (!jpegContext) {
 return -1;
 }

 jpegContext->pix_fmt = pCodecCtx->pix_fmt;
 jpegContext->height = pFrame->height;
 jpegContext->width = pFrame->width;
 jpegContext->time_base = AVRational{ 1,10 };

 ret = avcodec_open2(jpegContext, jpegCodec, NULL);
 if (ret < 0) {
 return ret;
 }
 FILE* JPEGFile;
 char JPEGFName[256];

 AVPacket packet;
 packet.data = NULL;
 packet.size = 0;
 av_init_packet(&packet);

 /* send the raw frame to the encoder, then fetch the compressed packet:
    the send/receive pair that replaces the old avcodec_encode_video2() */
 ret = avcodec_send_frame(jpegContext, pFrame);
 if (ret < 0) {
 return ret;
 }

 ret = avcodec_receive_packet(jpegContext, &packet);
 if (ret < 0) {
 return ret;
 }

 sprintf(JPEGFName, "c:\\folder\\dvr-%06d.jpg", FrameNo);
 JPEGFile = fopen(JPEGFName, "wb");
 fwrite(packet.data, 1, packet.size, JPEGFile);
 fclose(JPEGFile);

 av_packet_unref(&packet);
 avcodec_close(jpegContext);
 return 0;
}

int main(int argc, char** argv)
{
 AVFrame* frame;
 AVFrame* filt_frame;
 AVPacket* packet;
 int ret;

 if (argc != 2) {
 fprintf(stderr, "Usage: %s file\n", argv[0]);
 exit(1);
 }

 frame = av_frame_alloc();
 filt_frame = av_frame_alloc();
 packet = av_packet_alloc();

 if (!frame || !filt_frame || !packet) {
 fprintf(stderr, "Could not allocate frame or packet\n");
 exit(1);
 }

 if ((ret = open_input_file(argv[1])) < 0)
 goto end;
 if ((ret = init_filters(filter_descr)) < 0)
 goto end;

 while (true)
 {
 if ((ret = av_read_frame(fmt_ctx, packet)) < 0)
 break;

 if (packet->stream_index == video_stream_index) {
 ret = avcodec_send_packet(dec_ctx, packet);
 if (ret < 0) {
 av_log(NULL, AV_LOG_ERROR, "Error while sending a packet to the decoder\n");
 break;
 }

 while (ret >= 0)
 {
 ret = avcodec_receive_frame(dec_ctx, frame);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
 break;
 }
 else if (ret < 0) {
 av_log(NULL, AV_LOG_ERROR, "Error while receiving a frame from the decoder\n");
 goto end;
 }

 frame->pts = frame->best_effort_timestamp;

 /* push the decoded frame into the filtergraph */
 if (av_buffersrc_add_frame_flags(buffersrc_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
 av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
 break;
 }

 /* pull filtered frames from the filtergraph */
 while (1) {
 ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 break;
 if (ret < 0)
 goto end;
 display_frame(filt_frame, buffersink_ctx->inputs[0]->time_base);
 av_frame_unref(filt_frame);

 /* also save the decoded (pre-filter) frame as an image, numbered by
    the decoder's frame counter */
 ret = save_frame_as_jpeg(dec_ctx, frame, dec_ctx->frame_number);
 if (ret < 0)
 goto end;
 }
 av_frame_unref(frame);
 }
 }
 av_packet_unref(packet);
 }

end:
 avfilter_graph_free(&filter_graph);
 avcodec_free_context(&dec_ctx);
 avformat_close_input(&fmt_ctx);
 av_frame_free(&frame);
 av_frame_free(&filt_frame);
 av_packet_free(&packet);

 if (ret < 0 && ret != AVERROR_EOF) {
 char errBuf[AV_ERROR_MAX_STRING_SIZE]{0};
 int res = av_strerror(ret, errBuf, AV_ERROR_MAX_STRING_SIZE);
 fprintf(stderr, "Error: %s\n", errBuf);
 exit(1);
 }

 exit(0);
}