
Media (1)
-
La conservation du net art au musée. Les stratégies à l'œuvre (Conserving net art in the museum: the strategies at work)
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (49)
-
La file d'attente de SPIPmotion (The SPIPmotion queue)
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document is to be attached automatically; objet, the type of object to which (...)
-
Other interesting software
13 April 2011
We don't claim to be the only ones doing what we do, and we certainly don't claim to be the best at it either. What we do, we simply try to do well, and to keep getting better.
The following list contains software that does more or less what MediaSPIP does, or that MediaSPIP more or less tries to emulate.
We don't know these projects and we haven't tried them, but you can take a peek.
Videopress
Website: http://videopress.com/
License: GNU/GPL v2
Source code: (...)
-
Personnaliser en ajoutant son logo, sa bannière ou son image de fond (Customising by adding your logo, banner or background image)
5 September 2013
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
On other sites (5515)
-
ffmpeg to lower/fade audio volume of one audio stream when microphone voice detected?
11 June 2021, by Lectos Lacious
I want to do live audio translation via a microphone: take the live video/audio streamed from Facebook, plug the mic into a laptop, and do the live translation by mixing the existing audio stream with the one coming from the mic (the translation). That part works; I got it going with the "amix" audio filter, which mixes the two audio streams into one. Now I want to refine it: is it possible, when voice is detected on the mic, to automatically lower/fade the volume of the original audio stream by about 20% so the translation (mic audio) is heard more clearly, and then, when the mic has been quiet for say 3-5 seconds, fade the original audio stream back up to its normal volume? Is this asking too much, or should I play with sox or something similar?
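One common way to get this kind of automatic ducking with ffmpeg alone is the sidechaincompress filter, which lowers the first input whenever the second input (here, the microphone) rises above a threshold. The following is only a sketch: program.mp3 and mic.wav are hypothetical stand-ins for the live Facebook audio and the microphone, and the threshold, ratio and release values are starting points to tune rather than known-good settings.

# the mic is split: one copy keys the compressor (the ducking), the other is mixed into the output
ffmpeg -i program.mp3 -i mic.wav \
  -filter_complex "[1:a]asplit[sc][mix];[0:a][sc]sidechaincompress=threshold=0.05:ratio=5:attack=200:release=4000[duck];[duck][mix]amix=inputs=2:duration=first[out]" \
  -map "[out]" -c:a aac mixed.m4a

The release value (in milliseconds) controls how long after the microphone goes quiet the original audio climbs back to its normal level, which roughly corresponds to the 3-5 second recovery described above.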


-
FFmpeg RTP payload 96 instead of 97
26 October 2016, by bot1131357
I am trying to create an RTP audio stream with ffmpeg. The application output and the SDP file configuration are as follows:
Output #0, rtp, to 'rtp://127.0.0.1:8554':
Stream #0:0: Audio: pcm_s16be, 8000 Hz, stereo, s16, 256 kb/s
SDP:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 127.0.0.1
t=0 0
a=tool:libavformat 57.25.101
m=audio 8554 RTP/AVP 96
b=AS:256
a=rtpmap:96 L16/8000/2
However, when I try to read it with
ffplay -i test.sdp -protocol_whitelist file,udp,rtp
it fails and shows the following:
ffplay version N-78598-g98a0053 Copyright (c) 2003-2016 the FFmpeg developers
built with gcc 5.3.0 (GCC)
configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
libavutil 55. 18.100 / 55. 18.100
libavcodec 57. 24.103 / 57. 24.103
libavformat 57. 25.101 / 57. 25.101
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 34.100 / 6. 34.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
nan : 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0
(...waits indefinitely.)
The only way to make it work is to change the payload type in the SDP file from 96 to 97. Can someone tell me why? Where is this number defined?
Here is my source. See if you can replicate it.
#include <math.h>   /* for sin() and M_PI */
#include <stdio.h>  /* for printf()/fprintf() */

extern "C"
{
#include <libavutil/opt.h>
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>
#include <libavutil/common.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libavutil/samplefmt.h>
#include <libavformat/avformat.h>
}
static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
{
    /* rescale output packet timestamp values from codec to stream timebase */
    av_packet_rescale_ts(pkt, *time_base, st->time_base);
    /* Write the compressed frame to the media file. */
    return av_interleaved_write_frame(fmt_ctx, pkt);
}
/*
 * Audio encoding example
 */
static void audio_encode_example(const char *filename)
{
    AVPacket pkt;
    int i, j, k, ret, got_output;
    int buffer_size;
    uint16_t *samples;
    float t, tincr;
    AVCodec *outCodec = NULL;
    AVCodecContext *outCodecCtx = NULL;
    AVFormatContext *outFormatCtx = NULL;
    AVStream *outAudioStream = NULL;
    AVFrame *outFrame = NULL;

    ret = avformat_alloc_output_context2(&outFormatCtx, NULL, "rtp", filename);
    if (!outFormatCtx || ret < 0)
    {
        fprintf(stderr, "Could not allocate output context");
    }

    outFormatCtx->flags |= AVFMT_FLAG_NOBUFFER | AVFMT_FLAG_FLUSH_PACKETS;
    outFormatCtx->oformat->audio_codec = AV_CODEC_ID_PCM_S16BE;

    /* find the encoder */
    outCodec = avcodec_find_encoder(outFormatCtx->oformat->audio_codec);
    if (!outCodec) {
        fprintf(stderr, "Codec not found\n");
        exit(1);
    }

    outAudioStream = avformat_new_stream(outFormatCtx, outCodec);
    if (!outAudioStream)
    {
        fprintf(stderr, "Cannot add new audio stream\n");
        exit(1);
    }
    outAudioStream->id = outFormatCtx->nb_streams - 1;

    outCodecCtx = outAudioStream->codec;
    outCodecCtx->sample_fmt = AV_SAMPLE_FMT_S16;
    /* select other audio parameters supported by the encoder */
    outCodecCtx->sample_rate = 8000;
    outCodecCtx->channel_layout = AV_CH_LAYOUT_STEREO;
    outCodecCtx->channels = 2;

    /* open it */
    if (avcodec_open2(outCodecCtx, outCodec, NULL) < 0) {
        fprintf(stderr, "Could not open codec\n");
        exit(1);
    }

    // PCM has no intrinsic frame size, so we have to specify it explicitly
    outCodecCtx->frame_size = 1152;

    av_dump_format(outFormatCtx, 0, filename, 1);

    char buff[10000] = { 0 };
    ret = av_sdp_create(&outFormatCtx, 1, buff, sizeof(buff));
    printf("%s", buff);

    ret = avio_open2(&outFormatCtx->pb, filename, AVIO_FLAG_WRITE, NULL, NULL);
    ret = avformat_write_header(outFormatCtx, NULL);
    printf("ret = %d\n", ret);
    if (ret < 0) {
        exit(1);
    }
    /* frame containing input audio */
    outFrame = av_frame_alloc();
    if (!outFrame) {
        fprintf(stderr, "Could not allocate audio frame\n");
        exit(1);
    }
    outFrame->nb_samples = outCodecCtx->frame_size;
    outFrame->format = outCodecCtx->sample_fmt;
    outFrame->channel_layout = outCodecCtx->channel_layout;

    /* we calculate the size of the samples buffer in bytes */
    buffer_size = av_samples_get_buffer_size(NULL, outCodecCtx->channels, outCodecCtx->frame_size,
                                             outCodecCtx->sample_fmt, 0);
    if (buffer_size < 0) {
        fprintf(stderr, "Could not get sample buffer size\n");
        exit(1);
    }
    samples = (uint16_t*)av_malloc(buffer_size);
    if (!samples) {
        fprintf(stderr, "Could not allocate %d bytes for samples buffer\n",
                buffer_size);
        exit(1);
    }
    /* setup the data pointers in the AVFrame */
    ret = avcodec_fill_audio_frame(outFrame, outCodecCtx->channels, outCodecCtx->sample_fmt,
                                   (const uint8_t*)samples, buffer_size, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not setup audio frame\n");
        exit(1);
    }
    /* encode a single tone sound */
    t = 0;
    int next_pts = 0;
    tincr = 2 * M_PI * 440.0 / outCodecCtx->sample_rate;
    for (i = 0; i < 400000; i++) {
        av_init_packet(&pkt);
        pkt.data = NULL; // packet data will be allocated by the encoder
        pkt.size = 0;
        for (j = 0; j < outCodecCtx->frame_size; j++) {
            samples[2 * j] = (uint16_t)(sin(t) * 10000);
            for (k = 1; k < outCodecCtx->channels; k++)
                samples[2 * j + k] = samples[2 * j];
            t += tincr;
        }
        t = (t > 50000) ? 0 : t;

        // Sets time stamp
        next_pts += outFrame->nb_samples;
        outFrame->pts = next_pts;

        /* encode the samples */
        ret = avcodec_encode_audio2(outCodecCtx, &pkt, outFrame, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding audio frame\n");
            exit(1);
        }
        if (got_output) {
            write_frame(outFormatCtx, &outCodecCtx->time_base, outAudioStream, &pkt);
            av_packet_unref(&pkt);
        }
        printf("i:%d\n", i);                               // waste some time to avoid over-filling jitter buffer
        printf("Audio: %d\t%d\n", samples[0], samples[1]); // waste some time to avoid over-filling jitter buffer
        printf("t: %f\n", t);                              // waste some time to avoid over-filling jitter buffer
    }

    /* get the delayed frames */
    for (got_output = 1; got_output; i++) {
        ret = avcodec_encode_audio2(outCodecCtx, &pkt, NULL, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding frame\n");
            exit(1);
        }
        if (got_output) {
            pkt.pts = AV_NOPTS_VALUE;
            write_frame(outFormatCtx, &outCodecCtx->time_base, outAudioStream, &pkt);
            av_packet_unref(&pkt);
        }
    }

    av_freep(&samples);
    av_frame_free(&outFrame);
    avcodec_close(outCodecCtx);
    av_free(outCodecCtx);
}
int main(int argc, char **argv)
{
    const char *output;
    av_register_all();
    avformat_network_init(); // for network streaming
    audio_encode_example("rtp://127.0.0.1:8554");
    return 0;
}

Update
Curiously, running it on Ubuntu Linux gives me the following instead:
Output #0, rtp, to 'rtp://127.0.0.1:8554':
Stream #0:0: Unknown: none (pcm_s16be)
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 127.0.0.1
t=0 0
a=tool:libavformat 57.48.100
m=application 8554 RTP/AVP 3
Does anyone know why the stream has been changed from audio to application?
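For comparison, a roughly equivalent stream can be produced from the command line, which makes it easier to see what payload number different builds write into the generated SDP without recompiling the C program. This is only a sketch and assumes the build includes the lavfi sine source and the RTP muxer's payload_type option; ffmpeg prints the corresponding SDP on the console when the rtp muxer is used.

# generate a 440 Hz test tone, encode it as L16 (pcm_s16be) and send it over RTP;
# -payload_type pins the dynamic payload number written into the SDP
ffmpeg -re -f lavfi -i "sine=frequency=440:sample_rate=8000" \
  -ac 2 -c:a pcm_s16be -payload_type 97 -f rtp rtp://127.0.0.1:8554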
-
FFMPEG: Send Email with output after ffmpeg completes
13 December 2017, by MegaXLR
I have a VPS running Debian 9 GNU/Linux that transcodes mp4 files; because it's a cheap single-core server, a job can take several hours. I want to send myself an email with the output from ffmpeg when it completes.
I have tried
(ffmpeg -i input.mp4 -acodec copy -vcodec copy -y output.mp4 >> ffmpeg.log; cat ffmpeg.log) | mail -s "FFMPEG COMPLETE" email@me.net
But that sent the email immediately, with an empty body.
(my SMTP client is Unix Sendmail)
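One likely reason for the empty body is that ffmpeg writes its console output to stderr rather than stdout, so the redirection above leaves ffmpeg.log empty. A sketch of an alternative, keeping the same file names and address: capture stderr as well, and only call mail once ffmpeg has exited.

# capture stdout and stderr, then mail the log once ffmpeg is done
ffmpeg -i input.mp4 -acodec copy -vcodec copy -y output.mp4 > ffmpeg.log 2>&1; \
mail -s "FFMPEG COMPLETE" email@me.net < ffmpeg.log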