
Advanced search
Media (91)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
-
Stereo master soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011, by
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
Other articles (65)
-
Updating from version 0.1 to 0.2
24 June 2013, by
An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What's new?
Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for retrieving metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer (...)
-
Customising by adding a logo, a banner or a background image
5 September 2013, by
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image;
-
(De)activating features (plugins)
18 February 2011, by
To manage the addition and removal of extra features (plugins), MediaSPIP uses SVP as of version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To access it, simply go to the configuration area and then to the "Plugin management" page.
MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work seamlessly with each (...)
On other sites (9569)
-
avformat_seek_file timestamps not using the correct time base
19 June 2021, by Charlie
I am in the process of creating a memory loader for ffmpeg to add more functionality. I have audio playing and working, but I am having an issue with avformat_seek_file timestamps using the wrong time base.

avformat.avformat_seek_file(file.context, -1, 0, timestamp, timestamp, 0)



From looking at the docs, if the stream index is -1 the time should be based on AV_TIME_BASE. When I load the file through avformat_open_input with a null AVFormatContext and a filename, this works as expected.

However, when I create my own AVIOContext and AVFormatContext through avio_alloc_context and avformat_alloc_context respectively, the timestamps are no longer based on AV_TIME_BASE. When testing, I received an access violation the first time I tried seeking, and upon investigating, it seems that the timestamps are now based on actual seconds. How can I make these custom contexts use AV_TIME_BASE?
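For reference, AV_TIME_BASE in libavutil is 1,000,000, i.e. microsecond ticks, and a seek with stream_index -1 expects timestamps in those units. A minimal sketch of the conversion (plain Python, no ffmpeg bindings required; the helper name is mine):

```python
# AV_TIME_BASE is libavformat's default time base: timestamps passed to
# avformat_seek_file with stream_index == -1 are in these units.
AV_TIME_BASE = 1_000_000  # microseconds per second

def seconds_to_av_time_base(seconds: float) -> int:
    """Convert a position in seconds to an AV_TIME_BASE timestamp."""
    return int(seconds * AV_TIME_BASE)

# Seeking to 2.5 s into the file:
timestamp = seconds_to_av_time_base(2.5)
print(timestamp)  # 2500000
```

If a context reports timestamps in a per-stream time base instead, av_rescale_q is the usual way to convert between the two.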

The only difference between the two is the custom loading of AVIOContext and AVFormatContext:

data = fileobject.read()
ld = len(data)

buf = libavutil.avutil.av_malloc(ld)
ptr_buf = cast(buf, c_char_p)

ptr = ctypes.create_string_buffer(ld)
memmove(ptr, data, ld)

seeker = libavformat.ffmpeg_seek_func(seek_data)
reader = libavformat.ffmpeg_read_func(read_data)
writer = libavformat.ffmpeg_write_func(write_data)

format = libavformat.avformat.avio_alloc_context(ptr_buf, buf_size, 0,
                                                 ptr_data,
                                                 reader,
                                                 writer,
                                                 seeker)

file.context = libavformat.avformat.avformat_alloc_context()
file.context.contents.pb = format
file.context.contents.flags |= AVFMT_FLAG_CUSTOM_IO

result = avformat.avformat_open_input(byref(file.context),
                                      b"",
                                      None,
                                      None)

if result != 0:
    raise FFmpegException('avformat_open_input in ffmpeg_open_filename returned an error opening file '
                          + filename.decode("utf8")
                          + ' Error code: ' + str(result))

result = avformat.avformat_find_stream_info(file.context, None)
if result < 0:
    raise FFmpegException('Could not find stream info')

return file




Here is the filename-based code that does work:


result = avformat.avformat_open_input(byref(file.context),
                                      filename,
                                      None,
                                      None)
if result != 0:
    raise FFmpegException('avformat_open_input in ffmpeg_open_filename returned an error opening file '
                          + filename.decode("utf8")
                          + ' Error code: ' + str(result))

result = avformat.avformat_find_stream_info(file.context, None)
if result < 0:
    raise FFmpegException('Could not find stream info')

return file



I am new to ffmpeg, and any help fixing this discrepancy is greatly appreciated.


-
Writing opencv frames to avi container using libavformat Custom IO
16 July 2017, by Aryan
I have to write OpenCV cv::Mat frames to an AVI container. I cannot use OpenCV's VideoWriter because I do not intend to write the AVI file to disk directly; instead, I want to send it to a custom stream, so I have to use ffmpeg/libav. As I have never used ffmpeg before, I have taken help from solutions provided here and here, along with the ffmpeg documentation.
I am able to send AVI container packets to my custom output stream as required, but the performance is very bad. Specifically, the call to avcodec_encode_video2 is taking too long.
First, I suspect that due to my inexperience I have misconfigured or wrongly coded something. I am currently working with 640×480 grayscale frames. On my i.MX6 platform the call to avcodec_encode_video2 takes about 130 ms on average per frame, which is unacceptably slow. Any pointers to an obvious performance killer? (I know sws_scale looks useless here, but it takes negligible time and might be useful to me later.)
Second, I am using the PNG encoder, but that is not required; I would be happy to write uncompressed data if I knew how. If the slowdown is not due to my bad programming, can we just get rid of the encoder and generate uncompressed packets for the AVI container? Or use some encoder that accepts grayscale images and is not that slow?
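As a sanity check on the uncompressed idea, the raw bandwidth of this stream is modest. A quick back-of-envelope calculation (plain Python, figures taken from the question):

```python
# Raw bandwidth of the stream described in the question:
# 640x480 grayscale (AV_PIX_FMT_GRAY8, 1 byte per pixel) at 15 fps.
width, height = 640, 480
bytes_per_pixel = 1
fps = 15

bytes_per_frame = width * height * bytes_per_pixel
bytes_per_second = bytes_per_frame * fps
print(bytes_per_frame)   # 307200
print(bytes_per_second)  # 4608000, i.e. about 4.4 MiB/s
```

At roughly 4.4 MiB/s, skipping the PNG encoder in favour of a raw video codec is plausible if the output stream can sustain that rate.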
For initialization and writing of the header I am using:
void MyWriter::WriteHeader()
{
av_register_all();
// allocate output format context
if (avformat_alloc_output_context2(&avFmtCtx, NULL, "avi", NULL) < 0) { printf("DATAREC: avformat_alloc_output_context2 failed"); exit(1); }
// buffer for avio context
bufSize = 640 * 480 * 4; // Don't know how to derive, but this should be big enough for now
buffer = (unsigned char*)av_malloc(bufSize);
if (!buffer) { printf("DATAREC: Buffer alloc failed"); exit(1); }
// allocate avio context
avIoCtx = avio_alloc_context(buffer, bufSize, 1, this, NULL, WriteCallbackWrapper, NULL);
if (!avIoCtx) { printf("DATAREC: avio_alloc_context failed"); exit(1); }
// connect avio context to format context
avFmtCtx->pb = avIoCtx;
// set custom IO flag
avFmtCtx->flags |= AVFMT_FLAG_CUSTOM_IO;
// get encoder
encoder = avcodec_find_encoder(AV_CODEC_ID_PNG);
if (!encoder) { printf("DATAREC: avcodec_find_encoder failed"); exit(1); }
// create new stream
avStream = avformat_new_stream(avFmtCtx, encoder);
if (!avStream) { printf("DATAREC: avformat_new_stream failed"); exit(1); }
// set stream codec defaults
if (avcodec_get_context_defaults3(avStream->codec, encoder) < 0) { printf("DATAREC: avcodec_get_context_defaults3 failed"); exit(1); }
// hardcode settings for now
avStream->codec->pix_fmt = AV_PIX_FMT_GRAY8;
avStream->codec->width = 640;
avStream->codec->height = 480;
avStream->codec->time_base.den = 15;
avStream->codec->time_base.num = 1;
avStream->time_base.den = avStream->codec->time_base.den;
avStream->time_base.num = avStream->codec->time_base.num;
avStream->r_frame_rate.num = avStream->codec->time_base.den;
avStream->r_frame_rate.den = avStream->codec->time_base.num;
// open encoder
if (avcodec_open2(avStream->codec, encoder, NULL) < 0) {
printf("DATAREC: Cannot open codec\n");
exit(1);
}
// write header
if(avformat_write_header(avFmtCtx, NULL) < 0) { printf("DATAREC: avformat_write_header failed\n"); exit(1);}
// prepare for first frame
framePts = 0;
firstFrame = true;
}
After writing the header, the following function is called in a loop for each cv::Mat frame:
void MyWriter::WriteFrame(cv::Mat& item)
{
if (firstFrame) // do only once, before writing the first frame
{
// allocate frame
frame = av_frame_alloc();
if (!frame) { printf("DATAREC: av_frame_alloc failed"); exit(1); }
// get size for framebuffer
int picsz = av_image_get_buffer_size(avStream->codec->pix_fmt, avStream->codec->width, avStream->codec->height, 1);
// allocate frame buffer
framebuf = (unsigned char*)av_malloc(picsz);
if (!framebuf) { printf("DATAREC: fail to alloc framebuf"); exit(1); }
// set frame width, height, format
frame->width = avStream->codec->width;
frame->height = avStream->codec->height;
frame->format = static_cast<int>(avStream->codec->pix_fmt);
// set up data pointers and linesizes
if (av_image_fill_arrays(frame->data, frame->linesize, framebuf, avStream->codec->pix_fmt, avStream->codec->width, avStream->codec->height, 1) < 0) { printf("DATAREC: av_image_fill_arrays failed\n"); exit(1);}
// get sws context
swsctx = sws_getCachedContext(nullptr, avStream->codec->width, avStream->codec->height, avStream->codec->pix_fmt, avStream->codec->width, avStream->codec->height, avStream->codec->pix_fmt, SWS_BICUBIC, nullptr, nullptr, nullptr);
if (!swsctx) { printf("DATAREC: fail to sws_getCachedContext"); exit(1); }
// done initializing
firstFrame = false; // don't repeat this for the following frames
}
// call sws scale
const int stride[] = { static_cast<int>(item.step[0]) };
sws_scale(swsctx, &item.data, stride, 0, item.rows, frame->data, frame->linesize);
// set presentation timestamp
frame->pts = framePts++;
// initialize packet
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
// THIS TAKES VERY LONG TO EXECUTE
// call encoder, convert frame to packet
if (avcodec_encode_video2(avStream->codec, &pkt, frame, &got_pkt) < 0) { printf("DATAREC: fail to avcodec_encode_video2"); exit(1); }
// write packet if available
if (got_pkt)
{
pkt.duration = 1;
av_write_frame(avFmtCtx, &pkt);
}
// wipe packet
av_packet_unref(&pkt);
}
After writing the required frames, the trailer is written:
void MyWriter::WriteTrailer()
{
// prepare packet for trailer
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
// encode trailer packet
if (avcodec_encode_video2(avStream->codec, &pkt, nullptr, &got_pkt) < 0) { printf("DATAREC: fail to avcodec_encode_video2"); exit(1); }
// write trailer packet
if (got_pkt)
{
pkt.duration = 1;
av_write_frame(avFmtCtx, &pkt);
}
// free everything
av_packet_unref(&pkt);
av_write_trailer(avFmtCtx);
av_frame_free(&frame);
avcodec_close(avStream->codec);
av_free(avIoCtx);
sws_freeContext(swsctx);
avformat_free_context(avFmtCtx);
av_free(framebuf);
av_free(buffer);
firstFrame = true; // for the next file
}
Many, many thanks to everyone who made it down to this line!
-
FFmpeg: what is the correct way to manually write silence through pipe:0?
19 July 2023, by Bohdan Petrenko
I have an ffmpeg process running with these parameters:


ffmpeg -y -f s16le -ac {Channels} -ar 48000 -re -use_wallclock_as_timestamps true -i pipe:0 -f segment -segment_time {_segmentSize} -segment_list \"{_segmentListPath}\" -segment_format mp3 -segment_wrap 2 -reset_timestamps 0 -af aresample=async=1 \"{_filePath}\"



I also have a DateTimeOffset which represents the time when the recording was started. When an FFmpeg process is created, I need to add an amount of silence equal to the delay between the current time and the time the recording was started. This delay may be bigger than ffmpeg's segments, so I calculate it relative to the time when the last ffmpeg segment should begin.
I store the silence in a static byte array with a length of two ffmpeg segments:

_silenceBuffer ??= new byte[_segmentSize * 2 * Channels * SampleRate * 2];



I tried two ways of writing silence.


The first code I tried was this:


var delay = DateTimeOffset.UtcNow - RecordingStartDateTime;

var time = CalculateRelativeMilliseconds(delay.TotalMilliseconds); // this returns time based on current segment. It works fine.

var amount = (int)(time * 2 * Channels * SampleRate / 1000);

WriterStream.Write(_silenceBuffer, 0, amount);



As a result, I get very loud noise throughout the output from ffmpeg. It breaks the audio, so this approach doesn't work for me.


The second code I tried was this:


var delay = DateTimeOffset.UtcNow - RecordingStartDateTime;

var time = CalculateRelativeMilliseconds(delay.TotalMilliseconds); // this returns time based on current segment. It works fine.

var amount = (int)time * 2 * Channels * SampleRate / 1000;

WriterStream.Write(_silenceBuffer, 0, amount);



The difference between the first and the second code is that now I cast only time to int, not the result of the whole expression. But it also doesn't work. This time there is no silence at the beginning; the recording begins with the voice data I piped after writing the silence. But if I use this ffmpeg command:

ffmpeg -y -f s16le -ac {Channels} -ar 48000 -i pipe:0 -f segment -segment_time {_segmentSize} -segment_list \"{_segmentListPath}\" -segment_format mp3 -segment_wrap 2 -reset_timestamps 0 \"{_filePath}\"



Then it works as expected: the recording begins with the silence I need, followed by the voice data I piped.
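One difference worth noting between the two snippets is where truncation happens, which also changes whether the written byte count lands on a whole s16le sample frame. An illustrative comparison (plain Python; the 125.7 ms delay and the stereo/48 kHz figures are made-up values standing in for the question's variables):

```python
# Compare the two casts from the question, assuming 2 channels, 48 kHz,
# 2 bytes per sample (s16le) and a fractional delay of 125.7 ms.
channels, sample_rate = 2, 48_000
time_ms = 125.7

# First snippet: multiply first, then truncate the whole expression.
amount_a = int(time_ms * 2 * channels * sample_rate / 1000)
# Second snippet: truncate the milliseconds first (the C# cast applies to
# time alone, so the rest of the arithmetic stays integral).
amount_b = int(time_ms) * 2 * channels * sample_rate // 1000

frame_size = 2 * channels  # bytes per interleaved s16le sample frame
print(amount_a, amount_a % frame_size)  # 24134 2 -> splits a frame
print(amount_b, amount_b % frame_size)  # 24000 0 -> whole frames
```

A byte count that is not a multiple of channels * 2 cuts a sample in half, shifting every byte after it out of phase, which would explain the loud noise produced by the first version.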


So, how can I manually calculate and write silence to my ffmpeg instance? Is there some universal way of calculating and writing silence that will work with any ffmpeg command? I don't want to use filters and extra ffmpeg instances for offsetting the piped voice data, because I do this only once per session. I think I should be able to write the silence with byte arrays. I look forward to any suggestions.
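One way to keep any such write safe, regardless of the command line, is to round the silence length down to whole sample frames before writing. A sketch of such a helper (plain Python; the function name is mine, the s16le/48 kHz figures match the command line above):

```python
# Compute a silence byte count for s16le audio that is aligned to whole
# interleaved sample frames, so the bytes written after it stay in phase.
SAMPLE_RATE = 48_000
BYTES_PER_SAMPLE = 2  # s16le

def silence_bytes(milliseconds: float, channels: int) -> int:
    frame_size = channels * BYTES_PER_SAMPLE
    n_frames = int(SAMPLE_RATE * milliseconds / 1000)  # whole frames only
    return n_frames * frame_size

# 125.7 ms of stereo silence: 6033 frames * 4 bytes = 24132 bytes,
# always divisible by the 4-byte frame size.
print(silence_bytes(125.7, 2))  # 24132
```

Writing _silenceBuffer in chunks of this size keeps the piped stream frame-aligned however the delay is measured.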