
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (72)
-
Customize by adding your logo, banner or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image;
-
No talk of markets, clouds, etc.
10 April 2011
The vocabulary used on this site tries to avoid any reference to the buzzwords that flourish freely on web 2.0 and in the companies that live off them.
You are therefore invited to avoid using the terms "Brand", "Cloud", "Market", etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages the sharing of creations on the Internet and allows authors to keep as much autonomy as possible.
No "Gold or Premium contract" is therefore planned, no (...)
-
Enabling visitor registration
12 April 2011, by
It is also possible to enable visitor registration, which lets anyone open an account themselves on the channel in question, for example in the context of open projects.
To do so, simply go to the site configuration area and choose the "Gestion des utilisateurs" (user management) submenu. The first form shown corresponds to this feature.
By default, during its initialization MediaSPIP created a menu item in the top menu of the page leading to (...)
On other sites (13231)
-
FFMPEG concat demuxer - how to make file formats compatible?
10 April 2015, by user206481
I need to automate MP4 concatenation server-side and I'm using FFmpeg. I will get uploads of mp4 files and I want to attach a Title.mp4 and an End.mp4 to each one. I am also overlaying a soundtrack (the input videos do not have sound). Since there is potentially high server load, I'd like to do this as efficiently as possible, using ffmpeg's concat demuxer to avoid re-encoding the video.
After receiving samples of each file, I have not been successful, and I believe it is due to mismatched file formats. My result has good Title.mp4 and audio; then, when the uploaded sample mp4 is supposed to play, there are garbled green/pink/red pixels on the top half of the video; then the End.mp4 plays fine. Here is my ffmpeg command and output:
$ ffmpeg -f concat -i <(printf "file '%s'\n" Title.mp4 Sample.mp4 End.mp4) -i SoundTrack.wav -c:v copy -strict -2 -y Out.mp4
ffmpeg version 2.6.1 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.1.2 (GCC) 20080704 (Red Hat 4.1.2-55)
configuration: --prefix=/home/dpmsmobi/ffmpeg_build --extra-cflags=-I/home/dpmsmobi/ffmpeg_build/include --extra-ldflags=-L/home/dpmsmobi/ffmpeg_build/lib --bindir=/home/dpmsmobi/bin --enable-gpl --enable-nonfree --enable-libmp3lame --enable-libvorbis --enable-libvpx --enable-libx264
libavutil 54. 20.100 / 54. 20.100
libavcodec 56. 26.100 / 56. 26.100
libavformat 56. 25.101 / 56. 25.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 11.102 / 5. 11.102
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, concat, from '/dev/fd/63':
Duration: N/A, start: 0.000000, bitrate: 1810 kb/s
Stream #0:0: Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv), 768x512 [SAR 1:1 DAR 3:2], 1810 kb/s, 30 fps, 30 tbr, 30k tbn, 60 tbc
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, wav, from 'SoundTrack.wav':
Metadata:
encoded_by : Adobe Premiere Pro CC 2014 (Maci
encoder : Adobe Premiere Pro CC 2014 (Macintosh)
date : 2015-04-07
creation_time : 11:12:10
time_reference : 0
Duration: 00:00:15.06, bitrate: 1551 kb/s
Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 48000 Hz, 2 channels, s16, 1536 kb/s
Output #0, mp4, to 'Out.mp4':
Metadata:
encoder : Lavf56.25.101
Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 768x512 [SAR 1:1 DAR 3:2], q=2-31, 1810 kb/s, 30 fps, 30 tbr, 30k tbn, 30k tbc
Stream #0:1: Audio: aac ([64][0][0][0] / 0x0040), 48000 Hz, stereo, fltp, 128 kb/s
Metadata:
encoder : Lavc56.26.100 aac
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #1:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
[concat @ 0x1dedc20] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[concat @ 0x1dedc20] DTS 69750 < 91000 out of order
[mp4 @ 0x1f75060] Non-monotonous DTS in output stream 0:0; previous: 91000, current: 69750; changing to 91001. This may result in incorrect timestamps in the output file.
<----- many more Non-monotonous DTS messages omitted here ---->
frame= 427 fps=0.0 q=-1.0 Lsize= 4123kB time=00:00:15.06 bitrate=2242.5kbits/s
video:3873kB audio:236kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.344173%
I can successfully concatenate the Title.mp4 to the End.mp4, and I can successfully concatenate two Sample.mp4 files, so I know I've got the ffmpeg command right. I can also successfully concat the files using the following ffmpeg command with filter_complex instead of the concat demuxer (this takes considerably longer due to re-encoding):
ffmpeg -i Title.mp4 -i Sample.mp4 -i End.mp4 -i SoundTrack.wav -filter_complex '[0:0] [1:0] [2:0] concat=n=3:v=1 [v]' -map '[v]' -map 3:0 -crf 20 -strict -2 -y Out2.mp4
Here is the MediaInfo output for each type of mp4 file:
$ mediainfo Title.mp4
General
Complete name : Title.mp4
Format : MPEG-4
Format profile : Base Media / Version 2
Codec ID : mp42
File size : 693 KiB
Duration : 3s 100ms
Overall bit rate mode : Variable
Overall bit rate : 1 831 Kbps
Encoded date : UTC 2015-04-07 19:15:03
Tagged date : UTC 2015-04-07 19:15:03
©TIM : 00:00:00:00
©TSC : 30
©TSZ : 1
Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : Main@L3.1
Format settings, CABAC : Yes
Format settings, ReFrames : 3 frames
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 3s 100ms
Bit rate mode : Variable
Bit rate : 1 811 Kbps
Maximum bit rate : 3 000 Kbps
Width : 768 pixels
Height : 512 pixels
Display aspect ratio : 3:2
Frame rate mode : Constant
Frame rate : 30.000 fps
Standard : NTSC
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.154
Stream size : 685 KiB (99%)
Language : English
Encoded date : UTC 2015-04-07 19:15:03
Tagged date : UTC 2015-04-07 19:15:03
Color range : Limited
$ mediainfo Sample.mp4
General
Complete name : Sample.mp4
Format : MPEG-4
Format profile : Base Media
Codec ID : isom
File size : 2.93 MiB
Duration : 7s 9ms
Overall bit rate : 3 505 Kbps
Encoded date : UTC 1970-01-01 00:00:00
Tagged date : UTC 1970-01-01 00:00:00
Writing application : Lavf52.64.2
Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : Baseline@L3.1
Format settings, CABAC : No
Format settings, ReFrames : 1 frame
Format settings, GOP : M=1, N=30
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 7s 9ms
Bit rate : 3 500 Kbps
Width : 768 pixels
Height : 512 pixels
Display aspect ratio : 3:2
Frame rate mode : Variable
Frame rate : 30.250 fps
Minimum frame rate : 23.462 fps
Maximum frame rate : 296.053 fps
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.294
Stream size : 2.92 MiB (100%)
Language : English
Encoded date : UTC 1970-01-01 00:00:00
Tagged date : UTC 1970-01-01 00:00:00
I'm pretty sure it's the mp42 vs isom Codec IDs, and potentially the constant vs variable frame rates. I can't change the input MP4s, but I know their format will stay the same. How can I reformat the Title and End MP4s to match the input MP4 files so I can use the ffmpeg concat demuxer?
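One direction worth sketching (an assumption based on the MediaInfo comparison above, not a confirmed fix): since Title.mp4 and End.mp4 are fixed assets, they could be re-encoded once, offline, so that their stream parameters mirror the uploads, i.e. Baseline@L3.1, no CABAC, a single reference frame, no B-frames and a 30-frame GOP at 30 fps; the exact values below are placeholders:
ffmpeg -i Title.mp4 -c:v libx264 -profile:v baseline -level 3.1 -refs 1 -bf 0 -g 30 -keyint_min 30 -r 30 -pix_fmt yuv420p -an Title_matched.mp4
Because the intro and outro only need converting once, the per-upload concat with -c copy stays cheap, and the stream copy is more likely to succeed once all three files share the same codec parameters.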
-
FFmpeg encoding live audio to aac issue
12 July 2015, by Ruurd Adema
I'm trying to encode live raw audio coming from a Blackmagic DeckLink input card to a mov file with AAC encoding.
The issue is that the audio sounds distorted and plays too fast.
I created the software based on a couple of examples/tutorials, including the Dranger tutorial and examples on GitHub (and of course the examples in the FFmpeg codebase).
Honestly, at this moment I don't exactly know what the cause of the problem is. I'm thinking about PTS/DTS values or a timebase mismatch (because of the too-fast playout). I have tried a lot of things, including working with an av_audio_fifo (a sketch along those lines follows the code below).
- When outputting to the mov file with the AV_CODEC_ID_PCM_S16LE codec, everything works well
- When outputting to the mov file with the AV_CODEC_ID_AAC codec, the problems occur
- When writing raw audio, VLC media info shows:
Type: Audio, Codec: PCM S16 LE (sowt), Language: English, Channels: Stereo, Sample rate: 48000 Hz, Bits per sample.
- When writing with the AAC codec, VLC media info shows:
Type: Audio, Codec: MPEG AAC Audio (mp4a), Language: English, Channels: Stereo, Sample rate: 48000 Hz.
Any idea(s) of what's causing the problems?
Code
// Create output context
output_filename = "/root/movies/encoder_debug.mov";
output_format_name = "mov";
if (avformat_alloc_output_context2(&output_fmt_ctx, NULL, output_format_name, output_filename) < 0)
{
printf("[ERROR] Unable to allocate output format context for output: %s\n", output_filename);
}
// Create audio output stream
static AVStream *encoder_add_audio_stream(AVFormatContext *oc, enum AVCodecID codec_id)
{
AVCodecContext *c;
AVCodec *codec;
AVStream *st;
st = avformat_new_stream(oc, NULL);
if (!st)
{
printf("[ERROR] Could not allocate new audio stream!\n");
exit(-1);
}
c = st->codec;
c->codec_id = codec_id;
c->codec_type = AVMEDIA_TYPE_AUDIO;
c->sample_fmt = AV_SAMPLE_FMT_S16;
c->sample_rate = decklink_config()->audio_samplerate;
c->channels = decklink_config()->audio_channel_count;
c->channel_layout = av_get_default_channel_layout(decklink_config()->audio_channel_count);
c->time_base.den = decklink_config()->audio_samplerate;
c->time_base.num = 1;
if (codec_id == AV_CODEC_ID_AAC)
{
c->bit_rate = 96000;
//c->profile = FF_PROFILE_AAC_MAIN; //FIXME Generates error: "Unable to set the AOT 1: Invalid config"
// Allow the use of the experimental AAC encoder
c->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
}
// Some formats want stream headers to be separate (global?)
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
{
c->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
codec = avcodec_find_encoder(c->codec_id);
if (!codec)
{
printf("[ERROR] Audio codec not found\n");
exit(-1);
}
if (avcodec_open2(c, codec, NULL) < 0)
{
printf("[ERROR] Could not open audio codec\n");
exit(-1);
}
return st;
}
// And then, at every incoming frame this function gets called:
void encoder_handle_incoming_frame(IDeckLinkVideoInputFrame *videoframe, IDeckLinkAudioInputPacket *audiopacket)
{
void *pixels = NULL;
int pitch = 0;
int got_packet = 0;
void *audiopacket_data = NULL;
long audiopacket_sample_count = 0;
long audiopacket_size = 0;
long audiopacket_channel_count = 2;
if (audiopacket)
{
AVPacket pkt = {0,0,0,0,0,0,0,0,0,0,0,0,0,0};
AVFrame *frame;
BMDTimeValue audio_pts;
int requested_size;
static int last_pts1, last_pts2 = 0;
audiopacket_sample_count = audiopacket->GetSampleFrameCount();
audiopacket_channel_count = decklink_config()->audio_channel_count;
audiopacket_size = audiopacket_sample_count * (decklink_config()->audio_sampletype/8) * audiopacket_channel_count;
audiopacket->GetBytes(&audiopacket_data);
av_init_packet(&pkt);
printf("\n=== Audiopacket: %d ===\n", audio_stream->codec->frame_number);
if (AUDIO_TYPE == AV_CODEC_ID_PCM_S16LE)
{
audiopacket->GetPacketTime(&audio_pts, audio_stream->time_base.den);
pkt.pts = audio_pts;
pkt.dts = pkt.pts;
pkt.flags |= AV_PKT_FLAG_KEY; // TODO: Make sure if this still applies
pkt.stream_index = audio_stream->index;
pkt.data = (uint8_t *)audiopacket_data;
pkt.size = audiopacket_size;
printf("[PACKET] size: %d\n", pkt.size);
printf("[PACKET] pts: %li\n", pkt.pts);
printf("[PACKET] pts delta: %li\n", pkt.pts - last_pts2);
printf("[PACKET] duration: %d\n", pkt.duration);
last_pts2 = pkt.pts;
av_interleaved_write_frame(output_fmt_ctx, &pkt);
}
else if (AUDIO_TYPE == AV_CODEC_ID_AAC)
{
frame = av_frame_alloc();
frame->format = audio_stream->codec->sample_fmt;
frame->channel_layout = audio_stream->codec->channel_layout;
frame->sample_rate = audio_stream->codec->sample_rate;
frame->nb_samples = audiopacket_sample_count;
requested_size = av_samples_get_buffer_size(NULL, audio_stream->codec->channels, audio_stream->codec->frame_size, audio_stream->codec->sample_fmt, 1);
audiopacket->GetPacketTime(&audio_pts, audio_stream->time_base.den);
printf("[DEBUG] Sample format: %d\n", frame->format);
printf("[DEBUG] Channel layout: %li\n", frame->channel_layout);
printf("[DEBUG] Sample rate: %d\n", frame->sample_rate);
printf("[DEBUG] NB Samples: %d\n", frame->nb_samples);
printf("[DEBUG] Datasize: %li\n", audiopacket_size);
printf("[DEBUG] Requested datasize: %d\n", requested_size);
printf("[DEBUG] Too less/much: %li\n", audiopacket_size - requested_size);
printf("[DEBUG] Framesize: %d\n", audio_stream->codec->frame_size);
printf("[DEBUG] Audio pts: %li\n", audio_pts);
printf("[DEBUG] Audio pts delta: %li\n", audio_pts - last_pts1);
last_pts1 = audio_pts;
frame->pts = audio_pts;
if (avcodec_fill_audio_frame(frame, audiopacket_channel_count, audio_stream->codec->sample_fmt, (const uint8_t *)audiopacket_data, audiopacket_size, 0) < 0)
{
printf("[ERROR] Filling audioframe failed!\n");
exit(-1);
}
got_packet = 0;
if (avcodec_encode_audio2(audio_stream->codec, &pkt, frame, &got_packet) != 0)
{
printf("[ERROR] Encoding audio failed\n");
}
if (got_packet)
{
pkt.stream_index = audio_stream->index;
pkt.flags |= AV_PKT_FLAG_KEY;
//printf("[PACKET] size: %d\n", pkt.size);
//printf("[PACKET] pts: %li\n", pkt.pts);
//printf("[PACKET] pts delta: %li\n", pkt.pts - last_pts2);
//printf("[PACKET] duration: %d\n", pkt.duration);
//printf("[PACKET] timebase codec: %d/%d\n", audio_stream->codec->time_base.num, audio_stream->codec->time_base.den);
//printf("[PACKET] timebase stream: %d/%d\n", audio_stream->time_base.num, audio_stream->time_base.den);
last_pts2 = pkt.pts;
av_interleaved_write_frame(output_fmt_ctx, &pkt);
}
av_frame_free(&frame);
}
av_free_packet(&pkt);
}
else
{
printf("[WARNING] No audiopacket received!\n");
}
static int count = 0;
count++;
}
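One pattern that is often suggested for this symptom, sketched here under assumptions rather than as a confirmed fix: the AAC encoder consumes a fixed number of samples per frame (codec->frame_size, typically 1024), while each DeckLink packet carries an arbitrary number of samples, so encoding the packets directly and reusing the capture PTS can skew the timing. The idea is to push the captured samples into an AVAudioFifo and only encode chunks of exactly frame_size samples, counting the PTS in samples. The helper name write_aac_from_fifo is hypothetical and uses the same old-style API as the code above:
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/audio_fifo.h>
static int64_t samples_written = 0; // running PTS counter, in samples
// Assumes the DeckLink samples were already pushed into the FIFO, e.g. with
// av_audio_fifo_write(fifo, &audiopacket_data, audiopacket_sample_count).
// Note: the native AAC encoder typically accepts only planar float (AV_SAMPLE_FMT_FLTP),
// so the captured S16 samples may need a libswresample conversion before entering the FIFO.
static void write_aac_from_fifo(AVFormatContext *oc, AVStream *st, AVAudioFifo *fifo)
{
    AVCodecContext *c = st->codec; // old-style API, matching the question's code
    while (av_audio_fifo_size(fifo) >= c->frame_size)
    {
        AVFrame *frame = av_frame_alloc();
        frame->nb_samples     = c->frame_size;     // exactly one encoder frame
        frame->format         = c->sample_fmt;
        frame->channel_layout = c->channel_layout;
        frame->sample_rate    = c->sample_rate;
        av_frame_get_buffer(frame, 0);
        // Pull exactly frame_size samples out of the FIFO
        av_audio_fifo_read(fifo, (void **)frame->data, c->frame_size);
        // PTS in the codec time base (1/sample_rate): just count samples
        frame->pts = samples_written;
        samples_written += c->frame_size;
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = NULL; // let the encoder allocate the packet buffer
        pkt.size = 0;
        int got_packet = 0;
        if (avcodec_encode_audio2(c, &pkt, frame, &got_packet) == 0 && got_packet)
        {
            pkt.stream_index = st->index;
            // Convert from the codec time base to the stream time base before muxing
            av_packet_rescale_ts(&pkt, c->time_base, st->time_base);
            av_interleaved_write_frame(oc, &pkt);
        }
        av_free_packet(&pkt);
        av_frame_free(&frame);
    }
}
Whether this is actually the cause here depends on the frame_size the encoder reports and on the sample format it was opened with, but it is a common source of distorted, too-fast AAC output.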
-
ffmpeg dshow command is listing device name different from actual device name
20 October 2015, by Somanshu
I have used the command ffmpeg -f dshow -list_devices true -i dummy
to list the devices.
But if a device name is in a language other than English, then the device name shown by ffmpeg and the actual name differ. How can I solve this mismatch problem?
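A couple of workarounds are often suggested for this (sketched here as assumptions about the setup, not something confirmed in the thread): switch the Windows console to the UTF-8 code page before listing so non-English friendly names print correctly, and, if the ffmpeg build prints an "Alternative name" (the device moniker) for each entry, open the device by that moniker instead of the localized friendly name:
chcp 65001
ffmpeg -f dshow -list_devices true -i dummy
ffmpeg -f dshow -i video="@device_pnp_\\?\usb#vid_0000&pid_0000#placeholder" out.mp4
The moniker in the last line is a placeholder; the exact string to copy is the one -list_devices prints for the device in question.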