
Media (1)
-
Bee video in portrait orientation
14 May 2011
Updated: February 2012
Language: French
Type: Video
Other articles (35)
-
Libraries and binaries specific to video and audio processing
31 January 2010
The following software packages and libraries are used by SPIPmotion in one way or another.
Required binaries:
FFMpeg: the main encoder; it transcodes almost all video and audio file types into formats playable on the Web (a typical invocation is sketched after this list). See this tutorial for its installation.
Oggz-tools: tools for inspecting Ogg files.
Mediainfo: retrieves information from most video and audio formats.
Optional, complementary binaries:
flvtool2: (...)
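As an illustration only (the file names, codecs, and quality settings here are hypothetical, not taken from SPIPmotion's actual configuration), a typical FFMpeg transcode to a web-playable Ogg file looks roughly like:
ffmpeg -i source.avi -c:v libtheora -q:v 6 -c:a libvorbis -q:a 4 output.ogv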
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
-
From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions beyond the normal behaviour are carried out: retrieval of the technical information of the file's audio and video streams; and generation of a thumbnail by extraction of a (...)
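For illustration (the commands and file names are hypothetical, not MediaSPIP's actual internals), those two extra actions map naturally onto ffmpeg's own tools:
# Inspect the audio/video streams of the uploaded source
ffprobe -show_streams source.mp4
# Extract a single frame a few seconds in as a thumbnail
ffmpeg -i source.mp4 -ss 3 -frames:v 1 thumbnail.jpg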
On other sites (4333)
-
Video Conferencing in HTML5: WebRTC via Socket.io
5 February 2013, by silvia
Six months ago I experimented with Web sockets for WebRTC and the early implementations of PeerConnection in Chrome. Last week I gave a presentation about WebRTC at Linux.conf.au, so it was time to update that codebase.
I decided to use socket.io for the signalling, following Luc's idea, which made the server code even smaller and reduced it to a mere reflector:
var app = require('http').createServer().listen(1337);
var io = require('socket.io').listen(app);

io.sockets.on('connection', function (socket) {
  socket.on('message', function (message) {
    socket.broadcast.emit('message', message);
  });
});
Then I turned to the client code. I was surprised to see the massive changes that PeerConnection has gone through. Check out my slide deck to see the different components that are now necessary to create a PeerConnection.
I was particularly surprised to see the SDP object now fully exposed to JavaScript, with the ability to manipulate it directly rather than through some API. This allows Web developers to change the type of session that they are asking the browsers to set up. I can imagine, for example, that if they have support for a video codec in JavaScript that the browser does not provide built-in, they can add that codec to the set of choices offered to the peer. While this is flexible, I am concerned it might create more problems than it solves. I guess we'll have to wait and see.
I was also surprised by the need to use ICE, even though in my experiment I got away with an empty list of ICE servers: the ICE messages just got exchanged through the socket.io server. I am not sure whether this is a bug, but I was very happy about it, because it meant I could run the whole demo on a network completely separate from the Internet.
The most exciting news since my talk is that Mozilla and Google have managed to get a PeerConnection working between Firefox and Chrome – this is the first cross-browser video conference call without a plugin! The code differences are minor.
Since the specifications of the WebRTC API and of the MediaStream API are now official Working Drafts at the W3C, I expect other browsers will follow. I am also looking forward to the possibilities of:
- multi-peer video conferencing like the efforts around webrtc.io,
- the media stream recording API,
- and the peer-to-peer data API.
The best places to learn about the latest possibilities of WebRTC are webrtc.org and the W3C WebRTC WG. code.google.com hosts open-source code that continues to be updated to the latest released and interoperable features in browsers.
The video of my talk is in the process of being published. There is an MP4 version on the Linux Australia mirror server, but I expect it will be published properly soon. I will update this blog post when that happens.
-
How can I preserve FLV custom metadata when using ffmpeg
4 March 2013, by Orlando Rivera
I have an FMS FLV (video/audio stream) which I created, and I inserted custom metadata into the FLV. I am able to test this FLV file with a simple Flash player that reads the metadata coming from the FLV (which I inserted when I created the FLV on FMS).
When I try to use ffmpeg to convert or just copy the FLV (even to another FLV), I do not see my dynamic custom metadata (I do see some static metadata the FLV has in its header). It is unclear whether this is doable (using ffmpeg 1.1): as soon as I do anything with ffmpeg (even a copy), the custom metadata is gone. I was expecting a simple FLV-to-FLV copy to work, but so far it does not. Is this possible? One example I've tried (of many) is:
ffmpeg -i meta09.flv -map_metadata 0 -c copy out4.flv
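(For comparison, ffmpeg can also inject key/value metadata explicitly via its -metadata option. The key, value, and output name below are made up for illustration; whether the FLV muxer carries such custom keys through to the onMetaData tag is exactly what is in question here.)
ffmpeg -i meta09.flv -c copy -metadata customKey=customValue out5.flv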
-
FFmpeg encoding live audio to AAC issue
12 July 2015, by Ruurd Adema
I'm trying to encode live raw audio coming from a Blackmagic DeckLink input card to a mov file with AAC encoding.
The issue is that the audio sounds distorted and plays too fast.
I created the software based on a couple of examples/tutorials, including the Dranger tutorial and examples on GitHub (and of course the examples in the FFmpeg codebase).
Honestly, at this moment I don't know exactly what the cause of the problem is. I'm thinking of PTS/DTS values or a timebase mismatch (because of the too-fast playout). I have tried a lot of things, including working with an av_audio_fifo (a sketch of that approach follows the code at the end of this post).
- When outputting to the mov file with the AV_CODEC_ID_PCM_S16LE codec, everything works well.
- When outputting to the mov file with the AV_CODEC_ID_AAC codec, the problems occur.
- When writing raw audio, VLC media info shows:
Type: Audio, Codec: PCM S16 LE (sowt), Language: English, Channels: Stereo, Sample rate: 48000 Hz, Bits per sample: 16.
- When writing with the AAC codec, VLC media info shows:
Type: Audio, Codec: MPEG AAC Audio (mp4a), Language: English, Channels: Stereo, Sample rate: 48000 Hz.
Any idea(s) of what's causing the problems?
Code
// Create output context
output_filename = "/root/movies/encoder_debug.mov";
output_format_name = "mov";
if (avformat_alloc_output_context2(&output_fmt_ctx, NULL, output_format_name, output_filename) < 0)
{
printf("[ERROR] Unable to allocate output format context for output: %s\n", output_filename);
}
// Create audio output stream
static AVStream *encoder_add_audio_stream(AVFormatContext *oc, enum AVCodecID codec_id)
{
AVCodecContext *c;
AVCodec *codec;
AVStream *st;
st = avformat_new_stream(oc, NULL);
if (!st)
{
printf("[ERROR] Could not allocate new audio stream!\n");
exit(-1);
}
c = st->codec;
c->codec_id = codec_id;
c->codec_type = AVMEDIA_TYPE_AUDIO;
c->sample_fmt = AV_SAMPLE_FMT_S16;
c->sample_rate = decklink_config()->audio_samplerate;
c->channels = decklink_config()->audio_channel_count;
c->channel_layout = av_get_default_channel_layout(decklink_config()->audio_channel_count);
c->time_base.den = decklink_config()->audio_samplerate;
c->time_base.num = 1;
if (codec_id == AV_CODEC_ID_AAC)
{
c->bit_rate = 96000;
//c->profile = FF_PROFILE_AAC_MAIN; //FIXME Generates error: "Unable to set the AOT 1: Invalid config"
// Allow the use of the experimental AAC encoder
c->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
}
// Some formats want stream headers to be separate (global?)
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
{
c->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
codec = avcodec_find_encoder(c->codec_id);
if (!codec)
{
printf("[ERROR] Audio codec not found\n");
exit(-1);
}
if (avcodec_open2(c, codec, NULL) < 0)
{
printf("[ERROR] Could not open audio codec\n");
exit(-1);
}
return st;
}
// And then, at every incoming frame, this function gets called:
void encoder_handle_incoming_frame(IDeckLinkVideoInputFrame *videoframe, IDeckLinkAudioInputPacket *audiopacket)
{
void *pixels = NULL;
int pitch = 0;
int got_packet = 0;
void *audiopacket_data = NULL;
long audiopacket_sample_count = 0;
long audiopacket_size = 0;
long audiopacket_channel_count = 2;
if (audiopacket)
{
AVPacket pkt = { 0 };
AVFrame *frame;
BMDTimeValue audio_pts;
int requested_size;
static int last_pts1, last_pts2 = 0;
audiopacket_sample_count = audiopacket->GetSampleFrameCount();
audiopacket_channel_count = decklink_config()->audio_channel_count;
audiopacket_size = audiopacket_sample_count * (decklink_config()->audio_sampletype/8) * audiopacket_channel_count;
audiopacket->GetBytes(&audiopacket_data);
av_init_packet(&pkt);
printf("\n=== Audiopacket: %d ===\n", audio_stream->codec->frame_number);
if (AUDIO_TYPE == AV_CODEC_ID_PCM_S16LE)
{
audiopacket->GetPacketTime(&audio_pts, audio_stream->time_base.den);
pkt.pts = audio_pts;
pkt.dts = pkt.pts;
pkt.flags |= AV_PKT_FLAG_KEY; // TODO: Make sure if this still applies
pkt.stream_index = audio_stream->index;
pkt.data = (uint8_t *)audiopacket_data;
pkt.size = audiopacket_size;
printf("[PACKET] size: %d\n", pkt.size);
printf("[PACKET] pts: %li\n", pkt.pts);
printf("[PACKET] pts delta: %li\n", pkt.pts - last_pts2);
printf("[PACKET] duration: %d\n", pkt.duration);
last_pts2 = pkt.pts;
av_interleaved_write_frame(output_fmt_ctx, &pkt);
}
else if (AUDIO_TYPE == AV_CODEC_ID_AAC)
{
frame = av_frame_alloc();
frame->format = audio_stream->codec->sample_fmt;
frame->channel_layout = audio_stream->codec->channel_layout;
frame->sample_rate = audio_stream->codec->sample_rate;
frame->nb_samples = audiopacket_sample_count;
requested_size = av_samples_get_buffer_size(NULL, audio_stream->codec->channels, audio_stream->codec->frame_size, audio_stream->codec->sample_fmt, 1);
audiopacket->GetPacketTime(&audio_pts, audio_stream->time_base.den);
printf("[DEBUG] Sample format: %d\n", frame->format);
printf("[DEBUG] Channel layout: %li\n", frame->channel_layout);
printf("[DEBUG] Sample rate: %d\n", frame->sample_rate);
printf("[DEBUG] NB Samples: %d\n", frame->nb_samples);
printf("[DEBUG] Datasize: %li\n", audiopacket_size);
printf("[DEBUG] Requested datasize: %d\n", requested_size);
printf("[DEBUG] Too less/much: %li\n", audiopacket_size - requested_size);
printf("[DEBUG] Framesize: %d\n", audio_stream->codec->frame_size);
printf("[DEBUG] Audio pts: %li\n", audio_pts);
printf("[DEBUG] Audio pts delta: %li\n", audio_pts - last_pts1);
last_pts1 = audio_pts;
frame->pts = audio_pts;
if (avcodec_fill_audio_frame(frame, audiopacket_channel_count, audio_stream->codec->sample_fmt, (const uint8_t *)audiopacket_data, audiopacket_size, 0) < 0)
{
printf("[ERROR] Filling audioframe failed!\n");
exit(-1);
}
got_packet = 0;
if (avcodec_encode_audio2(audio_stream->codec, &pkt, frame, &got_packet) != 0)
{
printf("[ERROR] Encoding audio failed\n");
}
if (got_packet)
{
pkt.stream_index = audio_stream->index;
pkt.flags |= AV_PKT_FLAG_KEY;
//printf("[PACKET] size: %d\n", pkt.size);
//printf("[PACKET] pts: %li\n", pkt.pts);
//printf("[PACKET] pts delta: %li\n", pkt.pts - last_pts2);
//printf("[PACKET] duration: %d\n", pkt.duration);
//printf("[PACKET] timebase codec: %d/%d\n", audio_stream->codec->time_base.num, audio_stream->codec->time_base.den);
//printf("[PACKET] timebase stream: %d/%d\n", audio_stream->time_base.num, audio_stream->time_base.den);
last_pts2 = pkt.pts;
av_interleaved_write_frame(output_fmt_ctx, &pkt);
}
av_frame_free(&frame);
}
av_free_packet(&pkt);
}
else
{
printf("[WARNING] No audiopacket received!\n");
}
static int count = 0;
count++;
}
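For reference, here is a minimal sketch (not the asker's actual code, and untested against this DeckLink pipeline) of the av_audio_fifo approach mentioned above. It assumes a fifo allocated once with av_audio_fifo_alloc(c->sample_fmt, c->channels, 1), and reuses audio_stream, output_fmt_ctx, audiopacket_data and audiopacket_sample_count from the code above. The point of the fifo is that the AAC encoder consumes fixed frames of c->frame_size samples (1024 for the built-in AAC encoder), while a DeckLink audio packet can carry any number of samples; feeding the encoder oversized frames and reusing the card's timestamps directly is one plausible cause of the distorted, too-fast playback.
// Buffer whatever the card delivered, then drain it in encoder-sized chunks.
static int64_t next_pts = 0; // running pts in 1/sample_rate units (c->time_base)
AVCodecContext *c = audio_stream->codec;
av_audio_fifo_write(fifo, (void **)&audiopacket_data, audiopacket_sample_count);
while (av_audio_fifo_size(fifo) >= c->frame_size)
{
    AVFrame *f = av_frame_alloc();
    f->nb_samples     = c->frame_size;
    f->format         = c->sample_fmt;
    f->channel_layout = c->channel_layout;
    f->sample_rate    = c->sample_rate;
    av_frame_get_buffer(f, 0);                      // allocates f->data planes
    av_audio_fifo_read(fifo, (void **)f->data, c->frame_size);
    f->pts    = next_pts;                           // count samples; don't reuse card timestamps
    next_pts += f->nb_samples;
    AVPacket pkt = { 0 };
    int got_packet = 0;
    av_init_packet(&pkt);
    if (avcodec_encode_audio2(c, &pkt, f, &got_packet) == 0 && got_packet)
    {
        // Convert from the codec timebase (1/sample_rate) to the stream timebase
        av_packet_rescale_ts(&pkt, c->time_base, audio_stream->time_base);
        pkt.stream_index = audio_stream->index;
        av_interleaved_write_frame(output_fmt_ctx, &pkt);
        av_free_packet(&pkt);
    }
    av_frame_free(&f);
}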