
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (59)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer flash player is used.
The HTML5 player was created specifically for MediaSPIP: its appearance can be fully customized to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
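As a generic illustration of that pattern (not MediaSPIP's actual markup): browsers that understand the HTML5 video element play one of the listed sources, while older browsers ignore the tag and fall through to the embedded Flash object.

<video controls width="640" height="360">
  <source src="clip.mp4" type="video/mp4" />
  <source src="clip.ogv" type="video/ogg" />
  <!-- older browsers ignore <video> and render this Flash fallback instead;
       paths and config values here are placeholders -->
  <object type="application/x-shockwave-flash" data="flowplayer.swf"
          width="640" height="360">
    <param name="movie" value="flowplayer.swf" />
    <param name="flashvars" value='config={"clip":"clip.mp4"}' />
  </object>
</video>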
On other sites (9401)
-
FFmpeg C API - syncing video and audio
10 November 2015, by Justin Bradley
I am trimming video and having a hard time getting the audio to sync correctly. The code below is as close as I have gotten it to work. I have tried both re-encoding and not re-encoding the output streams.
The video trims correctly and is written to the output container. The audio stream also trims correctly, but it is written to the front of the output container. For example, if the trim length is 10s, the correct portion of audio plays for 10s and then the correct portion of video plays.
//////// audio stream ////////
const AVStream *input_stream_audio = input_container->streams[audio_stream_index];
const AVCodec *decoder_audio = avcodec_find_decoder(input_stream_audio->codec->codec_id);
if (!decoder_audio) {
    cleanup(decoded_packet, output_container, decoded_frame);
    avformat_close_input(&input_container);
    LOGE("=> Audio decoder not found");
    return -1;
}

if (avcodec_open2(input_stream_audio->codec, decoder_audio, NULL) < 0) {
    cleanup(decoded_packet, output_container, decoded_frame);
    avformat_close_input(&input_container);
    LOGE("=> Error opening audio decoder");
    return -1;
}

AVStream *output_stream_audio = avformat_new_stream(output_container, NULL);
if (avcodec_copy_context(output_stream_audio->codec, input_stream_audio->codec) != 0) {
    LOGE("=> Failed to Copy audio Context ");
    return -1;
}
else {
    LOGI("=> Copied audio context ");
    output_stream_audio->codec->codec_id = input_stream_audio->codec->codec_id;
    output_stream_audio->codec->codec_tag = 0;
    output_stream_audio->pts = input_stream_audio->pts;
    output_stream_audio->time_base.num = input_stream_audio->time_base.num;
    output_stream_audio->time_base.den = input_stream_audio->time_base.den;
}

if (avio_open(&output_container->pb, output_file, AVIO_FLAG_WRITE) < 0) {
    cleanup(decoded_packet, output_container, decoded_frame);
    avformat_close_input(&input_container);
    LOGE("=> Error opening output file");
    return -1;
}

// allocate frame for conversion
decoded_frame = avcodec_alloc_frame();
if (!decoded_frame) {
    cleanup(decoded_packet, output_container, decoded_frame);
    avformat_close_input(&input_container);
    LOGE("=> Error allocating frame");
    return -1;
}

av_dump_format(input_container, 0, input_file, 0);
avformat_write_header(output_container, NULL);

av_init_packet(&decoded_packet);
decoded_packet.data = NULL;
decoded_packet.size = 0;

int current_frame_num = 1;
int current_frame_num_audio = 1;
int got_frame, len;

AVRational default_timebase;
default_timebase.num = 1;
default_timebase.den = AV_TIME_BASE;

int64_t starttime_int64 = av_rescale_q((int64_t)(12.0 * AV_TIME_BASE), AV_TIME_BASE_Q, input_stream->time_base);
int64_t endtime_int64 = av_rescale_q((int64_t)(18.0 * AV_TIME_BASE), AV_TIME_BASE_Q, input_stream->time_base);
LOGI("=> starttime_int64: %" PRId64, starttime_int64);
LOGI("=> endtime_int64: %" PRId64, endtime_int64);

int64_t starttime_int64_audio = av_rescale_q((int64_t)(12.0 * AV_TIME_BASE), AV_TIME_BASE_Q, input_stream_audio->time_base);
int64_t endtime_int64_audio = av_rescale_q((int64_t)(18.0 * AV_TIME_BASE), AV_TIME_BASE_Q, input_stream_audio->time_base);
LOGI("=> starttime_int64_audio: %" PRId64, starttime_int64_audio);
LOGI("=> endtime_int64_audio: %" PRId64, endtime_int64_audio);

// loop over the input container and decode frames
while (av_read_frame(input_container, &decoded_packet) >= 0) {
    // video packets
    if (decoded_packet.stream_index == video_stream_index) {
        len = avcodec_decode_video2(input_stream->codec, decoded_frame, &got_frame, &decoded_packet);
        if (len < 0) {
            cleanup(decoded_packet, output_container, decoded_frame);
            avformat_close_input(&input_container);
            LOGE("=> No frames to decode");
            return -1;
        }
        // this is the trim range we're looking for
        if (got_frame && decoded_frame->pkt_pts >= starttime_int64 && decoded_frame->pkt_pts <= endtime_int64) {
            av_init_packet(&encoded_packet);
            encoded_packet.data = NULL;
            encoded_packet.size = 0;

            ret = avcodec_encode_video2(output_stream->codec, &encoded_packet, decoded_frame, &got_frame);
            if (ret < 0) {
                cleanup(decoded_packet, output_container, decoded_frame);
                avformat_close_input(&input_container);
                LOGE("=> Error encoding frames");
                return ret;
            }

            if (got_frame) {
                if (output_stream->codec->coded_frame->key_frame) {
                    encoded_packet.flags |= AV_PKT_FLAG_KEY;
                }
                encoded_packet.stream_index = output_stream->index;
                encoded_packet.pts = av_rescale_q(current_frame_num, output_stream->codec->time_base, output_stream->time_base);
                encoded_packet.dts = av_rescale_q(current_frame_num, output_stream->codec->time_base, output_stream->time_base);

                ret = av_interleaved_write_frame(output_container, &encoded_packet);
                if (ret < 0) {
                    cleanup(decoded_packet, output_container, decoded_frame);
                    avformat_close_input(&input_container);
                    LOGE("=> Error encoding frames");
                    return ret;
                }
                else {
                    current_frame_num += 1;
                }
            }
            av_free_packet(&encoded_packet);
        }
    }
    // audio packets
    else if (decoded_packet.stream_index == audio_stream_index) {
        // this is the trim range we're looking for
        if (decoded_packet.pts >= starttime_int64_audio && decoded_packet.pts <= endtime_int64_audio) {
            av_init_packet(&encoded_packet);
            encoded_packet.data = decoded_packet.data;
            encoded_packet.size = decoded_packet.size;
            encoded_packet.stream_index = audio_stream_index;
            encoded_packet.pts = av_rescale_q(current_frame_num_audio, output_stream_audio->codec->time_base, output_stream_audio->time_base);
            encoded_packet.dts = av_rescale_q(current_frame_num_audio, output_stream_audio->codec->time_base, output_stream_audio->time_base);

            ret = av_interleaved_write_frame(output_container, &encoded_packet);
            if (ret < 0) {
                cleanup(decoded_packet, output_container, decoded_frame);
                avformat_close_input(&input_container);
                LOGE("=> Error encoding frames");
                return ret;
            }
            else {
                current_frame_num_audio += 1;
            }
            av_free_packet(&encoded_packet);
        }
    }
}

Edit
I have made a slight improvement on the initial code. The audio and video are still not perfectly synced, but the original problem of the audio playing first, followed by the video, is resolved.
I'm now writing the decoded packet to the output container rather than re-encoding it.
In the end, though, I have the same problem: the trimmed video's audio and video streams are not perfectly synced.
// audio packets
else if (decoded_packet.stream_index == audio_stream_index) {
    // this is the trim range we're looking for
    if (decoded_packet.pts >= starttime_int64_audio && decoded_packet.pts <= endtime_int64_audio) {
        ret = av_interleaved_write_frame(output_container, &decoded_packet);
        if (ret < 0) {
            cleanup(decoded_packet, output_container, decoded_frame);
            avformat_close_input(&input_container);
            LOGE("=> Error writing audio frame (%s)", av_err2str(ret));
            return ret;
        }
        else {
            current_frame_num_audio += 1;
        }
    }
    else if (decoded_frame->pkt_pts > endtime_int64_audio) {
        audio_copy_complete = true;
    }
}
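For reference, the usual way to avoid this kind of drift when trimming without re-encoding is to stream-copy every packet and re-base its original timestamps, instead of stamping packets from a frame counter. Below is a minimal, untested sketch along the lines of FFmpeg's remuxing example, reusing the variable names from the code above; note that a stream-copied cut should begin on a video keyframe, or the first group of pictures will not decode cleanly.

// Sketch only: assumes the demuxer/muxer setup from the question, and that
// starttime_int64 / endtime_int64 (and the _audio variants) are expressed
// in the corresponding *input* stream's time_base, as above.
AVPacket pkt;
int ret = 0;
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;

while (av_read_frame(input_container, &pkt) >= 0) {
    int is_video = (pkt.stream_index == video_stream_index);
    AVStream *in_stream  = input_container->streams[pkt.stream_index];
    AVStream *out_stream = is_video ? output_stream : output_stream_audio;
    int64_t start = is_video ? starttime_int64 : starttime_int64_audio;
    int64_t end   = is_video ? endtime_int64 : endtime_int64_audio;

    // drop packets outside the trim window
    if (pkt.pts < start || pkt.pts > end) {
        av_free_packet(&pkt);
        continue;
    }

    // Re-base timestamps: subtract the trim start in the input time base,
    // then convert to the output stream's time base. Preserving the
    // original packet spacing is what keeps audio and video in sync.
    pkt.pts = av_rescale_q(pkt.pts - start, in_stream->time_base, out_stream->time_base);
    pkt.dts = (pkt.dts == AV_NOPTS_VALUE)
              ? pkt.pts
              : av_rescale_q(pkt.dts - start, in_stream->time_base, out_stream->time_base);
    pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
    pkt.stream_index = out_stream->index;

    ret = av_interleaved_write_frame(output_container, &pkt);
    av_free_packet(&pkt);
    if (ret < 0)
        break; // real code should log av_err2str(ret) and clean up
}
av_write_trailer(output_container);
-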
How can I create a portrait video using Android's MediaRecorder
23 March 2015, by urudroid
I have an Android application which is able to record and play videos in portrait mode; those features work fine on Android phones.
The issue comes up because the video also needs to be played on iOS devices (after being shared through a server).
iOS does not display the video correctly: it looks "cropped", whereas videos recorded on iOS play without issues.
So the main differences between videos created on Android and on iOS are the size and the rotation.
I'm using the CWAC-Camera library for preview and recording, and ffmpeg to scale the video down to 320x568px (as this is the standard size for both the Android and iOS apps). A scale-down step like that is sketched below.
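(The question does not show the exact ffmpeg invocation used; a scale-down step like the one described would typically look something like the following, with placeholder file names:)

ffmpeg -i recorded.mp4 -vf scale=320:568 -c:a copy scaled.mp4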
Here is the metadata from a video created on Android:
General
Complete name : android_video.mp4
Format : MPEG-4
Format profile : Base Media
Codec ID : isom
File size : 447 KiB
Duration : 5s 596ms
Overall bit rate : 654 Kbps
Encoded date : UTC 1904-01-01 00:00:00
Tagged date : UTC 1904-01-01 00:00:00
Writing application : Lavf56.4.101
Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L2.1
Format settings, CABAC : Yes
Format settings, ReFrames : 4 frames
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 5s 406ms
Bit rate : 536 Kbps
Width : 568 pixels
Height : 320 pixels
Display aspect ratio : 16:9
Original display aspect ratio : 16:9
Rotation : 270°
Frame rate mode : Constant
Frame rate : 14.985 fps
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.197
Stream size : 354 KiB (79%)
Writing library : x264 core 142
Encoding settings : cabac=1 / ref=3 / deblock=1:0:0 / analyse=0x3:0x113 / me=hex / subme=7 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-2 / threads=6 / lookahead_threads=1 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=3 / b_pyramid=2 / b_adapt=1 / b_bias=0 / direct=1 / weightb=1 / open_gop=0 / weightp=2 / keyint=250 / keyint_min=14 / scenecut=40 / intra_refresh=0 / rc_lookahead=40 / rc=crf / mbtree=1 / crf=23.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / ip_ratio=1.40 / aq=1:1.00
Language : English
Encoded date : UTC 1904-01-01 00:00:00
Tagged date : UTC 1904-01-01 00:00:00
Audio
ID : 2
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : LC
Codec ID : 40
Duration : 5s 596ms
Bit rate mode : Constant
Bit rate : 132 Kbps
Channel(s) : 2 channels
Channel(s)_Original : 1 channel
Channel positions : Front: C
Sampling rate : 44.1 KHz
Compression mode : Lossy
Stream size : 89.4 KiB (20%)
Language : English
Encoded date : UTC 1904-01-01 00:00:00
Tagged date : UTC 1904-01-01 00:00:00
And here is the metadata from the video created on iOS:
General
Complete name : ios_video.mp4
Format : MPEG-4
Format profile : Base Media / Version 2
Codec ID : mp42
File size : 673 KiB
Duration : 7s 38ms
Overall bit rate : 783 Kbps
Encoded date : UTC 2015-03-17 19:16:36
Tagged date : UTC 2015-03-17 19:16:37
Video
ID : 2
Format : AVC
Format/Info : Advanced Video Codec
Format profile : Main@L3.0
Format settings, CABAC : Yes
Format settings, ReFrames : 2 frames
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 7s 33ms
Bit rate : 711 Kbps
Width : 320 pixels
Height : 568 pixels
Display aspect ratio : 0.563
Frame rate mode : Constant
Frame rate : 30.000 fps
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.130
Stream size : 610 KiB (91%)
Title : Core Media Video
Encoded date : UTC 2015-03-17 19:16:36
Tagged date : UTC 2015-03-17 19:16:37
Color primaries : BT.709
Transfer characteristics : BT.709
Matrix coefficients : BT.709
Color range : Limited
Audio
ID : 1
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : LC
Codec ID : 40
Duration : 7s 38ms
Source duration : 7s 105ms
Bit rate mode : Constant
Bit rate : 64.0 Kbps
Channel(s) : 2 channels
Channel(s)_Original : 1 channel
Channel positions : Front: C
Sampling rate : 44.1 KHz
Compression mode : Lossy
Stream size : 56.8 KiB (8%)
Source stream size : 57.2 KiB (9%)
Title : Core Media Audio
Encoded date : UTC 2015-03-17 19:16:36
Tagged date : UTC 2015-03-17 19:16:37
The width and height values are inverted on Android, and the Rotation parameter is set to 270° (the rotation value used for portrait videos).
This is a sketch of how iOS videos look in the iOS app:
And this is how Android videos look in the iOS app:
So, in order to get the videos correctly displayed on both iOS and Android, I need to be able to set the width to 320 and the height to 568 on Android. I have tried to do so from several places (outside and inside the CWAC-Camera library), but I always get a Camera.Parameters error.
Is it possible to do this on Android?
EDIT:
This is the result I get when I set the rotation to 0 with ffmpeg:
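(For what it's worth, clearing the rotation tag by itself leaves the pixels unrotated, which would match a result like this. A common workaround is to bake the rotation into the frames with the transpose filter while resetting the tag; a hedged example with placeholder file names, where the transpose direction depends on the source's rotation value:)

ffmpeg -i android.mp4 -vf "transpose=2" -metadata:s:v:0 rotate=0 ios_friendly.mp4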
-
Reading ffmpeg output and sending it to a form
3 July 2013, by Aeon2058
I can run arguments through ffmpeg just fine, but I need to be able to read its output LIVE as it streams (as you may know, ffmpeg can take a while, and it updates its stderr twice a second with the current frame, etc.).
I have a form called prog that was declared globally with ProgressForm prog = new ProgressForm();.
The user inputs a folder of video files, some data is gathered, and then a button is pushed to start the encoding process with ffmpeg. The button_click event creates a new thread like so:

runFFMpeg = new Thread(run_ffmpeg);
runFFMpeg.Start();

runFFMpeg was initialized globally earlier with private Thread runFFMpeg;.
Now I have the method run_ffmpeg:
private void run_ffmpeg()
{
    string program = "C:\\ffmpeg64.exe";
    string args = // some arguments that I know work;
    ProcessStartInfo run = new ProcessStartInfo(program, args);
    run.UseShellExecute = false;
    run.CreateNoWindow = true;
    run.RedirectStandardOutput = true;
    run.RedirectStandardError = true;
    Process runP = Process.Start(run);
    runP.WaitForExit();
    // NOW WHAT?
}

I'm not sure what to do now to get the data LIVE, but if I can, I would be updating prog, which has a number of controls, including progress bars, etc. Typical output (that I'm interested in) looks like "frame= 240 fps= 12.8 q=0.0 size= 1273802kB time=00:00:08.008 bitrate=4415.2kbits/s dup=46 drop=0". I know how to parse that to get what I need, I just need that line!
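(For what it's worth, a minimal sketch of one way to read stderr live is the standard Process.ErrorDataReceived event combined with BeginErrorReadLine(); the form prog is the question's own global, while UpdateStatus is a hypothetical method standing in for whatever updates its controls:)

private void run_ffmpeg()
{
    string program = "C:\\ffmpeg64.exe";
    string args = "..."; // the same working arguments as above

    ProcessStartInfo run = new ProcessStartInfo(program, args);
    run.UseShellExecute = false;
    run.CreateNoWindow = true;
    run.RedirectStandardError = true; // ffmpeg writes its progress to stderr

    Process runP = new Process();
    runP.StartInfo = run;

    // Fires as ffmpeg produces output, roughly one line at a time.
    runP.ErrorDataReceived += (sender, e) =>
    {
        if (e.Data == null) return; // null signals end of stream
        // Marshal back to the UI thread before touching controls.
        prog.Invoke((Action)(() => prog.UpdateStatus(e.Data))); // UpdateStatus is hypothetical
    };

    runP.Start();
    runP.BeginErrorReadLine(); // begin asynchronous reads of stderr
    runP.WaitForExit();
}

One caveat: ffmpeg refreshes its status line with carriage returns rather than newlines, so depending on the runtime's line splitting the event may only fire in bursts; if that happens, reading runP.StandardError a character at a time is a more reliable fallback.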