
Other articles (33)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
-
Libraries and binaries specific to video and audio processing
31 January 2010
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries:
FFmpeg: the main encoder; it can transcode almost every type of video and audio file into formats readable on the web. See this tutorial for its installation, and the sample command after this list.
Oggz-tools: tools for inspecting Ogg files.
Mediainfo: retrieves information from most video and audio formats.
Complementary and optional binaries:
flvtool2: (...)
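As an illustration of the transcoding FFmpeg performs in this role, a minimal sketch of producing a web-readable Ogg file (codec and quality choices are assumptions for illustration, not taken from SPIPmotion's configuration):
ffmpeg -i source.avi -c:v libtheora -q:v 6 -c:a libvorbis output.ogv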
-
From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behavior: the technical information about the file's audio and video streams is retrieved, and a thumbnail is generated by extracting a (...)
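The stream inspection and thumbnail extraction described above can be reproduced on the command line; a minimal sketch with stock ffprobe/ffmpeg (file names are illustrative, and this is not necessarily the exact invocation SPIPMotion uses):
ffprobe -v quiet -print_format json -show_format -show_streams source.mp4
ffmpeg -i source.mp4 -ss 00:00:01 -frames:v 1 thumbnail.png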
On other sites (5294)
-
Files created with a direct stream copy using FFmpeg's libavformat API play back too fast at 3600 fps
2 October 2013, by Chris Ballinger
I am working on a libavformat API wrapper that converts MP4 files with H.264 and AAC into MPEG-TS segments suitable for streaming. I am just doing a simple stream copy without re-encoding, but the files I produce play the video back at 3600 fps instead of 24 fps.
Here are some outputs from ffprobe (https://gist.github.com/chrisballinger/6733678); the broken file is below:
r_frame_rate=1/1
avg_frame_rate=0/0
time_base=1/90000
start_pts=0
start_time=0.000000
duration_ts=2999
duration=0.033322
The same input file run manually through ffmpeg has proper timestamp information:
r_frame_rate=24/1
avg_frame_rate=0/0
time_base=1/90000
start_pts=126000
start_time=1.400000
duration_ts=449850
duration=4.998333
I believe the problem lies somewhere in my setup of libavformat here: https://github.com/OpenWatch/FFmpegWrapper/blob/master/FFmpegWrapper/FFmpegWrapper.m#L349, where I repurposed a bunch of code from ffmpeg.c that was required for the direct stream copy.
Since 3600 seems like a "magic number" (60*60), it could be as simple as me not setting the time scale properly, but I can't figure out where my code diverges from ffmpeg/avconv itself.
Similar question here, but I don't think they got as far as I did: Muxing a H.264 Annex B & AAC stream using libavformat with vcopy/acopy
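For what it's worth, 90000/25 = 3600, so 3600 is also exactly one 25 fps frame expressed in 90 kHz ticks, which points at packet timing rather than a 60*60 coincidence. The usual fix in a direct stream copy is to rescale every packet's timestamps from the input stream's time base to the output stream's before writing it. A minimal sketch of that step, assuming pkt is the AVPacket read from the demuxer and in_stream, out_stream, and ofmt_ctx are the matching streams and output context (illustrative names, not from the linked wrapper):
/* Rescale packet timing into the muxer's time base; without this the
 * 90 kHz MPEG-TS clock misreads the MP4 timestamps. */
pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base,
                           AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base,
                           AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
pkt.pos = -1;
av_interleaved_write_frame(ofmt_ctx, &pkt);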
-
How to concat two/many MP4 files (Mac OS X Lion 10.7.5) with different resolutions and bit rates [on hold]
3 September 2013, by praveen
I have to concatenate different MP4 files into a single MP4 file. I am using the following ffmpeg commands, but they only work if both files are identical copies, or if all of their video properties match (codec, resolution, bitrate, ...); otherwise the result is an unexpected video. (I am working on Mac OS X Lion 10.7.5.)
ffmpeg commands:
ffmpeg -i images/1/output.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate1.ts
ffmpeg -i images/1/Video2.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate2.ts
ffmpeg -i "concat:intermediate1.ts|intermediate2.ts" -c copy -bsf:a aac_adtstoasc output2.mp4console output :
[mpegts @ 0x7f8c6c03d800] max_analyze_duration 5000000 reached at 5000000 microseconds
Input #0, mpegts, from 'concat:intermediate1.ts|intermediate2.ts':
Duration: 00:00:16.52, start: 1.400000, bitrate: 1342 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p, 1024x768 [SAR 1:1 DAR 4:3], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:1[0x101](und): Audio: aac ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 101 kb/s
Output #0, mp4, to 'output2.mp4':
Metadata:
encoder : Lavf54.63.104
Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 1024x768 [SAR 1:1 DAR 4:3], q=2-31, 25 fps, 90k tbn, 90k tbc
Stream #0:1(und): Audio: aac ([64][0][0][0] / 0x0040), 48000 Hz, stereo, 101 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame= 586 fps=0.0 q=-1.0 Lsize= 2449kB time=00:00:20.11 bitrate= 997.4kbits/s
video:2210kB audio:225kB subtitle:0 global headers:0kB muxing overhead 0.578335%
Please help.
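Since -c copy cannot splice streams whose resolutions differ, the inputs have to be re-encoded, for example through the concat filter with the videos scaled to a common size first. A sketch, assuming a 1024x768, 25 fps target (the values reported in the console output above) and leaving the encoders at ffmpeg's defaults:
ffmpeg -i images/1/output.mp4 -i images/1/Video2.mp4 -filter_complex "[0:v]scale=1024:768,setsar=1,fps=25[v0];[1:v]scale=1024:768,setsar=1,fps=25[v1];[v0][0:a][v1][1:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" output2.mp4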
-
Generate video from bitmap images using FFMPEG
29 June 2013, by Pferd
I'm trying to encode a bunch of images into a video using FFmpeg in Visual Studio, but I couldn't get it to work. Can someone please tell me where I am going wrong? Please find the attached code below!
// Requires the FFmpeg headers (wrapped in extern "C" when compiled as C++):
// libavcodec/avcodec.h, libswscale/swscale.h, libavutil/mem.h
void Encode::encodeVideoFromImage(char* filename){
    // Get the encoder here; try MPEG-1 video as a start.
    int i;
    AVCodec* codec = avcodec_find_encoder(CODEC_ID_MPEG1VIDEO);
    //AVCodec* codec = avcodec_find_encoder(CODEC_ID_MPEG4);
    //AVCodec* codec = avcodec_find_encoder(CODEC_ID_MPEG2VIDEO);
    if (!codec)
    {
        MessageBox(_T("can't find codec"), _T("Warning!"), MB_ICONERROR | MB_OK);
        return; // continuing with a null codec would crash below
    }
    // Initialize codec
    AVCodecContext* c = avcodec_alloc_context();
    // Put sample parameters
    c->bit_rate = 400000;
    // Resolution must be a multiple of two
    c->width = 800;
    c->height = 600;
    c->time_base.num = 1;
    c->time_base.den = 25;
    c->gop_size = 8; // Emit one intra frame every eight frames
    c->max_b_frames = 1;
    c->pix_fmt = PIX_FMT_YUV420P;
    // Open the codec.
    if (avcodec_open(c, codec) < 0)
    {
        MessageBox(_T("can't open codec"), _T("Warning!"), MB_ICONERROR | MB_OK);
        return;
    }
    // Open the output file
    FILE* f = fopen(filename, "wb");
    if (!f)
    {
        MessageBox(_T("Unable to open file"), _T("Warning!"), MB_ICONERROR | MB_OK);
        return;
    }
    // Input and output dimensions; inbuffer will point at the BGR32 input data
    // (this assumes the BMPs are 32-bit).
    int in_width = c->width, in_height = c->height;
    int out_width = c->width, out_height = c->height;
    // Create ffmpeg frame structures: inpic wraps the BGR32 bits,
    // outpic wraps the YUV420P picture that is fed to the encoder.
    AVFrame* inpic = avcodec_alloc_frame();
    AVFrame* outpic = avcodec_alloc_frame();
    // Bytes needed for the YUV420P picture.
    int nbytes = avpicture_get_size(PIX_FMT_YUV420P, out_width, out_height);
    // Separate buffers for the raw YUV picture and the encoded bitstream;
    // reusing one buffer for both would overwrite the frame while encoding it.
    uint8_t* picture_buf = (uint8_t*)av_malloc(nbytes);
    const int outbuf_size = 1000000;
    uint8_t* outbuffer = (uint8_t*)av_malloc(outbuf_size);
    uint8_t* inbuffer; // points into CImage's storage; never allocated or freed here
    int out_size;
    // Create the conversion context once; it is identical for every frame.
    SwsContext* fooContext = sws_getContext(in_width, in_height, PIX_FMT_BGR32,
        out_width, out_height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
    CImage capImage;
    CString pictureNumber;
    /* Encode 50 frames of video */
    for (i = 0; i < 50; i++) {
        fflush(stdout);
        /* Use existing images */
        pictureNumber = "";
        pictureNumber.Format(_T("%d"), i + 1);
        capImage.Load(_T("C:\\imageDump\\test") + pictureNumber + _T(".bmp")); // TBD from memory!
        inbuffer = (uint8_t*)capImage.GetBits();
        avpicture_fill((AVPicture*)inpic, inbuffer, PIX_FMT_BGR32, in_width, in_height);
        avpicture_fill((AVPicture*)outpic, picture_buf, PIX_FMT_YUV420P, out_width, out_height);
        // Convert BGR32 to YUV420P.
        sws_scale(fooContext, inpic->data, inpic->linesize, 0, in_height,
                  outpic->data, outpic->linesize);
        // Encode the converted YUV frame; the original code passed inpic (the raw
        // BGR32 frame) here, which is the main reason the output was broken.
        out_size = avcodec_encode_video(c, outbuffer, outbuf_size, outpic);
        fwrite(outbuffer, 1, out_size, f);
        capImage.Destroy();
    }
    // Get the delayed frames, passing the real buffer size each time.
    for (; out_size; i++) {
        fflush(stdout);
        out_size = avcodec_encode_video(c, outbuffer, outbuf_size, NULL);
        fwrite(outbuffer, 1, out_size, f);
    }
    /* Add the sequence end code to have a real MPEG file */
    outbuffer[0] = 0x00;
    outbuffer[1] = 0x00;
    outbuffer[2] = 0x01;
    outbuffer[3] = 0xb7;
    fwrite(outbuffer, 1, 4, f);
    fclose(f);
    sws_freeContext(fooContext);
    // Buffers from av_malloc must be released with av_free; inbuffer belongs to CImage.
    av_free(picture_buf);
    av_free(outbuffer);
    avcodec_close(c);
    av_free(c);
    av_free(inpic);
    av_free(outpic);
}
Thank you!