
Media (1)
-
SWFUpload Process
6 September 2011, by
Updated: September 2011
Language: French
Type: Text
Other articles (35)
-
From upload to final video [standalone version]
31 January 2010, by — The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...) -
HTML5 audio and video support
13 April 2011, by — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
HTML5 audio and video support
10 April 2011 — MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash fallback is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
On other sites (6901)
-
How to concat two/many mp4 files (Mac OS X Lion 10.7.5) with different resolutions, bit rates [on hold]
3 September 2013, by praveen — I have to concat different mp4 files into a single mp4 file. I am using the following ffmpeg commands, but they only work if both files are the same (a copy, or if all the video properties match: codec, resolution, bitrate, ...); otherwise the result is an unexpected video. (I am working on Mac OS X Lion 10.7.5)
ffmpeg commands:
ffmpeg -i images/1/output.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate1.ts
ffmpeg -i images/1/Video2.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate2.ts
ffmpeg -i "concat:intermediate1.ts|intermediate2.ts" -c copy -bsf:a aac_adtstoasc output2.mp4
console output:
[mpegts @ 0x7f8c6c03d800] max_analyze_duration 5000000 reached at 5000000 microseconds
Input #0, mpegts, from 'concat:intermediate1.ts|intermediate2.ts':
Duration: 00:00:16.52, start: 1.400000, bitrate: 1342 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p, 1024x768 [SAR 1:1 DAR 4:3], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:1[0x101](und): Audio: aac ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 101 kb/s
Output #0, mp4, to 'output2.mp4':
Metadata:
encoder : Lavf54.63.104
Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 1024x768 [SAR 1:1 DAR 4:3], q=2-31, 25 fps, 90k tbn, 90k tbc
Stream #0:1(und): Audio: aac ([64][0][0][0] / 0x0040), 48000 Hz, stereo, 101 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame= 586 fps=0.0 q=-1.0 Lsize= 2449kB time=00:00:20.11 bitrate= 997.4kbits/s
video:2210kB audio:225kB subtitle:0 global headers:0kB muxing overhead 0.578335%
Please help
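For what it's worth, the `-c copy` concat trick above is only valid when every stream parameter matches; with different resolutions the clips have to be scaled and re-encoded through the concat filter instead. A minimal sketch (the two lavfi-generated clips are hypothetical stand-ins for the question's output.mp4 and Video2.mp4):

```shell
# Synthesize two short clips with different resolutions (stand-ins for the
# question's input files), using only ffmpeg's native encoders:
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=1:size=640x480:rate=25 \
       -f lavfi -i sine=duration=1 -c:v mpeg4 -c:a aac clip1.mp4
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=1:size=1024x768:rate=25 \
       -f lavfi -i sine=duration=1 -c:v mpeg4 -c:a aac clip2.mp4
# Scale both video streams to a common size, then join video+audio pairs
# in order with the concat filter and re-encode the result:
ffmpeg -loglevel error -y -i clip1.mp4 -i clip2.mp4 -filter_complex \
  "[0:v]scale=1024:768,setsar=1[v0];[1:v]scale=1024:768,setsar=1[v1];[v0][0:a][v1][1:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" -c:v mpeg4 -c:a aac joined.mp4
```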
-
Generate video from bitmap images using FFMPEG
29 June 2013, by Pferd — I'm trying to encode a bunch of images into a video using FFMPEG in Visual Studio. However, I couldn't get it to work. Can someone please tell me where I am going wrong? Please find the attached code here!
void Encode::encodeVideoFromImage(char* filename) {
    // Get the encoder here. ~ try with MPEG-1 video as a start!
    int i;
    AVCodec* codec = avcodec_find_encoder(CODEC_ID_MPEG1VIDEO);
    //AVCodec* codec = avcodec_find_encoder(CODEC_ID_MPEG4);
    //AVCodec* codec = avcodec_find_encoder(CODEC_ID_MPEG2VIDEO);
    if (!codec) {
        MessageBox(_T("can't find codec"), _T("Warning!"), MB_ICONERROR | MB_OK);
        return;
    }
    // Initialize the codec context
    AVCodecContext* c = avcodec_alloc_context();
    // Put sample parameters
    c->bit_rate = 400000;
    // Resolution must be a multiple of two
    c->width = 800;
    c->height = 600;
    c->time_base.num = 1;
    c->time_base.den = 25;
    c->gop_size = 8;        // Emit one intra frame every eight frames
    c->max_b_frames = 1;
    c->pix_fmt = PIX_FMT_YUV420P;
    // Open the codec
    if (avcodec_open(c, codec) < 0) {
        MessageBox(_T("can't open codec"), _T("Warning!"), MB_ICONERROR | MB_OK);
        return;
    }
    // Open the output file
    FILE* f = fopen(filename, "wb");
    if (!f) {
        MessageBox(_T("Unable to open file"), _T("Warning!"), MB_ICONERROR | MB_OK);
        return;
    }
    // Input and output dimensions (here both 800x600)
    int in_width  = c->width,  out_width  = c->width;
    int in_height = c->height, out_height = c->height;
    // Create ffmpeg frame structures
    AVFrame* inpic  = avcodec_alloc_frame();
    AVFrame* outpic = avcodec_alloc_frame();
    // Bytes needed for the converted picture (BGR32 is the larger of the two
    // formats, so this size also comfortably holds a YUV420P picture)
    int nbytes = avpicture_get_size(PIX_FMT_BGR32, out_width, out_height);
    // One buffer for the converted YUV picture and a *separate* buffer for the
    // encoded bitstream - reusing one buffer for both was a bug in the original
    uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);
    const int outbuf_size = 1000000;    // buffer capacity; keep it distinct from
    int out_size = 0;                   // the size returned by the encoder
    uint8_t* encbuf = (uint8_t*)av_malloc(outbuf_size);
    // Create the BGR32 -> YUV420P conversion context once, outside the loop,
    // instead of leaking a new one per frame
    SwsContext* fooContext = sws_getContext(in_width, in_height, PIX_FMT_BGR32,
                                            out_width, out_height, PIX_FMT_YUV420P,
                                            SWS_FAST_BILINEAR, NULL, NULL, NULL);
    CImage capImage;
    CString pictureNumber;
    /* Encode one frame of video per image */
    for (i = 0; i < 50; i++) {
        fflush(stdout);
        /* Use existing images */
        pictureNumber.Format(_T("%d"), i + 1);
        capImage.Load(_T("C:\\imageDump\\test") + pictureNumber + _T(".bmp")); // TBD from memory!
        // Point inpic at the bitmap's BGR32 pixels and outpic at the YUV buffer
        uint8_t* inbuffer = (uint8_t*)capImage.GetBits();
        avpicture_fill((AVPicture*)inpic, inbuffer, PIX_FMT_BGR32, in_width, in_height);
        avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, out_width, out_height);
        // Perform the colour-space conversion
        sws_scale(fooContext, inpic->data, inpic->linesize, 0, in_height,
                  outpic->data, outpic->linesize);
        // Encode the *converted* YUV frame - passing inpic here was the bug:
        // the encoder expects a PIX_FMT_YUV420P picture, not BGR32
        out_size = avcodec_encode_video(c, encbuf, outbuf_size, outpic);
        fwrite(encbuf, 1, out_size, f);
        capImage.Destroy();
    }
    // Get the delayed frames
    for (; out_size; i++) {
        fflush(stdout);
        out_size = avcodec_encode_video(c, encbuf, outbuf_size, NULL);
        fwrite(encbuf, 1, out_size, f);
    }
    /* Add the sequence end code to have a real MPEG file */
    encbuf[0] = 0x00;
    encbuf[1] = 0x00;
    encbuf[2] = 0x01;
    encbuf[3] = 0xb7;
    fwrite(encbuf, 1, 4, f);
    fclose(f);
    // av_malloc'd memory must be released with av_free, not free
    av_free(encbuf);
    av_free(outbuffer);
    sws_freeContext(fooContext);
    avcodec_close(c);
    av_free(c);
    av_free(inpic);
    av_free(outpic);
}
Thank you!
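For comparison, the same bitmap-sequence-to-MPEG-1 pipeline can be sketched from the ffmpeg command line; the generated test bitmaps below are hypothetical stand-ins for the C:\imageDump frames, and the rate and bitrate match the code above:

```shell
# Generate 50 numbered test bitmaps, test1.bmp .. test50.bmp
# (stand-ins for C:\imageDump\test1.bmp and friends):
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=2:size=800x600:rate=25 test%d.bmp
# Encode them into an MPEG-1 video at 25 fps / 400 kbit/s, as in the code:
ffmpeg -loglevel error -y -framerate 25 -i test%d.bmp \
       -c:v mpeg1video -b:v 400k -pix_fmt yuv420p out.mpg
```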
-
Convert MPEG4 to MPEGTS on Android with FFmpeg
3 June 2013, by Ardoramor — OK, so obviously I knew very little to nothing about the ffmpeg API when I made the original post... it is quite overwhelming when one starts learning about digital media and conversion details. After reading quite a bit more and going through the ffmpeg source, I was able to get a working output from mp4 to mpegts. The concept is similar to executing:
ffmpeg -i in.mp4 -vcodec copy -acodec copy -vbsf h264_mp4toannexb out.ts
But as I mentioned before, I need to implement it with the ffmpeg API in C.
So, although I am able to generate a playable .ts file, its video and audio streams are not synced. That is, playing it back on an Android tablet plays the video very slowly while the audio plays at normal speed, and then (once the audio stream ends) the video plays at normal speed to the end. Playing the same generated .ts file in VLC produces very condensed audio (as though fast-forwarded) but plays the video fine.
There are still many aspects of media conversion that I am not familiar with. I am sure that some of them prevent me from a successful conversion.
Here is some information (via ffprobe) about the files:
in.mp4 - file generated via Android recording - MPEG4 (H.264 + AAC)
ffmpeg.ts - file generated via ffmpeg conversion - MPEG2TS (H.264 + AAC)
out.ts - file generated via my code - MPEGTS (H.264 + AAC)
in.mp4
filename=in.mp4
nb_streams=2
format_name=mov,mp4,m4a,3gp,3g2,mj2
format_long_name=QuickTime/MPEG-4/Motion JPEG 2000 format
start_time=0:00:00.000000
duration=0:00:09.961383
size=4.730 Mibyte
bit_rate=3.983 Mbit/s
TAG:major_brand=isom
TAG:minor_version=0
TAG:compatible_brands=isom3gp4
TAG:creation_time=2013-05-28 17:06:57
ffmpeg.ts
filename=ffmpeg.ts
nb_streams=2
format_name=mpegts
format_long_name=MPEG-2 transport stream format
start_time=0:00:01.400000
duration=0:00:09.741267
size=5.132 Mibyte
bit_rate=4.419 Mbit/s
out.ts
filename=out.ts
nb_streams=2
format_name=mpegts
format_long_name=MPEG-2 transport stream format
start_time=0:00:00.000000
duration=0:00:09.741267
size=5.166 Mibyte
bit_rate=4.449 Mbit/s
Firstly, I was unable to affect my output file's start_time. Next, upon examining the -show_packets output of ffprobe, I saw the following:
ffmpeg.ts
[PACKET]
codec_type=video
stream_index=0
pts=N/A
pts_time=N/A
dts=N/A
dts_time=N/A
duration=0
duration_time=0:00:00.000000
size=20.506 Kibyte
pos=564
flags=K
[/PACKET]
[PACKET]
codec_type=video
stream_index=0
pts=N/A
pts_time=N/A
dts=N/A
dts_time=N/A
duration=0
duration_time=0:00:00.000000
size=11.727 Kibyte
pos=22936
flags=_
[/PACKET]
...
[PACKET]
codec_type=audio
stream_index=1
pts=126000
pts_time=0:00:01.400000
dts=126000
dts_time=0:00:01.400000
duration=2089
duration_time=0:00:00.023211
size=285.000 byte
pos=109416
flags=K
[/PACKET]
[PACKET]
codec_type=audio
stream_index=1
pts=128089
pts_time=0:00:01.423211
dts=128089
dts_time=0:00:01.423211
duration=2089
duration_time=0:00:00.023211
size=374.000 byte
pos=-1
flags=K
[/PACKET]
...
[PACKET]
codec_type=video
stream_index=0
pts=N/A
pts_time=N/A
dts=N/A
dts_time=N/A
duration=0
duration_time=0:00:00.000000
size=20.000 Kibyte
pos=87232
flags=_
[/PACKET]
[PACKET]
codec_type=video
stream_index=0
pts=N/A
pts_time=N/A
dts=N/A
dts_time=N/A
duration=0
duration_time=0:00:00.000000
size=16.852 Kibyte
pos=112800
flags=_
[/PACKET]
out.ts
[PACKET]
codec_type=audio
stream_index=1
pts=0
pts_time=0:00:00.000000
dts=0
dts_time=0:00:00.000000
duration=2089
duration_time=0:00:00.023211
size=285.000 byte
pos=22936
flags=K
[/PACKET]
[PACKET]
codec_type=audio
stream_index=1
pts=1024
pts_time=0:00:00.011378
dts=1024
dts_time=0:00:00.011378
duration=2089
duration_time=0:00:00.023211
size=374.000 byte
pos=23312
flags=K
[/PACKET]
...
[PACKET]
codec_type=video
stream_index=0
pts=N/A
pts_time=N/A
dts=N/A
dts_time=N/A
duration=0
duration_time=0:00:00.000000
size=11.727 Kibyte
pos=25004
flags=_
[/PACKET]
[PACKET]
codec_type=audio
stream_index=1
pts=7168
pts_time=0:00:00.079644
dts=7168
dts_time=0:00:00.079644
duration=2089
duration_time=0:00:00.023211
size=299.000 byte
pos=55460
flags=K
[/PACKET]
As you can see, ffmpeg.ts starts out with video packets that do not have pts/dts. The audio packets that follow do contain pts/dts. This repeats until the end. According to the ffprobe output, none of the video packets have pts/dts.
However, out.ts starts with audio packets that alternate with video packets. Here too, the video packets have no pts/dts. The difference is that there is only one video packet between each series of audio packets. What happened to the rest of the video packets? (ffmpeg.ts has 5 audio packets followed by 5 video packets.)
Obviously, I'm still learning and there is way too much I don't know yet... Does anything jump out as an obvious problem to anyone? I will greatly appreciate any info / suggestions, but will continue to grind at it!!
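One detail worth checking on the command line: the 1.4 s start_time in ffmpeg.ts appears to come from the mpegts muxer's default initial delay, which can be zeroed with -muxdelay/-muxpreload. A minimal sketch with a synthesized stand-in input (native mpeg4+aac encoders only, so no H.264 bitstream filter is needed here; with an H.264 input you would add -bsf:v h264_mp4toannexb as in the question):

```shell
# Synthesize a short mp4 as a stand-in for the question's in.mp4:
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
       -f lavfi -i sine=duration=1 -c:v mpeg4 -c:a aac in_test.mp4
# Remux to mpegts with the muxer's default initial delay suppressed,
# so the output's start_time stays near zero:
ffmpeg -loglevel error -y -i in_test.mp4 -c copy -muxdelay 0 -muxpreload 0 out_test.ts
```

ffprobe -show_entries format=start_time out_test.ts can then be used to confirm the effect.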