
Media (2)
-
Granite of the Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text
-
Geodiversity
9 September 2011
Updated: August 2018
Language: French
Type: Text
Other articles (54)
-
Support for all media types
10 April 2011. Unlike many software packages and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other formats (OpenOffice, Microsoft Office (spreadsheet, presentation), web (HTML, CSS), LaTeX, Google Earth) (...)
-
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...) -
Customizing categories
21 June 2013. Category creation form
For those who know SPIP well, a category can be likened to a rubrique (section).
For a document of type category, the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
It is also in this configuration area that you can specify the (...)
On other sites (8427)
-
ffmpeg copyts to preserve timestamp
13 April 2015, by Bala
I am trying to modify an HLS segment transport stream and preserve its start time with ffmpeg. However, the output does not preserve the input file's start_time value even when -copyts is specified. Here's my command line:
ffmpeg -i fileSequence1.ts -i x.png -filter_complex '[0:v][1:v]overlay[out]' -map '[out]' -map 0:1 -acodec copy -vsync 0 -vcodec libx264 -streamid 0:257 -streamid 1:258 -copyts -profile:v baseline -level 3 output.ts
The start_time value is consistently delayed by about 2 seconds. Here is a verbose run:
/Users/macadmin/>ffmpeg -y -v verbose -i fileSequence0.ts -map 0:0 -vcodec libx264 -copyts -vsync 0 -async 0 output.ts
ffmpeg version 2.5.3 Copyright (c) 2000-2015 the FFmpeg developers
built on Mar 29 2015 21:31:57 with Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn)
configuration: --prefix=/usr/local/Cellar/ffmpeg/2.5.3 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-libfreetype --enable-libvorbis --enable-libvpx --enable-libass --enable-ffplay --enable-libfdk-aac --enable-libopus --enable-libquvi --enable-libx265 --enable-nonfree --enable-vda
libavutil 54. 15.100 / 54. 15.100
libavcodec 56. 13.100 / 56. 13.100
libavformat 56. 15.102 / 56. 15.102
libavdevice 56. 3.100 / 56. 3.100
libavfilter 5. 2.103 / 5. 2.103
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
[h264 @ 0x7fa93b800000] Current profile doesn't provide more RBSP data in PPS, skipping
Last message repeated 2 times
[mpegts @ 0x7fa93a80da00] max_analyze_duration 5000000 reached at 5000000 microseconds
Input #0, mpegts, from 'fileSequence0.ts':
Duration: 00:00:09.65, start: 9.952111, bitrate: 412 kb/s
Program 1
Stream #0:0[0x101]: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p, 640x360 (640x368), 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:1[0x102]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 101 kb/s
[graph 0 input from stream 0:0 @ 0x7fa93a5229c0] w:640 h:360 pixfmt:yuv420p tb:1/90000 fr:25/1 sar:0/1 sws_param:flags=2
[libx264 @ 0x7fa93b800c00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 0x7fa93b800c00] profile High, level 3.0
[mpegts @ 0x7fa93b800600] muxrate VBR, pcr every 2 pkts, sdt every 200, pat/pmt every 40 pkts
Output #0, mpegts, to 'output.ts':
Metadata:
encoder : Lavf56.15.102
Stream #0:0: Video: h264 (libx264), yuv420p, 640x360, q=-1--1, 25 fps, 90k tbn, 25 tbc
Metadata:
encoder : Lavc56.13.100 libx264
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[NULL @ 0x7fa93b800000] Current profile doesn't provide more RBSP data in PPS, skipping
Last message repeated 1 times
frame= 87 fps=0.0 q=28.0 size= 91kB time=00:00:11.40 bitrate= 65.0kbits/[NULL @ 0x7fa93b800000] Current profile doesn't provide more RBSP data in PPS, skipping
frame= 152 fps=151 q=28.0 size= 204kB time=00:00:14.00 bitrate= 119.4kbits/[NULL @ 0x7fa93b800000] Current profile doesn't provide more RBSP data in PPS, skipping
frame= 224 fps=148 q=28.0 size= 306kB time=00:00:16.88 bitrate= 148.5kbits/No more output streams to write to, finishing.
frame= 240 fps=125 q=-1.0 Lsize= 392kB time=00:00:19.52 bitrate= 164.6kbits/s
video:334kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 17.347548%
Input file #0 (fileSequence0.ts):
Input stream #0:0 (video): 240 packets read (360450 bytes); 240 frames decoded;
Input stream #0:1 (audio): 0 packets read (0 bytes);
Total: 240 packets (360450 bytes) demuxed
Output file #0 (output.ts):
Output stream #0:0 (video): 240 frames encoded; 240 packets muxed (342204 bytes);
Total: 240 packets (342204 bytes) muxed
[libx264 @ 0x7fa93b800c00] frame I:3 Avg QP:15.08 size: 7856
[libx264 @ 0x7fa93b800c00] frame P:81 Avg QP:21.03 size: 2807
[libx264 @ 0x7fa93b800c00] frame B:156 Avg QP:23.40 size: 585
[libx264 @ 0x7fa93b800c00] consecutive B-frames: 11.7% 2.5% 7.5% 78.3%
[libx264 @ 0x7fa93b800c00] mb I I16..4: 57.4% 17.5% 25.1%
[libx264 @ 0x7fa93b800c00] mb P I16..4: 8.0% 8.2% 1.0% P16..4: 30.5% 11.3% 4.6% 0.0% 0.0% skip:36.4%
[libx264 @ 0x7fa93b800c00] mb B I16..4: 0.1% 0.1% 0.0% B16..8: 34.6% 2.7% 0.2% direct: 1.3% skip:60.9% L0:47.3% L1:49.1% BI: 3.6%
[libx264 @ 0x7fa93b800c00] 8x8 transform intra:42.2% inter:73.3%
[libx264 @ 0x7fa93b800c00] coded y,uvDC,uvAC intra: 26.2% 43.0% 6.8% inter: 5.4% 8.5% 0.1%
[libx264 @ 0x7fa93b800c00] i16 v,h,dc,p: 46% 26% 6% 21%
[libx264 @ 0x7fa93b800c00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 50% 14% 23% 1% 2% 6% 1% 3% 1%
[libx264 @ 0x7fa93b800c00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 40% 32% 10% 3% 3% 4% 2% 5% 2%
[libx264 @ 0x7fa93b800c00] i8c dc,h,v,p: 48% 23% 26% 3%
[libx264 @ 0x7fa93b800c00] Weighted P-Frames: Y:1.2% UV:0.0%
[libx264 @ 0x7fa93b800c00] ref P L0: 71.5% 10.7% 14.8% 2.9% 0.1%
[libx264 @ 0x7fa93b800c00] ref B L0: 95.5% 4.0% 0.5%
[libx264 @ 0x7fa93b800c00] ref B L1: 96.8% 3.2%
[libx264 @ 0x7fa93b800c00] kb/s:285.17
----------- FFProbe source video
/Users/macadmin>ffprobe fileSequence0.ts
ffprobe version 2.5.3 Copyright (c) 2007-2015 the FFmpeg developers
built on Mar 29 2015 21:31:57 with Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn)
configuration: --prefix=/usr/local/Cellar/ffmpeg/2.5.3 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-libfreetype --enable-libvorbis --enable-libvpx --enable-libass --enable-ffplay --enable-libfdk-aac --enable-libopus --enable-libquvi --enable-libx265 --enable-nonfree --enable-vda
libavutil 54. 15.100 / 54. 15.100
libavcodec 56. 13.100 / 56. 13.100
libavformat 56. 15.102 / 56. 15.102
libavdevice 56. 3.100 / 56. 3.100
libavfilter 5. 2.103 / 5. 2.103
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, mpegts, from 'fileSequence0.ts':
Duration: 00:00:09.65, start: 9.952111, bitrate: 412 kb/s
Program 1
Stream #0:0[0x101]: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p, 640x360, 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:1[0x102]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 101 kb/s
------ FFPROBE result video
/Users/macadmin>ffprobe output.ts
ffprobe version 2.5.3 Copyright (c) 2007-2015 the FFmpeg developers
built on Mar 29 2015 21:31:57 with Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn)
configuration: --prefix=/usr/local/Cellar/ffmpeg/2.5.3 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-libfreetype --enable-libvorbis --enable-libvpx --enable-libass --enable-ffplay --enable-libfdk-aac --enable-libopus --enable-libquvi --enable-libx265 --enable-nonfree --enable-vda
libavutil 54. 15.100 / 54. 15.100
libavcodec 56. 13.100 / 56. 13.100
libavformat 56. 15.102 / 56. 15.102
libavdevice 56. 3.100 / 56. 3.100
libavfilter 5. 2.103 / 5. 2.103
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, mpegts, from 'output.ts':
Duration: 00:00:09.60, start: 11.400000, bitrate: 334 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 640x360, 25 fps, 25 tbr, 90k tbn, 50 tbc
How do I ensure that the output file has the same start_time? Thanks.
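One commonly suggested cause (my suggestion, not from the original post): the mpegts muxer applies a default mux delay and preload of 0.7 s each, which shifts start_time even when -copyts is given. Zeroing both usually keeps the input's start_time. A hedged sketch, guarded so it only runs when ffmpeg and the poster's input files exist:

```shell
# Sketch (assumption): zero the mpegts muxer's default delay/preload so
# -copyts carries the input start_time through to the output.
if command -v ffmpeg >/dev/null 2>&1 && [ -f fileSequence1.ts ]; then
  ffmpeg -i fileSequence1.ts -i x.png \
    -filter_complex '[0:v][1:v]overlay[out]' -map '[out]' -map 0:1 \
    -acodec copy -vcodec libx264 -streamid 0:257 -streamid 1:258 \
    -copyts -muxdelay 0 -muxpreload 0 output.ts
fi
# Sanity check: the observed shift (output start 11.400000 vs input
# 9.952111) is close to the 2 x 0.7 s defaults, not an encoding artifact.
awk 'BEGIN { d = 11.400000 - 9.952111; if (d > 1.3 && d < 1.5) print "shift ~1.4s"; else print "unexpected" }'
```

Note the log reports a shift of roughly 1.45 s, consistent with the default delay/preload rather than with the encode itself.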
-
encode h264 video using ffmpeg library: memory issues
31 March 2015, by Zeppa
I'm trying to do screen capture on OS X using ffmpeg's avfoundation library. I capture frames from the screen and encode them with H264 into an FLV container. Here's the command-line output of the program:
Input #0, avfoundation, from 'Capture screen 0':
Duration: N/A, start: 9.253649, bitrate: N/A
Stream #0:0: Video: rawvideo (UYVY / 0x59565955), uyvy422, 1440x900, 14.58 tbr, 1000k tbn, 1000k tbc
raw video is inCodec
FLV (Flash Video)http://localhost:8090/test.flv
[libx264 @ 0x102038e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 0x102038e00] profile High, level 4.0
[libx264 @ 0x102038e00] 264 - core 142 r2495 6a301b6 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=1 weightp=2 keyint=50 keyint_min=5 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=400 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
[tcp @ 0x101a5fe70] Connection to tcp://localhost:8090 failed (Connection refused), trying next address
[tcp @ 0x101a5fe70] Connection to tcp://localhost:8090 failed: Connection refused
url_fopen failed: Operation now in progress
[flv @ 0x102038800] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
encoded frame #0
encoded frame #1
......
encoded frame #49
encoded frame #50
testmee(8404,0x7fff7e05c300) malloc: *** error for object 0x102053e08: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
(lldb) bt
* thread #10: tid = 0x43873, 0x00007fff95639286 libsystem_kernel.dylib`__pthread_kill + 10, stop reason = signal SIGABRT
* frame #0: 0x00007fff95639286 libsystem_kernel.dylib`__pthread_kill + 10
frame #1: 0x00007fff9623742f libsystem_pthread.dylib`pthread_kill + 90
frame #2: 0x00007fff977ceb53 libsystem_c.dylib`abort + 129
frame #3: 0x00007fff9ab59e06 libsystem_malloc.dylib`szone_error + 625
frame #4: 0x00007fff9ab4f799 libsystem_malloc.dylib`small_malloc_from_free_list + 1105
frame #5: 0x00007fff9ab4d3bc libsystem_malloc.dylib`szone_malloc_should_clear + 1449
frame #6: 0x00007fff9ab4c877 libsystem_malloc.dylib`malloc_zone_malloc + 71
frame #7: 0x00007fff9ab4b395 libsystem_malloc.dylib`malloc + 42
frame #8: 0x00007fff94aa63d2 IOSurface`IOSurfaceClientLookupFromMachPort + 40
frame #9: 0x00007fff94aa6b38 IOSurface`IOSurfaceLookupFromMachPort + 12
frame #10: 0x00007fff92bfa6b2 CoreGraphics`_CGYDisplayStreamFrameAvailable + 342
frame #11: 0x00007fff92f6759c CoreGraphics`CGYDisplayStreamNotification_server + 336
frame #12: 0x00007fff92bfada6 CoreGraphics`display_stream_runloop_callout + 46
frame #13: 0x00007fff956eba07 CoreFoundation`__CFMachPortPerform + 247
frame #14: 0x00007fff956eb8f9 CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 41
frame #15: 0x00007fff956eb86b CoreFoundation`__CFRunLoopDoSource1 + 475
frame #16: 0x00007fff956dd3e7 CoreFoundation`__CFRunLoopRun + 2375
frame #17: 0x00007fff956dc858 CoreFoundation`CFRunLoopRunSpecific + 296
frame #18: 0x00007fff95792ef1 CoreFoundation`CFRunLoopRun + 97
frame #19: 0x0000000105f79ff1 CMIOUnits`___lldb_unnamed_function2148$$CMIOUnits + 875
frame #20: 0x0000000105f6f2c2 CMIOUnits`___lldb_unnamed_function2127$$CMIOUnits + 14
frame #21: 0x00007fff97051765 CoreMedia`figThreadMain + 417
frame #22: 0x00007fff96235268 libsystem_pthread.dylib`_pthread_body + 131
frame #23: 0x00007fff962351e5 libsystem_pthread.dylib`_pthread_start + 176
frame #24: 0x00007fff9623341d libsystem_pthread.dylib`thread_start + 13
I've attached the code I used below.
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavdevice/avdevice.h>
#include <libavutil/opt.h>
/* the next three header names were eaten by HTML escaping; these standard
   headers match the printf/exit/snprintf calls used below */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/* compile using
gcc -g -o stream test.c -lavformat -lavutil -lavcodec -lavdevice -lswscale
*/
// void show_av_device() {
// inFmt->get_device_list(inFmtCtx, device_list);
// printf("Device Info=============\n");
// //avformat_open_input(&inFmtCtx,"video=Capture screen 0",inFmt,&inOptions);
// printf("===============================\n");
// }
void AVFAIL (int code, const char *what) {
char msg[500];
av_strerror(code, msg, sizeof(msg));
fprintf(stderr, "failed: %s\nerror: %s\n", what, msg);
exit(2);
}
#define AVCHECK(f) do { int e = (f); if (e < 0) AVFAIL(e, #f); } while (0)
#define AVCHECKPTR(p,f) do { p = (f); if (!p) AVFAIL(AVERROR_UNKNOWN, #f); } while (0)
void registerLibs() {
av_register_all();
avdevice_register_all();
avformat_network_init();
avcodec_register_all();
}
int main(int argc, char *argv[]) {
//conversion variables
struct SwsContext *swsCtx = NULL;
//input stream variables
AVFormatContext *inFmtCtx = NULL;
AVCodecContext *inCodecCtx = NULL;
AVCodec *inCodec = NULL;
AVInputFormat *inFmt = NULL;
AVFrame *inFrame = NULL;
AVDictionary *inOptions = NULL;
const char *streamURL = "http://localhost:8090/test.flv";
const char *name = "avfoundation";
// AVFrame *inFrameYUV = NULL;
AVPacket inPacket;
//output stream variables
AVCodecContext *outCodecCtx = NULL;
AVCodec *outCodec;
AVFormatContext *outFmtCtx = NULL;
AVOutputFormat *outFmt = NULL;
AVFrame *outFrameYUV = NULL;
AVStream *stream = NULL;
int i, videostream, ret;
int numBytes, frameFinished;
registerLibs();
inFmtCtx = avformat_alloc_context(); //alloc input context
av_dict_set(&inOptions, "pixel_format", "uyvy422", 0);
av_dict_set(&inOptions, "probesize", "7000000", 0);
inFmt = av_find_input_format(name);
ret = avformat_open_input(&inFmtCtx, "Capture screen 0:", inFmt, &inOptions);
if (ret < 0) {
printf("Could not load the context for the input device\n");
return -1;
}
if (avformat_find_stream_info(inFmtCtx, NULL) < 0) {
printf("Could not find stream info for screen\n");
return -1;
}
av_dump_format(inFmtCtx, 0, "Capture screen 0", 0);
// inFmtCtx->streams is an array of pointers of size inFmtCtx->nb_stream
videostream = av_find_best_stream(inFmtCtx, AVMEDIA_TYPE_VIDEO, -1, -1, &inCodec, 0);
if (videostream == -1) {
printf("no video stream found\n");
return -1;
} else {
printf("%s is inCodec\n", inCodec->long_name);
}
inCodecCtx = inFmtCtx->streams[videostream]->codec;
// open codec
if (avcodec_open2(inCodecCtx, inCodec, NULL) < 0) { // error returns are negative, not positive
printf("Couldn't open codec");
return -1; // couldn't open codec
}
//setup output params
outFmt = av_guess_format(NULL, streamURL, NULL);
if(outFmt == NULL) {
printf("output format was not guessed properly");
return -1;
}
if ((outFmtCtx = avformat_alloc_context()) == NULL) { // returns a pointer, so test for NULL rather than < 0
printf("output context not allocated. ERROR");
return -1;
}
printf("%s", outFmt->long_name);
outFmtCtx->oformat = outFmt;
snprintf(outFmtCtx->filename, sizeof(outFmtCtx->filename), "%s", streamURL); // literal "%s" avoids treating the URL as a format string
printf("%s\n", outFmtCtx->filename);
outCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
if(!outCodec) {
printf("could not find encoder for H264 \n" );
return -1;
}
stream = avformat_new_stream(outFmtCtx, outCodec);
outCodecCtx = stream->codec;
avcodec_get_context_defaults3(outCodecCtx, outCodec);
outCodecCtx->codec_id = AV_CODEC_ID_H264;
outCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
outCodecCtx->flags = CODEC_FLAG_GLOBAL_HEADER;
outCodecCtx->width = inCodecCtx->width;
outCodecCtx->height = inCodecCtx->height;
outCodecCtx->time_base.den = 25;
outCodecCtx->time_base.num = 1;
outCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
outCodecCtx->gop_size = 50;
outCodecCtx->bit_rate = 400000;
//setup output encoders etc
if(stream) {
ret = avcodec_open2(outCodecCtx, outCodec, NULL);
if (ret < 0) {
printf("Could not open output encoder");
return -1;
}
}
if (avio_open(&outFmtCtx->pb, outFmtCtx->filename, AVIO_FLAG_WRITE ) < 0) {
perror("url_fopen failed");
}
avio_open_dyn_buf(&outFmtCtx->pb);
ret = avformat_write_header(outFmtCtx, NULL);
if (ret != 0) {
printf("was not able to write header to output format");
return -1;
}
unsigned char *pb_buffer;
int len = avio_close_dyn_buf(outFmtCtx->pb, (unsigned char **)(&pb_buffer));
avio_write(outFmtCtx->pb, (unsigned char *)pb_buffer, len);
numBytes = avpicture_get_size(PIX_FMT_UYVY422, inCodecCtx->width, inCodecCtx->height);
// Allocate video frame
inFrame = av_frame_alloc();
swsCtx = sws_getContext(inCodecCtx->width, inCodecCtx->height, inCodecCtx->pix_fmt, inCodecCtx->width,
inCodecCtx->height, PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
int frame_count = 0;
while(av_read_frame(inFmtCtx, &inPacket) >= 0) {
if(inPacket.stream_index == videostream) {
avcodec_decode_video2(inCodecCtx, inFrame, &frameFinished, &inPacket);
// 1 Frame might need more than 1 packet to be filled
if(frameFinished) {
outFrameYUV = av_frame_alloc();
uint8_t *buffer = (uint8_t *)av_malloc(numBytes);
int ret = avpicture_fill((AVPicture *)outFrameYUV, buffer, PIX_FMT_YUV420P,
inCodecCtx->width, inCodecCtx->height);
if(ret < 0){
printf("%d is return val for fill\n", ret);
return -1;
}
//convert image to YUV
sws_scale(swsCtx, (uint8_t const * const* )inFrame->data,
inFrame->linesize, 0, inCodecCtx->height,
outFrameYUV->data, outFrameYUV->linesize);
//outFrameYUV now holds the YUV scaled frame/picture
outFrameYUV->format = outCodecCtx->pix_fmt;
outFrameYUV->width = outCodecCtx->width;
outFrameYUV->height = outCodecCtx->height;
AVPacket pkt;
int got_output;
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
outFrameYUV->pts = frame_count;
ret = avcodec_encode_video2(outCodecCtx, &pkt, outFrameYUV, &got_output);
if (ret < 0) {
fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
return -1;
}
if(got_output) {
if(stream->codec->coded_frame->key_frame) {
pkt.flags |= AV_PKT_FLAG_KEY;
}
pkt.stream_index = stream->index;
if(pkt.pts != AV_NOPTS_VALUE)
pkt.pts = av_rescale_q(pkt.pts, stream->codec->time_base, stream->time_base);
if(pkt.dts != AV_NOPTS_VALUE)
pkt.dts = av_rescale_q(pkt.dts, stream->codec->time_base, stream->time_base);
if(avio_open_dyn_buf(&outFmtCtx->pb)!= 0) {
printf("ERROR: Unable to open dynamic buffer\n");
}
ret = av_interleaved_write_frame(outFmtCtx, &pkt);
unsigned char *pb_buffer;
int len = avio_close_dyn_buf(outFmtCtx->pb, (unsigned char **)&pb_buffer);
avio_write(outFmtCtx->pb, (unsigned char *)pb_buffer, len);
} else {
ret = 0;
}
if(ret != 0) {
fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
exit(1);
}
fprintf(stderr, "encoded frame #%d\n", frame_count);
frame_count++;
av_free_packet(&pkt);
av_frame_free(&outFrameYUV);
av_free(buffer);
}
}
av_free_packet(&inPacket);
}
av_write_trailer(outFmtCtx);
//close video stream
if(stream) {
avcodec_close(outCodecCtx);
}
for (i = 0; i < outFmtCtx->nb_streams; i++) {
av_freep(&outFmtCtx->streams[i]->codec);
av_freep(&outFmtCtx->streams[i]);
}
if (!(outFmt->flags & AVFMT_NOFILE))
/* Close the output file. */
avio_close(outFmtCtx->pb);
/* free the output format context */
avformat_free_context(outFmtCtx);
// Free the YUV frame populated by the decoder
av_free(inFrame);
// Close the video codec (decoder)
avcodec_close(inCodecCtx);
// Close the input video file
avformat_close_input(&inFmtCtx);
return 1;
}
I'm not sure what I've done wrong here. But what I've observed is that for each frame that's been encoded, my memory usage goes up by about 6 MB. Backtracking afterward usually leads to one of the following two culprits:
- avf_read_frame function in avfoundation.m
- av_dup_packet function in avpacket.h
Can I also get advice on the way I'm using the avio_open_dyn_buf function to stream over HTTP? I've also attached my ffmpeg library versions below:
ffmpeg version N-70876-g294bb6c Copyright (c) 2000-2015 the FFmpeg developers
built with Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn)
configuration: --prefix=/usr/local --enable-gpl --enable-postproc --enable-pthreads --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-libvorbis --disable-mmx --disable-ssse3 --disable-armv5te --disable-armv6 --disable-neon --enable-shared --disable-static --disable-stripping
libavutil 54. 20.100 / 54. 20.100
libavcodec 56. 29.100 / 56. 29.100
libavformat 56. 26.101 / 56. 26.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 13.101 / 5. 13.101
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Hyper fast Audio and Video encoder
Valgrind analysis is attached here because I exceeded Stack Overflow's character limit: http://pastebin.com/MPeRhjhN
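Since the abort fires inside a capture thread long after the corrupting write, a sanitizer build (my suggestion, not from the original post) usually reports the first bad access directly instead of a later malloc checksum failure. A hedged sketch, reusing the link flags from the compile comment in the source:

```shell
# Sketch (assumption): rebuild with AddressSanitizer so use-after-free /
# heap overflows are caught at the faulting access. Guarded so it only
# runs where clang and the poster's test.c are present.
SAN_FLAGS="-g -fsanitize=address"
if command -v clang >/dev/null 2>&1 && [ -f test.c ]; then
  clang $SAN_FLAGS -o stream test.c \
    -lavformat -lavutil -lavcodec -lavdevice -lswscale
  ASAN_OPTIONS=halt_on_error=1 ./stream
fi
echo "sanitizer flags: $SAN_FLAGS"
```

This would also catch the kind of modify-after-free that the malloc error message describes, with a full stack for both the free and the later write.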
-
Convert wmv to mp4 with ffmpeg failing
10 January 2012, by Morph
I've seen quite a few posts on this, but I can't piece together whether I am doing things right, doing them wrong, or need to download more components. I convert from wmv to mp4 without complaints, but when I play the result in a browser window (HTML5) the player just turns grey and the controls blank out.
Installing ffmpeg, I do:
./configure --disable-yasm ; make ; make install
Unless I include --disable-yasm it won't go any further. Then I do:
ffmpeg -i myvideo.wmv myvideo.mp4
All good so far. In my HTML source I have:
<video width="320" height="240" controls="controls">
<source src="myvideo.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'></source>
Your browser does not support the video tag.
</video>I am playing this in Chrome 15 and
ffmpeg -v
isffmpeg version 0.8.6, Copyright (c) 2000-2011 the FFmpeg developers
built on Dec 1 2011 15:42:06 with gcc 4.1.2 20080704 (Red Hat 4.1.2-51)
configuration: --disable-yasm
libavutil 51. 9. 1 / 51. 9. 1
libavcodec 53. 7. 0 / 53. 7. 0
libavformat 53. 4. 0 / 53. 4. 0
libavdevice 53. 1. 1 / 53. 1. 1
libavfilter 2. 23. 0 / 2. 23. 0
libswscale 2. 0. 0 / 2. 0. 0
So I get the HTML5 player, click on it to play the movie, but then the control bar greys out, leaving the play button; the play button cannot be clicked and nothing plays.
Is there something wrong with what I have done above? Do I need to download some separate mp4 encoder and compile it in? I see people referring to h.264, but I thought ffmpeg already included that...
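A hedged reading (my diagnosis, not confirmed in the post): the configuration shown is only --disable-yasm, with no --enable-libx264, so this ffmpeg 0.8 build has no H.264 encoder and a plain wmv-to-mp4 conversion falls back to the native MPEG-4 Part 2 encoder. The HTML declares avc1.42E01E (H.264), which Chrome then refuses to play. A sketch of how one might verify and fix this, assuming ffmpeg is rebuilt with --enable-gpl --enable-libx264:

```shell
# Sketch (assumption): check what codec the produced file actually uses,
# then re-encode with libx264. "-acodec aac -strict experimental" is the
# era-appropriate AAC incantation for ffmpeg 0.8; adjust to your build.
if command -v ffprobe >/dev/null 2>&1 && [ -f myvideo.mp4 ]; then
  ffprobe myvideo.mp4 2>&1 | grep 'Video:'   # "mpeg4" here would explain the grey player
fi
if command -v ffmpeg >/dev/null 2>&1 && [ -f myvideo.wmv ]; then
  ffmpeg -i myvideo.wmv -vcodec libx264 -acodec aac -strict experimental myvideo.mp4
fi
echo "target codecs: h264/aac"
```

If ffprobe shows mpeg4 rather than h264, the file simply does not match the codecs attribute in the video tag, regardless of the .mp4 extension.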