
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (54)
-
Websites made with MediaSPIP
2 May 2011, by
This page lists some websites based on MediaSPIP.
-
Changing your graphic theme
22 February 2011, by
The graphic theme does not affect the actual layout of the elements on the page. It only modifies the appearance of those elements.
The placement can indeed be modified, but such a modification is purely visual and does not change the semantic structure of the page.
Changing the graphic theme in use
To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
Then simply go to the configuration area of the (...) -
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
On other sites (7584)
-
Stream RTP packets to FFMPEG [duplicate]
21 March 2017, by Johnathan Kanarek
This question already has an answer here:
-
Stream RTP to FFMPEG using SDP
1 answer
I receive an RTP stream from a WebRTC server (I used mediasoup) via node.js, and I get the raw data of the decrypted RTP packets from that stream. I want to forward this RTP data to ffmpeg. I create an SDP file that describes both the audio and video streams and send the packets over UDP.
The SDP:
v=0
o=mediasoup 7199daf55e496b370e36cd1d25b1ef5b9dff6858 0 IN IP4 192.168.193.182
s=7199daf55e496b370e36cd1d25b1ef5b9dff6858
c=IN IP4 192.168.193.182
t=0 0
m=audio 33400 RTP/AVP 111
a=rtpmap:111 /opus/48000
a=fmtp:111 minptime=10;useinbandfec=1
a=rtcp-fb:111 transport-cc
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
a=mid:audio
a=recvonly
m=video 33402 RTP/AVP 100
a=rtpmap:100 /VP8/90000
a=rtcp-fb:100 ccm fir
a=rtcp-fb:100 nack
a=rtcp-fb:100 nack pli
a=rtcp-fb:100 goog-remb
a=rtcp-fb:100 transport-cc
a=extmap:2 urn:ietf:params:rtp-hdrext:toffset
a=extmap:3 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time
a=extmap:4 urn:3gpp:video-orientation
a=extmap:5 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01
a=extmap:6 http://www.webrtc.org/experiments/rtp-hdrext/playout-delay
a=mid:video
a=recvonly
a=rtcp-mux
The command:
ffmpeg -loglevel debug -analyzeduration 2147483647 -probesize 2147483647 -protocol_whitelist file,crypto,udp,rtp -re -vcodec vp8 -acodec opus -i test.sdp -vcodec h264 -acodec aac -y output.mp4
The log:
ffmpeg version 3.2
Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-11)
configuration: --prefix=/opt/kaltura/ffmpeg-3.2 --libdir=/opt/kaltura/ffmpeg-3.2/lib --shlibdir=/opt/kaltura/ffmpeg-3.2/lib --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC -I/opt/kaltura/include' --extra-ldflags=-L/opt/kaltura/lib --disable-devices --enable-bzlib --enable-libgsm --enable-libmp3lame --enable-libschroedinger --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libx265 --enable-avisynth --enable-libxvid --enable-filter=movie --enable-avfilter --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libvpx --enable-libspeex --enable-libass --enable-postproc --enable-pthreads --enable-static --enable-shared --enable-gpl --disable-debug --disable-optimizations --enable-gpl --enable-pthreads --enable-swscale --enable-vdpau --enable-bzlib --disable-devices --enable-filter=movie --enable-version3 --enable-indev=lavfi --enable-x11grab
libavutil 55. 34.100 / 55. 34.100
libavcodec 57. 64.100 / 57. 64.100
libavformat 57. 56.100 / 57. 56.100
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
Splitting the commandline.
Reading option '-loglevel' ...
matched as option 'loglevel' (set logging level) with argument 'debug'.
Reading option '-analyzeduration' ...
matched as AVOption 'analyzeduration' with argument '2147483647'.
Reading option '-probesize' ...
matched as AVOption 'probesize' with argument '2147483647'.
Reading option '-protocol_whitelist' ...
matched as AVOption 'protocol_whitelist' with argument 'file,crypto,udp,rtp'.
Reading option '-re' ...
matched as option 're' (read input at native frame rate) with argument '1'.
Reading option '-vcodec' ...
matched as option 'vcodec' (force video codec ('copy' to copy stream)) with argument 'vp8'.
Reading option '-acodec' ...
matched as option 'acodec' (force audio codec ('copy' to copy stream)) with argument 'opus'.
Reading option '-i' ... matched as input file with argument 'test.sdp'.
Reading option '-vcodec' ... matched as option 'vcodec' (force video codec ('copy' to copy stream)) with argument 'h264'.
Reading option '-acodec' ... matched as option 'acodec' (force audio codec ('copy' to copy stream)) with argument 'aac'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option 'output.mp4' ... matched as output file.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option loglevel (set logging level) with argument debug.
Applying option y (overwrite output files) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input file test.sdp.
Applying option re (read input at native frame rate) with argument 1.
Applying option vcodec (force video codec ('copy' to copy stream)) with argument vp8.
Applying option acodec (force audio codec ('copy' to copy stream)) with argument opus.
Successfully parsed a group of options.
Opening an input file: test.sdp.
[sdp @ 0xb1ef00] Format sdp probed with size=2048 and score=50
[sdp @ 0xb1ef00] audio codec set to: (null)
[sdp @ 0xb1ef00] audio samplerate set to: 44100
[sdp @ 0xb1ef00] audio channels set to: 1
[sdp @ 0xb1ef00] video codec set to: (null)
[udp @ 0xb21940] end receive buffer size reported is 131072
[udp @ 0xb21660] end receive buffer size reported is 131072
[sdp @ 0xb1ef00] setting jitter buffer size to 500
[udp @ 0xb21da0] end receive buffer size reported is 131072
[udp @ 0xb22060] end receive buffer size reported is 131072
[sdp @ 0xb1ef00] setting jitter buffer size to 500
[sdp @ 0xb1ef00] Before avformat_find_stream_info() pos: 889 bytes read:889 seeks:0 nb_streams:2
[vp8 @ 0xb27600] Header size larger than data provided
Last message repeated 2 times
[sdp @ 0xb1ef00] Non-increasing DTS in stream 1: packet 2 with DTS 0, packet 3 with DTS 0
[vp8 @ 0xb27600] Header size larger than data provided
... repeats many times until I kill the socket ...
Last message repeated 1 times
[sdp @ 0xb1ef00] Non-increasing DTS in stream 1: packet 273 with DTS 553050, packet 274 with DTS 553050
[vp8 @ 0xb27600] Header size larger than data provided
received id=7199daf55e496b370e36cd1d25b1ef5b9dff6858 type=bye
PeerConnection close. id=7199daf55e496b370e36cd1d25b1ef5b9dff6858
-- PeerConnection.closed, err: undefined
-- peers in the room = 0
[sdp @ 0xb1ef00] decoding for stream 1 failed
[sdp @ 0xb1ef00] Could not find codec parameters for stream 1 (Video: vp8, 1 reference frame, yuv420p): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
[sdp @ 0xb1ef00] After avformat_find_stream_info() pos: 889 bytes read:889 seeks:0 frames:584
Input #0, sdp, from 'test.sdp':
Metadata:
title : 7199daf55e496b370e36cd1d25b1ef5b9dff6858
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0, 309, 1/90000: Audio: opus, 48000 Hz, mono, fltp
Stream #0:1, 275, 1/90000: Video: vp8, 1 reference frame, yuv420p, 90k tbr, 90k tbn, 90k tbc
Successfully opened the file.
Parsing a group of options: output file output.mp4.
Applying option vcodec (force video codec ('copy' to copy stream)) with argument h264.
Applying option acodec (force audio codec ('copy' to copy stream)) with argument aac.
Successfully parsed a group of options.
Opening an output file: output.mp4.
Matched encoder 'libx264' for codec 'h264'.
[file @ 0xbc56e0]
Setting default whitelist 'file,crypto'
Successfully opened the file.
detected 1 logical cores
[graph 0 input from stream 0:1 @ 0xb1eca0]
Setting 'video_size' to value '0x0'
[buffer @ 0xbc54e0]
Unable to parse option value "0x0" as image size
[graph 0 input from stream 0:1 @ 0xb1eca0]
Setting 'pix_fmt' to value '0'
[graph 0 input from stream 0:1 @ 0xb1eca0]
Setting 'time_base' to value '1/90000'
[graph 0 input from stream 0:1 @ 0xb1eca0] Setting 'pixel_aspect' to value '0/1'
[graph 0 input from stream 0:1 @ 0xb1eca0] Setting 'sws_param' to value 'flags=2'
[graph 0 input from stream 0:1 @ 0xb1eca0] Setting 'frame_rate' to value '90000/1'
[buffer @ 0xbc54e0] Unable to parse option value "0x0" as image size
[buffer @ 0xbc54e0] Error setting option video_size to value 0x0.
[graph 0 input from stream 0:1 @ 0xb1eca0] Error applying options to the filter.
Error opening filters!
[AVIOContext @ 0xbc57c0] Statistics: 0 seeks, 0 writeouts
[AVIOContext @ 0xb1f8c0]
Statistics: 889 bytes read, 0 seeks
As you can see, at the beginning of the log the SDP is parsed without the codecs being recognized:
Opening an input file: test.sdp.
[sdp @ 0xb1ef00] Format sdp probed with size=2048 and score=50
[sdp @ 0xb1ef00] audio codec set to: (null)
[sdp @ 0xb1ef00] audio samplerate set to: 44100
[sdp @ 0xb1ef00] audio channels set to: 1
[sdp @ 0xb1ef00] video codec set to: (null)
Then it tries to read the packets from the sockets.
Only when I close the socket does ffmpeg continue parsing the SDP, this time finding the correct codec:
Opening an input file: test.sdp.
[sdp @ 0xb1ef00] Format sdp probed with size=2048 and score=50
[sdp @ 0xb1ef00] audio codec set to: (null)
[sdp @ 0xb1ef00] audio samplerate set to: 44100
[sdp @ 0xb1ef00] audio channels set to: 1
[sdp @ 0xb1ef00] video codec set to: (null)
I suspect that the "Non-increasing DTS" and "Header size larger than data provided" errors are caused by the packets being parsed with the wrong codec.
I checked the SDP order and it seems the same as in other examples I have.
Can someone suggest an explanation?
By the way, audio alone works fine, but I guess that is because of the simplicity of Opus.
Thanks.
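For reference, RFC 4566 defines the attribute as "a=rtpmap:<payload type> <encoding name>/<clock rate>[/<encoding parameters>]", so the leading slash in "/opus/48000" and "/VP8/90000" above would leave the encoding name empty. A minimal sketch of what well-formed rtpmap lines could look like for the same payload types, assuming Opus at 48 kHz stereo and VP8 with a 90 kHz clock:
a=rtpmap:111 opus/48000/2
a=rtpmap:100 VP8/90000
Whether those parameters match the actual mediasoup streams would still need to be checked.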
-
Stream RTP to FFMPEG using SDP
-
ffmpeg C api cutting a video when packet dts is greater than pts
10 March 2017, by TastyCatFood
Corrupted videos
While trying to cut a section out of one of my videos with the ffmpeg C API, using the code posted here: How to cut video with FFmpeg C API, ffmpeg spat out the log below:
D/logger: Loop count:9 out: pts:0 pts_time:0 dts:2002 dts_time:0.0333667 duration:2002 duration_time:0.0333667 stream_index:1
D/trim_video: Error muxing packet Invalid argument
ffmpeg considers an instruction to decompress a frame after it has been presented to be nonsense, which is, well... reasonable but strict.
My VLC player has no trouble finding the video and plays it, of course.
Note:
The code immediately below is C++, written to be compiled with g++ as I'm developing for Android. For C code, scroll down further.
My solution (g++):
extern "C" {
#include "libavformat/avformat.h"
#include "libavutil/mathematics.h"
#include "libavutil/timestamp.h"
static void log_packet(
const AVFormatContext *fmt_ctx,
const AVPacket *pkt, const char *tag,
long count=0)
{
printf("loop count %d pts:%f dts:%f duration:%f stream_index:%d\n",
count,
static_cast<double>(pkt->pts),
static_cast<double>(pkt->dts),
static_cast<double>(pkt->duration),
pkt->stream_index);
return;
}
int trimVideo(
const char* in_filename,
const char* out_filename,
double cutFrom,
double cutUpTo)
{
AVOutputFormat *ofmt = NULL;
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
AVPacket pkt;
int ret, i;
//jboolean copy = true;
//const char *in_filename = env->GetStringUTFChars(jstring_in_filename,&copy);
//const char *out_filename = env->GetStringUTFChars(jstring_out_filename,&copy);
long loopCount = 0;
av_register_all();
// Cutting may change the pts and dts of the resulting video;
// if frames in head position are removed.
// In the case like that, src stream's copy start pts
// need to be recorded and is used to compute the new pts value.
// e.g.
// new_pts = current_pts - trim_start_position_pts;
// nb-streams is the number of elements in AVFormatContext.streams.
// Initial pts value must be recorded for each stream.
//May be malloc and memset should be replaced with [].
int64_t *dts_start_from = NULL;
int64_t *pts_start_from = NULL;
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
printf( "Could not open input file '%s'", in_filename);
goto end;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
printf("Failed to retrieve input stream information");
goto end;
}
av_dump_format(ifmt_ctx, 0, in_filename, 0);
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
if (!ofmt_ctx) {
printf( "Could not create output context\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ofmt = ofmt_ctx->oformat;
//preparing streams
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
AVStream *in_stream = ifmt_ctx->streams[i];
AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
if (!out_stream) {
printf( "Failed allocating output stream\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
if (ret < 0) {
printf( "Failed to copy context from input to output stream codec context\n");
goto end;
}
out_stream->codec->codec_tag = 0;
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
av_dump_format(ofmt_ctx, 0, out_filename, 1);
if (!(ofmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
if (ret < 0) {
printf( "Could not open output file '%s'", out_filename);
goto end;
}
}
//preparing the header
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
printf( "Error occurred when opening output file\n");
goto end;
}
// av_seek_frame translates AV_TIME_BASE into an appropriate time base.
ret = av_seek_frame(ifmt_ctx, -1, cutFrom*AV_TIME_BASE, AVSEEK_FLAG_ANY);
if (ret < 0) {
printf( "Error seek\n");
goto end;
}
dts_start_from = static_cast<int64_t*>(
malloc(sizeof(int64_t) * ifmt_ctx->nb_streams));
memset(dts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
pts_start_from = static_cast<int64_t*>(
malloc(sizeof(int64_t) * ifmt_ctx->nb_streams));
memset(pts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
//writing
while (1) {
AVStream *in_stream, *out_stream;
//reading frame into pkt
ret = av_read_frame(ifmt_ctx, &pkt);
if (ret < 0)
break;
in_stream = ifmt_ctx->streams[pkt.stream_index];
out_stream = ofmt_ctx->streams[pkt.stream_index];
//if end reached
if (av_q2d(in_stream->time_base) * pkt.pts > cutUpTo) {
av_packet_unref(&pkt);
break;
}
// Recording the initial pts value for each stream
// Recording dts does not do the trick because AVPacket.dts values
// in some video files are larger than corresponding pts values
// and ffmpeg does not like it.
if (dts_start_from[pkt.stream_index] == 0) {
dts_start_from[pkt.stream_index] = pkt.pts;
printf("dts_initial_value: %f for stream index: %d \n",
static_cast<double>(dts_start_from[pkt.stream_index]),
pkt.stream_index
);
}
if (pts_start_from[pkt.stream_index] == 0) {
pts_start_from[pkt.stream_index] = pkt.pts;
printf( "pts_initial_value: %f for stream index %d\n",
static_cast<double>(pts_start_from[pkt.stream_index]),
pkt.stream_index);
}
log_packet(ifmt_ctx, &pkt, "in",loopCount);
/* Computes pts etc
* av_rescale_q_rend etc are countering changes in time_base between
* out_stream and in_stream, so regardless of time_base values for
* in and out streams, the rate at which frames are refreshed remains
* the same.
*
pkt.pts = pkt.pts * (in_stream->time_base/ out_stream->time_base)
As `time_base == 1/frame_rate`, the above is an equivalent of
(out_stream_frame_rate/in_stream_frame_rate)*pkt.pts where
frame_rate is the number of frames to be displayed per second.
AV_ROUND_PASS_MINMAX may set pts or dts to AV_NOPTS_VALUE
* */
pkt.pts =
av_rescale_q_rnd(
pkt.pts - pts_start_from[pkt.stream_index],
static_cast<AVRational>(in_stream->time_base),
static_cast<AVRational>(out_stream->time_base),
static_cast<AVRounding>(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
pkt.dts =
av_rescale_q_rnd(
pkt.dts - dts_start_from[pkt.stream_index],
static_cast<AVRational>(in_stream->time_base),
static_cast<AVRational>(out_stream->time_base),
static_cast<AVRounding>(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
if(pkt.dts>pkt.pts) pkt.dts = pkt.pts -1;
if(pkt.dts < 0) pkt.dts = 0;
if(pkt.pts < 0) pkt.pts = 0;
pkt.duration = av_rescale_q(
pkt.duration,
in_stream->time_base,
out_stream->time_base);
pkt.pos = -1;
log_packet(ofmt_ctx, &pkt, "out",loopCount);
// Writes to the file after buffering packets enough to generate a frame
// and probably sorting packets in dts order.
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
// ret = av_write_frame(ofmt_ctx, &pkt);
if (ret < 0) {
printf( "Error muxing packet %d \n", ret);
//continue;
break;
}
av_packet_unref(&pkt);
++loopCount;
}
//Writing end code?
av_write_trailer(ofmt_ctx);
end:
avformat_close_input(&ifmt_ctx);
if(dts_start_from)free(dts_start_from);
if(pts_start_from)free(pts_start_from);
/* close output */
if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
avio_closep(&ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
if (ret < 0 && ret != AVERROR_EOF) {
//printf( "Error occurred: %s\n", av_err2str(ret));
return 1;
}
return 0;
}
}
C compatible (the console says g++ but I'm sure this is C code):
#include "libavformat/avformat.h"
#include "libavutil/mathematics.h"
#include "libavutil/timestamp.h"
static void log_packet(
const AVFormatContext *fmt_ctx,
const AVPacket *pkt, const char *tag,
long count)
{
printf("loop count %d pts:%f dts:%f duration:%f stream_index:%d\n",
count,
(double)pkt->pts,
(double)pkt->dts,
(double)pkt->duration,
pkt->stream_index);
return;
}
int trimVideo(
const char* in_filename,
const char* out_filename,
double cutFrom,
double cutUpTo)
{
AVOutputFormat *ofmt = NULL;
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
AVPacket pkt;
int ret, i;
//jboolean copy = true;
//const char *in_filename = env->GetStringUTFChars(jstring_in_filename,&copy);
//const char *out_filename = env->GetStringUTFChars(jstring_out_filename,&copy);
long loopCount = 0;
av_register_all();
// Cutting may change the pts and dts of the resulting video;
// if frames in head position are removed.
// In the case like that, src stream's copy start pts
// need to be recorded and is used to compute the new pts value.
// e.g.
// new_pts = current_pts - trim_start_position_pts;
// nb-streams is the number of elements in AVFormatContext.streams.
// Initial pts value must be recorded for each stream.
//May be malloc and memset should be replaced with [].
int64_t *dts_start_from = NULL;
int64_t *pts_start_from = NULL;
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
printf( "Could not open input file '%s'", in_filename);
goto end;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
printf("Failed to retrieve input stream information");
goto end;
}
av_dump_format(ifmt_ctx, 0, in_filename, 0);
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
if (!ofmt_ctx) {
printf( "Could not create output context\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ofmt = ofmt_ctx->oformat;
//preparing streams
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
AVStream *in_stream = ifmt_ctx->streams[i];
AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
if (!out_stream) {
printf( "Failed allocating output stream\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
if (ret < 0) {
printf( "Failed to copy context from input to output stream codec context\n");
goto end;
}
out_stream->codec->codec_tag = 0;
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
av_dump_format(ofmt_ctx, 0, out_filename, 1);
if (!(ofmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
if (ret < 0) {
printf( "Could not open output file '%s'", out_filename);
goto end;
}
}
//preparing the header
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
printf( "Error occurred when opening output file\n");
goto end;
}
// av_seek_frame translates AV_TIME_BASE into an appropriate time base.
ret = av_seek_frame(ifmt_ctx, -1, cutFrom*AV_TIME_BASE, AVSEEK_FLAG_ANY);
if (ret < 0) {
printf( "Error seek\n");
goto end;
}
dts_start_from = (int64_t*)
malloc(sizeof(int64_t) * ifmt_ctx->nb_streams);
memset(dts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
pts_start_from = (int64_t*)
malloc(sizeof(int64_t) * ifmt_ctx->nb_streams);
memset(pts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
//writing
while (1) {
AVStream *in_stream, *out_stream;
//reading frame into pkt
ret = av_read_frame(ifmt_ctx, &pkt);
if (ret < 0)
break;
in_stream = ifmt_ctx->streams[pkt.stream_index];
out_stream = ofmt_ctx->streams[pkt.stream_index];
//if end reached
if (av_q2d(in_stream->time_base) * pkt.pts > cutUpTo) {
av_packet_unref(&pkt);
break;
}
// Recording the initial pts value for each stream
// Recording dts does not do the trick because AVPacket.dts values
// in some video files are larger than corresponding pts values
// and ffmpeg does not like it.
if (dts_start_from[pkt.stream_index] == 0) {
dts_start_from[pkt.stream_index] = pkt.pts;
printf("dts_initial_value: %f for stream index: %d \n",
(double)dts_start_from[pkt.stream_index],
pkt.stream_index
);
}
if (pts_start_from[pkt.stream_index] == 0) {
pts_start_from[pkt.stream_index] = pkt.pts;
printf( "pts_initial_value: %f for stream index %d\n",
(double)pts_start_from[pkt.stream_index],
pkt.stream_index);
}
log_packet(ifmt_ctx, &pkt, "in",loopCount);
/* Computes pts etc
* av_rescale_q_rend etc are countering changes in time_base between
* out_stream and in_stream, so regardless of time_base values for
* in and out streams, the rate at which frames are refreshed remains
* the same.
*
pkt.pts = pkt.pts * (in_stream->time_base/ out_stream->time_base)
As `time_base == 1/frame_rate`, the above is an equivalent of
(out_stream_frame_rate/in_stream_frame_rate)*pkt.pts where
frame_rate is the number of frames to be displayed per second.
AV_ROUND_PASS_MINMAX may set pts or dts to AV_NOPTS_VALUE
* */
pkt.pts =
av_rescale_q_rnd(
pkt.pts - pts_start_from[pkt.stream_index],
in_stream->time_base,
out_stream->time_base,
(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
pkt.dts =
av_rescale_q_rnd(
pkt.dts - dts_start_from[pkt.stream_index],
in_stream->time_base,
out_stream->time_base,
AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
if(pkt.dts>pkt.pts) pkt.dts = pkt.pts -1;
if(pkt.dts < 0) pkt.dts = 0;
if(pkt.pts < 0) pkt.pts = 0;
pkt.duration = av_rescale_q(
pkt.duration,
in_stream->time_base,
out_stream->time_base);
pkt.pos = -1;
log_packet(ofmt_ctx, &pkt, "out",loopCount);
// Writes to the file after buffering packets enough to generate a frame
// and probably sorting packets in dts order.
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
// ret = av_write_frame(ofmt_ctx, &pkt);
if (ret < 0) {
printf( "Error muxing packet %d \n", ret);
//continue;
break;
}
av_packet_unref(&pkt);
++loopCount;
}
//Writing end code?
av_write_trailer(ofmt_ctx);
end:
avformat_close_input(&ifmt_ctx);
if(dts_start_from)free(dts_start_from);
if(pts_start_from)free(pts_start_from);
/* close output */
if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
avio_closep(&ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
if (ret < 0 && ret != AVERROR_EOF) {
//printf( "Error occurred: %s\n", av_err2str(ret));
return 1;
}
return 0;
}
What is the problem
My code does not produce the error because I'm doing
new_dts = current_dts - initial_pts_for_current_stream
instead. That works, but now the dts values are not computed properly.
How do I recalculate dts properly?
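As a rough sketch of one possible approach (this is not the code above; rebase_timestamps and last_dts are hypothetical helper names): subtract a single per-stream offset from both pts and dts so that their original ordering is preserved, rescale with av_rescale_q_rnd, and only clamp as a last resort.
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

static void rebase_timestamps(AVPacket *pkt,
                              AVRational in_tb, AVRational out_tb,
                              int64_t start_ts,  /* first kept dts (or pts) of this stream, in in_tb units */
                              int64_t *last_dts) /* previous output dts, initialised to AV_NOPTS_VALUE */
{
    if (pkt->pts != AV_NOPTS_VALUE)
        pkt->pts = av_rescale_q_rnd(pkt->pts - start_ts, in_tb, out_tb,
                                    AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
    if (pkt->dts != AV_NOPTS_VALUE)
        pkt->dts = av_rescale_q_rnd(pkt->dts - start_ts, in_tb, out_tb,
                                    AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
    /* One shared offset keeps the original dts <= pts relation intact. */
    if (pkt->pts != AV_NOPTS_VALUE && pkt->dts != AV_NOPTS_VALUE && pkt->dts > pkt->pts)
        pkt->dts = pkt->pts;
    /* Muxers also expect dts to increase monotonically per stream. */
    if (pkt->dts != AV_NOPTS_VALUE && *last_dts != AV_NOPTS_VALUE && pkt->dts <= *last_dts)
        pkt->dts = *last_dts + 1;
    if (pkt->dts != AV_NOPTS_VALUE)
        *last_dts = pkt->dts;
    pkt->duration = av_rescale_q(pkt->duration, in_tb, out_tb);
    pkt->pos = -1;
}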
P.S.
Since Olaf seems to have a very strong opinion, I'm posting the build console messages for my main.c.
I don't really know C or C++, but GNU gcc seems to be calling gcc for compiling and g++ for linking.
Well, the extension of my main file is now .c and the compiler being called is gcc, so that should at least mean I have code written in the C language...
------------- Build: Debug in videoTrimmer (compiler: GNU GCC Compiler)---------------
gcc -Wall -fexceptions -std=c99 -g -I/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include -I/usr/include -I/usr/local/include -c /home/d/CodeBlockWorkplace/videoTrimmer/main.c -o obj/Debug/main.o
/home/d/CodeBlockWorkplace/videoTrimmer/main.c: In function ‘log_packet’:
/home/d/CodeBlockWorkplace/videoTrimmer/main.c:15:12: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘long int’ [-Wformat=]
printf("loop count %d pts:%f dts:%f duration:%f stream_index:%d\n",
^
/home/d/CodeBlockWorkplace/videoTrimmer/main.c: In function ‘trimVideo’:
/home/d/CodeBlockWorkplace/videoTrimmer/main.c:79:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
^
In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^
/home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘avcodec_copy_context’ is deprecated [-Wdeprecated-declarations]
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
^
In file included from /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:319:0,
from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:
/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavcodec/avcodec.h:4286:5: note: declared here
int avcodec_copy_context(AVCodecContext *dest, const AVCodecContext *src);
^
/home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
^
In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^
/home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
^
In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^
/home/d/CodeBlockWorkplace/videoTrimmer/main.c:91:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
out_stream->codec->codec_tag = 0;
^
In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^
/home/d/CodeBlockWorkplace/videoTrimmer/main.c:93:13: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
^
In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^
g++ -L/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib -L/usr/lib -L/usr/local/lib -o bin/Debug/videoTrimmer obj/Debug/main.o ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavformat.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavcodec.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavutil.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libswresample.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libswscale.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavfilter.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libpostproc.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavdevice.a -lX11 -lvdpau -lva -lva-drm -lva-x11 -ldl -lpthread -lz -llzma -lx264
Output file is bin/Debug/videoTrimmer with size 77.24 MB
Process terminated with status 0 (0 minute(s), 16 second(s))
0 error(s), 8 warning(s) (0 minute(s), 16 second(s)) -
error : ‘CODEC_TYPE_AUDIO’ undeclared when make m3u8-segmenter
28 February 2012, by why
I want to build m3u8-segmenter for HTTP Live Streaming: https://github.com/johnf/m3u8-segmenter
There are errors when I run make; the errors are:
gcc -g -O -Wall -Wstrict-prototypes -Wmissing-prototypes -Waggregate-return -Wcast-align -Wcast-qual -Wnested-externs -Wshadow -Wbad-function-cast -Wwrite-strings -Werror m3u8-segmenter.c -o m3u8-segmenter -lavformat -lavcodec -lavutil
m3u8-segmenter.c: In function ‘add_output_stream’:
m3u8-segmenter.c:82:14: error: ‘CODEC_TYPE_AUDIO’ undeclared (first use in this function)
m3u8-segmenter.c:82:14: note: each undeclared identifier is reported only once for each function it appears in
m3u8-segmenter.c:94:14: error: ‘CODEC_TYPE_VIDEO’ undeclared (first use in this function)
m3u8-segmenter.c: In function ‘main’:
m3u8-segmenter.c:338:5: error: ‘av_open_input_file’ is deprecated (declared at /usr/local/include/libavformat/avformat.h:1090) [-Werror=deprecated-declarations]
m3u8-segmenter.c:352:5: error: implicit declaration of function ‘guess_format’ [-Werror=implicit-function-declaration]
m3u8-segmenter.c:352:5: error: nested extern declaration of ‘guess_format’ [-Werror=nested-externs]
m3u8-segmenter.c:352:10: error: assignment makes pointer from integer without a cast [-Werror]
m3u8-segmenter.c:371:18: error: ‘CODEC_TYPE_VIDEO’ undeclared (first use in this function)
m3u8-segmenter.c:376:18: error: ‘CODEC_TYPE_AUDIO’ undeclared (first use in this function)
m3u8-segmenter.c:387:5: error: ‘av_set_parameters’ is deprecated (declared at /usr/local/include/libavformat/avformat.h:1434) [-Werror=deprecated-declarations]
m3u8-segmenter.c:392:5: error: ‘dump_format’ is deprecated (declared at /usr/local/include/libavformat/avformat.h:1559) [-Werror=deprecated-declarations]
m3u8-segmenter.c:406:5: error: ‘url_fopen’ is deprecated (declared at /usr/local/include/libavformat/avio.h:279) [-Werror=deprecated-declarations]
m3u8-segmenter.c:411:5: error: ‘av_write_header’ is deprecated (declared at /usr/local/include/libavformat/avformat.h:1492) [-Werror=deprecated-declarations]
m3u8-segmenter.c:444:67: error: ‘PKT_FLAG_KEY’ undeclared (first use in this function)
m3u8-segmenter.c:455:13: error: ‘put_flush_packet’ is deprecated (declared at /usr/local/include/libavformat/avio.h:293) [-Werror=deprecated-declarations]
m3u8-segmenter.c:456:13: error: ‘url_fclose’ is deprecated (declared at /usr/local/include/libavformat/avio.h:280) [-Werror=deprecated-declarations]
m3u8-segmenter.c:476:13: error: ‘url_fopen’ is deprecated (declared at /usr/local/include/libavformat/avio.h:279) [-Werror=deprecated-declarations]
m3u8-segmenter.c:482:13: error: ‘av_write_header’ is deprecated (declared at /usr/local/include/libavformat/avformat.h:1492) [-Werror=deprecated-declarations]
m3u8-segmenter.c:514:5: error: ‘url_fclose’ is deprecated (declared at /usr/local/include/libavformat/avio.h:280) [-Werror=deprecated-declarations]
cc1: all warnings being treated as errors
make: *** [all] Error 1
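For context, these identifiers were renamed or removed in later FFmpeg releases (CODEC_TYPE_* became AVMEDIA_TYPE_*, PKT_FLAG_KEY became AV_PKT_FLAG_KEY, guess_format became av_guess_format, and so on). A rough sketch of compatibility defines, assuming the goal is only to get the old source building against newer headers rather than properly porting it:
/* Sketch of mappings from pre-0.8 FFmpeg names to their current equivalents.
 * The deprecated calls flagged above (av_open_input_file, av_set_parameters,
 * url_fopen, put_flush_packet, url_fclose, av_write_header, dump_format)
 * still have to be ported in the code itself, e.g. to avformat_open_input,
 * avio_open, avio_flush, avio_close, avformat_write_header and av_dump_format. */
#ifndef CODEC_TYPE_AUDIO
#define CODEC_TYPE_AUDIO AVMEDIA_TYPE_AUDIO
#define CODEC_TYPE_VIDEO AVMEDIA_TYPE_VIDEO
#endif
#ifndef PKT_FLAG_KEY
#define PKT_FLAG_KEY AV_PKT_FLAG_KEY
#endif
#ifndef guess_format
#define guess_format av_guess_format
#endif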