
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (86)
-
Customisable image and logo sizes
9 February 2011
In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can vary from one theme to another, they can be defined directly in the theme, which spares the user from having to reconfigure them manually after changing the site's appearance.
These image sizes are also available in MediaSPIP Core's specific configuration. The maximum size of the site logo in pixels (...)
-
No talk of markets, clouds, etc.
10 April 2011
The vocabulary used on this site tries to avoid any reference to the buzzwords that flourish so freely on web 2.0 and in the companies that make a living from it.
You are therefore invited to banish the use of terms such as "Brand", "Cloud", "Market", etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages the sharing of creations on the Internet and lets authors keep the greatest possible autonomy.
No "Gold or Premium contract" is therefore planned, and no (...)
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
images: png, gif, jpg, bmp and more
audio: MP3, Ogg, Wav and more
video: AVI, MP4, OGV, mpg, mov, wmv and more
text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (8457)
-
Streaming audio + still image using ffmpeg
21 June 2017, by AEnimaH
I'm streaming an audio source to Azure Media Services over RTMP from a Raspberry Pi. This works great with this command:
ffmpeg -f alsa -ac 1 -i hw:1,0 -c:a aac -b:a 128k -ar 44100 -f flv rtmp://...
But now we want to add a still image to this FLV container, because Azure Media Player requires a video component.
So I added video with this command:
ffmpeg -loop 1 -i image.png -f alsa -ac 1 -i hw:1,0 -c:a aac -b:a 128k -ar 44100 -g 50 -pix_fmt yuv420p -f flv rtmp://...
Yet it crashes with the following log:
ffmpeg version N-86429-gf8593c2 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --prefix=/home/pi/ffmpeg/dependencies/output --enable-gpl --enable-libx264 --enable-libmp3lame --enable-nonfree --extra-cflags=-I/home/pi/ffmpeg/dependencies/output/include --extra-ldflags=-L/home/pi/ffmpeg/dependencies/output/lib --extra-libs='-lx264 -lpthread -lm -ldl'
libavutil 55. 63.100 / 55. 63.100
libavcodec 57. 98.100 / 57. 98.100
libavformat 57. 73.100 / 57. 73.100
libavdevice 57. 7.100 / 57. 7.100
libavfilter 6. 91.100 / 6. 91.100
libswscale 4. 7.101 / 4. 7.101
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100
Input #0, png_pipe, from 'image.png':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: png, rgba(pc), 600x400 [SAR 5669:5669 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc
Guessed Channel Layout for Input Stream #1.0 : mono
Input #1, alsa, from 'hw:1,0':
Duration: N/A, start: 1497522851.947685, bitrate: 768 kb/s
Stream #1:0: Audio: pcm_s16le, 48000 Hz, mono, s16, 768 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (png (native) -> flv1 (flv))
Stream #1:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
[alsa @ 0x18e3330] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
Output #0, flv, to 'rtmp://aptus2-aptus.channel.mediaservices.windows.net:1935/live/b962f6df8d3b4e97a608981d019faaa4/default':
Metadata:
encoder : Lavf57.73.100
Stream #0:0: Video: flv1 (flv) ([2][0][0][0] / 0x0002), yuv420p(progressive), 600x400 [SAR 1:1 DAR 3:2], q=2-31, 200 kb/s, 25 fps, 1k tbn, 25 tbc
Metadata:
encoder : Lavc57.98.100 flv
Side data:
cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
Stream #0:1: Audio: aac (LC) ([10][0][0][0] / 0x000A), 44100 Hz, mono, fltp, 128 kb/s
Metadata:
encoder : Lavc57.98.100 aac
frame= 3 fps=0.0 q=2.0 size= 56kB time=00:00:00.08 bitrate=5700.0kbits/s
frame= 12 fps= 12 q=2.0 size= 66kB time=00:00:00.44 bitrate=1227.2kbits/s
frame= 21 fps= 14 q=2.0 size= 72kB time=00:00:00.80 bitrate= 739.8kbits/s
frame= 29 fps= 14 q=2.0 size= 79kB time=00:00:01.12 bitrate= 573.9kbits/s
[alsa @ 0x18e3330] ALSA buffer xrun.
frame= 39 fps= 15 q=2.0 size= 85kB time=00:00:02.81 bitrate= 246.9kbits/s
frame= 50 fps= 16 q=2.0 size= 86kB time=00:00:02.81 bitrate= 251.4kbits/s
frame= 60 fps= 17 q=2.0 size= 157kB time=00:00:02.81 bitrate= 457.1kbits/s
frame= 71 fps= 17 q=2.0 size= 159kB time=00:00:02.81 bitrate= 462.2kbits/s
[alsa @ 0x18e3330] ALSA buffer xrun.
frame= 81 fps= 18 q=2.0 size= 161kB time=00:00:04.57 bitrate= 289.0kbits/s
frame= 92 fps= 18 q=2.0 size= 163kB time=00:00:04.57 bitrate= 291.8kbits/s
frame= 103 fps= 18 q=2.0 size= 234kB time=00:00:04.57 bitrate= 418.5kbits/s
frame= 114 fps= 19 q=2.0 size= 235kB time=00:00:04.57 bitrate= 421.3kbits/s
[alsa @ 0x18e3330] ALSA buffer xrun.
frame= 124 fps= 19 q=2.0 size= 238kB time=00:00:06.66 bitrate= 292.8kbits/s
frame= 135 fps= 19 q=2.0 size= 240kB time=00:00:06.66 bitrate= 294.7kbits/s
frame= 146 fps= 19 q=2.0 size= 241kB time=00:00:06.66 bitrate= 296.6kbits/s
av_interleaved_write_frame(): Broken pipe
[flv @ 0x18f9ea0] Failed to update header with correct duration.
[flv @ 0x18f9ea0] Failed to update header with correct filesize.
Error writing trailer of rtmp://aptus2-aptus.channel.mediaservices.windows.net:1935/live/b962f6df8d3b4e97a608981d019faaa4/default: Broken pipe
frame= 151 fps= 15 q=1.6 Lsize= 306kB time=00:00:06.70 bitrate= 373.5kbits/s speed=0.644x
video:284kB audio:24kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[aac @ 0x18fe6e0] Qavg: 3428.370
Conversion failed!
What am I doing wrong? Any help would be awesome!
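Two hints in the log above are worth acting on: the "consider raising the thread_queue_size option" warning on the ALSA input, and the fact that a looped image is read as fast as possible unless -re throttles it to real time. A hedged sketch of an adjusted command follows; the ALSA device, audio settings and truncated RTMP URL are the question's own, while libx264 and -tune stillimage are substitutions of mine, not something the log confirms:

```shell
# Sketch only: same pipeline as in the question, with a larger queue on
# the audio input thread and real-time reading of the looped image.
# rtmp://... is the question's elided URL; libx264 and -tune stillimage
# are assumptions, not taken from the original command.
ffmpeg -re -loop 1 -i image.png \
       -f alsa -thread_queue_size 1024 -ac 1 -i hw:1,0 \
       -c:v libx264 -tune stillimage -g 50 -pix_fmt yuv420p \
       -c:a aac -b:a 128k -ar 44100 \
       -f flv rtmp://...
```

Note that -thread_queue_size is an input option, so it must appear before the -i it applies to.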
-
The encoding of ffmpeg does not work on iOS
25 May 2017, by Deric
I would like to send an encoded stream using ffmpeg.
The transfer code below, which re-encodes the incoming stream, does not work: before re-encoding, the packets play fine in VLC, but the re-encoded packets do not.
I do not know what's wrong. Please help me.
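Before debugging the API calls, it can help to confirm the intended pipeline works at all: the code pulls from one RTMP URL, re-encodes the video to H.264 and pushes it to another RTMP URL, and never writes audio. A rough command-line equivalent, with the question's elided URLs left as placeholders, would be:

```shell
# Command-line sketch of the pipeline the code implements: RTMP in,
# H.264 re-encode with the same knobs the code sets (800k, yuv420p,
# preset slow, tune zerolatency), FLV over RTMP out, no audio.
# Both rtmp:// URLs are the question's placeholders.
ffmpeg -i "rtmp://" \
       -c:v libx264 -preset slow -tune zerolatency -b:v 800k -pix_fmt yuv420p \
       -an \
       -f flv "rtmp://"
```

If this one-liner produces a playable stream, the problem lies in the code's packet handling rather than in the endpoints.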
av_register_all();
avformat_network_init();
AVOutputFormat *ofmt = NULL;
//Input AVFormatContext and Output AVFormatContext
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
AVPacket pkt;
//const char *in_filename, *out_filename;
int ret, i;
int videoindex=-1;
int frame_index=0;
int64_t start_time=0;
av_register_all();
//Network
avformat_network_init();
//Input
if ((ret = avformat_open_input(&ifmt_ctx, "rtmp://", 0, 0)) < 0) {
printf( "Could not open input file.");
}
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
printf( "Failed to retrieve input stream information");
}
AVCodecContext *context = NULL;
for(i=0; i < ifmt_ctx->nb_streams; i++) {
if(ifmt_ctx->streams[i]->codecpar->codec_type==AVMEDIA_TYPE_VIDEO){
videoindex=i;
AVCodecParameters *params = ifmt_ctx->streams[i]->codecpar;
AVCodec *codec = avcodec_find_decoder(params->codec_id);
if (codec == NULL) { return; };
context = avcodec_alloc_context3(codec);
if (context == NULL) { return; };
ret = avcodec_parameters_to_context(context, params);
if(ret < 0){
avcodec_free_context(&context);
}
context->framerate = av_guess_frame_rate(ifmt_ctx, ifmt_ctx->streams[i], NULL);
ret = avcodec_open2(context, codec, NULL);
if(ret < 0) {
NSLog(@"avcodec open2 error");
avcodec_free_context(&context);
}
break;
}
}
av_dump_format(ifmt_ctx, 0, "rtmp://", 0);
//Output
avformat_alloc_output_context2(&ofmt_ctx, NULL, "flv", "rtmp://"); //RTMP
//avformat_alloc_output_context2(&ofmt_ctx, NULL, "mpegts", out_filename);//UDP
if (!ofmt_ctx) {
printf( "Could not create output context\n");
ret = AVERROR_UNKNOWN;
}
ofmt = ofmt_ctx->oformat;
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
//Create output AVStream according to input AVStream
AVStream *in_stream = ifmt_ctx->streams[i];
AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
if (!out_stream) {
printf( "Failed allocating output stream\n");
ret = AVERROR_UNKNOWN;
}
out_stream->time_base = in_stream->time_base;
//Copy the settings of AVCodecContext
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
if (ret < 0) {
printf( "Failed to copy context from input to output stream codec context\n");
}
out_stream->codecpar->codec_tag = 0;
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER) {
out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
}
//Dump Format------------------
av_dump_format(ofmt_ctx, 0, "rtmp://", 1);
//Open output URL
if (!(ofmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&ofmt_ctx->pb, "rtmp://", AVIO_FLAG_WRITE);
if (ret < 0) {
printf( "Could not open output URL ");
}
}
//Write file header
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
printf( "Error occurred when opening output URL\n");
}
// Encoding
AVCodec *codec;
AVCodecContext *c;
AVStream *video_st = avformat_new_stream(ofmt_ctx, 0);
video_st->time_base.num = 1;
video_st->time_base.den = 25;
if(video_st == NULL){
NSLog(@"video stream error");
}
codec = avcodec_find_encoder(AV_CODEC_ID_H264);
if(!codec){
NSLog(@"avcodec find encoder error");
}
c = avcodec_alloc_context3(codec);
if(!c){
NSLog(@"avcodec alloc context error");
}
c->profile = FF_PROFILE_H264_BASELINE;
c->width = ifmt_ctx->streams[videoindex]->codecpar->width;
c->height = ifmt_ctx->streams[videoindex]->codecpar->height;
c->time_base.num = 1;
c->time_base.den = 25;
c->bit_rate = 800000;
//c->time_base = { 1,22 };
c->pix_fmt = AV_PIX_FMT_YUV420P;
c->thread_count = 2;
c->thread_type = 2;
AVDictionary *param = 0;
av_dict_set(&param, "preset", "slow", 0);
av_dict_set(&param, "tune", "zerolatency", 0);
if (avcodec_open2(c, codec, NULL) < 0) {
fprintf(stderr, "Could not open codec\n");
}
AVFrame *pFrame = av_frame_alloc();
start_time=av_gettime();
while (1) {
AVPacket encoded_pkt;
av_init_packet(&encoded_pkt);
encoded_pkt.data = NULL;
encoded_pkt.size = 0;
AVStream *in_stream, *out_stream;
//Get an AVPacket
ret = av_read_frame(ifmt_ctx, &pkt);
if (ret < 0) {
break;
}
//FIX:No PTS (Example: Raw H.264)
//Simple Write PTS
if(pkt.pts==AV_NOPTS_VALUE){
//Write PTS
AVRational time_base1=ifmt_ctx->streams[videoindex]->time_base;
//Duration between 2 frames (us)
int64_t calc_duration=(double)AV_TIME_BASE/av_q2d(ifmt_ctx->streams[videoindex]->r_frame_rate);
//Parameters
pkt.pts=(double)(frame_index*calc_duration)/(double)(av_q2d(time_base1)*AV_TIME_BASE);
pkt.dts=pkt.pts;
pkt.duration=(double)calc_duration/(double)(av_q2d(time_base1)*AV_TIME_BASE);
}
//Important:Delay
if(pkt.stream_index==videoindex){
AVRational time_base=ifmt_ctx->streams[videoindex]->time_base;
AVRational time_base_q={1,AV_TIME_BASE};
int64_t pts_time = av_rescale_q(pkt.dts, time_base, time_base_q);
int64_t now_time = av_gettime() - start_time;
if (pts_time > now_time) {
av_usleep(pts_time - now_time);
}
}
in_stream = ifmt_ctx->streams[pkt.stream_index];
out_stream = ofmt_ctx->streams[pkt.stream_index];
/* copy packet */
//Convert PTS/DTS
//pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
//pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
pkt.pos = -1;
//Print to Screen
if(pkt.stream_index==videoindex){
//printf("Send %8d video frames to output URL\n",frame_index);
frame_index++;
}
// Decode and Encode
if(pkt.stream_index == videoindex) {
ret = avcodec_send_packet(context, &pkt);
if(ret<0){
NSLog(@"avcode send packet error");
}
ret = avcodec_receive_frame(context, pFrame);
if(ret<0){
NSLog(@"avcodec receive frame error");
}
ret = avcodec_send_frame(c, pFrame);
if(ret < 0){
NSLog(@"avcodec send frame - %s", av_err2str(ret));
}
ret = avcodec_receive_packet(c, &encoded_pkt);
if(ret < 0){
NSLog(@"avcodec receive packet error");
}
}
//ret = av_write_frame(ofmt_ctx, &pkt);
//encoded_pkt.stream_index = pkt.stream_index;
av_packet_rescale_ts(&encoded_pkt, c->time_base, ofmt_ctx->streams[videoindex]->time_base);
ret = av_interleaved_write_frame(ofmt_ctx, &encoded_pkt);
if (ret < 0) {
printf( "Error muxing packet\n");
break;
}
av_packet_unref(&encoded_pkt);
av_free_packet(&pkt);
}
//Write file trailer
av_write_trailer(ofmt_ctx);
-
ffmpeg - av_interleaved_write_frame(): Unknown error, Error number -10053
1 February 2021, by Ram
Using ffmpeg, I am able to stream video. I wrote a Java socket program that connects over TCP and reads the live stream.

After my Java program finishes recording, ffmpeg stops unexpectedly with the following error. I would like to keep ffmpeg running the whole time.
av_interleaved_write_frame(): Unknown error
Error writing trailer of tcp://XX.XX.XX.XX:5800?listen: Error number -10053 occurred

The ffmpeg command I am using:
ffmpeg.exe -f gdigrab -an -sn -framerate 7 -i desktop -video_size 800x600 -c:v libx264 -preset ultrafast -maxrate 4000k -crf 40 -bufsize 8000k -probesize 32 -f mpegts tcp://xx.xx.xx.xx:5800?listen




Java code:



DataInputStream ois = new DataInputStream(new BufferedInputStream(socket.getInputStream()));
while((n = ois.read(buf)) != -1 && n!=3 ){
 fos.write(buf,0,n);
 fos.flush();
 cnt--;
 if(cnt ==0) break;
}
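Error 10053 is Winsock's WSAECONNABORTED: when the Java client closes the TCP connection, ffmpeg's write fails and it exits. With ?listen, ffmpeg serves a single connection and then terminates, so if restarting the capture for each new client is acceptable, one hedged workaround is simply to relaunch it in a loop (sketch only, reusing the question's command and placeholder address):

```shell
# Relaunch the screen-capture listener after each client disconnect.
# Assumes a POSIX shell is available; on plain Windows, a batch file
# with a "goto loop" achieves the same effect.
while true; do
  ffmpeg -f gdigrab -an -sn -framerate 7 -i desktop -video_size 800x600 \
         -c:v libx264 -preset ultrafast -maxrate 4000k -crf 40 -bufsize 8000k \
         -probesize 32 -f mpegts "tcp://xx.xx.xx.xx:5800?listen"
  sleep 1
done
```

The inner command is the question's verbatim; only the surrounding restart loop is added.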