Media (1)
-
Bee video in portrait
14 May 2011
Updated: February 2012
Language: French
Type: Video
Other articles (65)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page. -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (8582)
-
FFmpeg pipe-related issues
7 December 2015, by Ranjit Aneesh
I am new to FFmpeg and have very little knowledge of the different codecs.
I am using pipes to provide input to and receive output from FFmpeg.
The command I use is essentially:
ffmpeg -i pipe:0 -f flv pipe:1
I am using a Java program that provides an input stream on standard input (pipe:0); FFmpeg converts the video into the required format and writes it to standard output (pipe:1), from where I stream it to my remote location.
My Java program revolves around this piece of code, providing input and receiving output as streams.
Essentially my program does what it is supposed to; however, the final output video file does not have the complete duration. It is about 10 seconds long, while my sample video is 21 minutes! It is also missing the audio.
Do I need to provide more information to FFmpeg?
EDIT: When I replace the pipes with an input file and an output file, the output is generated correctly without any issues; when I use pipes, the size of the output file is still larger than the original. Just in case that helps to diagnose.
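The pipe pattern described in the question can be sketched in Python rather than Java (a minimal, hypothetical sketch; the key point applies to both languages: stdout and stderr must be drained concurrently with writing stdin, otherwise a full OS pipe buffer stalls ffmpeg and the output comes out truncated):

```python
import subprocess

def transcode_via_pipes(cmd, data):
    """Feed `data` to a child process on stdin; return its stdout bytes.

    communicate() writes stdin and drains stdout/stderr in parallel,
    so no OS pipe buffer can fill up and block the child mid-stream.
    """
    proc = subprocess.Popen(cmd,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate(data)
    if proc.returncode != 0:
        raise RuntimeError(err.decode(errors="replace"))
    return out

# Hypothetical usage mirroring the command above:
# flv_bytes = transcode_via_pipes(
#     ["ffmpeg", "-i", "pipe:0", "-f", "flv", "pipe:1"], mp4_bytes)
```

A second possibility worth checking: an MP4 whose moov atom is stored at the end of the file cannot be fully parsed from a non-seekable pipe, which can also explain missing streams; remuxing the source once with `-movflags faststart`, or feeding ffmpeg a real file instead of pipe:0, is a common workaround.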
FFmpeg version SVN-r23418, Copyright (c) 2000-2010 the FFmpeg developers
built on Jun 2 2010 04:12:01 with gcc 4.4.2
configuration : --target-os=mingw32 --enable-runtime-cpudetect --enable-avisynth --enable-gpl --enable-version3 --enable-bzlib --enable-libgsm --enable-libfaad --enable-pthreads --enable-libvorbis --enable-libtheora --enable-libspeex --enable-libmp3lame --enable-libopenjpeg --enable-libxvid --enable-libschroedinger --enable-libx264 --extra-libs='-lx264 -lpthread' --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-librtmp --extra-libs='-lrtmp -lssl -lcrypto -lws2_32 -lgdi32 -lwinmm -lcrypt32 -lz' --arch=x86 --cross-prefix=i686-mingw32- --cc='ccache i686-mingw32-gcc' --enable-memalign-hack
libavutil 50.16. 0 / 50.16. 0
libavcodec 52.72. 1 / 52.72. 1
libavformat 52.67. 0 / 52.67. 0
libavdevice 52. 2. 0 / 52. 2. 0
libavfilter 1.20. 0 / 1.20. 0
libswscale 0.11. 0 / 0.11. 0
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from ’pipe:0’ :
Metadata :
major_brand : isom
minor_version : 1
compatible_brands : isom
title :
artist :
date :
album :
comment :
Feedback :
genre :
Duration : 00:21:46.63, start : 0.000000, bitrate : N/A
Stream #0.0(und) : Video : h264, yuv420p, 512x288 [PAR 1:1 DAR 16:9], 403 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc
Stream #0.1(und) : Audio : aac, 44100 Hz, mono, s16, 47 kb/s
Output #0, flv, to ’pipe:1’ :
Metadata :
encoder : Lavf52.67.0
Stream #0.0(und) : Video : flv, yuv420p, 512x288 [PAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 1k tbn, 25 tbc
Stream #0.1(und) : Audio : libmp3lame, 44100 Hz, mono, s16, 64 kb/s
Stream mapping :
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1 -
Push video using the ffmpeg API terminates with error: broken pipe
1 June 2021, by guoyanzhang
I ran into a problem: I used the ffmpeg API to push video to the RTMP server address PushVideoInfo.VideoServer.
In my code, the value of PushVideoInfo.VideoServer is rtmp://192.168.128.29:1935/live/SP_20210531180743756wExIOPLvAjK2. The streaming process was interrupted and I don't know why. The following error is printed:


Send 0 video frames to output URL
Send 1 video frames to output URL
Send 2 video frames to output URL
Send 3 video frames to output URL
Send 4 video frames to output URL
Send 5 video frames to output URL
Send 6 video frames to output URL
Send 7 video frames to output URL
Send 8 video frames to output URL
Send 9 video frames to output URL
Send 10 video frames to output URL
---------------------------------stop push video -------------------------------------------
Error occured : Broken pipe


What are the possible causes of this error, on the RTMP client side or on the RTMP server side?


#include "/usr/local/include/libavcodec/avcodec.h"
#include "/usr/local/include/libavformat/avformat.h"
#include "/usr/local/include/libavfilter/avfilter.h"
#include "/usr/local/include/libavutil/mathematics.h"
#include "/usr/local/include/libavutil/time.h"

extern VideoDataStruct *VideoDataListHeader;
extern PushVideoStruct PushVideoInfo;
extern enum IsPushingVideo IsPushingVideoFlag;
extern UCHAR ChangeAnotherVideo;
typedef long long int64;


#define READ_BUF_LEN (1024 * 8)

extern enum IsStopPushVideo StopPushVideoFlag; 

static int read_packet(void *opaque, uint8_t *buf, int buf_size)
{
    int64 dataLen = 0;

    /* Block until buf_size bytes have been copied from the frame list.
       Note: this assumes sizeof(VideoDataListHeader->buf) divides
       buf_size evenly; otherwise the last memcpy can overrun buf. */
    while (dataLen < buf_size)
    {
        if ((VideoDataListHeader != NULL) && (VideoDataListHeader->flag == 1))
        {
            memcpy(&buf[dataLen], VideoDataListHeader->buf, sizeof(VideoDataListHeader->buf));
            dataLen += sizeof(VideoDataListHeader->buf);

            VideoDataListHeader->flag = 0;
            VideoDataListHeader = VideoDataListHeader->next;
        }
        else
        {
            usleep(10000); /* wait for the producer to fill the next node */
        }
    }
    return buf_size;
}

void *PushVideoFunction(void *arg)
{
    AVFormatContext *m_pFmtCtx = NULL;
    AVFormatContext *octx = NULL; /* declared up front so `end:` never reads it uninitialized */
    AVPacket pkt;
    AVIOContext *m_pIOCtx = NULL;
    AVInputFormat *in_fmt = NULL;
    int ret = 0;
    unsigned int i = 0;
    int vid_idx = -1;
    unsigned char *m_pIOBuf = NULL;
    int m_pIOBuf_size = READ_BUF_LEN;
    int64 start_time = 0;
    int frame_index = 0;
    //const char *rtmp_url = "rtmp://192.168.1.108/mytv/01";
    char rtmp_url[140] = {0};
    memset(rtmp_url, 0, sizeof(rtmp_url));
    strcpy(rtmp_url, PushVideoInfo.VideoServer);
    CHAR fileName[64] = {0};

    avformat_network_init();
    if (strcmp(PushVideoInfo.VideoType, REAL_VIDEO) == 0)
    {
        m_pIOBuf = (unsigned char *)av_malloc(m_pIOBuf_size);
        if (m_pIOBuf == NULL)
        {
            printf("av malloc failed!\n");
            goto end;
        }

        /* Custom I/O: frames arrive through read_packet() instead of a file. */
        m_pIOCtx = avio_alloc_context(m_pIOBuf, m_pIOBuf_size, 0, NULL, read_packet, NULL, NULL);
        if (!m_pIOCtx)
        {
            printf("avio alloc context failed!\n");
            goto end;
        }

        m_pFmtCtx = avformat_alloc_context();
        if (!m_pFmtCtx)
        {
            printf("avformat alloc context failed!\n");
            goto end;
        }

        //m_pFmtCtx->probesize = BYTES_PER_FRAME * 8;
        m_pFmtCtx->pb = m_pIOCtx;
        ret = avformat_open_input(&m_pFmtCtx, "", in_fmt, NULL);
    }
    else if (strcmp(PushVideoInfo.VideoType, HISTORY_VIDEO) == 0)
    {
        sprintf(fileName, "%s", VIDEO_FILE_FOLDER);
        sprintf(fileName + strlen(fileName), "%s", PushVideoInfo.VideoFile);
        ret = avformat_open_input(&m_pFmtCtx, fileName, NULL, NULL);
    }
    if (ret < 0)
    {
        printf("avformat open failed!\n");
        goto end;
    }

    ret = avformat_find_stream_info(m_pFmtCtx, 0);
    if (ret < 0)
    {
        printf("could not find stream info!\n");
        goto end;
    }
    for (i = 0; i < m_pFmtCtx->nb_streams; i++)
    {
        if ((m_pFmtCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) && (vid_idx < 0))
        {
            vid_idx = i;
        }
    }

    ret = avformat_alloc_output_context2(&octx, 0, "flv", rtmp_url);
    if (ret < 0)
    {
        printf("avformat alloc output context2 failed!\n");
        goto end;
    }

    av_init_packet(&pkt);

    for (i = 0; i < m_pFmtCtx->nb_streams; i++)
    {
        AVCodec *codec = avcodec_find_decoder(m_pFmtCtx->streams[i]->codecpar->codec_id);
        AVStream *out = avformat_new_stream(octx, codec);
        if (!out)
        {
            printf("avformat new stream failed!\n");
            goto end;
        }
        ret = avcodec_parameters_copy(out->codecpar, m_pFmtCtx->streams[i]->codecpar);
        out->codecpar->codec_tag = 0;
    }

    ret = avio_open(&octx->pb, rtmp_url, AVIO_FLAG_WRITE);
    if (ret < 0) /* check the return code, not octx->pb */
    {
        printf("avio open failed!\n");
        goto end;
    }

    ret = avformat_write_header(octx, 0);
    if (ret < 0)
    {
        printf("avformat write header failed!\n");
        goto end;
    }

    start_time = av_gettime();
    AVStream *in_stream, *out_stream;
    AVRational time_base1;
    AVRational time_base;
    AVRational time_base_q;
    int64 calc_duration;
    int64 pts_time;
    int64 now_time;

    ChangeAnotherVideo = 0;
    while ((!StopPushVideoFlag) && (ChangeAnotherVideo == 0))
    {
        ret = av_read_frame(m_pFmtCtx, &pkt);
        if (ret < 0)
        {
            break;
        }
        if (pkt.pts == AV_NOPTS_VALUE)
        {
            /* Synthesize timestamps from the frame rate for streams without PTS. */
            time_base1 = m_pFmtCtx->streams[vid_idx]->time_base;
            calc_duration = (double)AV_TIME_BASE / av_q2d(m_pFmtCtx->streams[vid_idx]->r_frame_rate);

            pkt.pts = (double)(frame_index * calc_duration) / (double)(av_q2d(time_base1) * AV_TIME_BASE);
            pkt.dts = pkt.pts;
            pkt.duration = (double)calc_duration / (double)(av_q2d(time_base1) * AV_TIME_BASE);
        }
        if (pkt.stream_index == vid_idx)
        {
            /* Throttle to real time: do not push faster than the stream plays. */
            time_base = m_pFmtCtx->streams[vid_idx]->time_base;
            time_base_q = (AVRational){1, AV_TIME_BASE};
            pts_time = av_rescale_q(pkt.dts, time_base, time_base_q);
            now_time = av_gettime() - start_time;
            if (pts_time > now_time)
            {
                av_usleep(pts_time - now_time);
            }
        }
        in_stream = m_pFmtCtx->streams[pkt.stream_index];
        out_stream = octx->streams[pkt.stream_index];
        pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (enum AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (enum AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
        pkt.pos = -1;
        if (pkt.stream_index == vid_idx)
        {
            printf("Send %8d video frames to output URL\n", frame_index);
            frame_index++;
        }
        ret = av_interleaved_write_frame(octx, &pkt);
        if (ret < 0)
        {
            goto end;
        }
        av_packet_unref(&pkt);
    }

end:
    printf("---------------------------------stop push video -------------------------------------------\n");
    StopPushVideoFlag = NO_STOP_PUSH;
    IsPushingVideoFlag = NO_PUSHING;
    ChangeAnotherVideo = 0;
    avformat_close_input(&m_pFmtCtx);
    if (octx)
    {
        avio_closep(&octx->pb);
        avformat_free_context(octx);
    }
    /* note: the internal buffer could have changed, and be != avio_ctx_buffer */
    if (m_pIOCtx)
    {
        av_freep(&m_pIOCtx->buffer);
        av_freep(&m_pIOCtx);
    }

    if (ret < 0)
    {
        printf("Error occured : %s\n", av_err2str(ret));
        //return 1;
    }
    pthread_exit((void *)"push video end!");
}


void PushVideo(void)
{
    int ret = 0;
    pthread_t pushVideoThread;

    ret = pthread_create(&pushVideoThread, NULL, PushVideoFunction, NULL);
    if (ret != 0)
    {
        printf("error : push video thread create failed!\n");
        exit(-1);
    }
    else
    {
        printf("(debug) push video thread create success!\n");
    }
}


I captured a pcap file with tcpdump, analysed it in Wireshark, and got the following message:


37 0.400350 172.17.4.58 192.168.11.240 RTMP 411 @setDataFrame()|Video Data|FCUnpublish()|deleteStream()


-
ffmpeg pipe input and output in Python
4 June 2021, by MinasCham
I want to use ffmpeg to read an RTSP stream, extract frames via a pipe, do some processing on them with Python, and afterwards combine the processed frames via another pipe with the original audio. I'm using the subprocess module in Python to execute the ffmpeg command as well as to read and write the frames from and to ffmpeg.

Questions:

- Is it possible to pipe both stdin and stdout so as to extract the frames and then feed them back in after the processing?
- Do I also have to pipe the audio separately and feed it with the processed frames, or can I simply copy the audio stream when mapping the output?
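One pattern that addresses both questions can be sketched as follows (a hypothetical sketch; the frame geometry, pixel format, and URLs are assumptions, not values from the question). It uses two ffmpeg processes, because piping both stdin and stdout of a single process is deadlock-prone unless reads and writes are carefully interleaved; and the audio does not need its own pipe, since the encoder can open the original stream a second time and copy the audio track via -map and -c:a copy.

```python
import subprocess

WIDTH, HEIGHT = 640, 480          # assumed frame geometry
FRAME_BYTES = WIDTH * HEIGHT * 3  # rgb24: 3 bytes per pixel

def decoder_cmd(rtsp_url):
    # First ffmpeg: decode the RTSP stream to raw RGB frames on stdout.
    return ["ffmpeg", "-i", rtsp_url,
            "-f", "rawvideo", "-pix_fmt", "rgb24", "pipe:1"]

def encoder_cmd(rtsp_url, out_url):
    # Second ffmpeg: read processed frames from stdin (input 0) and take
    # the audio from the original stream (input 1). The audio is mapped
    # straight through with -c:a copy, so it needs no separate pipe.
    return ["ffmpeg",
            "-f", "rawvideo", "-pix_fmt", "rgb24",
            "-s", "%dx%d" % (WIDTH, HEIGHT), "-i", "pipe:0",
            "-i", rtsp_url,
            "-map", "0:v", "-map", "1:a", "-c:a", "copy",
            out_url]

def process(frame):
    return frame  # placeholder for the per-frame processing

def run(rtsp_url, out_url):
    dec = subprocess.Popen(decoder_cmd(rtsp_url), stdout=subprocess.PIPE)
    enc = subprocess.Popen(encoder_cmd(rtsp_url, out_url), stdin=subprocess.PIPE)
    while True:
        frame = dec.stdout.read(FRAME_BYTES)
        if len(frame) < FRAME_BYTES:
            break
        enc.stdin.write(process(frame))
    enc.stdin.close()
    dec.wait()
    enc.wait()
```

One caveat with opening the RTSP source twice: the two connections are not guaranteed to be in sync, so for strict A/V sync a single demux with the audio routed through its own pipe may still be preferable.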