
Media (9)
-
Stereo master soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011, by
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
Other articles (105)
-
Customizing by adding your logo, banner or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image;
-
Writing a news item
21 June 2013, by
Present the changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: in the case of a document of the news type, the default fields are: Publication date (customize the publication date) (...) -
Sites built with MediaSPIP
2 May 2011, by
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
On other sites (16398)
-
avfilter/vf_fps: properly preserve CEA-708 captions
5 May 2023, by Devin Heitmueller
avfilter/vf_fps: properly preserve CEA-708 captions
The existing implementation made an attempt to remove duplicate
captions if increasing the framerate, but made no attempt to
handle reducing the framerate, nor did it rewrite the caption
payloads to have the appropriate cc_count (e.g. the cc_count needs
to change from 20 to 10 when going from 1080i59 to 720p59 and
vice-versa).
Make use of the new ccfifo mechanism to ensure that caption data
is properly preserved.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
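(A rough aside, not part of the patch: the numbers above imply an approximately constant per-second caption budget, since 20 cc constructs per frame at 29.97 fps and 10 per frame at 59.94 fps both work out to roughly 600 per second, so the per-frame cc_count scales inversely with the frame rate. A minimal, hypothetical helper illustrating that arithmetic; the function name is invented for this sketch:)

#include <cstdio>

// Hypothetical illustration only: derive the per-frame cc_count from a frame
// rate, assuming the constant per-second budget implied by the commit message
// (20 per frame at 30000/1001 fps, 10 per frame at 60000/1001 fps).
static int cc_count_for_framerate(long long fps_num, long long fps_den)
{
    const long long budget_num = 20 * 30000; // cc constructs per second, numerator
    const long long budget_den = 1001;       // cc constructs per second, denominator
    return (int)((budget_num * fps_den) / (fps_num * budget_den));
}

int main()
{
    std::printf("%d\n", cc_count_for_framerate(30000, 1001)); // 1080i59 frame rate -> 20
    std::printf("%d\n", cc_count_for_framerate(60000, 1001)); // 720p59 -> 10
    return 0;
}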
-
What are the supported ffmpeg rtp_mpegts muxer options? (mpegts muxer options are ignored)
7 March 2020, by drake7
I created a UDP stream with -f mpegts and some options like -mpegts_transport_stream_id. I received the stream with "StreamXpert - Real-time stream analyzer", which shows that all options are in the output. See my ffmpeg parameters and the StreamXpert output at the end.
The same muxer options seem to be ignored with -f rtp_mpegts.
I have tried to use -f mpegts and pipe it to -f rtp_mpegts like so:
ffmpeg -i ... -f mpegts pipe: | ffmpeg pipe: -c copy -f rtp_mpegts "rtp://239.1.1.9:1234?pkt_size=1316"
The options are still ignored.
This ticket "support options for MPEGTS muxer when using RTP_MPEGTS" also notices the ignored option. Furthermore in this comment, "thovo" gives an analysis and suggests a solution.
Obviously the problem still exists. Anybody found a workaround for this ?
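(Not an answer, just a reference point: when libavformat is used programmatically, muxer-private options such as mpegts_transport_stream_id are passed to the mpegts muxer through an AVDictionary, and avformat_write_header() leaves any entry the muxer did not recognise behind in that dictionary, which is one way to check whether a given muxer actually consumes an option. A minimal, hypothetical sketch, with the function name and option values invented for illustration:)

extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/dict.h>
}
#include <cstdio>

// Hypothetical sketch (not a fix for the CLI behaviour): pass mpegts
// muxer-private options via an AVDictionary and report the ones the muxer
// did not consume. Streams would have to be added before the header is
// written, as in the full command further down.
static int write_ts_header_with_options(const char *url)
{
    AVFormatContext *ctx = nullptr;
    if (avformat_alloc_output_context2(&ctx, nullptr, "mpegts", url) < 0)
        return -1;
    if (avio_open2(&ctx->pb, url, AVIO_FLAG_WRITE, nullptr, nullptr) < 0)
        return -1;

    AVDictionary *opts = nullptr;
    av_dict_set(&opts, "mpegts_transport_stream_id", "90", 0); // 0x005A
    av_dict_set(&opts, "mpegts_service_id", "92", 0);          // 0x005C

    const int ret = avformat_write_header(ctx, &opts);

    // Anything still present in 'opts' was ignored by this muxer.
    const AVDictionaryEntry *e = nullptr;
    while ((e = av_dict_get(opts, "", e, AV_DICT_IGNORE_SUFFIX)))
        std::fprintf(stderr, "option not consumed: %s\n", e->key);

    av_dict_free(&opts);
    return ret;
}

Running the same dictionary against the rtp_mpegts muxer instead of mpegts would show which of these entries that muxer leaves unconsumed.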
My additional question: I have not questioned whether my project really needs RTP in the first place. Maybe my coworker did not know better and requested RTP when UDP would be sufficient as well.
The aim was to receive the RTP stream with a TV using DVB via IP. This was successful on a Panasonic TV.
The SAT>IP specification, on page 10, requires RTP for media transport:
The SAT>IP protocol makes use of:
- UPnP for Addressing, Discovery and Description,
- RTSP or HTTP for Control,
- RTP or HTTP for Media Transport.
Is UDP out of the equation?
ffmpeg (all options are in the output with -f mpegts):
(HEX to decimal: 0x005A = 90, 0x005B = 91, 0x005C = 92, 0x005D = 93, 0x005E = 94)
ffmpeg -f lavfi -i testsrc \
-r 25 \
-c:v libx264 \
-pix_fmt yuv420p \
-profile:v main -level 3.1 \
-preset veryfast \
-vf scale=1280:720,setdar=dar=16/9 \
-an \
-bsf:v h264_mp4toannexb \
-flush_packets 0 \
-b:v 4M \
-muxrate 8M \
-pcr_period 20 \
-pat_period 0.10 \
-sdt_period 0.25 \
-metadata:s:a:0 language=nya \
-mpegts_flags +pat_pmt_at_frames \
-mpegts_transport_stream_id 0x005A \
-mpegts_original_network_id 0x005B \
-mpegts_service_id 0x005C \
-mpegts_pmt_start_pid 0x005D \
-mpegts_start_pid 0x005E \
-mpegts_service_type advanced_codec_digital_hdtv \
-metadata service_provider='WI' \
-metadata service_name='W' \
-mpegts_flags system_b -flush_packets 0 \
-f mpegts "udp://239.1.1.10:1234?pkt_size=1316"StreamXpert Output :
-mpegts_transport_stream_id = Transport Stream ID (yellow text highlight)
-mpegts_original_network_id = Original Network ID, onw (green text highlight)
-mpegts_service_id = Program, service (pink text highlight)
-mpegts_pmt_start_pid = PMT PID, Table PID (turquoise text highlight)
-mpegts_start_pid = PID, PCR PID (red text highlight)
-mpegts_service_type = service type (blue text)
service_name = Service name (orange text)
service_provider = Service provider (pink text)
-
Stream publishing using ffmpeg rtmp: network bandwidth not fully utilized
14 February 2017, by DeducibleSteak
I'm developing an application that needs to publish a media stream to an rtmp "ingestion" url (as used in YouTube Live, or as input to Wowza Streaming Engine, etc.), and I'm using the ffmpeg library (programmatically, from C/C++, not the command line tool) to handle the rtmp layer. I've got a working version ready, but am seeing some problems when streaming higher-bandwidth streams to servers with worse ping. The problem exists both when using the ffmpeg "native"/builtin rtmp implementation and the librtmp implementation.
When streaming to a local target server with low ping through a good network (specifically, a local Wowza server), my code has so far handled every stream I’ve thrown at it and managed to upload everything in real time - which is important, since this is meant exclusively for live streams.
However, when streaming to a remote server with a worse ping (e.g. the youtube ingestion urls on a.rtmp.youtube.com, which for me have 50+ms pings), lower bandwidth streams work fine, but with higher bandwidth streams the network is underutilized - for example, for a 400kB/s stream, I’m only seeing 140kB/s network usage, with a lot of frames getting delayed/dropped, depending on the strategy I’m using to handle network pushback.
Now, I know this is not a problem with the network connection to the target server, because I can successfully upload the stream in real time when using the ffmpeg command line tool to the same target server or using my code to stream to a local Wowza server which then forwards the stream to the youtube ingestion point.
So the network connection is not the problem and the issue seems to lie with my code.
I’ve timed various parts of my code and found that when the problem appears, calls to av_write_frame / av_interleaved_write_frame (I never mix & match them, I am always using one version consistently in any specific build, it’s just that I’ve experimented with both to see if there is any difference) sometimes take a really long time - I’ve seen those calls sometimes take up to 500-1000ms, though the average "bad case" is in the 50-100ms range. Not all calls to them take this long, most return instantly, but the average time spent in these calls grows bigger than the average frame duration, so I’m not getting a real time upload anymore.
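(A minimal sketch of that kind of measurement, as a hypothetical wrapper rather than the code from this post, just to make the methodology concrete:)

extern "C" {
#include <libavformat/avformat.h>
}
#include <chrono>
#include <cstdio>

// Hypothetical wrapper: time a single mux/write call and log it when it takes
// longer than the expected frame duration (e.g. ~33 ms at 30 fps).
static int timed_write_frame(AVFormatContext *ctx, AVPacket *pkt, double frame_duration_ms)
{
    const auto t0 = std::chrono::steady_clock::now();
    const int ret = av_write_frame(ctx, pkt);
    const auto t1 = std::chrono::steady_clock::now();
    const double elapsed_ms =
        std::chrono::duration<double, std::milli>(t1 - t0).count();
    if (elapsed_ms > frame_duration_ms)
        std::fprintf(stderr, "av_write_frame took %.1f ms\n", elapsed_ms);
    return ret;
}

Per-call timings like this make it easier to tell occasional long flushes apart from a systematic slowdown.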
The main suspect, it seems to me, could be the rtmp Acknowledgement Window mechanism, where a sender of data waits for a confirmation of receipt after sending every N bytes, before sending any more data - this would explain the available network bandwidth not being fully used, since the client would simply sit there and wait for a response (which takes a longer time because of the lower ping), instead of using the available bandwidth. Though I haven’t looked at ffmpeg’s rtmp/librtmp code to see if it actually implements this kind of throttling, so it could be something else entirely.
The full code of the application is too much to post here, but here are some important snippets:
Format context creation:
const int nAVFormatContextCreateError = avformat_alloc_output_context2(&m_pAVFormatContext, nullptr, "flv", m_sOutputUrl.c_str());
Stream creation:
m_pVideoAVStream = avformat_new_stream(m_pAVFormatContext, nullptr);
m_pVideoAVStream->id = m_pAVFormatContext->nb_streams - 1;
m_pAudioAVStream = avformat_new_stream(m_pAVFormatContext, nullptr);
m_pAudioAVStream->id = m_pAVFormatContext->nb_streams - 1;
Video stream setup:
m_pVideoAVStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
m_pVideoAVStream->codecpar->codec_id = AV_CODEC_ID_H264;
m_pVideoAVStream->codecpar->width = nWidth;
m_pVideoAVStream->codecpar->height = nHeight;
m_pVideoAVStream->codecpar->format = AV_PIX_FMT_YUV420P;
m_pVideoAVStream->codecpar->bit_rate = 10 * 1000 * 1000;
m_pVideoAVStream->time_base = AVRational { 1, 1000 };
m_pVideoAVStream->codecpar->extradata_size = int(nTotalSizeRequired);
m_pVideoAVStream->codecpar->extradata = (uint8_t*)av_malloc(m_pVideoAVStream->codecpar->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
// Fill in the extradata here - I'm sure I'm doing that correctly.
Audio stream setup:
m_pAudioAVStream->time_base = AVRational { 1, 1000 };
// Let's leave creation of m_pAudioCodecContext out of the scope of this question, I'm quite sure everything is done right there.
const int nAudioCodecCopyParamsError = avcodec_parameters_from_context(m_pAudioAVStream->codecpar, m_pAudioCodecContext);
Opening the connection:
const int nAVioOpenError = avio_open2(&m_pAVFormatContext->pb, m_sOutputUrl.c_str(), AVIO_FLAG_WRITE, nullptr, nullptr);
Starting the stream:
AVDictionary * pOptions = nullptr;
const int nWriteHeaderError = avformat_write_header(m_pAVFormatContext, &pOptions);
Sending a video frame:
AVPacket pkt = { 0 };
av_init_packet(&pkt);
pkt.dts = nTimestamp;
pkt.pts = nTimestamp;
pkt.duration = nDuration; // I know that I have the wrong duration sometimes, but I don't think that's the issue.
pkt.data = pFrameData;
pkt.size = pFrameDataSize;
pkt.flags = bKeyframe ? AV_PKT_FLAG_KEY : 0;
pkt.stream_index = m_pVideoAVStream->index;
const int nWriteFrameError = av_write_frame(m_pAVFormatContext, &pkt); // This is where too much time is spent.
Sending an audio frame:
AVPacket pkt = { 0 };
av_init_packet(&pkt);
pkt.pts = m_nTimestampMs;
pkt.dts = m_nTimestampMs;
pkt.duration = m_nDurationMs;
pkt.stream_index = m_pAudioAVStream->index;
const int nWriteFrameError = av_write_frame(m_pAVFormatContext, &pkt);
Any ideas? Am I on the right track in thinking about the Acknowledgement Window? Am I doing something else completely wrong?