
Other articles (95)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
Multilang: improving the interface for multilingual blocks
18 February 2011. Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once activated, a preconfiguration is set up automatically by MediaSPIP init so that the new feature is operational straight away. No separate configuration step is therefore required. -
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (8728)
-
FFMPEG Understanding AVFrame::linesize (Audio)
11 April 2021, by user3584691. As per the documentation of AVFrame, for audio, linesize is the size in bytes of each plane, and only linesize[0] may be set. However, I am unsure whether linesize[0] holds the per-plane buffer size, or the complete buffer size that has to be divided by the number of channels to get the per-plane size.



For example, when I call:

int data_size = av_samples_get_buffer_size(NULL, iDesiredNoOfChannels, iAudioSamples, (AVSampleFormat)iDesiredFormat, 0);
For iDesiredNoOfChannels = 2, iAudioSamples = 1024 and iDesiredFormat = AV_SAMPLE_FMT_FLTP, data_size = 8192. Pretty straightforward: each sample is 4 bytes, and since there are 2 channels the total memory is 1024 * 4 * 2 bytes. As such, linesize[0] should be 4096 for planar audio, and data[0] and data[1] should each be 4096 bytes. However, pFrame->linesize[0] gives 8192, so to get the size per plane I have to do pFrame->linesize[0] / pFrame->channels. Isn't this behaviour different from what the documentation suggests, or is my understanding of the documentation wrong?
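As a hypothetical illustration (the function inspect_planar_frame and the literal sizes are mine, not from the question), a small C sketch of where each number comes from when such a frame is allocated with the stock libavutil helpers:

#include <stdio.h>
#include <libavutil/frame.h>
#include <libavutil/samplefmt.h>
#include <libavutil/channel_layout.h>

/* Hypothetical illustration: allocate a 2-channel, 1024-sample planar
 * float frame (the same values as in the example above) and print the
 * sizes involved. What linesize[0] actually reports is exactly the
 * point in question, so this only shows how each figure is derived. */
static void inspect_planar_frame(void)
{
    AVFrame *frame = av_frame_alloc();
    if (!frame)
        return;

    frame->format         = AV_SAMPLE_FMT_FLTP;
    frame->nb_samples     = 1024;
    frame->channel_layout = AV_CH_LAYOUT_STEREO;   /* 2 channels */

    if (av_frame_get_buffer(frame, 0) == 0) {
        /* Per-plane size implied by the format: 1024 samples * 4 bytes. */
        int per_plane = frame->nb_samples *
                        av_get_bytes_per_sample(AV_SAMPLE_FMT_FLTP);

        /* Total size across both planes, as computed in the question. */
        int total = av_samples_get_buffer_size(NULL, 2, frame->nb_samples,
                                               AV_SAMPLE_FMT_FLTP, 0);

        printf("linesize[0]=%d  per_plane=%d  total=%d\n",
               frame->linesize[0], per_plane, total);
    }
    av_frame_free(&frame);
}

Passing a non-NULL first argument to av_samples_get_buffer_size also makes it write back the linesize value libavutil itself would use for this layout, which is a direct way to cross-check the per-plane figure.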

-
Is it possible to stream video over RTP without transcoding or compressing the input file before transmitting, using the FFmpeg command line?
11 April 2017, by Souvik Das. FFmpeg supports two RTP payload types: MPEGTS/MP2T (PT 33) and Dynamic (PT 96). A dynamic PT requires an explicit SDP at the receiver, while MPEGTS/MP2T does not.
I used FFmpeg as both transmitter and receiver (over loopback/localhost) and compared the PSNR of the respective streams.
Case 1: FFmpeg Dynamic RTP
Sender:
ffmpeg -re -i 'sample.avi' -c:a copy -c:v copy -f rtp -y 'rtp://@225.0.0.1:5555' > sample.sdp
Receiver:
ffmpeg -protocol_whitelist file,udp,rtcp,rtp -i sample.sdp -y rec.ts
Result:
PSNR avg. = 38
This means that even under these ideal conditions we are still not getting a perfect stream. I suspect it is because transcoding still takes place, which degrades the video quality at the sender before transmission.
Case 2: FFmpeg MPEGTS RTP
Sender:
ffmpeg -re -i 'sample.avi' -c:a copy -c:v copy -f rtp_mpegts -y 'rtp://@225.0.0.1:5555'
Receiver:
ffmpeg -protocol_whitelist file,udp,rtcp,rtp -i sample.sdp -f mpegts -y rec.ts
Result:
A large number of frames were lost!
So, at the receiver, I used VLC to record the streams. Although there was no/negligible frame loss, the PSNR avg. was only 18! Earlier, in a dedicated VLC streamer and recorder test, streaming the same video gave PSNR avg. = Infinity (no quality loss). I want to switch to FFmpeg for streaming because I want to introduce some programmability for a sophisticated research project.
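(A side note on methodology: the question does not say how the PSNR averages were obtained. One common way, assuming rec.ts and sample.avi line up frame for frame, is FFmpeg's psnr filter:)
ffmpeg -i rec.ts -i 'sample.avi' -lavfi psnr -f null -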
Hence, it would be really great if somebody could give me some input on how to achieve uncompressed and lossless video streaming over RTP using FFmpeg.
Notes:
1. I must use RTP only. I can't use RTSP or other streaming methods, including direct UDP (udp://).
2. VLC Media Player / libVLC, where used in this comparison, also used RTP in all cases.
3. It can be assumed that the streamer and recorder are on the same disk or have the same access to storage.
4. Multicast must be supported!
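One possible direction, offered only as a sketch rather than a verified answer: keep the rtp_mpegts muxer (no SDP needed, multicast still works) and make the video encoding itself mathematically lossless, for example with libx264 at -qp 0. Whether a lossless re-encode satisfies the "uncompressed" requirement is an assumption about the research constraints; the multicast address simply mirrors the one used above.
Sender:
ffmpeg -re -i 'sample.avi' -c:v libx264 -qp 0 -c:a copy -f rtp_mpegts 'rtp://225.0.0.1:5555'
Receiver:
ffmpeg -i 'rtp://225.0.0.1:5555' -c copy -f mpegts -y rec.ts
With a lossless encode at the sender and stream copy at the receiver, a PSNR comparison of rec.ts against sample.avi should in principle report infinity; any remaining difference would point at packet loss rather than encoding.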