
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (47)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash fallback is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and audio both on conventional computers (...)
On other sites (5840)
-
avformat/http: flushing tcp receive buffer when it is write only mode
4 April 2018, by Vishwanath Dixit
avformat/http: flushing tcp receive buffer when it is write only mode
In write only mode, the TCP receive buffer's data keeps growing with
http response messages and the buffer eventually becomes full.
This results in zero tcp window size, which in turn causes unwanted
issues, like, terminated tcp connection. The issue is apparent when
http persistent connection is enabled in hls/dash live streaming use
cases. To overcome this issue, the logic here reads the buffer data
when a file transfer is completed, so that any accumulated data in
the receive buffer gets flushed out. -
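The flush described in the commit message boils down to draining whatever is sitting in the kernel receive buffer with non-blocking reads, so the advertised TCP window reopens. A minimal sketch, assuming a plain POSIX socket descriptor (the actual FFmpeg patch operates on its internal TCP protocol context, not a raw fd, and `drain_receive_buffer` is an illustrative name):

```cpp
// Sketch: discard any bytes pending in a socket's kernel receive
// buffer without blocking, so the advertised TCP window stays open.
#include <sys/socket.h>
#include <sys/types.h>

long drain_receive_buffer(int fd)
{
    char scratch[4096];
    long total = 0;
    for (;;) {
        // MSG_DONTWAIT: return immediately instead of blocking
        ssize_t n = recv(fd, scratch, sizeof scratch, MSG_DONTWAIT);
        if (n <= 0)  // EAGAIN/EWOULDBLOCK: buffer empty; 0: peer closed
            break;
        total += n;  // bytes read and discarded
    }
    return total;
}
```

Calling something like this after each completed transfer keeps ignored HTTP response bodies from accumulating until the window hits zero.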
Decoding NAL units in ffmpeg using C++
10 February 2019, by Lucas Zanella
I have working code which receives NAL units through RTP using JRTPLIB. It saves the NAL units into a file which I can play through VLC.
I'm willing to decode these NAL units using ffmpeg. I've seen all the Stack Overflow threads about NAL units and ffmpeg, and none of them answers exactly how to do it.
As I understand it, RTP is a protocol that can break a NAL unit into more than one RTP packet. Therefore, the NAL units I'm receiving aren't even complete. So I guess I cannot feed them directly into ffmpeg. I need to accumulate them somehow, I guess, and then send them to ffmpeg in the right format.
Take a look at the avcodec_decode_video2 function from the ffmpeg library:

attribute_deprecated int avcodec_decode_video2(
    AVCodecContext *avctx,
    AVFrame *picture,
    int *got_picture_ptr,
    const AVPacket *avpkt
);

Here's what is said about avpkt:

avpkt: The input AVPacket containing the input buffer. You can create such a packet with av_init_packet().

I guess there's a way to turn NAL units into AVPackets. I tried to find things related to it in the documentation but couldn't find anything useful. I also read this tutorial: http://dranger.com/ffmpeg/tutorial01.html but it only talks about reading from files, and ffmpeg does some background work converting frames into AVPackets, so I learned nothing from that part.
PS: my code writes the VPS, SPS and PPS information once at the beginning of the file and then only saves NAL units in sequence.
So: how do I decode NAL units with ffmpeg?
UPDATE:
Here's the code that receives NAL units. I tried to add 0x00000001 at the beginning of each NAL unit.

const size_t BufSize = 98304;
uint8_t buf[4 + BufSize]; // 4 bytes for 0x00000001 at the beginning
uint8_t* paddedBuf = buf + 4;
/* Add the delimiter for an H.264 bitstream */
buf[0] = char(0x00);
buf[1] = char(0x00);
buf[2] = char(0x00);
buf[3] = char(0x01);
size_t size = 0;
size_t write_size = 0;
/* Get SPS, PPS, VPS manually: start */
std::cout << "writing vps" << std::endl;
Client.SetObtainVpsSpsPpsPeriodly(false);
if(!Client.GetVPSNalu(paddedBuf, &size)) {
    if(write(fd, paddedBuf, size) < 0) {
        perror("write");
    }
}
if(!Client.GetSPSNalu(paddedBuf, &size)) {
    if(write(fd, paddedBuf, size) < 0) {
        perror("write");
    }
}
if(!Client.GetPPSNalu(paddedBuf, &size)) {
    if(write(fd, paddedBuf, size) < 0) {
        perror("write");
    }
}
/* Get SPS, PPS, VPS manually: end */
while(true) {
    if(!Client.GetMediaData("video", paddedBuf+write_size, &size, BufSize)) {
        if(ByeFromServerFlag) {
            printf("ByeFromServerFlag\n");
            break;
        }
        if(try_times > 5) {
            printf("try_times > 5\n");
            break;
        }
        try_times++;
        continue;
    }
    write_size += size;
    std::cout << "gonna decode frame" << std::endl;
    ffmpegDecoder->decodeFrame(paddedBuf, size);
    std::cout << "decoded frame" << std::endl;
}

Now the function decodeFrame, which is giving me a segmentation fault. However, I don't even know if what I'm doing is OK.

void FfmpegDecoder::decodeFrame(uint8_t* frameBuffer, int frameLength)
{
    if (frameLength <= 0) return;
    int frameFinished = 0;
    AVPacket framePacket;
    av_init_packet(&framePacket);
    framePacket.size = frameLength;
    framePacket.data = frameBuffer;
    std::cout << "gonna decode video2" << std::endl;
    int ret = avcodec_decode_video2(m_pAVCodecContext, m_pAVFrame, &frameFinished, &framePacket); // GIVES SEGMENTATION FAULT HERE
}

Here's a snapshot of this code and the entire project, which can be compiled if you go into /dev and do ./build to build the docker image, then ./run to enter the docker image; then you just do cmake . and make. However, you should run the binary outside of docker with

~/orwell$ LD_LIBRARY_PATH=.:./jrtplib/src ./orwell_monitor

because inside docker it has a bug. -
Videos storage and streaming [closed]
12 July 2021, by Tomer
I'll do my best to describe my thoughts here.


We came across a need to receive videos from users, store them, and later present those videos to different users upon request.
The argument is based on the idea that we can't serve one single file of the same size and quality to all clients. The client's network quality must be taken into account.
One side claims that the best way to overcome this issue is to store multiple versions of each file (360p, 480p, 720p, etc.) and then, based on the client's network quality, estimate which file best suits their conditions. We would estimate the client's network quality by testing the connection quality to the S3 servers storing our files.


The second party claims that storing one high-quality file is enough. Then, upon request by a client, we would stream the file in a suitable encoding and quality using a third-party framework (from brief research: ffmpeg or GStreamer. It's not yet established which, or how; we're only considering the idea.)


Which approach is more acceptable in modern practice? Are there any other ideas we haven't thought of?


A couple of notes: our backend is written in Node, using aws-sdk for S3 and Nest for the API.


Thanks
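On the serving side, the first approach (a pre-encoded ladder plus a bandwidth estimate) reduces to picking the highest rendition that fits the measured throughput. A minimal sketch, assuming a ladder sorted by ascending bitrate and a hypothetical 80% safety margin (names and bitrates are illustrative, not part of the question):

```cpp
#include <string>
#include <vector>

// One pre-encoded variant of the source video.
struct Rendition {
    std::string name;
    long bitrate_bps;
};

// Pick the highest rendition whose bitrate fits within ~80% of the
// estimated client bandwidth; fall back to the lowest rendition.
// Assumes `ladder` is non-empty and sorted by ascending bitrate.
std::string pick_rendition(const std::vector<Rendition>& ladder,
                           long estimated_bps)
{
    std::string choice = ladder.front().name;   // worst-case fallback
    for (const auto& r : ladder)
        if (r.bitrate_bps * 10 <= estimated_bps * 8)  // 80% margin
            choice = r.name;
    return choice;
}
```

This is essentially what HLS/DASH adaptive players do continuously on the client; doing it once server-side, as the first side proposes, trades adaptivity for simplicity.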