
Advanced search
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (16)
-
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in MP4, OGV and WebM (supported by HTML5), plus MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5), plus MP3 (supported by Flash).
Where possible, text is analyzed in order to extract the data needed for search-engine indexing, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
What is an editorial
21 June 2013, by
Write your point of view in an article. It will be filed in a section provided for this purpose.
An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. A single editorial is featured on the home page. To read previous ones, see the dedicated section.
You can customize the editorial creation form.
Editorial creation form In the case of a document of the editorial type, the (...) -
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
On other sites (2971)
-
ffmpeg live screen streaming has high latency
27 February 2020, by Ant's
I'm trying to do live screen sharing with ffmpeg, using the following command:
ffmpeg -f avfoundation -i "1:1" -c:v libx264 -threads 4 -preset ultrafast -c:a aac -ar 44100 -f flv rtmp://localhost/live/test
Now I have an RTMP server which receives the data, and using flv.js I show the live-stream video in the browser. The integration works perfectly fine, but the stream is delayed considerably: by at least 10 s. I'm not sure whether the delay can be made smaller (more like an instant screen share).

Note: I'm using the Node RTMP server from https://github.com/illuspas/Node-Media-Server. The code for that is here:
const NodeMediaServer = require('node-media-server');

const config = {
  rtmp: {
    port: 1935,
    chunk_size: 6000,
    gop_cache: true,
    ping: 30,
    ping_timeout: 60
  },
  http: {
    port: 8000,
    allow_origin: '*'
  }
};

var nms = new NodeMediaServer(config);
nms.run();

Any suggestions? I'm on macOS.
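A hedged sketch of options that commonly reduce glass-to-glass latency in this kind of pipeline (these are standard ffmpeg/libx264 options, not taken from the question itself): -tune zerolatency disables lookahead and B-frames so frames are emitted as soon as they are encoded, and a shorter GOP lets a newly connected player start decoding sooner. Actual gains depend on the player and network, so treat this as a starting point rather than a fix.

```shell
# Low-latency variant of the capture command (sketch):
ffmpeg -f avfoundation -i "1:1" \
  -c:v libx264 -preset ultrafast -tune zerolatency \
  -g 30 \
  -c:a aac -ar 44100 \
  -f flv rtmp://localhost/live/test
```

On the server side, setting gop_cache: false in the rtmp section of the Node-Media-Server config is also commonly suggested, since replaying a cached GOP to each new viewer adds several seconds of apparent delay.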
-
Video Feedback Loop with ffmpeg
4 May 2020, by driangle
I am trying to create a digital video feedback loop between two computers.

What I mean by this is the following:

[diagram omitted: the local computer streams video to the remote computer, which streams it back into the local pipeline]

I attempted several combinations of protocols and video formats, and this is the closest I can get. The original video was a .mov, but I had to convert it to .ts in order to get this far; I had other issues when using .mov or .mp4.

I ran the commands in this order to make sure the TCP listeners were up before the clients.

On Local Computer
# Command 1: Temporary attempt to capture output of loop
ffmpeg -i 'udp://0.0.0.0:6002?listen&overrun_nonfatal=1' -c copy out.ts

# Command 2: Receives stream from remote host and forwards it back to the beginning of the loop
ffmpeg -i 'tcp://0.0.0.0:6001?listen' -f mpegts udp://localhost:6002

On Remote Computer

# Command 3: Receives stream from local host and returns the stream to another ffmpeg instance
ffmpeg -i 'tcp://0.0.0.0:6000?listen' -f mpegts tcp://:6001

On Local Computer

# Command 4: Sends stream to remote host
ffmpeg -re -i in.ts -f mpegts tcp://:6000

The steps above don't quite complete the feedback loop, but they do result in a successful video, out.ts.

Then I tried to modify Command 4 so that it could merge both a file and a UDP stream. This is a naive attempt, I know; I am not very good with ffmpeg.

ffmpeg \
 -re -i in.ts -i udp://0.0.0.0:6002 \
 -filter_complex " \
 [0:v]setpts=PTS-STARTPTS, scale=540x960[top]; \
 [1:v]setpts=PTS-STARTPTS, scale=540x960, \
 format=yuva420p,colorchannelmixer=aa=0.5[bottom]; \
 [top][bottom]overlay=shortest=1" \
 -f mpegts tcp://:6000

The result was that the command hung waiting for data on the UDP port, which makes sense in hindsight.

I would like to know:

- Can this be done at all? If so, what do I need to change?
- Do I need to abandon ffmpeg for this task and look into something else?

If you're asking why I would do this: there is no good reason, other than curiosity about whether it's possible and what results it would yield.
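A hedged guess at what is happening, plus a sketch of a workaround (assumptions, not a verified fix): ffmpeg probes every input before building the filter graph, so the merged command cannot start until something arrives on udp://0.0.0.0:6002, and nothing will arrive until the loop is already running. One way to break the deadlock is to shrink the probing window on the UDP input and prime the port once from another terminal. The -filter_complex body is elided below ("..."); it would be the overlay chain from the merged command above.

```shell
# Shrink the probing window on the UDP input so the merged command
# starts quickly once any data arrives (-probesize/-analyzeduration
# are standard ffmpeg per-input options and must precede their -i):
ffmpeg -re -i in.ts \
  -probesize 32768 -analyzeduration 0 \
  -i 'udp://0.0.0.0:6002?overrun_nonfatal=1' \
  -filter_complex "..." \
  -f mpegts tcp://:6000

# In another terminal, prime the loop once with a few seconds of any video:
ffmpeg -re -t 5 -i in.ts -f mpegts udp://127.0.0.1:6002
```

Once primed, the loop feeds itself: Command 2 keeps writing to port 6002, which the merged command consumes.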


-
FFmpeg - generate x264 CBR video transport stream with C-API
6 July 2020, by ZeroDefect
Using various posts sprinkled around the Internet, including this one here on SO, I've been able to understand how to use the FFmpeg CLI to generate a CBR video bitrate using the x264 codec (wrapped in an MPEG-2 transport stream). Note: I'm concerned with the video bitrate, nothing else.


ffmpeg -i cbr_test_file_input.mp4 -c:v libx264 -pix_fmt yuv420p -b:v 6000000 -preset fast -tune film -g 25 -x264-params vbv-maxrate=6000:vbv-bufsize=6000:force-cfr=1:nal-hrd=cbr -flags +ildct+ilme x264_cbr_test_output.ts

However, I'm trying to approach this from an FFmpeg C-API point of view. I'm having issues. I've knocked together some code to try do something very similar to what is being done in the FFmpeg CLI. I can generate a transport stream of what I think should be CBR, but the profile of the video bitrate is very different from what I thought was the FFmpeg cli equivalent :


The initialisation of the AVCodecContext looks something like :


av_dict_set(&pDict, "preset", "faster", 0);
av_dict_set(&pDict, "tune", "film", 0);
av_dict_set_int(&pDict, "rc-lookahead", 25, 0);

pCdcCtxOut->width = pCdcCtxIn->width;
pCdcCtxOut->height = pCdcCtxIn->height;
pCdcCtxOut->pix_fmt = AV_PIX_FMT_YUV420P;
pCdcCtxOut->gop_size = 25;

// Going for 6Mbit/s
pCdcCtxOut->bit_rate = 6000000;
//pCdcCtxOut->rc_min_rate = pCdcCtxOut->bit_rate;
pCdcCtxOut->rc_max_rate = pCdcCtxOut->bit_rate;
pCdcCtxOut->rc_buffer_size = pCdcCtxOut->bit_rate;
pCdcCtxOut->rc_initial_buffer_occupancy = static_cast<int>((pCdcCtxOut->bit_rate * 9) / 10);

std::string strParams = "vbv-maxrate="
    + std::to_string(pCdcCtxOut->bit_rate / 1000)
    + ":vbv-bufsize="
    + std::to_string(pCdcCtxOut->bit_rate / 1000)
    + ":force-cfr=1:nal-hrd=cbr";

av_dict_set(&pDict, "x264-params", strParams.c_str(), 0);

pCdcCtxOut->field_order = AV_FIELD_TT;
pCdcCtxOut->flags = (AV_CODEC_FLAG_INTERLACED_DCT | AV_CODEC_FLAG_INTERLACED_ME | AV_CODEC_FLAG_CLOSED_GOP);

// WARN: Make some assumptions here!
pCdcCtxOut->time_base = AVRational{1,25};
pCdcCtxOut->framerate = AVRational{25,1};
pCdcCtxOut->sample_aspect_ratio = AVRational{64,45};
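One thing worth double-checking (a hedged sketch using the question's own pDict/pCdcCtxOut names; pEncoder is a name introduced here for illustration): options stored in an AVDictionary, including "x264-params", are only applied when the codec is opened, and any entries the encoder does not consume are left behind in the dictionary. Inspecting it after avcodec_open2() shows whether the nal-hrd=cbr settings actually reached libx264.

```cpp
// Sketch: open the encoder with the dictionary and check for leftovers.
// avcodec_find_encoder_by_name() is used so we get libx264 specifically,
// since "x264-params" is a libx264 private option.
const AVCodec *pEncoder = avcodec_find_encoder_by_name("libx264");
if (avcodec_open2(pCdcCtxOut, pEncoder, &pDict) < 0) {
    // handle error: the encoder rejected the configuration
}
// Entries still present in pDict were not consumed by the encoder;
// a silently ignored "x264-params" would explain a non-CBR profile.
if (av_dict_count(pDict) > 0) {
    // log the leftover options here
}
av_dict_free(&pDict);
```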

The output graphs appear very different:

[graph omitted] The FFmpeg CLI output: the video bitrate holds fairly steady.

[graph omitted] The output of my sample application: there are significant dips in the video bitrate.

I've taken this a step further and created a git repo consisting of :

- Code of the sample application
- Test input file (.mp4)
- Outputs (.ts files) of tests
- Graphs of output bitrates