
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (12)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
- the browser you are using, including its exact version
- as precise an explanation of the problem as possible
- if possible, the steps taken that led to the problem
- a link to the site/page in question
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
-
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash). (A hypothetical command sketch for this kind of conversion appears after this list.)
Where possible, text is analyzed in order to retrieve the data needed for search-engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
-
Media-specific libraries and software
10 December 2010, by
For correct and optimal operation, several things must be taken into consideration.
It is important, after installing apache2, mysql and php5, to install other necessary software whose installation procedures are described in the related links. A set of multimedia libraries (x264, libtheora, libvpx) is used for encoding and decoding video and audio, in order to support as many file types as possible. See: this tutorial; FFmpeg with the maximum number of decoders and (...)
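
To make the two excerpts above concrete, here is a minimal, hypothetical sketch of such a conversion pipeline with ffmpeg. The file names, bitrates and quality values are illustrative assumptions, not MediaSPIP's actual settings:

# Assumes an ffmpeg build with the libraries mentioned above
# (libx264, libtheora, libvpx); all file names are placeholders.
# Video: Ogv and WebM for HTML5, MP4 for Flash
ffmpeg -i input.mov -c:v libtheora -q:v 6 -c:a libvorbis output.ogv
ffmpeg -i input.mov -c:v libvpx -b:v 1M -c:a libvorbis output.webm
ffmpeg -i input.mov -c:v libx264 -b:v 1M -c:a aac output.mp4
# Audio: Ogg for HTML5, MP3 for Flash
ffmpeg -i input.wav -c:a libvorbis output.ogg
ffmpeg -i input.wav -c:a libmp3lame -b:a 128k output.mp3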
On other sites (6419)
-
Using ffmpeg with Flash Media Server and HDS
20 April 2012, by Jonathan
I want to use ffmpeg to encode and publish a live stream to Flash Media Server. In order to support iOS devices, I need to implement HTTP Live Streaming as well. The video needs to be in H.264 format and the audio should be AAC. I don't have much experience working with ffmpeg, and I'm having a hard time getting this to work. This is the command that I've tried (and some variations as well):
ffmpeg.exe -threads 15 -f dshow -i video="USB2.0 UVC WebCam":audio="Microphone (Realtek High Defini" \
-map_channel 0.1.1 -r 24 -acodec libvo_aacenc -ar 22050 -ab 128k -vcodec libx264 \
-s vga -vb 100k -f flv "rtmp:///livepkgr/livestream1?adbe-live-event=liveevent" \
-r 24 -acodec libvo_aacenc -ar 22050 -ab 128k -vcodec libx264 -s qvga -vb 200k \
-f flv "rtmp:///livepkgr/livestream2?adbe-live-event=liveevent" \
-r 24 -acodec libvo_aacenc -ar 22050 -ab 128k -vcodec libx264 -s vga -vb 350k \
-f flv "rtmp:///livepkgr/livestream3?adbe-live-event=liveevent"

When I run this, it appears to connect to FMS, but then I get a lot of error messages about dropped frames - I'm not sure if ANY frames get encoded successfully. My CPU usage is very high as well. I get a 404 error from FMS when I enter the URL of the *.m3u8 file for one of the individual streams (the main livestream.m3u8 file is accessible, though). I have also tried outputting to a file instead of FMS, with no success. All I get is some very garbled sound and no video.
Any suggestions for what options/commands I should use to get this working? Is anyone using ffmpeg with FMS to do HTTP Dynamic Streaming / HLS with MP4 video? I've been struggling to get HDS/HLS working for some time now, and any help would be much appreciated! It shouldn't make a difference, but I'm using FMS on Amazon EC2 with their AMI image.
Thanks!
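
For comparison, a single-output variant of the same command is sketched below. It is an illustration rather than a verified fix: the device names and the empty rtmp host are kept from the question, and the -g 48 / -preset veryfast values are assumptions (FMS's livepkgr cuts HDS/HLS segments on keyframes, so a fixed GOP of about two seconds at 24 fps is a common suggestion, and a faster preset reduces the CPU load that causes dropped frames):

ffmpeg.exe -f dshow -i video="USB2.0 UVC WebCam":audio="Microphone (Realtek High Defini" \
-vcodec libx264 -preset veryfast -profile:v baseline -s vga -vb 350k -r 24 -g 48 \
-acodec libvo_aacenc -ar 22050 -ab 128k \
-f flv "rtmp:///livepkgr/livestream?adbe-live-event=liveevent"

Getting one rendition stable first makes it easier to isolate the dropped-frames and 404 problems before adding the other two output streams.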
-
ffmpeg - color-grading video material AND display original source as picture-in-picture, using -filter_complex
5 October 2019, by raven
This is my first post on this forum, so please be gentle in case I accidentally trip over any forum rules I don't yet know of :).
I would like to apply some color-grading to underwater GoPro footage. To gauge the effect of my color settings more quickly (trial and error, as of yet), I would like to see the original input video stream as a PIP (e.g., scaled down to 50% or even 30%) in the bottom-right corner of the converted output movie.
I have one input movie that is going to be color-graded. The PIP should use the original as an input, just a scaled-down version of it.
I would like to use ffmpeg's "-filter_complex" option to do the PIP, but all examples I can find of "-filter_complex" use two already-existing movies. Instead, I would like to make the color-corrected stream an on-the-fly input to "-filter_complex", which then renders the PIP.
Is that doable, all in one go?
Both of the individual snippets below work fine; I now would like to combine them and skip the creation of an intermediate color-graded TMP output that then gets combined with the original in a final PIP step.
Your help combining these two separate steps into one single "-filter_complex" action is greatly appreciated! Thanks in advance,
raven.

[existing code snippets (M$ batch files)]
::declarations/defines::
set "INPUT="
set "TMP="
set "OUTPUT="
set "FFMPG="
set "QU=9" :: quality settings
set "CONV='"0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1
0 -1 0:0 -1 0 -1 5 -1 0 -1 0'"" :: sharpening convolution filter
::color-grading part::
%FFMPG% -i %INPUT% -vf convolution=%CONV%,colorbalance=rs=%rs%:gs=%gs%:bs=%bs%:rm=%rm%:gm=%gm%:bm=%bm%:rh=%rh%:gh=%gh%:bh=%bh% -q:v %QU% -codec:v mpeg4 %TMP%
::PIP part::
%FFMPG% -i %TMP% -i %INPUT% -filter_complex "[1]scale=iw/3:ih/3
[pip]; [0][pip] overlay=main_w-overlay_w-10:main_h-overlay_h-10" -q:v
%QU% -codec:v mpeg4 %OUTPUT%
[/existing code]
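
For reference, the two steps can in principle be merged into one command with ffmpeg's split filter: the decoded input is duplicated, one copy runs through the grading chain while the other is scaled down, and the two are overlaid. A sketch using the same variables as the snippets above (untested against the asker's exact setup):

::combined color-grading + PIP in a single -filter_complex::
rem split the decoded video: grade one copy, scale the other, then overlay
%FFMPG% -i %INPUT% -filter_complex "[0:v]split=2[full][small];[full]convolution=%CONV%,colorbalance=rs=%rs%:gs=%gs%:bs=%bs%:rm=%rm%:gm=%gm%:bm=%bm%:rh=%rh%:gh=%gh%:bh=%bh%[graded];[small]scale=iw/3:ih/3[pip];[graded][pip]overlay=main_w-overlay_w-10:main_h-overlay_h-10" -q:v %QU% -codec:v mpeg4 %OUTPUT%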
-
How to process remote audio/video stream on WebRTC server in real-time? [closed]
7 September 2020, by Kartik Rokde
I'm new to audio/video streaming. I'm using AntMedia Pro for audio/video conferencing. There will be 5-8 hosts who will be speaking, and the expected audience size is 15-20k (worth mentioning because it won't be P2P conferencing, but an MCU architecture).
I want to offer a feature where a user can request "convert voice to female / robot / whatever", which would let the user hear the manipulated voice in the conference.
From what I know, I need to do real-time processing on the server to achieve this. I want to intercept the stream on the server, do some processing (change the voice) on each of the tracks, and stream it back to the requester.
The first challenge I'm facing is how to get the stream and/or the individual tracks on the server.
I did some research on how to process remote WebRTC streams in real-time on the server, and came across keywords like RTMP ingestion and ffmpeg.
Here are a few questions I went through, but they didn't have the answers I'm looking for:
- Receive webRTC video stream using python opencv in real-time
- Extract frames as images from an RTMP stream in real-time
- android stream real time video to streaming server
I need help receiving the real-time stream on the server (any technology, preferably Python or Golang) and streaming it back.
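
One commonly suggested direction (an assumption, not a verified AntMedia recipe) is to pull the conference stream out of the media server over RTMP, run it through an audio filter, and publish the processed result back as a new stream for the requester to play. Assuming the server exposes RTMP ingest and playback endpoints, a minimal pitch-shift sketch with ffmpeg could look like this (both rtmp:// URLs are hypothetical placeholders):

# Pull the live stream over RTMP, raise the voice pitch by ~25%
# (asetrate shifts pitch and speed, atempo=0.8 restores the speed),
# leave the video untouched, and republish under a new stream key.
ffmpeg -i "rtmp://example-server/live/conference" \
-af "asetrate=44100*1.25,aresample=44100,atempo=0.8" \
-c:v copy -c:a aac \
-f flv "rtmp://example-server/live/conference-processed"

The trade-off is latency: going through RTMP and re-encoding adds seconds of delay compared with native WebRTC, which may or may not be acceptable in a live conference.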