
Advanced search
Other articles (21)
-
Accepted formats
28 January 2010
The following commands provide information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
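For instance (an illustrative addition, not part of the original article), the output of these commands can be filtered to check whether a specific codec or container is available in the local build; vp8 and webm are just example names here:

# check for a specific codec / container in the local ffmpeg build
ffmpeg -hide_banner -codecs | grep -i vp8
ffmpeg -hide_banner -formats | grep -i webm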
Video formats accepted as input
This list is not exhaustive; it highlights the main formats in use:
- h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
- m4v: raw MPEG-4 video format
- flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
- Theora
- wmv:
Possible output video formats
To begin with, we (...)
-
Videos
21 April 2011
As with "audio" documents, MediaSPIP displays videos, as far as possible, using the HTML5 video tag.
One drawback of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name one), and each browser natively supports only certain video formats.
Its main advantage, on the other hand, is native video playback in the browser, which avoids relying on Flash and (...)
-
General document management
13 May 2011
MédiaSPIP never modifies the original document that is put online.
For each document put online it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original downloadable in case the original document cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually;
The tables below explain what MédiaSPIP can do (...)
On other sites (2594)
-
Stream real-time (video+audio) via WebRTC (TCP) with chromakey && webm, best practices - how? [on hold]
18 October 2018, by Kirill K
Please share what you consider best practice for the case described below.
Are there any hardware solutions for this case? I want to take the real-time stream from an IP camera, apply a chromakey overlay, transcode it into the required codecs (VP8 + Opus), and distribute the stream via WebRTC over TCP to many users, with some kind of authentication, for example via a dynamic token.
The delay from real time should be minimal.
The solution should be stable (it must not fall over after 1 hour or 24 hours). For now I have settled on the following pipeline, but the latency is too high (delay behind the real-time stream); perhaps you can suggest a more elegant solution (a rough sketch of the FFmpeg step is given after the list):
- IP Camera (h264 + aac)
- FFmpeg (transcoding to VP8\OPUS + chromakey)
- FFserver (pack to rtp (for webcallserver))
- WebCallServer (WebRTC)
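A minimal sketch of the FFmpeg step above, assuming an RTSP camera feed, a green-screen subject and a static background image; the URL, file names, key colour, thresholds and bitrates are placeholders, not details from the original question:

# key the green background out of the camera feed, composite it over a still image,
# and transcode to VP8 + Opus in a WebM container (placeholder inputs and output)
ffmpeg -rtsp_transport tcp -i rtsp://camera.local/stream \
       -loop 1 -i background.png \
       -filter_complex "[0:v]chromakey=0x00FF00:0.1:0.1[fg];[1:v][fg]overlay=shortest=1[out]" \
       -map "[out]" -map 0:a \
       -c:v libvpx -deadline realtime -b:v 2M \
       -c:a libopus -b:a 96k \
       -f webm out.webm

This only covers the transcode/chromakey stage; whether the end-to-end latency remains acceptable once the stream is repacked for WebRTC is a separate question.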
I tried the following solutions:
- Flussonic: WebRTC via TCP is missing
- Wowza (written in Java): crashes; support could not give the exact reasons after more than 2 months of discussion, tested on different servers in different data centres
-
Cut a video in between key frames without re-encoding the full video using ffmpeg?
1 September 2020, by bguiz
I would like to cut a video at the beginning, at an arbitrary timestamp, and it needs to be precise, so the nearest key frame is not good enough.


Also, these videos are rather long - an hour or longer - so I would like to avoid re-encoding altogether if possible, or otherwise re-encode only a minimal fraction of the total duration. Thus, I would like to maximise the use of -vcodec copy.

How can I accomplish this using ffmpeg?

NOTE: See the scenario and my own rough idea for a possible solution below.



Scenario:

- Original video
  - Length of 1:00:00
  - Has a key frame every 10s
- Desired cut:
  - From 0:01:35 through till the end
- Attempt #1:
  - Using -ss 0:01:35 -i blah.mp4 -vcodec copy, what results is a file where:
    - audio starts at 0:01:30
    - video also starts at 0:01:30
    - this starts both the audio and the video too early
  - Using -i blah.mp4 -ss 0:01:35 -vcodec copy, what results is a file where:
    - audio starts at 0:01:35,
    - but the video is blank/black for the first 5 seconds, until 0:01:40, when the video starts
    - this starts the audio on time, but the video starts too late

Rough idea

- (1) cut 0:01:30 to 0:01:40
  - re-encode this to have new key frames, including one at the target time of 0:01:35
  - then cut this to get the 5 seconds from 0:01:35 through 0:01:40
- (2) cut 0:01:40 through till the end
  - without re-encoding, using -vcodec copy
- (3) ffmpeg concat the first short clip (the 5 second one) with the second long clip
I know (or can work out) the commands for (2) and (3), but am unsure what commands are needed for (1).
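A rough sketch of the three steps, under the stated assumptions (key frames at 0:01:30 and 0:01:40); libx264/aac and the intermediate file names are placeholders, and the final copy-concat only works if both clips end up with compatible codec parameters:

# (1) re-encode only the 0:01:30-0:01:40 span, forcing a key frame at the target time,
#     then cut that short clip at 0:01:35 (= 0:00:05 inside the re-encoded clip)
ffmpeg -ss 0:01:30 -i blah.mp4 -t 10 -force_key_frames 0:00:05 -c:v libx264 -c:a aac head_full.mp4
ffmpeg -ss 0:00:05 -i head_full.mp4 -c copy head.mp4

# (2) stream-copy from the 0:01:40 key frame through to the end, no re-encoding
ffmpeg -ss 0:01:40 -i blah.mp4 -c copy tail.mp4

# (3) join the two clips with the concat demuxer
printf "file 'head.mp4'\nfile 'tail.mp4'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4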



-
How to open a remote radio stream with ffmpeg's `avformat_open_input` without segfault?
19 August 2020, by Keyboard embossed forhead
I'm at the beginning stage of writing a small app to stream internet radio stations. For the moment I'm just trying to get the detected info of the input stream. While I can successfully get all the stream's details via the command line tool (ffmpeg -i ${URL}), calling the library's avformat_open_input(...) results in a SEGFAULT (a stack overflow, to be precise, when checked in valgrind).

Passing a local file URL works fine, though, in both the command line utility and the library call.


Here's a minimal example:


#include <stdio.h>
#include <libavformat/avformat.h>

int test() {
    const char * station_url = "http://stream.srg-ssr.ch/m/rsc_de/aacp_96";
    const char * test_file = "test.mp3"; // opening this local file works
    AVFormatContext * av_ctx = avformat_alloc_context();
    int ret = 0;

    avformat_network_init();

    // open the input and read its header
    if( ( ret = avformat_open_input( &av_ctx, station_url, NULL, NULL ) ) < 0 ) { // SEGFAULT
        printf( "Could not open file '%s': %i\n", station_url, ret );
        return -1;
    }

    printf( "Format %s, duration %ld us\n", av_ctx->iformat->long_name, av_ctx->duration );

    avformat_close_input( &av_ctx );
    avformat_network_deinit();
    return 0;
}



If anyone with experience acquiring remote streams using the ffmpeg libraries in C has some insights, I'd be grateful. Thanks in advance.


I'm using ffmpeg v4.3.1 on Linux.
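For reference, a hypothetical build line for the snippet above, assuming it is saved as test.c with a main() that calls test(), and that pkg-config and the libavformat development packages are installed (the file and binary names are placeholders):

gcc -Wall -o radio_test test.c $(pkg-config --cflags --libs libavformat libavutil)
./radio_test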