
Other articles (107)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
Multilang: improving the interface for multilingual blocks
18 February 2011. Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once activated, a preconfiguration is automatically set up by MediaSPIP init, so the new feature is operational right away. A separate configuration step is therefore not required. -
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)
On other sites (17697)
-
Minimal sample of muxing two streams with no reencoding (av_interleaved_write_frame fails)
19 July 2022, by Alvein. What I'm trying to do: having two files, one video-only and the other audio-only, with identical durations, I want to "join" them in a single container.


I previously made a routine which simply copies all the streams from one container to another. No reencoding, etc. This works perfectly:


while(true) {
    pkIn=av_packet_alloc();
    if(NULL==pkIn) {
        fprintf(stderr,"av_packet_alloc() failed");
        break;
    }
    iError=av_read_frame(fcIn,pkIn);
    if(0>iError) {
        if(AVERROR_EOF==iError)
            break;
        else {
            fprintf(stderr,"av_read_frame() failed");
            break;
        }
    }
    stIn=fcIn->streams[pkIn->stream_index];
    stOut=fcOut->streams[pkIn->stream_index];
    log_packet(fcIn,pkIn,"in");
    av_packet_rescale_ts(pkIn,stIn->time_base,stOut->time_base);
    pkIn->pos=-1;
    log_packet(fcOut,pkIn,"out");
    iError=av_interleaved_write_frame(fcOut,pkIn);
    if(0>iError) {
        fprintf(stderr,"av_interleaved_write_frame() failed");
        break;
    }
    av_packet_free(&pkIn);
}
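
For context, the loop above assumes the usual remuxing setup around it. The following is only a rough sketch of that setup; the file names and the stripped error handling are my assumptions, not the question's actual code:

#include <libavformat/avformat.h>

AVFormatContext *fcIn=NULL,*fcOut=NULL;
avformat_open_input(&fcIn,"input.mkv",NULL,NULL);        /* open the source container */
avformat_find_stream_info(fcIn,NULL);
avformat_alloc_output_context2(&fcOut,NULL,NULL,"output.mkv");
for(unsigned i=0;i<fcIn->nb_streams;i++) {
    AVStream *stNew=avformat_new_stream(fcOut,NULL);     /* one output stream per input stream */
    avcodec_parameters_copy(stNew->codecpar,fcIn->streams[i]->codecpar);
    stNew->codecpar->codec_tag=0;                        /* let the output muxer choose its own tag */
}
avio_open(&fcOut->pb,"output.mkv",AVIO_FLAG_WRITE);
avformat_write_header(fcOut,NULL);
/* ... packet copy loop from above ... */
av_write_trailer(fcOut);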



I then tried the analogous approach, doing the same thing but taking each stream from a distinct container, like this:


while(true) {
    if(!bVideoInEOF) {
        pkVideoIn=av_packet_alloc();
        if(NULL==pkVideoIn) {
            fprintf(stderr,"av_packet_alloc(video in) failed");
            break;
        }
        iError=av_read_frame(fcVideoIn,pkVideoIn);
        if(0>iError) {
            if(AVERROR_EOF==iError)
                bVideoInEOF=true;
            else {
                fprintf(stderr,"av_read_frame(video in) failed");
                break;
            }
        }
        if(!bVideoInEOF) {
            log_packet(fcVideoIn,pkVideoIn,"video in");
            av_packet_rescale_ts(pkVideoIn,stVideoIn->time_base,stVideoOut->time_base);
            pkVideoIn->pos=-1;
            pkVideoIn->stream_index=stVideoOut->index; // Edit (2022-07-19)
            log_packet(fcVideoIn,pkVideoIn,"video out");
            iError=av_interleaved_write_frame(fcOut,pkVideoIn);
            if(0>iError) {
                fprintf(stderr,"av_interleaved_write_frame(video out) failed");
                break;
            }
        }
        av_packet_free(&pkVideoIn);
    }
    if(!bAudioInEOF) {
        pkAudioIn=av_packet_alloc();
        if(NULL==pkAudioIn) {
            fprintf(stderr,"av_packet_alloc(audio in) failed");
            break;
        }
        iError=av_read_frame(fcAudioIn,pkAudioIn);
        if(0>iError) {
            if(AVERROR_EOF==iError)
                bAudioInEOF=true;
            else {
                fprintf(stderr,"av_read_frame(audio in) failed");
                break;
            }
        }
        if(!bAudioInEOF) {
            log_packet(fcAudioIn,pkAudioIn,"audio in");
            av_packet_rescale_ts(pkAudioIn,stAudioIn->time_base,stAudioOut->time_base);
            pkAudioIn->pos=-1;
            pkAudioIn->stream_index=stAudioOut->index; // Edit (2022-07-19)
            log_packet(fcAudioIn,pkAudioIn,"audio out");
            iError=av_interleaved_write_frame(fcOut,pkAudioIn);
            if(0>iError) {
                fprintf(stderr,"av_interleaved_write_frame(audio out) failed");
                break;
            }
        }
        av_packet_free(&pkAudioIn);
    }
    if(bVideoInEOF&&bAudioInEOF)
        break;
}



I know the previous code looks redundant, but I wanted to keep the processing of the two streams separate so that my intent is clear.


Anyway, that code ends quickly with "av_interleaved_write_frame(audio out) failed".


The error detail is "Invalid argument", and the debugger shows this:

    Application provided invalid, non monotonically increasing dts to muxer in stream 0.

If I disable either of the two main blocks, "if(!bVideoInEOF)" / "if(!bAudioInEOF)", the file is written successfully, though obviously without the disabled stream.


I'm new to this library, so I'm probably doing something really stupid or missing something obvious.


Suggestions?


Edit (2022-07-19):


By checking the logs, I noticed I was writing every frame to stream #0, hence the horrible jumps in PTS/DTS.


I edited the code by adding the corresponding "...->stream_index=" assignment before each call to av_interleaved_write_frame().


...


Though it works, I still think my code is far from perfect. Comments are welcome.
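
As a side note on the "far from perfect" part: FFmpeg's own muxing example picks which stream to feed next by comparing timestamps with av_compare_ts(). A minimal sketch of that selection step, reusing the variable names above (the pkVideoNext/pkAudioNext "one packet read ahead per input" bookkeeping is assumed here; it is not part of the original code):

/* av_compare_ts() is declared in libavutil/mathematics.h */
int iCmp=av_compare_ts(pkVideoNext->dts,stVideoIn->time_base,
                       pkAudioNext->dts,stAudioIn->time_base);
AVPacket *pkNext=(0>=iCmp)?pkVideoNext:pkAudioNext;   /* the earlier timestamp gets written first */
AVStream *stSrc=(0>=iCmp)?stVideoIn:stAudioIn;
AVStream *stDst=(0>=iCmp)?stVideoOut:stAudioOut;
av_packet_rescale_ts(pkNext,stSrc->time_base,stDst->time_base);
pkNext->pos=-1;
pkNext->stream_index=stDst->index;                    /* write to the matching output stream */
av_interleaved_write_frame(fcOut,pkNext);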


-
Revision 35733: do not go through the add function when removing a tag, otherwise the ...
2 March 2010, by brunobergot@… Log: do not go through the add function when removing a tag, otherwise the return message and the cache invalidator are not taken into account
-
From 2 videos create one split screen video, 2 separate audio streams and a thumbnail from the resultant split screen video - all in one pass
27 April 2020, by Ben Hardy. I'm using ffmpeg to create several single splitscreen videos out of 2 separate videos. The 2 videos have audio, so I want to extract the 2 videos' audio streams as 2 separate mp3s and also create a thumbnail out of the finished splitscreen video. Is it possible to do these 3 actions in one pass?

Here is the code I'm using to create the splitscreen video and the 2 separate mp3s:


ffmpeg -i input0.mov -i input1.mov -filter_complex "[0]scale=640x360[v0];[1]scale=640x360[v1];[v0][v1] hstack[v]" -map "[v]" output.mp4 -map 0:a audio0.mp3 -map 1:a audio1.mp3



However, I also want to create a thumbnail out of the resultant splitscreen video in the same pass. If I add "image.jpg" at the end, it only makes a thumbnail out of input1.mov, not the resultant splitscreen video made from input0 and input1, and it makes all the outputs only 1 frame long. Here is the code:



ffmpeg -i input0.mov -i input1.mov -filter_complex "[0]scale=640x360[v0];[1]scale=640x360[v1];[v0][v1] hstack[v]" -map "[v]" output.mp4 -map 0:a audio0.mp3 -map 1:a audio1.mp3 image.jpg



Is there a solution to this that will let me create all the correct outputs in one pass?
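
One possible approach, given as an untested sketch: duplicate the stacked video with the split filter, map one copy to output.mp4 and the other to image.jpg, and limit the image output to a single frame with -frames:v 1 so the other outputs keep their full length:

ffmpeg -i input0.mov -i input1.mov -filter_complex "[0]scale=640x360[v0];[1]scale=640x360[v1];[v0][v1]hstack[v];[v]split[vout][vthumb]" -map "[vout]" output.mp4 -map 0:a audio0.mp3 -map 1:a audio1.mp3 -map "[vthumb]" -frames:v 1 image.jpg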