
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (75)
-
Update from version 0.1 to 0.2
24 June 2013, by
Explanation of the notable changes made when moving from MediaSPIP version 0.1 to version 0.3. What's new?
Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer (...)
-
Customising by adding your logo, banner or background image
5 September 2013, by
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Participating in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You just need to sign up to the translators' mailing list to ask for more information.
At the moment MediaSPIP is only available in French and (...)
On other sites (14070)
-
What is the optimal way to synchronize frames in ffmpeg C/C++?
16 September 2022, by Turgut
I made a program that reads n videos as input, draws those videos to the GLFW window, and finally encodes it all as a single video output. The problem is that the frame rate of each input video can be different; it depends on the user's input.


For example: the user can supply two videos with frame rates of 30 and 59 FPS and may want an output at 23.797 FPS. The problem is that those videos are not in sync with each other, so in the output the input videos appear to play either too fast or too slow.


The duration of each video also depends on the input. Continuing the previous example, the first input might be 30 seconds long and the second 13 seconds, while the output is 50 seconds.


I mostly read the frames more like a sequence of moving PNGs than a proper video, since there are no I-frames or B-frames; it is just data I get from the GLFW window.


As an example, let's say we give one video as input with an FPS of 30 and a duration of 30 seconds, and our output has an FPS of 23.797 and a duration of 30 seconds. I have two functions, skip_frame and wait_frame, which respectively either read a frame twice (so we skip a frame) or do not read a frame on that iteration. Which one is used depends on the situation: whether output < input or output > input.

Here is what my code roughly looks like:


while (current_time < output_duration) {
    // Read zero, one or two frames from each input, depending on whether
    // that input has to be sped up or slowed down to match the output FPS.
    for (auto& input_video : all_inputs) {
        for (int i = 0; i < amount_to_read_from_input(); i++) {
            frame = input_video.read_frame();
        }
    }

    GLFW_window.draw_to_screen(frame);

    encoder.encode_one_video_frame(GLFW_window.read_window());
}



Basically, skip_frame and wait_frame are both inside amount_to_read_from_input() and return 2 or 0 respectively.

So far I have tried multiplying duration by FPS for both input and output, then taking the difference between the two frame counts. From our previous example we get 900 - 714 = 186.
Then I divide the output frame count by that difference: 714 / 186 = 3.8, meaning that I have to skip a frame every 3.8 iterations. (I skip a frame every 3 iterations and save the residual 0.8 for the next iteration.)
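
One way to make that bookkeeping explicit is to carry the ratio of input to output frame rates in a fractional accumulator and, for each output frame, read however many whole input frames have accrued. Below is a minimal sketch of that idea; the FrameScheduler name and its use from amount_to_read_from_input() are hypothetical, not the poster's actual code:

#include <cmath>

// Hypothetical helper: decides how many input frames to consume for the
// next output frame, given the two frame rates. It carries the fractional
// remainder so the long-run ratio stays exact (e.g. 30 fps in, 23.797 fps
// out gives about 1.26 input frames per output frame: usually 1, sometimes 2).
struct FrameScheduler {
    double step;        // input frames per output frame = in_fps / out_fps
    double accumulator; // fractional frames carried over between iterations

    FrameScheduler(double in_fps, double out_fps)
        : step(in_fps / out_fps), accumulator(0.0) {}

    // Returns 0, 1, 2, ... input frames to read for this output frame.
    int frames_to_read() {
        accumulator += step;
        int whole = static_cast<int>(std::floor(accumulator));
        accumulator -= whole;
        return whole;
    }
};

Over a 30-second run this consumes roughly 900 input frames while emitting about 714 output frames, matching the 714 / 186 figure above; the same scheme also covers the wait_frame case, since a step below 1 makes some iterations return 0.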


But it is still a second or two behind (it ends at 29 seconds for a 30-second output), and the audio is out of sync. FFmpeg handles my audio, so there are no errors on that part.


I have also seen this question, but I don't think I can use ffmpeg's own functions here, since I'm reading from a GLFW window and it comes down to my algorithm.


So the question is: what is the math here?


What can I do to make sure these frames stay stable for almost every input/output combination?


-
How to use ffmpeg in Android? [duplicate]
26 May 2015, by Shailesh
This question already has an answer here:
-
FFMPEG on Android
7 answers
I have also tried some FFmpeg demos, but I think they are not free: after some days they ask for a licence and show a message that the licence has expired. Can anyone explain a step-by-step process for how to use FFmpeg in Android?
-
FFMPEG on Android
-
Building and packaging a portable ffmpeg Linux program ('GLIBC_2.27' not found) [duplicate]
7 May 2018, by Blue4Whale
This question already has an answer here:
I am trying to build a portable version of ffmpeg to run on major Linux distributions, with the final user only having to extract the distributed targz package to the appropriate directory.
My previous builds on Ubuntu 16.04 worked fine. I upgraded to Ubuntu 18.04, and new builds, when run on Fedora 27 with LD_LIBRARY_PATH=. ./ffmpeg, show these errors:
./ffmpeg: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by ./libavfilter.so.7)
./ffmpeg: /lib64/libc.so.6: version `GLIBC_2.27' not found (required by ./libavformat.so.58)
I interpret this as: the libavxxx libraries want to dynamically link against system libraries libc and libm that would have been compiled with GLIBC_2.27, but those libs have been compiled with an older GLIBC version.
Note that the fact the errors come from libavxxx.so is not the point: if I compile ffmpeg as a fat binary (libavxxx linked statically), I get the same error from ffmpeg.
The only workaround I have found so far is to copy the build system's libc.so.6 and libm.so.6 libraries into the directory containing the binaries and make them part of the ffmpeg package.
Is there a better way to handle this issue?