
Media (91)
-
Richard Stallman and free software
19 October 2011
Updated: May 2013
Language: French
Type: Text
-
Stereo master soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (49)
-
Customizing by adding your logo, banner, or background image
5 September 2013
Some themes take three customization elements into account: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013
Present the changes in your MédiaSPIP, or the news of your projects on your MédiaSPIP, using the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news item creation form.
News item creation form: For a document of the news type, the default fields are: Publication date (customize the publication date) (...)
-
Publishing on MédiaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSPIP is at version 0.2 or higher. If needed, contact your MédiaSPIP administrator to find out.
On other sites (6728)
-
avformat/framecrcenc: Don't read after the end of side-data
6 December 2020, by Andreas Rheinhardt
Nothing guarantees that the size of side data containing a palette
is actually divisible by four (although it should be); but for
big-endian systems, an algorithm was used that presupposed this.
So switch to an algorithm that does not overread: it processes
four bytes at a time, but only if all of them are contained in
the side data.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
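For illustration, a minimal sketch (not the actual FFmpeg patch) of the bounded approach the commit describes: a four-byte step is taken only while a whole entry remains inside the side data, and any trailing bytes are handled individually.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Illustrative checksum over palette side data that never reads past its end:
 * whole 4-byte entries are consumed only while a full entry remains. */
static uint32_t checksum_palette(const uint8_t *side_data, size_t size)
{
    uint32_t sum = 0;
    size_t i = 0;

    /* memcpy avoids alignment and byte-order assumptions about the buffer. */
    for (; i + 4 <= size; i += 4) {
        uint32_t entry;
        memcpy(&entry, side_data + i, 4);
        sum += entry;
    }

    /* Trailing bytes (size not divisible by four) are handled one at a time
     * instead of being read past the end of the allocation. */
    for (; i < size; i++)
        sum += side_data[i];

    return sum;
}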
-
avutil/adler32: Switch av_adler32_update() to size_t on bump
18 March 2021, by Andreas Rheinhardt
av_adler32_update() is used by av_hash_update(), which will be switched
to size_t at the next bump. So it also has to be made to use size_t.
This is also necessary for framecrcenc.c, because the size of side data
will become a size_t, too.
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
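As a point of reference, a minimal sketch of how av_adler32_update() is typically called; after the bump described above the length parameter becomes a size_t, but a call written like this compiles against either signature.

#include <stdint.h>
#include <stdio.h>
#include <libavutil/adler32.h>

int main(void)
{
    const uint8_t buf[] = "hello";
    /* 1 is the conventional Adler-32 seed; the length argument is the one
     * whose type changes to size_t at the bump. */
    unsigned long sum = av_adler32_update(1, buf, sizeof(buf) - 1);
    printf("adler32: 0x%lx\n", sum);
    return 0;
}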
-
Encode Frames to Video with C Library
31 July 2018, by NetherGranite
For the sake of continuity, let us assume "RGB values" are the following:
typedef struct RGB {
uint8_t r, g, b;
} rgb;
However, if you feel that a different color space is more appropriate for this question, please use that instead.
How might I go about writing 2D arrays of RGB values to a video in C, given an output format and framerate?
Before I continue, I should specify that I wish to be able to do this all within one program. I am trying to add functionality to an application that would allow it to compile videos frame by frame without having to leave it.
Additionally, my needs for this functionality are extremely basic; I simply need to be able to set individual pixels to certain colors.
The closest I have come to a solution so far is the C library FFmpeg. Allow me to describe what I was able to learn on my own:
After looking through its documentation, I came across the function avcodec_send_frame(avctx, frame), whose parameters are of the types AVCodecContext* and const AVFrame* respectively. If these are not the right tools for what I am trying to do, please ignore the rest of the question and instead point me towards what I should be using.
However, I do not know which fields of avctx and frame must be set manually and which do not. The reason I assume some do not is that both are extremely large structures, but correct me if I am wrong.
Question 1: What values of an AVCodecContext and an AVFrame must be set? Of these, what is/are the recommended value(s) for each of them?
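For illustration only, a minimal sketch of the fields that are commonly set on these two structures for video encoding; it assumes YUV420P frames and a fixed frame rate, and is one workable configuration rather than an exhaustive or mandatory list.

#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

/* Commonly-set AVCodecContext fields (after avcodec_alloc_context3()): */
void configure_encoder(AVCodecContext *avctx, int width, int height, int fps)
{
    avctx->width     = width;
    avctx->height    = height;
    avctx->pix_fmt   = AV_PIX_FMT_YUV420P;      /* format the encoder consumes */
    avctx->time_base = (AVRational){1, fps};    /* unit in which pts is counted */
    avctx->framerate = (AVRational){fps, 1};
    avctx->bit_rate  = 2000000;                 /* illustrative target bitrate */
}

/* Commonly-set AVFrame fields (after av_frame_alloc()): */
int configure_frame(AVFrame *frame, const AVCodecContext *avctx)
{
    frame->format = avctx->pix_fmt;
    frame->width  = avctx->width;
    frame->height = avctx->height;
    /* frame->pts must also be set per frame before avcodec_send_frame(). */
    return av_frame_get_buffer(frame, 0);       /* allocates frame->data[] */
}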
Additionally, I was only able to find instructions on how to initialize an AVFrame (using av_frame_alloc() and av_frame_get_buffer()) but not for an AVCodecContext.
Question 2: Is there a proper way to initialize an AVCodecContext? And just in case, is the method of initializing an AVFrame described above correct? Do any of the fields of either have a proper method of initialization?
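Again purely as an illustration, a minimal sketch of the usual allocation-and-open sequence; error handling is abbreviated, and the choice of encoder (whatever avcodec_find_encoder() returns for a given codec ID) is an assumption.

#include <libavcodec/avcodec.h>

/* Typical initialization order: find an encoder, allocate a context bound to
 * it, fill in the fields shown earlier, then open the context. */
AVCodecContext *open_encoder(enum AVCodecID id, int width, int height, int fps)
{
    const AVCodec *codec = avcodec_find_encoder(id);
    if (!codec)
        return NULL;

    AVCodecContext *avctx = avcodec_alloc_context3(codec);
    if (!avctx)
        return NULL;

    avctx->width     = width;
    avctx->height    = height;
    avctx->pix_fmt   = AV_PIX_FMT_YUV420P;
    avctx->time_base = (AVRational){1, fps};
    avctx->framerate = (AVRational){fps, 1};

    if (avcodec_open2(avctx, codec, NULL) < 0) {
        avcodec_free_context(&avctx);
        return NULL;
    }
    return avctx;
}

/* The frame side matches what the question already found: av_frame_alloc(),
 * set format/width/height, then av_frame_get_buffer(); call
 * av_frame_make_writable() before touching the data planes for each frame. */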
Also, I was not able to find official documentation on how to take this AVCodecContext (which I assume contains the video information) and turn it into a video. I apologize if the documentation for this is easy to find and I just missed it.
Question 3: How do I turn an AVCodecContext into a file of a given format?
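For illustration, a condensed sketch of one way to go from an opened encoder to a file with libavformat; it guesses the container from the file name, omits most error handling, and assumes the frames already carry valid pts values.

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

/* Writes packets produced by an already-opened encoder context into a
 * container chosen from the output file name (e.g. "out.mp4"). */
int write_video(const char *filename, AVCodecContext *enc,
                AVFrame *frames[], int nb_frames)
{
    AVFormatContext *oc = NULL;
    avformat_alloc_output_context2(&oc, NULL, NULL, filename);

    AVStream *st = avformat_new_stream(oc, NULL);
    st->time_base = enc->time_base;
    avcodec_parameters_from_context(st->codecpar, enc);

    if (!(oc->oformat->flags & AVFMT_NOFILE))
        avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
    avformat_write_header(oc, NULL);

    AVPacket *pkt = av_packet_alloc();
    for (int i = 0; i <= nb_frames; i++) {
        /* Passing NULL after the last frame flushes the encoder. */
        avcodec_send_frame(enc, i < nb_frames ? frames[i] : NULL);
        while (avcodec_receive_packet(enc, pkt) == 0) {
            av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
            pkt->stream_index = st->index;
            av_interleaved_write_frame(oc, pkt); /* takes ownership of pkt's data */
        }
    }

    av_write_trailer(oc);
    av_packet_free(&pkt);
    if (!(oc->oformat->flags & AVFMT_NOFILE))
        avio_closep(&oc->pb);
    avformat_free_context(oc);
    return 0;
}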
And, given my limited knowledge:
Question 4: Are there any other parts to this process that I am missing, and do I have any of the above parts wrong?
Please keep in mind that I found out about FFmpeg for the first time very recently, and as a result, I am a complete beginner to this. Additionally, my experience with C is very limited, so I would greatly appreciate it if you could note which files need to be included with #include.
Feel free to even go as far as recommending something other than FFmpeg, just as long as it is written in C. I do not need power-user options, but I would greatly prefer flexibility in what audio and video file types the library can handle.
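On the #include point, the sketches above assume roughly the following headers; the exact set depends on which FFmpeg libraries end up being used (and on linking against libavcodec, libavformat, libavutil, and libswscale).

#include <libavcodec/avcodec.h>    /* AVCodecContext, AVFrame, AVPacket, encoders */
#include <libavformat/avformat.h>  /* AVFormatContext: muxing packets into a file */
#include <libavutil/imgutils.h>    /* image buffer helpers */
#include <libswscale/swscale.h>    /* converting RGB input to YUV420P for the encoder */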
Addressing Potential Duplicates
I apologize for how long this section is; I just want to have my bases covered. I heavily apologize, however, if this is in fact a duplicate of a question that I was just unable to find.
- ffmpeg C API documentation/tutorial [closed] — This question was too open-ended and received answers pointing the asker towards a tutorial at dranger.com, a tutorial that confusingly muddied the waters by focusing heavily on a graphics library of choice. Please do not take this as me saying it is bad; I am just enough of a beginner that I could not wade through it all.
- Encoding frames to video with ffmpeg — Although this question seems to have been asking the same thing, it is geared towards Unreal Engine 4, and the asker provided sample code, making it difficult for me to understand which parts of the accepted answer were necessary for me and which were not.
- How to write frames to a video file? — While this also asked the same thing, the accepted answer simply provides a command instead of an explanation of code.
- YUV Raw frames to video stream — While the accepted answer for this question is a command, the question states that it is looking for a way to encode frames generated by C++ code. Is there some way to run commands in code that I haven’t been able to find?
- Converting sequenced frames to video — Not only is the asker’s code written in Python, but it also seems to use already-existing image files as frames.
- How to write bitmaps as frames to H.264 with x264 in C\C++? — The accepted answer seems to describe a process that would take multiple applications, but I could be wrong as I am enough of a beginner that I am not sure exactly what it means other than Step 3.
- How to write bitmaps as frames to Ogg Theora in C\C++? — Although it isn’t a problem that the question specifies the Ogg format, it is a problem that the accepted answer suggests libtheora, which appears to only work with Ogg files.