
Media (1)
-
Richard Stallman and free software
19 October 2011
Updated: May 2013
Language: French
Type: Text
Other articles (104)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
Keeping control of your media in your hands
13 April 2011
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
Submitting bugs and patches
10 April 2011
Unfortunately, no piece of software is ever perfect...
If you think you have found a bug, report it in our ticket system, taking care to include the relevant information: the type and exact version of the browser with which you encountered the anomaly; as precise a description as possible of the problem; if possible, the steps to reproduce it; and a link to the site / page in question.
If you think you have fixed the bug yourself (...)
On other sites (12109)
-
Value of got_picture_ptr is always 0 when using avcodec_decode_video2()
4 September 2014, by user3867261
I'm using Visual Studio 2013 Professional.
The code below is a simple decoding tutorial using FFmpeg.
///> Include FFMpeg
extern "C" {
#include <libavformat/avformat.h>
}
///> Library Link On Windows System
#pragma comment( lib, "avformat.lib" )
#pragma comment( lib, "avutil.lib" )
#pragma comment( lib, "avcodec.lib" )
static void write_ascii_frame(const char *szFileName, const AVFrame *pVframe);
int main(void)
{
const char *szFilePath = "C:\\singlo\\example.avi";
///> Initialize libavformat and register all the muxers, demuxers and protocols.
av_register_all();
///> Do global initialization of network components.
avformat_network_init();
int ret;
AVFormatContext *pFmtCtx = NULL;
///> Open an input stream and read the header.
ret = avformat_open_input( &pFmtCtx, szFilePath, NULL, NULL );
if( ret != 0 ) {
av_log( NULL, AV_LOG_ERROR, "File [%s] Open Fail (ret: %d)\n", ret );
exit( -1 );
}
av_log( NULL, AV_LOG_INFO, "File [%s] Open Success\n", szFilePath );
av_log( NULL, AV_LOG_INFO, "Format: %s\n", pFmtCtx->iformat->name );
///> Read packets of a media file to get stream information.
ret = avformat_find_stream_info( pFmtCtx, NULL );
if( ret < 0 ) {
av_log( NULL, AV_LOG_ERROR, "Fail to get Stream Information\n" );
exit( -1 );
}
av_log( NULL, AV_LOG_INFO, "Get Stream Information Success\n" );
///> Find Video Stream
int nVSI = -1;
int nASI = -1;
int i;
for( i = 0 ; i < pFmtCtx->nb_streams ; i++ ) {
if( nVSI < 0 && pFmtCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO ) {
nVSI = i;
}
else if( nASI < 0 && pFmtCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO ) {
nASI = i;
}
}
if( nVSI < 0 && nASI < 0 ) {
av_log( NULL, AV_LOG_ERROR, "No Video & Audio Streams were Found\n");
exit( -1 );
}
///> Find Video Decoder
AVCodec *pVideoCodec = avcodec_find_decoder( pFmtCtx->streams[nVSI]->codec->codec_id );
if( pVideoCodec == NULL ) {
av_log( NULL, AV_LOG_ERROR, "No Video Decoder was Found\n" );
exit( -1 );
}
///> Initialize Codec Context as Decoder
if( avcodec_open2( pFmtCtx->streams[nVSI]->codec, pVideoCodec, NULL ) < 0 ) {
av_log( NULL, AV_LOG_ERROR, "Fail to Initialize Decoder\n" );
exit( -1 );
}
///> Find Audio Decoder
AVCodec *pAudioCodec = avcodec_find_decoder( pFmtCtx->streams[nASI]->codec->codec_id );
if( pAudioCodec == NULL ) {
av_log( NULL, AV_LOG_ERROR, "No Audio Decoder was Found\n" );
exit( -1 );
}
///> Initialize Codec Context as Decoder
if( avcodec_open2( pFmtCtx->streams[nASI]->codec, pAudioCodec, NULL ) < 0 ) {
av_log( NULL, AV_LOG_ERROR, "Fail to Initialize Decoder\n" );
exit( -1 );
}
AVCodecContext *pVCtx = pFmtCtx->streams[nVSI]->codec;
AVCodecContext *pACtx = pFmtCtx->streams[nASI]->codec;
AVPacket pkt;
AVFrame* pVFrame, *pAFrame;
int bGotPicture = 0; // flag for video decoding
int bGotSound = 0; // flag for audio decoding
int bPrint = 0; // temporary flag used to write only the first video frame to a file
pVFrame = avcodec_alloc_frame();
pAFrame = avcodec_alloc_frame();
while( av_read_frame( pFmtCtx, &pkt ) >= 0 ) {
///> Decoding
if( pkt.stream_index == nVSI ) {
if( avcodec_decode_video2( pVCtx, pVFrame, &bGotPicture, &pkt ) >= 0 ) {
///////////////////////problem here/////////////////////////////////////////////
if( bGotPicture ) {
///> Ready to Render Image
av_log( NULL, AV_LOG_INFO, "Got Picture\n" );
if( !bPrint ) {
write_ascii_frame( "output.txt", pVFrame );
bPrint = 1;
}
}
}
// else ( < 0 ) : Decoding Error
}
else if( pkt.stream_index == nASI ) {
if( avcodec_decode_audio4( pACtx, pAFrame, &bGotSound, &pkt ) >= 0 ) {
if( bGotSound ) {
///> Ready to Render Sound
av_log( NULL, AV_LOG_INFO, "Got Sound\n" );
}
}
// else ( < 0 ) : Decoding Error
}
///> Free the packet that was allocated by av_read_frame
av_free_packet( &pkt );
}
av_free( pVFrame );
av_free( pAFrame );
///> Close an opened input AVFormatContext.
avformat_close_input( &pFmtCtx );
///> Undo the initialization done by avformat_network_init.
avformat_network_deinit();
return 0;
}
static void write_ascii_frame(const char *szFileName, const AVFrame *frame)
{
int x, y;
uint8_t *p0, *p;
const char arrAsciis[] = " .-+#";
FILE* fp = fopen( szFileName, "w" );
if( fp ) {
/* Trivial ASCII grayscale display. */
p0 = frame->data[0];
for (y = 0; y < frame->height; y++) {
p = p0;
for (x = 0; x < frame->width; x++)
putc( arrAsciis[*(p++) / 52], fp );
putc( '\n', fp );
p0 += frame->linesize[0];
}
fflush(fp);
fclose(fp);
}
}
There is a problem in the part below:
if( avcodec_decode_video2( pVCtx, pVFrame, &bGotPicture, &pkt ) >= 0 ) {
///////////////////////problem here/////////////////////////////////////////////
if( bGotPicture ) {
///> Ready to Render Image
av_log( NULL, AV_LOG_INFO, "Got Picture\n" );
if( !bPrint ) {
write_ascii_frame( "output.txt", pVFrame );
bPrint = 1;
}
}
}
The value of bGotPicture is always 0, so I can't decode the video.
Please help me. Where does the problem come from? The video file, or my code?
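One thing worth checking, offered as a hypothesis rather than a confirmed diagnosis: with the old avcodec_decode_video2() API, codecs that buffer frames internally (H.264 with B-frames, for example) legitimately set got_picture_ptr to 0 for the first packets, and any frames still buffered when av_read_frame() reaches end of file must be drained by feeding the decoder empty packets. A minimal sketch of such a drain loop, reusing the variable names from the code above and placed right after the while loop, would look like this:

AVPacket flushPkt;
av_init_packet( &flushPkt );
flushPkt.data = NULL;   ///> An empty packet asks the decoder to return buffered frames
flushPkt.size = 0;
do {
    if( avcodec_decode_video2( pVCtx, pVFrame, &bGotPicture, &flushPkt ) < 0 )
        break;
    if( bGotPicture ) {
        av_log( NULL, AV_LOG_INFO, "Got delayed Picture\n" );
        ///> Render / write the frame here, exactly as inside the main loop
    }
} while( bGotPicture );

If bGotPicture stays 0 even after draining, the cause lies elsewhere; with Visual Studio builds, a mismatch between the FFmpeg headers used for compilation and the libraries/DLLs actually linked is another frequent culprit worth ruling out.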
-
How to encode a video from several images generated in a C++ program without writing the separate frame images to disk?
5 May 2021, by ksb496
I am writing a C++ program in which a sequence of N different frames is generated after performing some operations implemented therein. After each frame is completed, I write it to disk as IMG_%d.png, and finally I encode all of them into a video with ffmpeg using the x264 codec.



The summarized pseudocode of the main part of the program is the following:



std::vector<int> B(width*height*3);
for (i=0; i<N; i++)
{
 // void generateframe(std::vector<int> &, int)
 generateframe(B, i); // Returns different images for different i values.
 sprintf(s, "IMG_%d.png", i+1);
 WriteToDisk(B, s); // void WriteToDisk(std::vector<int>, char[])
}



The problem with this implementation is that the number of desired frames, N, is usually high (N ~ 100000), as is the resolution of the pictures (1920x1080), resulting in an overload of the disk: write cycles of dozens of GB after each execution.



In order to avoid this, I have been trying to find documentation on passing each image stored in the vector B directly to an encoder such as x264, without having to write the intermediate image files to disk. Although I found some interesting topics, none of them solved exactly what I want: many of them concern running the encoder on existing image files on disk, while others provide solutions for other programming languages such as Python (here you can find a fully satisfactory solution for that platform).



The pseudocode of what I would like to obtain is something similar to this:



std::vector<int> B(width*height*3);
video_file=open_video("Generated_Video.mp4", ...[encoder options]...);
for (i=0; i<N; i++)
{
 generateframe(B, i);
 add_frame(video_file, B); // Add the frame stored in B directly to the video.
}
close_video(video_file); // Finalize and close the output file.



According to what I have read on related topics, the x264 C++ API might be able to do this, but, as stated above, I did not find a satisfactory answer to my specific question. I also tried learning and using the ffmpeg source code directly, but its low ease of use and compilation issues forced me to discard this possibility, since I am merely a non-professional programmer (I take it just as a hobby and unfortunately I cannot spend that much time learning something so demanding).
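For reference, a rough sketch of what driving the x264 API directly could look like. This is an assumption about one possible approach, not a tested solution for this exact program: it produces a raw Annex-B .h264 stream rather than an .mp4 container, and the RGB values stored in B would still have to be converted to planar YUV 4:2:0 (I420) before being handed to the encoder.

#include <cstdio>
#include <cstdint>
#include <x264.h>

// Sketch: encode N already-converted I420 frames of size width x height into out.h264.
int encode_with_x264(int width, int height, int N)
{
    x264_param_t param;
    x264_param_default_preset(&param, "slow", NULL);
    param.i_width  = width;
    param.i_height = height;
    param.i_csp    = X264_CSP_I420;
    param.i_fps_num = 25;
    param.i_fps_den = 1;
    param.b_repeat_headers = 1; // emit SPS/PPS with every keyframe
    param.b_annexb = 1;         // Annex-B start codes, so the raw stream is playable
    x264_param_apply_profile(&param, "high");

    x264_t *encoder = x264_encoder_open(&param);
    if (encoder == NULL) return -1;
    x264_picture_t pic_in, pic_out;
    x264_picture_alloc(&pic_in, X264_CSP_I420, width, height);

    FILE *out = fopen("out.h264", "wb");
    if (out == NULL) return -1;
    x264_nal_t *nals;
    int n_nals;

    for (int i = 0; i < N; i++) {
        // Fill pic_in.img.plane[0..2] (strides in pic_in.img.i_stride[0..2])
        // with the I420 data of frame i, e.g. converted from the RGB buffer B.
        pic_in.i_pts = i;
        int bytes = x264_encoder_encode(encoder, &nals, &n_nals, &pic_in, &pic_out);
        if (bytes > 0)
            fwrite(nals[0].p_payload, 1, bytes, out); // NAL payloads of one call are contiguous
    }
    while (x264_encoder_delayed_frames(encoder)) { // flush frames buffered by lookahead/B-frames
        int bytes = x264_encoder_encode(encoder, &nals, &n_nals, NULL, &pic_out);
        if (bytes > 0)
            fwrite(nals[0].p_payload, 1, bytes, out);
    }

    fclose(out);
    x264_picture_clean(&pic_in);
    x264_encoder_close(encoder);
    return 0;
}

The resulting out.h264 can afterwards be put into an .mp4 container without re-encoding, for example with ffmpeg -framerate 25 -i out.h264 -c copy Generated_Video.mp4.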



Another possible solution that came to my mind is to call the ffmpeg binary from the C++ code and somehow transfer the image data of each iteration (stored in B) to the encoder, without "closing" the video file between frames: write the first frame, keep the encoder "waiting" for more frames, add the second frame, keep it "waiting" again, and so on until the N-th frame is reached, at which point the video file is finally "closed". However, I do not know how to proceed, or whether it is actually possible.
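Regarding that last idea, one common way to realize it (shown here as a sketch under the assumption that each frame is packed 8-bit RGB, row after row) is to start ffmpeg once through popen(), declare the input as raw video read from standard input, and then fwrite() every generated frame into the pipe; ffmpeg keeps waiting for frames until the pipe is closed, at which point it finalizes the file. The generateframe() below is only a toy stand-in for the real one in the program.

#include <cstdio>
#include <vector>

// Toy stand-in for the real generateframe(): fills B with a moving gradient.
static void generateframe(std::vector<unsigned char> &B, int i, int width, int height)
{
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            unsigned char v = static_cast<unsigned char>((x + y + i) % 256);
            unsigned char *p = &B[3 * (static_cast<size_t>(y) * width + x)];
            p[0] = v; p[1] = v; p[2] = static_cast<unsigned char>(255 - v);
        }
}

int main()
{
    const int width = 1920, height = 1080, N = 250; // small N just for the sketch
    std::vector<unsigned char> B(static_cast<size_t>(width) * height * 3);

    // -f rawvideo: the input is headerless raw frames, so pixel format, size and
    // frame rate must be stated explicitly; "-i -" means "read from stdin", which
    // popen() connects to our end of the pipe.
    const char *cmd =
        "ffmpeg -y -f rawvideo -pix_fmt rgb24 -s 1920x1080 -r 25 -i - "
        "-c:v libx264 -preset slow -crf 20 -pix_fmt yuv420p Generated_Video.mp4";

    FILE *pipe = popen(cmd, "w"); // with MSVC this would be _popen(cmd, "wb")
    if (!pipe) { perror("popen"); return 1; }

    for (int i = 0; i < N; i++) {
        generateframe(B, i, width, height);  // fill B with frame i
        fwrite(B.data(), 1, B.size(), pipe); // one complete frame per write
    }

    pclose(pipe); // closing the pipe is the EOF signal; ffmpeg then writes the trailer
    return 0;
}

Because ffmpeg is started only once and the pipe is closed only once, no intermediate image files ever touch the disk, which is exactly the behaviour described above.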



Edit 1 :



As suggested in the replies, I have been reading up on named pipes and tried to use them in my code. First of all, it should be noted that I am working under Cygwin, so my named pipes are created as they would be under Linux. The modified pseudocode I used (including the corresponding system libraries) is the following:



FILE *fd;
mkfifo("myfifo", 0666);

for (i=0; i<N; i++)
{
 fd=fopen("myfifo", "wb");
 generateframe(B, i);
 // void WriteToPipe(std::vector<int>, FILE *&fd)
 WriteToPipe(B, fd);
 fflush(fd);
 fclose(fd);
}
unlink("myfifo");



WriteToPipe is a slight modification of the previous WriteToDisk function, in which I made sure that the write buffer used to send the image data is small enough to fit within the pipe's buffering limits.
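For completeness, one possible shape of such a WriteToPipe, sketched under the assumption that the frame is sent as raw bytes (one byte per colour channel). Note that with a FILE* in the default blocking mode, a large fwrite() to a FIFO simply blocks until the reader drains the pipe, so the chunking below is mainly a way to detect errors early rather than a strict requirement.

#include <algorithm>
#include <cstdio>
#include <vector>

// Sketch: stream one frame to the FIFO in fixed-size chunks.
void WriteToPipe(const std::vector<unsigned char> &B, FILE *&fd)
{
    const size_t chunk = 4096; // well below the usual 64 KiB pipe capacity
    size_t written = 0;
    while (written < B.size()) {
        size_t n = std::min(chunk, B.size() - written);
        if (fwrite(&B[written], 1, n, fd) != n) {
            perror("fwrite to pipe");
            return;
        }
        written += n;
    }
}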



Then I compile the program and run the following command in the Cygwin terminal:



./myprogram | ffmpeg -i pipe:myfifo -c:v libx264 -preset slow -crf 20 Video.mp4




However, it remains stuck in the loop at i=0, on the "fopen" line (that is, the first fopen call). If I had not launched ffmpeg this would be natural, since the server (my program) would be waiting for a client program to connect to the "other side" of the pipe, but that is not the case. It looks like they cannot be connected through the pipe somehow, and I have not been able to find further documentation to overcome this issue. Any suggestion?
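One detail worth double-checking here, offered as a hypothesis: opening a FIFO for writing blocks until another process opens it for reading, and in the command above ffmpeg never becomes that reader. ffmpeg's pipe: protocol refers to file-descriptor numbers (pipe:0 is stdin, pipe:1 is stdout), not to file names, so -i pipe:myfifo does not open the file myfifo at all; meanwhile the shell pipe | only connects ffmpeg's stdin to the program's stdout, to which the program writes nothing. Assuming the program sends raw RGB frames, two simpler variants would be to let ffmpeg read the FIFO as an ordinary input file, or to drop the FIFO and write the frames to the program's own stdout, which the shell pipe already connects to ffmpeg:

ffmpeg -f rawvideo -pix_fmt rgb24 -s 1920x1080 -r 25 -i myfifo -c:v libx264 -preset slow -crf 20 Video.mp4

./myprogram | ffmpeg -f rawvideo -pix_fmt rgb24 -s 1920x1080 -r 25 -i - -c:v libx264 -preset slow -crf 20 Video.mp4

In either case the -f rawvideo, -pix_fmt and -s options are assumptions about the data layout and must match what WriteToPipe actually sends. Also note that closing and reopening the FIFO on every loop iteration makes the reader see end of file after the first frame; opening it once before the loop and closing it once after the last frame is the usual pattern.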

