Media (1)
-
The conservation of net art in the museum: the strategies at work
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (69)
-
The farm's regular cron tasks
1 December 2010
Managing the farm involves running several repetitive tasks, known as cron tasks, at regular intervals.
The super cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the cron of every instance in the shared hosting farm on a regular basis. Combined with a system cron on the central site of the farm, this makes it possible to generate regular visits to the various sites and to prevent the tasks of rarely visited sites from being too (...) -
Mediabox: open images in the maximum space available to the user
8 February 2011
The display of images is constrained by the width granted by the site's design (which depends on the theme in use), so images are shown at a reduced size. To take advantage of all the space available on the user's screen, it is possible to add a feature that displays the image in a multimedia box appearing above the rest of the content.
To do this, the "Mediabox" plugin must be installed.
Configuring the multimedia box
As soon as (...) -
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add yours using the form at the bottom of the page.
On other sites (8698)
-
How can we open /dev/video0 (or any v4l2 node) with ffmpeg to capture raw frames and encode them in JPEG format?
19 March 2016, by satinder
I am new to the video domain. I am working with ffmpeg now; I use the ffmpeg command line, but it is a very big challenge for me to use ffmpeg from my own C code, so I read some tutorials such as dranger.com. However, I am not able to capture from v4l2, i.e. my laptop's /dev/video0 node. I want to capture the raw video stream, overlay it with some text, and then compress it to JPEG format. I have a little code, shown below: it works for any .mp4 or otherwise encoded file, but it does not work on the /dev/video0 node. So please, can anyone help me? Thanks in advance!
Please see the following code snapshot; it is tutorial1.c from dranger.com:
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <stdio.h>
void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame) {
FILE *pFile;
char szFilename[32];
int y;
// Open file
sprintf(szFilename, "frame%d.ppm", iFrame);
pFile=fopen(szFilename, "wb");
if(pFile==NULL)
return;
// Write header
fprintf(pFile, "P6\n%d %d\n255\n", width, height);
// Write pixel data
for(y=0; y<height; y++)
  fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);
// Close file
fclose(pFile);
}
int main(int argc, char *argv[]) {
AVFormatContext *pFormatCtx = NULL;
int i, videoStream;
AVCodecContext *pCodecCtx = NULL;
AVCodec *pCodec = NULL;
AVFrame *pFrame = NULL;
AVFrame *pFrameRGB = NULL;
AVPacket packet;
int frameFinished;
int numBytes;
uint8_t *buffer = NULL;
AVDictionary *optionsDict = NULL;
struct SwsContext *sws_ctx = NULL;
if(argc < 2) {
printf("Please provide a movie file\n");
return -1;
}
// Register all formats and codecs
av_register_all();
// Open video file
if(avformat_open_input(&pFormatCtx, argv[1], NULL, NULL)!=0)
return -1; // Couldn't open file
// Retrieve stream information
if(avformat_find_stream_info(pFormatCtx, NULL)<0)
return -1; // Couldn't find stream information
// Dump information about file onto standard error
av_dump_format(pFormatCtx, 0, argv[1], 0);
// Find the first video stream
videoStream=-1;
for(i=0; i<pFormatCtx->nb_streams; i++)
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
videoStream=i;
break;
}
if(videoStream==-1)
return -1; // Didn't find a video stream
// Get a pointer to the codec context for the video stream
pCodecCtx=pFormatCtx->streams[videoStream]->codec;
// Find the decoder for the video stream
pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
if(pCodec==NULL) {
fprintf(stderr, "Unsupported codec!\n");
return -1; // Codec not found
}
// Open codec
if(avcodec_open2(pCodecCtx, pCodec, &optionsDict)<0)
return -1; // Could not open codec
// Allocate video frame
pFrame=av_frame_alloc();
// Allocate an AVFrame structure
pFrameRGB=av_frame_alloc();
if(pFrameRGB==NULL)
return -1;
// Determine required buffer size and allocate buffer
numBytes=avpicture_get_size(AV_PIX_FMT_RGB24, pCodecCtx->width,
pCodecCtx->height);
buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
sws_ctx =
sws_getContext
(
pCodecCtx->width,
pCodecCtx->height,
pCodecCtx->pix_fmt,
pCodecCtx->width,
pCodecCtx->height,
AV_PIX_FMT_RGB24,
SWS_BILINEAR,
NULL,
NULL,
NULL
);
// Assign appropriate parts of buffer to image planes in pFrameRGB
// Note that pFrameRGB is an AVFrame, but AVFrame is a superset
// of AVPicture
avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_RGB24,
pCodecCtx->width, pCodecCtx->height);
// Read frames and save first five frames to disk
i=0;
while(av_read_frame(pFormatCtx, &packet)>=0) {
// Is this a packet from the video stream?
if(packet.stream_index==videoStream) {
// Decode video frame
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,
&packet);
// Did we get a video frame?
if(frameFinished) {
// Convert the image from its native format to RGB
sws_scale
(
sws_ctx,
(uint8_t const * const *)pFrame->data,
pFrame->linesize,
0,
pCodecCtx->height,
pFrameRGB->data,
pFrameRGB->linesize
);
// Save the frame to disk
if(++i<=5)
SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height,
i);
}
}
// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
}
// Free the RGB image
av_free(buffer);
av_free(pFrameRGB);
// Free the YUV frame
av_free(pFrame);
// Close the codec
avcodec_close(pCodecCtx);
// Close the video file
avformat_close_input(&pFormatCtx);
return 0;
} -
Torn images acquired when decoding video frames with FFmpeg
22 March 2016, by bot1131357
I am trying to decode images using the tutorial at dranger.com. Below is the code I'm working with. The code is pretty much untouched aside from the ppm_save() function and the replacement of deprecated functions.
The program compiled successfully, but when I tried to process a video I got a tearing effect like this: image1 and image2.
(Side question: I tried to replace avpicture_fill(), which is deprecated, with av_image_copy_to_buffer(), but I got an access violation error, so I left it as is. I wonder whether there is a proper way to assign the frame data to a buffer.)
The library I'm using is ffmpeg-20160219-git-98a0053-win32-dev. I would really appreciate it if someone could help me with this.
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
#include <stdio.h>

// Decode video and save frames
char filename[] = "test%0.3d.ppm";
static void ppm_save(unsigned char *buf, int wrap, int xsize, int ysize,
int framenum )
{
char filenamestr[sizeof(filename)];
FILE *f;
int i;
sprintf_s(filenamestr, sizeof(filenamestr), filename, framenum);
fopen_s(&f,filenamestr,"w");
fprintf(f,"P6\n%d %d\n%d\n",xsize,ysize,255);
    for(i=0;i<ysize;i++)
        fwrite(buf + i*wrap, 1, xsize*3, f);
    fclose(f);
}

int main(int argc, char *argv[]) {
    // (declarations reconstructed from their usage below)
    AVFormatContext *pFormatCtx = NULL;
    AVCodec *codec = NULL;
    AVCodecContext *codecCtx = NULL;
    AVFrame *inframe = NULL;
    AVFrame *outframe = NULL;
    AVPacket avpkt;
    struct SwsContext *sws_ctx = NULL;
    int i, videoStream;
    int frameFinished;
    if (argc < 2) {
        printf("Please provide a movie file\n");
        return -1;
    }
    // Register all formats and codecs
av_register_all();
// Open video file
if (avformat_open_input(&pFormatCtx, argv[1], NULL, NULL) != 0)
return -1; // Couldn't open file
// Retrieve stream information
if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
return -1; // Couldn't find stream information
// Dump information about file onto standard error (Not necessary)
av_dump_format(pFormatCtx, 0, argv[1], 0);
// Find the first video stream
videoStream = -1;
for (i = 0; i < pFormatCtx->nb_streams; i++)
if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
videoStream = i;
break;
}
if (videoStream == -1)
return -1; // Didn't find a video stream
/* find the video decoder */
codec = avcodec_find_decoder(pFormatCtx->streams[videoStream]->codec->codec_id);
if (!codec) {
fprintf(stderr, "codec not found\n");
exit(1);
}
codecCtx= avcodec_alloc_context3(codec);
if(avcodec_copy_context(codecCtx, pFormatCtx->streams[videoStream]->codec) != 0) {
fprintf(stderr, "Couldn't copy codec context");
return -1; // Error copying codec context
}
/* open it */
if (avcodec_open2(codecCtx, codec, NULL) < 0) {
fprintf(stderr, "could not open codec\n");
exit(1);
}
// Allocate video frame
inframe= av_frame_alloc();
if(inframe==NULL)
return -1;
// Allocate output frame
outframe=av_frame_alloc();
if(outframe==NULL)
return -1;
// Determine required buffer size and allocate buffer
int numBytes=av_image_get_buffer_size(AV_PIX_FMT_RGB24, codecCtx->width,
codecCtx->height,1);
uint8_t* buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
// Assign appropriate parts of buffer to image planes in outframe
// Note that outframe is an AVFrame, but AVFrame is a superset
// of AVPicture
avpicture_fill((AVPicture *)outframe, buffer, AV_PIX_FMT_RGB24,
codecCtx->width, codecCtx->height );
//av_image_copy_to_buffer(buffer, numBytes,
// outframe->data, outframe->linesize,
// AV_PIX_FMT_RGB24, codecCtx->width, codecCtx->height,1);
// initialize SWS context for software scaling
sws_ctx = sws_getContext(codecCtx->width,
codecCtx->height,
codecCtx->pix_fmt,
codecCtx->width,
codecCtx->height,
AV_PIX_FMT_RGB24,
SWS_BILINEAR,
NULL,
NULL,
NULL
);
// av_init_packet(&avpkt);
i = 0;
while(av_read_frame(pFormatCtx, &avpkt)>=0) {
// Is this a packet from the video stream?
if(avpkt.stream_index==videoStream) {
// Decode video frame
avcodec_decode_video2(codecCtx, inframe, &frameFinished, &avpkt);
// Did we get a video frame?
if(frameFinished) {
// Convert the image from its native format to RGB
sws_scale(sws_ctx, (uint8_t const * const *)inframe->data,
inframe->linesize, 0, codecCtx->height,
outframe->data, outframe->linesize);
// Save the frame to disk
if(++i%15 == 0)
ppm_save(outframe->data[0], outframe->linesize[0],
codecCtx->width, codecCtx->height, i);
}
}
// Free the packet that was allocated by av_read_frame
av_packet_unref(&avpkt);
}
// Free the RGB image
av_free(buffer);
av_frame_free(&outframe);
// Free the original frame
av_frame_free(&inframe);
// Close the codecs
avcodec_close(codecCtx);
av_free(codecCtx);
// Close the video file
avformat_close_input(&pFormatCtx);
printf("\n");
return 0;
} -
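One detail worth checking in the code above: ppm_save() opens the file with mode "w", which on Windows is text mode, so fwrite() silently expands newline bytes (0x0A) in the pixel data to CR/LF, corrupting the binary payload and producing exactly this kind of shifted, torn-looking output. A binary-safe sketch in plain C (standalone; ppm_save_binary is a name of my choosing):

```c
#include <stdio.h>

/* Write an RGB24 buffer as a binary PPM. `wrap` is the stride in bytes,
 * which may be larger than xsize*3 when the frame rows are padded. */
int ppm_save_binary(const unsigned char *buf, int wrap,
                    int xsize, int ysize, const char *path)
{
    FILE *f = fopen(path, "wb");  /* "wb", not "w": binary mode matters on Windows */
    if (!f)
        return -1;
    fprintf(f, "P6\n%d %d\n255\n", xsize, ysize);
    for (int i = 0; i < ysize; i++)     /* copy row by row, skipping stride padding */
        fwrite(buf + (size_t)i * wrap, 1, (size_t)xsize * 3, f);
    fclose(f);
    return 0;
}
```

With the frame from the decode loop above, the call would look like `ppm_save_binary(outframe->data[0], outframe->linesize[0], codecCtx->width, codecCtx->height, filenamestr)`.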
ffmpeg image-to-video conversion causes artefacts
24 March 2016, by mrgloom
I want to convert a video to images, do some image processing, and convert the images back to video.
Here are my commands:
./ffmpeg -r 30 -i $VIDEO_NAME "image%d.png"
./ffmpeg -r 30 -y -i "image%d.png" output.mpg

But in the output.mpg video I have some artefacts, like in JPEG. Also I don't know how to determine the fps; I set fps=30 (-r 30).
When I use the first command above without -r it produces a lot of images (more than a million), but when I use the -r 30 option it produces the same number of images as this command, which calculates the number of frames:

FRAME_COUNT=`./ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 $VIDEO_NAME`

So my questions are:
- How to determine the frame rate?
- How to convert images to video without reducing the initial quality?

UPDATE:
Seems this helped, after I removed the -r option:
"Image sequence to video quality"
So the resulting command is:

./ffmpeg -y -i "image%d.png" -vcodec mpeg4 -b $BITRATE output_$BITRATE.avi

But I'm still not sure how to select the bitrate. How can I see the bitrate of the original .mp4 file? -