
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
Other articles (104)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Updating from version 0.1 to 0.2
24 June 2013
An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What's new?
Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, replaced by flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
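The metadata-retrieval role of FFprobe mentioned above can be illustrated with a plain command-line call; this is only a hedged sketch, not taken from the MediaSPIP sources, and "example.mp4" is a placeholder file name:
# Print container and stream metadata as JSON (ffprobe ships with FFmpeg).
ffprobe -v quiet -print_format json -show_format -show_streams example.mp4
-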
Customising by adding a logo, a banner, or a background image
5 September 2013
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
On other sites (12578)
-
Choppy video playback
2 January 2012, by Satheesh
I have developed a live wallpaper application on Android that uses a video as the wallpaper. Since a video cannot be set as a wallpaper directly, I first decode it with the ffmpeg library and render it frame by frame with OpenGL. All of this works fine. Now I want to play two different videos, one for landscape and one for portrait orientation. When the device tilts and the wallpaper has to switch from one video to the other, playback is choppy for about a second. How can I resolve this?
-
Using ffmpeg to capture frames at intervals from a video
25 October 2011, by TheShaggyBeard
So here is what I am trying to do: I want to automate creating animated GIF previews from a video file (in this case I know the files will always be in a specific format). Here is what I have:
echo 'Make a folder named "filenamemp4"'
folderName=$(find . -type f -name "*.mp4" | grep -o [[:alnum:]] | tr -d '\n' | cat)
echo 'Grab the frame rate and total number of frames and put them into xaa and xab'
qtinfo asa.mp4 | awk 'NR = 1 { print $2 }' | grep -o "^[0-9]*" | split -l 1
echo 'Assign values to variable'
frameRate=$(cat xaa)
frameTotal=$(cat xab)
videoLength=$(expr $frameTotal / $frameRate)
echo 'Take a screenshot at 10% intervals - this is the part that gives me a bitrate error. It says that the -r option has an invalid input being 1 / value of videoLength'
ffmpeg -i asa.mp4 -y -ss $videoLength -an -sameq -f image2 -s 'qcif' -r $(expr 1/$videoLength) preview%02d.jpg
echo 'Take the jpgs and mash them into an animated gif'
convert -delay 50 -loop 10 preview*.jpg preview.gif
echo 'Move the gif to the specified folder'
mv preview.gif $folderName/preview.gif
echo 'Clean Up'
find . -type f -name "*.jpg" -exec rm -rf {} \;
So perhaps there is a better way of doing this, or I am misunderstanding how to use ffmpeg's -r option. In the tutorial I read on ffmpeg for a similar scenario, they used -r 1/5 to produce frames at a 5 second interval. My assumption was that for the desired interval, you just put it in the denominator of the -r option.
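As an aside on that assumption: the fraction itself is fine, but expr is the likely culprit. Without spaces, expr 1/$videoLength returns the string unevaluated, and with spaces it would do integer division and yield 0, which -r rejects as an invalid rate. ffmpeg parses fractions on its own, so a hedged rewrite of the screenshot line (dropping the -ss seek, and noting that -sameq has since been removed from FFmpeg) might look like this:
# Pass the fraction straight to ffmpeg: 10 frames spread over $videoLength
# seconds gives one screenshot per 10% of the video.
ffmpeg -i asa.mp4 -y -an -f image2 -s qcif -r 10/$videoLength preview%02d.jpg
# On newer FFmpeg builds, the fps filter does the same sampling more predictably:
ffmpeg -i asa.mp4 -y -an -vf fps=10/$videoLength -s qcif preview%02d.jpg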
-
Why doesn't this FFmpeg code create a video from a series of images?
20 October 2011, by user551117
I have successfully compiled the FFmpeg library for use in an iOS application. I would like to use it to encode a video from a series of images, but I can't seem to make it work.
The following is the code I am using to encode the video:
AVCodec *codec;
AVCodecContext *c= NULL;
int i, out_size, size, outbuf_size;
FILE *f;
AVFrame *picture;
uint8_t *outbuf;
printf("Video encoding\n");
// find the MPEG-4 video encoder
codec = avcodec_find_encoder(CODEC_ID_MPEG4);
if (!codec) {
fprintf(stderr, "codec not found\n");
exit(1);
}
c= avcodec_alloc_context();
picture= avcodec_alloc_frame();
// put sample parameters
c->bit_rate = 400000;
// resolution must be a multiple of two
c->width = 320;
c->height = 480;
//frames per second
c->time_base= (AVRational){1,25};
c->gop_size = 10; // emit one intra frame every ten frames
c->max_b_frames=1;
c->pix_fmt = PIX_FMT_YUV420P;
//open it
if (avcodec_open(c, codec) < 0) {
fprintf(stderr, "could not open codec\n");
exit(1);
}
f = fopen([[NSTemporaryDirectory() stringByAppendingPathComponent:filename] UTF8String], "w");
if (!f) {
fprintf(stderr, "could not open %s\n",[filename UTF8String]);
exit(1);
}
// alloc image and output buffer
outbuf_size = 100000;
outbuf = malloc(outbuf_size);
size = c->width * c->height;
#pragma mark -
AVFrame* outpic = avcodec_alloc_frame();
int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
//create buffer for the output image
uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);
#pragma mark -
for(i=1;i<48;i++) {
fflush(stdout);
int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%d.png", i]];
CGImageRef newCgImage = [image CGImage];
CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);
avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGB24, c->width, c->height);
avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);
struct SwsContext* fooContext = sws_getContext(c->width, c->height,
PIX_FMT_RGB24,
c->width, c->height,
PIX_FMT_YUV420P,
SWS_FAST_BILINEAR, NULL, NULL, NULL);
// here is where I try to convert to YUV
sws_scale(fooContext, picture->data, picture->linesize, 0, c->height, outpic->data, outpic->linesize);
// encode the image
out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
printf("encoding frame %3d (size=%5d)\n", i, out_size);
fwrite(outbuf, 1, out_size, f);
free(buffer);
buffer = NULL;
}
// get the delayed frames
for(; out_size; i++) {
fflush(stdout);
out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
printf("write frame %3d (size=%5d)\n", i, out_size);
fwrite(outbuf, 1, outbuf_size, f);
}
// add sequence end code to have a real mpeg file
outbuf[0] = 0x00;
outbuf[1] = 0x00;
outbuf[2] = 0x01;
outbuf[3] = 0xb7;
fwrite(outbuf, 1, 4, f);
fclose(f);
free(outbuf);
avcodec_close(c);
av_free(c);
av_free(picture);
printf("\n");What could be wrong with this code ?