
Media (3)
-
Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (68)
-
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution name    Version name            Version number
Debian               Squeeze                 6.x.x
Debian               Wheezy                  7.x.x
Debian               Jessie                  8.x.x
Ubuntu               The Precise Pangolin    12.04 LTS
Ubuntu               The Trusty Tahr         14.04
If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...) -
Customise it by adding your logo, banner or background image
5 September 2013
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
The farm's regular Cron tasks
1 December 2010
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance in the shared-hosting pool on a regular basis. Combined with a system Cron on the central site of the pool, this makes it easy to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)
On other sites (11602)
-
Workflow for creating animated hand-drawn videos - encoding difficulties
8 December 2017, by Mircode
I want to create YouTube videos, kind of in the style of a whiteboard animation.
TL;DR question: How can I encode into a lossless RGB video format, including an alpha channel, with ffmpeg?
In more detail:
My current workflow looks like this: I draw the slides in Inkscape, group all paths that are to be drawn in one go (one scene, so to speak) and store the slide as an SVG. Then I run a custom Python script over it, which animates the slides as described at https://codepen.io/MyXoToD/post/howto-self-drawing-svg-animation. Each frame is exported as an SVG, converted to a PNG and fed to ffmpeg to make a video from it.
For every scene (a couple of paths being drawn; there are several scenes per slide) I create a separate video file, and I also store a PNG file containing the last frame of that video.
I then use kdenlive to join it all together: a video containing the drawing of the first scene, then a PNG holding the last image of that video while I talk about the drawing, then the next animated drawing, then the next still image where I continue talking, and so on. I use these intermediate images because freezing the last frame is tedious in kdenlive and I have around 600 scenes. In kdenlive I do the editing, adjust the duration of the still images and render the final video.
The background of the video is a photo of a blackboard which never changes; the strokes are paths with a filter that makes them look like chalk.
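Concretely, the piping step might look roughly like this (a sketch only; it assumes cairosvg for the SVG-to-PNG conversion and hypothetical file names, since the actual script is not shown):
import cairosvg
from subprocess import Popen, PIPE

fps = 30                                            # assumed frame rate
frame_files = ['frame_0001.svg', 'frame_0002.svg']  # hypothetical frame names

# Feed one PNG per animation frame into ffmpeg's stdin.
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png',
           '-r', str(fps), '-i', '-',
           '-vcodec', 'png', '-r', str(fps), 'scene.mov'], stdin=PIPE)
for f in frame_files:
    p.stdin.write(cairosvg.svg2png(url=f))  # render the SVG and pipe the PNG bytes
p.stdin.close()
p.wait()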
So far so good, everything almost works.
My problem is: whenever there is a transition between an animation and a still image, it is visible in the final result. I have tried several approaches to make this work, but none of them is without flaws.
My first approach was to encode the animations as MP4, like this:
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-crf', '21', '-bf', '2', '-flags', '+cgop', '-pix_fmt', 'yuv420p', '-movflags', 'faststart', '-r', str(fps), videofile], stdin=PIPE)
which is the kind of encoding recommended for YouTube. But then there is a slight brightness difference between the video and the still image, presumably a side effect of converting the PNG frames to limited-range yuv420p.
Then I tried MOV with the PNG codec:
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'png', '-r', str(fps), videofile], stdin=PIPE)
I think this encodes every frame as a PNG inside the video. It creates much bigger files, since every frame is encoded separately, but that is OK because I can use transparency for the background and store only the chalk strokes. However, sometimes I want to wipe parts of the chalk on a slide away, which I do by drawing background over it. That would work if the overlaid, animated background chunks stored in the video looked exactly like the underlying background PNG, but they don't: they are slightly blurrier, and I believe the colour changes a tiny bit as well. I don't understand this, since I thought the video just stores a sequence of PNGs. Is there some quality setting I am missing here?
Then I read about ProRes 4444 and tried that:
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-c:v', 'prores_ks', '-pix_fmt', 'yuva444p10le', '-alpha_bits', '8', '-profile:v', '4444', '-r', str(fps), videofile], stdin=PIPE)
and this actually seems to work. However, the animation files become larger than the set of PNG files they contain, probably because this format stores 12 bits per channel. This is not that bad, since only the intermediate videos grow big; the final result is still OK.
But ideally there would be a lossless codec that stores RGB with 8 bits per channel plus 8 bits of alpha, and encodes only the difference from the previous frame (because all that changes from frame to frame is a tiny bit of chalk drawing). Is there such a thing? Alternatively, I would also be OK without transparency, but then I have to store the background in every scene. If only the changes from frame to frame within one scene are stored, that should still be manageable.
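One codec that, as far as I can tell, roughly matches this wishlist is the QuickTime Animation codec (qtrle): it is lossless, stores 8 bits per channel including alpha, and run-length-encodes only the scanlines that changed since the previous frame. A sketch in the same Popen style as above (untested; fps and videofile as in the earlier snippets):
from subprocess import Popen, PIPE

fps = 30                 # assumed frame rate
videofile = 'scene.mov'  # qtrle must go into a QuickTime (.mov) container

# Pipe the PNG frames in and encode them losslessly, with alpha, using the
# QuickTime Animation codec; unchanged scanlines are skipped, so a mostly
# static chalk drawing should stay reasonably small.
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png',
           '-r', str(fps), '-i', '-',
           '-vcodec', 'qtrle', '-pix_fmt', 'argb',
           '-r', str(fps), videofile], stdin=PIPE)
If transparency turns out to be expendable, libx264rgb with '-qp', '0' would give lossless 8-bit RGB with real inter-frame prediction, at the cost of the alpha channel.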
Or should I make some fundamental changes to my workflow altogether?
Sorry that this is rather lengthy; I appreciate any help.
Cheers!
-
Video encoding and keyframes
24 February 2013, by Tishu
I am transcoding a video frame by frame, using x264+ffmpeg to encode. The original video plays fine, but the first few frames of my transcoded video show grey artefacts. I understand this is because of temporal compression, and the artefacts disappear after a few frames.
See these two pictures, which are the first and second frames. The third frame is normal (i.e. no grey artefact, and not blurry like the second one).
How can I force the first frame to be a keyframe (i.e. fully encoded in my output video) so that these artefacts do not show?
Edit - more details
Here is what I am doing, in more detail. I used bits from different tutorials to read a video frame by frame and re-encode each frame into a new video. My encoding parameters are the following:
avcodec_get_context_defaults3(c, *codec);
c->codec_id = codec_id;
c->bit_rate = output_bitrate;
/* Resolution must be a multiple of two. */
c->width = output_width;
c->height = output_height;
/* timebase: This is the fundamental unit of time (in seconds) in terms
 * of which frame timestamps are represented. For fixed-fps content,
 * timebase should be 1/framerate and timestamp increments should be
 * identical to 1. */
st->r_frame_rate.num = output_framerate_num;
st->r_frame_rate.den = output_framerate_den;
c->time_base.den = output_timebase_den;
c->time_base.num = output_timebase_num;
c->gop_size = 3; /* emit one intra frame at least every three frames */
c->pix_fmt = STREAM_PIX_FMT;
if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
    /* just for testing, we also add B frames */
    c->max_b_frames = 2;
}
if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
    /* Needed to avoid using macroblocks in which some coeffs overflow.
     * This does not happen with normal video, it just happens here as
     * the motion of the chroma plane does not match the luma plane. */
    c->mb_decision = 2;
}
c->max_b_frames = 2;
c->scenechange_threshold = 0;
c->rc_buffer_size = 0;
c->me_method = ME_ZERO;

Then I process each frame, probably doing something wrong there. The decoding bit:
while (av_read_frame(gFormatCtx, &packet) >= 0) {
    // Is this a packet from the video stream?
    if (packet.stream_index == gVideoStreamIndex) {
        // Decode video frame
        avcodec_decode_video2(gVideoCodecCtx, pCurrentFrame, &frameFinished, &packet);
        // Did we get a video frame?
        if (frameFinished) {
            [...]
            if (firstPts == -999) /* initial value */
                firstPts = packet.pts;
            deltaPts = packet.pts - firstPts;
            double seconds = deltaPts * av_q2d(gFormatCtx->streams[gVideoStreamIndex]->time_base);
            [...]
            muxing_writeVideoFrame(pCurrentFrame, packet.pts);
        }
    }
}

The actual writing:
int muxing_writeVideoFrame(AVFrame *frame, int64_t pts)
{
    frameCount = frameCount + 1;
    if (frameCount > 0)
    {
        if (video_st)
            video_pts = (double)video_st->pts.val * video_st->time_base.num /
                        video_st->time_base.den;
        else
            video_pts = 0.0;
        if (video_st && !(video_st && audio_st && audio_pts < video_pts))
        {
            frame->pts = pts; //av_rescale_q(frame_count, video_st->codec->time_base, video_st->time_base);
            write_video_frame(oc, video_st, frame);
        }
    }
    return 0;
}
static int write_video_frame(AVFormatContext *oc, AVStream *st, AVFrame *frame)
{
    int ret;
    static struct SwsContext *sws_ctx;
    //LOGI(10, frame_count);
    AVCodecContext *c = st->codec;
    /* encode the image */
    AVPacket pkt;
    int got_output;
    av_init_packet(&pkt);
    pkt.data = NULL; // packet data will be allocated by the encoder
    pkt.size = 0;
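    /* Not part of the original code: one common way to force the very first
     * frame to be a keyframe is to mark it as an intra picture before handing
     * it to the encoder (assuming frame_count starts at 0). */
    if (frame_count == 0)
        frame->pict_type = AV_PICTURE_TYPE_I;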
    ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
    if (ret < 0) {
        fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
        exit(1);
    }
    /* If size is zero, it means the image was buffered. */
    if (got_output) {
        if (c->coded_frame->key_frame)
            pkt.flags |= AV_PKT_FLAG_KEY;
        pkt.stream_index = st->index;
        /* Write the compressed frame to the media file. */
        ret = av_interleaved_write_frame(oc, &pkt);
    } else {
        ret = 0;
    }
    if (ret != 0) {
        LOGI(10, av_err2str(ret));
        exit(1);
    }
    frame_count++;
    return got_output;
}