
Other articles (91)
-
Updating from version 0.1 to 0.2
24 June 2013
An explanation of the notable changes when moving MediaSPIP from version 0.1 to version 0.2. What is new?
Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; we no longer install ffmpeg-php, which is no longer maintained, in (...)
-
Writing a news item
21 June 2013
Present the changes in your MediaSPIP, or news about your projects, via the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form for creating a news item.
News item creation form: for a document of the news type, the default fields are: publication date (customize the publication date) (...)
-
No talk of markets, clouds, etc.
10 April 2011
The vocabulary used on this site tries to avoid any reference to the buzzwords that flourish so freely on Web 2.0 and in the companies that live off it.
You are therefore invited to avoid using the terms "Brand", "Cloud", "Market", etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages the sharing of creative work on the Internet and lets authors keep as much autonomy as possible.
No "Gold or Premium contract" is therefore planned, no (...)
On other sites (12609)
-
Video not clear when concatenating videos together using ffmpeg
5 February 2018, by no name
I have 3 kinds of video: a start video played at the beginning, the main video which is the main content, and last an end video. I want to join them together. They have different resolutions, so I use a text file listing the video names, like this:
file start.mp4
file main.mp4
file end.mp4
Then I use this command with FFmpeg:
ffmpeg -f concat -i ffmpeg-sound.txt -c copy final_output.mp4
The problem is that when I watch the result, the first video's image stays on screen for the entire duration. I tried different players; every one shows a corrupted picture, but the sound works fine.
Also, the main video is created from an mp3 file and an image with subtitles; it is not original footage or downloaded from anywhere, and I want to concatenate them.
What I tried: after some searching I thought the problem was that the videos have different resolutions, so I ran this command on all 3 videos to give them the same format, hoping no problem would appear after concatenating:
ffmpeg -y -i end.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts end.mp4.ts
Then I tried to join the 3 files again in TS format, and the same problem appeared, so please, any help to solve this?
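For reference, raw TS segments are usually joined with ffmpeg's concat protocol rather than the concat demuxer. A sketch, assuming the three files follow the end.mp4.ts naming produced above; note that this kind of byte-level concatenation still requires all segments to share identical codec parameters, including resolution:
ffmpeg -i "concat:start.mp4.ts|main.mp4.ts|end.mp4.ts" -c copy -bsf:a aac_adtstoasc final_output.mp4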
Update
I tried to use this command to rescale all three videos to 1280x720:
ffmpeg -i a.mp4 -vf scale=1280:720 a1.mp4
After testing more precisely, I found that only the first video plays; when playback reaches the next video, the screen freezes on the last image of the first video. I don't know why this still happens even after I scaled them to the same size.
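For reference, a single-pass alternative is the concat filter, which re-encodes and therefore tolerates inputs with different resolutions, pixel formats, and codec parameters. This is only a sketch, assuming 1280x720 at 25 fps as the common target; the pad step keeps the square main video from being stretched:
ffmpeg -i start.mp4 -i main.mp4 -i end.mp4 -filter_complex \
"[0:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=25,format=yuv420p[v0]; \
 [1:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=25,format=yuv420p[v1]; \
 [2:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=25,format=yuv420p[v2]; \
 [v0][0:a][v1][1:a][v2][2:a]concat=n=3:v=1:a=1[v][a]" \
-map "[v]" -map "[a]" -c:v libx264 -crf 21 -c:a aac final_output.mp4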
Here is the output of ffmpeg -i start.mp4 -i a.mp4 -i end.mp4 (captured before I rescaled all the videos to 1280x720):
ffmpeg version N-89940-gb1af0e23a3 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 7.2.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libmfx --enable-amf --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth
libavutil 56. 7.100 / 56. 7.100
libavcodec 58. 9.100 / 58. 9.100
libavformat 58. 7.100 / 58. 7.100
libavdevice 58. 0.101 / 58. 0.101
libavfilter 7. 11.101 / 7. 11.101
libswscale 5. 0.101 / 5. 0.101
libswresample 3. 0.101 / 3. 0.101
libpostproc 55. 0.100 / 55. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'start.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
title : Tunepro
artist : Kiran Khan
album_artist : Kiran Khan
encoder : Lavf57.83.100
description : This video is about Tunepro
keywords : tunepro,new
Duration: 00:00:03.15, start: 0.000000, bitrate: 863 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 729 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 129 kb/s (default)
Metadata:
handler_name : SoundHandler
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'a.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.7.100
Duration: 00:05:21.24, start: 0.000000, bitrate: 188 kb/s
Stream #1:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuvj444p(pc), 300x300 [SAR 1:1 DAR 1:1], 53 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #1:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
handler_name : SoundHandler
Input #2, mov,mp4,m4a,3gp,3g2,mj2, from 'end.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
title : Tunepro
artist : Kiran Khan
album_artist : Kiran Khan
encoder : Lavf57.83.100
description : This video is about Tunepro
keywords : tunepro,new
Duration: 00:00:30.03, start: 0.000000, bitrate: 8051 kb/s
Stream #2:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 7923 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #2:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
handler_name : SoundHandler
At least one output file must be specified
-
Workflow for creating animated hand-drawn videos - encoding difficulties
8 December 2017, by Mircode
I want to create YouTube videos, kind of in the style of a whiteboard animation.
Tl;dr question: How can I encode into a lossless RGB video format with ffmpeg, including an alpha channel?
In more detail:
My current workflow looks like this: I draw the slides in Inkscape, group all paths that are to be drawn in one go (one scene, so to speak), and store the slide as SVG. Then I run a custom Python script over it, which animates the slides as described here: https://codepen.io/MyXoToD/post/howto-self-drawing-svg-animation. Each frame is exported as SVG, converted to PNG, and fed to ffmpeg to make a video from it.
For every scene (a couple of paths being drawn; there are several scenes per slide) I create a separate video file, and I also store a PNG file containing the last frame of that video.
I then use kdenlive to join it all together: a video containing the drawing of the first scene, then a PNG holding the last image of that video while I talk about the drawing, then the next animated drawing, then the next still image where I continue talking, and so on. I use these intermediate images because freezing the last frame is tedious in kdenlive and I have around 600 scenes. In kdenlive I do the editing, adjust the durations of the still images, and render the final video.
The background of the video is a photo of a blackboard which never changes; the strokes are paths with a filter that makes them look like chalk.
So far so good, everything almost works.
My problem is: whenever there is a transition between an animation and a still image, it is visible in the final result. I have tried several approaches, but none is without flaws.
My first approach was to encode the animations as MP4, like this:
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-crf', '21', '-bf', '2', '-flags', '+cgop', '-pix_fmt', 'yuv420p', '-movflags', 'faststart', '-r', str(fps), videofile], stdin=PIPE)
which is recommended for YouTube. But then there is a slight brightness difference between the video and the still image.
Then I tried MOV with the png codec:
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'png', '-r', str(fps), videofile], stdin=PIPE)
I think this encodes every frame as a PNG inside the video. It creates much bigger files, since every frame is encoded separately. But that is acceptable, since I can use transparency for the background and store just the chalk strokes. However, sometimes I want to wipe parts of the chalk off a slide, which I do by drawing background over it. That would work if those overlaid, animated background chunks stored in the video looked exactly like the underlying background PNG. But they don't: they are slightly blurrier, and I believe the color changes a tiny bit as well. I don't understand this, since I thought the video just stores a sequence of PNGs... Is there some quality setting I am missing here?
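For reference, one thing that might be worth ruling out is an implicit pixel-format conversion on the encoder side; the png video codec itself is lossless, so forcing RGBA explicitly is a cheap experiment. A sketch, identical to the command above apart from the added -pix_fmt:
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'png', '-pix_fmt', 'rgba', '-r', str(fps), videofile], stdin=PIPE)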
Then I read about ProRes 4444 and tried that:
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-c:v', 'prores_ks', '-pix_fmt', 'yuva444p10le', '-alpha_bits', '8', '-profile:v', '4444', '-r', str(fps), videofile], stdin=PIPE)
and this actually seems to work. However, the animation files become larger than the collection of PNG files they contain, probably because this format stores 12 bits per channel. That is not too bad, since only the intermediate videos grow big; the final result is still OK.
But ideally there would be a lossless codec that stores in RGB colorspace with 8 bits per channel, 8 bits for alpha, and encodes only the difference from the previous frame (because all that changes from frame to frame is a tiny bit of chalk drawing). Is there such a thing? Alternatively, I would also be fine without transparency, but then I have to store the background in every scene. Still, if only changes are stored from frame to frame within one scene, that should be manageable.
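For reference, FFV1 comes close: it is lossless and supports 8 bits per channel of RGB plus 8-bit alpha via the bgra pixel format, although it codes each frame independently rather than storing frame-to-frame differences. A sketch adapted from the Popen calls above, assuming videofile names a Matroska (.mkv) output:
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-c:v', 'ffv1', '-level', '3', '-pix_fmt', 'bgra', '-r', str(fps), videofile], stdin=PIPE)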
Or should I make some fundamental changes to my workflow altogether?
Sorry this is rather lengthy; I appreciate any help.
Cheers!
-
C++/C FFmpeg artifact build-up across video frames
6 March 2017, by ChiragRaman
Context:
I am building a recorder that captures video and audio in separate threads (using Boost thread groups), with FFmpeg 2.8.6 on Ubuntu 16.04. I followed the demuxing_decoding example here: https://www.ffmpeg.org/doxygen/2.8/demuxing_decoding_8c-example.html
Video capture specifics:
I am reading H264 off a Logitech C920 webcam and writing the video to a raw file. The issue I notice is that artifacts seem to build up across frames until a particular frame resets. Here are my frame-grabbing and decoding functions:
// Used for injecting decoding functions for different media types, allowing
// for a generic decode loop
typedef std::function<int(AVPacket *, int *, int)> PacketDecoder;
/**
* Decodes a video packet.
* If the decoding operation is successful, returns the number of bytes decoded,
* else returns the result of the decoding process from ffmpeg
*/
int decode_video_packet(AVPacket *packet,
int *got_frame,
int cached){
int ret = 0;
int decoded = packet->size;
*got_frame = 0;
//Decode video frame
ret = avcodec_decode_video2(video_decode_context,
video_frame, got_frame, packet);
if (ret < 0) {
//FFmpeg users should use av_err2str
char errbuf[128];
av_strerror(ret, errbuf, sizeof(errbuf));
std::cerr << "Error decoding video frame " << errbuf << std::endl;
decoded = ret;
} else {
if (*got_frame) {
video_frame->pts = av_frame_get_best_effort_timestamp(video_frame);
//Write to log file
AVRational *time_base = &video_decode_context->time_base;
log_frame(video_frame, time_base,
video_frame->coded_picture_number, video_log_stream);
#if( DEBUG )
std::cout << "Video frame " << ( cached ? "(cached)" : "" )
<< " coded:" << video_frame->coded_picture_number
<< " pts:" << pts << std::endl;
#endif
/*Copy decoded frame to destination buffer:
*This is required since rawvideo expects non aligned data*/
av_image_copy(video_dest_attr.video_destination_data,
video_dest_attr.video_destination_linesize,
(const uint8_t **)(video_frame->data),
video_frame->linesize,
video_decode_context->pix_fmt,
video_decode_context->width,
video_decode_context->height);
//Write to rawvideo file
fwrite(video_dest_attr.video_destination_data[0],
1,
video_dest_attr.video_destination_bufsize,
video_out_file);
//Unref the refcounted frame
av_frame_unref(video_frame);
}
}
return decoded;
}
/**
* Grabs frames in a loop and decodes them using the specified decoding function
*/
int process_frames(AVFormatContext *context,
PacketDecoder packet_decoder) {
int ret = 0;
int got_frame;
AVPacket packet;
//Initialize packet, set data to NULL, let the demuxer fill it
av_init_packet(&packet);
packet.data = NULL;
packet.size = 0;
// read frames from the file
for (;;) {
ret = av_read_frame(context, &packet);
if (ret < 0) {
if (ret == AVERROR(EAGAIN)) {
continue;
} else {
break;
}
}
//Convert timing fields to the decoder timebase
unsigned int stream_index = packet.stream_index;
av_packet_rescale_ts(&packet,
context->streams[stream_index]->time_base,
context->streams[stream_index]->codec->time_base);
AVPacket orig_packet = packet;
do {
ret = packet_decoder(&packet, &got_frame, 0);
if (ret < 0) {
break;
}
packet.data += ret;
packet.size -= ret;
} while (packet.size > 0);
av_free_packet(&orig_packet);
if(stop_recording == true) {
break;
}
}
//Flush cached frames
std::cout << "Flushing frames" << std::endl;
packet.data = NULL;
packet.size = 0;
do {
packet_decoder(&packet, &got_frame, 1);
} while (got_frame);
av_log(0, AV_LOG_INFO, "Done processing frames\n");
return ret;
}
Questions:
- How do I go about debugging the underlying issue?
- Is it possible that running the decoding code in a thread other than the one in which the decoding context was opened is causing the problem?
- Am I doing something wrong in the decoding code?
Things I have tried/found:
-
I found a thread about the same problem here: FFMPEG decoding artifacts between keyframes
(I cannot post samples of my corrupted frames due to privacy issues, but the image linked in that question depicts the same issue I have.)
However, the answer was posted by the OP without specific details about how the issue was fixed; he only mentions that he wasn't 'preserving the packets correctly', but not what was wrong or how to fix it. I do not have enough reputation to post a comment seeking clarification.
-
I was initially passing the packet into the decoding function by value, but switched to passing by pointer on the off chance that the packet was being freed incorrectly.
-
I found another question about debugging decoding issues, but couldn't find anything conclusive: How is video decoding corruption debugged?
I'd appreciate any insight. Thanks a lot!
[EDIT] In response to Ronald's answer, I am adding a little more information that wouldn't fit in a comment:
-
I am only calling decode_video_packet() from the thread processing video frames; the other thread, which processes audio frames, calls a similar decode_audio_packet() function. So only one thread calls the function. I should mention that I have set thread_count in the decoding context to 1; without that, I would get a segfault in malloc.c while flushing the cached frames.
-
I can see this being a problem if process_frames and the frame decoder function ran on separate threads, which is not the case. Is there a specific reason why it would matter whether the packet is freed within the function or after it returns? I believe the decoding function is passed a copy of the original packet because multiple decode calls may be required for an audio packet, in case the decoder does not consume the entire packet.
-
A general problem is that the corruption does not occur all the time. I could debug better if it were deterministic; otherwise I cannot even tell whether a given fix works.
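For reference, a minimal sketch of the per-packet decode step using the send/receive API (avcodec_send_packet / avcodec_receive_frame) that replaced avcodec_decode_video2 in FFmpeg 3.1; this assumes upgrading from 2.8 is an option, and it removes the manual packet.data += ret bookkeeping above. video_decode_context and video_frame are the same objects used in decode_video_packet():
int decode_video_packet_modern(AVPacket *packet) {
    // Feed one packet to the decoder; a NULL packet begins flushing.
    int ret = avcodec_send_packet(video_decode_context, packet);
    if (ret < 0)
        return ret;
    // Drain every frame the decoder can produce from the input so far.
    while (ret >= 0) {
        ret = avcodec_receive_frame(video_decode_context, video_frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0; // needs more input, or fully flushed
        if (ret < 0)
            return ret; // genuine decoding error
        // ... same av_image_copy()/fwrite() handling as in decode_video_packet() ...
        av_frame_unref(video_frame);
    }
    return 0;
}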