
Other articles (30)
-
Use, discuss, criticize
13 April 2011 — Talk to people directly involved in MediaSPIP's development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP's potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
MediaSPIP Player: potential problems
22 February 2011 — The player does not work on Internet Explorer
On Internet Explorer (at least versions 8 and 7), the plugin uses the Flash player flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
If the configuration of that Apache module contains a line that looks like the following, try removing or commenting it out to see whether the player then works correctly: /** * GeSHi (C) 2004 - 2007 Nigel McNie, (...)
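The code excerpt above is truncated; as a purely hypothetical illustration (not the article's actual line), a mod_deflate directive of the kind described usually looks like this:

# hypothetical example of a mod_deflate line that can interfere with Flash playback
AddOutputFilterByType DEFLATE text/html text/plain text/xml

-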
MediaSPIP Player: the controls
26 May 2010 — Mouse controls for the player
In addition to clicking the visible buttons of the player's interface, other actions can be performed with the mouse: Click: clicking on the video, or on the audio logo, toggles between play and pause depending on the current state; Scroll wheel: when the mouse hovers over the area used by the media, the wheel no longer scrolls the page as usual but instead decreases or (...)
On other sites (5812)
-
Having trouble compiling ffmpeg code in command terminal
10 September 2019, by m00ncake — I am having a bit of trouble compiling my C++ code in my terminal. I have FFmpeg installed, as shown below.
ffmpeg version n4.1 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 7 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
configuration: --enable-gpl --enable-version3 --disable-static --enable-shared --enable-small --enable-avisynth --enable-chromaprint --enable-frei0r --enable-gmp --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-librtmp --enable-libshine --enable-libsmbclient --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtesseract --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libxml2 --enable-libzmq --enable-libzvbi --enable-lv2 --enable-libmysofa --enable-openal --enable-opencl --enable-opengl --enable-libdrm
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
Hyper fast Audio and Video encoder
usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...

However, when I compile my C++ code, I run into a problem. I am currently trying to get NTP timestamps from private data in FFmpeg, so I need to include its internal headers. I have looked into FFmpeg's libavformat folder and it does have rtpdec.h, but when I compile on the command line (trying to include this header for RTSPState and RTSPStream as well as RTPDemuxContext), I get this error:

cf.cpp:11:10: fatal error: libavformat/rtsp.h: No such file or directory
 #include <libavformat/rtpdec.h>
          ^~~~~~~~~~~~~~~~~~~~
compilation terminated.

This is my code:
#include <cstdio>    // assumed: the names of the first two includes were lost in formatting
#include <cstdlib>   // needed for exit(), EXIT_SUCCESS and EXIT_FAILURE
#include <iostream>
#include <fstream>
#include <sstream>

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
#include <libavformat/rtpdec.h>
#include <libavformat/rtsp.h>    // added: declares RTSPState and RTSPStream used below
#include <libswscale/swscale.h>
}
int main(int argc, char** argv) {
    // Contexts needed throughout
    SwsContext *img_convert_ctx;
    AVFormatContext* format_ctx = avformat_alloc_context();
    AVCodecContext* codec_ctx = NULL;
    int video_stream_index = -1;
    // plain values instead of the original uninitialized pointers,
    // which were dereferenced before ever pointing at anything
    uint32_t last_rtcp_ts = 0;
    double base_time = 0;
    double time = 0;

    // Register everything
    av_register_all();
    avformat_network_init();

    // open RTSP
    if (avformat_open_input(&format_ctx, "rtsp://admin:password@192.168.1.67:554",
                            NULL, NULL) != 0) {
        return EXIT_FAILURE;
    }
    if (avformat_find_stream_info(format_ctx, NULL) < 0) {
        return EXIT_FAILURE;
    }

    // search for the video stream
    for (int i = 0; i < format_ctx->nb_streams; i++) {
        if (format_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
            video_stream_index = i;
    }

    AVPacket packet;
    av_init_packet(&packet);

    // open output file
    AVFormatContext* output_ctx = avformat_alloc_context();
    AVStream* stream = NULL;
    int cnt = 0;

    // start reading packets from the stream and write them to file
    av_read_play(format_ctx); // play RTSP

    // Get the codec
    AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec) {
        exit(1);
    }

    // Allocate the codec context and copy the stream parameters into it
    codec_ctx = avcodec_alloc_context3(codec);
    avcodec_get_context_defaults3(codec_ctx, codec);
    avcodec_copy_context(codec_ctx, format_ctx->streams[video_stream_index]->codec);
    std::ofstream output_file;
    if (avcodec_open2(codec_ctx, codec, NULL) < 0)
        exit(1);

    img_convert_ctx = sws_getContext(codec_ctx->width, codec_ctx->height,
        codec_ctx->pix_fmt, codec_ctx->width, codec_ctx->height, AV_PIX_FMT_RGB24,
        SWS_BICUBIC, NULL, NULL, NULL);

    int size = avpicture_get_size(AV_PIX_FMT_YUV420P, codec_ctx->width,
        codec_ctx->height);
    uint8_t* picture_buffer = (uint8_t*) (av_malloc(size));
    AVFrame* picture = av_frame_alloc();
    AVFrame* picture_rgb = av_frame_alloc();
    int size2 = avpicture_get_size(AV_PIX_FMT_RGB24, codec_ctx->width,
        codec_ctx->height);
    uint8_t* picture_buffer_2 = (uint8_t*) (av_malloc(size2));
    avpicture_fill((AVPicture *) picture, picture_buffer, AV_PIX_FMT_YUV420P,
        codec_ctx->width, codec_ctx->height);
    avpicture_fill((AVPicture *) picture_rgb, picture_buffer_2, AV_PIX_FMT_RGB24,
        codec_ctx->width, codec_ctx->height);

    while (av_read_frame(format_ctx, &packet) >= 0 && cnt < 1000) { // read ~1000 frames
        // explicit casts added: priv_data and transport_priv are void*,
        // which C++ (unlike C) will not convert implicitly
        RTSPState* rt = (RTSPState*) format_ctx->priv_data;
        RTSPStream* rtsp_stream = rt->rtsp_streams[0];
        RTPDemuxContext* rtp_demux_context = (RTPDemuxContext*) rtsp_stream->transport_priv;
        uint32_t new_rtcp_ts = rtp_demux_context->last_rtcp_timestamp;
        uint64_t last_ntp_time = 0;

        if (new_rtcp_ts != last_rtcp_ts) {
            last_rtcp_ts = new_rtcp_ts;
            last_ntp_time = rtp_demux_context->last_rtcp_ntp_time;
            // NTP counts seconds since 1900; subtracting 2208988800 converts to the Unix epoch
            uint32_t seconds = ((last_ntp_time >> 32) & 0xffffffff) - 2208988800;
            uint32_t fraction = (last_ntp_time & 0xffffffff);
            double useconds = ((double) fraction / 0xffffffff);
            base_time = seconds + useconds;
            uint32_t d_ts = rtp_demux_context->timestamp - last_rtcp_ts;
            time = base_time + d_ts / 90000.0; // video RTP clock runs at 90 kHz
            std::cout << "Time is: " << time << std::endl;
        }

        std::cout << "1 Frame: " << cnt << std::endl;
        if (packet.stream_index == video_stream_index) { // packet is video
            std::cout << "2 Is Video" << std::endl;
            if (stream == NULL) { // create stream in file
                std::cout << "3 create stream" << std::endl;
                stream = avformat_new_stream(output_ctx,
                    format_ctx->streams[video_stream_index]->codec->codec);
                avcodec_copy_context(stream->codec,
                    format_ctx->streams[video_stream_index]->codec);
                stream->sample_aspect_ratio =
                    format_ctx->streams[video_stream_index]->codec->sample_aspect_ratio;
            }
            int check = 0;
            packet.stream_index = stream->id;
            std::cout << "4 decoding" << std::endl;
            int result = avcodec_decode_video2(codec_ctx, picture, &check, &packet);
            std::cout << "Bytes decoded " << result << " check " << check << std::endl;
            if (cnt > 100) {
                sws_scale(img_convert_ctx, picture->data, picture->linesize, 0,
                    codec_ctx->height, picture_rgb->data, picture_rgb->linesize);
                std::stringstream file_name;
                file_name << "test" << cnt << ".ppm";
                output_file.open(file_name.str().c_str());
                output_file << "P3 " << codec_ctx->width << " " << codec_ctx->height
                    << " 255\n";
                for (int y = 0; y < codec_ctx->height; y++) {
                    for (int x = 0; x < codec_ctx->width * 3; x++)
                        output_file << (int) (picture_rgb->data[0]
                            + y * picture_rgb->linesize[0])[x] << " ";
                }
                output_file.close();
            }
            cnt++;
        }
        av_free_packet(&packet);
        av_init_packet(&packet);
    }

    av_free(picture);
    av_free(picture_rgb);
    av_free(picture_buffer);
    av_free(picture_buffer_2);
    av_read_pause(format_ctx);
    avio_close(output_ctx->pb);
    avformat_free_context(output_ctx);
    return (EXIT_SUCCESS);
}
The command I use to compile my code:
g++ -w cf.cpp -o cf $(pkg-config --cflags --libs libavformat libswscale libavcodec)
I am a bit new to coding in FFmpeg.
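A side note (not from the original post): rtsp.h and rtpdec.h are internal FFmpeg headers, so make install never copies them next to the public ones and pkg-config's --cflags output cannot find them. A hedged workaround sketch, assuming the matching FFmpeg 4.1 source tree is unpacked at ~/ffmpeg-4.1 (a hypothetical path): point the compiler at the source tree root as well.

# assumes ~/ffmpeg-4.1 holds the source tree of the exact version installed;
# internal structs like RTSPState change between releases, so versions must match
g++ -w cf.cpp -o cf -I"$HOME/ffmpeg-4.1" $(pkg-config --cflags --libs libavformat libswscale libavcodec)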
-
Video concatenation puts sound out of sync
9 August 2019, by mmorin — (Cross-posted from Video Production, where the question received no answers and may be more technical than usual for video production.)
I have several MOV files from a DSLR camera. I concatenate them with directions from this thread:

ffmpeg -safe 0 -f concat -i files_to_combine -vcodec copy -acodec copy temp.MOV
where files_to_combine is:

file ./DSC_0013.MOV
...
file ./DSC_0019.MOV

The result has image and sound in sync for the first clip, out of sync by fractions of a second in the second clip, and out of sync by around a second for the last clip. It is probably related to this error from the log:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f82dd802200] st: 0 edit list: 1 Missing key frame while searching for timestamp: 1000
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f82dd802200] st: 0 edit list 1 Cannot find an index entry before timestamp: 1000.
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f82dd802200] Auto-inserting h264_mp4toannexb bitstream filter

How can I trim the frames to the available sound stream, then concatenate the two videos?
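One hedged idea for the trimming half of the question (not something the original post tried): read each clip's audio duration with ffprobe and cut the clip to that length before concatenating. DSC_0013.MOV stands in for each input; with stream copy the cut can only land on a keyframe, so it is approximate:

# read the audio stream duration, then trim the whole clip to that length
audio_dur=$(ffprobe -v 0 -select_streams a:0 -show_entries stream=duration -of csv=p=0 DSC_0013.MOV)
ffmpeg -i DSC_0013.MOV -t "$audio_dur" -vcodec copy -acodec copy trimmed_0013.MOV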
The full log from the ffmpeg command is:

ffmpeg version 4.1.3 Copyright (c) 2000-2019 the FFmpeg developers
built with Apple LLVM version 10.0.1 (clang-1001.0.46.4)
configuration: --prefix=/usr/local/Cellar/ffmpeg/4.1.3_1 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-11.0.2.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-11.0.2.jdk/Contents/Home/include/darwin' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-videotoolbox --disable-libjack --disable-indev=jack --enable-libaom --enable-libsoxr
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f82dc00e000] Auto-inserting h264_mp4toannexb bitstream filter
Input #0, concat, from 'files_to_combine':
Duration: N/A, start: -0.592000, bitrate: 36888 kb/s
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuvj420p(pc, smpte170m/bt709/bt470m), 1920x1080, 35352 kb/s, 50 fps, 50 tbr, 50k tbn, 100 tbc
Metadata:
handler_name : VideoHandler
Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, stereo, s16, 1536 kb/s
Metadata:
handler_name : SoundHandler
Output #0, mov, to 'temp.MOV':
Metadata:
encoder : Lavf58.20.100
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuvj420p(pc, smpte170m/bt709/bt470m), 1920x1080, q=2-31, 35352 kb/s, 50 fps, 50 tbr, 50k tbn, 50k tbc
Metadata:
handler_name : VideoHandler
Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, stereo, s16, 1536 kb/s
Metadata:
handler_name : SoundHandler
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f82dd802200] st: 0 edit list: 1 Missing key frame while searching for timestamp: 1000
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f82dd802200] st: 0 edit list 1 Cannot find an index entry before timestamp: 1000.
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f82dd802200] Auto-inserting h264_mp4toannexb bitstream filter
frame=41886 fps=547 q=-1.0 Lsize= 3789826kB time=00:13:58.75 bitrate=37014.8kbits/s speed=10.9x
video:3631879kB audio:157123kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.021759%

Update (1 July 2019)

I thought that the files had a problem at the beginning or at the end, so I trimmed one second from each end, but the sound was still out of sync:

FILES=files_to_combine
OUTPUT=show2.MOV
rm $FILES
for i in 3 4 5 6 7 8 9; do
rm ${i}.MOV
duration=$(ffprobe -v 0 -show_entries format=duration -of compact=p=0:nk=1 DSC_001${i}.MOV)
trimmed=$(echo $duration - 1 | bc)
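# note: with -ss 1 before -i, -t $trimmed reads 1 + (duration - 1) = duration seconds, so only the start is actually trimmed, not the end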
ffmpeg -ss 1 -t $trimmed -i DSC_001${i}.MOV -vcodec copy -acodec copy ${i}.MOV
echo file ./${i}.MOV >> $FILES
done
rm $OUTPUT
ffmpeg -safe 0 -f concat -i $FILES -vcodec copy -acodec copy $OUTPUT

When I trim a single file near the end, the sound and video do not seem out of sync:
ffmpeg -ss 00:09:20 -t 20 -i DSC_0014.MOV -vcodec copy -acodec copy end.MOV
When I concatenate only 30 seconds from each video, the result seems OK :
FILES=files_to_combine
OUTPUT=show2.MOV
rm $FILES
for i in 3 4 5 6 7 8 9; do
rm ${i}.MOV
duration=$(ffprobe -v 0 -show_entries format=duration -of compact=p=0:nk=1 DSC_001${i}.MOV)
start=$(echo $duration - 30 | bc)
end=$(echo $duration - 1 | bc)
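# note: -t takes a duration, not an end timestamp; $end exceeds what remains after -ss $start, so each piece simply runs to the end of the file (~30 s)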
ffmpeg -ss $start -t $end -i DSC_001${i}.MOV -vcodec copy -acodec copy ${i}.MOV
echo file ./${i}.MOV >> $FILES
done
rm $OUTPUT
ffmpeg -safe 0 -f concat -i $FILES -vcodec copy -acodec copy $OUTPUT

This last concatenation gives this error multiple times:
[mov @ 0x7fc3c7837400] Non-monotonous DTS in output stream 0:0; previous: 9080205, current: 9080200; changing to 9080206. This may result in incorrect timestamps in the output file.
So I am guessing that the problem is small differences in timestamps that accumulate and become more noticeable with longer durations and the concatenation of multiple files.

For reference, the DSLR that shot these clips is a Nikon D3300, and the result of ffprobe on one of the files is:

$ ffprobe DSC_0017.MOV -hide_banner
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fab70003800] st: 0 edit list: 1 Missing key frame while searching for timestamp: 1000
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fab70003800] st: 0 edit list 1 Cannot find an index entry before timestamp: 1000.
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'DSC_0017.MOV':
Metadata:
major_brand : qt
minor_version : 537331968
compatible_brands: qt niko
creation_time : 2019-06-12T23:52:37.000000Z
Duration: 00:09:53.58, start: 0.000000, bitrate: 36843 kb/s
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuvj420p(pc, smpte170m/bt709/bt470m), 1920x1080, 35300 kb/s, 50 fps, 50 tbr, 50k tbn, 100 tbc (default)
Metadata:
creation_time : 2019-06-12T23:52:37.000000Z
Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, 2 channels, s16, 1536 kb/s (default)
Metadata:
creation_time : 2019-06-12T23:52:37.000000Z

Update (9 August 2019)

I concatenated the files in iMovie and the sound and image are not as far out of sync as with FFmpeg. Maybe iMovie aligns the timestamps at the end of each clip instead of concatenating the audio and image streams separately.
I ran the concatenation again with the latest ffmpeg 4.1.4_1 on these files and others from the same camera. The audio and image are in sync in one case (the result lasts 46 minutes) and out of sync in another (the result lasts 48 minutes).
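A hedged workaround sketch (not something the original post tested): keep the video stream copy but re-encode the audio while concatenating, letting the aresample filter insert or drop samples so the audio stays pinned to its timestamps instead of drifting further with each clip:

ffmpeg -safe 0 -f concat -i files_to_combine -vcodec copy -af aresample=async=1 -acodec pcm_s16le temp.MOV

-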
How can I fix CalledProcessError in ffmpeg
7 December 2020, by Mario — I hope someone can help troubleshoot this problem. I'm trying to save the loss plots out of Keras in the form of an animation (an animated GIF of the loss curves was embedded here in the original post), but I have been facing the following error, and ultimately I can't save the animation:
MovieWriter stderr:
[h264_v4l2m2m @ 0x55a67176f430] Could not find a valid device
[h264_v4l2m2m @ 0x55a67176f430] can't configure encoder
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

---------------------------------------------------------------------------
BrokenPipeError Traceback (most recent call last)
~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in saving(self, fig, outfile, dpi, *args, **kwargs)
 229 try:
--> 230 yield self
 231 finally:

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in save(self, filename, writer, fps, dpi, codec, bitrate, extra_args, metadata, extra_anim, savefig_kwargs, progress_callback)
 1155 frame_number += 1
-> 1156 writer.grab_frame(**savefig_kwargs)
 1157 

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in grab_frame(self, **savefig_kwargs)
 383 self.fig.savefig(self._frame_sink(), format=self.frame_format,
--> 384 dpi=self.dpi, **savefig_kwargs)
 385 

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/figure.py in savefig(self, fname, transparent, **kwargs)
 2179 
-> 2180 self.canvas.print_figure(fname, **kwargs)
 2181 

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/backend_bases.py in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, **kwargs)
 2081 bbox_inches_restore=_bbox_inches_restore,
-> 2082 **kwargs)
 2083 finally:

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/backends/backend_agg.py in print_raw(self, filename_or_obj, *args, **kwargs)
 445 cbook.open_file_cm(filename_or_obj, "wb") as fh:
--> 446 fh.write(renderer._renderer.buffer_rgba())
 447 

BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

CalledProcessError Traceback (most recent call last)
 in <module>
 17 print(f'{model_type.upper()} Train Time: {Timer} sec')
 18 
---> 19 create_loss_animation(model_type, hist.history['loss'], hist.history['val_loss'], epoch)
 20 
 21 evaluate(model, trainX, trainY, testX, testY, scores_train, scores_test)

 in create_loss_animation(model_type, loss_hist, val_loss_hist, epoch)
 34 
 35 ani = matplotlib.animation.FuncAnimation(fig, animate, fargs=(l1, l2, loss, val_loss, title), repeat=True, interval=1000, repeat_delay=1000)
---> 36 ani.save(f'loss_animation_{model_type}_oneDataset.mp4', writer=writer)

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in save(self, filename, writer, fps, dpi, codec, bitrate, extra_args, metadata, extra_anim, savefig_kwargs, progress_callback)
 1154 progress_callback(frame_number, total_frames)
 1155 frame_number += 1
-> 1156 writer.grab_frame(**savefig_kwargs)
 1157 
 1158 # Reconnect signal for first draw if necessary

~/anaconda3/envs/CR7/lib/python3.6/contextlib.py in __exit__(self, type, value, traceback)
 97 value = type()
 98 try:
---> 99 self.gen.throw(type, value, traceback)
 100 except StopIteration as exc:
 101 # Suppress StopIteration *unless* it's the same exception that

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in saving(self, fig, outfile, dpi, *args, **kwargs)
 230 yield self
 231 finally:
--> 232 self.finish()
 233 
 234 

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in finish(self)
 365 def finish(self):
 366 '''Finish any processing for writing the movie.'''
--> 367 self.cleanup()
 368 
 369 def grab_frame(self, **savefig_kwargs):

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in cleanup(self)
 409 if self._proc.returncode:
 410 raise subprocess.CalledProcessError(
--> 411 self._proc.returncode, self._proc.args, out, err)
 412 
 413 @classmethod

CalledProcessError: Command '['/usr/bin/ffmpeg', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-s', '720x720', '-pix_fmt', 
'rgba', '-r', '5', '-loglevel', 'error', '-i', 'pipe:', '-vcodec', 'h264', '-pix_fmt', 'yuv420p', '-b', '800k', '-y', 
'loss_animation_CNN_oneDataset.mp4']' returned non-zero exit status 1.



I tried to ignore the error as suggested in this answer, but that does not seem to be the case here. I also checked a similar case, but its answer about getting a static git binary does not apply either, since I am not specifically converting PNG to MP4!



My code is as follows:



import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import animation

plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'

def animate(i, data1, data2, line1, line2):
 temp1 = data1.iloc[:int(i+1)]
 temp2 = data2.iloc[:int(i+1)]

 line1.set_data(temp1.index, temp1.value)
 line2.set_data(temp2.index, temp2.value)

 return (line1, line2)


def create_loss_animation(model_type, data1, data2):
 fig = plt.figure()
 plt.title(f'Loss on Train & Test', fontsize=25)
 plt.xlabel('Epoch', fontsize=20)
 plt.ylabel('Loss MSE for Sx-Sy & Sxy', fontsize=20)
 plt.xlim(min(data1.index.min(), data2.index.min()), max(data1.index.max(), data2.index.max()))
 plt.ylim(min(data1.value.min(), data2.value.min()), max(data1.value.max(), data2.value.max()))

 l1, = plt.plot([], [], 'o-', label='Train Loss', color='b', markevery=[-1])
 l2, = plt.plot([], [], 'o-', label='Test Loss', color='r', markevery=[-1])
 plt.legend(loc='center right', fontsize='xx-large')

 Writer = animation.writers['ffmpeg']
 writer = Writer(fps=5, bitrate=1800)

 ani = matplotlib.animation.FuncAnimation(fig, animate, fargs=(data1, data2, l1, l2), repeat=True, interval=1000, repeat_delay=1000)
 ani.save(f'{model_type}.mp4', writer=writer)

# create datasets
x = np.linspace(0,150,50)
y1 = 41*np.exp(-x/20)
y2 = 35*np.exp(-x/50)

my_data_number_1 = pd.DataFrame({'x':x, 'value':y1}).set_index('x')
my_data_number_2 = pd.DataFrame({'x':x, 'value':y2}).set_index('x')

create_loss_animation('test', my_data_number_1, my_data_number_2)
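The failing command shows what went wrong: matplotlib asked for -vcodec h264, and this ffmpeg build resolved that generic name to the h264_v4l2m2m hardware encoder, which needs a V4L2 /dev/video* device that the machine does not have. A hedged fix sketch, assuming this ffmpeg build includes libx264: request the software encoder explicitly through FFMpegWriter's codec and extra_args parameters.

# force the software x264 encoder instead of letting ffmpeg pick h264_v4l2m2m;
# assumes ffmpeg was built with --enable-libx264
Writer = animation.writers['ffmpeg']
writer = Writer(fps=5, bitrate=1800, codec='libx264',
                extra_args=['-pix_fmt', 'yuv420p'])  # yuv420p keeps the file playable in most players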