
Media (91)
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
-
USGS Real-time Earthquakes
8 September 2011
Updated: September 2011
Language: French
Type: Text
-
SWFUpload Process
6 September 2011
Updated: September 2011
Language: French
Type: Text
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
-
Podcasting Legal Guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
-
Creative Commons informational flyer
16 May 2011
Updated: July 2013
Language: English
Type: Text
Other articles (68)
-
Customizing by adding your logo, banner, or background image
5 September 2013 — Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013 — Present changes to your MediaSPIP, or news about your projects, in the news section of your MediaSPIP.
In the default MediaSPIP theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form for creating a news item.
News item creation form: for a document of type news item, the default fields are: publication date (customize the publication date) (...)
User profiles
12 April 2011 — Each user has a profile page for editing their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
Users can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)
On other sites (6648)
-
Python : Extracting device and lens information from video metadata
14 May 2023, by cat_got_my_tongue — I am interested in extracting device and lens information from videos: specifically, the make and model of the device and the focal length. I was able to do this successfully for still images using the exifread module and extract a whole bunch of very useful information:

image type : MPO
Image ImageDescription: Shot with DxO ONE
Image Make: DxO
Image Model: DxO ONE
Image Orientation: Horizontal (normal)
Image XResolution: 300
Image YResolution: 300
Image ResolutionUnit: Pixels/Inch
Image Software: V3.0.0 (2b448a1aee) APP:1.0
Image DateTime: 2022:04:05 14:53:45
Image YCbCrCoefficients: [299/1000, 587/1000, 57/500]
Image YCbCrPositioning: Centered
Image ExifOffset: 158
Thumbnail Compression: JPEG (old-style)
Thumbnail XResolution: 300
Thumbnail YResolution: 300
Thumbnail ResolutionUnit: Pixels/Inch
Thumbnail JPEGInterchangeFormat: 7156
Thumbnail JPEGInterchangeFormatLength: 24886
EXIF ExposureTime: 1/3
EXIF FNumber: 8
EXIF ExposureProgram: Aperture Priority
EXIF ISOSpeedRatings: 100
EXIF SensitivityType: ISO Speed
EXIF ISOSpeed: 100
EXIF ExifVersion: 0221
EXIF DateTimeOriginal: 2022:04:05 14:53:45
EXIF DateTimeDigitized: 2022:04:05 14:53:45
EXIF ComponentsConfiguration: CrCbY
EXIF CompressedBitsPerPixel: 3249571/608175
EXIF ExposureBiasValue: 0
EXIF MaxApertureValue: 212/125
EXIF SubjectDistance: 39/125
EXIF MeteringMode: MultiSpot
EXIF LightSource: Unknown
EXIF Flash: Flash did not fire
EXIF FocalLength: 1187/100
EXIF SubjectArea: [2703, 1802, 675, 450]
EXIF MakerNote: [68, 88, 79, 32, 79, 78, 69, 0, 12, 0, 0, 0, 21, 0, 3, 0, 5, 0, 2, 0, ... ]
EXIF SubSecTime: 046
EXIF SubSecTimeOriginal: 046
EXIF SubSecTimeDigitized: 046
EXIF FlashPixVersion: 0100
EXIF ColorSpace: sRGB
EXIF ExifImageWidth: 5406
EXIF ExifImageLength: 3604
Interoperability InteroperabilityIndex: R98
Interoperability InteroperabilityVersion: [48, 49, 48, 48]
EXIF InteroperabilityOffset: 596
EXIF FileSource: Digital Camera
EXIF ExposureMode: Auto Exposure
EXIF WhiteBalance: Auto
EXIF DigitalZoomRatio: 1
EXIF FocalLengthIn35mmFilm: 32
EXIF SceneCaptureType: Standard
EXIF ImageUniqueID: C01A1709306530020220405185345046
EXIF BodySerialNumber: C01A1709306530



Unfortunately, I have been unable to extract this kind of info from videos so far.
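
For reference, the still-image result above came from a call along these lines (a minimal sketch of typical exifread usage; the file name is a placeholder):

import exifread

# Parse EXIF tags from a still image; process_file returns a dict of
# tag name -> value, which yields the listing shown above.
with open("my_photo.jpg", "rb") as f:
    tags = exifread.process_file(f)

for name, value in tags.items():
    print(f"{name}: {value}")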


This is what I have tried so far for videos, with the ffmpeg module (the ffmpeg-python package):

import ffmpeg
from pprint import pprint

test_video = "my_video.mp4"
pprint(ffmpeg.probe(test_video)["streams"])



And the output I get contains a lot of info, but nothing related to the device or lens, which is what I am looking for:


[{'avg_frame_rate': '30/1',
 'bit_rate': '1736871',
 'bits_per_raw_sample': '8',
 'chroma_location': 'left',
 'codec_long_name': 'H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10',
 'codec_name': 'h264',
 'codec_tag': '0x31637661',
 'codec_tag_string': 'avc1',
 'codec_time_base': '1/60',
 'codec_type': 'video',
 'coded_height': 1088,
 'coded_width': 1920,
 'display_aspect_ratio': '16:9',
 'disposition': {'attached_pic': 0,
 'clean_effects': 0,
 'comment': 0,
 'default': 1,
 'dub': 0,
 'forced': 0,
 'hearing_impaired': 0,
 'karaoke': 0,
 'lyrics': 0,
 'original': 0,
 'timed_thumbnails': 0,
 'visual_impaired': 0},
 'duration': '20.800000',
 'duration_ts': 624000,
 'has_b_frames': 0,
 'height': 1080,
 'index': 0,
 'is_avc': 'true',
 'level': 40,
 'nal_length_size': '4',
 'nb_frames': '624',
 'pix_fmt': 'yuv420p',
 'profile': 'Constrained Baseline',
 'r_frame_rate': '30/1',
 'refs': 1,
 'sample_aspect_ratio': '1:1',
 'start_pts': 0,
 'start_time': '0.000000',
 'tags': {'creation_time': '2021-05-08T13:23:20.000000Z',
 'encoder': 'AVC Coding',
 'handler_name': 'VideoHandler',
 'language': 'und'},
 'time_base': '1/30000',
 'width': 1920},
 {'avg_frame_rate': '0/0',
 'bit_rate': '79858',
 'bits_per_sample': 0,
 'channel_layout': 'stereo',
 'channels': 2,
 'codec_long_name': 'AAC (Advanced Audio Coding)',
 'codec_name': 'aac',
 'codec_tag': '0x6134706d',
 'codec_tag_string': 'mp4a',
 'codec_time_base': '1/48000',
 'codec_type': 'audio',
 'disposition': {'attached_pic': 0,
 'clean_effects': 0,
 'comment': 0,
 'default': 1,
 'dub': 0,
 'forced': 0,
 'hearing_impaired': 0,
 'karaoke': 0,
 'lyrics': 0,
 'original': 0,
 'timed_thumbnails': 0,
 'visual_impaired': 0},
 'duration': '20.864000',
 'duration_ts': 1001472,
 'index': 1,
 'max_bit_rate': '128000',
 'nb_frames': '978',
 'profile': 'LC',
 'r_frame_rate': '0/0',
 'sample_fmt': 'fltp',
 'sample_rate': '48000',
 'start_pts': 0,
 'start_time': '0.000000',
 'tags': {'creation_time': '2021-05-08T13:23:20.000000Z',
 'handler_name': 'SoundHandler',
 'language': 'und'},
 'time_base': '1/48000'}]



Are these pieces of info available for videos? Should I be using a different package?


Thanks.


Edit:


pprint(ffmpeg.probe(test_video)["format"])
gives

{'bit_rate': '1815244',
 'duration': '20.864000',
 'filename': 'my_video.mp4',
 'format_long_name': 'QuickTime / MOV',
 'format_name': 'mov,mp4,m4a,3gp,3g2,mj2',
 'nb_programs': 0,
 'nb_streams': 2,
 'probe_score': 100,
 'size': '4734158',
 'start_time': '0.000000',
 'tags': {'artist': 'Microsoft Game DVR',
 'compatible_brands': 'mp41isom',
 'creation_time': '2021-05-08T12:12:33.000000Z',
 'major_brand': 'mp42',
 'minor_version': '0',
 'title': 'Snipping Tool'}}
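
Note that here the format-level tags only identify the capture software (Microsoft Game DVR / Snipping Tool), so this particular file carries no camera information at all. When a camera does write device metadata, it usually appears among these same format-level tags; iPhone footage, for instance, uses QuickTime keys such as com.apple.quicktime.make and com.apple.quicktime.model. A hedged check (the key list is illustrative, since vendors differ):

import ffmpeg

probe = ffmpeg.probe("my_video.mp4")
tags = probe.get("format", {}).get("tags", {})

# Keys under which device info commonly appears; treat this list as
# illustrative rather than exhaustive.
for key in ("make", "model",
            "com.apple.quicktime.make", "com.apple.quicktime.model"):
    if key in tags:
        print(f"{key}: {tags[key]}")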



-
Capture and encode desktop with libav in real time not giving correct images
3 September 2022, by thoxey — As part of a larger project I want to be able to capture and encode the desktop frame by frame in real time. I have the following test code to reproduce the issue shown in the screenshot:


#include <iostream>
#include <fstream>
#include <string>

extern "C"
{
#include "libavdevice/avdevice.h"
#include "libavutil/channel_layout.h"
#include "libavutil/mathematics.h"
#include "libavutil/opt.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
}


/* 5 seconds stream duration */
#define STREAM_DURATION 5.0
#define STREAM_FRAME_RATE 25 /* 25 images/s */
#define STREAM_NB_FRAMES ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P /* default pix_fmt */

int videoStreamIndx;
int framerate = 30;

int width = 1920;
int height = 1080;

int encPacketCounter;

AVFormatContext* ifmtCtx;
AVCodecContext* avcodecContx;
AVFormatContext* ofmtCtx;
AVStream* videoStream;
AVCodecContext* avCntxOut;
AVPacket* avPkt;
AVFrame* avFrame;
AVFrame* outFrame;
SwsContext* swsCtx;

std::ofstream fs;


AVDictionary* ConfigureScreenCapture()
{

 AVDictionary* options = NULL;
 //Try adding "-rtbufsize 100M" as in https://stackoverflow.com/questions/6766333/capture-windows-screen-with-ffmpeg
 av_dict_set(&options, "rtbufsize", "100M", 0);
 av_dict_set(&options, "framerate", std::to_string(framerate).c_str(), 0);
 char buffer[16];
 sprintf(buffer, "%dx%d", width, height);
 av_dict_set(&options, "video_size", buffer, 0);
 return options;
}

AVCodecParameters* ConfigureAvCodec()
{
 AVCodecParameters* av_codec_par_out = avcodec_parameters_alloc();
 av_codec_par_out->width = width;
 av_codec_par_out->height = height;
 av_codec_par_out->bit_rate = 40000;
 av_codec_par_out->codec_id = AV_CODEC_ID_H264; //AV_CODEC_ID_MPEG4; //Try H.264 instead of MPEG4
 av_codec_par_out->codec_type = AVMEDIA_TYPE_VIDEO;
 av_codec_par_out->format = 0;
 return av_codec_par_out;
}

int GetVideoStreamIndex()
{
 int VideoStreamIndx = -1;
 avformat_find_stream_info(ifmtCtx, NULL);
 /* find the first video stream index . Also there is an API available to do the below operations */
 for (int i = 0; i < (int)ifmtCtx->nb_streams; i++) // find video stream position/index.
 {
 if (ifmtCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
 {
 VideoStreamIndx = i;
 break;
 }
 }

 if (VideoStreamIndx == -1)
 {
 }

 return VideoStreamIndx;
}

void CreateFrames(AVCodecParameters* av_codec_par_in, AVCodecParameters* av_codec_par_out)
{

 avFrame = av_frame_alloc();
 avFrame->width = avcodecContx->width;
 avFrame->height = avcodecContx->height;
 avFrame->format = av_codec_par_in->format;
 av_frame_get_buffer(avFrame, 0);

 outFrame = av_frame_alloc();
 outFrame->width = avCntxOut->width;
 outFrame->height = avCntxOut->height;
 outFrame->format = av_codec_par_out->format;
 av_frame_get_buffer(outFrame, 0);
}

bool Init()
{
 AVCodecParameters* avCodecParOut = ConfigureAvCodec();

 AVDictionary* options = ConfigureScreenCapture();

 AVInputFormat* ifmt = av_find_input_format("gdigrab");
 auto ifmtCtxLocal = avformat_alloc_context();
 if (avformat_open_input(&ifmtCtxLocal, "desktop", ifmt, &options) < 0)
 {
 return false;
 }
 ifmtCtx = ifmtCtxLocal;

 videoStreamIndx = GetVideoStreamIndex();

 AVCodecParameters* avCodecParIn = avcodec_parameters_alloc();
 avCodecParIn = ifmtCtx->streams[videoStreamIndx]->codecpar;

 AVCodec* avCodec = avcodec_find_decoder(avCodecParIn->codec_id);
 if (avCodec == NULL)
 {
 return false;
 }

 avcodecContx = avcodec_alloc_context3(avCodec);
 if (avcodec_parameters_to_context(avcodecContx, avCodecParIn) < 0)
 {
 return false;
 }

 //av_dict_set
 int value = avcodec_open2(avcodecContx, avCodec, NULL); //Initialize the AVCodecContext to use the given AVCodec.
 if (value < 0)
 {
 return false;
 }

 AVOutputFormat* ofmt = av_guess_format("h264", NULL, NULL);

 if (ofmt == NULL)
 {
 return false;
 }

 auto ofmtCtxLocal = avformat_alloc_context();
 avformat_alloc_output_context2(&ofmtCtxLocal, ofmt, NULL, NULL);
 if (ofmtCtxLocal == NULL)
 {
 return false;
 }
 ofmtCtx = ofmtCtxLocal;

 AVCodec* avCodecOut = avcodec_find_encoder(avCodecParOut->codec_id);
 if (avCodecOut == NULL)
 {
 return false;
 }

 videoStream = avformat_new_stream(ofmtCtx, avCodecOut);
 if (videoStream == NULL)
 {
 return false;
 }

 avCntxOut = avcodec_alloc_context3(avCodecOut);
 if (avCntxOut == NULL)
 {
 return false;
 }

 if (avcodec_parameters_copy(videoStream->codecpar, avCodecParOut) < 0)
 {
 return false;
 }

 if (avcodec_parameters_to_context(avCntxOut, avCodecParOut) < 0)
 {
 return false;
 }

 avCntxOut->gop_size = 30; //3; //Use I-Frame frame every 30 frames.
 avCntxOut->max_b_frames = 0;
 avCntxOut->time_base.num = 1;
 avCntxOut->time_base.den = framerate;

 //avio_open(&ofmtCtx->pb, "", AVIO_FLAG_READ_WRITE);

 if (avformat_write_header(ofmtCtx, NULL) < 0)
 {
 return false;
 }

 value = avcodec_open2(avCntxOut, avCodecOut, NULL); //Initialize the AVCodecContext to use the given AVCodec.
 if (value < 0)
 {
 return false;
 }

 if (avcodecContx->codec_id == AV_CODEC_ID_H264)
 {
 av_opt_set(avCntxOut->priv_data, "preset", "ultrafast", 0);
 av_opt_set(avCntxOut->priv_data, "zerolatency", "1", 0);
 av_opt_set(avCntxOut->priv_data, "tune", "ull", 0);
 }

 if ((ofmtCtx->oformat->flags & AVFMT_GLOBALHEADER) != 0)
 {
 avCntxOut->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 }

 CreateFrames(avCodecParIn, avCodecParOut);

 swsCtx = sws_alloc_context();
 if (sws_init_context(swsCtx, NULL, NULL) < 0)
 {
 return false;
 }

 swsCtx = sws_getContext(avcodecContx->width, avcodecContx->height, avcodecContx->pix_fmt,
 avCntxOut->width, avCntxOut->height, avCntxOut->pix_fmt, SWS_FAST_BILINEAR,
 NULL, NULL, NULL);
 if (swsCtx == NULL)
 {
 return false;
 }

 return true;
}

void Encode(AVCodecContext* enc_ctx, AVFrame* frame, AVPacket* pkt)
{
 int ret;

 /* send the frame to the encoder */
 ret = avcodec_send_frame(enc_ctx, frame);
 if (ret < 0)
 {
 return;
 }

 while (ret >= 0)
 {
 ret = avcodec_receive_packet(enc_ctx, pkt);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 return;
 if (ret < 0)
 {
 return;
 }

 fs.write((char*)pkt->data, pkt->size);
 av_packet_unref(pkt);
 }
}

void EncodeFrames(int noFrames)
{
 int frameCount = 0;
 avPkt = av_packet_alloc();
 AVPacket* outPacket = av_packet_alloc();
 encPacketCounter = 0;

 while (av_read_frame(ifmtCtx, avPkt) >= 0)
 {
 if (frameCount++ == noFrames)
 break;
 if (avPkt->stream_index != videoStreamIndx) continue;

 avcodec_send_packet(avcodecContx, avPkt);

 if (avcodec_receive_frame(avcodecContx, avFrame) >= 0) // Frame successfully decoded :)
 {
 outPacket->data = NULL; // packet data will be allocated by the encoder
 outPacket->size = 0;

 outPacket->pts = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);
 if (outPacket->dts != AV_NOPTS_VALUE)
 outPacket->dts = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);

 outPacket->dts = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);
 outPacket->duration = av_rescale_q(1, avCntxOut->time_base, videoStream->time_base);

 outFrame->pts = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);
 outFrame->pkt_duration = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);
 encPacketCounter++;

 int sts = sws_scale(swsCtx,
 avFrame->data, avFrame->linesize, 0, avFrame->height,
 outFrame->data, outFrame->linesize);

 /* make sure the frame data is writable */
 auto ret = av_frame_make_writable(outFrame);
 if (ret < 0)
 break;
 Encode(avCntxOut, outFrame, outPacket);
 }
 av_frame_unref(avFrame);
 av_packet_unref(avPkt);
 }
}

void Dispose()
{
 fs.close();

 auto ifmtCtxLocal = ifmtCtx;
 avformat_close_input(&ifmtCtx);
 avformat_free_context(ifmtCtx);
 avcodec_free_context(&avcodecContx);

}

int main(int argc, char** argv)
{
 avdevice_register_all();

 fs.open("out.h264");

 if (Init())
 {
 EncodeFrames(300);
 }
 else
 {
 std::cout << "Failed to Init \n";
 } 

 Dispose();

 return 0;
}


As far as I can tell, the setup of the encoding process is correct, since it is largely unchanged from the working example in the official documentation: https://libav.org/documentation/doxygen/master/encode__video_8c_source.html


However, there is limited documentation about desktop capture online, so I am not sure whether I have set that up correctly.
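
One way to narrow this down (a suggestion not in the original post) is to verify the gdigrab capture path independently of the C++ code, for example by driving the ffmpeg CLI from a small script; if the resulting file plays back correctly, the problem lies in the encode path rather than in gdigrab. A minimal sketch, assuming a Windows ffmpeg build with gdigrab available on PATH:

import subprocess

# Capture 5 seconds of the desktop at 30 fps with gdigrab and encode it
# with libx264; the output file name is a placeholder.
subprocess.run([
    "ffmpeg",
    "-f", "gdigrab",
    "-framerate", "30",
    "-video_size", "1920x1080",
    "-i", "desktop",
    "-t", "5",
    "-c:v", "libx264", "-preset", "ultrafast",
    "gdigrab_test.h264",
], check=True)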




-
ffmpeg command never works in a Lambda function using Node.js [closed]
4 December 2022, by Santosh swain — I am trying to implement FFmpeg video streaming functionality, such as an Instagram-style countdown. In this code, I first get records (URLs) from the S3 bucket and split them according to my needs, then build the command and execute it with exec() from child_process. I am trying to store the output in a specific folder inside the Lambda function, but it is never stored. Since I was unsure whether Lambda allows writing files locally, I am also trying a direct upload to the S3 bucket using the stdout parameter of exec()'s callback. Please help me do that. My question: does Lambda allow writing content to its local folders, and if not, what is the way to do it? I share my code below; please guide me (see also the sketch after the code).



 // dependencies
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var { exec } = require('child_process');
var path = require('path')
var AWS_ACCESS_KEY = '';
var AWS_SECRET_ACCESS_KEY = '';
var fs = require('fs')

s3 = new AWS.S3({
 accessKeyId: AWS_ACCESS_KEY,
 secretAccessKey: AWS_SECRET_ACCESS_KEY
});

exports.handler = async function (event, context) {

 var bucket_name = "sycu-game";
 var bucketName = "sycu-test";

 //CREATE OVERLAY AND BG_VALUE PATH TO GET VALUE FROM S3
 const bgValue = (event.Records[0].bg_value).split('/');
 const overlayImage = (event.Records[0].overlay_image_url).split('/');


 var s3_bg_value = bgValue[3] + "/" + bgValue[4];
 var s3_overlay_image = overlayImage[4] + "/" + overlayImage[5] + "/" + overlayImage[6];
 const signedUrlExpireSeconds = 60 * 5;


 //RETREIVE BG_VALUE FROM S3 AND CREATE URL FOR FFMPEG INPUT VALUE
 var bg_value_url = s3.getSignedUrl('getObject', {
 Bucket: bucket_name,
 Key: s3_bg_value,
 Expires: signedUrlExpireSeconds
 });
 bg_value_url = bg_value_url.split("?");
 bg_value_url = bg_value_url[0];


 //RETREIVE OVERLAY IMAGE FROM S3 AND CREATE URL FOR FFMPEG INPUT VALUE 
 var overlay_image_url = s3.getSignedUrl('getObject', {
 Bucket: bucket_name,
 Key: s3_overlay_image,
 Expires: signedUrlExpireSeconds
 });
 overlay_image_url = overlay_image_url.split("?");
 overlay_image_url = overlay_image_url[0];


 //MANUAL ASSIGN VARIABLE FOR FFMPEG COMMAND 
 var command,
 ExtraTimerSec = event.Records[0].timer_seconds + 5,
 TimerSec = event.Records[0].timer_seconds + 1,
 BackgroundWidth = 1080,
 BackgroundHeight = 1920,
 videoPath = (__dirname + '/tmp/' + event.Records[0].name);
 console.log("path", videoPath)
 //TEMP DIRECTORY

 var videoPath = '/media/volume-d/generatedCountdownS3/tmp/' + event.Records[0].name
 var tmpFile = fs.createWriteStream(videoPath)
 //FFMPEG COMMAND 
 if (event.Records[0].bg_type == 2) {
 if (event.Records[0].is_rotate) {
 command = ' -stream_loop -1 -t ' + ExtraTimerSec + ' -i ' + bg_value_url + ' -i ' + overlay_image_url + ' -filter_complex "color=color=0x000000@0.0:s= ' + event.Records[0].resized_box_width + 'x' + event.Records[0].resized_box_height + ',drawtext=fontcolor=' + event.Records[0].time_text_color + ':fontsize=' + event.Records[0].time_text_size + ':x=' + event.Records[0].minute_x + ':y=' + event.Records[0].minute_y + ':text=\'%{eif\\:trunc(mod(((' + TimerSec + '-if(between(t, 0, 1),1,if(gte(t,' + TimerSec + '),' + TimerSec + ',t)))/60),60))\\:d\\:2}\',drawtext=fontcolor=' + event.Records[0].time_text_color + ':fontsize=' + event.Records[0].time_text_size + ':x=' + event.Records[0].second_x + ':y=' + event.Records[0].second_y + ':text=\'%{eif\\:trunc(mod(' + TimerSec + '-if(between(t, 0, 1),1,if(gte(t,' + TimerSec + '),' + TimerSec + ',t))\,60))\\:d\\:2}\'[txt]; [txt] rotate=' + event.Records[0].box_angle + '*PI/180:fillcolor=#00000000 [rotated];[0] scale=w=' + BackgroundWidth + ':h=' + BackgroundHeight + '[t];[1] scale=w=' + BackgroundWidth + ':h=' + BackgroundHeight + '[ot];[t][ot] overlay = :x=0 :y=0 [m1];[m1][rotated]overlay = :x=' + event.Records[0].flat_box_coordinate_x + ' :y=' + event.Records[0].flat_box_coordinate_x + ' [m2]" -map "[m2]" -pix_fmt yuv420p -t ' +
 ExtraTimerSec + ' -r 24 -c:a copy ' + videoPath + "";
 }
 else {
 command = ' -stream_loop -1 -t ' + ExtraTimerSec + ' -i ' + bg_value_url + ' -i ' + overlay_image_url + ' -filter_complex "color=color=0x000000@0.0:s= ' + event.Records[0].resized_box_width + 'x' + event.Records[0].resized_box_height + ',drawtext=fontcolor=' + event.Records[0].time_text_color + ':fontsize=' + event.Records[0].time_text_size + ':x=' + event.Records[0].minute_x + ':y=' + event.Records[0].minute_y + ':text=\'%{eif\\:trunc(mod(((' + TimerSec + '-if(between(t, 0, 1),1,if(gte(t,' + TimerSec + '),' + TimerSec + ',t)))/60),60))\\:d\\:2}\',drawtext=fontcolor=' + event.Records[0].time_text_color + ':fontsize=' + event.Records[0].time_text_size + ':x=' + event.Records[0].second_x + ':y=' + event.Records[0].second_y + ':text=\'%{eif\\:trunc(mod(' + TimerSec + '-if(between(t, 0, 1),1,if(gte(t,' + TimerSec + '),' + TimerSec + ',t))\,60))\\:d\\:2}\'[txt]; [txt] rotate=' + event.Records[0].box_angle + '*PI/180:fillcolor=#00000000 [rotated];[0] scale=w=' + BackgroundWidth + ':h=' + BackgroundHeight + '[t];[1] scale=w=' + BackgroundWidth + ':h=' + BackgroundHeight + '[ot];[t][ot] overlay = :x=0 :y=0 [m1];[m1][rotated]overlay = :x=' + event.Records[0].flat_box_coordinate_x + ' :y=' + event.Records[0].flat_box_coordinate_x + ' [m2]" -map "[m2]" -pix_fmt yuv420p -t ' +
 ExtraTimerSec + ' -r 24 -c:a copy ' + videoPath + "";
 }
 }
 var final_command = '/usr/bin/ffmpeg' + command;


 //COMMAND EXECUTE HERE

 await exec(final_command, function (err, stdout, stderr) {
 console.log("data is here")
 console.log('err:', err);
 console.log('stdout:', stdout);
 console.log('stderr:', stderr);
 const params = {
 Bucket: bucketName,
 Key: "countdown/output.mp4",
 Body: stdout,
 }
 s3.upload(params).promise().then(data => {
 console.log("data is here -->", data)
 });
 });
 var tmpFile = fs.createReadStream(videoPath)
 console.log('temp file data:', tmpFile.toString())
};