
On other sites (8494)
-
moviepy VideoFileClip IndexError: list index out of range, OSError: MoviePy error: failed to read the duration of file
21 May 2022, by Mohamed Medhat
We record videos in the browser with the encoding 'video/webm; codecs="vp8, opus"', then upload these videos to an AWS S3 bucket.
Our ML models work on these videos, and one of them needs to extract the audio and process it.
Here is a code snippet for extracting the audio:


import speech_recognition as sr
import moviepy.editor as me
from denoise2 import denoise
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity
import math

model_name = 'bert-base-nli-mean-tokens'
model = SentenceTransformer(model_name)


class recomm:
    y = 0.0

    def __init__(self, path, keywords):
        video_clip = me.VideoFileClip(r"{}".format(path))
        path2 = "y2.wav"
        video_clip.audio.write_audiofile(r"{}".format(path2), nbytes=2)
        recognizer = sr.Recognizer()
        """a = AudioSegment.from_wav(path2)
        a = a + 5
        a.export(path2, "wav")"""
        audio_clip = sr.AudioFile("{}".format(path2))
        with audio_clip as source:
            audio_file = recognizer.record(source)
        sent = []
        result = ""
        try:
            result = recognizer.recognize_google(audio_file)
        except sr.UnknownValueError:
            print("Can not process audio ")
        if not result:
            self.y = 0
        else:
            print(result)
            sent.append(result)
            sent = sent + keywords
            sent_vec3 = model.encode(sent)
            x = cosine_similarity(
                [sent_vec3[0]],
                sent_vec3[1:]
            )
            for i in range(len(x)):
                self.y = self.y + x[0][i]
            self.y = (self.y / (len(sent) - 1)) * 1000.0

    def res(self):
        if self.y < 0:
            self.y = 0
        return self.y



Here is the traceback; the error occurred at this line:


video_clip = me.VideoFileClip(r"{}".format(path))



Traceback (most recent call last):
  File "/home/medo/Dev/Smart-remotely-interviewing-system/backend/Process-interview/test/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_reader.py", line 286, in ffmpeg_parse_infos
    match = re.findall("([0-9][0-9]:[0-9][0-9]:[0-9][0-9].[0-9][0-9])", line)[0]
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 90, in <module>
    main()
  File "main.py", line 85, in main
    interviews_channel.start_consuming()
  File "/home/medo/Dev/Smart-remotely-interviewing-system/backend/Process-interview/test/lib/python3.8/site-packages/pika/adapters/blocking_connection.py", line 1865, in start_consuming
    self._process_data_events(time_limit=None)
  File "/home/medo/Dev/Smart-remotely-interviewing-system/backend/Process-interview/test/lib/python3.8/site-packages/pika/adapters/blocking_connection.py", line 2026, in _process_data_events
    self.connection.process_data_events(time_limit=time_limit)
  File "/home/medo/Dev/Smart-remotely-interviewing-system/backend/Process-interview/test/lib/python3.8/site-packages/pika/adapters/blocking_connection.py", line 833, in process_data_events
    self._dispatch_channel_events()
  File "/home/medo/Dev/Smart-remotely-interviewing-system/backend/Process-interview/test/lib/python3.8/site-packages/pika/adapters/blocking_connection.py", line 567, in _dispatch_channel_events
    impl_channel._get_cookie()._dispatch_events()
  File "/home/medo/Dev/Smart-remotely-interviewing-system/backend/Process-interview/test/lib/python3.8/site-packages/pika/adapters/blocking_connection.py", line 1492, in _dispatch_events
    consumer_info.on_message_callback(self, evt.method,
  File "main.py", line 79, in callback
    processing(json.loads(body))
  File "main.py", line 34, in processing
    r = recomm(path, keywords)
  File "/home/medo/Dev/Smart-remotely-interviewing-system/backend/Process-interview/recommendation.py", line 17, in __init__
    video_clip = me.VideoFileClip(r"{}".format(path))
  File "/home/medo/Dev/Smart-remotely-interviewing-system/backend/Process-interview/test/lib/python3.8/site-packages/moviepy/video/io/VideoFileClip.py", line 88, in __init__
    self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
  File "/home/medo/Dev/Smart-remotely-interviewing-system/backend/Process-interview/test/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__
    infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
  File "/home/medo/Dev/Smart-remotely-interviewing-system/backend/Process-interview/test/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_reader.py", line 289, in ffmpeg_parse_infos
    raise IOError(("MoviePy error: failed to read the duration of file %s.\n"
OSError: MoviePy error: failed to read the duration of file 74b74292-3642-486a-8319-255bb7e7da5a-1647363285285.webm.
Here are the file infos returned by ffmpeg:

ffmpeg version 4.2.2-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 8 (Debian 8.3.0-6)
configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
Input #0, matroska,webm, from '74b74292-3642-486a-8319-255bb7e7da5a-1647363285285.webm':
Metadata:
encoder : Chrome
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
Stream #0:1(eng): Video: vp8, yuv420p(progressive), 640x480, SAR 1:1 DAR 4:3, 29.42 fps, 29.42 tbr, 1k tbn, 1k tbc (default)
Metadata:
alpha_mode : 1
At least one output file must be specified
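The ffmpeg dump shows the root cause: Chrome-recorded WebM files carry `Duration: N/A`, so the duration regex visible in the traceback finds no match and `[0]` raises the `IndexError`. A minimal standalone sketch of that failure mode (this is not MoviePy's actual code path, just the same regex applied tolerantly):

```python
import re

# Same timestamp pattern moviepy's ffmpeg_reader greps for in ffmpeg's output.
DURATION_RE = re.compile(r"(\d\d:\d\d:\d\d\.\d\d)")

def parse_duration(ffmpeg_line):
    """Return the duration token from an ffmpeg info line, or None when
    ffmpeg reports 'Duration: N/A' (as it does for Chrome-recorded WebM)."""
    matches = DURATION_RE.findall(ffmpeg_line)
    return matches[0] if matches else None

print(parse_duration("  Duration: 00:04:35.12, start: 0.000000"))          # 00:04:35.12
print(parse_duration("  Duration: N/A, start: 0.000000, bitrate: N/A"))    # None
```

A commonly suggested workaround is to rewrite the container before handing the file to MoviePy, e.g. remuxing with `ffmpeg -i in.webm -c copy out.webm`, so that a proper duration gets written into the header; whether that suffices for your Chrome recordings is worth verifying on a sample file.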



-
Encoding real, source duration of a timelapse into MP4 container using FFMPEG (GoPro) [closed]
13 August 2024, by Marek Towarek
Footage recorded with GoPro TimeLapse/TimeWarp reports the total, real-time duration of the recorded data, while the video stream itself is shortened by the timelapse interval.


General
Complete name : E:\Video\GoPro\GoPro\GH010656.MP4
Format : MPEG-4
Format profile : Base Media / Version 1
Codec ID : mp41 (mp41)
File size : 1.94 GiB
Duration : 22 min 55 s
Overall bit rate mode : Variable
Overall bit rate : 12.1 Mb/s

Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L5
Format settings : CABAC / 2 Ref Frames
Format settings, CABAC : Yes
Format settings, Reference : 2 frames
Format settings, GOP : M=1, N=15
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 4 min 35 s
Bit rate mode : Variable
Bit rate : 60.0 Mb/s
Width : 1 920 pixels
Height : 1 440 pixels
Display aspect ratio : 4:3
Rotation : 180°
Frame rate mode : Constant
Frame rate : 29.970 (30000/1001) FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.724
Stream size : 1.92 GiB (99%)
Title : GoPro AVC 
Language : English
Color range : Limited
colour_range_Original : Full
Color primaries : BT.709
Transfer characteristics : BT.709
Matrix coefficients : BT.709
Codec configuration box : avcC

Other #1
ID : 2
Type : Time code
Format : QuickTime TC
Duration : 4 min 35 s
Bit rate mode : Constant
Frame rate : 29.970 (30000/1001) FPS
Title : GoPro TCD 
Language : English

Other #2
Type : meta
Duration : 22 min 55 s
Source duration : 4 min 35 s
Bit rate mode : Variable
Stream size : 15.0 MiB
Source stream size : 15.0 MiB



This information could be ignored.

But it becomes quite important for the correctness of the GPS data stored in Stream #2.

Unfortunately, none of the settings I have tried for FFmpeg preserve the duration of Stream #2, and the output ends up looking like this:


General
Complete name : C:\Video_Encode\GoPro\GH010656.mp4
Format : MPEG-4
Format profile : Base Media
Codec ID : isom (isom/iso2/mp41)
File size : 717 MiB
Duration : 4 min 35 s
Overall bit rate : 21.9 Mb/s
Encoded date : UTC 2026-03-29 11:28:23
Tagged date : UTC 2026-03-29 11:28:23
Writing application : Lavf61.5.101

Video
ID : 1
Format : HEVC
Format/Info : High Efficiency Video Coding
Format profile : Main@L5@Main
Codec ID : hvc1
Codec ID/Info : High Efficiency Video Coding
Duration : 4 min 35 s
Bit rate : 21.4 Mb/s
Width : 1 920 pixels
Height : 1 440 pixels
Display aspect ratio : 4:3
Frame rate mode : Constant
Frame rate : 29.970 (30000/1001) FPS
Color space : YUV
Chroma subsampling : 4:2:0 (Type 0)
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.258
Stream size : 702 MiB (98%)
Title : GoPro AVC 
Writing library : x265 3.6+35-dd594f59d:[Windows][GCC 14.1.0][64 bit] 8bit+10bit+12bit
Language : English
Encoded date : UTC 2026-03-29 11:28:23
Tagged date : UTC 2026-03-29 11:28:23
Color range : Full
Color primaries : BT.709
Transfer characteristics : BT.709
Matrix coefficients : BT.709
Codec configuration box : hvcC

Other #1
ID : 2
Type : Time code
Format : QuickTime TC
Duration : 4 min 35 s
Frame rate : 29.970 (30000/1001) FPS
Time code of first frame : 17:55:35:02
Time code of last frame : 18:00:09:28
Time code, stripped : Yes
Title : GoPro TCD 
Language : English
Default : Yes
Alternate group : 2
Encoded date : UTC 2026-03-29 11:28:23
Tagged date : UTC 2026-03-29 11:28:23
mdhd_Duration : 275175

Other #2
Type : meta
Duration : 4 min 35 s
Bit rate mode : Variable



Any ideas on how to preserve that real-time duration indicator?

Here is the FFMPEG binary I use to get the TMCD & GMPD data to copy : GitHub Link
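For reference, the two durations in the original MediaInfo dump are consistent with a fixed timelapse speed factor, which is exactly the information lost in the re-encoded output; a quick sketch of that arithmetic:

```python
def to_seconds(minutes, seconds):
    # Convert a MediaInfo "X min Y s" duration to seconds.
    return minutes * 60 + seconds

real_time = to_seconds(22, 55)    # "Other #2" meta duration: 22 min 55 s
stream_time = to_seconds(4, 35)   # video stream duration: 4 min 35 s

speed_factor = real_time / stream_time
print(speed_factor)  # 5.0: the GPS/meta samples span 5x the playable footage
```

So the gpmd samples cover five times the playable footage; once the output file reports 4 min 35 s for the meta track too, any consumer that maps GPS samples onto the timeline by duration will compress them by that factor.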

-
Video created using H263 codec and ffmpeg does not play on android device [closed]
21 March 2013, by susheel tickoo
I have created a video using FFmpeg and the H.263 codec, but when I play the video on an Android device the player is unable to play it. I have used both the .mp4 and .3gp extensions.
void generate(JNIEnv *pEnv, jobject pObj, jobjectArray stringArray, int famerate, int width, int height, jstring videoFilename)
{
    AVCodec *codec;
    AVCodecContext *c = NULL;
    //int framesnum=5;
    int i, looper, out_size, size, x, y, encodecbuffsize, j;
    __android_log_write(ANDROID_LOG_INFO, "record", "************into generate************");
    int imagecount = (*pEnv)->GetArrayLength(pEnv, stringArray);
    __android_log_write(ANDROID_LOG_INFO, "record", "************got magecount************");
    int retval = -10;
    FILE *f;
    AVFrame *picture, *encoded_avframe;
    uint8_t *encodedbuffer;
    jbyte *raw_record;
    char logdatadata[100];
    int returnvalue = -1, numBytes = -1;
    const char *gVideoFileName = (char *)(*pEnv)->GetStringUTFChars(pEnv, videoFilename, NULL);
    __android_log_write(ANDROID_LOG_INFO, "record", "************got video file name************");

    /* find the mpeg1 video encoder */
    codec = avcodec_find_encoder(CODEC_ID_H264);
    if (!codec) {
        __android_log_write(ANDROID_LOG_INFO, "record", "codec not found");
        exit(1);
    }
    c = avcodec_alloc_context();
    /*c->bit_rate = 400000;
    c->width = width;
    c->height = height;
    c->time_base= (AVRational){1,famerate};
    c->gop_size = 12; // emit one intra frame every ten frames
    c->max_b_frames=0;
    c->pix_fmt = PIX_FMT_YUV420P;
    c->codec_type = AVMEDIA_TYPE_VIDEO;
    c->codec_id = CODEC_ID_H263;*/
    c->bit_rate = 400000;
    // resolution must be a multiple of two
    c->width = 176;
    c->height = 144;
    c->pix_fmt = PIX_FMT_YUV420P;
    c->qcompress = 0.0;
    c->qblur = 0.0;
    c->gop_size = 20; //or 1
    c->sub_id = 1;
    c->workaround_bugs = FF_BUG_AUTODETECT;
    //pFFmpeg->c->time_base = (AVRational){1,25};
    c->time_base.num = 1;
    c->time_base.den = famerate;
    c->max_b_frames = 0; // no B-frames in H.263
    // c->opaque = opaque;
    c->dct_algo = FF_DCT_AUTO;
    c->idct_algo = FF_IDCT_AUTO;
    //lc->rtp_mode = 0;
    c->rtp_payload_size = 1000;
    c->rtp_callback = 0; // ffmpeg_rtp_callback;
    c->flags |= CODEC_FLAG_QSCALE;
    c->mb_decision = FF_MB_DECISION_RD;
    c->thread_count = 1;

#define DEFAULT_RATE (16 * 8 * 1024)
    c->rc_min_rate = DEFAULT_RATE;
    c->rc_max_rate = DEFAULT_RATE;
    c->rc_buffer_size = DEFAULT_RATE * 64;
    c->bit_rate = DEFAULT_RATE;

    sprintf(logdatadata, "------width from c ---- = %d", width);
    __android_log_write(ANDROID_LOG_INFO, "record", logdatadata);
    sprintf(logdatadata, "------height from c ---- = %d", height);
    __android_log_write(ANDROID_LOG_INFO, "record", logdatadata);
    __android_log_write(ANDROID_LOG_INFO, "record", "************Found codec and now opening it************");

    /* open it */
    retval = avcodec_open(c, codec);
    if (retval < 0)
    {
        sprintf(logdatadata, "------avcodec_open ---- retval = %d", retval);
        __android_log_write(ANDROID_LOG_INFO, "record", logdatadata);
        __android_log_write(ANDROID_LOG_INFO, "record", "could not open codec");
        exit(1);
    }
    __android_log_write(ANDROID_LOG_INFO, "record", "statement 5");

    f = fopen(gVideoFileName, "ab");
    if (!f) {
        __android_log_write(ANDROID_LOG_INFO, "record", "could not open video file");
        exit(1);
    }

    __android_log_write(ANDROID_LOG_INFO, "record", "***************Allocating encodedbuffer*********\n");
    encodecbuffsize = avpicture_get_size(PIX_FMT_RGB24, c->width, c->height);
    sprintf(logdatadata, "encodecbuffsize = %d", encodecbuffsize);
    __android_log_write(ANDROID_LOG_INFO, "record", logdatadata);
    encodedbuffer = malloc(encodecbuffsize);

    jclass cls = (*pEnv)->FindClass(pEnv, "com/canvasm/mediclinic/VideoGenerator");
    jmethodID mid = (*pEnv)->GetMethodID(pEnv, cls, "videoProgress", "(Ljava/lang/String;)Ljava/lang/String;");
    jmethodID mid_delete = (*pEnv)->GetMethodID(pEnv, cls, "deleteTempFile", "(Ljava/lang/String;)Ljava/lang/String;");
    if (mid == 0)
        return;
    __android_log_write(ANDROID_LOG_INFO, "native", "got method id");

    for (i = 0; i <= imagecount; i++) {
        jboolean isCp;
        int progress = 0;
        float temp;
        jstring string;
        if (i == imagecount)
            string = (jstring) (*pEnv)->GetObjectArrayElement(pEnv, stringArray, imagecount - 1);
        else
            string = (jstring) (*pEnv)->GetObjectArrayElement(pEnv, stringArray, i);
        const char *rawString = (*pEnv)->GetStringUTFChars(pEnv, string, &isCp);
        __android_log_write(ANDROID_LOG_INFO, "record", rawString);

        picture = OpenImage(rawString, width, height);
        //WriteJPEG(c,picture,i);

        // encode video
        memset(encodedbuffer, 0, encodecbuffsize);
        //do{
        for (looper = 0; looper < 5; looper++)
        {
            memset(encodedbuffer, 0, encodecbuffsize);
            out_size = avcodec_encode_video(c, encodedbuffer, encodecbuffsize, picture);
            sprintf(logdatadata, "avcodec_encode_video ----- out_size = %d \n", out_size);
            __android_log_write(ANDROID_LOG_INFO, "record", logdatadata);
            if (out_size > 0)
                break;
        }
        __android_log_write(ANDROID_LOG_INFO, "record", "*************Start looping for same image*******");
        returnvalue = fwrite(encodedbuffer, 1, out_size, f);
        sprintf(logdatadata, "fwrite ----- returnvalue = %d \n", returnvalue);
        __android_log_write(ANDROID_LOG_INFO, "record", logdatadata);
        __android_log_write(ANDROID_LOG_INFO, "record", "*************End looping for same image*******");

        // publishing progress
        progress = ((i * 100) / (imagecount + 1)) + 15; //+1 is for last frame duplicated entry
        if (progress < 20)
            progress = 20;
        if (progress >= 95)
            progress = 95;
        sprintf(logdatadata, "%d", progress);
        jstring jstrBuf = (*pEnv)->NewStringUTF(pEnv, logdatadata);
        (*pEnv)->CallObjectMethod(pEnv, pObj, mid, jstrBuf);
        if (i > 0)
            (*pEnv)->CallObjectMethod(pEnv, pObj, mid_delete, string);
    }

    /* get the delayed frames */
    for (; out_size; i++) {
        fflush(stdout);
        out_size = avcodec_encode_video(c, encodedbuffer, encodecbuffsize, NULL);
        fwrite(encodedbuffer, 20, out_size, f);
    }

    /* add sequence end code to have a real mpeg file */
    encodedbuffer[0] = 0x00;
    encodedbuffer[1] = 0x00;
    encodedbuffer[2] = 0x01;
    encodedbuffer[3] = 0xb7;
    fwrite(encodedbuffer, 1, 4, f);
    fclose(f);
    free(encodedbuffer);

    avcodec_close(c);
    av_free(c);
    __android_log_write(ANDROID_LOG_INFO, "record", "Video created ");

    // last update to 100%
    sprintf(logdatadata, "%d", 100);
    jstring jstrBuf = (*pEnv)->NewStringUTF(pEnv, logdatadata);
    (*pEnv)->CallObjectMethod(pEnv, pObj, mid, jstrBuf);
}

AVFrame* OpenImage(const char* imageFileName, int w, int h)
{
    AVFrame *pFrame;
    AVCodec *pCodec;
    AVFormatContext *pFormatCtx;
    AVCodecContext *pCodecCtx;
    uint8_t *buffer;
    int frameFinished, framesNumber = 0, retval = -1, numBytes = 0;
    AVPacket packet;
    char logdatadata[100];
    //__android_log_write(ANDROID_LOG_INFO, "OpenImage",imageFileName);

    if (av_open_input_file(&pFormatCtx, imageFileName, NULL, 0, NULL) != 0)
    //if(avformat_open_input(&pFormatCtx,imageFileName,NULL,NULL)!=0)
    {
        __android_log_write(ANDROID_LOG_INFO, "record", "Can't open image file ");
        return NULL;
    }
    pCodecCtx = pFormatCtx->streams[0]->codec;
    pCodecCtx->width = w;
    pCodecCtx->height = h;
    pCodecCtx->pix_fmt = PIX_FMT_YUV420P;

    // Find the decoder for the video stream
    pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    if (!pCodec)
    {
        __android_log_write(ANDROID_LOG_INFO, "record", "Can't open image file ");
        return NULL;
    }
    pFrame = avcodec_alloc_frame();
    numBytes = avpicture_get_size(PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);
    buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));
    sprintf(logdatadata, "numBytes = %d", numBytes);
    __android_log_write(ANDROID_LOG_INFO, "record", logdatadata);
    retval = avpicture_fill((AVPicture *) pFrame, buffer, PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);

    // Open codec
    if (avcodec_open(pCodecCtx, pCodec) < 0)
    {
        __android_log_write(ANDROID_LOG_INFO, "record", "Could not open codec");
        return NULL;
    }
    if (!pFrame)
    {
        __android_log_write(ANDROID_LOG_INFO, "record", "Can't allocate memory for AVFrame\n");
        return NULL;
    }

    int readval = -5;
    // Note: parentheses around the assignment, otherwise readval gets the
    // result of the >= comparison instead of av_read_frame's return value.
    while ((readval = av_read_frame(pFormatCtx, &packet)) >= 0)
    {
        if (packet.stream_index != 0)
            continue;
        int ret = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        sprintf(logdatadata, "avcodec_decode_video2 ret = %d", ret);
        __android_log_write(ANDROID_LOG_INFO, "record", logdatadata);
        if (ret > 0)
        {
            __android_log_write(ANDROID_LOG_INFO, "record", "Frame is decoded\n");
            pFrame->quality = 4;
            av_free_packet(&packet);
            av_close_input_file(pFormatCtx);
            return pFrame;
        }
        else
        {
            __android_log_write(ANDROID_LOG_INFO, "record", "error while decoding frame \n");
        }
    }
    sprintf(logdatadata, "readval = %d", readval);
    __android_log_write(ANDROID_LOG_INFO, "record", logdatadata);
}

The generate method takes a list of strings (paths to images) and converts them to a video, and the OpenImage method is responsible for converting a single image to an AVFrame.
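A likely reason the file will not play: the code writes the raw encoded bitstream straight to disk with fwrite, without muxing it into an MP4/3GP container, and renaming a raw elementary stream to .mp4 or .3gp does not make it a container file (note also that the code requests CODEC_ID_H264 even though the question says H.263). A quick sanity check on the output file's first bytes illustrates the difference; the ftyp box at offset 4 is standard ISO-BMFF, while the sample byte strings below are only illustrative:

```python
def looks_like_mp4(data: bytes) -> bool:
    # An MP4/3GP file begins with a box whose 4-byte type at offset 4 is
    # b"ftyp"; a raw H.263/H.264 elementary stream begins with a start code.
    return len(data) >= 8 and data[4:8] == b"ftyp"

raw_stream = bytes([0x00, 0x00, 0x00, 0x01, 0x67])   # Annex B start code, no container
mp4_header = b"\x00\x00\x00\x18ftyp3gp4"             # typical 3GP file header

print(looks_like_mp4(raw_stream))  # False: a player will reject it as .mp4/.3gp
print(looks_like_mp4(mp4_header))  # True
```

If the output file fails this check, the fix is on the writing side: use libavformat's muxer (or a tool like MP4Box/ffmpeg) to wrap the encoded packets in a real container rather than appending them to a file handle.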