
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (101)
-
Websites made with MediaSPIP
2 May 2011, by
This page lists some websites based on MediaSPIP.
-
Use it, talk about it, critique it
10 April 2011
The first thing to do is to talk about it, either directly with the people involved in its development, or with those around you, to convince new people to use it.
The larger the community, the faster the project will evolve ...
A mailing list is available for any exchange between users. -
Creating farms of unique websites
13 April 2011, by
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (13233)
-
How can I create video thumbnails in different folders in ffmpeg?
1 January 2020, by termnlencodes
I'm trying to generate video thumbnails for all my video collection using ffmpeg. Downside is, I don't know how to create them in their respective folders.
Example: the videos are in the following folders:
C:/Media/TV Show/<showname>/<seasonnum>/
C:/Media/Movies/<moviename>/
I want to generate the thumbnails under the <seasonnum> and <moviename> folders.
Here's the script I'm using right now, and I don't know what to add to it.
Hope somebody can help me.
Edit: Whenever I create the thumbnails there's a ".1" after the file extension. How can I remove it?
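A minimal sketch of one way to place each thumbnail next to its video (the asker's script isn't shown, so the media roots, the 10-second seek point and the 320-pixel width are illustrative assumptions): walk the collection with pathlib and give ffmpeg an explicit single-frame output name in the same directory. With an explicit output name and -frames:v 1, the output keeps exactly that name, which may also avoid the ".1" suffix, though without the original script that part is only a guess.

import subprocess
from pathlib import Path

# Hypothetical media roots; adjust to the real collection layout.
MEDIA_ROOTS = [Path("C:/Media/TV Show"), Path("C:/Media/Movies")]

for root in MEDIA_ROOTS:
    for video in root.rglob("*.mp4"):
        thumb = video.with_suffix(".jpg")   # same folder, same base name
        if thumb.exists():
            continue
        # Seek 10 s in, scale to 320 px wide, and write exactly one frame
        # to the explicit file name built above.
        subprocess.run(
            ["ffmpeg", "-y", "-ss", "00:00:10", "-i", str(video),
             "-frames:v", "1", "-vf", "scale=320:-1", str(thumb)],
            check=True)
-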
Is there a faster way to extract various video clips from different sources and concatenate them, using moviepy?
27 August 2019, by user2627082
So I've made a small script using moviepy to help me with my video editing process. It basically scans a bunch of subtitle files for specified words and the time ranges in which they occur. With that it extracts those time ranges from the video files corresponding to the subtitle files. The extracted mp4 clips are all concatenated and written into one big composition.
So it's all running fine, but it's very slow. Can someone tell me if it's possible to make it faster? Am I doing something wrong? Or is it normal for the process to be slow?
import os, re
from pathlib import Path
from moviepy.editor import *
import datetime


def search(words_list, sub_list):
    for x in range(len(words_list)):
        print(words_list[x])
        clips = []
        clips.clear()
        for y in range(len(sub_list)):
            print(sub_list[y])
            stamps = []
            stamps.clear()
            # Collect the timestamp line of every subtitle block that mentions the word.
            with open(sub_list[y]) as f:
                paragraphs = (paragraph.split("\n") for paragraph in
                              f.read().split("\n\n"))
                for paragraph in paragraphs:
                    if any(words_list[x] in line.lower() for line in paragraph):
                        stamps.append(f"[{paragraph[1].strip()}]")
            # The video sits next to its .srt file, with the same base name.
            videopath = str(sub_list[y]).replace("srt", "mp4").replace(":\\", ":\\\\")
            my_clip = VideoFileClip(videopath)
            for stamp in stamps:
                print(stamp)
                pre_stamp = stamp[1:9]     # start time, HH:MM:SS
                post_stamp = stamp[18:26]  # end time, HH:MM:SS
                format = '%H:%M:%S'
                # Pad the cut by 4 seconds on each side.
                pre_stamp = str(datetime.datetime.strptime(pre_stamp, format)
                                - datetime.timedelta(seconds=4))[11:19]
                post_stamp = str(datetime.datetime.strptime(post_stamp, format)
                                 + datetime.timedelta(seconds=4))[11:19]
                trim_clip = my_clip.subclip(pre_stamp, post_stamp)
                clips.append(trim_clip)
        # Join every clip found for this word and render one compilation file.
        conc = concatenate_videoclips(clips)
        print(clips)
        conc.write_videofile("C:\\Users\Sri\PycharmProjects\subscrape\movies\\" + words_list[x] + "-comp.mp4")


words = ["does what", "spins", "size"]
subs = list(Path('C:\\Users\Sri\PycharmProjects\subscrape\movies').glob('**/*.srt'))
search(words, subs)
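One thing worth noting about the script above is that write_videofile re-encodes every frame of the concatenated result, which is what dominates the running time. Below is a hedged sketch (not the asker's code; file names and timestamps are illustrative) of a common faster alternative: cut each clip with ffmpeg stream copy and join the pieces with the concat demuxer, so nothing is re-encoded. The trade-offs are that stream copy can only cut on keyframes (clip edges may shift slightly) and that the concat demuxer assumes all parts share codec parameters; clips from very different sources would still need one final re-encode.

import os
import subprocess
import tempfile

def cut_clip(src, start, duration, dst):
    # Input-side -ss seeks quickly; -c copy avoids re-encoding entirely.
    subprocess.run(["ffmpeg", "-y", "-ss", str(start), "-i", src,
                    "-t", str(duration), "-c", "copy", dst], check=True)

def concat_clips(clip_paths, dst):
    # The concat demuxer takes a text file listing the parts, one per line.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for p in clip_paths:
            f.write("file '{}'\n".format(os.path.abspath(p)))
        list_file = f.name
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", list_file, "-c", "copy", dst], check=True)
    os.unlink(list_file)

# Illustrative usage: two 8-second clips joined into one compilation.
cut_clip("movie.mp4", "00:01:02", 8, "part1.mp4")
cut_clip("movie.mp4", "00:05:10", 8, "part2.mp4")
concat_clips(["part1.mp4", "part2.mp4"], "word-comp.mp4")
-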
Watson NarrowBand Speech to Text not accepting ogg file
19 January 2017, by Bob Dill
NodeJS app using ffmpeg to create ogg files from mp3 & mp4. If the source file is broadband, Watson Speech to Text accepts the file with no issues. If the source file is narrowband, Watson Speech to Text fails to read the ogg file. I've tested the output from ffmpeg and the narrowband ogg file has the same audio content (e.g. I can listen to it and hear the same people) as the mp3 file. Yes, in advance, I am changing the call to Watson to correctly specify the model and content_type. Code follows:
exports.createTranscript = function(req, res, next) {
  var _name = getNameBase(req.body.movie);
  var _type = getType(req.body.movie);
  var _voice = (_type == "mp4") ? "en-US_BroadbandModel" : "en-US_NarrowbandModel";
  var _contentType = (_type == "mp4") ? "audio/ogg" : "audio/basic";
  var _audio = process.cwd()+"/HTML/movies/"+_name+'ogg';
  var transcriptFile = process.cwd()+"/HTML/movies/"+_name+'json';

  speech_to_text.createSession({model: _voice}, function(error, session) {
    if (error) { console.log('error:', error); }
    else {
      var params = {
        content_type: _contentType,
        continuous: true,
        audio: fs.createReadStream(_audio),
        session_id: session.session_id
      };
      speech_to_text.recognize(params, function(error, transcript) {
        if (error) { console.log('error:', error); }
        else {
          fs.writeFile(transcriptFile, JSON.stringify(transcript), function(err) {
            if (err) { console.log(err); }
          });
          res.send(transcript);
        }
      });
    }
  });
}

_type is either mp3 (narrowband from phone recording) or mp4 (broadband). model: _voice has been traced to ensure the correct setting. content_type: _contentType has been traced to ensure the correct setting. Any ogg file submitted to Speech to Text with narrowband settings fails with
Error: No speech detected for 30s.
Tested with both real narrowband files and by asking Watson to read a broadband ogg file (created from mp4) as narrowband. Same error message. What am I missing?
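One detail that stands out in the code above is that the narrowband branch labels the Ogg file as "audio/basic", which denotes 8 kHz mu-law rather than Ogg; keeping an Ogg content type for both branches and switching only the model is the usual suggestion. Below is a hedged sketch (in Python driving ffmpeg, not the asker's Node pipeline; file names are illustrative) of producing a mono, 8 kHz Ogg/Opus file suited to the narrowband model.

import subprocess

def to_narrowband_ogg(src_mp3, dst_ogg):
    # Downmix to mono and resample to 8 kHz; libopus accepts 8 kHz input.
    # The resulting file would then be sent with an Ogg content type
    # (e.g. "audio/ogg;codecs=opus") and the en-US_NarrowbandModel.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src_mp3, "-ac", "1", "-ar", "8000",
         "-c:a", "libopus", dst_ogg],
        check=True)

to_narrowband_ogg("phone_recording.mp3", "phone_recording.ogg")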