
Media (1)
-
Somos millones 1
21 July 2014, by
Updated: June 2015
Language: French
Type: Video
Other articles (63)
-
Authorisations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash fallback is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance can be fully customised to match a chosen theme.
These technologies make it possible to deliver video and audio both to conventional computers (...)
On other sites (8908)
-
Run a Python ffmpeg audio conversion script from subprocess.call() within a Flask server
11 November 2019, by Kasun
I have built a small Flask server to handle the request. There are 3 parameters in the API function that I want to get: type, user_id and audio_file. One of them is a file, since it is used for the audio file conversion, so I have to receive a file. I have tested this in Postman and the audio file gets saved, but the subprocess.call(command) in the API function doesn't work. This is the Flask server code:
@app.route('/voice', methods=['GET','POST'])
def process_audio():
    try:
        if request.method == 'POST':
            input_type = request.form['type']
            user_id = request.form['user_id']
            static_file = request.files['audio_file']
            audio_name = secure_filename(static_file.filename)
            path = 't/'
            status = static_file.save(path + audio_name)
            full_file = path + audio_name
            if input_type == 1:
                cmd = "python t/convert_english.py --audio " + full_file
                res = subprocess.call([cmd], shell=True)
                f = open('t/ress.txt', 'w')
                f.write(str(res))
                f.close()
                return "true"
            else:
                cmd = "python t/convert_sinhala.py --audio " + full_file
                os.system(cmd)
                return "true"
        else:
            return "false"
    except Exception as e:
        print(e)
        logger.error(e)
        return e

The audio file gets saved in the directory as expected.
This is convert_english.py:
import subprocess
import argparse
import os
import logging
import speech_recognition as sr
from tqdm import tqdm
from multiprocessing.dummy import Pool
#subprocess.call('pip install pydub',shell=True)
from os import path
from pydub import AudioSegment

logging.basicConfig(filename='/var/www/img-p/t/ee.log', level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(name)s %(message)s')
logger = logging.getLogger(__name__)

ap = argparse.ArgumentParser()
ap.add_argument("-a", "--audio", required=True,
                help="path to input audio file")
args = vars(ap.parse_args())

src = args["audio"]
dst = "audio.wav"
sound = AudioSegment.from_mp3(src)
sound.export(dst, format="wav")

#subprocess.call('pip install ffmpeg-python',shell=True)
subprocess.call('mkdir parts', shell=True)
subprocess.call('ffmpeg -i audio.wav -f segment -segment_time 30 -c copy parts/out%09d.wav', shell=True)
#subprocess.call('pip install SpeechRecognition',shell=True)

pool = Pool(8)  # Number of concurrent threads
with open("api-key.json") as f:
    GOOGLE_CLOUD_SPEECH_CREDENTIALS = f.read()

r = sr.Recognizer()
files = sorted(os.listdir('parts/'))

def transcribe(data):
    idx, file = data
    name = "parts/" + file
    print(name + " started")
    # Load audio file
    with sr.AudioFile(name) as source:
        audio = r.record(source)
    # Transcribe audio file
    text = r.recognize_google_cloud(audio, credentials_json=GOOGLE_CLOUD_SPEECH_CREDENTIALS)
    print(name + " done")
    return {
        "idx": idx,
        "text": text
    }

all_text = pool.map(transcribe, enumerate(files))
pool.close()
pool.join()

transcript = ""
for t in sorted(all_text, key=lambda x: x['idx']):
    total_seconds = t['idx'] * 30
    # Cool shortcut from:
    # https://stackoverflow.com/questions/775049/python-time-seconds-to-hms
    # to get hours, minutes and seconds
    m, s = divmod(total_seconds, 60)
    h, m = divmod(m, 60)
    # Format time as h:m:s - 30 seconds of text
    transcript = transcript + "{:0>2d}:{:0>2d}:{:0>2d} {}\n".format(h, m, s, t['text'])

print(transcript)
with open("transcript.txt", "w") as f:
    f.write(transcript)

f = open("transcript.txt")
lines = f.readlines()
f.close()
f = open("transcript.txt", "w", encoding="utf-8")
for line in lines:
    f.write(line[8:])
f.close()

The thing is, the above code works when I manually run the command python t/convert_english.py --audio t/tttttttttt.mp3 in the terminal.
But when I try to run it from the Flask server itself it doesn't work, and I'm not getting an error either.
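A minimal debugging sketch, not from the original post, of one way to make the child process visible from Flask: run the converter with subprocess.run, use the same interpreter and absolute paths, and log the return code together with the captured stdout and stderr. The helper name run_converter and the log path t/convert.log are assumptions for illustration.

import os
import subprocess
import sys

def run_converter(script_path, audio_path, log_path='t/convert.log'):
    # Use the interpreter that runs Flask and absolute paths, so the result
    # does not depend on the server's PATH or current working directory.
    cmd = [sys.executable, os.path.abspath(script_path),
           '--audio', os.path.abspath(audio_path)]
    result = subprocess.run(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True)
    with open(log_path, 'w') as log:
        log.write('return code: {}\n'.format(result.returncode))
        log.write('stdout:\n{}\n'.format(result.stdout))
        log.write('stderr:\n{}\n'.format(result.stderr))
    return result.returncode == 0

# Hypothetical call from the route above:
# ok = run_converter('t/convert_english.py', full_file)

Two details of the posted route are also worth checking: request.form values arrive as strings, so input_type == 1 is never true and the else branch always runs; and relative paths such as t/convert_english.py are resolved against the Flask process's working directory, which may not be the directory used when running the command by hand in a terminal.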
-
How to fetch video frames and their timestamps from ffmpeg into Python code
14 February 2017, by vijiboy
Searching for an alternative, since OpenCV would not provide the timestamps required by my computer vision algorithm, I found this excellent article: https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
Working up the code on Windows, I still couldn't get the frame timestamps. I recollected seeing somewhere on an ffmpeg forum that video filters like showinfo are bypassed when output is redirected. Here's what I tried:
import subprocess as sp
import numpy
import cv2

command = ['ffmpeg',
           '-i', 'e:\sample.wmv',
           '-pix_fmt', 'rgb24',
           '-vcodec', 'rawvideo',
           '-vf', 'showinfo',  # video filter - showinfo will provide frame timestamps
           '-an', '-sn',       # -an, -sn disables audio and sub-title processing respectively
           '-f', 'image2pipe', '-']  # we need to output to a pipe

pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.STDOUT)  # TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???

for i in range(10):
    raw_image = pipe.stdout.read(1280*720*3)
    img_info = pipe.stdout.read(244)  # 244 characters is the current output of showinfo video filter
    print "showinfo output", img_info
    image1 = numpy.fromstring(raw_image, dtype='uint8')
    image2 = image1.reshape((720, 1280, 3))
    # write video frame to file just to verify
    videoFrameName = 'Video_Frame{0}.png'.format(i)
    cv2.imwrite(videoFrameName, image2)
    # throw away the data in the pipe's buffer.
    pipe.stdout.flush()

So how can I still get the frame timestamps from ffmpeg into Python code so that they can be used in my computer vision algorithm ...
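For reference, the showinfo filter writes its per-frame report (including pts_time) through ffmpeg's log, which goes to stderr; it is not interleaved with the raw frame bytes on stdout. Below is a minimal sketch, not from the original question, of one way to read frames from stdout while collecting timestamps from a separate stderr pipe; the frame size and input path are taken from the question, the rest is an assumption.

import re
import subprocess as sp
import threading

import numpy

WIDTH, HEIGHT = 1280, 720
FRAME_BYTES = WIDTH * HEIGHT * 3

command = ['ffmpeg',
           '-i', r'e:\sample.wmv',
           '-vf', 'showinfo',        # logs n, pts and pts_time for every frame
           '-pix_fmt', 'rgb24',
           '-vcodec', 'rawvideo',
           '-an', '-sn',
           '-f', 'image2pipe', '-']

pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)
timestamps = []

def drain_stderr():
    # showinfo lines look like "... n:   0 pts:      0 pts_time:0 ..."
    pattern = re.compile(rb'pts_time:\s*([0-9.]+)')
    for line in pipe.stderr:
        match = pattern.search(line)
        if match:
            timestamps.append(float(match.group(1).decode('ascii')))

reader = threading.Thread(target=drain_stderr)
reader.start()

frames = []
for i in range(10):
    raw_image = pipe.stdout.read(FRAME_BYTES)
    if len(raw_image) < FRAME_BYTES:
        break
    frames.append(numpy.frombuffer(raw_image, dtype='uint8').reshape((HEIGHT, WIDTH, 3)))

pipe.stdout.close()
pipe.terminate()
reader.join()
print(timestamps[:len(frames)])

Draining stderr on its own thread avoids the deadlock that can occur when the stderr pipe fills up while only stdout is being read.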
-
Bug #3620: Preview token inconsistent depending on whether the article is in draft...
12 February 2017
b b wrote:
So I really am talking about a native SPIP feature whose behaviour changed between SPIP 2.1 and SPIP 3.1.
No, there is no trace of var_relecture in the 2.1 core either: the var_relecture functionality lives in the bonux plugin and differs from the core's var_mode preview. If I listed the files concerned, it was precisely to show that var_relecture was never part of the Core ;-)
Unlike the preview token, which has been available in 2.1 for 3 years (cf. r21084).