
Media (1)
-
MediaSPIP Simple: a future default graphic theme?
26 September 2013
Updated: October 2013
Language: French
Type: Video
Other articles (41)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...) -
Support for all media types
10 April 2011
Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other documents (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
On other sites (4848)
-
No such file or directory Error with FFMPEG + CarrierWave screenshot method
10 July 2013, by dodgerogers747
I am using AWS CORS to upload videos to my site, all of which works as planned.
I have the following model method which runs as an after_create callback (for speed) to take a screenshot from the video file on AWS. I plan to move this out into a delayed job but I don't think this will solve this particular issue. Please advise if mistaken.
I use FFMPEG to take a screenshot from the AWS self.file location, then hand the file to CarrierWave by assigning it to self.screenshot, which uploads it to AWS.
Approx. 50% of the time it errors out with
Errno::ENOENT - No such file or directory
for the location of the screenshot image. How can I rectify my code to remove this error, and why does it only occur around 50% of the time? If anyone needs more code, just shout.
video.rb

after_create :take_screenshot
mount_uploader :screenshot, ImageUploader

def take_screenshot
  location = "#{Rails.root}/public/uploads/tmp/screenshots/#{unique}_#{File.basename(file)}.jpg"
  system `ffmpeg #{log_level} -i #{self.file} -ss 00:00:0#{time_frame} -vframes 1 #{location}`
  logger.debug "Trying to take screenshot from #{self.file}"
  # pass the actual file to CarrierWave to handle the image upload
  self.screenshot = File.open(location)
  self.save
  logger.debug "Deleting tmp file: #{location}: #{File.delete(location)}" if self.screenshot.present?
end

def unique
  (0..6).map { (65 + rand(26)).chr }.join
end

def log_level
  "-loglevel panic"
end

def time_frame
  rand(0..3)
end

Stack trace :
Started POST "/videos" for 127.0.0.1 at 2013-07-10 03:58:49 +0800
Processing by VideosController#create as JS
Parameters: {"utf8"=>"✓", "authenticity_token"=>"6M1Ia+Ag2E3HVKH2PO/p7jewxSpMPdWeVHGA933Bzjw=", "video"=>{"file"=>"http://bucketname.s3.amazonaws.com/uploads/video/file/671a87fb-91de-4eaf-a38a-1b25c51798e5/Good_7iron.m4v"}}
User Load (0.3ms) SELECT `users`.* FROM `users` WHERE `users`.`id` = 9 LIMIT 1
(0.1ms) BEGIN
SQL (0.2ms) INSERT INTO `videos` (`created_at`, `file`, `question_id`, `screenshot`, `updated_at`, `user_id`) VALUES ('2013-07-09 19:58:49', 'http://bucketname.s3.amazonaws.com/uploads/video/file/671a87fb-91de-4eaf-a38a-1b25c51798e5/Good_7iron.m4v', NULL, NULL, '2013-07-09 19:58:49', 9)
Trying to take screenshot from http://bucketname.s3.amazonaws.com/uploads/video/file/671a87fb-91de-4eaf-a38a-1b25c51798e5/Good_7iron.m4v
(0.8ms) ROLLBACK
Completed 500 Internal Server Error in 3550ms
Errno::ENOENT - No such file or directory - /Users/me/rails/project/public/uploads/tmp/screenshots/WCACLIC_Good_7iron.m4v.jpg:
app/models/video.rb:24:in `initialize'
app/models/video.rb:24:in `open'
app/models/video.rb:24:in `take_screenshot'
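No accepted fix appears in this excerpt. Purely as a sketch of one possible direction (my reading, not confirmed in the thread): ffmpeg runs with -loglevel panic and its exit status is never checked, so a failed run leaves no file behind and File.open then raises ENOENT; note also that system `...` first runs ffmpeg via backticks and then hands the captured output to system. A more defensive take_screenshot, reusing the unique and time_frame helpers above, might look like this:

require "fileutils"

def take_screenshot
  dir = Rails.root.join("public", "uploads", "tmp", "screenshots")
  FileUtils.mkdir_p(dir)  # make sure the tmp directory exists
  location = dir.join("#{unique}_#{File.basename(file)}.jpg").to_s

  # system with separate arguments skips the shell and returns true only when ffmpeg exits with status 0
  ok = system("ffmpeg", "-loglevel", "error", "-i", file.to_s,
              "-ss", "00:00:0#{time_frame}", "-vframes", "1", location)

  if ok && File.exist?(location)
    self.screenshot = File.open(location)  # hand the image to CarrierWave
    save
    File.delete(location)
  else
    logger.error "ffmpeg did not produce #{location}"
  end
end

-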
'C' program to pipeout audio file to FFMPEG and generate Video file
9 May 2017, by soflow
I am attempting to write a short 'C' program which reads in an audio file using FFMPEG, processes that file, and then outputs a file via FFMPEG which combines the new, modified audio with a video representation generated using the FFMPEG showwaves filter.
At present the program attempts to do the following :-
i) Read in an audio file, using pipein through FFMPEG
ii) Process the audio file using a portion of the ’C’ program
iii) Pipe out the modified audio to FFMPEG, and generate a file using the 'showwaves' filter in FFMPEG to create an MP4 file with audio and video.
The following command, run from the command line, generates the audio/video MP4 I want to create :-
ffmpeg -y -f s16le -ar 44100 -ac 1 -i 12345678.wav -i 12345678.wav -filter_complex "[0:a]showwaves=s=1280x720:mode=line:rate=25,format=yuv420p[v]" -map "[v]" -map 1:a:0 -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart 12345678.mp4
This code generates a processed audio file, and outputs it to a .wav file as required :-
#include <stdio.h>
#include <stdint.h>
#include <math.h>
void main()
{
    // Launch two instances of FFmpeg, one to read the original WAV
    // file and another to write the modified WAV file. In each case,
    // data passes between this program and FFmpeg through a pipe.
    FILE *pipein;
    FILE *pipeout;
    pipein = popen("ffmpeg -i 12345678.wav -f s16le -ac 1 -", "r");
    pipeout = popen("ffmpeg -y -f s16le -ar 44100 -ac 1 -i - out.wav", "w");

    // Read, modify and write one sample at a time
    int16_t sample;
    int count, n = 0;
    while (1)
    {
        count = fread(&sample, 2, 1, pipein); // read one 2-byte sample
        if (count != 1) break;
        ++n;
        sample = sample * sin(n * 5.0 * 2 * M_PI / 44100.0);
        fwrite(&sample, 2, 1, pipeout);
    }

    // Close input and output pipes
    pclose(pipein);
    pclose(pipeout);
}

(This code is borrowed from Ted Burke's excellent post here.)
I have made an attempt as shown below, but this is not working :-
#include <stdio.h>
#include <stdint.h>
#include <math.h>
void main()
{
    // Launch two instances of FFmpeg, one to read the original WAV
    // file and another to write the modified WAV file. In each case,
    // data passes between this program and FFmpeg through a pipe.
    FILE *pipein;
    FILE *pipeout;
    pipein = popen("ffmpeg -i 12345678.wav -f s16le -ac 1 -", "r");
    pipeout = popen("ffmpeg -y -f s16le -ar 44100 -ac 1 -i 12345678.wav -i
        12345678.wav -filter_complex "
        [0:a]showwaves=s=1280x720:mode=line:rate=25,format=yuv420p[v]" -map "[v]"
        -map 1:a:0 -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -
        codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart
        12345678.mp4
        ", "w");

    // Read, modify and write one sample at a time
    int16_t sample;
    int count, n = 0;
    while (1)
    {
        count = fread(&sample, 2, 1, pipein); // read one 2-byte sample
        if (count != 1) break;
        ++n;
        sample = sample * sin(n * 5.0 * 2 * M_PI / 44100.0);
        fwrite(&sample, 2, 1, pipeout);
    }

    // Close input and output pipes
    pclose(pipein);
    pclose(pipeout);
}

Ideally someone can suggest an improved version of the pipeout command above; alternatively, another process to achieve this would be interesting.
* EDIT *
Thanks to @Mulvya, the revised pipeout line is now :-
pipeout = popen("ffmpeg -y -f s16le -ar 44100 -ac 1 -i - -filter_complex "[0:a]showwaves=s=1280x720:mode=line:rate=25,format=yuv420p[v]" -map "[v]" -map 1:a:0 -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart 12345678.mp4
", "w") ;
On compiling with gcc I get the following error messages :-
wavtovid2.c: In function ‘main’:
wavtovid2.c:13:83: error: expected ‘]’ before ‘:’ token
pipeout = popen("ffmpeg -y -f s16le -ar 44100 -ac 1 -i - -
filter_complex "
[0:a]showwaves=s=1280x720:mode=line:rate=25,format=yuv420p[v]" -map "[v]"
-map 1:a:0 -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -
codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart
12345678.mp4
^
wavtovid2.c:13:86: error: expected ‘)’ before ‘showwaves’
pipeout = popen("ffmpeg -y -f s16le -ar 44100 -ac 1 -i - -
filter_complex "
[0:a]showwaves=s=1280x720:mode=line:rate=25,format=yuv420p[v]" -map "[v]"
-map 1:a:0 -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -
codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart
12345678.mp4
^
wavtovid2.c:13:98: error: invalid suffix "x720" on integer constant
pipeout = popen("ffmpeg -y -f s16le -ar 44100 -ac 1 -i - -
filter_complex "
[0:a]showwaves=s=1280x720:mode=line:rate=25,format=yuv420p[v]" -map "[v]"
-map 1:a:0 -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -
codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart
12345678.mp4
^
wavtovid2.c:13:153: warning: missing terminating " character
pipeout = popen("ffmpeg -y -f s16le -ar 44100 -ac 1 -i - -
filter_complex "
[0:a]showwaves=s=1280x720:mode=line:rate=25,format=yuv420p[v]" -map "[v]"
-map 1:a:0 -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -
codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart
12345678.mp4
^
wavtovid2.c:13:86: error: missing terminating " character
pipeout = popen("ffmpeg -y -f s16le -ar 44100 -ac 1 -i - -
filter_complex "
[0:a]showwaves=s=1280x720:mode=line:rate=25,format=yuv420p[v]" -map "[v]"
-map 1:a:0 -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -
codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart
12345678.mp4
^
wavtovid2.c:14:6: warning: missing terminating " character
", "w");
^
wavtovid2.c:14:1: error: missing terminating " character
", "w");
^
wavtovid2.c:13:21: warning: passing argument 1 of ‘popen’ makes pointer from integer without a cast
pipeout = popen("ffmpeg -y -f s16le -ar 44100 -ac 1 -i - -
filter_complex "
[0:a]showwaves=s=1280x720:mode=line:rate=25,format=yuv420p[v]" -map "[v]"
-map 1:a:0 -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -
codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart
12345678.mp4
^
In file included from wavtovid2.c:1:0:
/usr/include/stdio.h:872:14: note: expected ‘const char *’ but argument is of type ‘char’
extern FILE *popen (const char *__command, const char *__modes) __wur;
^
wavtovid2.c:13:15: error: too few arguments to function ‘popen’
pipeout = popen("ffmpeg -y -f s16le -ar 44100 -ac 1 -i - -
filter_complex "
[0:a]showwaves=s=1280x720:mode=line:rate=25,format=yuv420p[v]" -map "[v]"
-map 1:a:0 -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -
codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart
12345678.mp4
^
In file included from wavtovid2.c:1:0:
/usr/include/stdio.h:872:14: note: declared here
extern FILE *popen (const char *__command, const char *__modes) __wur;
^
wavtovid2.c:32:1: error: expected ‘;’ before ‘}’ token
}
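No resolution is recorded in this excerpt. As a sketch only, not taken from the thread: the compiler errors above come from the unescaped double quotes inside the C string literal, which terminate the string early. One way to write the revised popen call is to escape every inner quote as \" and to split the long command over adjacent string literals, which the compiler concatenates. The -map is changed from 1:a:0 to 0:a:0 here because this variant has only the single piped input; that adjustment is mine and is not confirmed in the post.

    pipeout = popen(
        /* adjacent string literals are concatenated; inner quotes are escaped with \" */
        "ffmpeg -y -f s16le -ar 44100 -ac 1 -i - "
        "-filter_complex \"[0:a]showwaves=s=1280x720:mode=line:rate=25,format=yuv420p[v]\" "
        "-map \"[v]\" -map 0:a:0 " /* single piped input, so the audio comes from input 0 */
        "-codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p "
        "-codec:a aac -strict -2 -b:a 384k -r:a 48000 "
        "-movflags faststart 12345678.mp4",
        "w");

-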
Speech recognition with python-telegram-bot without downloading an audio file
25 June 2022, by linz
I'm developing a Telegram bot in which the user sends a voice message, and the bot transcribes it and sends back what was said as text.
For that I am using the python-telegram-bot library and the speech_recognition library with the Google engine.
My problem is that the voice messages sent by users are .mp3; however, in order to transcribe them I need to convert them to .wav, and to do that I have to download the file sent to the bot.
Is there a way to avoid that? I understand this is neither an efficient nor a safe way to do it, since many active users at once will result in race conditions, and it takes a lot of space.



import subprocess
import speech_recognition as sr
from telegram.ext import MessageHandler, Filters


def voice_handler(update, context):
    bot = context.bot
    file = bot.getFile(update.message.voice.file_id)
    file.download('voice.mp3')
    filename = "voice.wav"

    # convert mp3 to wav file
    subprocess.call(['ffmpeg', '-i', 'voice.mp3', 'voice.wav', '-y'])

    # initialize the recognizer
    r = sr.Recognizer()

    # open the file
    with sr.AudioFile(filename) as source:
        # listen for the data (load audio to memory)
        audio_data = r.record(source)
        # recognize (convert from speech to text)
        text = r.recognize_google(audio_data, language='ar-AR')


def main() -> None:
    # the Updater object is created elsewhere in the original post (omitted here)
    updater.dispatcher.add_handler(MessageHandler(Filters.voice, voice_handler))
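No answer is recorded in this excerpt. Purely as a sketch of one way to keep everything in memory, under my own assumptions (the python-telegram-bot v13 API already used above, including File.download_as_bytearray; speech_recognition's AudioData wrapper; 16 kHz mono as an arbitrary rate for speech), a hypothetical voice_handler_in_memory could pipe the voice message through ffmpeg's stdin and stdout instead of writing voice.mp3 and voice.wav to disk:

import subprocess
import speech_recognition as sr


def voice_handler_in_memory(update, context):
    # hypothetical variant of voice_handler above: nothing is written to disk
    bot = context.bot
    tg_file = bot.getFile(update.message.voice.file_id)
    voice_bytes = bytes(tg_file.download_as_bytearray())

    # ffmpeg reads the voice message from stdin and writes raw 16-bit mono PCM to stdout
    result = subprocess.run(
        ['ffmpeg', '-i', 'pipe:0', '-f', 's16le', '-ac', '1', '-ar', '16000', 'pipe:1'],
        input=voice_bytes, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, check=True)

    # wrap the raw PCM directly (sample rate 16000 Hz, sample width 2 bytes)
    audio_data = sr.AudioData(result.stdout, 16000, 2)

    recognizer = sr.Recognizer()
    text = recognizer.recognize_google(audio_data, language='ar-AR')
    update.message.reply_text(text)

Because each request keeps its data in local variables rather than in shared files named voice.mp3 and voice.wav, the race condition between concurrent users mentioned in the question also goes away.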