
Other articles (44)
-
Automatic backup of SPIP channels
1 April 2010. As part of setting up an open platform, it is important for hosting providers to have fairly regular backups available to guard against any potential problem.
To carry out this task, two SPIP plugins are used: Saveauto, which makes regular backups of the database in the form of a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...) -
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out. -
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (10194)
-
Failed to convert web-saved .webm audio to .wav using PHP "shell_exec" and JavaScript
30 May 2022, by Anirbasgnaw. I'm working on an online experiment that records participants' audio from the browser. The audio data I get has a .webm extension, so I plan to use ffmpeg to convert it to .wav when I save the data.


I tried to use PHP's shell_exec, but nothing happens when I run the scripts. Then I found that my echo and print_r calls also did not work. I'm new to PHP and JavaScript, so I'm really confused now.

Below is the relevant code; I would really appreciate it if you could help!


write_data.php:

<?php
 $post_data = json_decode(file_get_contents('php://input'), true); 
 // the directory "data" must be writable by the server
 $name = "../".$post_data['filename'];
 $data = $post_data['filedata'];
 // write the file to disk
 file_put_contents($name, $data);
 
 $INPUT = trim($name) . ".webm";
 $OUTPUT = trim($name) . ".wav";
 echo "start converting...";

 // check if ffmpeg is available
 $ffmpeg = trim(shell_exec('which ffmpeg'));
 print_r($ffmpeg);
 // call ffmpeg
 shell_exec("ffmpeg -i '$INPUT' -ac 1 -f wav '$OUTPUT' 2>&1 ");
?>
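One way to make the silent failure visible is to capture ffmpeg's exit code and output instead of discarding them. A minimal sketch that could stand in for the final shell_exec() call above, reusing the same $INPUT and $OUTPUT variables ($cmd, $log and $status are illustrative names, not part of the original script):

 // sketch: run the same conversion but keep ffmpeg's output and exit code
 $cmd = "ffmpeg -i " . escapeshellarg($INPUT) . " -ac 1 -f wav " . escapeshellarg($OUTPUT) . " 2>&1";
 exec($cmd, $log, $status); // $log collects output lines, $status holds the exit code
 if ($status !== 0) {
  // log the failure server-side so it is not lost even if nothing reaches the browser
  error_log("ffmpeg failed with status $status: " . implode("\n", $log));
 }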



The JavaScript:

saveData: function(fileName, format){
  // save as json by default
  if (!format){ format = 'json'; }
  // add extension to filename
  fileName = `${fileName}.${format}`
  // create saveData object using fetch
  let saveData = [];
  if (format == 'json') {
    saveData = {
      type: 'call-function',
      async: true,
      func: async function(done) {
        let data = jsPsych.data.get().json();
        const response = await fetch("../write_data.php", {
          method: "POST",
          headers: {
            "content-type": "application/json"
          },
          body: JSON.stringify({ filename: fileName, filedata: data })
        });
        if (response.ok) {
          const responseBody = await response.text();
          done(responseBody);
        }
      }
    }
  } else {
    saveData = {
      type: 'call-function',
      async: true,
      func: async function(done) {
        let data = jsPsych.data.get().csv();
        const response = await fetch("../write_data.php", {
          method: "POST",
          headers: {
            "content-type": "application/json"
          },
          body: JSON.stringify({ filename: fileName, filedata: data })
        });
        if (response.ok) {
          const responseBody = await response.text();
          done(responseBody);
        }
      }
    }
  }
  return saveData;
},



-
Getting to know how much progress ffmpeg has made in C#
29 December 2022, by Aenye_Cerbin. I'm writing an app that uses ffmpeg to convert audio/video files.
I can call ffmpeg and specify its options, and I can see that it's working.
I want to be able to check how much of the job is done, so I can present it to the user.
From what I've read, ffmpeg doesn't offer a built-in progress bar or percentage, and its console output is not very friendly, so I cannot simply show that output to the user, because it would look awful. I am not using any wrapper and do not plan to, because I need to write my own backend that calls ffmpeg and a frontend to communicate with the user.


I'm using System.Threading to start ffmpeg in a new process; I can tell whether the process is running and get its exit code, but I don't see any way to get information about how much of the job is done. I thought I could simply measure the input file size and periodically check the output file size, but that wouldn't be accurate, since the output size depends on the codec and container used.
I've read that I can also use frame progress, but how to obtain it is still not clear to me. I also need this to work for audio files.


Is there any reasonable way to do this?
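For reference, ffmpeg does have a machine-readable progress output: the -progress option writes periodic key=value lines (for example out_time=... and finally progress=end) to a file, URL or pipe, which the parent process can read from the redirected stream, and ffprobe can report the total duration up front so out_time can be turned into a percentage. A rough sketch of the two command lines, with hypothetical input.mp4/output.mp4 names:

ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp4
ffmpeg -i input.mp4 -nostats -progress pipe:1 output.mp4

Since out_time is a timestamp rather than a frame count, the same approach should also work for audio-only conversions.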


-
I have an ffmpeg command to concatenate 300+ videos of different formats. What is the proper syntax for the concat complex filter?
25 April 2022, by jokoon. I plan to concatenate a large number of video files of different formats and resolutions, some without sound, and add a short black-screen "pause" of about 0.5s between each.


I wrote a Python script to generate such a command.


I created a 0.5s video file using:
ffmpeg.exe -t 0.5 -f lavfi -i color=c=black:s=640x480 -c:v libx264 -tune stillimage -pix_fmt yuv420p blank500ms.mp4

I then added a silent audio track to it with:
-f lavfi -i anullsrc -c:v copy -c:a aac -shortest
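As a side note, both steps could presumably be folded into one (untested) command with the same parameters, so blank500ms.mp4 carries its silent AAC track from the start:

ffmpeg.exe -t 0.5 -f lavfi -i color=c=black:s=640x480 -f lavfi -i anullsrc -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a aac -shortest blank500ms.mp4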


I now have the problem of adding a blank audio track for streams that don't have one, but I don't want to generate a new file; I want to handle it inside my complex filter.


Below are the generated command and my complex filter script.


The command (there are line breaks because I send it with the Python subprocess module):


ffmpeg.exe
-i
input0.mp4
-i
input1.mp4
-i
input2.mp4
-i
input3.mp4
-i
input4.mp4
-i
input5.mp4
-i
input6.mp4
-i
input7.mp4
-i
input8.mp4
-i
input9.mp4
-i
input10.mp4
-f
lavfi
-i
anullsrc
-filter_complex_script
C:/filter_complex_script.txt
-map
"[final_video]"
-map
"[final_audio]"
output.mp4



The complex_filter_script:


[0]fps=24[fps0];
[fps0]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled0];
[1]fps=24[fps1];
[fps1]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled1];
[2]fps=24[fps2];
[fps2]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled2];
[3]fps=24[fps3];
[fps3]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled3];
[4]fps=24[fps4];
[fps4]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled4];
[5]fps=24[fps5];
[fps5]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled5];
[6]fps=24[fps6];
[fps6]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled6];
[7]fps=24[fps7];
[fps7]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled7];
[8]fps=24[fps8];
[fps8]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled8];
[9]fps=24[fps9];
[fps9]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled9];
[10]fps=24[fps10];
[fps10]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled10];
[10]split=10[blank0][blank1][blank2][blank3][blank4][blank5][blank6][blank7][blank8][blank9];
[rescaled0:v][0:a][blank0][rescaled1:v][1:a][blank1][rescaled2:v][2:a][blank2][rescaled3:v][3:a][blank3][rescaled4:v][4:a][blank4][rescaled5:v][5:a][blank5][rescaled6:v][11:a][blank6][rescaled7:v][11:a][blank7][rescaled8:v][11:a][blank8][rescaled9:v][11:a][blank9]concat=n=22:v=1:a=1[final_video][final_audio]



As you can see, some videos use [11:a], because it's a silent audio stream.

input10.mp4, mapped to [10] and then split (or "cloned") into blank0 to blank9, is a short pause separator.


ffmpeg gives me this error:


[Parsed_split_55 @ 000001591c33b280] Media type mismatch between the 'Parsed_split_55' filter output pad 1 (video) and the 'Parsed_concat_56' filter input pad 5 (audio)
[AVFilterGraph @ 000001591bf1e6c0] Cannot create the link split:1 -> concat:5
Error initializing complex filters.
Invalid argument



I'm a bit lost when it comes to using the [X:Y:Z] syntax, and how the order matters in the concat argument list.
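Roughly speaking, with concat=n=...:v=1:a=1 every segment has to contribute exactly one video pad immediately followed by one audio pad, so each blank separator needs an audio pad of its own instead of reusing a single [11:a] label. A sketch of the shape the last lines could take (not a verified script; it assumes for the moment that every clip has an audio stream and that input10.mp4 still carries the silent AAC track added earlier, which makes n=20 for ten clips plus ten separators):

[rescaled10]split=10[bv0][bv1][bv2][bv3][bv4][bv5][bv6][bv7][bv8][bv9];
[10:a]asplit=10[ba0][ba1][ba2][ba3][ba4][ba5][ba6][ba7][ba8][ba9];
[rescaled0][0:a][bv0][ba0][rescaled1][1:a][bv1][ba1][rescaled2][2:a][bv2][ba2][rescaled3][3:a][bv3][ba3][rescaled4][4:a][bv4][ba4][rescaled5][5:a][bv5][ba5][rescaled6][6:a][bv6][ba6][rescaled7][7:a][bv7][ba7][rescaled8][8:a][bv8][ba8][rescaled9][9:a][bv9][ba9]concat=n=20:v=1:a=1[final_video][final_audio]

Clips that really have no audio would each still need their own finite silent track (for instance one anullsrc input per soundless clip, or an asplit of anullsrc trimmed with atrim to the clip's length), since a single input label such as [11:a] can only feed one filter pad. Note also that filter output labels are reused as-is ([rescaled0], not [rescaled0:v]).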


I'm open to any other suggestion to solve my problem. I would rather do this in a single command, without intermediate files.


EDIT:


For context, I have already written a large concat+xstack filter that worked well with 8 GB of memory.


In this case there are a lot of inputs, but they are small (most are between 1 and 10 MB), so it would probably not cause out-of-memory problems, although I'm not certain.