
Media (1)
-
Spitfire Parade - Crisis
15 May 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (67)
-
Submit bugs and patches
13 April 2011
Unfortunately, no software is ever perfect.
If you think you have found a bug, report it using our ticket system. To help us fix it, please provide the following information: the browser you are using, including its exact version; as precise an explanation of the problem as possible; if possible, the steps that lead to the problem; and a link to the site / page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
Possibility of deployment as a farm
12 April 2011
MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This makes it possible, for example: to share the setup costs between several projects / individuals; to quickly deploy a multitude of unique sites; to avoid having to dump every creation into a digital catch-all, as is the case with the large general-public platforms scattered across the (...) -
Adding user-specific information and other changes to author-related behaviour
12 April 2011
The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.
On other sites (9109)
-
The "Non-monotonic DTS in output stream" error during concatenation, even after re-encoding the input files
19 June 2024, by nishhi
I am trying to write a Python program that first edits different videos separately, under the headings "intro", "story" and "byte", with the help of FFmpeg and Python's subprocess module, and then concatenates them in a function named "bind". At first I encountered the "Non-monotonic DTS in output stream" error, so I re-encoded the input files before concatenating them, but I am still getting the error.


import os
import subprocess


# function to assemble all the videos
def bind():
    bunch = ["final_intro.mp4", "final_story.mp4", "finalbite.mp4"]
    new_bunch = []

    # re-encode every clip to the same codecs before concatenating
    for video in bunch:
        name = f"re_{video}"
        re_command = [
            "ffmpeg", "-y",
            "-i", video,
            "-c:v", "libx264",
            "-c:a", "aac",
            "-strict", "experimental",
            "-b:a", "192k",
            name,
        ]
        subprocess.run(re_command)
        new_bunch.append(name)

    # write the list file used by the concat demuxer
    with open("concat_list_final.txt", "w") as f:
        for video in new_bunch:
            f.write(f"file './{video}'\n")

    # concatenate the re-encoded clips without re-encoding again
    command_final = [
        "ffmpeg", "-y",
        "-f", "concat",
        "-safe", "0",
        "-i", "concat_list_final.txt",
        "-c", "copy",
        "done.mp4",
    ]
    subprocess.run(command_final)

    # clean up the intermediate files
    for video in new_bunch:
        os.remove(video)
    os.remove("final.mp4")
    os.remove("updated_final.mp4")
    os.remove("concat_list_final.txt")

    return "done.mp4"



For reference, these are the error messages:


[mov,mp4,m4a,3gp,3g2,mj2 @ 0x14cf16040] Auto-inserting h264_mp4toannexb bitstream filter
[mp4 @ 0x12ce06880] Non-monotonic DTS in output stream 0:0; previous: 152933, current: 127760; changing to 152934. This may result in incorrect timestamps in the output file.
[mp4 @ 0x12ce06880] Non-monotonic DTS in output stream 0:0; previous: 152934, current: 128272; changing to 152935. This may result in incorrect timestamps in the output file.
..........
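
One commonly suggested workaround, sketched below under the assumption that the three clips share the same resolution, frame rate and audio layout (it is not a verified fix for the code above): concatenate through FFmpeg's concat filter instead of the concat demuxer with "-c copy", so that timestamps are regenerated during the re-encode rather than copied from the inputs. The output name "done_filter.mp4" is illustrative.

import subprocess

# Hedged sketch: use the concat *filter*, which rebuilds timestamps,
# instead of the concat demuxer followed by stream copy.
bunch = ["final_intro.mp4", "final_story.mp4", "finalbite.mp4"]

command = ["ffmpeg", "-y"]
for video in bunch:
    command += ["-i", video]

# Builds "[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1[v][a]"
pads = "".join(f"[{i}:v][{i}:a]" for i in range(len(bunch)))
filter_graph = f"{pads}concat=n={len(bunch)}:v=1:a=1[v][a]"

command += [
    "-filter_complex", filter_graph,
    "-map", "[v]", "-map", "[a]",
    "-c:v", "libx264", "-c:a", "aac", "-b:a", "192k",
    "done_filter.mp4",
]
subprocess.run(command, check=True)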


-
FFmpeg error with ffmpeg.FS("readfile", "output.mp4") while trying to get FFmpeg to work in a React app
21 June 2024, by Paul Tham
const stackVideos = useCallback(
  async (video1) => {
    try {
      console.log("Fetching video2 from storage...");
      const video2Ref = ref(storage, "video2.mp4");
      const video2Url = await getDownloadURL(video2Ref);
      const video2Blob = await (await fetch(video2Url)).blob();

      console.log("Writing video1 to FFmpeg FS...");
      await ffmpeg.FS("writeFile", "video1.mp4", await fetchFile(video1));

      console.log("Writing video2 to FFmpeg FS...");
      await ffmpeg.FS("writeFile", "video2.mp4", await fetchFile(video2Blob));

      console.log("Files in FFmpeg FS after write:");
      const files = await ffmpeg.FS("readdir", "/");
      console.log(files);

      const { start, end } = inputs[0];
      const startSeconds = new Date(`1970-01-01T${start}Z`).getTime() / 1000;
      const endSeconds = new Date(`1970-01-01T${end}Z`).getTime() / 1000;
      const duration = endSeconds - startSeconds;

      console.log("Running FFmpeg command...");
      await ffmpeg.run(
        "-i",
        "video1.mp4",
        "-ss",
        startSeconds.toString(),
        "-t",
        duration.toString(),
        "-i",
        "video2.mp4",
        "-filter_complex",
        "[0:v]scale=1080:-1[v1];[1:v]scale=-1:1920/2[v2scaled];[v2scaled]crop=1080:1920/2[v2cropped];[v1][v2cropped]vstack=inputs=2,scale=1080:1920[vid]",
        "-map",
        "[vid]",
        "-map",
        "0:a",
        "-c:v",
        "libx264",
        "-crf",
        "23",
        "-preset",
        "veryfast",
        "-shortest",
        "output1.mp4"
      );

      console.log("Files in FFmpeg FS after run:");
      const filesAfterRun = await ffmpeg.FS("readdir", "/");
      console.log(filesAfterRun);

      console.log("Reading output1.mp4 from FFmpeg FS...");
      const data = await ffmpeg.FS("readfile", "output1.mp4");
      console.log("after the FS readfile");
      const url = URL.createObjectURL(
        new Blob([data.buffer], { type: "video/mp4" })
      );
      setStackedVideo(url);
      setOutputFileReady(true); // Mark output file as ready
    } catch (err) {
      console.error("FFmpeg error output:", err);
      setError(`FFmpeg run error: ${err.message}`);
      setIsProcessing(false);
    }
  },
  [inputs]
);



My error seems to be stemming from this line:


const data = await ffmpeg.FS("readfile", "output1.mp4");



Looking at the ffmpeg.wasm documentation, I thought some of the function names had changed, but when I switched to the new names they did not seem to be recognised. Sometimes this also gives me other errors, such as ones coming from worker.js, which I don't understand well enough to debug myself.




-
Alexa developer console gives me "The audio is not of a supported MPEG version" Error
20 March, by Boban BoBo Banjevic
I am trying to make an Alexa skill, and I am using audio MP3 files stored in S3; the Lambda function has access to my DynamoDB and ultimately to S3. But I have an issue with my audio files: I keep getting "The audio is not of a supported MPEG version" when I test the Alexa skill. I tried multiple
ffmpeg -i
file conversions.

When I type
ffprobe -v quiet -print_format json -show_format -show_streams "Piano.mf.A1.audio10.mp3"
this is what I get for my file, shown below:


Input #0, mp3, from 'Piano.mf.A1.audio10.mp3':
 Metadata:
 encoder : Lavf61.9.107
 Duration: 00:00:07.00, start: 0.025057, bitrate: 129 kb/s
 Stream #0:0: Audio: mp3 (mp3float), 44100 Hz, stereo, fltp, 128 kb/s
 Metadata:
 encoder : Lavc61.33
PS C:\Users\boban\OneDrive\Desktop\newBob2> ffprobe -v quiet -print_format json -show_format -show_streams "Piano.mf.A1.audio10.mp3"
{
 "streams": [
 {
 "index": 0,
 "codec_name": "mp3",
 "codec_long_name": "MP3 (MPEG audio layer 3)",
 "codec_type": "audio",
 "codec_tag_string": "[0][0][0][0]",
 "codec_tag": "0x0000",
 "sample_fmt": "fltp",
 "sample_rate": "44100",
 "channels": 2,
 "channel_layout": "stereo",
 "bits_per_sample": 0,
 "initial_padding": 0,
 "r_frame_rate": "0/0",
 "avg_frame_rate": "0/0",
 "time_base": "1/14112000",
 "start_pts": 353600,
 "start_time": "0.025057",
 "duration_ts": 98795520,
 "duration": "7.000816",
 "bit_rate": "128000",
 "disposition": {
 "default": 0,
 "dub": 0,
 "original": 0,
 "comment": 0,
 "lyrics": 0,
 "karaoke": 0,
 "forced": 0,
 "hearing_impaired": 0,
 "visual_impaired": 0,
 "clean_effects": 0,
 "attached_pic": 0,
 "timed_thumbnails": 0,
 "non_diegetic": 0,
 "captions": 0,
 "descriptions": 0,
 "metadata": 0,
 "dependent": 0,
 "still_image": 0,
 "multilayer": 0
 },
 "tags": {
 "encoder": "Lavc61.33"
 }
 }
 ],
 "format": {
 "filename": "Piano.mf.A1.audio10.mp3",
 "nb_streams": 1,
 "nb_programs": 0,
 "nb_stream_groups": 0,
 "format_name": "mp3",
 "format_long_name": "MP2/3 (MPEG audio layer 2/3)",
 "start_time": "0.025057",
 "duration": "7.000816",
 "size": "112892",
 "bit_rate": "129004",
 "probe_score": 51,
 "tags": {
 "encoder": "Lavf61.9.107"
 }
 }
}



That should be everything that is required. Am I doing something wrong? This is the only thing stopping me from finishing the skill. ChatGPT suggested it could be because of the mp3float decoder, but after many conversions it still doesn't work.
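
One hedged observation for context: a 44100 Hz MP3 is an MPEG-1 Layer III stream, whereas the documentation for Alexa's SSML audio tag asks for MPEG version 2 files, i.e. 48 kbps at a 16000, 22050 or 24000 Hz sample rate. Below is a minimal conversion sketch along those lines; the output file name is illustrative, and it assumes the skill plays the clip through SSML rather than the AudioPlayer interface.

import subprocess

# Re-encode to the MPEG-2 profile described for the SSML audio tag:
# 48 kbps bit rate, 22050 Hz sample rate, stereo MP3. Output name is illustrative.
subprocess.run([
    "ffmpeg", "-y",
    "-i", "Piano.mf.A1.audio10.mp3",
    "-ac", "2",
    "-c:a", "libmp3lame",
    "-b:a", "48k",
    "-ar", "22050",
    "alexa_ready.mp3",
], check=True)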