
Media (1)
-
The conservation of net art in the museum. The strategies at work
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (57)
-
Other interesting software
12 April 2011 — We don't claim to be the only ones doing what we do... and we certainly don't claim to be the best either... What we do, we just try to do well, and better and better...
The following list covers software that more or less tries to do the same thing as MediaSPIP, or that MediaSPIP more or less tries to emulate; either way...
We don't know them and haven't tried them, but you might want to take a look.
Videopress
Website: (...) -
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page -
Publishing on MediaSPIP
13 June 2013 — Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out.
On other sites (9758)
-
7 Benefits Segmentation Examples + How to Get Started
26 March 2024, by Erin -
Multiple trims to a video using ffmpeg generating video with shorter duration than expected [closed]
9 September 2024, by Gerardo — I have an application that, given a video, trims multiple parts of that video using ffmpeg. Each part is cropped, scaled and then concatenated to generate a single video.

As an example, I have a video that is 1 minute and 44 seconds long at 60 fps. My goal is to trim 3 parts of the video:


- First one between seconds 0 to 44.666
- Second one between seconds 44.666 to 74.349
- Third one between seconds 74.349 to 103.985
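
Summing the three segments, the expected output duration is 44.666 + 29.683 + 29.636 ≈ 103.985 seconds, i.e. essentially the full source length.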








The ffmpeg command I use to achieve that is the following:


ffmpeg -y -hide_banner -i bg_720_1280.png -i error.mp4 -filter_complex "
[1:v]trim=0.0:44.666,setpts=PTS-STARTPTS,crop=405.0:720.0:437.5:0.0,scale=-2:1280.0[crop_1_0_v];
[1:a]atrim=0.0:44.666,volume=1.0,asetpts=PTS-STARTPTS[crop_1_0_a];
[0:v][crop_1_0_v]overlay=enable='between(t,0,44.666)':x=0.0:y=0.0[crop_1_0_v];
[1:v]trim=44.666:74.349,setpts=PTS-STARTPTS,crop=405.0:720.0:437.5:0.0,scale=-2:1280.0[crop_2_0_v];
[1:a]atrim=44.666:74.349,volume=1.0,asetpts=PTS-STARTPTS[crop_2_0_a];
[0:v][crop_2_0_v]overlay=enable='between(t,0,29.683)':x=0.0:y=0.0[crop_2_0_v];
[1:v]trim=74.349:103.985,setpts=PTS-STARTPTS,crop=405.0:720.0:437.5:0.0,scale=-2:1280.0[crop_3_0_v];
[1:a]atrim=74.349:103.985,volume=1.0,asetpts=PTS-STARTPTS[crop_3_0_a];
[0:v][crop_3_0_v]overlay=enable='between(t,0,29.636)':x=0.0:y=0.0[crop_3_0_v];
[crop_1_0_a][crop_2_0_a][crop_3_0_a]concat=n=3:v=0:a=1[a];
[crop_1_0_v][crop_2_0_v][crop_3_0_v]concat=n=3:v=1:a=0[outv];
[a]amix=1:duration=longest[outa]" -map "[outv]" -map "[outa]" -vcodec libx264 -acodec aac -sws_flags lanczos -pix_fmt yuv420p -crf 17 -preset superfast -r 60 test.mp4



Running this command generates a video of only 11 seconds, and I'm unable to understand why. What is wrong with the command? I'm also open to recommendations for the ffmpeg command, in case you know a more efficient or performant way to do this.


I'm using the following FFmpeg version:


ffmpeg version 7.0.2 Copyright (c) 2000-2024 the FFmpeg developers
 built with Apple clang version 15.0.0 (clang-1500.3.9.4)
 configuration: --prefix=/usr/local/Cellar/ffmpeg/7.0.2 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags='-Wl,-ld_classic' --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libharfbuzz --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox
 libavutil 59. 8.100 / 59. 8.100
 libavcodec 61. 3.100 / 61. 3.100
 libavformat 61. 1.100 / 61. 1.100
 libavdevice 61. 1.100 / 61. 1.100
 libavfilter 10. 1.100 / 10. 1.100
 libswscale 8. 1.100 / 8. 1.100
 libswresample 5. 1.100 / 5. 1.100
 libpostproc 58. 1.100 / 58. 1.100



But I got the same issue with static ffmpeg builds.


The file bg_720_1280.png is just a transparent image with a resolution of 720x1280. I think I could achieve the same by using the nullsrc filter with that resolution instead of using this background image.
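
As a rough, untested sketch of that idea (my assumption: a fully transparent 720x1280 canvas at 60 fps can stand in for the PNG; note that nullsrc produces unspecified pixel data, so a color source with zero alpha is closer to a transparent background), only the first input would change:


ffmpeg -y -hide_banner \
 -f lavfi -i "color=c=black@0.0:s=720x1280:r=60,format=rgba" \
 -i error.mp4 \
 -filter_complex "(same filtergraph as above)" \
 (same output options as above) test.mp4


Whether this changes the duration behaviour is a separate question; the filtergraph itself would be untouched.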

-
Python pydub issue: Can't reopen a wav file after it has been re-encoded by ffmpeg [Solved]
28 April 2024, by Trent Tompkins — Solved by Michael Butscher. Updated code is now in the repo, in the fixed branch. Thanks!


I have a Python script that reads in all the *.wav files in a directory and saves them in a folder called out after adjusting their volume.


lower.py:


import os
from pydub import AudioSegment as ass

# get the current working directory
current_working_directory = os.getcwd()
all_files = os.listdir()

while len(all_files) > 0:
    file = all_files.pop()
    song = ass.from_wav(file)
    song = song - 16                 # lower the volume by 16 dB
    song.export('out\\' + file)      # write the adjusted file into out\




https://github.com/tibberous/ChangeWavVolume/tree/fixed


The problem is (and this took me a LONG time to figure out): after the files are output once, if you try to read them again with AudioSegment.from_wav(file), ffmpeg throws an error saying it can't decode the file. The output wav files play in Windows Media Player, and even have their audio adjusted, but I am guessing there is somehow an error either in the file data itself or in the file headers.
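
The post doesn't spell out what the accepted fix was, but one plausible cause (an assumption on my part, based on pydub's documented defaults rather than on the linked repo) is that AudioSegment.export() defaults to format='mp3', so the files written to out\ are MP3 data with a .wav extension; players that sniff the content still open them, while from_wav() expects a real RIFF/WAVE file. A minimal sketch with the format passed explicitly:


import os
from pydub import AudioSegment as ass

os.makedirs('out', exist_ok=True)            # make sure the output folder exists
for file in os.listdir():
    if file.lower().endswith('.wav'):        # only touch wav files
        song = ass.from_wav(file) - 16       # lower the volume by 16 dB
        # assumed fix: force a real wav container instead of the mp3 default
        song.export(os.path.join('out', file), format='wav')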


I put the code on GitHub. Check out the branch 'fixed'.


Btw, I just installed everything, so I should have the current versions of everything. I am using the Chocolatey package manager on Windows 10. Read README.txt (in progress) to see how I set up my dev environment.




Python 3.12.2
Chocolatey v2.2.2
C:\Users\moren>C:\ProgramData\chocolatey\bin\ffmpeg
ffmpeg version 7.0-essentials_build-www.gyan.dev Copyright (c) 2000-2024 the FFmpeg developers
 built with gcc 13.2.0 (Rev5, Built by MSYS2 project)
 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora --enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband
 libavutil 59. 8.100 / 59. 8.100
 libavcodec 61. 3.100 / 61. 3.100
 libavformat 61. 1.100 / 61. 1.100
 libavdevice 61. 1.100 / 61. 1.100
 libavfilter 10. 1.100 / 10. 1.100
 libswscale 8. 1.100 / 8. 1.100
 libswresample 5. 1.100 / 5. 1.100
 libpostproc 58. 1.100 / 58. 1.100
Universal media converter







I opened the wav files the program generated with Media Player, VLC Media Player and Audacity, thinking they might throw an error, but the only thing that can't seem to read the exported wav files is the script that created them!