
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (38)
-
The MediaSPIP configuration area
29 November 2010
The MediaSPIP configuration area is restricted to administrators. An "administer" menu link is usually displayed at the top of the page [1].
It lets you configure your site in detail.
Navigation within this configuration area is divided into three parts: the general site configuration, which among other things lets you modify the main information about the site (...) -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
Support audio et vidéo HTML5
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used as a fallback.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (3697)
-
FFMPEG sidechaincompress error initializing complex filters with invalid argument
15 August 2022, by DevDevRun
I am using FFmpeg to process various audio files.
When using the sidechaincompress filter, I found that this command line works on my personal Windows 10 machine but no longer works on our Debian server.
Maybe this command no longer works on a newer version, or am I missing something (a "'" in the command line, or a package?) on our Linux server? We definitely need to use this sidechaincompress filter with two audio files, "Assets" and "Main".


Here is the command we run:


FFMPEG -i /var/asset.flac -i /var/main.flac -filter_complex "[1:a]asplit=2[sc][mix];[0:a][sc]sidechaincompress=threshold=0.05:ratio=11:release:3000[compr];[compr][mix]amix=normalize=0" /var/sidechain.flacs



Linux version: ffmpeg version 4.3.4-0+deb11u1


and it returns:


ffmpeg version 4.3.4-0+deb11u1 Copyright (c) 2000-2021 the FFmpeg developers
 built with gcc 10 (Debian 10.2.1-6)
 configuration: --prefix=/usr --extra-version=0+deb11u1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
 libavutil 56. 51.100 / 56. 51.100
 libavcodec 58. 91.100 / 58. 91.100
 libavformat 58. 45.100 / 58. 45.100
 libavdevice 58. 10.100 / 58. 10.100
 libavfilter 7. 85.100 / 7. 85.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 7.100 / 5. 7.100
 libswresample 3. 7.100 / 3. 7.100
 libpostproc 55. 7.100 / 55. 7.100
Input #0, flac, from '/var/asset.flac':
 Metadata:
 encoder : Lavf59.25.100
 Duration: 00:02:33.65, start: 0.000000, bitrate: 760 kb/s
 Stream #0:0: Audio: flac, 44100 Hz, stereo, s32 (24 bit)
Input #1, flac, from '/var/2-main.flac':
 Metadata:
 encoder : Lavf58.45.100
 Duration: 00:02:33.65, start: 0.000000, bitrate: 1093 kb/s
 Stream #1:0: Audio: flac, 44100 Hz, stereo, s32 (24 bit)
[sidechaincompress @ 0x5622749721c0] [Eval @ 0x7fff94caf520] Undefined constant or missing '(' in 'release'
[sidechaincompress @ 0x5622749721c0] Unable to parse option value "release"
[sidechaincompress @ 0x5622749721c0] Value 3000.000000 for parameter 'mode' out of range [0 - 1]
[sidechaincompress @ 0x5622749721c0] [Eval @ 0x7fff94caf550] Undefined constant or missing '(' in 'release'
[sidechaincompress @ 0x5622749721c0] Unable to parse option value "release"
[sidechaincompress @ 0x5622749721c0] Error setting option level_in to value release.
[Parsed_sidechaincompress_1 @ 0x5622749720c0] Error applying options to the filter.
[AVFilterGraph @ 0x56227493ea40] Error initializing filter 'sidechaincompress' with args 'threshold=0.05:ratio=11:release:3000'
Error initializing complex filters.
Invalid argument
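The log hints at the likely cause: filter options take name=value pairs, so the colon in release:3000 makes ffmpeg read release as a bare value and 3000 as the next positional option (mode, whose range is [0 - 1], matching the error). A minimal sketch of a corrected filtergraph, reusing the paths from the question (the output extension is presumably meant to be .flac, not .flacs); the command is echoed here rather than executed:

```shell
# Corrected filtergraph: sidechaincompress options use name=value pairs,
# so "release:3000" becomes "release=3000".
FILTER='[1:a]asplit=2[sc][mix];[0:a][sc]sidechaincompress=threshold=0.05:ratio=11:release=3000[compr];[compr][mix]amix=normalize=0'

# Print the full command rather than running it (paths from the question).
echo ffmpeg -i /var/asset.flac -i /var/main.flac \
  -filter_complex "$FILTER" /var/sidechain.flac
```

If the filtergraph parses, the "Undefined constant or missing '('" and "out of range" messages should disappear on both the Windows and Debian builds.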



-
Image to Video Conversion in Laravel Using FFMpeg
21 August 2022, by Yash Bohra
I'm trying to convert an image into a video using FFMpeg in Laravel. It generates the video, but the video duration is 0 seconds.


Here is my code...


$filename = $_FILES['video']['name'];
$tempname = $_FILES['video']['tmp_name'];
move_uploaded_file($tempname, storage_path('app/public/image2video/').$filename);

FFMpeg::fromDisk('image2video')
    ->open($filename)
    ->addFilter('-loop', 3)
    // ->addFilter('-c:v', 'libx264')
    ->addFilter('-t', 3000)
    ->addFilter('-s', '1920x1080')
    // ->addFilter('-pix_fmt', 'yuv420p')
    ->export()
    ->toDisk('image2videoConverted')
    // ->dd('output.mp4');
    ->save('output.mp4');



Here is the FFMpeg command it generates:


ffmpeg -y -i C:/xampp/htdocs/laravel/textToSpeech2/storage/app/public/image2video/a.jpg -loop 3 -t 3000 -s 1920x1080 -threads 12 C:/xampp/htdocs/laravel/textToSpeech2/storage/app/image2videoConverted/output.mp4



It converts the image to mp4 format and stores it at the given location, but the video length is 0 seconds.
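One plausible explanation: -loop is an input option of ffmpeg's image2 demuxer, so it must appear before -i; placed after the input as in the generated command, it has no effect, and a single still frame yields an essentially zero-length video. A hedged sketch of a command that should produce a 3-second clip (file names shortened from the question; the command string is only echoed here, not run):

```shell
# -loop 1 must precede -i so the single image is looped as an input;
# -t 3 then caps the output at 3 seconds.
CMD='ffmpeg -y -loop 1 -i a.jpg -t 3 -s 1920x1080 -c:v libx264 -pix_fmt yuv420p output.mp4'
echo "$CMD"
```

In the Laravel wrapper that would presumably mean passing -loop as an input option (e.g. via addFilter before the input is read) rather than as an output filter, and using -t 3 instead of -t 3000 if a 3-second clip is intended.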


-
How to batch process a series of video files with powershell and other-transcode/ffmpeg
7 June 2022, by DarkDiamond
TL;DR


What did I do wrong in the following PowerShell script? It does not work as expected.



I am recording some of my lectures at university with a photo camera. This works pretty well, although I have to split a single lecture into three or four parts because the camera can only record 29 minutes of video in one take. I know this is a common issue related to a licensing problem: most photo cameras simply don't have the right license to record longer videos. But it leaves me with the problem that I later have to edit the files back together after doing some post-processing on them.


With the camera I produce up to four video files of around 3.5 GB each, which is way too big to be of any use, because our IT department understandably doesn't want to host that much data; I produce around 22 GB of video material each week.


Some time ago I came across a very useful tool called "other-video-transcoding" by Don Melton over on GitHub, written in Ruby, that allows me to compress the files to a reasonable size without any visible loss. In addition, I crop the videos to remove the parts of each frame that show neither the board nor a place where my professor stands, which decreases the file size even further and protects privacy by cutting out most of the students.


As the tools are accessible via the command line, they are relatively easy to configure and don't cost additional computational power to render a GUI, so I can edit one of the 29-minute clips in less than 10 minutes.


Now I wanted to optimize my workflow by writing a PowerShell script that takes only the parameters for what to crop and which files to work on, and then does the rest on its own, so I can start the script and do something else while my laptop renders the new files.


So far I have the following:


$video_path = Get-ChildItem ..\ -Directory | findstr "SoSe"

Get-ChildItem $video_path -name | findstr ".MP4" | Out-File temp.txt -Append
Get-Content temp.txt | ForEach-Object {"file " + $_} >> .\files.txt

Get-ChildItem $video_path |
ForEach-Object {
    other-transcode --hevc --mp4 --target 3000 --crop 1920:780:0:0 $_.FullName
}

#other-transcode --hevc --mp4 --crop 1920:720:60:0 ..\SoSe22_Theo1_videos_v14_RAW\
ffmpeg -f concat -i files.txt -c copy merged.mp4
Remove-Item .\temp.txt



but it does not quite do what I expect it to do.
This is my file system:


sciebo/
└── SoSe22_Theo1_videos/
    ├── SoSe22_Theo1_videos_v16/
    │   ├── SoSe22_Theo1_videos_v16_KOMPR/
    │   │   ├── C0001.mp4
    │   │   ├── C0002.mp4
    │   │   ├── C0003.mp4
    │   │   ├── C0004.mp4
    │   │   ├── temp.txt
    │   │   ├── files.txt
    │   │   └── merged.mp4
    │   └── SoSe22_Theo1_videos_v16_RAW/
    │       ├── C0001.mp4
    │       ├── C0002.mp4
    │       ├── C0003.mp4
    │       └── C0004.mp4
    └── SoSe22_Theo1_videos_v17/
        ├── SoSe22_Theo1_videos_v17_KOMPR
        └── SoSe22_Theo1_videos_v17_RAW/
            ├── C0006.mp4
            ├── C0007.mp4
            ├── C0008.mp4
            └── C0009.mp4



where the 16th lecture is already processed and the 17th is not. I always have the raw video data in the folders ending in RAW and the edited/compressed output files in the ones ending in KOMPR. Note that the video files in the KOMPR folder are the output files of the other-transcode tool.

The real work happens in the line where it says


other-transcode --hevc --mp4 --target 3000 --crop 1920:780:0:0 $_.FullName



and in the line


ffmpeg -f concat -i files.txt -c copy merged.mp4



where I concat the output files into the final version I can upload to our online learning platform.
What is wrong with my script? In the end I'd like to pass the --crop parameter just to my script, but that is not the primary problem.


A little information on the transcoding script, so you don't have to look into the documentation:

As its last argument the tool takes the location of the video files to work on, as a relative or absolute file path. The output is placed in the folder the script is called from, so if I cd into one of the KOMPR directories and then call

other-transcode --mp4 ../SoSe22_Theo1_videos_v16_RAW/C0001.mp4



a new file C0001.mp4 is created in the KOMPR directory, and the transcoded video together with the original audio is written to that new video file.
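For the concat step, a sketch of the list format the ffmpeg concat demuxer expects may help: one "file '<name>'" line per clip, with relative paths resolved against the location of the list file, not the shell's working directory. Note also that Out-File -Append in the script means temp.txt and files.txt keep growing across runs, so stale entries from earlier lectures can end up in the list. File names below are taken from the question's v17 RAW folder, and the ffmpeg call is echoed rather than run:

```shell
# Build a concat list in the format the concat demuxer expects:
# one "file '<path>'" line per clip, in playback order.
cat > files.txt <<'EOF'
file 'C0006.mp4'
file 'C0007.mp4'
file 'C0008.mp4'
file 'C0009.mp4'
EOF

# Print the concat invocation rather than executing it; with -c copy
# the clips are joined without re-encoding.
echo ffmpeg -f concat -i files.txt -c copy merged.mp4
```

Regenerating files.txt from scratch on each run (instead of appending) and running the concat step from the directory containing the listed clips would make the script's behavior reproducible.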