
Other articles (4)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

  • Automated installation script for MediaSPIP

    25 April 2011

    To overcome the difficulties caused mainly by installing server-side software dependencies, an "all-in-one" installation script written in bash was created to simplify this step on a server running a compatible Linux distribution.
    To use it, you need SSH access to your server and a root account, which the script uses to install the dependencies. Contact your hosting provider if you do not have these.
    Documentation on how to use this installation script is available here.
    The code of this (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (152)

  • ffmpeg - concatenate several videos with xfade transition

    8 May 2021, by ilim

    I have N videos, and I'm using ffmpeg to concatenate them with xfade transitions between them. The video files are named as positive integers representing their order in the concatenated output, and none of them have any audio. In fact, each file just includes a different image with no animation that stays on screen all through that video's duration.

    I'm using a simple command I found in a response to an existing question to concatenate the first 2 videos:

    ffmpeg -i 1.mp4 -i 2.mp4 -y \
       -filter_complex "xfade=transition=fade:offset=3.5:duration=0.5" \
       1-2.mp4

    I planned to employ that same command for all the videos, each time appending a single video to the cumulative one produced so far, and producing an intermediary output to be used later.
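
    A rough sketch of that plan might look like the following. The use of ffprobe to measure the cumulative duration and the offset formula (cumulative duration minus the fade length) are assumptions rather than details from the post, although they happen to match the 3.5 and 11.5 offsets used here; this is a sketch of the intended loop, not a tested fix for the problem described.

    #!/usr/bin/env bash
    # Iteratively append clip i to the cumulative file with an xfade transition.
    # Assumptions: clips are named 1.mp4 .. N.mp4 (the post's convention) and the
    # fade length is 0.5 s (as in the commands shown).
    set -euo pipefail

    N=5        # assumed number of clips
    FADE=0.5
    cur="1.mp4"

    for ((i = 2; i <= N; i++)); do
      # Duration (in seconds) of the cumulative file produced so far.
      dur=$(ffprobe -v error -show_entries format=duration \
            -of default=noprint_wrappers=1:nokey=1 "$cur")
      # Start the transition FADE seconds before the end of the cumulative file.
      offset=$(echo "$dur - $FADE" | bc)
      out="1-$i.mp4"
      ffmpeg -y -i "$cur" -i "$i.mp4" \
        -filter_complex "xfade=transition=fade:offset=$offset:duration=$FADE" \
        "$out"
      cur="$out"
    done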

    Specifically, I first concatenated files 1.mp4 and 2.mp4 and recorded the result into 1-2.mp4, which was produced correctly. The videos were 4 and 8 seconds long, respectively, and the resulting 1-2.mp4 was 12 seconds long, with the transition occurring when it should.

    The problem started when I tried concatenating 1-2.mp4 and 3.mp4. I used the following command to generate a concatenation of videos [1-3].

    ffmpeg -i 1-2.mp4 -i 3.mp4 -y \
       -filter_complex "xfade=transition=fade:offset=11.5:duration=0.5" \
       1-3.mp4

    The video produced seems to be an exact copy of 1-2.mp4, and the contents of 3.mp4 are not present at all in the resulting 1-3.mp4. Video file 3.mp4 was 3 seconds long, but the resulting 1-3.mp4 was 12 seconds long, just like 1-2.mp4.

    Apart from the input and output file names and the offset value, both commands are the same. I suspected I had set the transition's offset parameter incorrectly, but the first two files were merged successfully.

    I am at a loss as to what I'm doing incorrectly here. Is there a particular caveat of the xfade filter that becomes a problem when it is used on two files, one of which has already been through that filter?

    Any suggestions as to how I might debug or fix this behavior? I'd also welcome any alternative means of concatenating these videos (i.e., with some transition effects) in a simple fashion.

    I'm planning to use these commands in a Python script, so I'd appreciate any alternative solution not involving any gigantic commands or parameters.

    Just to be thorough, I should mention that I have ffmpeg version 4.3.1 installed via snap.

  • FFMPEG Recording Audio from Adafruit I2S MEMS Microphone Having Issues

    24 June 2021, by Turkey

    I am attempting to use FFMPEG to record and stream video off a Raspberry Pi Zero using the Pi camera and the Adafruit I2S MEMS Microphone. I have successfully got video recording working, but I am having trouble getting the audio added on correctly.

    I followed the directions at https://learn.adafruit.com/adafruit-i2s-mems-microphone-breakout/raspberry-pi-wiring-test and, using their command arecord -D dmic_sv -c2 -r 44100 -f S32_LE -t wav -V mono -v file.wav, I do get a correct audio recording with no issues.

    However, with my FFMPEG command

    ffmpeg -f alsa -ar 44100 -ac 2 -c:a pcm_s32le -i default:CARD=sndrpii2scard \
       -vcodec h264 -framerate 30 -i - \
       -pix_fmt yuv420p -preset ultrafast -crf 0 -vcodec copy -codec:a aac \
       -f segment -segment_time 1800 -segment_start_number 1 /RPICRecord%04d.mkv

    (the last bit, starting at -f segment, varies depending on recording vs streaming) I get audio that just has a blip and then sounds like it's resetting. The recorded video also doesn't seem to play correctly locally, although it does on YouTube. Testing with streaming the video and audio does the same, but produces a consistent pattern in the audio blips. In the streamed video I also snap my fingers 5 or so times, but you only ever hear 2, so it's for sure not recording everything.

    My limited knowledge of FFMPEG is failing me here; I don't understand why this happens or how to debug it further to work towards a fix. Let me know if there is any additional info or logs that would be helpful.
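
    One way to narrow this down might be to capture audio only from the same ALSA device with ffmpeg and compare the result against the arecord file that is known to be good; the ten-second limit and the output name below are placeholders, not values from the post.

    # Hypothetical audio-only test: same ALSA input device as the command above,
    # no video involved, writing ten seconds of 32-bit PCM to a WAV file.
    ffmpeg -f alsa -ar 44100 -ac 2 -i default:CARD=sndrpii2scard \
           -t 10 -c:a pcm_s32le alsa_only_test.wav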

  • ffmpeg: improving MP4 to webm/ogg conversions

    14 July 2017, by Randy

    (Edited to include some of the things I’ve tried)

    I’m a musician and occasional web coder. I’ve been using video editing software (an old version of Roxio Videowave from 2011) to build promotional videos from clips of some of my performances, and I’d like to put some of them on my own web pages in HTML5 video format. That currently means I need MP4, WEBM, and OGG conversions. Fortunately the editing software churns out some very nice MP4 (H264) files and has plenty of options for doing so. I purposely made the output size about 2X the likely display size, in hopes of offering more detail for better conversions. Specifically, the video output was AVC/H.264, 800 x 450, 30 fps, variable bit rate, with a baseline of 600000 (that was the default for this setting anyway).

    Now I’m nowhere near an expert at this stuff, and I probably left out some important data. But bottom line, the resulting MP4 looked very good. Unfortunately, putting it on my own web page means at least converting to the WEBM and OGG formats. It would be nice if all browsers just supported MP4, but then there would be licensing fees, so conversions are needed. Sadly, I’ve been wasting days now trying to do this with ffmpeg. It’s easy to do; it’s doing it WELL that is a mystery to me. Just letting ffmpeg work using its defaults (meaning I just specify an input and output file) results in pretty terrible video. But I’ve also tried most of the available settings for better quality, and the resulting conversions are nowhere near as good as YouTube’s.

    Based on the info about my original MP4 file, can someone suggest some better settings for ffmpeg conversions to WEBM and OGG? Am I going about this all wrong? The best I’ve done so far was with a string like this, which specified a high quality and a fairly robust bit rate...

    ffmpeg -i input-file.mp4 -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis output-file.webm

    That was much better than the default settings, but still nowhere near the quality of YOUTUBE conversions. In the resulting WEBM video, you can plainly see the picture degrade and then snap back into focus every few seconds when a "key frame" comes up. These artifacts should not be so obvious. Thanks for any help.
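
    Since an OGG version is also needed, a possible Theora/Vorbis counterpart of the WEBM command above might look like the following; the quality values are assumptions, not settings from the post.

    # Hypothetical Ogg counterpart of the WEBM command above.
    # libtheora takes -q:v on a 0-10 scale; libvorbis takes -q:a on a 0-10 scale.
    ffmpeg -i input-file.mp4 -c:v libtheora -q:v 7 -c:a libvorbis -q:a 4 output-file.ogv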