Advanced search

Media (29)

Keyword: - Tags -/Music

Other articles (36)

  • MediaSPIP Core: Configuration

    9 November 2010

    MediaSPIP Core provides three different configuration pages by default (these pages rely on the CFG configuration plugin): a page for the general configuration of the skeleton; a page for the configuration of the site's home page; a page for the configuration of sections.
    It also provides an additional page that only appears when certain plugins are enabled, allowing control over their specific display options and features (...)

  • The plugin: Podcasts

    14 July 2010

    The problem of podcasting is, once again, a problem that reveals how data transport is standardized on the Internet.
    Two interesting formats exist: the one developed by Apple, strongly oriented toward iTunes, whose spec is here; and the "Media RSS Module" format, which is more "open" and backed notably by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites for publishing documents of all types.
    It creates "media" items, meaning: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image, or text; and only one document can be linked to a "media" article.

On other sites (6193)

  • How do I compose three overlapping videos w/audio in ffmpeg?

    10 April 2021, by Idan Gazit

    I have three videos: let's call them intro, recording, and outro. My ultimate goal is to stitch them together into a single video, with the intro and outro overlapping the start and end of the recording.

    Both intro and outro have alpha (ProRes 4444) and a "wipe" transition, so when overlaid they must sit on top of the recording. The recording is h264, and ultimately I'm encoding for YouTube with these recommended settings.

    I've figured out how to make this work correctly for intro + recording:

    $ ffmpeg \
  -i intro.mov \
  -i recording.mp4 \
  -filter_complex \
  "[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
   [1:a]adelay=delays=10s:all=1[ra]; \
   [rv][0:v]overlay[v];[0:a][ra]amix[a]" \
  -map "[a]" -map "[v]" \
  -movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
  out.mp4 -y

    However, I can't use the tpad trick for the outro, because it would render black frames over everything.

    I've tried various iterations with setpts/asetpts, as well as passing -itsoffset for the input, but haven't come up with a solution that works correctly for both video and audio. The attempt below tries to start the outro 16 seconds into the recording (10s of start padding + 16s of recording is how I got to setpts=PTS+26/TB), but it doesn't work correctly: I get both intro and outro audio from the first frame, and the recording audio cuts out when the outro overlay begins:

    $ ffmpeg \
  -i intro.mov \
  -i recording.mp4 \
  -i outro.mov \
  -filter_complex \
  "[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
   [1:a]adelay=delays=10s:all=1[ra]; \
   [2:v]setpts=PTS+26/TB[outv]; \
   [2:a]asetpts=PTS+26/TB[outa]; \
   [rv][0:v]overlay[v4]; \
   [0:a][ra]amix[a4]; \
   [v4][outv]overlay[v]; \
   [a4][outa]amix[a]" \
  -map "[a]" -map "[v]" \
  -movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
  out.mp4 -y
    I think the right solution lies in using setpts correctly, but I haven't been able to fully wrap my brain around it. Or maybe I'm overcomplicating things and there's an easier approach?
    In the nice-to-have realm, I'd love to be able to specify the start of the outro relative to the end of the recording. I will be doing this to a bunch of recordings of varying lengths. It would be nice to have one command to invoke on everything rather than figuring out a specific timestamp for each one.
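
    One direction that may address both points at once (a sketch, untested; the clip names, the 10-second pad, and the 2-second wipe overlap are carried over or assumed from the commands above): keep setpts for the outro's video, but delay its audio with adelay instead of asetpts. asetpts only rewrites timestamps without inserting any actual silence, so amix still hears the outro from its first sample, while adelay pads real silence in front. A single three-input amix replaces the cascaded pair, and the hard-coded 26 can be derived from the recording's duration with ffprobe, so one command covers recordings of any length:

```shell
# Sketch: derive the outro offset from the recording's length.
# 10s of intro padding + full recording - 2s of overlap (the 2s wipe
# length is an assumption; adjust to taste).
dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 recording.mp4)
offset=$(echo "10 + $dur - 2" | bc)
# (if your build rejects fractional delays, round $offset to an integer)

ffmpeg \
  -i intro.mov \
  -i recording.mp4 \
  -i outro.mov \
  -filter_complex \
  "[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
   [1:a]adelay=delays=10s:all=1[ra]; \
   [2:v]setpts=PTS+$offset/TB[outv]; \
   [2:a]adelay=delays=${offset}s:all=1[outa]; \
   [rv][0:v]overlay[v1]; \
   [v1][outv]overlay[v]; \
   [0:a][ra][outa]amix=inputs=3[a]" \
  -map "[a]" -map "[v]" \
  -movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
  out.mp4 -y
```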
    Thank you!
  • Why is ffmpeg choking on this particular PNG file?

    13 November 2022, by kohloth

    I'm trying to make a video out of four still images with ffmpeg.

    This is the command I'm having trouble getting to work at the moment:

    ffmpeg -y -loop 1 -framerate 24 -t 3 \
-i ./images/title-card.png -loop 1 -framerate 24 -t 4 \
-i ./images/001.png -loop 1 -framerate 24 -t 4 \
-i ./images/002.png -loop 1 -framerate 24 -t 3 \
-i ./images/003.png -loop 1 -framerate 24 -t 4 \
-filter_complex "[0][1][2][3]concat=n=4:v=1:a=0" \
/tmp/silentVideoTest.mp4

    Unfortunately, I get this error:

    [Parsed_concat_0 @ 0x5603df1c4080] Input link in1:v0 parameters (size 1024x1024, SAR 0:1) do not match the corresponding output link in0:v0 parameters (1024x1024, SAR 3937:3937)
[Parsed_concat_0 @ 0x5603df1c4080] Failed to configure output pad on Parsed_concat_0
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #3:0


    I have no idea what this error means. But if I use a different PNG for the first image, it works fine. I understand that there is something about the first PNG that ffmpeg does not like: the SAR, which I believe is some kind of PNG metadata bit or something?

    The problem is:

    • The error is confusing to me.
    • I see no option to set the SAR when exporting images from Krita, Pinta, or Photopea.
    • I do not know how to view the SAR value of an image, so I can't verify that this is really the problem.

    What's more, I've used this command in the past to change an image's SAR (I think), but it seems to only work half the time:

    ffmpeg -i title-card.png -vf setsar=1 title-card-new-sar.png


    I have no idea what the value of setsar should be, so I am using 1.

    I would love it if someone could tell me how to get this to work. In particular, how do I view the SAR of a PNG file?

    Maybe I am naive, but shouldn't ffmpeg just be able to accept PNG images that have the same dimensions and compression and stitch them together without errors? I.e., is there an option that just says "fix the SAR" or "use the SAR of the first image for all images"?

    Edit: Trying it on some different images, having set the SAR with ffmpeg -i ./card.png -vf setsar=1 ./card-new-sar.png, I get a similar error:

    [Parsed_concat_0 @ 0x55e7b6b8b640] Input link in2:v0 parameters (size 1024x1024, SAR 2834:2834) do not match the corresponding output link in0:v0 parameters (1024x1024, SAR 1:1)
[Parsed_concat_0 @ 0x55e7b6b8b640] Failed to configure output pad on Parsed_concat_0
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #26:0


    ffmpeg still seems to complain that the SARs don't match... but surely, as it's a ratio, a SAR of 2834:2834 does match a SAR of 1:1?

    Edit: Tried setting the SAR with ffmpeg -i ./card.png -vf setsar=2834:2834 ./card-new-sar.png, but now the error is (size 1024x1024, SAR 0:1) do not match the corresponding output link in0:v0 parameters (1024x1024, SAR 2834:2834).

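    Two things may help here (a sketch, untested; the filenames are carried over from the command above). First, ffprobe can print the SAR that ffmpeg sees for an image, which answers the "how do I view it" question; for PNGs that value appears to come from the pHYs density chunk that editors write (2834 px/m is roughly 72 dpi, 3937 px/m roughly 100 dpi), which is why visually identical files can disagree. Second, rather than rewriting each PNG, the SARs can be normalized inside the filtergraph itself, so every input reaches concat with an identical value:

```shell
# Print the sample aspect ratio ffmpeg sees for one image:
ffprobe -v error -select_streams v:0 \
  -show_entries stream=sample_aspect_ratio -of csv=p=0 ./images/title-card.png

# Normalize every input's SAR before concat, inside the filtergraph:
ffmpeg -y \
  -loop 1 -framerate 24 -t 3 -i ./images/title-card.png \
  -loop 1 -framerate 24 -t 4 -i ./images/001.png \
  -loop 1 -framerate 24 -t 4 -i ./images/002.png \
  -loop 1 -framerate 24 -t 3 -i ./images/003.png \
  -filter_complex \
  "[0]setsar=1[a];[1]setsar=1[b];[2]setsar=1[c];[3]setsar=1[d]; \
   [a][b][c][d]concat=n=4:v=1:a=0" \
  /tmp/silentVideoTest.mp4
```

    Note that the original command's trailing -loop 1 -framerate 24 -t 4 after 003.png has no input following it, so those options appear to bind to the output (-t 4 would cap the output at four seconds); they are dropped here.
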
  • FFMPEG buffer input to mp4

    2 November 2020, by Michael Joseph Aubry

    How do I properly send a buffer array to FFmpeg?

    The process I am creating looks like this: a Puppeteer session is open, requestAnimationFrame is called inside the browser context, and each frame is sent to Node.js as a base64 string. This happens over and over, because requestAnimationFrame is called recursively, so the frames come through one at a time.

    Before passing it into FFmpeg, I convert the base64 string into a readable stream, because FFmpeg has a limited selection of input types.

    const buf = Buffer.from(base64, "base64");
let readableVideoBuffer = new stream.PassThrough();
readableVideoBuffer.write(buf, "utf8");
readableVideoBuffer.end();


    My goal is to keep the FFmpeg process going until no more bytes are sent through. I want to pass each buffer as a frame and have FFmpeg stitch the frames together into an mp4 video. The output should be a file that stays open for writing until the buffer stream closes.

    Here is the code I have been experimenting with:

    export default (base64: any) => {
  const buf = Buffer.from(base64, "base64");
  let readableVideoBuffer = new stream.PassThrough();
  readableVideoBuffer.write(buf, "utf8");
  readableVideoBuffer.end();

  const childProcess = spawn(ffmpegPath.path, [
    "-f",
    "image2pipe",
    "-r",
    "25",
    "-s",
    "1080x1080",
    "-i",
    "-",
    "-vcodec",
    "libx264",
    "-pix_fmt",
    "yuv420p",
    "-movflags",
    "faststart",
    "-f",
    "mp4",
    "pipe:1"
  ]);

  childProcess.stdout.on("data", (data) =>
    fs.createWriteStream("~/Desktop/test.mp4").write(data)
  );
  childProcess.stderr.on("data", (data) => console.log(data.toString()));
  childProcess.on("close", (code) => {
    console.log(`done! (${code})`);
  });

  readableVideoBuffer.pipe(childProcess.stdin);
};


    I don't understand what is required to fully make this work. I do know that if the input -i - is a dash, that signals the input will be read from childProcess.stdin. If I don't specify an input format like -f image2pipe, the command fails.

    If I specify an output like ~/Desktop/test.mp4, I get an error: File '~/Desktop/test.mp4' already exists. Exiting.

    With the code in the example above, the error I get is: [image2pipe @ 0x7f93f4004400] Could not find codec parameters for stream 0 (Video: none, none, 1080x1080): unknown codec Consider increasing the value for the 'analyzeduration' and 'probesize' options

    So it seems like this version might work if I can figure out how to stop the command from trying to overwrite the video on each buffer sequence, do some sort of passthrough, and keep the write stream open.

    Any ideas?