Other articles (38)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.

  • Adding notes and captions to images

    7 February 2011

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
    Editing when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (6698)

  • Issues with video frame dropout using Accord.NET VideoFileWriter and FFMPEG

    9 January 2018, by David

    I am testing out writing video files using the Accord.Video library. I have a WPF project created in Visual Studio 2017, and I have installed Accord.Video.FFMPEG and Accord.Video.VFW (plus their dependencies) using NuGet.

    I have created a very simple video to test basic file output. However, I am running into some issues. My goal is to be able to output videos with a variable frame rate, because in the future I will be using this code to input images from a webcam device that will then be saved to a video file, and video from webcams typically has variable frame rates.

    For now, in this example, I am not inputting video from a webcam; instead I am generating a simple "moving box" image and outputting the frames to a video file. The box changes color every 20 frames: red, green, blue, yellow, and finally white. I also set the frame rate to 20 fps, so the 100 frames should give a 5-second video with a color change every second.

    When I use Accord.Video.VFW, the frame rate is correctly set, and all the frames are correctly outputted to the video file. The resulting video looks like this (see the YouTube link): https://youtu.be/K8E9O7bJIbg

    This is just a reference, however. I don't intend to use Accord.Video.VFW because it outputs uncompressed data to an AVI file and doesn't support variable frame rates. I would like to use Accord.Video.FFMPEG because it is supposed to support variable frame rates.

    When I attempt to use the Accord.Video.FFMPEG library, however, the resulting video does not look how I would expect. Here is a YouTube link: https://youtu.be/cW19yQFUsLI

    As you can see, in that example the box stays the first color for longer than the other colors, and it never reaches the final color (white). When I inspect the video file, not all 100 frames were written to it; there are typically 69 or 73 frames, and the expected frame rate and duration obviously do not match up.
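
    (For reference, one way to confirm how many frames, what average frame rate and what duration actually ended up in a file is ffprobe, which ships with ffmpeg; this is a generic check from the ffmpeg side, not part of the Accord library, and assumes an ffprobe binary is available:)

    ffprobe -v error -select_streams v:0 -count_frames -show_entries stream=nb_read_frames,avg_frame_rate,duration -of default=noprint_wrappers=1 test.mp4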

    Here is the code that generates both these videos:

    public MainWindow()
    {
       InitializeComponent();

       Accord.Video.VFW.AVIWriter avi_writer = new Accord.Video.VFW.AVIWriter();
       avi_writer.FrameRate = 20;
       avi_writer.Open("test2.avi", 640, 480);

       Accord.Video.FFMPEG.VideoFileWriter k = new Accord.Video.FFMPEG.VideoFileWriter();
       k.FrameRate = 20;
       k.Width = 640;
       k.Height = 480;
       k.Open("test.mp4");
       for (int i = 0; i < 100; i++)
       {
           TimeSpan t = new TimeSpan(0, 0, 0, 0, 50 * i);
           var b = new System.Drawing.Bitmap(640, 480);
           var g = Graphics.FromImage(b);
           var br = System.Drawing.Brushes.Blue;
           if (t.TotalMilliseconds < 1000)
               br = System.Drawing.Brushes.Red;
           else if (t.TotalMilliseconds < 2000)
               br = System.Drawing.Brushes.Green;
           else if (t.TotalMilliseconds < 3000)
               br = System.Drawing.Brushes.Blue;
           else if (t.TotalMilliseconds < 4000)
               br = System.Drawing.Brushes.Yellow;
           else
               br = System.Drawing.Brushes.White;

           g.FillRectangle(br, 50 + i, 50, 100, 100);
           System.Console.WriteLine("Frame: " + (i + 1).ToString() + ", Millis: " + t.TotalMilliseconds.ToString());

           #region This is the code in question

           k.WriteVideoFrame(b, t);
           avi_writer.AddFrame(b);

           #endregion
       }

       avi_writer.Close();
       k.Close();
       System.Console.WriteLine("Finished writing video");
    }

    I have tried changing a few things under the assumption that maybe the "WriteVideoFrame" function isn't able to finish in time, and that I need to slow the program down so it can complete. Under that assumption, I have replaced the "WriteVideoFrame" call with the following code:

    Task taskA = new Task(() => k.WriteVideoFrame(b, t));
    taskA.Start();
    taskA.Wait();

    And I have tried the following code:

    Task.WaitAll(
       Task.Run( () =>
       {
           lock(syncObj)
           {
               k.WriteVideoFrame(b, t);
           }
       }
    ));

    And even just a standard call where I don't specify a timestamp:

    k.WriteVideoFrame(b);

    None of these work. They all result in something similar.

    Any suggestions on getting the WriteVideoFrame function of the Accord.Video.FFMPEG.VideoFileWriter class to work?

    Thanks for any and all help!

    [edits below]

    I have done some more investigating. I still haven’t found a good solution, but here is what I have found so far. After declaring my VideoFileWriter object, I have tried setting up some options for the video.

    When I use an H264 codec with the following options, it correctly saves 100 frames at a frame rate of 20 fps; however, media players (both VLC and Windows Media Player) end up playing a 10-second video instead of a 5-second one. Essentially, they seem to play it at half speed. Here is the code that gives that result:

    k.VideoCodec = Accord.Video.FFMPEG.VideoCodec.H264;
    k.VideoOptions["crf"] = "18";
    k.VideoOptions["preset"] = "veryfast";
    k.VideoOptions["tune"] = "zerolatency";
    k.VideoOptions["x264opts"] = "no-mbtree:sliced-threads:sync-lookahead=0";

    Additionally, if I use an Mpeg4 codec, I get the same "half-speed" result:

    k.VideoCodec = Accord.Video.FFMPEG.VideoCodec.Mpeg4;

    However, if I use a WMV codec, it correctly results in 100 frames at 20 fps and a 5-second video that both media players play correctly:

    k.VideoCodec = Accord.Video.FFMPEG.VideoCodec.Wmv1;

    Although this is good news, it still doesn't solve the problem, because WMV doesn't support variable frame rates. It also still doesn't answer the question of why the problem is happening in the first place.

    As always, any help would be appreciated!

  • How to use FFMPEG on Python/Windows 10 with Pipe for screen recording?

    20 September 2020, by Trmotta

    I'd like to record the screen with ffmpeg, as it seems to be the only tool out there that can record a region of the screen along with the mouse cursor.

    The following code was adapted from "i want to display mouse pointer in my recording", but it doesn't work on a Windows 10 (x64) setup (using Python 3.6).

    #!/usr/bin/env python3

# ffmpeg -y -pix_fmt bgr0 -f avfoundation -r 20 -t 10 -i 1 -vf scale=w=3840:h=2160 -f rawvideo /dev/null

import sys
import cv2
import time
import subprocess
import numpy as np

w,h = 100, 100

def ffmpegGrab():
    """Generator to read frames from ffmpeg subprocess"""

    #ffmpeg -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 640x480 -show_region 1 -i desktop output.mkv #CODE THAT ACTUALLY WORKS WITH FFMPEG CLI

    cmd = 'D:/Downloads/ffmpeg-20200831-4a11a6f-win64-static/ffmpeg-20200831-4a11a6f-win64-static/bin/ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -show_region 1 -i desktop -f image2pipe, -pix_fmt bgr24 -vcodec rawvideo -an -sn' 

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
    #out, err = proc.communicate()
    while True:
        frame = proc.stdout.read(w*h*3)
        yield np.frombuffer(frame, dtype=np.uint8).reshape((h,w,3))

# Get frame generator
gen = ffmpegGrab()

# Get start time
start = time.time()

# Read video frames from ffmpeg in loop
nFrames = 0
while True:
    # Read next frame from ffmpeg
    frame = next(gen)
    nFrames += 1

    cv2.imshow('screenshot', frame)

    if cv2.waitKey(1) == ord("q"):
        break

    fps = nFrames/(time.time()-start)
    print(f'FPS: {fps}')


cv2.destroyAllWindows()
out.release()

    By using 'cmd' as stated above, I get the following error:

    b"ffmpeg version git-2020-08-31-4a11a6f Copyright (c) 2000-2020 the FFmpeg developers\r\n  built with gcc 10.2.1 (GCC) 20200805\r\n  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e --enable-libsvtav1 --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf\r\n  libavutil      56. 58.100 / 56. 58.100\r\n  libavcodec     58.101.101 / 58.101.101\r\n  libavformat    58. 51.101 / 58. 51.101\r\n  libavdevice    58. 11.101 / 58. 11.101\r\n  libavfilter     7. 87.100 /  7. 87.100\r\n  libswscale      5.  8.100 /  5.  8.100\r\n  libswresample   3.  8.100 /  3.  8.100\r\n  libpostproc    55.  8.100 / 55.  8.100\r\nTrailing option(s) found in the command: may be ignored.\r\n[gdigrab @ 0000017ab857f100] Capturing whole desktop as 100x100x32 at (10,20)\r\nInput #0, gdigrab, from 'desktop':\r\n  Duration: N/A, start: 1599021857.538752, bitrate: 9612 kb/s\r\n    Stream #0:0: Video: bmp, bgra, 100x100, 9612 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc\r\n**At least one output file must be specified**\r\n"

    This is the content of proc's output (and also of proc.communicate()). The program crashes right after, when trying to reshape this message into an image of size 100x100.

    I do not want an output file. I need to use a Python subprocess with a pipe in order to deliver the screen frames directly to my Python code, with no file I/O at all.

    If I try the following:

    cmd = 'D:/Downloads/ffmpeg-20200831-4a11a6f-win64-static/ffmpeg-20200831-4a11a6f-win64-static/bin/ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -i desktop -pix_fmt bgr24 -vcodec rawvideo -an -sn -f image2pipe'

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)

    Then 'frame', inside the 'while True' loop, is filled with b''.

    I also tried the following libraries without success, as I could not find how to capture the mouse cursor, or how to capture the screen at all, with them: https://github.com/abhiTronix/vidgear, https://github.com/kkroening/ffmpeg-python

    What am I missing?
    Thank you.
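
    (For reference, a minimal sketch of a pipe-friendly variant of the capture command, assuming the same ffmpeg build and capture region as in the question: gdigrab still needs an explicit output, which is what the "At least one output file must be specified" error is about, so the raw BGR frames are sent to stdout as 'pipe:1', and stderr is kept separate so ffmpeg's log text cannot end up in the frame buffer.)

import subprocess
import numpy as np

w, h = 100, 100
cmd = [
    'ffmpeg',                      # or the full path to ffmpeg.exe used above
    '-f', 'gdigrab',
    '-framerate', '30',
    '-offset_x', '10', '-offset_y', '20',
    '-video_size', f'{w}x{h}',
    '-show_region', '1',
    '-i', 'desktop',
    '-pix_fmt', 'bgr24',
    '-f', 'rawvideo',              # raw frames only, no container
    'pipe:1',                      # write them to stdout instead of a file
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
raw = proc.stdout.read(w * h * 3)  # one 100x100 BGR frame
frame = np.frombuffer(raw, dtype=np.uint8).reshape((h, w, 3))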

  • How to overlay multiple landscape regions from a single input onto a new portrait video? FFmpeg

    27 August 2023, by 3V1LXD

    I have an Electron program that selects multiple regions of a landscape video and lets you rearrange them on a portrait canvas. I'm having trouble building the proper ffmpeg command to create the video. I have this somewhat working: I can export 2 layers, but I can't export if only 1 layer or 3 or more layers are selected.

    2 regions of video selected

    layers [
  { top: 0, left: 658, width: 576, height: 1080 },
  { top: 262, left: 0, width: 576, height: 324 }
]
newPositions [
  { top: 0, left: 0, width: 576, height: 1080 },
  { top: 0, left: 0, width: 576, height: 324 }
]
filtergraph [0]crop=576:1080:658:0,scale=576:1080[v0];[0]crop=576:324:0:262,scale=576:324[v1];[v0][v1]overlay=0:0:0:0[out]

No Error export successful

    1 region selected

    layers [ { top: 0, left: 650, width: 576, height: 1080 } ]
newPositions [ { top: 0, left: 0, width: 576, height: 1080 } ]
filtergraph [0]crop=576:1080:650:0,scale=576:1080[v0];[v0]overlay=0:0[out]

FFmpeg error: [fc#0 @ 000001dd3b6db0c0] Cannot find a matching stream for unlabeled input pad overlay
Error initializing complex filters: Invalid argument

    3 regions of video selected

    layers [
  { top: 0, left: 641, width: 576, height: 1080 },
  { top: 250, left: 0, width: 576, height: 324 },
  { top: 756, left: 0, width: 576, height: 324 }
]
newPositions [
  { top: 0, left: 0, width: 576, height: 1080 },
  { top: 0, left: 0, width: 576, height: 324 },
  { top: 756, left: 0, width: 576, height: 324 }
]
filtergraph [0]crop=576:1080:641:0,scale=576:1080[v0];[0]crop=576:324:0:250,scale=576:324[v1];[0]crop=576:324:0:756,scale=576:324[v2];[v0][v1][v2]overlay=0:0:0:0:0:756[out]

FFmpeg error: [AVFilterGraph @ 0000018faf2189c0] More input link labels specified for filter 'overlay' than it has inputs: 3 > 2
[AVFilterGraph @ 0000018faf2189c0] Error linking filters

FFmpeg error: Failed to set value '[0]crop=576:1080:698:0,scale=576:1080[v0];[0]crop=576:324:0:264,scale=576:324[v1];[0]crop=576:324:0:756,scale=576:324[v2];[v0][v1][v2]overlay=0:0:0:0:0:0[out]' for option 'filter_complex': Invalid argument
Error parsing global options: Invalid argument

    I can't figure out how to construct the proper overlay command. Here is the JS code I'm using in my Electron app.

    ipcMain.handle('export-video', async (_event, args) => {
  const { videoFile, outputName, layers, newPositions } = args;
  const ffmpegPath = path.join(__dirname, 'bin', 'ffmpeg');
  const outputDir = checkOutputDir();
  
  // use same video for each layer as input
  // crop, scale, and position each layer
  // overlay each layer on top of each other

  // export video
  console.log('layers', layers);
  console.log('newPositions', newPositions);

  let filtergraph = '';

  for (let i = 0; i < layers.length; i++) {
    const { top, left, width, height } = layers[i];
    const { width: newWidth, height: newHeight } = newPositions[i];
    const filter = `[0]crop=${width}:${height}:${left}:${top},scale=${newWidth}:${newHeight}[v${i}];`;
    filtergraph += filter;
  }

  for (let i = 0; i < layers.length; i++) {
    const filter = `[v${i}]`;
    filtergraph += filter;
  }

  filtergraph += `overlay=`;
  for (let i = 0; i < layers.length; i++) {
    const { top: newTop, left: newLeft } = newPositions[i];
    const overlay = `${newLeft}:${newTop}:`;
    filtergraph += overlay;
  }

  filtergraph = filtergraph.slice(0, -1); // remove the trailing ':'
  filtergraph += `[out]`;
  
  console.log('filtergraph', filtergraph);

  const ffmpeg = spawn(ffmpegPath, [
    '-i', videoFile,
    '-filter_complex', filtergraph,
    '-map', '[out]',
    '-c:v', 'libx264',
    '-preset', 'ultrafast',
    '-crf', '18',
    '-y',
    path.join(outputDir, `${outputName}`)
  ]);  

  ffmpeg.stdout.on('data', (data) => {
    console.log(`FFmpeg output: ${data}`);
  });

  ffmpeg.stderr.on('data', (data) => {
    console.error(`FFmpeg error: ${data}`);
  });

  ffmpeg.on('close', (code) => {
    console.log(`FFmpeg process exited with code ${code}`);
    // event.reply('ffmpeg-export-done'); // Notify the renderer process
  });
});

    Any advice would be helpful; the docs are confusing. Thanks.
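
    (A note on the two failing cases: ffmpeg's overlay filter takes exactly two inputs, a main stream and one overlay stream. With a single region there is no second stream to put under [v0], and with three regions the labels [v0][v1][v2] cannot all feed one overlay; the overlays have to be chained pairwise, along the lines of the sketch below, where x1:y1 and x2:y2 stand for the per-layer positions. The single-region case can simply be mapped from [v0] without any overlay at all.)

[v0][v1]overlay=x1:y1[o0];[o0][v2]overlay=x2:y2[out]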

    Edit
    I'm getting closer with this. Output:

    layers [
  { top: 0, left: 677, width: 576, height: 1080 },
  { top: 240, left: 0, width: 576, height: 324 }
]
newPositions [
  { top: 0, left: 0, width: 576, height: 1080 },
  { top: 0, left: 0, width: 576, height: 324 }
]
filtergraph [0]crop=576:1080:677:0,scale=576:1080[v0];[0]crop=576:324:0:240,scale=576:324[v1];[0][v0]overlay=0:0[o0];[o0][v1]overlay=0:0[o1]

    ipcMain.handle('export-video', async (_event, args) => {
  const { videoFile, outputName, layers, newPositions } = args;
  const ffmpegPath = path.join(__dirname, 'bin', 'ffmpeg');
  const outputDir = checkOutputDir();
  
  // use same video for each layer as input
  // crop, scale, and position each layer
  // overlay each layer on top of each other

  // export video
  console.log('layers', layers);
  console.log('newPositions', newPositions);

  let filtergraph = '';

  for (let i = 0; i < layers.length; i++) {
    const { top, left, width, height } = layers[i];
    const { width: newWidth, height: newHeight } = newPositions[i];
    const filter = `[0]crop=${width}:${height}:${left}:${top},scale=${newWidth}:${newHeight}[v${i}];`;
    filtergraph += filter;
  }

  for (let i = 0; i < layers.length; i++) {
    if (i === 0) {
      filtergraph += `[0][v${i}]overlay=`;
    } else {
      filtergraph += `[o${i-1}][v${i}]overlay=`;
    }
    const { top: newTop, left: newLeft } = newPositions[i];
    let overlay = '';
    if (i !== layers.length - 1) {
      overlay = `${newLeft}:${newTop}[o${i}];`;
    } else {
      overlay = `${newLeft}:${newTop};`;
    }
    filtergraph += overlay;
  }

  filtergraph = filtergraph.slice(0, -1); // remove the trailing ';'
  filtergraph += `[o${layers.length-1}]`;
  
  console.log('filtergraph', filtergraph);

  const ffmpeg = spawn(ffmpegPath, [
    '-i', videoFile,
    '-filter_complex', filtergraph,
    '-map', `[o${layers.length-1}]`,
    '-c:v', 'libx264',
    '-preset', 'ultrafast',
    '-crf', '18',
    '-y',
    path.join(outputDir, `${outputName}`)
  ]);  

  ffmpeg.stdout.on('data', (data) => {
    console.log(`FFmpeg output: ${data}`);
  });

  ffmpeg.stderr.on('data', (data) => {
    console.error(`FFmpeg error: ${data}`);
  });

  ffmpeg.on('close', (code) => {
    console.log(`FFmpeg process exited with code ${code}`);
    // event.reply('ffmpeg-export-done'); // Notify the renderer process
  });
});

    The problem I'm having now is that it's overlaying the regions over the original input and keeping the landscape dimensions instead of producing a portrait video.
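
    (One way to end up with a portrait canvas instead of the original landscape frame, sketched by hand for the two-region example above rather than taken from the project, is to overlay onto a generated blank portrait base instead of onto [0]. The 576x1080 size, crop values and 0:0 positions are the ones from the log above, and shortest=1 makes the otherwise infinite color source stop with the video:)

color=black:s=576x1080[base];[0]crop=576:1080:677:0,scale=576:1080[v0];[0]crop=576:324:0:240,scale=576:324[v1];[base][v0]overlay=0:0:shortest=1[o0];[o0][v1]overlay=0:0[o1]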