Other articles (68)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once it is activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. A separate configuration step is therefore not required.

  • Websites built with MediaSPIP

    2 May 2011, by

    This page showcases some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

On other sites (7737)

  • Processing Video Frames in C# (FFMPEG Slow)

    9 September 2020, by julian bechtold

    I am trying to extract frames out of mp4 videos in order to process them.

    


    Specifically, there is a watermark / timestamp within the video image which I want to use to automatically stitch the videos together. The video creation date is not sufficient for this task.

    


    Extracting the text from the video frames with AI is also working fine; that part is not the issue.

    


    However, FFMPEG seems terribly slow. The source video is 1080p / 60fps (roughly 1 GB per 5 minutes of video).

    


    I have tried two methods so far using the Accord.FFMPEG wrapper:

    


public void GetVideoFrames(string path)
{
    using (var vFReader = new VideoFileReader())
    {
        // open video file
        vFReader.Open(path);
        // counter is used to extract every 60th frame (1 frame per second)
        int counter = 0;
        for (int i = 0; i < vFReader.FrameCount; i++)
        {
            counter++;
            if (counter < 60)
            {
                // decode and discard the in-between frames
                _ = vFReader.ReadVideoFrame();
                continue;
            }
            else
            {
                Bitmap frame = vFReader.ReadVideoFrame();
                // Process Bitmap
                counter = 0; // reset so only one frame per second is kept
            }
        }
    }
}


    


    The other attempt:

    


for (int i = 0; i < vFReader.FrameCount; i += 60)
{
    // notice here, I am specifying which exact frame to extract
    Bitmap frame = vFReader.ReadVideoFrame(i);
    // process frame
}


    


    The second method is what I tried first, and it is totally unfeasible. Apparently FFMPEG performs a new seek for each specific frame, so the operation takes longer and longer for each frame processed.
After only 5 frames it already takes roughly 4 seconds to produce one frame.

    


    The first method at least does not seem to suffer from that issue as heavily, but it still takes roughly 2 seconds to yield a frame. At this rate I would be faster processing the video manually.

    


    Is there anything wrong with my approach? I would also rather not have a solution that requires installing third-party libraries separately on the target machine.
So if there are any alternatives I'd be happy to try them out, but it seems literally everyone on Stack Overflow points either to ffmpeg or to OpenCV.
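
    For comparison only (this is not the Accord-based approach from the question, and it assumes a standalone ffmpeg executable is available; input.mp4 and the frames/ directory are placeholder paths), decimating to one frame per second with the fps filter lets ffmpeg decode sequentially and avoids per-frame seeking:

ffmpeg -i input.mp4 -vf fps=1 frames/frame_%04d.png

    The decoder still reads every frame, but only one image per second is encoded and written, which is generally much faster than seeking to individual frame indices.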

    


  • FFMPEG fails while processing MOV files

    29 September 2020, by Tom

    I'm trying to convert video files to DASH format. All videos work great except MOV videos.

    


    I'm using the following command:

    


    /usr/local/bin/ffmpeg -y -i /path/to/mov/video.mov -c:v libx264 -c:a aac -bf 1 -keyint_min 25 -g 250 -sc_threshold 40 -use_timeline 1 -use_template 1 -init_seg_name 'video_init_$RepresentationID$.$ext$' -media_seg_name 'video_chunk_$RepresentationID$_$Number%05d$.$ext$' -seg_duration 10 -hls_playlist 0 -f dash -adaptation_sets -0:s -map 0 -s:v:0 854x480 -b:v:0 750k -strict -2 -threads 12 /output/path/video.mpd


    


    I get the error:

    


      

    Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument ... Error initializing output stream 0:1


    


    The full command output is:

    


    ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
built with Apple clang version 11.0.3 (clang-1103.0.32.62)
configuration: --prefix=/usr/local/Cellar/ffmpeg/4.3.1 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
libavutil 56. 51.100 / 56. 51.100
libavcodec 58. 91.100 / 58. 91.100
libavformat 58. 45.100 / 58. 45.100
libavdevice 58. 10.100 / 58. 10.100
libavfilter 7. 85.100 / 7. 85.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 7.100 / 5. 7.100
libswresample 3. 7.100 / 3. 7.100
libpostproc 55. 7.100 / 55. 7.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/path/to/file.mov':
Metadata:
major_brand : qt
minor_version : 0
compatible_brands: qt
creation_time : 2020-09-21T09:45:27.000000Z
com.apple.quicktime.make: Apple
com.apple.quicktime.model: iPhone 7
com.apple.quicktime.software: 13.4.1
com.apple.quicktime.creationdate: 2020-06-15T11:59:36+0200
com.apple.photos.originating.signature: AXfhZgW4nrUdSusOMUuJRarfxD7R
Duration: 00:01:13.40, start: 0.000000, bitrate: 10616 kb/s
Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 96 kb/s (default)
Metadata:
creation_time : 2020-09-21T09:45:27.000000Z
handler_name : Core Media Audio
Stream #0:1(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 10514 kb/s, 30 fps, 30 tbr, 600 tbn, 1200 tbc (default)
Metadata:
creation_time : 2020-09-21T09:45:27.000000Z
handler_name : Core Media Video
encoder : H.264
Stream #0:2(und): Data: none (mebx / 0x7862656D) (default)
Metadata:
creation_time : 2020-09-21T09:45:27.000000Z
handler_name : Core Media Metadata
Stream #0:3(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
Metadata:
creation_time : 2020-09-21T09:45:27.000000Z
handler_name : Core Media Metadata
Stream mapping:
Stream #0:0 -> #0:0 (aac (native) -> aac (native))
Stream #0:1 -> #0:1 (h264 (native) -> h264 (libx264))
Stream #0:2 -> #0:2 (copy)
Stream #0:3 -> #0:3 (copy)
Press [q] to stop, [?] for help
[libx264 @ 0x7f7f2600e000] using SAR=1280/1281
[libx264 @ 0x7f7f2600e000] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7f7f2600e000] profile High, level 3.1, 4:2:0, 8-bit
[libx264 @ 0x7f7f2600e000] 264 - core 160 r3011 cde9a93 - H.264/MPEG-4 AVC codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=1 b_pyramid=0 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=750 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:1 --
[aac @ 0x7f7f2600c000] Qavg: 880.111
[aac @ 0x7f7f2600c000] 2 frames left in the queue on closing
[libx264 @ 0x7f7f2600e000] final ratefactor: 28.97
Conversion failed!


    


    Stream #0:2 -> #0:2 (copy)
Stream #0:3 -> #0:3 (copy)


    


    I guess the problem is that the file contains two streams that are neither audio nor video:
I cannot find a way to exclude, ignore, or copy those last two streams (#2 and #3) without processing them.

    


    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'IMG_3599.mov':
  Metadata:
    major_brand     : qt
    minor_version   : 0
    compatible_brands: qt
    creation_time   : 2020-09-21T09:45:27.000000Z
    com.apple.quicktime.make: Apple
    com.apple.quicktime.model: iPhone 7
    com.apple.quicktime.software: 13.4.1
    com.apple.quicktime.creationdate: 2020-06-15T11:59:36+0200
    com.apple.photos.originating.signature: AXfhZgW4nrUdSusOMUuJRarfxD7R
  Duration: 00:01:13.40, start: 0.000000, bitrate: 10616 kb/s
    Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 96 kb/s (default)
    Metadata:
      creation_time   : 2020-09-21T09:45:27.000000Z
      handler_name    : Core Media Audio
    Stream #0:1(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 10514 kb/s, 30 fps, 30 tbr, 600 tbn, 1200 tbc (default)
    Metadata:
      creation_time   : 2020-09-21T09:45:27.000000Z
      handler_name    : Core Media Video
      encoder         : H.264
    Stream #0:2(und): Data: none (mebx / 0x7862656D) (default)
    Metadata:
      creation_time   : 2020-09-21T09:45:27.000000Z
      handler_name    : Core Media Metadata
    Stream #0:3(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
    Metadata:
      creation_time   : 2020-09-21T09:45:27.000000Z
      handler_name    : Core Media Metadata
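
    One possible adjustment (a sketch only, not a verified fix) is to map just the video and audio streams instead of every stream, so the mebx data tracks are never handed to the DASH muxer; adding -dn would be an alternative way to drop the data streams. Keeping the paths and encoding settings from the question, the command would look roughly like:

/usr/local/bin/ffmpeg -y -i /path/to/mov/video.mov -map 0:v:0 -map 0:a:0 -c:v libx264 -c:a aac -bf 1 -keyint_min 25 -g 250 -sc_threshold 40 -use_timeline 1 -use_template 1 -init_seg_name 'video_init_$RepresentationID$.$ext$' -media_seg_name 'video_chunk_$RepresentationID$_$Number%05d$.$ext$' -seg_duration 10 -hls_playlist 0 -f dash -s:v:0 854x480 -b:v:0 750k -strict -2 -threads 12 /output/path/video.mpd

    Note that the original -adaptation_sets -0:s argument is not the syntax the dash muxer documents (it expects something like -adaptation_sets "id=0,streams=v id=1,streams=a"), so it is left out of this sketch.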


    


  • Processing Frames of Screen Recording in Node Using FFMPEG

    28 September 2020, by The42ndTurtle

    I am trying to capture my screen using Node.js and FFMPEG, and I have gotten as far as saving an FLV file, but I am unable to process the frames in real time.

    


    My code so far is:

    


const ffmpeg = require('ffmpeg-static');
const {spawn} = require('child_process');
const {createWriteStream} = require('fs');

const canvas = document.querySelector('#canvas');
const ctx = canvas.getContext('2d');

// spawn ffmpeg, capture the desktop with gdigrab and write an FLV stream to stdout
// (note: this variable shadows Node's global `process` object)
const process = spawn(
  ffmpeg,
  ["-f", "gdigrab", "-framerate", "30", "-i", "desktop", '-crf', '0', '-preset', 'ultrafast', '-f', 'flv', '-'],
  { stdio: "pipe" }
);

const stream = process.stdout;

// save the raw FLV stream to disk
const file = createWriteStream('capture.flv');
stream.pipe(file);

// attempt to treat each chunk of the FLV stream as a PNG image and draw it
stream.on('data', chunk => {
  const base64 = chunk.toString('base64');
  const data = `data:image/png;base64,${base64}`;

  const image = new Image();
  image.src = data;
  ctx.drawImage(image, 0, 0);
});



    


    The FLV file that is written out is fine, but the image created in the stream handler does not seem to be a valid image. When I log the base64 and try turning a single chunk into an image, it appears to be invalid image data. I want to use the stream to capture each individual frame of the recording.
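
    One direction worth noting (a sketch only, not a tested solution): chunks of an FLV stream are not standalone images, so converting them to PNG data URLs cannot produce valid frames. A commonly suggested alternative is to have ffmpeg emit a stream of PNG images over stdout with the image2pipe muxer, for example:

ffmpeg -f gdigrab -framerate 30 -i desktop -c:v png -f image2pipe -

    Each frame then arrives on stdout as a complete PNG file, back to back, so the Node side still needs to buffer the stream and split it on PNG boundaries (the IEND chunk marks the end of each image) before drawing a frame to the canvas.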