
Media (91)

Other articles (58)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    For a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

On other sites (10110)

  • what is the fastest way to load a local image using javascript and/or nodejs, and the fastest way to getImageData?

    4 October 2020, by Tom Lecoz

    I'm working on an online video-editing tool for a large audience.
Users can create "scenes" with multiple images, videos, text and sound, add transitions between scenes, add special effects, etc.

    


    When the users are happy with what they made, they can download the result as an mp4 file at the desired resolution and frame rate. Let's say full HD at 60 fps, for example (it can be bigger).

    


    I'm using Node.js and ffmpeg to build the mp4 from an HTMLCanvasElement.
Because it's impossible to seek perfectly frame by frame with an HTMLVideoElement, I start by converting the videos from each "scene" into a PNG sequence using ffmpeg.
Then I read my scene frame by frame and, if it contains videos, I replace the video elements with images holding the right frame. Once every image is loaded, I capture the frame and move on to the next one.
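
    For context, the per-scene extraction step is an ordinary ffmpeg call along these lines (a sketch: the input path, frame rate and output pattern here are placeholders, not the exact values from my pipeline):

    ffmpeg -i scene_video.mp4 -vf fps=60 frames/scene_%05d.png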

    


    Everything works as expected, but it's too slow!
Even with a powerful computer (Ryzen 3900X, RTX 2080 Super, 32 GB of RAM, NVMe 970 Evo Plus), in the best case I can capture a basic full-HD movie (if it contains videos) at 40 FPS.

    


    That may sound good enough, but it's not.
Our company produces thousands of mp4 files every day.
A slow encoding process means more servers at work, so it is more expensive for us.

    


    Until now, my company has used (and is still using) a tool based on Adobe Flash, because the whole video-editing tool was built with Flash. I was (and am) in charge of translating the whole thing into HTML. I reproduced every feature one by one over 4 years (it's by far my biggest project), and this is the very last step. But even though the HTML version of our player works very well, the encoding process is much slower than the Flash version, which was able to encode full HD at 90-100 FPS.

    


    I put console.log calls everywhere to find out what makes the encoding so slow, and there are 2 bottlenecks:

    


    As I said before, for each frame, if there are videos in the current scene, I replace the video elements with images representing the right frame at the right time. Since I'm using local files, I expected loading to be almost synchronous. That's not the case at all: it takes more than 10 ms in most cases.

    


    So my first question is: "what is the fastest way to load a local image with JavaScript for use in the final output?"

    


    I don't care which technology is involved, I have no preference; I just want to be able to load my local images faster than I can now.
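
    One idea I'm considering (a sketch that assumes a context, such as Electron, where Node's fs and the DOM decoding APIs are available together) is to skip the HTMLImageElement entirely and decode the bytes with createImageBitmap:

    import { promises as fs } from "fs";

    // Sketch: decode a local PNG without an <img> element and its src/load round-trip.
    // The path and the surrounding pipeline are placeholders.
    async function loadLocalFrame(path: string): Promise<ImageBitmap> {
        const bytes = await fs.readFile(path);                  // raw PNG bytes from disk
        const blob = new Blob([bytes], { type: "image/png" });  // wrap them for the decoder
        return createImageBitmap(blob);                         // asynchronous, off-main-thread decode
    }

    // Usage: const bmp = await loadLocalFrame("frame_00042.png"); ctx.drawImage(bmp, 0, 0);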

    


    The second bottleneck is weird, and to be honest I don't understand what's happening here.

    


    When the current frame is ready to be captured, I need to get its data using CanvasRenderingContext2D.getImageData in order to send it to ffmpeg, and this particular step is very slow.

    


    This single line

    


    let imageData = canvas.getContext("2d").getImageData(0,0,1920,1080);  


    


    takes something like 12-13 ms.
It's very slow!
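
    One thing that might be worth checking (my own assumption, and it requires a Chrome release newer than the 86 mentioned below) is whether the 2d context was created with the willReadFrequently hint, which keeps the canvas backing store in CPU memory and can make repeated readbacks noticeably cheaper:

    // Hint that this canvas will be read back often, so the browser keeps its
    // backing store in CPU memory instead of on the GPU.
    const ctx = canvas.getContext("2d", { willReadFrequently: true });
    const imageData = ctx.getImageData(0, 0, 1920, 1080);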

    


    So I'm also looking for another way to extract the pixel data from my canvas.

    


    A few days ago, I found an alternative to getImageData using the new VideoFrame class, which was created to be used with the VideoEncoder and VideoDecoder classes coming in Chrome 86.
You can do something like this:

    


    let buffers: Uint8Array[] = [];
createImageBitmap(canvas).then((bmp) => {
    let videoFrame = new VideoFrame(bmp);
    // The captured frame is stored as I420, so there are 3 planes: Y, U and V.
    for (let i = 0; i < 3; i++) {
        buffers[i] = new Uint8Array(videoFrame.planes[i].length);
        videoFrame.planes[i].readInto(buffers[i]);
    }
});


    


    It lets me grab the pixel data about 25% faster than getImageData, but as you can see, I don't get a single RGBA buffer: I get 3 separate buffers matching the I420 format.

    


    Ideally, I would like to send it directly to ffmpeg, but I don't know how to deal with these 3 buffers (I have no experience with the I420 format).

    


    I'm not at all sure the VideoFrame-based solution is a good one. If you know a faster way to transfer the data from a canvas to ffmpeg, please tell me.
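
    To make the question concrete, the direction I'm exploring (only a sketch; the dimensions, frame rate and output name are placeholders) is to spawn ffmpeg with a rawvideo input and write the three planes straight to its stdin, since an I420 frame is just the Y plane followed by the U and V planes:

    import { spawn } from "child_process";

    // Sketch: feed raw I420 frames to ffmpeg over stdin instead of PNG-encoding them.
    const ffmpeg = spawn("ffmpeg", [
        "-f", "rawvideo", "-pix_fmt", "yuv420p",   // I420 = planar 4:2:0 YUV
        "-s", "1920x1080", "-r", "60",
        "-i", "pipe:0",                            // frames arrive on stdin
        "-c:v", "libx264", "-crf", "18",
        "output.mp4",
    ]);

    // One frame = Y plane (w*h bytes) followed by the U and V planes (w/2 * h/2 bytes each).
    // This assumes the planes were read out tightly packed, i.e. stride == width.
    function writeFrame(y: Uint8Array, u: Uint8Array, v: Uint8Array) {
        ffmpeg.stdin.write(Buffer.concat([Buffer.from(y), Buffer.from(u), Buffer.from(v)]));
    }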

    


    Thanks for reading this very long post.
Any help would be much appreciated.

    


  • FFMPEG, player plays jerky video after encoding

    9 September 2020, by AKL

    I've been using FFMPEG on Windows 10 for a while. To help me do my job, I have several DOS batch files which take parameters and perform tasks like cutting from the beginning, blurring an area, muting audio, etc.

    


    I usually take big videos, extract the parts that I like using Avidemux, cut or apply filters to some parts using FFMPEG, join them using FFMPEG, and then encode again to get rid of any errors (or jerkiness, etc.) on my target player, which is Kodi 18.xx running on an Android TV (TCL brand).

    


    I have followed a similar process for videos for several years, but this particular video seems to be out of my control. I chopped up the video using Avidemux 2.7.2, cutting at key frames. If I take a single un-encoded file, it plays perfectly on my Android TV. If I encode it using FFMPEG, it starts to jerk when played on my Android TV.

    


    The videos always play fine in the VLC player running on my PC.

    


    Some information that may be helpful. I can attach detailed frame information if desired.

    


    ffmpeg version 4.3 Copyright (c) 2000-2020 the FFmpeg developers
      built with gcc 9.3.1 (GCC) 20200621
      configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libgsm --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
      libavutil      56. 51.100 / 56. 51.100
      libavcodec     58. 91.100 / 58. 91.100
      libavformat    58. 45.100 / 58. 45.100
      libavdevice    58. 10.100 / 58. 10.100
      libavfilter     7. 85.100 /  7. 85.100
      libswscale      5.  7.100 /  5.  7.100
      libswresample   3.  7.100 /  3.  7.100
      libpostproc    55.  7.100 / 55.  7.100


    


    Showing info for the source file. The codec is hevc, which I think FFMPEG does not have default support for?

    


    ffmpeg -i _p007_cut_start.mp4 -hide_banner
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '_p007_cut_start.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2mp41
        encoder         : Lavf58.20.100
      Duration: 00:00:08.51, start: 0.006000, bitrate: 5501 kb/s
        Stream #0:0(und): Video: hevc (Main 10) (hev1 / 0x31766568), yuv420p10le(tv, bt709), 1920x816 [SAR 1:1 DAR 40:17], 5091 kb/s, 23.98 fps, 23.98 tbr, 24390 tbn, 23.98 tbc (default)
        Metadata:
          handler_name    : VideoHandler
        Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 401 kb/s (default)
        Metadata:
          handler_name    : SoundHandler


    


    Showing info for the file that was encoded:

    


    ffmpeg -i p007_done2.mp4 -hide_banner
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'p007_done2.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf58.45.100
      Duration: 00:00:08.53, start: 0.000000, bitrate: 3092 kb/s
        Stream #0:0(und): Video: h264 (High 10) (avc1 / 0x31637661), yuv420p10le, 1920x816 [SAR 1:1 DAR 40:17], 2743 kb/s, 23.98 fps, 23.98 tbr, 24390 tbn, 47.95 tbc (default)
        Metadata:
          handler_name    : VideoHandler
        Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 345 kb/s (default)
        Metadata:
          handler_name    : SoundHandler


    


    I would appreciate it if someone could point out where I am going wrong.

    


    The command used to perform the encoding is as follows (only the library was swapped between libx264 and libx265):

    


    ffmpeg -i in.mp4 -map 0:v -c:v libx264 -video_track_timescale 24390 -crf 23 -map 0:a -c:a aac -copyts -vsync 0 -async 0 -movflags +faststart out.mp4
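
    One detail that stands out in the stream info above: the source is 10-bit (yuv420p10le), so libx264 keeps 10 bits and the output ends up as H.264 High 10, a profile that many Android TV hardware decoders cannot play smoothly. I have not confirmed that this is the cause, but an 8-bit variant of the same command would look like this:

    ffmpeg -i in.mp4 -map 0:v -c:v libx264 -pix_fmt yuv420p -video_track_timescale 24390 -crf 23 -map 0:a -c:a aac -copyts -vsync 0 -async 0 -movflags +faststart out.mp4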


    


    P.S. My knowledge is based more on trial and error, and I don't understand video formats (and concepts) much beyond what got me to this point.

    


    Regards,
AK

    


  • Building FFMPEG into a DLL to be used in a C# application

    18 August 2021, by Venkata K. C. Tata

    I want to build a custom version of FFMPEG with only H.264 codec support, and I want to compile it into a DLL to use from a C# application.

    


    Can someone tell me if this is possible?

    


    I can currently generate a .exe file.

    


    I tried --enable-static --enable-shared and it generates an exe.

    


    Currently, this is the command I am using:

    


    ./configure --arch=x86 --target-os=mingw32 --cross-prefix=i686-w64-mingw32- --disable-everything --disable-network --disable-autodetect --enable-small --enable-decoder=aac*,ac3*,opus,vorbis --enable-demuxer=mov,m4v,matroska --enable-muxer=mp3,mp4 --enable-protocol=file --enable-filter=aresample --disable-programs --disable-doc --enable-static --enable-shared


    


    When this is run, it generates a bunch of DLLs, but what I am really trying to do is get the main entry point of FFMPEG into a DLL so I can call it from C#.
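
    For the H.264-only goal, a configure line along these lines is what I would try next (a sketch I have not verified with this toolchain; note that libx264 needs --enable-gpl):

    ./configure --arch=x86 --target-os=mingw32 --cross-prefix=i686-w64-mingw32- --disable-everything --disable-network --disable-autodetect --disable-programs --disable-doc --enable-small --enable-gpl --enable-libx264 --enable-encoder=libx264 --enable-decoder=h264 --enable-parser=h264 --enable-demuxer=mov --enable-muxer=mp4 --enable-protocol=file --disable-static --enable-shared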

    


    It's been 15 years since I last wrote a line of C, so please show some mercy if my question doesn't make sense.

    


    Thanks in advance.