
Other articles (72)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects/individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Other interesting software

    13 April 2011

    We don’t claim to be the only ones doing what we do, and certainly don’t claim to be the best at it; we simply try to do it well and to keep getting better.
    The following list covers software that does more or less what MediaSPIP does, or that MediaSPIP more or less tries to do.
    We don’t know these projects and haven’t tried them, but you can take a peek.
    Videopress
    Website : http://videopress.com/
    License : GNU/GPL v2
    Source code : (...)

On other sites (8420)

  • FFMPEG says "No such file or directory" when trying to convert image sequence

    17 June 2020, by James Morris

    From the shell, when I specify a sequence of images via %d in the input filename, FFMPEG insists "No such file or directory", despite evidence to the contrary. Looking online, I haven't managed to find any references to generating video from a sequence of images using FFMPEG where %d is not used, yet it seems to fail here.

    My images should be identified by FFMPEG from img%06d.gif. Issuing ls img[0-9][0-9][0-9][0-9][0-9][0-9].gif succeeds in the very same directory where I issue the FFMPEG command.

    The command I use is:

    ffmpeg  -i img%06d.gif -c:v libx264 -r 30 -pix_fmt yuv720p test.mp4


    What could possibly be going wrong?
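    For context on why the pattern can fail even when the files exist: ffmpeg's image2 demuxer starts probing at number 0 (and tolerates a first frame numbered up to 4) unless told otherwise, so a sequence beginning at a higher number produces exactly this error. A hedged sketch of how to check the first expected name (the start number 100 below is purely hypothetical):

```shell
# The first filename ffmpeg will probe for the pattern img%06d.gif:
printf 'img%06d.gif\n' 0   # → img000000.gif

# If the sequence actually starts elsewhere (say img000100.gif), tell ffmpeg:
#   ffmpeg -start_number 100 -i 'img%06d.gif' -c:v libx264 -r 30 -pix_fmt yuv420p test.mp4
```

    Quoting the pattern also keeps the shell from touching the % on setups where it is special.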

  • What is the fastest way to load a local image using JavaScript and/or Node.js, and the fastest way to getImageData?

    4 October 2020, by Tom Lecoz

    I'm working on an online video-editing tool for a large audience.
Users can create "scenes" with multiple images, videos, text and sound, add a transition between 2 scenes, add some special effects, etc.

    When the users are happy with what they made, they can download the result as an mp4 file with a desired resolution and framerate. Let's say full-hd 60fps for example (it can be bigger).

    I'm using Node.js & ffmpeg to build the mp4 from an HTMLCanvasElement.
Because it's impossible to seek perfectly frame-by-frame with an HTMLVideoElement, I start by converting the videos from each "scene" into a sequence of PNGs using ffmpeg.
Then I read my scene frame by frame and, if there are videos, I replace the video elements with images containing the right frame. Once every image is loaded, I launch the capture and go to the next frame.

    Everything works as expected, but it's too slow!
Even with a powerful computer (Ryzen 3900X, RTX 2080 Super, 32 GB of RAM, NVMe 970 Evo Plus), in the best case I can capture a basic full-hd movie (if it contains videos) at 40 FPS.

    It may sound good enough, but it's not.
Our company produces thousands of mp4s every day.
A slow encoding process means more servers at work, so it will be more expensive for us.

    Until now, my company used (and is still using) a tool based on Adobe Flash, because the whole video-editing tool was made with Flash. I was (and am) in charge of translating the whole thing into HTML. I reproduced every feature one by one over 4 years (it's by far my biggest project) and this is the very last step. But even though the HTML version of our player works very well, the encoding process is much slower than the Flash version, which is able to encode full-hd at 90-100 FPS.

    I put console.log everywhere in order to find what makes the encoding so slow, and there are 2 bottlenecks:

    As I said before, for each frame, if there are videos in the current scene, I replace the video elements with images representing the right frame at the right time. Since I'm using local files, I expected loading to be almost synchronous. It's not the case at all: it takes more than 10 ms in most cases.

    So my first question is: "What is the fastest way to handle local image loading with JavaScript?"

    I don't care about the technology involved and I have no preference; I just want to be able to load my local images faster than I can now.

    The second bottleneck is weird and to be honest I don't understand what's happening here.

    When the current frame is ready to be captured, I need to get its data using CanvasRenderingContext2D.getImageData in order to send it to ffmpeg, and this particular step is very slow.

    This single line

    let imageData = canvas.getContext("2d").getImageData(0,0,1920,1080);  


    takes something like 12-13 ms.
It's very slow!

    So I'm also searching another way to extract the pixels-data from my canvas.

    A few days ago, I found an alternative to getImageData using the new VideoFrame class, which was created to be used with the VideoEncoder & VideoDecoder classes coming in Chrome 86.
You can do something like this:

let buffers: Uint8Array[] = [];
createImageBitmap(canvas).then((bmp) => {
   let videoFrame = new VideoFrame(bmp);
   // copy each of the three planes (I420: Y, U, V) into its own buffer
   for (let i = 0; i < 3; i++) {
      buffers[i] = new Uint8Array(videoFrame.planes[i].length);
      videoFrame.planes[i].readInto(buffers[i]);
   }
});


    It allows me to grab the pixel data around 25% more quickly than getImageData, but as you can see, I don't get a single RGBA buffer but 3 weird buffers matching the I420 format.

    Ideally, I would like to send it directly to ffmpeg, but I don't know how to deal with these 3 buffers (I have no experience with the I420 format).
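    For what it's worth, I420 is simply three planes stored one after another: a full-resolution Y plane, then U and V planes at half resolution in each dimension. A sketch of packing the three buffers into the single contiguous frame that ffmpeg can consume as `-f rawvideo -pix_fmt yuv420p` (this assumes the planes really arrive in Y, U, V order, which is what I420 implies):

```javascript
// I420 plane sizes for a given frame: Y is width*height,
// U and V are each (width/2)*(height/2).
function i420PlaneSizes(width, height) {
  const chroma = (width / 2) * (height / 2);
  return [width * height, chroma, chroma];
}

// Concatenate the Y, U and V buffers into one contiguous I420 frame.
function packI420(planes, width, height) {
  const sizes = i420PlaneSizes(width, height);
  const out = new Uint8Array(sizes[0] + sizes[1] + sizes[2]);
  let offset = 0;
  planes.forEach((plane, i) => {
    if (plane.length !== sizes[i]) {
      throw new Error(`plane ${i}: expected ${sizes[i]} bytes, got ${plane.length}`);
    }
    out.set(plane, offset);
    offset += plane.length;
  });
  return out;
}
```

    For 1920x1080 that is 1920*1080*1.5 = 3110400 bytes per frame, about 37% of the size of the equivalent RGBA buffer, so there is less data to move as well.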

    I'm not at all sure the solution involving VideoFrame is a good one. If you know a faster way to transfer the data from a canvas to ffmpeg, please tell me.

    Thanks for reading this very long post.
Any help would be much appreciated.


  • FFMPEG - H.264 encoding BGR image data to YUV420P video file resulting in empty video

    22 September 2022, by Cogentleman

    I'm new to FFMPEG and trying to use it to do some screen capture to a video file, but after a lot of online searching I am stumped as to what I'm doing wrong. Basically, I've already done the work of capturing screen data via DirectX, which stores it in a BGR pixel format, and I'm just trying to put each frame into a video file. There are two functions: setup, which does all the ffmpeg initialization work, and addImage, which is called in the main program loop and puts each buffer of BGR image data into a video file.

    The technique I'm using for this is to make two frames, one with the BGR data and one with YUV420P (it doesn't need to be the latter, but after a lot of trial and error it was all I was able to get working with H.264), use sws_scale to copy data between the two, and then send that frame to video.mp4.

    The file seems to be having data written to it successfully (the file size grows and grows as the program runs), but when I try to view it in VLC I see nothing. Indeed, VLC fails to fetch a length for the video, and both the codec and media information come up empty. I turned on ffmpeg verbose logging, but all it spits out is the following:

    Setting default whitelist 'Epu��'
Timestamps are unset in a packet for stream -1259342440. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
Encoder did not produce proper pts, making some up.


    From what I am reading, I understand these to be warnings rather than errors that would totally corrupt my video file. I separately went through all the error codes being returned and everything seems nominal to me (zero for success on most calls, and sometimes -11 from avcodec_receive_packet, but the docs indicate that's expected).

    Based on my understanding of things as they are, this should be working but isn't, and the logs and error codes give me nothing to go on, so someone with experience in this would, I reckon, save me a ton of time. The code is as follows:

    VideoService.h

    #ifndef VIDEO_SERVICE_H
    #define VIDEO_SERVICE_H

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    }

    class VideoService {
        public:
            void setup();
            void addImage(unsigned char* data, int lineSize, int width, int height, int align);
        private:
            AVCodecContext* context;
            AVFormatContext* formatContext;
            AVFrame* bgrFrame;
            AVFrame* yuvFrame;
            AVStream* videoStream;
            SwsContext* swsContext;
    };

    #endif

    VideoService.cpp


    #include "VideoService.h"
    #include <cstdio>

    void FfmpegLogCallback(void *ptr, int level, const char *fmt, va_list vargs)
    {
        FILE* f = fopen("ffmpeg.txt", "a");
        fprintf(f, fmt, vargs);
        fclose(f);
    }

    void VideoService::setup() {
        int result = 0;
        av_log_set_level(AV_LOG_VERBOSE);
        av_log_set_callback(FfmpegLogCallback);
        bgrFrame = av_frame_alloc();
        bgrFrame->width = 1920;
        bgrFrame->height = 1080;
        bgrFrame->format = AV_PIX_FMT_BGRA;
        bgrFrame->time_base.num = 1;
        bgrFrame->time_base.den = 60;
        result = av_frame_get_buffer(bgrFrame, 1);
        yuvFrame = av_frame_alloc();
        yuvFrame->width = 1920;
        yuvFrame->height = 1080;
        yuvFrame->format = AV_PIX_FMT_YUV420P;
        yuvFrame->time_base.num = 1;
        yuvFrame->time_base.den = 60;
        result = av_frame_get_buffer(yuvFrame, 1);
        const AVOutputFormat* outputFormat = av_guess_format("mp4", "video.mp4", "video/mp4");
        result = avformat_alloc_output_context2(
            &formatContext,
            outputFormat,
            "mp4",
            "video.mp4"
        );
        formatContext->oformat = outputFormat;
        const AVCodec* codec = avcodec_find_encoder(AVCodecID::AV_CODEC_ID_H264);
        result = avio_open2(&formatContext->pb, "video.mp4", AVIO_FLAG_WRITE, NULL, NULL);
        videoStream = avformat_new_stream(formatContext, codec);
        AVCodecParameters* codecParameters = videoStream->codecpar;
        codecParameters->codec_type = AVMediaType::AVMEDIA_TYPE_VIDEO;
        codecParameters->codec_id = AVCodecID::AV_CODEC_ID_HEVC;
        codecParameters->width = 1920;
        codecParameters->height = 1080;
        codecParameters->format = AVPixelFormat::AV_PIX_FMT_YUV420P;
        videoStream->time_base.num = 1;
        videoStream->time_base.den = 60;
        result = avformat_write_header(formatContext, NULL);

        codec = avcodec_find_encoder(videoStream->codecpar->codec_id);
        context = avcodec_alloc_context3(codec);
        context->time_base.num = 1;
        context->time_base.den = 60;
        avcodec_parameters_to_context(context, videoStream->codecpar);
        result = avcodec_open2(context, codec, nullptr);
        swsContext = sws_getContext(1920, 1080, AV_PIX_FMT_BGRA, 1920, 1080, AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
    }

    void VideoService::addImage(unsigned char* data, int lineSize, int width, int height, int align) {
        int result = 0;
        result = av_image_fill_arrays(bgrFrame->data, bgrFrame->linesize, data, AV_PIX_FMT_BGRA, 1920, 1080, 1);
        sws_scale(swsContext, bgrFrame->data, bgrFrame->linesize, 0, 1080, &yuvFrame->data[0], yuvFrame->linesize);
        result = avcodec_send_frame(context, yuvFrame);
        AVPacket *packet = av_packet_alloc();
        result = avcodec_receive_packet(context, packet);
        if (result != 0) {
            return;
        }
        result = av_interleaved_write_frame(formatContext, packet);
    }

    My environment is Windows 10; I'm building with clang++ 12.0.1 and using the FFMPEG 5.1 libs.
