
Other articles (58)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
    If you think you have fixed the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

On other sites (15397)

  • Facebook Live Stream API on iOS

    11 August 2020, by Deepak Sharma

    I see Facebook has a Graph API to go live on Facebook, fetch all user reactions, create a poll, etc., but I don't see any sample code in the SDK for this. I want to stream video from an iOS/Android app to Facebook.

    1. Is it sufficient to use FFmpeg-based libraries on iOS to create a live RTMPS stream to Facebook, or is a third-party cloud service required? Is there any sample code with the Facebook Live functionality built in? (A minimal sketch follows after this list.)

    2. What does the live video API review involve? Is anyone familiar with common causes of rejection for the live video API?
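
    On the first point, the FFmpeg libraries alone can open an FLV-over-RTMPS output to a live ingest URL, so no third-party cloud service is strictly required for the transport itself. The sketch below is a hypothetical illustration of that step using libavformat; the ingest URL, and the assumption that audio/video are already encoded (e.g. H.264/AAC), are not from the original post.

extern "C" {
#include <libavformat/avformat.h>
}

// Hypothetical sketch: open an FLV/RTMPS output for a live ingest URL
// (e.g. one returned by the Graph API live_videos endpoint).
static int open_rtmps_output(const char* ingest_url, AVFormatContext** out_ctx)
{
    AVFormatContext* ctx = NULL;

    // Facebook Live ingests FLV over RTMPS, so the "flv" muxer is requested.
    int ret = avformat_alloc_output_context2(&ctx, NULL, "flv", ingest_url);
    if (ret < 0)
        return ret;

    // Open the network connection; requires an FFmpeg build with TLS support.
    ret = avio_open2(&ctx->pb, ingest_url, AVIO_FLAG_WRITE, NULL, NULL);
    if (ret < 0)
    {
        avformat_free_context(ctx);
        return ret;
    }

    *out_ctx = ctx;
    return 0;
}

// Streams would then be added with avformat_new_stream(), the header written
// with avformat_write_header(), and encoded packets pushed with
// av_interleaved_write_frame().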


  • Trying to save frames as colored images using FFmpeg in C++

    2 September 2021, by Tolga

    I am new to FFmpeg and I am trying to save the video frames as colored images. I have managed to save them as grayscale using Netpbm; however, I need to save the frames in color. I have tried implementing the code in this link.

    


    However, I get an error:

    


    'Exception thrown at 0x00E1FC4F (swscale-5.dll) in VideoDecoding2.exe: 0xC0000005: Access violation writing location 0xCCCCCCCC.'


    


    Is there any way to improve this code, or another way to save the frames in color?

    


    Here is my code below. Note that:

    • src_pix_fmt is AV_PIX_FMT_YUV420P.
    • dst_pix_fmt is AV_PIX_FMT_RGB24.

src_pix_fmt = avcc->pix_fmt;

src_width = avcc->width;
src_height = avcc->height;

dst_width = src_width;
dst_height = src_height;

numBytes = av_image_get_buffer_size(dst_pix_fmt, dst_width, dst_height, 0);

buffer = (uint8_t*)av_malloc(numBytes);

if ((ret = av_image_alloc(src_data, src_linesize, src_width, src_height, src_pix_fmt, 16)) < 0)
{
    printf("Couldn't allocate source image.\n");
    return 0;
}

av_image_fill_arrays(frameRGB->data, frameRGB->linesize, buffer, dst_pix_fmt, dst_width, dst_height, 0);

while (av_read_frame(avfc, packet) >= 0)
{
    ret = avcodec_send_packet(avcc, packet);
    if (ret < 0)
    {
        printf("Packets could not supplied to decoder.\n");
        return -1;
    }

    ret = avcodec_receive_frame(avcc, frame);
    printf("%d", ret);

    if (packet->stream_index == videoStream)
    {
        sws_ctx = sws_getContext(src_width, src_height, src_pix_fmt,
            dst_width, dst_height, dst_pix_fmt,
            SWS_BILINEAR, NULL, NULL, NULL);

        if (!sws_ctx)
        {
            printf("Cannot create scale context for conversion\n"
                "fmt:%s s:%dx%d --> fmt:%s s:%dx%d\n",
                av_get_pix_fmt_name(src_pix_fmt), src_width, src_height,
                av_get_pix_fmt_name(dst_pix_fmt), dst_width, dst_height);
            return 0;
        }

        sws_scale(sws_ctx, (const uint8_t* const*)frame->data, frame->linesize, 0, frame->height, dst_data, dst_linesize);

        FILE* f;
        char szFilename[32];
        int y;

        snprintf(szFilename, sizeof(szFilename), "frame%d.ppm", avcc->frame_number);
        fopen_s(&f, szFilename, "wb");
        
        if (f == NULL)
        {
            printf("Couldn't open file.\n");
            return 0;
        }
        
        fprintf(f, "P6\n%d %d\n255\n", dst_width, dst_height);

        for (y = 0; y < dst_height; y++)
            fwrite(frameRGB->data[0] + y * frameRGB->linesize[0], 1, dst_width * 3, f);
        
        fclose(f);
    }
}
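
    The access violation at 0xCCCCCCCC (MSVC's fill pattern for uninitialized stack memory) points at the sws_scale() destination: dst_data and dst_linesize are never allocated, while the frameRGB / buffer pair prepared with av_image_fill_arrays() is never used as a destination. Below is a minimal sketch of the likely fix for the receive/convert part of the read loop; it is an assumption based on the posted snippet, not a verified solution.

// Sketch only: replaces the receive/convert part inside the
// while (av_read_frame(avfc, packet) >= 0) loop above.
ret = avcodec_receive_frame(avcc, frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
    continue;   // decoder needs more packets, or the stream has ended
else if (ret < 0)
    return -1;  // real decoding error

// Write the conversion into frameRGB, whose planes were attached to `buffer`
// by av_image_fill_arrays(), instead of the uninitialized dst_data / dst_linesize.
sws_scale(sws_ctx,
    (const uint8_t* const*)frame->data, frame->linesize,
    0, frame->height,
    frameRGB->data, frameRGB->linesize);

    Creating the SwsContext once before the loop (or with sws_getCachedContext()) and skipping packets whose stream_index is not videoStream before calling avcodec_send_packet() would also avoid allocating a new context for every packet and feeding audio packets to the video decoder.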


    


  • How to record an HTML animation and save it as a video, in an automated manner in the backend

    14 May 2022, by frizurd

    I need to record a webpage and save it as a video, in an automated manner, without human interaction.

    


    I am creating a NodeJS app that generates MP4 videos at the user's request. The user provides an MP3 file, and the app generates animated waveforms for the sound file on top of an illustration.

    


    What I have come up with so far is a system that opens a generated web page in the backend, plays the audio file, and shows an audio visualization for it on an HTML canvas element, layered on top of another canvas with mainly static components, such as images, that do not animate. The system records this, and the output is a video file. Finally, I merge the video file with the sound file to create the final file for the user.

    


    I have come up with two possible solutions, but both of them have problems which I am not able to solve at the moment.

    



    


    Solution #1

    


    Use a headless browser API such as PhantomJS or Puppeteer to grab a screenshot x times per second and pipe it to FFmpeg.

    


    The problem

    


    The problem with this is that the process is not real-time. It would work fine if it were JUST an animation, but mine depends on the audio file. The audio keeps playing during the render, which results in a glitchy, 1 FPS-like video.

    


    Possible solution?

    


    Don't play the audio file live; instead, convert the audio file into raw data and animate the audio visualization based on that raw data. I am not sure how to do this, or whether it is even possible.

    



    


    Solution #2

    


    Play, record, and save the animation, all in the frontend. I could use ccapture.js to record and save a canvas, then use a headless browser to open the page and save the result to disk when it is done playing. This doesn't sound like the best solution.

    


    The problem(s)

    


    I have more than one canvas. Recording also takes a while, especially when the audio file is longer than 10 minutes, and making users wait a long time can be a deal-breaker.

    


    Possible solution?

    


    Merge canvases into one.

    


    I have no idea how to speed up the rendering time, and I doubt it is possible this way.