
Other articles (20)

  • Selection of projects using MediaSPIP

    2 May 2011, by

    The examples below are representative of how MediaSPIP is used in specific projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen such associations. Its members (...)

  • The MediaSPIP configuration area

    29 November 2010, by

    The MediaSPIP configuration area is restricted to administrators. An "administer" menu link is usually displayed at the top of the page [1].
    It lets you configure your site in detail.
    Navigation in this configuration area is divided into three parts: the general site configuration, which in particular lets you modify: the main information about the site (...)

  • Installation in farm mode

    4 February 2011, by

    Farm mode lets you host several MediaSPIP-type sites while installing their functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with the mechanics of SPIP, unlike the standalone version, which does not really require any specific knowledge since the usual SPIP private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

On other sites (5292)

  • ffmpeg GRAY16 stream over network

    28 November 2023, by Norbert P.

    I'm working on a school project where we need to use depth cameras. The camera produces color and depth (in other words, a 16-bit grayscale image). We decided to use ffmpeg, as compression could be very useful later on. For now we have a basic stream running from one PC to the other. These settings include:

    


      

    • rtmp
    • flv as container
    • pixel format AV_PIX_FMT_YUV420P
    • codec AV_CODEC_ID_H264

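    For reference, a rough command-line equivalent of those settings (the test source and URL here are assumptions, adjust them to your setup):

    ffmpeg -re -f lavfi -i testsrc=size=1280x720:rate=30 -c:v libx264 -pix_fmt yuv420p -tune zerolatency -f flv rtmp://localhost:9999/stream/stream1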

    The problem we are having is with the grayscale image. Not every codec can cope with this format, and not every protocol can work with a given codec. I got some settings "working", but the receiver side just gets stuck in avformat_open_input(). I have also tested it from the command line, with ffmpeg listening for a connection, and the same thing happens.

    


    I include a minimal "working" example of the client code. The server side can be tested with "ffmpeg.exe -f apng -listen 1 -i rtmp://localhost:9999/stream/stream1 -c copy -f apng -listen 1 rtmp://localhost:2222/live/l" or with the code below. I get no warnings; ffmpeg is the newest version, installed with "vcpkg install --triplet x64-windows ffmpeg[ffmpeg,ffprobe,zlib]" on Windows or via the package manager on Linux.
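    A quick way to check the listening side alone, without any custom code (a sketch, assuming the same URL; the null output just discards whatever arrives):

    ffmpeg -listen 1 -i rtmp://localhost:9999/stream/stream1 -f null -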

    


    The question: did I miss something? How do I get this to work? If you have any better ideas, I would gladly consider them. In the end I need 16 bits of lossless transmission; they could be split between channels, etc., which I also tried with the same effect.

    


    Client code that would read from the camera and connect to the server:

    


    extern "C" {&#xA;#include <libavutil></libavutil>opt.h>&#xA;#include <libavcodec></libavcodec>avcodec.h>&#xA;#include <libavutil></libavutil>channel_layout.h>&#xA;#include <libavutil></libavutil>common.h>&#xA;#include <libavformat></libavformat>avformat.h>&#xA;#include <libavcodec></libavcodec>avcodec.h>&#xA;#include <libavutil></libavutil>imgutils.h>&#xA;}&#xA;&#xA;int main() {&#xA;&#xA;    std::string container = "apng";&#xA;    AVCodecID codec_id = AV_CODEC_ID_APNG;&#xA;    AVPixelFormat pixFormat = AV_PIX_FMT_GRAY16BE;&#xA;&#xA;    AVFormatContext* format_ctx;&#xA;    AVCodec* out_codec;&#xA;    AVStream* out_stream;&#xA;    AVCodecContext* out_codec_ctx;&#xA;    AVFrame* frame;&#xA;    uint8_t* data;&#xA;&#xA;    std::string server = "rtmp://localhost:9999/stream/stream1";&#xA;&#xA;    int width = 1280, height = 720, fps = 30, bitrate = 1000000;&#xA;&#xA;    //initialize format context for output with flv and no filename&#xA;    avformat_alloc_output_context2(&amp;format_ctx, nullptr, container.c_str(), server.c_str());&#xA;    if (!format_ctx) {&#xA;        return 1;&#xA;    }&#xA;&#xA;    //AVIOContext for accessing the resource indicated by url&#xA;    if (!(format_ctx->oformat->flags &amp; AVFMT_NOFILE)) {&#xA;        int avopen_ret = avio_open(&amp;format_ctx->pb, server.c_str(),&#xA;            AVIO_FLAG_WRITE);// , nullptr, nullptr);&#xA;        if (avopen_ret &lt; 0) {&#xA;            fprintf(stderr, "failed to open stream output context, stream will not work\n");&#xA;            return 1;&#xA;        }&#xA;    }&#xA;&#xA;&#xA;    const AVCodec* tmp_out_codec = avcodec_find_encoder(codec_id);&#xA;    //const AVCodec* tmp_out_codec = avcodec_find_encoder_by_name("hevc");&#xA;    out_codec = const_cast(tmp_out_codec);&#xA;    if (!(out_codec)) {&#xA;        fprintf(stderr, "Could not find encoder for &#x27;%s&#x27;\n",&#xA;            avcodec_get_name(codec_id));&#xA;&#xA;        return 1;&#xA;    }&#xA;&#xA;    out_stream = avformat_new_stream(format_ctx, out_codec);&#xA;    if (!out_stream) {&#xA;        fprintf(stderr, "Could not allocate stream\n");&#xA;        return 1;&#xA;    }&#xA;&#xA;    out_codec_ctx = avcodec_alloc_context3(out_codec);&#xA;&#xA;    const AVRational timebase = { 60000, fps };&#xA;    const AVRational dst_fps = { fps, 1 };&#xA;    av_log_set_level(AV_LOG_VERBOSE);&#xA;    //codec_ctx->codec_tag = 0;&#xA;    //codec_ctx->codec_id = codec_id;&#xA;    out_codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;&#xA;    out_codec_ctx->width = width;&#xA;    out_codec_ctx->height = height;&#xA;    out_codec_ctx->gop_size = 1;&#xA;    out_codec_ctx->time_base = timebase;&#xA;    out_codec_ctx->pix_fmt = pixFormat;&#xA;    out_codec_ctx->framerate = dst_fps;&#xA;    out_codec_ctx->time_base = av_inv_q(dst_fps);&#xA;    out_codec_ctx->bit_rate = bitrate;&#xA;    //if (fctx->oformat->flags &amp; AVFMT_GLOBALHEADER)&#xA;    //{&#xA;    //    codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;&#xA;    //}&#xA;&#xA;    out_stream->time_base = out_codec_ctx->time_base; //will be set afterwards by avformat_write_header to 1/1000&#xA;&#xA;    int ret = avcodec_parameters_from_context(out_stream->codecpar, out_codec_ctx);&#xA;    if (ret &lt; 0)&#xA;    {&#xA;        fprintf(stderr, "Could not initialize stream codec parameters!\n");&#xA;        return 1;&#xA;    }&#xA;&#xA;    AVDictionary* codec_options = nullptr;&#xA;    av_dict_set(&amp;codec_options, "tune", "zerolatency", 0);&#xA;&#xA;    // open video encoder&#xA;    ret = avcodec_open2(out_codec_ctx, out_codec, 
&amp;codec_options);&#xA;    if (ret &lt; 0)&#xA;    {&#xA;        fprintf(stderr, "Could not open video encoder!\n");&#xA;        return 1;&#xA;    }&#xA;    av_dict_free(&amp;codec_options);&#xA;&#xA;    out_stream->codecpar->extradata_size = out_codec_ctx->extradata_size;&#xA;    out_stream->codecpar->extradata = static_cast(av_mallocz(out_codec_ctx->extradata_size));&#xA;    memcpy(out_stream->codecpar->extradata, out_codec_ctx->extradata, out_codec_ctx->extradata_size);&#xA;&#xA;    av_dump_format(format_ctx, 0, server.c_str(), 1);&#xA;&#xA;    frame = av_frame_alloc();&#xA;&#xA;    int sz = av_image_get_buffer_size(pixFormat, width, height, 32);&#xA;#ifdef _WIN32&#xA;    data = (uint8_t*)_aligned_malloc(sz, 32);&#xA;    if (data == NULL)&#xA;        return ENOMEM;&#xA;#else&#xA;    ret = posix_memalign(reinterpret_cast(&amp;data), 32, sz);&#xA;#endif&#xA;    av_image_fill_arrays(frame->data, frame->linesize, data, pixFormat, width, height, 32);&#xA;    frame->format = pixFormat;&#xA;    frame->width = width;&#xA;    frame->height = height;&#xA;    frame->pts = 1;&#xA;    if (avformat_write_header(format_ctx, nullptr) &lt; 0) //Header making problems!!!&#xA;    {&#xA;        fprintf(stderr, "Could not write header!\n");&#xA;        return 1;&#xA;    }&#xA;&#xA;    printf("stream time base = %d / %d \n", out_stream->time_base.num, out_stream->time_base.den);&#xA;&#xA;    double inv_stream_timebase = (double)out_stream->time_base.den / (double)out_stream->time_base.num;&#xA;    printf("Init OK\n");&#xA;    /*  Init phase end*/&#xA;    int dts = 0;&#xA;    int frameNo = 0;&#xA;&#xA;    while (true) {&#xA;        //Fill dummy frame with something&#xA;        for (int y = 0; y &lt; height; y&#x2B;&#x2B;) {&#xA;            uint16_t color = ((y &#x2B; frameNo) * 256) % (256 * 256);&#xA;            for (int x = 0; x &lt; width; x&#x2B;&#x2B;) {&#xA;                data[x&#x2B;y*width] = color;&#xA;            }&#xA;        }&#xA;&#xA;        memcpy(frame->data[0], data, 1280 * 720 * sizeof(uint16_t));&#xA;        AVPacket* pkt = av_packet_alloc();&#xA;&#xA;        int ret = avcodec_send_frame(out_codec_ctx, frame);&#xA;        if (ret &lt; 0)&#xA;        {&#xA;            fprintf(stderr, "Error sending frame to codec context!\n");&#xA;            return ret;&#xA;        }&#xA;        while (ret >= 0) {&#xA;            ret = avcodec_receive_packet(out_codec_ctx, pkt);&#xA;            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)&#xA;                break;&#xA;            else if (ret &lt; 0) {&#xA;                fprintf(stderr, "Error during encoding\n");&#xA;                break;&#xA;            }&#xA;            pkt->dts = dts;&#xA;            pkt->pts = dts;&#xA;            dts &#x2B;= 33;&#xA;            av_write_frame(format_ctx, pkt);&#xA;            frameNo&#x2B;&#x2B;;&#xA;            av_packet_unref(pkt);&#xA;        }&#xA;        printf("Streamed %d frames\n", frameNo);&#xA;    }&#xA;    return 0;&#xA;}&#xA;

    &#xA;

    And the part of the server that should receive. This is the code where it stops and waits:

    &#xA;

    extern "C" {&#xA;#include <libavcodec></libavcodec>avcodec.h>&#xA;#include <libavformat></libavformat>avformat.h>&#xA;#include <libavformat></libavformat>avio.h>&#xA;}&#xA;&#xA;int main() {&#xA;    AVFormatContext* fmt_ctx = NULL;&#xA;    av_log_set_level(AV_LOG_VERBOSE);&#xA;    AVDictionary* options = nullptr;&#xA;    av_dict_set(&amp;options, "protocol_whitelist", "file,udp,rtp,tcp,rtmp,rtsp,hls", 0);&#xA;    av_dict_set(&amp;options, "timeout", "500000", 0); // Timeout in microseconds &#xA;&#xA;//Next Line hangs   &#xA;    int ret = avformat_open_input(&amp;fmt_ctx, "rtmp://localhost:9999/stream/stream1", NULL, &amp;options);&#xA;    if (ret != 0) {&#xA;        fprintf(stderr, "Could not open RTMP stream\n");&#xA;        return -1;&#xA;    }&#xA;&#xA;    // Find the first video stream&#xA;    ret = avformat_find_stream_info(fmt_ctx, nullptr);&#xA;    if (ret &lt; 0) {&#xA;        return ret;&#xA;    }&#xA;    //...&#xA;} &#xA;&#xA;

    &#xA;

    Edit :&#xA;I tried to just create a animated png and tried to stream that from the console to another console window to avoid any programming mistakes on my side. It was the same, I just could not get 16 PNG encoded stream to work. I hung trying to receive and closed when the file ended with in total zero frames received.

    &#xA;

    I did manage to get one other thing working: to avoid encoding gray frames with YUV420, I installed ffmpeg with libx264 support (I thought it was the same as H264, which in code it is, but it adds support for more pixel formats). I used H264 again, but with GRAY8, a doubled image width, and reconstruction of the image on the other side.
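    A minimal numpy sketch of that byte-split idea (the function names and shapes are illustrative, not the code actually used):

    import numpy as np

    def pack_gray16_to_gray8(depth):
        """Pack an HxW uint16 frame into an Hx2W uint8 frame (high bytes | low bytes)."""
        h, w = depth.shape
        packed = np.empty((h, 2 * w), dtype=np.uint8)
        packed[:, :w] = (depth >> 8).astype(np.uint8)    # high bytes in the left half
        packed[:, w:] = (depth & 0xFF).astype(np.uint8)  # low bytes in the right half
        return packed

    def unpack_gray8_to_gray16(packed):
        """Inverse: rebuild the uint16 frame from the doubled-width uint8 frame."""
        h, w2 = packed.shape
        w = w2 // 2
        return (packed[:, :w].astype(np.uint16) << 8) | packed[:, w:].astype(np.uint16)

    # Round-trip check on random data
    frame = np.random.randint(0, 2**16, (720, 1280), dtype=np.uint16)
    assert np.array_equal(frame, unpack_gray8_to_gray16(pack_gray16_to_gray8(frame)))

    Note that this only stays lossless if the encoder itself is lossless (for libx264, e.g. -qp 0); any lossy compression corrupts the low-byte half.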

    &#xA;

    Maybe as a side note: I could not get any other formats to work. Is "flv" the only option here? Could I get more performance if I changed it to something else, and if so, to what?

    &#xA;

  • Python: ani.save very slow. Any alternatives to create videos?

    14 November 2023, by Czeskleba

    I'm doing some simple diffusion calculations. Every so many steps (every 2 s or so) I save 2 matrices to 2 datasets in a single .h5 file. I then load the file in another script, create some figures (2 subplots etc., see/run the code; I know it could be prettier), and use matplotlib.animation to make the animation. In the code below, in the very last lines, I run the ani.save command from matplotlib.

    &#xA;

    And that's where the problem is. The animation itself is created within 2 seconds, even for my longer animations (14,755 frames, done in under 2 s at 8284 it/s), but after that, ani.save in line 144 takes forever (it didn't finish overnight). It constantly reserves/uses about 10 GB of my RAM but seemingly never finishes. If you run the code below, be sure to set frames_to_do (line 20) to something like 30 or 60 to see that it does in fact save an mp4 for shorter videos. You can set it higher to see how quickly the time to save increases to something unreasonable.

    &#xA;

    I've been fiddling with this for 2 days now and I can't figure it out. I guess my question is: is there any way to create the video in a reasonable time like this? Or do I need something other than animation?
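    One common workaround (a minimal sketch reusing the fig, update and frames_to_do names from the code below; untested against this data) is to bypass ani.save entirely and stream the frames straight to ffmpeg with matplotlib's FFMpegWriter:

    from matplotlib.animation import FFMpegWriter

    # Render each frame once and pipe it directly to ffmpeg,
    # instead of letting ani.save re-drive FuncAnimation.
    writer = FFMpegWriter(fps=60)
    with writer.saving(fig, 'diffusion_animation.mp4', dpi=96):
        for i in range(frames_to_do):
            update(i, loaded_u_arrays, loaded_h_arrays)
            writer.grab_frame()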

    &#xA;

    You should be able to just run the code. I'll provide a diffusion_array.h5 with 140 frames so you don't have to create a dummy file, if I can figure out how to upload something like this safely. (The results use dummy numbers for now; diffusion coefficients etc. are not right yet.) I used Dropbox. Not sure if that's allowed; if not, I'll delete the link and, uhh, PM me or something?

    &#xA;

    https://www.dropbox.com/scl/fi/fv9stfqkm4trmt3zwtvun/diffusion_array.h5?rlkey=2oxuegnlcxq0jt6ed77rbskyu&dl=0

    &#xA;

    Here is the code:

    &#xA;

    import h5py&#xA;import matplotlib.pyplot as plt&#xA;import matplotlib.colors as mcolors&#xA;from matplotlib.animation import FuncAnimation&#xA;from tqdm import tqdm&#xA;import numpy as np&#xA;&#xA;&#xA;# saving the .mp4 after tho takes forever&#xA;&#xA;# Create an empty figure and axis&#xA;fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 9), dpi=96)&#xA;&#xA;# Load all saved arrays into a list&#xA;file_name = &#x27;diffusion_array.h5&#x27;&#xA;loaded_u_arrays = []&#xA;loaded_h_arrays = []&#xA;frames_to_do = 14755  # for now like this, use # version once the slow mp4 convert is cleared up&#xA;&#xA;# with h5py.File(file_name, &#x27;r&#x27;) as hf:&#xA;#     for key in hf.keys():&#xA;#         if key.startswith(&#x27;u_snapshot_&#x27;):&#xA;#             loaded_u_arrays.append(hf[key][:])&#xA;#         elif key.startswith(&#x27;h_snapshot_&#x27;):&#xA;#             loaded_h_arrays.append(hf[key][:])&#xA;&#xA;with h5py.File(file_name, &#x27;r&#x27;) as hf:&#xA;    for i in range(frames_to_do):&#xA;        target_key1 = f&#x27;u_snapshot_{i:05d}&#x27;&#xA;        target_key2 = f&#x27;h_snapshot_{i:05d}&#x27;&#xA;        if target_key1 in hf:&#xA;            loaded_u_arrays.append(hf[target_key1][:])&#xA;        else:&#xA;            print(f&#x27;Dataset u for time step {i} not found in the file.&#x27;)&#xA;        if target_key2 in hf:&#xA;            loaded_h_arrays.append(hf[target_key2][:])&#xA;        else:&#xA;            print(f&#x27;Dataset h for time step {i} not found in the file.&#x27;)&#xA;&#xA;# Create "empty" imshow objects&#xA;# First one&#xA;norm1 = mcolors.Normalize(vmin=140, vmax=400)&#xA;cmap1 = plt.get_cmap(&#x27;hot&#x27;)&#xA;cmap1.set_under(&#x27;0.85&#x27;)&#xA;im1 = ax1.imshow(loaded_u_arrays[0], cmap=cmap1, norm=norm1)&#xA;ax1.set_title(&#x27;Diffusion Heatmap&#x27;)&#xA;ax1.set_xlabel(&#x27;X&#x27;)&#xA;ax1.set_ylabel(&#x27;Y&#x27;)&#xA;cbar_ax = fig.add_axes([0.05, 0.15, 0.03, 0.7])&#xA;cbar_ax.set_xlabel(&#x27;$T$ / K&#x27;, labelpad=20)&#xA;fig.colorbar(im1, cax=cbar_ax)&#xA;&#xA;&#xA;# Second one&#xA;ax2 = plt.subplot(1, 2, 2)&#xA;norm2 = mcolors.Normalize(vmin=-0.1, vmax=5)&#xA;cmap2 = plt.get_cmap(&#x27;viridis&#x27;)&#xA;cmap2.set_under(&#x27;0.85&#x27;)&#xA;im2 = ax2.imshow(loaded_h_arrays[0], cmap=cmap2, norm=norm2)&#xA;ax2.set_title(&#x27;Diffusion Hydrogen&#x27;)&#xA;ax2.set_xlabel(&#x27;X&#x27;)&#xA;ax2.set_ylabel(&#x27;Y&#x27;)&#xA;cbar_ax = fig.add_axes([0.9, 0.15, 0.03, 0.7])&#xA;cbar_ax.set_xlabel(&#x27;HD in ml/100g&#x27;, labelpad=20)&#xA;fig.colorbar(im2, cax=cbar_ax)&#xA;&#xA;# General&#xA;fig.subplots_adjust(right=0.85)&#xA;time_text = ax2.text(-15, 0.80, f&#x27;Time: {0} s&#x27;, transform=plt.gca().transAxes, color=&#x27;black&#x27;, fontsize=20)&#xA;&#xA;# Annotations&#xA;# Heat 1&#xA;marker_style = dict(marker=&#x27;o&#x27;, markersize=6, markerfacecolor=&#x27;black&#x27;, markeredgecolor=&#x27;black&#x27;)&#xA;ax1.scatter(*[10, 40], s=marker_style[&#x27;markersize&#x27;], c=marker_style[&#x27;markerfacecolor&#x27;],&#xA;            edgecolors=marker_style[&#x27;markeredgecolor&#x27;])&#xA;ann_heat1 = ax1.annotate(f&#x27;Temp: {loaded_u_arrays[0][40, 10]:.0f}&#x27;, xy=[10, 40], xycoords=&#x27;data&#x27;,&#xA;             xytext=([10, 40][0], [10, 40][1] &#x2B; 48), textcoords=&#x27;data&#x27;,&#xA;             arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color=&#x27;black&#x27;)&#xA;# Heat 2&#xA;ax1.scatter(*[140, 85], s=marker_style[&#x27;markersize&#x27;], 
c=marker_style[&#x27;markerfacecolor&#x27;],&#xA;            edgecolors=marker_style[&#x27;markeredgecolor&#x27;])&#xA;ann_heat2 = ax1.annotate(f&#x27;Temp: {loaded_u_arrays[0][85, 140]:.0f}&#x27;, xy=[140, 85], xycoords=&#x27;data&#x27;,&#xA;             xytext=([140, 85][0] &#x2B; 55, [140, 85][1] &#x2B; 3), textcoords=&#x27;data&#x27;,&#xA;             arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color=&#x27;black&#x27;)&#xA;&#xA;# Diffusion 1&#xA;marker_style = dict(marker=&#x27;o&#x27;, markersize=6, markerfacecolor=&#x27;black&#x27;, markeredgecolor=&#x27;black&#x27;)&#xA;ax2.scatter(*[10, 40], s=marker_style[&#x27;markersize&#x27;], c=marker_style[&#x27;markerfacecolor&#x27;],&#xA;            edgecolors=marker_style[&#x27;markeredgecolor&#x27;])&#xA;ann_diff1 = ax2.annotate(f&#x27;HD: {loaded_h_arrays[0][40, 10]:.0f}&#x27;, xy=[10, 40], xycoords=&#x27;data&#x27;,&#xA;             xytext=([10, 40][0], [10, 40][1] &#x2B; 48), textcoords=&#x27;data&#x27;,&#xA;             arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color=&#x27;black&#x27;)&#xA;# Diffusion 2&#xA;ax2.scatter(*[140, 85], s=marker_style[&#x27;markersize&#x27;], c=marker_style[&#x27;markerfacecolor&#x27;],&#xA;            edgecolors=marker_style[&#x27;markeredgecolor&#x27;])&#xA;ann_diff2 = ax2.annotate(f&#x27;HD: {loaded_h_arrays[0][85, 140]:.0f}&#x27;, xy=[140, 85], xycoords=&#x27;data&#x27;,&#xA;             xytext=([140, 85][0] &#x2B; 55, [140, 85][1] &#x2B; 3), textcoords=&#x27;data&#x27;,&#xA;             arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color=&#x27;black&#x27;)&#xA;&#xA;&#xA;# Function to update the animation&#xA;def update(frame, *args):&#xA;    loaded_u_array, loaded_h_array = args&#xA;&#xA;    s_per_frame = 2  # during weld/cooling you save a state every 2s&#xA;    frames_to_room_temp = 7803  # that means this many frames need to be animated&#xA;    dt_big = 87  # during "just diffusion" you save every 10 frame but 87s pass in those&#xA;&#xA;    # Update the time step shown&#xA;    if frame &lt;= frames_to_room_temp:&#xA;        im1.set_data(loaded_u_array[frame])&#xA;        im2.set_data(loaded_h_array[frame])&#xA;        time_text.set_text(f&#x27;Time: {frame * s_per_frame} s&#x27;)&#xA;&#xA;    else:&#xA;        im1.set_data(loaded_u_array[frame])&#xA;        im2.set_data(loaded_h_array[frame])&#xA;        calc_time = int(((2 * frames_to_room_temp) &#x2B; (frame - frames_to_room_temp) * 87) / 3600)&#xA;        time_text.set_text(f&#x27;Time: {calc_time} s&#x27;)&#xA;&#xA;    # Annotate some points&#xA;    ann_heat1.set_text(f&#x27;Temp: {loaded_u_arrays[frame][40, 10]:.0f}&#x27;)&#xA;    ann_heat2.set_text(f&#x27;Temp: {loaded_u_arrays[frame][85, 140]:.0f}&#x27;)&#xA;    ann_diff1.set_text(f&#x27;HD: {loaded_h_arrays[frame][40, 10]:.0f}&#x27;)&#xA;    ann_diff2.set_text(f&#x27;HD: {loaded_h_arrays[frame][85, 140]:.0f}&#x27;)&#xA;&#xA;    return im1, im2  # Return the updated artists&#xA;&#xA;&#xA;# Create the animation without displaying it&#xA;ani = FuncAnimation(fig, update, frames=frames_to_do, repeat=False, blit=True, interval=1,&#xA;                    fargs=(loaded_u_arrays, loaded_h_arrays))  # frames=len(loaded_u_arrays)&#xA;&#xA;# Create the progress bar with tqdm&#xA;with tqdm(total=frames_to_do, desc=&#x27;Creating Animation&#x27;) as pbar:  # total=len(loaded_u_arrays)&#xA;    for i in range(frames_to_do):  # for i in range(len(loaded_u_arrays)):&#xA;        update(i, loaded_u_arrays, 
loaded_h_arrays)  # Manually update the frame with both datasets&#xA;        pbar.update(1)  # Update the progress bar&#xA;&#xA;# Save the animation as a video file (e.g., MP4)&#xA;print("Converting to .mp4 now. This may take some time. This is normal, wait for Python to finish this process.")&#xA;ani.save(&#x27;diffusion_animation.mp4&#x27;, writer=&#x27;ffmpeg&#x27;, dpi=96, fps=60)&#xA;&#xA;# Close the figure to prevent it from being displayed&#xA;plt.close(fig)&#xA;&#xA;

    &#xA;

  • FFmpeg PNG image sequence into 360 video

    5 February, by Irmak Ozarslan

    I'm trying to turn a PNG sequence I got from Blender (already in equirectangular format, same properties for all files) into a 360 equirectangular video for projection mapping. My files have a 2-step frame numbering, resulting in file names such as 002, 004, 006... How is this done with ffmpeg? I'm a total noob with this program :) Thank you.

    &#xA;

    I checked the ffmpeg website, but for filenames that do not step by one (01, 02, 03) and instead step by two (02, 04, 06), I did not really understand how to convert my 7000 PNG images into a 360 video.
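    One way around the 2-step numbering (a sketch; the frame rate and encoder settings here are assumptions) is to let ffmpeg match the files with a glob pattern instead of a printf-style sequence:

    ffmpeg -framerate 30 -pattern_type glob -i '*.png' -c:v libx264 -pix_fmt yuv420p output.mp4

    Note that -pattern_type glob is not available in Windows builds of ffmpeg; there, the usual fallback is to rename the files to a contiguous 001, 002, 003... sequence and use -i %03d.png. Also, for players to treat the result as 360 video, spherical metadata still has to be injected afterwards (e.g., with Google's Spatial Media Metadata Injector).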

    &#xA;