Other articles (100)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)

  • Customize by adding your logo, banner, or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    After activation, MediaSPIP init automatically applies a preconfiguration so the new feature works out of the box; no separate configuration step is required.

On other sites (12769)

  • Send image and audio data to FFmpeg via named pipes

    5 May, by Nicke Manarin

    I'm able to send frames one by one to FFmpeg via a named pipe to create a video from them, but if I try sending audio through a second named pipe, FFmpeg accepts only one frame on the video pipe and then starts reading from the audio pipe.

    


    ffmpeg.exe -loglevel debug -hwaccel auto 
-f:v rawvideo -r 25 -pix_fmt bgra -video_size 782x601 -i \\.\pipe\video_to_ffmpeg 
-f:a s16le -ac 2 -ar 48000 -i \\.\pipe\audio_to_ffmpeg 
-c:v libx264 -preset fast -pix_fmt yuv420p 
-vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -crf 23 -f:v mp4 -fps_mode vfr 
-c:a aac -b:a 128k -ar 48000 -ac 2 
-y "C:\Users\user\Desktop\video.mp4"
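For rawvideo input, FFmpeg reads fixed-size frames derived from the pixel format and resolution, so each write to the video pipe must be exactly one frame. A quick sanity check of the frame size (BGRA is 4 bytes per pixel), which matches the probe position in the debug log below:

```python
# One rawvideo frame = width * height * bytes_per_pixel; BGRA = 4 bytes/pixel.
width, height = 782, 601
frame_bytes = width * height * 4
print(frame_bytes)  # 1879928 — matches "pos: 1879928 ... frames:1" in the debug log
```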


    


    I start both pipes like so:

    


    _imagePipeServer = new NamedPipeServerStream(ImagePipeName, PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous);
_imagePipeStreamWriter = new StreamWriter(_imagePipeServer);
_imagePipeServer.BeginWaitForConnection(null, null);

_audioPipeServer = new NamedPipeServerStream(AudioPipeName, PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous);
_audioPipeStreamWriter = new StreamWriter(_audioPipeServer);
_audioPipeServer.BeginWaitForConnection(null, null);


    


    And send the data to the pipes using these methods:

    


    public void EncodeFrame(byte[] data)
{
    if (_imagePipeServer?.IsConnected != true)
        throw new FFmpegException("Pipe not connected", Arguments, Output);

    _imagePipeStreamWriter?.BaseStream.Write(data, 0, data.Length);
}


    


    public void EncodeAudio(ISampleProvider provider, long length)
{
    if (_audioPipeServer?.IsConnected != true)
        throw new FFmpegException("Pipe not connected", Arguments, Output);

    var buffer = new byte[provider.WaveFormat.AverageBytesPerSecond * length / TimeSpan.TicksPerSecond];
    var bytesRead = provider.ToWaveProvider().Read(buffer, 0, buffer.Length);

    if (bytesRead < 1)
        return;

    _audioPipeStreamWriter?.BaseStream.Write(buffer, 0, bytesRead);
    _audioPipeStreamWriter?.BaseStream.Flush();
}


    


    Not sending the audio (and thus not creating the audio pipe) works, with FFmpeg taking one frame at a time and creating the video normally.

    


    But if I try sending the audio via a secondary pipe, I can only send one frame. This is the output when that happens (by the way, this is FFmpeg v7.1):

    


    Splitting the commandline.
Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
Reading option '-hwaccel' ... matched as option 'hwaccel' (use HW accelerated decoding) with argument 'auto'.
Reading option '-f:v' ... matched as option 'f' (force container format (auto-detected otherwise)) with argument 'rawvideo'.
Reading option '-r' ... matched as option 'r' (override input framerate/convert to given output framerate (Hz value, fraction or abbreviation)) with argument '25'.
Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) with argument 'bgra'.
Reading option '-video_size' ... matched as AVOption 'video_size' with argument '782x601'.
Reading option '-i' ... matched as input url with argument '\\.\pipe\video_to_ffmpeg'.
Reading option '-f:a' ... matched as option 'f' (force container format (auto-detected otherwise)) with argument 's16le'.
Reading option '-ac' ... matched as option 'ac' (set number of audio channels) with argument '2'.
Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '48000'.
Reading option '-i' ... matched as input url with argument '\\.\pipe\audio_to_ffmpeg'.
Reading option '-c:v' ... matched as option 'c' (select encoder/decoder ('copy' to copy stream without reencoding)) with argument 'libx264'.
Reading option '-preset' ... matched as AVOption 'preset' with argument 'fast'.
Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) with argument 'yuv420p'.
Reading option '-vf' ... matched as option 'vf' (alias for -filter:v (apply filters to video streams)) with argument 'scale=trunc(iw/2)*2:trunc(ih/2)*2'.
Reading option '-crf' ... matched as AVOption 'crf' with argument '23'.
Reading option '-f:v' ... matched as option 'f' (force container format (auto-detected otherwise)) with argument 'mp4'.
Reading option '-fps_mode' ... matched as option 'fps_mode' (set framerate mode for matching video streams; overrides vsync) with argument 'vfr'.
Reading option '-c:a' ... matched as option 'c' (select encoder/decoder ('copy' to copy stream without reencoding)) with argument 'aac'.
Reading option '-b:a' ... matched as option 'b' (video bitrate (please use -b:v)) with argument '128k'.
Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '48000'.
Reading option '-ac' ... matched as option 'ac' (set number of audio channels) with argument '2'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option 'C:\Users\user\Desktop\video.mp4' ... matched as output url.
Finished splitting the commandline.

Parsing a group of options: global.
Applying option loglevel (set logging level) with argument debug.
Applying option y (overwrite output files) with argument 1.
Successfully parsed a group of options.

Parsing a group of options: input url \\.\pipe\video_to_ffmpeg.
Applying option hwaccel (use HW accelerated decoding) with argument auto.
Applying option f:v (force container format (auto-detected otherwise)) with argument rawvideo.
Applying option r (override input framerate/convert to given output framerate (Hz value, fraction or abbreviation)) with argument 25.
Applying option pix_fmt (set pixel format) with argument bgra.
Successfully parsed a group of options.

Opening an input file: \\.\pipe\video_to_ffmpeg.
[rawvideo @ 000001c302ee08c0] Opening '\\.\pipe\video_to_ffmpeg' for reading
[file @ 000001c302ee1000] Setting default whitelist 'file,crypto,data'
[rawvideo @ 000001c302ee08c0] Before avformat_find_stream_info() pos: 0 bytes read:65536 seeks:0 nb_streams:1
[rawvideo @ 000001c302ee08c0] All info found
[rawvideo @ 000001c302ee08c0] After avformat_find_stream_info() pos: 1879928 bytes read:1879928 seeks:0 frames:1
Input #0, rawvideo, from '\\.\pipe\video_to_ffmpeg':
  Duration: N/A, start: 0.000000, bitrate: 375985 kb/s
  Stream #0:0, 1, 1/25: Video: rawvideo, 1 reference frame (BGRA / 0x41524742), bgra, 782x601, 0/1, 375985 kb/s, 25 tbr, 25 tbn
Successfully opened the file.

Parsing a group of options: input url \\.\pipe\audio_to_ffmpeg.
Applying option f:a (force container format (auto-detected otherwise)) with argument s16le.
Applying option ac (set number of audio channels) with argument 2.
Applying option ar (set audio sampling rate (in Hz)) with argument 48000.
Successfully parsed a group of options.

Opening an input file: \\.\pipe\audio_to_ffmpeg.
[s16le @ 000001c302ef5380] Opening '\\.\pipe\audio_to_ffmpeg' for reading
[file @ 000001c302ef58c0] Setting default whitelist 'file,crypto,data'


    


    The difference when I send 1 frame and then some bytes of audio (an arbitrary length based on the fps) is that I get this extra log line at the end:

    


    [s16le @ 0000025948c96d00] Before avformat_find_stream_info() pos: 0 bytes read:15360 seeks:0 nb_streams:1
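For reference, s16le stereo at 48 kHz comes to 192,000 bytes per second, so the 15,360 bytes probed here correspond to two 25 fps video frames' worth of audio. A sketch of sizing audio chunks per video frame (constants mirror the command above):

```python
# Bytes of s16le stereo 48 kHz audio that cover exactly one 25 fps video frame.
sample_rate, channels, bytes_per_sample, fps = 48000, 2, 2, 25
chunk = sample_rate * channels * bytes_per_sample // fps
print(chunk)      # 7680
print(2 * chunk)  # 15360 — the amount probed in the log line above
```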


    


    Extra calls to EncodeFrame() hang forever at the BaseStream.Write(frameBytes, 0, frameBytes.Length) call, suggesting that FFmpeg is no longer reading the data.

    


    Something is causing FFmpeg to close or stop reading the first pipe and only accept data from the second one.

    


    Perhaps the command is missing something?

    



    


    🏆 Working solution

    


    I started using two BlockingCollection objects, with the consumers running in separate tasks.
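For readers more at home in Python, the same producer/consumer shape can be sketched with a queue.Queue drained by a worker thread (names here are illustrative, not from the original code); a None sentinel plays the role of CompleteAdding():

```python
import io
import queue
import threading

def start_consumer(q: queue.Queue, stream) -> threading.Thread:
    """Drain byte chunks from q into stream until the None sentinel arrives."""
    def run():
        while True:
            chunk = q.get()
            if chunk is None:  # sentinel: producer signaled completion
                break
            stream.write(chunk)
        stream.flush()
    t = threading.Thread(target=run, daemon=True)
    t.start()
    return t

# Usage: the producer queues chunks, then signals completion with None.
buf = io.BytesIO()
q = queue.Queue()
t = start_consumer(q, buf)
q.put(b"frame1")
q.put(b"frame2")
q.put(None)
t.join()
print(buf.getvalue())  # b'frame1frame2'
```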

    


    Start the process, setting up the pipes:

    


    private Process? _process;
private NamedPipeServerStream? _imagePipeServer;
private NamedPipeServerStream? _audioPipeServer;
private StreamWriter? _imagePipeStreamWriter;
private StreamWriter? _audioPipeStreamWriter;
private readonly BlockingCollection<byte[]> _videoCollection = new();
private readonly BlockingCollection<byte[]> _audioCollection = new();

private const string ImagePipeName = "video_to_ffmpeg";
private const string AudioPipeName = "audio_to_ffmpeg";
private const string PipeStructure = @"\\.\pipe\"; //This part is only sent to FFmpeg, not to the .NET pipe creation.

public void StartEncoding(string arguments)
{
    _process = new Process
    {
        StartInfo = new ProcessStartInfo
        {
            FileName = "path to ffmpeg",
            Arguments = arguments.Replace("{image}", PipeStructure + ImagePipeName).Replace("{audio}", PipeStructure + AudioPipeName),
            RedirectStandardInput = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            UseShellExecute = false,
            CreateNoWindow = true
        }
    };

    StartFramePipeConnection();
    StartAudioPipeConnection();

    _process.Start();
    _process.BeginErrorReadLine();
    _process.BeginOutputReadLine();
}

private void StartFramePipeConnection()
{
    if (_imagePipeServer != null)
    {
        if (_imagePipeServer.IsConnected)
            _imagePipeServer.Disconnect();

        _imagePipeServer.Dispose();
    }

    _imagePipeServer = new NamedPipeServerStream(ImagePipeName, PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous);
    _imagePipeStreamWriter = new StreamWriter(_imagePipeServer);
    _imagePipeServer.BeginWaitForConnection(VideoPipe_Connected, null);
}

private void StartAudioPipeConnection()
{
    if (_audioPipeServer != null)
    {
        if (_audioPipeServer.IsConnected)
            _audioPipeServer.Disconnect();

        _audioPipeServer.Dispose();
    }

    _audioPipeServer = new NamedPipeServerStream(AudioPipeName, PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous);
    _audioPipeStreamWriter = new StreamWriter(_audioPipeServer);
    _audioPipeServer.BeginWaitForConnection(AudioPipe_Connected, null);
}


    


    Start sending data as soon as each pipe connects. Once the BlockingCollection is signaled that no more data will be added, the consumer leaves the foreach block and waits for the pipe to drain.

    


    private void VideoPipe_Connected(IAsyncResult ar)
{
    Task.Run(() =>
    {
        try
        {
            foreach (var frameBytes in _videoCollection.GetConsumingEnumerable())
            {                    
                _imagePipeStreamWriter?.BaseStream.Write(frameBytes, 0, frameBytes.Length);
            }

            _imagePipeServer?.WaitForPipeDrain();
            _imagePipeStreamWriter?.Close();
        }
        catch (Exception e)
        {
            //Logging
            throw;
        }
    });
}

private void AudioPipe_Connected(IAsyncResult ar)
{
    Task.Run(() =>
    {
        try
        {
            foreach (var audioChunk in _audioCollection.GetConsumingEnumerable())
            {
                _audioPipeStreamWriter?.BaseStream.Write(audioChunk, 0, audioChunk.Length);
            }

            _audioPipeServer?.WaitForPipeDrain();
            _audioPipeStreamWriter?.Close();
        }
        catch (Exception e)
        {
            //Logging
            throw;
        }
    });
}


    


    You can start queuing image and audio data as soon as the BlockingCollections are initialized; there's no need to wait for the pipes to connect.

    


    public void EncodeImage(byte[] data)
{
    _videoCollection.Add(data);
}

public void EncodeAudio(ISampleProvider provider, long length)
{
    var sampleCount = (int)(provider.WaveFormat.SampleRate * ((double)length / TimeSpan.TicksPerSecond) * provider.WaveFormat.Channels);
    var floatBuffer = new float[sampleCount];

    var samplesRead = provider.Read(floatBuffer, 0, sampleCount);

    if (samplesRead < 1)
        return;

    var byteBuffer = new byte[samplesRead * 4]; //4 bytes per float, f32le.
    Buffer.BlockCopy(floatBuffer, 0, byteBuffer, 0, byteBuffer.Length);

    
    _audioCollection.Add(byteBuffer);
}
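The sample-count and byte arithmetic above can be sanity-checked in isolation (one .NET tick is 100 ns; f32le is 4 bytes per sample):

```python
# Samples and bytes of f32le stereo 48 kHz audio for `length` ticks (100 ns units).
TICKS_PER_SECOND = 10_000_000  # .NET TimeSpan.TicksPerSecond
sample_rate, channels = 48000, 2
length = TICKS_PER_SECOND // 25  # ticks spanning one 25 fps video frame
sample_count = int(sample_rate * (length / TICKS_PER_SECOND) * channels)
byte_count = sample_count * 4  # 4 bytes per float sample
print(sample_count, byte_count)  # 3840 15360
```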


    


    Once you have finished producing data, make sure to signal the BlockingCollections:

    


    public void FinishEncoding()
{
    //Signal the end of video/audio producer.
    _videoCollection.CompleteAdding();
    _audioCollection.CompleteAdding();

    //Wait up to 20 seconds for encoding to finish.
    _process?.WaitForExit(20_000);
}


    


    The FFmpeg arguments were changed slightly:

    


    -loglevel trace -hwaccel auto 
-f:v rawvideo -probesize 32 -r 25 -pix_fmt bgra -video_size 1109x627 -i {image} 
-f:a f32le -ac 2 -ar 48000 -probesize 32 -i {audio} 
-c:v libx264 -preset fast -pix_fmt yuv420p 
-vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -crf 23 -f:v mp4 -fps_mode vfr 
-c:a aac -b:a 128k -ar 48000 -ac 2 
-y "C:\Users\user\Desktop\Video.mp4"


    


  • Trying to get the current FPS and Frametime value into Matplotlib title

    16 June 2022, by TiSoBr

    I'm trying to turn an exported CSV of benchmark logs into an animated graph. It works so far, but I can't get the titles on top of both plots to animate with their current FPS and frametime (in ms) values.

    


    That's the output I'm getting. It looks like it simply stores all the values in the title instead of updating them?

    


    Screengrab of cli output
Screengrab of the final output (inverted)

    


    from __future__ import division
import sys, getopt
import time
import matplotlib
import numpy as np
import subprocess
import math
import re
import argparse
import os
import glob

import matplotlib.animation as animation
import matplotlib.pyplot as plt


def check_pos(arg):
    ivalue = int(arg)
    if ivalue <= 0:
        raise argparse.ArgumentTypeError("%s Not a valid positive integer value" % arg)
    return True
    
def moving_average(x, w):
    return np.convolve(x, np.ones(w), 'valid') / w
    

parser = argparse.ArgumentParser(
    description = "Example Usage python frame_scan.py -i mangohud -c '#fff' -o mymov",
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("-i", "--input", help = "Input data set from mangohud", required = True, nargs='+', type=argparse.FileType('r'), default=sys.stdin)
parser.add_argument("-o", "--output", help = "Output file name", required = True, type=str, default = "")
parser.add_argument("-r", "--framerate", help = "Set the desired framerate", required = False, type=float, default = 60)
parser.add_argument("-c", "--colors", help = "Colors for the line graphs; must be in quotes", required = True, type=str, nargs='+', default = 60)
parser.add_argument("--fpslength", help = "Configures how long the data will be shown on the FPS graph", required = False, type=float, default = 5)
parser.add_argument("--fpsthickness", help = "Changes the line width for the FPS graph", required = False, type=float, default = 3)
parser.add_argument("--frametimelength", help = "Configures how long the data will be shown on the frametime graph", required = False, type=float, default = 2.5)
parser.add_argument("--frametimethickness", help = "Changes the line width for the frametime graph", required = False, type=float, default = 1.5)
parser.add_argument("--graphcolor", help = "Changes all of the line colors on the graph; expects hex value", required = False, default = '#FFF')
parser.add_argument("--graphthicknes", help = "Changes the line width of the graph", required = False, type=float, default = 1)
parser.add_argument("-ts","--textsize", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 23)
parser.add_argument("-fsM","--fpsmax", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 180)
parser.add_argument("-fsm","--fpsmin", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 0)
parser.add_argument("-fss","--fpsstep", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 30)
parser.add_argument("-ftM","--frametimemax", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 50)
parser.add_argument("-ftm","--frametimemin", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 0)
parser.add_argument("-fts","--frametimestep", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 10)

arg = parser.parse_args()
status = False


if arg.input:
    status = True
if arg.output:
    status = True
if arg.framerate:
    status = check_pos(arg.framerate)
if arg.fpslength:
    status = check_pos(arg.fpslength)
if arg.fpsthickness:
    status = check_pos(arg.fpsthickness)
if arg.frametimelength:
    status = check_pos(arg.frametimelength)
if arg.frametimethickness:
    status = check_pos(arg.frametimethickness)
if arg.colors:
    if len(arg.input) == len(arg.colors):
        for i in arg.colors:
            if re.match(r"^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$", i):
                status = True
            else:
                print('{} : Isn\'t a valid hex value!'.format(i))
                status = False
    else:
        print('You must have the same amount of colors as files in input!')
        status = False
if arg.graphcolor:
    if re.match(r"^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$", arg.graphcolor):
        status = True
    else:
        print('{} : Isn\'t a valid hex value!'.format(arg.graphcolor))
        status = False
if arg.graphthicknes:
    status = check_pos(arg.graphthicknes)
if arg.textsize:
    status = check_pos(arg.textsize)
if not status:
    print("For a list of arguments try -h or --help") 
    exit()


# Empty output folder
files = glob.glob('/output/*')
for f in files:
    os.remove(f)


# We need to know the longest recording out of all inputs so we know when to stop the video
longest_data = 0

# Format the raw data into a list of tuples (fps, frame time in ms, time from start in micro seconds)
# The first three lines of our data are setup so we ignore them
data_formated = []
for li, i in enumerate(arg.input):
    t = 0
    sublist = []
    for line in i.readlines()[3:]:
        x = line[:-1].split(',')
        fps = float(x[0])
        frametime = int(x[1])/1000 # convert from microseconds to milliseconds
        elapsed = int(x[11])/1000 # convert from nanosecond to microseconds
        data = (fps, frametime, elapsed)
        sublist.append(data)
    # Compare the last entry of each list with the longest so far
    if sublist[-1][2] >= longest_data:
        longest_data = sublist[-1][2]
    data_formated.append(sublist)


max_blocksize = max(arg.fpslength, arg.frametimelength) * arg.framerate
blockSize = arg.framerate * arg.fpslength


# Get step time in microseconds
step = (1/arg.framerate) * 1000000 # 1000000 is one second in microseconds
frame_size_fps = (arg.fpslength * arg.framerate) * step
frame_size_frametime = (arg.frametimelength * arg.framerate) * step


# Total frames will have to be updated for more than one source
total_frames = int(int(longest_data) / step)


if True: # Gonna be honest, this only exists so I can collapse this block of code

    # Sets up our figures to be next to each other (horizontally) and with a ratio 3:1 to each other
    fig, (ax1, ax2) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]})

    # Size of whole output 1920x360 1080/3=360
    fig.set_size_inches(19.20, 3.6)

    # Make the background transparent
    fig.patch.set_alpha(0)


    # Loop through all active axes; saves a lot of lines in ax1.do_thing(x) ax2.do_thing(x)
    for axes in fig.axes:

        # Set all splines to the same color and width
        for loc, spine in axes.spines.items():
            axes.spines[loc].set_color(arg.graphcolor)
            axes.spines[loc].set_linewidth(arg.graphthicknes)

        # Make sure we don't render any data points as this will be our background
        axes.set_xlim(-(max_blocksize * step), 0)
        

        # Make both plots transparent as well as the background
        axes.patch.set_alpha(.5)
        axes.patch.set_color('#020202')

        # Change the Y axis info to be on the right side
        axes.yaxis.set_label_position("right")
        axes.yaxis.tick_right()

        # Add the white lines across the graphs; the location of the lines are based off set_{}ticks
        axes.grid(alpha=.8, b=True, which='both', axis='y', color=arg.graphcolor, linewidth=arg.graphthicknes)

        # Remove X axis info
        axes.set_xticks([])

    # Add a another Y axis so ticks are on both sides
    tmp_ax1 = ax1.secondary_yaxis("left")
    tmp_ax2 = ax2.secondary_yaxis("left")

    # Set both to the same values
    ax1.set_yticks(np.arange(arg.fpsmin, arg.fpsmax + 1, step=arg.fpsstep))
    ax2.set_yticks(np.arange(arg.frametimemin, arg.frametimemax + 1, step=arg.frametimestep))
    tmp_ax1.set_yticks(np.arange(arg.fpsmin , arg.fpsmax + 1, step=arg.fpsstep))
    tmp_ax2.set_yticks(np.arange(arg.frametimemin, arg.frametimemax + 1, step=arg.frametimestep))

    # Change the "ticks" to be white and correct size also change font size
    ax1.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=16, labelsize=arg.textsize, labelcolor=arg.graphcolor)
    ax2.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=16, labelsize=arg.textsize, labelcolor=arg.graphcolor)
    tmp_ax1.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=8, labelsize=0) # Label size of 0 disables the fps/frame numbers
    tmp_ax2.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=8, labelsize=0)


    # Limits Y scale
    ax1.set_ylim(arg.fpsmin,arg.fpsmax + 1)
    ax2.set_ylim(arg.frametimemin,arg.frametimemax + 1)

    # Add an empty plot
    line = ax1.plot([], lw=arg.fpsthickness)
    line2 = ax2.plot([], lw=arg.frametimethickness)

    # Sets all the data for our benchmark
    for benchmarks, color in zip(data_formated, arg.colors):
        y = moving_average([x[0] for x in benchmarks], 25)
        y2 = [x[1] for x in benchmarks]
        x = [x[2] for x in benchmarks]
        line += ax1.plot(x[12:-12],y, c=color, lw=arg.fpsthickness)
        line2 += ax2.step(x,y2, c=color, lw=arg.fpsthickness)
    
    # Add titles with values
    ax1.set_title("Avg. frames per second: {}".format(y2), color=arg.graphcolor, fontsize=20, fontweight='bold', loc='left')
    ax2.set_title("Frametime in ms: {}".format(y2), color=arg.graphcolor, fontsize=20, fontweight='bold', loc='left')  

    # Removes unwanted white space; also controls the space between the two graphs
    plt.tight_layout(pad=0, h_pad=0, w_pad=2.5)
    
    fig.canvas.draw()

    # Cache the background
    axbackground = fig.canvas.copy_from_bbox(ax1.bbox)
    ax2background = fig.canvas.copy_from_bbox(ax2.bbox)


# Create a ffmpeg instance as a subprocess we will pipe the finished frame into ffmpeg
# encoded in Apple QuickTime (qtrle) for small(ish) file size and alpha support
# There are free and opensource types that will also do this but with much larger sizes
canvas_width, canvas_height = fig.canvas.get_width_height()
outf = '{}.mov'.format(arg.output)
cmdstring = ('ffmpeg',
                '-stats', '-hide_banner', '-loglevel', 'error', # Makes ffmpeg less annoying / to much console output
                '-y', '-r', '60', # set the fps of the video
                '-s', '%dx%d' % (canvas_width, canvas_height), # size of image string
                '-pix_fmt', 'argb', # format cant be changed since this is what  `fig.canvas.tostring_argb()` outputs
                '-f', 'rawvideo',  '-i', '-', # tell ffmpeg to expect raw video from the pipe
                '-vcodec', 'qtrle', outf) # output encoding must support alpha channel
pipe = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

def render_frame(frame : int):

    # Set the bounds of the graph for each frame to render the correct data
    start = (frame * step) - frame_size_fps
    end = start + frame_size_fps
    ax1.set_xlim(start,end)
     
     
    start = (frame * step) - frame_size_frametime
    end = start + frame_size_frametime
    ax2.set_xlim(start,end)
    

    # Restore background
    fig.canvas.restore_region(axbackground)
    fig.canvas.restore_region(ax2background)

    # Redraw just the points will only draw points with in `axes.set_xlim`
    for i in line:
        ax1.draw_artist(i)
        
    for i in line2:
        ax2.draw_artist(i)

    # Fill in the axes rectangle
    fig.canvas.blit(ax1.bbox)
    fig.canvas.blit(ax2.bbox)
    
    fig.canvas.flush_events()

    # Converts the finished frame to ARGB
    string = fig.canvas.tostring_argb()
    return string




#import multiprocessing
#p = multiprocessing.Pool()
#for i, _ in enumerate(p.imap(render_frame, range(0, int(total_frames + max_blocksize))), 20):
#    pipe.stdin.write(_)
#    sys.stderr.write('\rdone {0:%}'.format(i/(total_frames + max_blocksize)))
#p.close()

# Single-threaded; not much slower than multi-threading
if __name__ == "__main__":
    for i , _ in enumerate(range(0, int(total_frames + max_blocksize))):
        pipe.stdin.write(render_frame(_))
        sys.stderr.write('\rdone {0:%}'.format(i/(total_frames + max_blocksize)))
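One likely culprit for the frozen titles: set_title is called once, outside the animation loop, with the whole y2 list formatted into the string, so the text never changes. To animate the values, the current sample has to be looked up and the title redrawn each frame. A minimal sketch of the lookup (the helper name is hypothetical, not from the original script):

```python
import bisect

def values_at(benchmarks, t_us):
    """Return the (fps, frametime_ms) sample at or just before time t_us.
    `benchmarks` is a list of (fps, frametime, elapsed_us) tuples sorted by
    elapsed time, like the sublists built in data_formated above."""
    times = [b[2] for b in benchmarks]
    i = max(bisect.bisect_right(times, t_us) - 1, 0)
    return benchmarks[i][0], benchmarks[i][1]

# Usage: inside render_frame, look up the value at the window's right edge and
# refresh the title, e.g. ax1.set_title("Avg. FPS: {:.1f}".format(fps)); note
# that with blitting the title artist sits outside ax1.bbox, so the blitted
# region must cover it for the change to appear.
sample = [(60.0, 16.6, 0), (58.0, 17.2, 40000), (59.0, 16.9, 80000)]
print(values_at(sample, 50000))  # (58.0, 17.2)
```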


    


  • Evolution #4462 (New): Rendering of the plan page

    20 March 2020, by jluc -

    On the ?exec=plan page you can expand the tree of sections. However, there is a lack of visual cues for this feature, because:
    - when you hover over a section name, the style changes and you know you can click, but clicking the name sends you to that section's page, in the same tab
    - there is a small, pale triangle on the left, but it doesn't change style on hover (apart from the mouse pointer), and its tiny size and low contrast don't invite hovering; the mere cursor change is a weak invitation to click. Yet that is what you must click to expand a section.

    It would be useful to have clearer cues that make the plan's tree easier to use:
    - perhaps a '+' that is bigger and higher-contrast than the triangle
    - and in any case, a change of appearance on hover (plus a title attribute?)
    - a textual hint about what this page allows (maybe the companion already does this?). For example: "You can expand sections by clicking the + or the triangle on the left, and you can move sections and their content within the tree by drag and drop"