Media (91)

Other articles (97)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several plugins in addition to those used by the channels in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests for the creation of a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • Enhancing it visually

    10 April 2011

    MediaSPIP is based on a system of themes and skeletons. Skeletons define where information is placed on the page, defining a specific use of the platform, while themes provide the overall graphic design.
    Anyone can propose a new graphic theme or a skeleton and make it available to the community.

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier(), so that visitors are able to edit their own information on the authors page

On other sites (7932)

  • Trying to get the current FPS and Frametime value into Matplotlib title

    16 June 2022, by TiSoBr

    I'm trying to turn an exported CSV of benchmark logs into an animated graph. It works so far, but I can't get the titles above the two plots to update with the current FPS and frametime (in ms) values.

    


    That's the output I'm getting. It looks like the title simply accumulates all the values instead of updating to the current one?

    


    [Screengrab of the CLI output]
    [Screengrab of the final output (inverted)]

    


    from __future__ import division
import sys, getopt
import time
import matplotlib
import numpy as np
import subprocess
import math
import re
import argparse
import os
import glob

import matplotlib.animation as animation
import matplotlib.pyplot as plt


def check_pos(arg):
    # Accept any positive number (several of these arguments are floats)
    value = float(arg)
    if value <= 0:
        raise argparse.ArgumentTypeError("%s is not a valid positive value" % arg)
    return True
    
def moving_average(x, w):
    return np.convolve(x, np.ones(w), 'valid') / w
    

parser = argparse.ArgumentParser(
    description = "Example Usage python frame_scan.py -i mangohud -c '#fff' -o mymov",
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("-i", "--input", help = "Input data set from mangohud", required = True, nargs='+', type=argparse.FileType('r'), default=sys.stdin)
parser.add_argument("-o", "--output", help = "Output file name", required = True, type=str, default = "")
parser.add_argument("-r", "--framerate", help = "Set the desired framerate", required = False, type=float, default = 60)
parser.add_argument("-c", "--colors", help = "Colors for the line graphs; must be in quotes", required = True, type=str, nargs='+', default = 60)
parser.add_argument("--fpslength", help = "Configures how long the data will be shown on the FPS graph", required = False, type=float, default = 5)
parser.add_argument("--fpsthickness", help = "Changes the line width for the FPS graph", required = False, type=float, default = 3)
parser.add_argument("--frametimelength", help = "Configures how long the data will be shown on the frametime graph", required = False, type=float, default = 2.5)
parser.add_argument("--frametimethickness", help = "Changes the line width for the frametime graph", required = False, type=float, default = 1.5)
parser.add_argument("--graphcolor", help = "Changes all of the line colors on the graph; expects hex value", required = False, default = '#FFF')
parser.add_argument("--graphthicknes", help = "Changes the line width of the graph", required = False, type=float, default = 1)
parser.add_argument("-ts","--textsize", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 23)
parser.add_argument("-fsM","--fpsmax", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 180)
parser.add_argument("-fsm","--fpsmin", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 0)
parser.add_argument("-fss","--fpsstep", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 30)
parser.add_argument("-ftM","--frametimemax", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 50)
parser.add_argument("-ftm","--frametimemin", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 0)
parser.add_argument("-fts","--frametimestep", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 10)

arg = parser.parse_args()
status = False


if arg.input:
    status = True
if arg.output:
    status = True
if arg.framerate:
    status = check_pos(arg.framerate)
if arg.fpslength:
    status = check_pos(arg.fpslength)
if arg.fpsthickness:
    status = check_pos(arg.fpsthickness)
if arg.frametimelength:
    status = check_pos(arg.frametimelength)
if arg.frametimethickness:
    status = check_pos(arg.frametimethickness)
if arg.colors:
    if len(arg.input) == len(arg.colors): # one color per input file
        for i in arg.colors:
            if re.match(r"^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$", i):
                status = True
            else:
                print('{} : Isn\'t a valid hex value!'.format(i))
                status = False
    else:
        print('You must have the same amount of colors as files in input!')
        status = False
if arg.graphcolor:
    if re.match(r"^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$", arg.graphcolor):
        status = True
    else:
        print('{} : Isn\'t a valid hex value!'.format(arg.graphcolor))
        status = False
if arg.graphthicknes:
    status = check_pos(arg.graphthicknes)
if arg.textsize:
    status = check_pos(arg.textsize)
if not status:
    print("For a list of arguments try -h or --help") 
    exit()


# Empty output folder
files = glob.glob('/output/*')
for f in files:
    os.remove(f)


# We need to know the longest recording out of all inputs so we know when to stop the video
longest_data = 0

# Format the raw data into a list of tuples (fps, frame time in ms, time from start in micro seconds)
# The first three lines of our data are setup so we ignore them
data_formated = []
for li, i in enumerate(arg.input):
    t = 0
    sublist = []
    for line in i.readlines()[3:]:
        x = line[:-1].split(',')
        fps = float(x[0])
        frametime = int(x[1])/1000 # convert from microseconds to milliseconds
        elapsed = int(x[11])/1000 # convert from nanoseconds to microseconds
        data = (fps, frametime, elapsed)
        sublist.append(data)
    # Track the longest recording so we know when the video should end
    if sublist[-1][2] >= longest_data:
        longest_data = sublist[-1][2]
    data_formated.append(sublist)


max_blocksize = max(arg.fpslength, arg.frametimelength) * arg.framerate
blockSize = arg.framerate * arg.fpslength


# Get step time in microseconds
step = (1/arg.framerate) * 1000000 # 1000000 is one second in microseconds
frame_size_fps = (arg.fpslength * arg.framerate) * step
frame_size_frametime = (arg.frametimelength * arg.framerate) * step


# Total frames will have to be updated for more than one source
total_frames = int(int(longest_data) / step)


if True: # Gonna be honest, this only exists so I can collapse this block of code

    # Sets up our figures to be next to each other (horizontally) and with a ratio 3:1 to each other
    fig, (ax1, ax2) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]})

    # Size of whole output 1920x360 1080/3=360
    fig.set_size_inches(19.20, 3.6)

    # Make the background transparent
    fig.patch.set_alpha(0)


    # Loop through all active axes; saves a lot of lines in ax1.do_thing(x) ax2.do_thing(x)
    for axes in fig.axes:

        # Set all splines to the same color and width
        for loc, spine in axes.spines.items():
            axes.spines[loc].set_color(arg.graphcolor)
            axes.spines[loc].set_linewidth(arg.graphthicknes)

        # Make sure we don't render any data points as this will be our background
        axes.set_xlim(-(max_blocksize * step), 0)
        

        # Make both plots transparent as well as the background
        axes.patch.set_alpha(.5)
        axes.patch.set_color('#020202')

        # Change the Y axis info to be on the right side
        axes.yaxis.set_label_position("right")
        axes.yaxis.tick_right()

        # Add the white lines across the graphs; the location of the lines are based off set_{}ticks
        axes.grid(alpha=.8, b=True, which='both', axis='y', color=arg.graphcolor, linewidth=arg.graphthicknes)

        # Remove X axis info
        axes.set_xticks([])

    # Add a another Y axis so ticks are on both sides
    tmp_ax1 = ax1.secondary_yaxis("left")
    tmp_ax2 = ax2.secondary_yaxis("left")

    # Set both to the same values
    ax1.set_yticks(np.arange(arg.fpsmin, arg.fpsmax + 1, step=arg.fpsstep))
    ax2.set_yticks(np.arange(arg.frametimemin, arg.frametimemax + 1, step=arg.frametimestep))
    tmp_ax1.set_yticks(np.arange(arg.fpsmin , arg.fpsmax + 1, step=arg.fpsstep))
    tmp_ax2.set_yticks(np.arange(arg.frametimemin, arg.frametimemax + 1, step=arg.frametimestep))

    # Change the "ticks" to be white and correct size also change font size
    ax1.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=16, labelsize=arg.textsize, labelcolor=arg.graphcolor)
    ax2.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=16, labelsize=arg.textsize, labelcolor=arg.graphcolor)
    tmp_ax1.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=8, labelsize=0) # Label size of 0 disables the fps/frame numbers
    tmp_ax2.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=8, labelsize=0)


    # Limits Y scale
    ax1.set_ylim(arg.fpsmin,arg.fpsmax + 1)
    ax2.set_ylim(arg.frametimemin,arg.frametimemax + 1)

    # Add an empty plot
    line = ax1.plot([], lw=arg.fpsthickness)
    line2 = ax2.plot([], lw=arg.frametimethickness)

    # Sets all the data for our benchmark
    for benchmarks, color in zip(data_formated, arg.colors):
        y = moving_average([x[0] for x in benchmarks], 25)
        y2 = [x[1] for x in benchmarks]
        x = [x[2] for x in benchmarks]
        line += ax1.plot(x[12:-12],y, c=color, lw=arg.fpsthickness)
        line2 += ax2.step(x, y2, c=color, lw=arg.frametimethickness)
    
    # Add titles with values
    ax1.set_title("Avg. frames per second: {}".format(y2), color=arg.graphcolor, fontsize=20, fontweight='bold', loc='left')
    ax2.set_title("Frametime in ms: {}".format(y2), color=arg.graphcolor, fontsize=20, fontweight='bold', loc='left')  

    # Removes unwanted white space; also controls the space between the two graphs
    plt.tight_layout(pad=0, h_pad=0, w_pad=2.5)
    
    fig.canvas.draw()

    # Cache the background
    axbackground = fig.canvas.copy_from_bbox(ax1.bbox)
    ax2background = fig.canvas.copy_from_bbox(ax2.bbox)


# Create a ffmpeg instance as a subprocess we will pipe the finished frame into ffmpeg
# encoded in Apple QuickTime (qtrle) for small(ish) file size and alpha support
# There are free and opensource types that will also do this but with much larger sizes
canvas_width, canvas_height = fig.canvas.get_width_height()
outf = '{}.mov'.format(arg.output)
cmdstring = ('ffmpeg',
                '-stats', '-hide_banner', '-loglevel', 'error', # make ffmpeg less noisy / avoid too much console output
                '-y', '-r', '60', # set the fps of the video
                '-s', '%dx%d' % (canvas_width, canvas_height), # size of image string
                '-pix_fmt', 'argb', # format can't be changed since this is what `fig.canvas.tostring_argb()` outputs
                '-f', 'rawvideo',  '-i', '-', # tell ffmpeg to expect raw video from the pipe
                '-vcodec', 'qtrle', outf) # output encoding must support alpha channel
pipe = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

def render_frame(frame : int):

    # Set the bounds of the graph for each frame to render the correct data
    start = (frame * step) - frame_size_fps
    end = start + frame_size_fps
    ax1.set_xlim(start,end)
     
     
    start = (frame * step) - frame_size_frametime
    end = start + frame_size_frametime
    ax2.set_xlim(start,end)
    

    # Restore background
    fig.canvas.restore_region(axbackground)
    fig.canvas.restore_region(ax2background)

    # Redraw just the points will only draw points with in `axes.set_xlim`
    for i in line:
        ax1.draw_artist(i)
        
    for i in line2:
        ax2.draw_artist(i)

    # Fill in the axes rectangle
    fig.canvas.blit(ax1.bbox)
    fig.canvas.blit(ax2.bbox)
    
    fig.canvas.flush_events()

    # Converts the finished frame to ARGB
    string = fig.canvas.tostring_argb()
    return string




#import multiprocessing
#p = multiprocessing.Pool()
#for i, _ in enumerate(p.imap(render_frame, range(0, int(total_frames + max_blocksize))), 20):
#    pipe.stdin.write(_)
#    sys.stderr.write('\rdone {0:%}'.format(i/(total_frames + max_blocksize)))
#p.close()

# Single-threaded; not much slower than multi-threading
if __name__ == "__main__":
    for i, frame in enumerate(range(0, int(total_frames + max_blocksize))):
        # Render each frame once and pipe it straight to ffmpeg
        pipe.stdin.write(render_frame(frame))
        sys.stderr.write('\rdone {0:%}'.format(i/(total_frames + max_blocksize)))
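

    For reference, here is a minimal, self-contained sketch (synthetic data, not the MangoHud CSV above) of one way to get a title to update per frame under blitting. The working assumption: a text artist created with ax.text() inside the axes is covered by ax.bbox, so it is cached, restored, and redrawn each frame, whereas ax.set_title() draws outside that cached region and is only ever rendered once.

import numpy as np
import matplotlib.pyplot as plt

fps = np.random.normal(120, 10, 600)  # hypothetical per-frame FPS samples

plt.ion()
fig, ax = plt.subplots()
ax.set_xlim(0, 60)
ax.set_ylim(0, 180)
(line,) = ax.plot([], [], lw=2)

# A text artist inside the axes survives blitting; set_title() would not
title = ax.text(0.02, 0.95, '', transform=ax.transAxes, va='top')

fig.canvas.draw()
background = fig.canvas.copy_from_bbox(ax.bbox)

for frame in range(60, len(fps)):
    window = fps[frame - 60:frame]
    fig.canvas.restore_region(background)
    line.set_data(np.arange(60), window)
    # Set the text to the current value only, instead of a growing list
    title.set_text('Avg. frames per second: {:.1f}'.format(window.mean()))
    ax.draw_artist(line)
    ax.draw_artist(title)
    fig.canvas.blit(ax.bbox)
    fig.canvas.flush_events()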


    


  • FFmpeg to RTMP - no audio on output [closed]

    25 March 2022, by John Mergene Arellano

    From the client side, I am sending a stream using the Socket.IO library. I captured the video and audio using the getUserMedia API.

    


navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
    window.videoStream = video.srcObject = stream;
    let mediaRecorder = new MediaRecorder(stream, {
        videoBitsPerSecond: 3 * 1024 * 1024
    });
    mediaRecorder.addEventListener('dataavailable', (e) => {
        let data = e.data;
        socket.emit('live', data);
    });
    mediaRecorder.start(1000);
});


    


    Then my server will receive the stream and write it to FFmpeg.

    


client.on('live', (stream) => {
    if (ffmpeg)
        ffmpeg.stdin.write(stream);
});


    


    I tried watching the live video in VLC media player. There is a five-second delay and no audio output.

    


    Please see below for the FFmpeg options I used:

    


    ffmpeg = this.CHILD_PROCESS.spawn("ffmpeg", [
   '-f',
   'lavfi',
   '-i', 'anullsrc',
   '-i','-',
   '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
   '-c:a', 'aac', '-ar', '44100', '-b:a', '64k',
   '-y', //force to overwrite
   '-use_wallclock_as_timestamps', '1', // used for audio sync
   '-async', '1', // used for audio sync
   '-bufsize', '1000',
   '-f',
   'flv',
   `rtmp://127.0.0.1:1935/live/stream` ]);


    


    What is wrong with my setup?

    


    I tried removing some of the options, but that did not help. I expect the output to include both the video and the audio from the getUserMedia stream.
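

    One hedged guess, sketched below rather than asserted: with two inputs and no -map options, ffmpeg selects a single "best" audio stream on its own, and the silent anullsrc track can be chosen over the audio inside the piped WebM. A minimal sketch of the same pipeline with explicit stream mapping (written in Python to match the script in the first post; the stream indices are assumptions):

import subprocess

cmd = [
    'ffmpeg',
    '-f', 'lavfi', '-i', 'anullsrc',   # input 0: silent placeholder audio
    '-i', '-',                         # input 1: WebM chunks piped from the browser
    '-map', '1:v:0',                   # take the video from the pipe
    '-map', '1:a:0',                   # take the real microphone audio from the pipe
    '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
    '-c:a', 'aac', '-ar', '44100', '-b:a', '64k',
    '-use_wallclock_as_timestamps', '1',
    '-f', 'flv', 'rtmp://127.0.0.1:1935/live/stream',
]
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)  # write each chunk to proc.stdin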

    


  • Trying to concatenate videos using the FFmpeg package; videos recorded with the front camera are rotated 90° after concatenation

    24 April 2024, by Ahmad Akram

    Here is my code, where I pass a list of video paths to concatenate. I am facing an issue with front-camera videos: some of them end up rotated 90 degrees in the concatenated output.

    


    Future<void> mergeVideos(List<string> videoPaths) async {&#xA;    VideoHelper.showInSnackBar(&#x27;Videos merged Start&#x27;, context);&#xA;    String outputPath = await VideoHelper.generateOutputPath();&#xA;    FlutterFFmpeg flutterFFmpeg = FlutterFFmpeg();&#xA;&#xA;    // Create a text file containing the paths of the videos to concatenate&#xA;    String fileListPath =&#xA;        &#x27;${(await getTemporaryDirectory()).path}/fileList.txt&#x27;;&#xA;    File fileList = File(fileListPath);&#xA;    await fileList&#xA;        .writeAsString(videoPaths.map((path) => &#x27;file \&#x27;$path\&#x27;&#x27;).join(&#x27;\n&#x27;));&#xA;&#xA;    // Run FFmpeg command to concatenate videos&#xA;    // String command = &#x27;-f concat -safe 0 -i $fileListPath -c copy $outputPath&#x27;;&#xA;&#xA;    String command =&#xA;        &#x27;-f concat -safe 0 -i $fileListPath -vf "transpose=1" -c:a copy $outputPath&#x27;;&#xA;&#xA;    VideoHelper.showInSnackBar(&#x27;command Start&#x27;, context);&#xA;    await flutterFFmpeg.execute(command).then((value) {&#xA;      if (value == 0) {&#xA;        print("Output Path : $outputPath");&#xA;        VideoHelper.showInSnackBar(&#x27;Videos merged successfully&#x27;, context);&#xA;        Navigator.push(&#xA;            context,&#xA;            MaterialPageRoute(&#xA;                builder: (context) => VideoPlayerScreen(&#xA;                      videoFile: XFile(outputPath),&#xA;                    )));&#xA;      } else {&#xA;        VideoHelper.showInSnackBar(&#xA;            &#x27;Error merging videos  ::::: returnCode=== $value &#x27;, context);&#xA;      }&#xA;    });&#xA;  }&#xA;</string></void>

    &#xA;