
Other articles (54)
-
User profiles
12 April 2011, by
Each user has a profile page for editing their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a link in the navigation, "Modifier votre profil" (Edit your profile), is (...)
-
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To configure support for new languages, go to the "Administrer" (Administer) section of the site.
From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language. Once one has, the language becomes greyed out in the configuration and (...)
-
XMP PHP
13 May 2011, by
According to Wikipedia, XMP stands for:
Extensible Metadata Platform (XMP), an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it handles a set of dynamic tags for use in the Semantic Web.
XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)
Sur d’autres sites (3946)
-
Trying to get the current FPS and Frametime value into Matplotlib title
16 June 2022, by TiSoBr
I'm trying to turn an exported CSV of benchmark logs into an animated graph. It works so far, but I can't get the titles on top of both plots to animate with their current FPS and frametime (in ms) values.


That's the output I'm getting. It looks like the script simply stores all the values in the title instead of updating them?


Screengrab of cli output
Screengrab of the final output (inverted)


from __future__ import division
import sys, getopt
import time
import matplotlib
import numpy as np
import subprocess
import math
import re
import argparse
import os
import glob

import matplotlib.animation as animation
import matplotlib.pyplot as plt


def check_pos(arg):
    ivalue = int(arg)
    if ivalue <= 0:
        raise argparse.ArgumentTypeError("%s Not a valid positive integer value" % arg)
    return True

def moving_average(x, w):
    # Simple moving average via convolution; 'valid' trims the edges
    return np.convolve(x, np.ones(w), 'valid') / w
 

parser = argparse.ArgumentParser(
    description="Example usage: python frame_scan.py -i mangohud -c '#fff' -o mymov",
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("-i", "--input", help="Input data set from mangohud", required=True, nargs='+', type=argparse.FileType('r'), default=sys.stdin)
parser.add_argument("-o", "--output", help="Output file name", required=True, type=str, default="")
parser.add_argument("-r", "--framerate", help="Set the desired framerate", required=False, type=float, default=60)
parser.add_argument("-c", "--colors", help="Colors for the line graphs; must be in quotes", required=True, type=str, nargs='+', default=60)
parser.add_argument("--fpslength", help="How long (in seconds) data is shown on the FPS graph", required=False, type=float, default=5)
parser.add_argument("--fpsthickness", help="Line width for the FPS graph", required=False, type=float, default=3)
parser.add_argument("--frametimelength", help="How long (in seconds) data is shown on the frametime graph", required=False, type=float, default=2.5)
parser.add_argument("--frametimethickness", help="Line width for the frametime graph", required=False, type=float, default=1.5)
parser.add_argument("--graphcolor", help="Color of all graph lines and labels; expects a hex value", required=False, default='#FFF')
parser.add_argument("--graphthicknes", help="Line width of the graph frame", required=False, type=float, default=1)
parser.add_argument("-ts", "--textsize", help="Size of the numbers marking the ticks", required=False, type=float, default=23)
parser.add_argument("-fsM", "--fpsmax", help="Upper limit of the FPS axis", required=False, type=float, default=180)
parser.add_argument("-fsm", "--fpsmin", help="Lower limit of the FPS axis", required=False, type=float, default=0)
parser.add_argument("-fss", "--fpsstep", help="Tick step of the FPS axis", required=False, type=float, default=30)
parser.add_argument("-ftM", "--frametimemax", help="Upper limit of the frametime axis", required=False, type=float, default=50)
parser.add_argument("-ftm", "--frametimemin", help="Lower limit of the frametime axis", required=False, type=float, default=0)
parser.add_argument("-fts", "--frametimestep", help="Tick step of the frametime axis", required=False, type=float, default=10)

arg = parser.parse_args()
status = False


if arg.input:
    status = True
if arg.output:
    status = True
if arg.framerate:
    status = check_pos(arg.framerate)
if arg.fpslength:
    status = check_pos(arg.fpslength)
if arg.fpsthickness:
    status = check_pos(arg.fpsthickness)
if arg.frametimelength:
    status = check_pos(arg.frametimelength)
if arg.frametimethickness:
    status = check_pos(arg.frametimethickness)
if arg.colors:
    # One color per input file
    if len(arg.input) == len(arg.colors):
        for i in arg.colors:
            if re.match(r"^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$", i):
                status = True
            else:
                print("{} : isn't a valid hex value!".format(i))
                status = False
    else:
        print('You must have the same amount of colors as files in input!')
        status = False
if arg.graphcolor:
    if re.match(r"^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$", arg.graphcolor):
        status = True
    else:
        print("{} : isn't a valid hex value!".format(arg.graphcolor))
        status = False
if arg.graphthicknes:
    status = check_pos(arg.graphthicknes)
if arg.textsize:
    status = check_pos(arg.textsize)
if not status:
    print("For a list of arguments try -h or --help")
    exit()


# Empty the output folder
files = glob.glob('/output/*')
for f in files:
    os.remove(f)


# We need to know the longest recording out of all inputs so we know when to stop the video
longest_data = 0

# Format the raw data into a list of tuples (fps, frametime in ms, time from start in microseconds)
# The first three lines of the data are setup, so we skip them
data_formated = []
for i in arg.input:
    sublist = []
    for line in i.readlines()[3:]:
        x = line[:-1].split(',')
        fps = float(x[0])
        frametime = int(x[1]) / 1000   # convert from microseconds to milliseconds
        elapsed = int(x[11]) / 1000    # convert from nanoseconds to microseconds
        sublist.append((fps, frametime, elapsed))
    # Compare the last entry of each list with the longest recording so far
    if sublist[-1][2] >= longest_data:
        longest_data = sublist[-1][2]
    data_formated.append(sublist)


max_blocksize = max(arg.fpslength, arg.frametimelength) * arg.framerate
blockSize = arg.framerate * arg.fpslength


# Get step time in microseconds
step = (1/arg.framerate) * 1000000 # 1000000 is one second in microseconds
frame_size_fps = (arg.fpslength * arg.framerate) * step
frame_size_frametime = (arg.frametimelength * arg.framerate) * step


# Total frames will have to be updated for more than one source
total_frames = int(int(longest_data) / step)


if True:  # Gonna be honest, this only exists so I can collapse this block of code

    # Set up our figures next to each other (horizontally), with a 3:1 width ratio
    fig, (ax1, ax2) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]})

    # Size of the whole output: 1920x360 (1080/3 = 360)
    fig.set_size_inches(19.20, 3.6)

    # Make the background transparent
    fig.patch.set_alpha(0)

    # Loop through all active axes; saves a lot of lines of ax1.do_thing(x) ax2.do_thing(x)
    for axes in fig.axes:

        # Set all spines to the same color and width
        for loc, spine in axes.spines.items():
            spine.set_color(arg.graphcolor)
            spine.set_linewidth(arg.graphthicknes)

        # Make sure we don't render any data points yet, as this will be our background
        axes.set_xlim(-(max_blocksize * step), 0)

        # Make both plots transparent as well as the background
        axes.patch.set_alpha(.5)
        axes.patch.set_color('#020202')

        # Move the Y axis info to the right side
        axes.yaxis.set_label_position("right")
        axes.yaxis.tick_right()

        # Add the white lines across the graphs; their locations are based on set_{}ticks
        axes.grid(alpha=.8, b=True, which='both', axis='y', color=arg.graphcolor, linewidth=arg.graphthicknes)

        # Remove X axis info
        axes.set_xticks([])

    # Add another Y axis so ticks are on both sides
    tmp_ax1 = ax1.secondary_yaxis("left")
    tmp_ax2 = ax2.secondary_yaxis("left")

    # Set both sides to the same tick values
    ax1.set_yticks(np.arange(arg.fpsmin, arg.fpsmax + 1, step=arg.fpsstep))
    ax2.set_yticks(np.arange(arg.frametimemin, arg.frametimemax + 1, step=arg.frametimestep))
    tmp_ax1.set_yticks(np.arange(arg.fpsmin, arg.fpsmax + 1, step=arg.fpsstep))
    tmp_ax2.set_yticks(np.arange(arg.frametimemin, arg.frametimemax + 1, step=arg.frametimestep))

    # Make the ticks white and the correct size, and set the font size
    ax1.tick_params(axis='y', color=arg.graphcolor, width=arg.graphthicknes, length=16, labelsize=arg.textsize, labelcolor=arg.graphcolor)
    ax2.tick_params(axis='y', color=arg.graphcolor, width=arg.graphthicknes, length=16, labelsize=arg.textsize, labelcolor=arg.graphcolor)
    tmp_ax1.tick_params(axis='y', color=arg.graphcolor, width=arg.graphthicknes, length=8, labelsize=0)  # labelsize=0 hides the fps/frametime numbers
    tmp_ax2.tick_params(axis='y', color=arg.graphcolor, width=arg.graphthicknes, length=8, labelsize=0)

    # Limit the Y scale
    ax1.set_ylim(arg.fpsmin, arg.fpsmax + 1)
    ax2.set_ylim(arg.frametimemin, arg.frametimemax + 1)

    # Add an empty plot
    line = ax1.plot([], lw=arg.fpsthickness)
    line2 = ax2.plot([], lw=arg.frametimethickness)

    # Plot all the data for our benchmarks
    for benchmarks, color in zip(data_formated, arg.colors):
        y = moving_average([x[0] for x in benchmarks], 25)   # smooth the FPS curve
        y2 = [x[1] for x in benchmarks]
        x = [x[2] for x in benchmarks]
        line += ax1.plot(x[12:-12], y, c=color, lw=arg.fpsthickness)  # trim the edges lost to the moving average
        line2 += ax2.step(x, y2, c=color, lw=arg.frametimethickness)

    # Add titles with values (note: y2 here is the whole list from the last input,
    # which is why the title shows every value; see the sketch at the end of this post)
    ax1.set_title("Avg. frames per second: {}".format(y2), color=arg.graphcolor, fontsize=20, fontweight='bold', loc='left')
    ax2.set_title("Frametime in ms: {}".format(y2), color=arg.graphcolor, fontsize=20, fontweight='bold', loc='left')

    # Remove unwanted white space; also controls the space between the two graphs
    plt.tight_layout(pad=0, h_pad=0, w_pad=2.5)

    fig.canvas.draw()

    # Cache the backgrounds
    axbackground = fig.canvas.copy_from_bbox(ax1.bbox)
    ax2background = fig.canvas.copy_from_bbox(ax2.bbox)


# Create an ffmpeg instance as a subprocess; we will pipe each finished frame into ffmpeg,
# encoded as Apple QuickTime (qtrle) for small(ish) file size and alpha support.
# There are free and open source codecs that will also do this, but with much larger file sizes
canvas_width, canvas_height = fig.canvas.get_width_height()
outf = '{}.mov'.format(arg.output)
cmdstring = ('ffmpeg',
             '-stats', '-hide_banner', '-loglevel', 'error',  # makes ffmpeg less noisy on the console
             '-y', '-r', '60',  # set the fps of the video
             '-s', '%dx%d' % (canvas_width, canvas_height),  # size of the image string
             '-pix_fmt', 'argb',  # format can't be changed since this is what `fig.canvas.tostring_argb()` outputs
             '-f', 'rawvideo', '-i', '-',  # tell ffmpeg to expect raw video from the pipe
             '-vcodec', 'qtrle', outf)  # output encoding must support an alpha channel
pipe = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

def render_frame(frame: int):

    # Set the bounds of the graphs for each frame so the correct data is rendered
    start = (frame * step) - frame_size_fps
    end = start + frame_size_fps
    ax1.set_xlim(start, end)

    start = (frame * step) - frame_size_frametime
    end = start + frame_size_frametime
    ax2.set_xlim(start, end)

    # Restore the cached backgrounds
    fig.canvas.restore_region(axbackground)
    fig.canvas.restore_region(ax2background)

    # Redraw just the lines; only points within `set_xlim` will be drawn
    for i in line:
        ax1.draw_artist(i)

    for i in line2:
        ax2.draw_artist(i)

    # Fill in the axes rectangles
    fig.canvas.blit(ax1.bbox)
    fig.canvas.blit(ax2.bbox)

    fig.canvas.flush_events()

    # Convert the finished frame to ARGB bytes
    string = fig.canvas.tostring_argb()
    return string




# import multiprocessing
# p = multiprocessing.Pool()
# for i, frame_bytes in enumerate(p.imap(render_frame, range(0, int(total_frames + max_blocksize))), 20):
#     pipe.stdin.write(frame_bytes)
#     sys.stderr.write('\rdone {0:%}'.format(i / (total_frames + max_blocksize)))
# p.close()

# Single-threaded; not much slower than multi-threading
if __name__ == "__main__":
    for i, frame in enumerate(range(0, int(total_frames + max_blocksize))):
        # Render each frame once and pipe it straight to ffmpeg
        pipe.stdin.write(render_frame(frame))
        sys.stderr.write('\rdone {0:%}'.format(i / (total_frames + max_blocksize)))
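
A minimal sketch of one way to get the titles animating (an assumption about intent, not code from the post): ax1.set_title() returns a matplotlib Text artist, so the title can be updated per frame with set_text() using a single current value, rather than formatting the whole y2 list once. Since the title sits outside ax.bbox, the cached background and blit region must cover it, e.g. fig.bbox:

import numpy as np
import matplotlib.pyplot as plt

fps_values = np.random.uniform(55, 65, size=300)  # stand-in data, not the CSV above

fig, ax = plt.subplots()
title = ax.set_title("Avg. frames per second: 0.0", loc='left')  # keep the Text artist

fig.canvas.draw()
background = fig.canvas.copy_from_bbox(fig.bbox)  # fig.bbox so the title area is cached too

for fps in fps_values:
    fig.canvas.restore_region(background)
    title.set_text("Avg. frames per second: {:.1f}".format(fps))  # one value per frame
    ax.draw_artist(title)
    fig.canvas.blit(fig.bbox)
    fig.canvas.flush_events()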



-
RTSP stream to ffmpeg problems
14 October 2022, by maeek
I'm writing a web application for managing and viewing streams from ONVIF IP cameras.

It's written in Node.js. The idea is to run a child process from Node and pipe its output back to Node, then send the buffer to the client and render it on a canvas. I have a working solution for sending data to the client and rendering it on canvas using WebSockets, but it only works with one of my cameras.

I own 2 IP cameras and both of them have rtsp server.

One of them (let's call it camX) kind of works with this ffmpeg command (sometimes it just stops, maybe due to packet loss):

ffmpeg -rtsp_transport tcp -re -i -f mjpeg pipe:1



But the other one (camY) returns "Nonmatching transport in server reply" and exits.

I discovered that camY's transport is "unicast", but ffmpeg doesn't support this particular lower_transport, as I read on the ffmpeg forum.
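
For reference, a sketch of the lower transports ffmpeg's RTSP demuxer does accept (per the ffmpeg documentation: udp, tcp, udp_multicast, http; "unicast" is not a valid value, and the camera URL below is a placeholder):

ffmpeg -rtsp_transport udp -re -i rtsp://<camera-url> -f mjpeg pipe:1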

So I started looking for a solution. My first idea was to use openRTSP, which works fine with both streams. I looked at the documentation and came up with this command:

openRTSP -4 -c | ffmpeg -re -i pipe:0 -f mjpeg pipe:1

The -4 parameter writes the stream to the pipe in MP4 format.

And here's another problem I ran into; ffmpeg returns:

[mov,mp4,m4a,3gp,3g2,mj2 @ 0x559a4b6ba900] moov atom not found 
pipe:0: Invalid data found when processing input



Is there any way to make this work?
I tried various solutions I found, but none of them worked.


EDIT

As @Gyan suggested, I used the -i parameter instead of -4, but it didn't solve my problem.

My command:

openRTSP -V -i -c -K | ffmpeg -loglevel debug -re -i pipe:0 -f mjpeg pipe:1
 
Created receiver for "video/H264" subsession (client ports 49072-49073)
Setup "video/H264" subsession (client ports 49072-49073)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
AVIFileSink::setWord(): SeekFile64 failed (err 29)
Outputting to the file: "stdout"
[avi @ 0x5612944268c0] Format avi probed with size=2048 and score=100
[avi @ 0x56129442f7a0] use odml:1
Started playing session
Receiving streamed data (signal with "kill -HUP 15028" or "kill -USR1 15028" to terminate)...
^C
[AVIOContext @ 0x56129442f640] Statistics: 16904 bytes read, 0 seeks
pipe:0: Invalid data found when processing input



As you can see, the openRTSP command returns err 29, but in the meantime it outputs some data to the pipe.

When I terminate the command, ffmpeg shows that it read some data but couldn't process it.

Here's the function that produces that error:


void AVIFileSink::setWord(unsigned filePosn, unsigned size) {
  do {
    if (SeekFile64(fOutFid, filePosn, SEEK_SET) < 0) break;
    addWord(size);
    if (SeekFile64(fOutFid, 0, SEEK_END) < 0) break; // go back to where we were

    return;
  } while (0);

  // One of the SeekFile64()s failed, probably because we're not a seekable file
  envir() << "AVIFileSink::setWord(): SeekFile64 failed (err "
          << envir().getErrno() << ")\n";
}



In my opinion, it can't seek because the output is a stream, not a static file.

Any suggestions for a workaround?
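
One possible workaround, sketched here as an assumption (untested against these cameras): skip the MP4/AVI muxers entirely, since both need a seekable output. openRTSP's -v option writes the raw video elementary stream to stdout, which needs no seeking, and because the log above shows a "video/H264" subsession, ffmpeg can be told the input format explicitly (the camera URL is a placeholder):

openRTSP -v -c rtsp://<camera-url> | ffmpeg -f h264 -re -i pipe:0 -f mjpeg pipe:1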

-
Crop video into a 4x4 grid/tiles/matrix efficiently via command-line ffmpeg ?
22 April 2017, by Dylan
Hello Stack Overflow community!
I dread having to ask questions, but there seems to be no efficient way to take a single input video and split it into equal-sized pieces via a matrix transformation, preferably 4x4 = 16 segments per input.
I tried the usual libraries such as ffmpeg and mencoder, but producing 16 outputs can run as slow as 0.15x. The goal of my project is to split the video into 16 segments, rearrange those segments, and combine them back into a final video; later the process is reversed in an HTML5 canvas. Here is a picture to help you understand what I am talking about:
[Image: the source, but also the final destination after reorganizing the pieces]
I do not believe you can do this all in one command, so my goal is to crop into 16 mapped outputs quickly, then reassemble them in a different order. But I can do the other parts myself. Ideally there would be a way to move pixel blocks, e.g. 100x100, and just move them around. My math is not strong enough..
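
As a sketch of the cropping step (assumed filenames; only the top row of the 4x4 grid is shown, the other twelve tiles follow the same pattern with y offsets ih/4, 2*ih/4, and 3*ih/4): ffmpeg can decode the input once, split the decoded stream, and map each crop to its own output, which avoids running sixteen separate passes over the full input.

ffmpeg -i input.mp4 -filter_complex \
  "[0:v]split=4[a][b][c][d];[a]crop=iw/4:ih/4:0:0[t00];[b]crop=iw/4:ih/4:iw/4:0[t01];[c]crop=iw/4:ih/4:2*iw/4:0[t02];[d]crop=iw/4:ih/4:3*iw/4:0[t03]" \
  -map "[t00]" tile00.mp4 -map "[t01]" tile01.mp4 -map "[t02]" tile02.mp4 -map "[t03]" tile03.mp4

Reassembling the tiles in a different order could then be a second pass over the sixteen files with the xstack filter.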
I really appreciate the work you guys do! admin@dr.com