
Media (91)

Other articles (61)

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum user status required to use it, notably: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Depositing media and themes via FTP

    31 May 2013, by

    The MediaSPIP tool also handles media transferred via FTP. If you prefer to deposit files this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
    From the start you will find the following folders in your FTP space: config/: the site's configuration folder; IMG/: media already processed and online on the site; local/: the website's cache directory; themes/: custom themes or stylesheets; tmp/: working folder (...)
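    For illustration, a deposit like the one described above could also be scripted with Python's standard ftplib. This is a minimal sketch in which the host, credentials, remote folder and file name are all placeholders; the actual target folder depends on how the MediaSPIP site is configured.

# Hypothetical sketch of an FTP deposit using Python's standard library.
# Host, credentials, remote folder and file name are placeholders.
from ftplib import FTP

with FTP("ftp.example.org") as ftp:              # placeholder host
    ftp.login("username", "password")            # placeholder credentials
    ftp.cwd("tmp")                               # placeholder remote folder
    with open("my_video.mp4", "rb") as local:    # placeholder local file
        ftp.storbinary("STOR my_video.mp4", local)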

On other sites (9050)

  • M3u8 with either video or audio

    21 February 2023, by undercash

    I'm facing an issue with this link that I can't seem to resolve.

    


    The m3u8 plays back either the video track or the audio track, but never both, even though both exist.

    


    ffmpeg -re -i https://origin2-6play.live.6cloud.fr/pool_u81ytesz/w9/sixplayhde/hls_short_enc_w9/NA/aHEUMcx/index.m3u8
ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 9 (Ubuntu 9.3.0-17ubuntu1~20.04)
configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libfdk_aac --enable-libtheora --enable-libvorbis --enable-libmp3lame --enable-libx264 --enable-libx265 --enable-libvpx --enable-librtmp --enable-libass --enable-libfreetype --extra-libs=-ldl --enable-openssl
libavutil      56. 51.100 / 56. 51.100
libavcodec     58. 91.100 / 58. 91.100
libavformat    58. 45.100 / 58. 45.100
libavdevice    58. 10.100 / 58. 10.100
libavfilter     7. 85.100 /  7. 85.100
libswscale      5.  7.100 /  5.  7.100
libswresample   3.  7.100 /  3.  7.100
libpostproc    55.  7.100 / 55.  7.100
[hls @ 0x562ecf699b40] Skip ('#EXT-X-VERSION:7')
[hls @ 0x562ecf699b40] Skip ('## Just In Time Delivered by Quortex Solution')
[hls @ 0x562ecf699b40] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:0')
[hls @ 0x562ecf699b40] Skip ('#EXT-X-PROGRAM-DATE-TIME:2023-02-21T10:14:12.480000Z')
[hls @ 0x562ecf699b40] Opening 'https://url/segment_6246422.ts' for reading
Input #0, hls, from 'https://url/index.m3u8':
Duration: N/A, start: 92548.466311, bitrate: N/A
Program 0
Metadata:
variant_bitrate : 0
Stream #0:0: Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp
Metadata:
variant_bitrate : 0

ffmpeg -i url.m3u8
ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 9 (Ubuntu 9.3.0-17ubuntu1~20.04)
configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libfdk_aac --enable-libtheora --enable-libvorbis --enable-libmp3lame --enable-libx264 --enable-libx265 --enable-libvpx --enable-librtmp --enable-libass --enable-libfreetype --extra-libs=-ldl --enable-openssl
libavutil      56. 51.100 / 56. 51.100
libavcodec     58. 91.100 / 58. 91.100
libavformat    58. 45.100 / 58. 45.100
libavdevice    58. 10.100 / 58. 10.100
libavfilter     7. 85.100 /  7. 85.100
libswscale      5.  7.100 /  5.  7.100
libswresample   3.  7.100 /  3.  7.100
libpostproc    55.  7.100 / 55.  7.100
[hls @ 0x55806439cc00] Skip ('#EXT-X-VERSION:7')
[hls @ 0x55806439cc00] Skip ('## Just In Time Delivered by Quortex Solution')
[hls @ 0x55806439cc00] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:0')
[hls @ 0x55806439cc00] Skip ('#EXT-X-PROGRAM-DATE-TIME:2023-02-21T10:14:00.960000Z')
[hls @ 0x55806439cc00] Opening 'url/segment_6246419.ts' for reading
[hls @ 0x55806439cc00] Opening 'url.ts' for reading
Input #0, hls, from 'url/index.m3u8':
Duration: N/A, start: 92536.988978, bitrate: N/A
Program 0
Metadata:
variant_bitrate : 0
Stream #0:0: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
Metadata:
variant_bitrate : 0


    


    I ran the same command in two different shells; one picks up the audio track, the other the video track.

    


    Question: since I have seen a site using this link properly (the player had both audio and video), how can I use ffmpeg (or any other program) to grab both streams and output them to a single file?

    


    I tried adding -analyzeduration 1000M -probesize 1000M to the ffmpeg parameters, but it doesn't help.
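
    For illustration, one possible workaround (a sketch under the assumption that the master playlist exposes the video and audio renditions as separate media playlists, not a confirmed fix for this particular stream): open both rendition playlists as two inputs and map one video and one audio stream into a single output. The two URLs below are placeholders.

# Hypothetical workaround: feed the video and audio rendition playlists to
# ffmpeg as two separate inputs and remux one stream from each into one file.
# Both URLs are placeholders for the rendition playlists behind index.m3u8.
import subprocess

video_playlist = "https://example.com/video/index.m3u8"   # placeholder
audio_playlist = "https://example.com/audio/index.m3u8"   # placeholder

subprocess.run([
    "ffmpeg",
    "-i", video_playlist,
    "-i", audio_playlist,
    "-map", "0:v:0",     # video from the first input
    "-map", "1:a:0",     # audio from the second input
    "-c", "copy",        # remux without re-encoding
    "output.mp4",
], check=True)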

    


  • FFmpeg does not encode chapter titles [closed]

    15 October 2023, by Shironats

    I am trying to add chapters to a .mp4 file.
When I tried extracting the metadata using the following command:

    


    ffmpeg -i INPUT.mp4 -f ffmetadata FFMETADATAFILE.txt

    


    I got an output file as follows:

    


    ;FFMETADATA1
major_brand=mp42
minor_version=0
compatible_brands=isommp42
encoder=Lavf60.15.100


    


    So now I try to add some chapters, as follows:

    


    [CHAPTER]
TIMEBASE=1/1000
START=0
END=424510
title=mytitle1

[CHAPTER]
TIMEBASE=1/1000
START=424511
END=941600
title=mytitle2

[CHAPTER]
TIMEBASE=1/1000
START=941601
END=1227260
title=mytitle3


    


    I use the ffmpeg -i INPUT.mp4 -i FFMETADATAFILE.txt -map_metadata 1 -codec copy OUTPUT.mp4 command to write the metadata into my video file, but for some reason I only ever get 'Chapter 0', 'Chapter 1', and 'Chapter 2' as titles.

    


    I tried using the -map_chapters 1 argument as well, but the result was unchanged.

    


    I am using the Latest Auto-Build (2023-10-14 13:02) Windows build by BtbN. Thanks in advance for the help.
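
    One way to narrow this down (a diagnostic sketch, not a confirmed cause): check with ffprobe whether the title tags were actually written into OUTPUT.mp4, since some players display generic "Chapter N" labels even when the tags are present.

# Diagnostic sketch: list the chapters and their title tags written to the
# output file. OUTPUT.mp4 stands for the file produced by the command above.
import json
import subprocess

probe = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_chapters", "OUTPUT.mp4"],
    capture_output=True, text=True, check=True)

for chapter in json.loads(probe.stdout)["chapters"]:
    title = chapter.get("tags", {}).get("title", "<no title tag>")
    print(chapter["start_time"], chapter["end_time"], title)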

    


  • Trying to get the current FPS and Frametime value into Matplotlib title

    16 June 2022, by TiSoBr

    I'm trying to turn an exported CSV of benchmark logs into an animated graph. It works so far, but I can't get the titles on top of the two plots to animate with the current FPS and frametime (in ms) values.

    


    That's the output I'm getting. It looks like all the values are simply stored in the title at once instead of being updated frame by frame (a sketch of one way to update them follows the script below).

    


    Screengrab of cli output
Screengrab of the final output (inverted)

    


    from __future__ import division
import sys, getopt
import time
import matplotlib
import numpy as np
import subprocess
import math
import re
import argparse
import os
import glob

import matplotlib.animation as animation
import matplotlib.pyplot as plt


def check_pos(arg):
    ivalue = int(arg)
    if ivalue <= 0:
        raise argparse.ArgumentTypeError("%s Not a valid positive integer value" % arg)
    return True
    
def moving_average(x, w):
    return np.convolve(x, np.ones(w), 'valid') / w
    

parser = argparse.ArgumentParser(
    description = "Example Usage python frame_scan.py -i mangohud -c '#fff' -o mymov",
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("-i", "--input", help = "Input data set from mangohud", required = True, nargs='+', type=argparse.FileType('r'), default=sys.stdin)
parser.add_argument("-o", "--output", help = "Output file name", required = True, type=str, default = "")
parser.add_argument("-r", "--framerate", help = "Set the desired framerate", required = False, type=float, default = 60)
parser.add_argument("-c", "--colors", help = "Colors for the line graphs; must be in quotes", required = True, type=str, nargs='+', default = 60)
parser.add_argument("--fpslength", help = "Configures how long the data will be shown on the FPS graph", required = False, type=float, default = 5)
parser.add_argument("--fpsthickness", help = "Changes the line width for the FPS graph", required = False, type=float, default = 3)
parser.add_argument("--frametimelength", help = "Configures how long the data will be shown on the frametime graph", required = False, type=float, default = 2.5)
parser.add_argument("--frametimethickness", help = "Changes the line width for the frametime graph", required = False, type=float, default = 1.5)
parser.add_argument("--graphcolor", help = "Changes all of the line colors on the graph; expects hex value", required = False, default = '#FFF')
parser.add_argument("--graphthicknes", help = "Changes the line width of the graph", required = False, type=float, default = 1)
parser.add_argument("-ts","--textsize", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 23)
parser.add_argument("-fsM","--fpsmax", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 180)
parser.add_argument("-fsm","--fpsmin", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 0)
parser.add_argument("-fss","--fpsstep", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 30)
parser.add_argument("-ftM","--frametimemax", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 50)
parser.add_argument("-ftm","--frametimemin", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 0)
parser.add_argument("-fts","--frametimestep", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 10)

arg = parser.parse_args()
status = False


if arg.input:
    status = True
if arg.output:
    status = True
if arg.framerate:
    status = check_pos(arg.framerate)
if arg.fpslength:
    status = check_pos(arg.fpslength)
if arg.fpsthickness:
    status = check_pos(arg.fpsthickness)
if arg.frametimelength:
    status = check_pos(arg.frametimelength)
if arg.frametimethickness:
    status = check_pos(arg.frametimethickness)
if arg.colors:
    if len(arg.input) == len(arg.colors):
        for i in arg.colors:
            if re.match(r"^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$", i):
                status = True
            else:
                print('{} : Isn\'t a valid hex value!'.format(i))
                status = False
    else:
        print('You must have the same amount of colors as files in input!')
        status = False
if arg.graphcolor:
    if re.match(r"^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$", arg.graphcolor):
        status = True
    else:
        print('{} : Isn\'t a vaild hex value!'.format(arg.graphcolor))
        status = False
if arg.graphthicknes:
    status = check_pos(arg.graphthicknes)
if arg.textsize:
    status = check_pos(arg.textsize)
if not status:
    print("For a list of arguments try -h or --help") 
    exit()


# Empty output folder
files = glob.glob('/output/*')
for f in files:
    os.remove(f)


# We need to know the longest recording out of all inputs so we know when to stop the video
longest_data = 0

# Format the raw data into a list of tuples (fps, frame time in ms, time from start in micro seconds)
# The first three lines of our data are setup so we ignore them
data_formated = []
for li, i in enumerate(arg.input):
    t = 0
    sublist = []
    for line in i.readlines()[3:]:
        x = line[:-1].split(',')
        fps = float(x[0])
        frametime = int(x[1])/1000 # convert from microseconds to milliseconds
        elapsed = int(x[11])/1000 # convert from nanosecond to microseconds
        data = (fps, frametime, elapsed)
        sublist.append(data)
    # Keep track of the longest recording across all inputs
    if sublist[-1][2] >= longest_data:
        longest_data = sublist[-1][2]
    data_formated.append(sublist)


max_blocksize = max(arg.fpslength, arg.frametimelength) * arg.framerate
blockSize = arg.framerate * arg.fpslength


# Get step time in microseconds
step = (1/arg.framerate) * 1000000 # 1000000 is one second in microseconds
frame_size_fps = (arg.fpslength * arg.framerate) * step
frame_size_frametime = (arg.frametimelength * arg.framerate) * step


# Total frames will have to be updated for more than one source
total_frames = int(int(longest_data) / step)


if True: # Gonna be honest, this only exists so I can collapse this block of code

    # Sets up our figures to be next to each other (horizontally) and with a ratio 3:1 to each other
    fig, (ax1, ax2) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]})

    # Size of whole output 1920x360 1080/3=360
    fig.set_size_inches(19.20, 3.6)

    # Make the background transparent
    fig.patch.set_alpha(0)


    # Loop through all active axes; saves a lot of lines in ax1.do_thing(x) ax2.do_thing(x)
    for axes in fig.axes:

        # Set all splines to the same color and width
        for loc, spine in axes.spines.items():
            axes.spines[loc].set_color(arg.graphcolor)
            axes.spines[loc].set_linewidth(arg.graphthicknes)

        # Make sure we don't render any data points as this will be our background
        axes.set_xlim(-(max_blocksize * step), 0)
        

        # Make both plots transparent as well as the background
        axes.patch.set_alpha(.5)
        axes.patch.set_color('#020202')

        # Change the Y axis info to be on the right side
        axes.yaxis.set_label_position("right")
        axes.yaxis.tick_right()

        # Add the white lines across the graphs; the location of the lines are based off set_{}ticks
        axes.grid(alpha=.8, b=True, which='both', axis='y', color=arg.graphcolor, linewidth=arg.graphthicknes)

        # Remove X axis info
        axes.set_xticks([])

    # Add a another Y axis so ticks are on both sides
    tmp_ax1 = ax1.secondary_yaxis("left")
    tmp_ax2 = ax2.secondary_yaxis("left")

    # Set both to the same values
    ax1.set_yticks(np.arange(arg.fpsmin, arg.fpsmax + 1, step=arg.fpsstep))
    ax2.set_yticks(np.arange(arg.frametimemin, arg.frametimemax + 1, step=arg.frametimestep))
    tmp_ax1.set_yticks(np.arange(arg.fpsmin , arg.fpsmax + 1, step=arg.fpsstep))
    tmp_ax2.set_yticks(np.arange(arg.frametimemin, arg.frametimemax + 1, step=arg.frametimestep))

    # Change the "ticks" to be white and correct size also change font size
    ax1.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=16, labelsize=arg.textsize, labelcolor=arg.graphcolor)
    ax2.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=16, labelsize=arg.textsize, labelcolor=arg.graphcolor)
    tmp_ax1.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=8, labelsize=0) # Label size of 0 disables the fps/frame numbers
    tmp_ax2.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=8, labelsize=0)


    # Limits Y scale
    ax1.set_ylim(arg.fpsmin,arg.fpsmax + 1)
    ax2.set_ylim(arg.frametimemin,arg.frametimemax + 1)

    # Add an empty plot
    line = ax1.plot([], lw=arg.fpsthickness)
    line2 = ax2.plot([], lw=arg.frametimethickness)

    # Sets all the data for our benchmark
    for benchmarks, color in zip(data_formated, arg.colors):
        y = moving_average([x[0] for x in benchmarks], 25)
        y2 = [x[1] for x in benchmarks]
        x = [x[2] for x in benchmarks]
        line += ax1.plot(x[12:-12],y, c=color, lw=arg.fpsthickness)
        line2 += ax2.step(x,y2, c=color, lw=arg.fpsthickness)
    
    # Add titles with values
    ax1.set_title("Avg. frames per second: {}".format(y2), color=arg.graphcolor, fontsize=20, fontweight='bold', loc='left')
    ax2.set_title("Frametime in ms: {}".format(y2), color=arg.graphcolor, fontsize=20, fontweight='bold', loc='left')  

    # Removes unwanted white space; also controls the space between the two graphs
    plt.tight_layout(pad=0, h_pad=0, w_pad=2.5)
    
    fig.canvas.draw()

    # Cache the background
    axbackground = fig.canvas.copy_from_bbox(ax1.bbox)
    ax2background = fig.canvas.copy_from_bbox(ax2.bbox)


# Create a ffmpeg instance as a subprocess we will pipe the finished frame into ffmpeg
# encoded in Apple QuickTime (qtrle) for small(ish) file size and alpha support
# There are free and opensource types that will also do this but with much larger sizes
canvas_width, canvas_height = fig.canvas.get_width_height()
outf = '{}.mov'.format(arg.output)
cmdstring = ('ffmpeg',
                '-stats', '-hide_banner', '-loglevel', 'error', # Makes ffmpeg less noisy (suppresses most console output)
                '-y', '-r', '60', # set the fps of the video
                '-s', '%dx%d' % (canvas_width, canvas_height), # size of image string
                '-pix_fmt', 'argb', # format can't be changed since this is what `fig.canvas.tostring_argb()` outputs
                '-f', 'rawvideo',  '-i', '-', # tell ffmpeg to expect raw video from the pipe
                '-vcodec', 'qtrle', outf) # output encoding must support alpha channel
pipe = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

def render_frame(frame : int):

    # Set the bounds of the graph for each frame to render the correct data
    start = (frame * step) - frame_size_fps
    end = start + frame_size_fps
    ax1.set_xlim(start,end)
     
     
    start = (frame * step) - frame_size_frametime
    end = start + frame_size_frametime
    ax2.set_xlim(start,end)
    

    # Restore background
    fig.canvas.restore_region(axbackground)
    fig.canvas.restore_region(ax2background)

    # Redraw just the points will only draw points with in `axes.set_xlim`
    for i in line:
        ax1.draw_artist(i)
        
    for i in line2:
        ax2.draw_artist(i)

    # Fill in the axes rectangle
    fig.canvas.blit(ax1.bbox)
    fig.canvas.blit(ax2.bbox)
    
    fig.canvas.flush_events()

    # Converts the finished frame to ARGB
    string = fig.canvas.tostring_argb()
    return string




#import multiprocessing
#p = multiprocessing.Pool()
#for i, _ in enumerate(p.imap(render_frame, range(0, int(total_frames + max_blocksize))), 20):
#    pipe.stdin.write(_)
#    sys.stderr.write('\rdone {0:%}'.format(i/(total_frames + max_blocksize)))
#p.close()

# Single-threaded; not much slower than multi-threading
if __name__ == "__main__":
    for i, frame in enumerate(range(0, int(total_frames + max_blocksize))):
        pipe.stdin.write(render_frame(frame))
        sys.stderr.write('\rdone {0:%}'.format(i / (total_frames + max_blocksize)))
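
    Regarding the question above: the two set_title calls in the setup block bake the whole y2 list into the title string once, before the animation starts, which is why every value shows up at the same time. A minimal sketch of one way to animate them instead (an assumption, not the original author's solution): create the title text artists once, replacing those two set_title calls, and update them per frame. It relies on data_formated, step, arg, fig, ax1 and ax2 from the script above and uses only the first input file for simplicity.

# Hypothetical sketch: update both titles with the sample closest to the
# current frame time.
import bisect

elapsed = [s[2] for s in data_formated[0]]    # time from start (microseconds)
fps_vals = [s[0] for s in data_formated[0]]   # frames per second
ft_vals = [s[1] for s in data_formated[0]]    # frametime in milliseconds

title1 = ax1.set_title("FPS: --", color=arg.graphcolor, fontsize=20,
                       fontweight='bold', loc='left')
title2 = ax2.set_title("Frametime: -- ms", color=arg.graphcolor, fontsize=20,
                       fontweight='bold', loc='left')

def update_titles(frame):
    # Index of the last sample at or before the current frame time
    idx = max(0, min(bisect.bisect_right(elapsed, frame * step) - 1,
                     len(elapsed) - 1))
    title1.set_text("FPS: {:.1f}".format(fps_vals[idx]))
    title2.set_text("Frametime: {:.2f} ms".format(ft_vals[idx]))
    ax1.draw_artist(title1)
    ax2.draw_artist(title2)

    update_titles(frame) would be called from inside render_frame, before fig.canvas.tostring_argb(). Note that the cached backgrounds in the script only cover the two axes, so the previous title text is never erased; caching and restoring the whole figure background (fig.canvas.copy_from_bbox(fig.bbox)) instead, or simply calling fig.canvas.draw() on every frame without blitting, avoids stale text.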