
Media (91)

Other articles (54)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

On other sites (8277)

  • In PowerShell, how do you select different parts of a string, output them to separate variables, and display them on one line?

    23 May 2020, by HASJ

    I am trying to create a progress bar for FFmpeg, much like Python's ffpb, and I have got far enough that I can capture FFmpeg's output and save the needed information in variables, but only after the conversion ends.

    This command saves each line output by FFmpeg into the $RedirectOutput variable as objects:

    $RedirectOutput = ffmpeg.exe -y -v quiet -stats -hide_banner -hwaccel cuda -i 'input' -c:v hevc_nvenc -preset slow -crf 18 -an -t 300 'output' 2>&1

    $RedirectOutput then becomes:

    frame=  124 fps=0.0 q=14.0 size=     768kB time=00:00:04.03 bitrate=1559.9kbits/s speed=8.04x
frame=  480 fps=479 q=14.0 size=    3840kB time=00:00:15.90 bitrate=1978.5kbits/s speed=15.9x
frame=  865 fps=576 q=17.0 size=    7424kB time=00:00:28.73 bitrate=2116.6kbits/s speed=19.1x
frame= 1256 fps=627 q=17.0 size=   10496kB time=00:00:41.76 bitrate=2058.7kbits/s speed=20.9x
frame= 1698 fps=678 q=16.0 size=   14080kB time=00:00:56.50 bitrate=2041.5kbits/s speed=22.6x
frame= 2074 fps=691 q=16.0 size=   17152kB time=00:01:09.03 bitrate=2035.4kbits/s speed=  23x
frame= 2515 fps=718 q=16.0 size=   20992kB time=00:01:23.73 bitrate=2053.7kbits/s speed=23.9x
frame= 2972 fps=742 q=18.0 size=   25088kB time=00:01:38.96 bitrate=2076.7kbits/s speed=24.7x
frame= 3404 fps=755 q=18.0 size=   28928kB time=00:01:53.36 bitrate=2090.4kbits/s speed=25.2x
...

    Here I save each piece of the necessary information into its respective variable:

    $videoFrames  = $RedirectOutput -replace "(frame=)(\s+)?(\d+)(.*)", '$3'    # frame count
$videoFps     = $RedirectOutput -replace "(.*)(fps=)(\d+)(.*)", '$3'            # frames per second
$videoSpeed   = $RedirectOutput -replace "(.*)(speed=)(\d+.*)x", '$3'           # encoding speed (e.g. 23.9)
$videoBitrate = $RedirectOutput -replace "(.*)(bitrate=)(\d+)(.*)", '$3'        # bitrate in kbit/s

    Now that the arrays are created, I need to write these values to the console periodically, in a formatted fashion (which I'll figure out eventually), on a single line, while the FFmpeg conversion is running (instead of after it has finished), but I don't know how to proceed.

    Do I save FFmpeg's output to a file instead of a variable and try to read from there? If so, how do I read it after each write? With a while loop?

    Wouldn't that be a performance issue? I have to say, that isn't a great concern of mine for now; I just want it to work.
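
    One way to get at these values while FFmpeg is still running, rather than after it exits, is to have FFmpeg write its machine-readable progress report to a pipe with -progress and read that pipe line by line. Below is a minimal sketch of the idea in Python (the language of the ffpb tool mentioned above); the file and codec names are placeholders, and the same read-loop structure should translate to PowerShell via a System.Diagnostics.Process object and its standard output stream:

    import subprocess

    # Ask ffmpeg to emit machine-readable key=value progress lines on stdout
    # and suppress the usual interactive stats output.
    cmd = [
        "ffmpeg", "-y", "-v", "error", "-nostats",
        "-i", "input.mp4",                 # placeholder input
        "-c:v", "libx264", "-t", "300",
        "-progress", "pipe:1",             # progress report goes to stdout
        "output.mp4",                      # placeholder output
    ]

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)

    fields = {}
    for line in proc.stdout:               # lines arrive while the encode runs
        key, _, value = line.strip().partition("=")
        fields[key] = value
        if key == "progress":              # one block of progress fields is complete
            print("\rframe=%s fps=%s speed=%s" % (
                fields.get("frame", "?"),
                fields.get("fps", "?"),
                fields.get("speed", "?")), end="", flush=True)
    proc.wait()
    print()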

  • Accuracy of trimming/concatenating parts of videos with ffmpeg

    10 May 2020, by Circonflexe

    I am adjusting a video (with an audio track) using ffmpeg; the goal is to move the time position of some given points of the video as precisely as possible (ideally with an accuracy of 1 ms), by applying a piecewise affine transform to the video's timeline.
    For this I use ffmpeg with a combination of the trim, setpts, atrim, atempo, and concat filters, such as:

    ffmpeg -i input.mp4 -filter_complex \
'[0:v]trim=start=0:end=162.328554216868,setpts=1.194377385743*(PTS-STARTPTS)[v0];'\
'[0:a]atrim=start=0:end=162.328554216868,asetpts=PTS-STARTPTS,atempo=.837256307710[a0];'\
'[0:v]trim=start=162.328554216868:end=173.161445783132,setpts=1.135513218334*(PTS-STARTPTS)[v1];'\
'[0:a]atrim=start=162.328554216868:end=173.161445783132,asetpts=PTS-STARTPTS,atempo=.880659056939[a1];'\
'[0:v]trim=start=173.161445783132:end=197.842554216868,setpts=1.093796435691*(PTS-STARTPTS)[v2];'\
'[0:a]atrim=start=173.161445783132:end=197.842554216868,asetpts=PTS-STARTPTS,atempo=.914246899486[a2];'\
'[0:v]trim=start=197.842554216868:end=208.505445783132,setpts=1.145926645725*(PTS-STARTPTS)[v3];'\
'[0:a]atrim=start=197.842554216868:end=208.505445783132,asetpts=PTS-STARTPTS,atempo=.872656206861[a3];'\
'[0:v]trim=start=208.505445783132:end=212.61,setpts=PTS-STARTPTS[v4];'\
'[0:a]atrim=start=208.505445783132:end=212.61,asetpts=PTS-STARTPTS,atempo=1[a4];'\
'[v0][a0][v1][a1][v2][a2][v3][a3][v4][a4]concat=n=5:v=1:a=1[v][a]' \
-map '[v]' -map '[a]' output.mp4

    (The numeric values above are obviously generated by another script, not by hand!) However, I find that the resulting audio track has some “whitespace” inserted wherever two parts are joined. For example, when joining a longer file (about 4 minutes, with 31 parts in total), I found that the duration of the file had increased by about 0.3 seconds in total, which obviously defeats the “millisecond-accuracy” goal.
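
    For reference, the size of that drift can be checked by comparing the duration implied by the filtergraph with the duration ffprobe reports for the output. A small sketch of that check in Python, using the segment boundaries and setpts factors from the command above (each segment's output length is its trimmed length multiplied by its setpts factor, with atempo using roughly the inverse factor; ffprobe is assumed to be on the PATH):

    import subprocess

    # (start, end, setpts factor) for each segment, copied from the filtergraph above
    segments = [
        (0.0,              162.328554216868, 1.194377385743),
        (162.328554216868, 173.161445783132, 1.135513218334),
        (173.161445783132, 197.842554216868, 1.093796435691),
        (197.842554216868, 208.505445783132, 1.145926645725),
        (208.505445783132, 212.61,           1.0),
    ]

    # Expected output duration: each trimmed length stretched by its setpts factor.
    expected = sum((end - start) * factor for start, end, factor in segments)

    # Actual duration of the rendered file, as reported by ffprobe.
    actual = float(subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", "output.mp4"],
        capture_output=True, text=True, check=True).stdout)

    print("expected %.3f s, actual %.3f s, drift %+.3f s" % (expected, actual, actual - expected))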

    How could I tell ffmpeg to make the transitions as exact as possible?

  • Mute parts of a WAV file using Python/FFmpeg/pydub

    20 April 2020, by Adil Azeem

    I am new to Python, so please bear with me. I have been able to get this far with the help of Google, Stack Overflow, and YouTube :). I have a long (2-hour) *.wav file, and I want to mute certain parts of it. I have all of the [start], [stop] timestamps, in seconds, in a file called "Timestamps.txt", like this:

       0001.000 0003.000
   0744.096 0747.096
   0749.003 0750.653
   0750.934 0753.170
   0753.210 0754.990
   0756.075 0759.075
   0760.096 0763.096
   0810.016 0811.016
   0815.849 0816.849

    What I have been able to do is read the file and isolate each tuple. I have output just the first tuple and printed it to check that everything looks good; the tuple isolation seems to work :). I plan to count the number of tuples (674 in this case), run a for loop over that count, and change the start and stop times according to each tuple, applying the loop to that single *.wav file and producing one output file with the sections given by the timestamps muted. I have no idea how to implement this with FFmpeg or any other utility in Python, e.g. pydub. Please help me.

   with open('Timestamps.txt') as f:
       data = [line.split() for line in f.readlines()]

   out = [(float(k), float(v)) for k, v in data]

   r = out[0]   # first (start, stop) tuple
   x = r[0]     # start time in seconds
   y = r[1]     # stop time in seconds
   print(x)
   print(y)
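
    Building on the parsing above, the muting step itself could be done with pydub by replacing each [start, stop] slice of the audio with silence and writing one new file. A minimal sketch, assuming a hypothetical input file name input.wav and timestamps that are sorted and non-overlapping:

    from pydub import AudioSegment

    # Parse the [start, stop] pairs (in seconds), as above.
    with open('Timestamps.txt') as f:
        out = [(float(k), float(v)) for k, v in (line.split() for line in f)]

    audio = AudioSegment.from_wav('input.wav')   # hypothetical input file name

    muted = AudioSegment.empty()
    cursor = 0  # current position, in milliseconds
    for start, stop in out:
        start_ms, stop_ms = int(start * 1000), int(stop * 1000)
        muted += audio[cursor:start_ms]          # keep the audio up to this segment
        muted += AudioSegment.silent(duration=stop_ms - start_ms,
                                     frame_rate=audio.frame_rate)  # mute the segment
        cursor = stop_ms
    muted += audio[cursor:]                      # keep everything after the last segment

    muted.export('output_muted.wav', format='wav')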