
Media (29)
-
#7 Ambience
16 October 2011, by
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
Other articles (50)
-
Updating from version 0.1 to 0.2
24 June 2013, by
Explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What is new?
Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
Customising by adding your logo, banner, or background image
5 September 2013, by
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013, by
Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the news item creation form.
News item creation form: for a document of the news type, the default fields are: publication date (customise the publication date) (...)
On other sites (9607)
-
FFmpeg process priority when screen recording
24 April 2021, by Andrew
I made a formula for screen recording via FFmpeg, and it works with Counter-Strike and Dota 2.
But it doesn't work with World of Tanks, so I ran into trouble: World of Tanks crashes after a few minutes, and the recording speed drops below 1x (about 0.87x, and it keeps getting slower). So I think I need to raise the priority of the FFmpeg process.
You can see a test of it on my YouTube channel: my test of ffmpeg screen recording


So I would like to set 'High' or 'Realtime' priority for this process automatically!


But right now I can do it in two ways:

- via Task Manager;
- via a second PowerShell window and a second "copy-paste".

But both methods take time (5-20 seconds), which is no good.
I cannot set
Start-Process ffmpeg -NoNewWindow -Wait -ArgumentList
to set the PriorityClass, as described here: stackoverflow, because my FFmpeg ArgumentList is too huge.
And I cannot use
start-process pwsh -ArgumentList
to open a new PowerShell window and set the PriorityClass there; it gives me an error.
My tested formula for YouTube:

ffmpeg -hide_banner -loglevel +repeat+level+info `
-f dshow -thread_queue_size 8192 -audio_buffer_size 100 -rtbufsize 2147M `
-i 'audio=@device_sw_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\{8E146464-DB61-4309-AFA1-3578E927E935}' `
-f dshow -thread_queue_size 8192 -audio_buffer_size 100 -rtbufsize 2147M `
-i 'audio=@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{472DBE92-04B3-48AB-A937-ED4CD6A85625}' `
-f gdigrab -hwaccel_device cuda -hwaccel cuda -hwaccel_output_format cuda -video_size 1920:1080 -thread_queue_size 8192 -draw_mouse 1 -show_region 0 -offset_x 0 -offset_y 0 -rtbufsize 2147M -probesize 500M -analyzeduration 500M -framerate 60 -i desktop `
-noautoscale -shortest `
-map '0:a:0' -c:a copy -f wav "FFmpeg_Screen_Recording_$(get-date -f yyyy-MM-dd_HH-mm-ss).wav" `
-map '1:a:0' -c:a copy -f wav "FFmpeg_WebCAM_Recording_$(get-date -f yyyy-MM-dd_HH-mm-ss).wav" `
-map '2:v:0' -c:v h264_nvenc -gpu 0 -vsync 1 -r 60 -video_size 1920x1080 -video_track_timescale 60 -copytb 0 -delay 0 -rc constqp -qp 0 -rc-lookahead 0 -zerolatency 1 -maxrate:v 512M -bufsize:v 512M -pix_fmt yuv444p -profile:v high444p -preset p1 -tune ull -level 6.2 -coder vlc -weighted_pred 0 `
-f mov "FFmpeg_Screen_Recording_$(get-date -f yyyy-MM-dd_HH-mm-ss).mov"



My current formula for High priority (first tab):


start-process pwsh -ArgumentList '-noexit' 
Get-Process -name 'pwsh' | foreach { $_.PriorityClass = "High" }
$startExe = ffmpeg -hide_banner -loglevel +repeat+level+info `
-f dshow -thread_queue_size 8192 -audio_buffer_size 100 -rtbufsize 2147M `
-i 'audio=@device_sw_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\{8E146464-DB61-4309-AFA1-3578E927E935}' `
-f dshow -thread_queue_size 8192 -audio_buffer_size 100 -rtbufsize 2147M `
-i 'audio=@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{472DBE92-04B3-48AB-A937-ED4CD6A85625}' `
-f gdigrab -hwaccel_device cuda -hwaccel cuda -hwaccel_output_format cuda -video_size 1920:1080 -thread_queue_size 8192 -draw_mouse 1 -show_region 0 -offset_x 0 -offset_y 0 -rtbufsize 2147M -probesize 500M -analyzeduration 500M -framerate 60 -i desktop `
-noautoscale -shortest `
-map '0:a:0' -c:a copy -f wav "FFmpeg_Screen_Recording_$(get-date -f yyyy-MM-dd_HH-mm-ss).wav" `
-map '1:a:0' -c:a copy -f wav "FFmpeg_WebCAM_Recording_$(get-date -f yyyy-MM-dd_HH-mm-ss).wav" `
-map '2:v:0' -c:v h264_nvenc -gpu 0 -vsync 1 -r 60 -video_size 1920x1080 -video_track_timescale 60 -copytb 0 -delay 0 -rc constqp -qp 0 -rc-lookahead 0 -zerolatency 1 -maxrate:v 512M -bufsize:v 512M -pix_fmt yuv444p -profile:v high444p -preset p1 -tune ull -level 6.2 -coder vlc -weighted_pred 0 `
-f mov "FFmpeg_Screen_Recording_$(get-date -f yyyy-MM-dd_HH-mm-ss).mov"



Second tab (second copy-paste action):


Get-Process -name 'ffmpeg' | foreach { $_.PriorityClass = "High" }



How can I do it in a single action?
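
A minimal sketch of one way to do this in a single paste, assuming the long argument list can be collected into a PowerShell array first ($ffmpegArgs below is a short placeholder, not the real formula): start ffmpeg with -PassThru, then raise the priority of the returned process object from the same script.

# Placeholder arguments; substitute the full formula's argument list here.
$ffmpegArgs = @(
    '-hide_banner', '-f', 'gdigrab', '-framerate', '60', '-i', 'desktop',
    "FFmpeg_Screen_Recording_$(Get-Date -f yyyy-MM-dd_HH-mm-ss).mov"
)

# -PassThru returns the process object, so its priority can be raised
# immediately, without Task Manager or a second PowerShell window.
$proc = Start-Process ffmpeg -ArgumentList $ffmpegArgs -NoNewWindow -PassThru
$proc.PriorityClass = [System.Diagnostics.ProcessPriorityClass]::High
$proc.WaitForExit()

Note that 'Realtime' generally requires an elevated session; without elevation Windows falls back to 'High'.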


-
C# on Linux: FFmpeg (FFMediaToolkit) System.IO.DirectoryNotFoundException: Cannot found the default FFmpeg directory
6 May 2021, by Jan Černý
I have a C# project in Rider and FFMediaToolkit installed via NuGet. I made an instance of MediaBuilder. When I hit run I get this error message:

/home/john/Projects/Slimulator/bin/Debug/net5.0/Slimulator /home/john/Projects/Slimulator/test_mazes/small-maze-food2.png
Loading file /home/john/Projects/Slimulator/test_mazes/small-maze-food2.png
Unhandled exception. System.IO.DirectoryNotFoundException: Cannot found the default FFmpeg directory.
On Windows you have to set "FFmpegLoader.FFmpegPath" with full path to the directory containing FFmpeg shared build ".dll" files
For more informations please see https://github.com/radek-k/FFMediaToolkit#setup
 at FFMediaToolkit.FFmpegLoader.LoadFFmpeg()
 at FFMediaToolkit.Encoding.Internal.OutputContainer.Create(String extension)
 at FFMediaToolkit.Encoding.MediaBuilder..ctor(String path, Nullable`1 format)
 at FFMediaToolkit.Encoding.MediaBuilder.CreateContainer(String path)
 at Slimulator.AnimationBuffer..ctor(String videoPath, Int32 height, Int32 width, Int32 frameRate) in /home/john/Projects/Slimulator/AnimationBuffer.cs:line 11
 at Slimulator.Simulation..ctor(Space space, String seed, String outputVideoPath) in /home/john/Projects/Slimulator/Simulation.cs:line 12
 at Slimulator.Launcher.Main(String[] args) in /home/john/Projects/Slimulator/Launcher.cs:line 8

Process finished with exit code 134.



When I go to https://github.com/radek-k/FFMediaToolkit#setup I find just this:




Linux - Download FFmpeg using your package manager.


You need to set FFmpegLoader.FFmpegPath with a full path to FFmpeg libraries.


If you want to use 64-bit FFmpeg, you have to disable the Build -> Prefer 32-bit option in
Visual Studio project properties.




I have already installed the FFmpeg package via pacman and I am still getting this error.

How can I fix this so I can use FFMediaToolkit without problems on Linux?

Thank you for your help.

EDIT 1: I use Arch Linux.
EDIT 2: There is a related issue on GitHub: https://github.com/radek-k/FFMediaToolkit/issues/80
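
The error message itself points at the likely fix: FFMediaToolkit could not locate a default FFmpeg directory on Linux, so FFmpegLoader.FFmpegPath has to be set explicitly before the first call that touches FFmpeg. A minimal sketch, assuming the pacman-installed libraries live in /usr/lib (check where libavcodec.so actually is) and using a hypothetical output path:

using FFMediaToolkit;
using FFMediaToolkit.Encoding;

class Program
{
    static void Main()
    {
        // Point FFMediaToolkit at the directory containing libavcodec.so,
        // libavformat.so, etc. (/usr/lib is typical on Arch Linux; adjust
        // if the libraries are installed elsewhere).
        FFmpegLoader.FFmpegPath = "/usr/lib";

        // Only after the path is set can MediaBuilder load FFmpeg.
        var builder = MediaBuilder.CreateContainer("/tmp/output.mp4");
    }
}

If the path is correct and loading still fails, a mismatch between the installed FFmpeg version and the version the FFMediaToolkit release was built against is another possible cause.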


-
NumPy array of a video changes from the original after writing into the same video
29 March 2021, by Rashiq
I have a video (test.mkv) that I have converted into a 4D NumPy array with shape (frame, height, width, color_channel). I have even managed to convert that array back into the same video (test_2.mkv) without altering anything. However, after reading this new test_2.mkv back into a new NumPy array, the array of the first video is different from the second video's array, i.e. their hashes don't match and the numpy.array_equal() function returns False. I have tried using both python-ffmpeg and scikit-video but cannot get the arrays to match.

Python-ffmpeg attempt:


import ffmpeg
import numpy as np
import hashlib

file_name = 'test.mkv'

# Get video dimensions and framerate
probe = ffmpeg.probe(file_name)
video_stream = next((stream for stream in probe['streams'] if stream['codec_type'] == 'video'), None)
width = int(video_stream['width'])
height = int(video_stream['height'])
frame_rate = video_stream['avg_frame_rate']

# Read video into buffer
out, error = (
 ffmpeg
 .input(file_name, threads=120)
 .output("pipe:", format='rawvideo', pix_fmt='rgb24')
 .run(capture_stdout=True)
)

# Convert video buffer to array
video = (
 np
 .frombuffer(out, np.uint8)
 .reshape([-1, height, width, 3])
)

# Convert array to buffer
video_buffer = (
 np.ndarray
 .flatten(video)
 .tobytes()
)

# Write buffer back into a video
process = (
 ffmpeg
 .input('pipe:', format='rawvideo', s='{}x{}'.format(width, height))
 .output("test_2.mkv", r=frame_rate)
 .overwrite_output()
 .run_async(pipe_stdin=True)
)
process.communicate(input=video_buffer)

# Read the newly written video
out_2, error = (
 ffmpeg
 .input("test_2.mkv", threads=40)
 .output("pipe:", format='rawvideo', pix_fmt='rgb24')
 .run(capture_stdout=True)
)

# Convert new video into array
video_2 = (
 np
 .frombuffer(out_2, np.uint8)
 .reshape([-1, height, width, 3])
)

# Video dimensions change
print(f'{video.shape} vs {video_2.shape}') # (844, 1080, 608, 3) vs (2025, 1080, 608, 3)
print(f'{np.array_equal(video, video_2)}') # False

# Hashes don't match
print(hashlib.sha256(bytes(video_2)).digest()) # b'\x88\x00\xc8\x0ed\x84!\x01\x9e\x08 \xd0U\x9a(\x02\x0b-\xeeA\xecU\xf7\xad0xa\x9e\\\xbck\xc3'
print(hashlib.sha256(bytes(video)).digest()) # b'\x9d\xc1\x07xh\x1b\x04I\xed\x906\xe57\xba\xf3\xf1k\x08\xfa\xf1\xfaM\x9a\xcf\xa9\t8\xf0\xc9\t\xa9\xb7'
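
One likely cause of the shape change above: the rawvideo pipe input in the write step never declares pix_fmt='rgb24' or a frame rate, so FFmpeg falls back to its rawvideo defaults before re-encoding with a lossy default codec, and the decoded bytes cannot match. A minimal sketch of a write step that avoids both problems, assuming the local FFmpeg build has libx264rgb (ffv1 would be an alternative lossless choice):

# Declare the raw buffer's pixel format and frame rate explicitly, and use a
# lossless RGB encoder so decoding can give back the same bytes.
process = (
    ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='rgb24',
           s='{}x{}'.format(width, height), framerate=frame_rate)
    .output('test_2.mkv', vcodec='libx264rgb', crf=0)
    .overwrite_output()
    .run_async(pipe_stdin=True)
)
process.communicate(input=video_buffer)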



Scikit-video attempt:


import skvideo.io as sk
import numpy as np
import hashlib

video_data = sk.vread('test.mkv')

sk.vwrite('test_2_ski.mkv', video_data)

video_data_2 = sk.vread('test_2_ski.mkv')

# Dimensions match but...
print(video_data.shape) # (844, 1080, 608, 3)
print(video_data_2.shape) # (844, 1080, 608, 3)

# ...array elements don't
print(np.array_equal(video_data, video_data_2)) # False

# Hashes don't match either
print(hashlib.sha256(bytes(video_data_2)).digest()) # b'\x8b?]\x8epD:\xd9B\x14\xc7\xba\xect\x15G\xfaRP\xde\xad&EC\x15\xc3\x07\n{a[\x80'
print(hashlib.sha256(bytes(video_data)).digest()) # b'\x9d\xc1\x07xh\x1b\x04I\xed\x906\xe57\xba\xf3\xf1k\x08\xfa\xf1\xfaM\x9a\xcf\xa9\t8\xf0\xc9\t\xa9\xb7'



I don't understand where I'm going wrong, and neither library's documentation highlights how to do this particular task. Any help is appreciated. Thank you.
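
For the scikit-video attempt, the shapes match but the pixel values still differ because vwrite re-encodes with a lossy default codec. A minimal sketch of a lossless round trip, again assuming an FFmpeg build with libx264rgb; the entries in outputdict are plain FFmpeg CLI flags passed through by scikit-video:

import numpy as np
import skvideo.io as sk

video_data = sk.vread('test.mkv')

# Force a lossless RGB encode so the decoded array can match exactly.
sk.vwrite('test_2_ski.mkv', video_data,
          outputdict={'-c:v': 'libx264rgb', '-crf': '0', '-pix_fmt': 'rgb24'})

video_data_2 = sk.vread('test_2_ski.mkv')
print(np.array_equal(video_data, video_data_2))

An exact match is only expected when the whole round trip stays in RGB with a truly lossless codec; any conversion through yuv420p along the way will change the values.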