
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (104)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here only contains the MediaSPIP sources in the standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to make other modifications (...) -
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
Improvements to the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
To do this, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling the use of Chosen on the public site and specifying which form elements to improve, for example select[multiple] for multiple-selection lists (...)
On other sites (9323)
-
Doesn't Python's ffmpy work with temporary files made using tempfile.TemporaryFile?
8 June 2017, by tkhurana96
I am making a function whose purpose is to take an mp3 file and analyse and process it. So, taking help from this SO answer, I am creating a temporary wav file, and then using the Python ffmpy library I am trying to convert the mp3 (the actual given file) to a wav file. The catch is that I am giving the temporary wav file created above to ffmpy as the output file to store the result in, i.e. I am doing this:

import ffmpy
import tempfile
from scipy.io import wavfile
# audioFile variable is known here
tempWavFile = tempfile.TemporaryFile(suffix="wav")
ff_obj = ffmpy.FFmpeg(
    global_options="hide_banner",
    inputs={audioFile: None},
    outputs={tempWavFile: " -acodec pcm_s16le -ac 1 -ar 44000"}
)
ff_obj.run()
[fs, frames] = wavfile.read(tempWavFile)
print(" fs is: ", fs)
print(" frames is: ", frames)But on line
ff_obj.run()
this error occurs :File "/home/tushar/.local/lib/python3.5/site-packages/ffmpy.py", line 95, in run
stderr=stderr
File "/usr/lib/python3.5/subprocess.py", line 947, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.5/subprocess.py", line 1490, in _execute_child
restore_signals, start_new_session, preexec_fn)
TypeError: Can't convert '_io.TextIOWrapper' object to str implicitly

So, my questions are:
- When I replaced tempWavFile = tempfile.TemporaryFile(suffix="wav") with tempWavFile = tempfile.mktemp('.wav'), no error occurs. Why is that?
- What does this error mean, what causes it, and how can it be corrected? (A possible workaround is sketched after this question.)
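The original question ends there; what follows is not part of it, only a likely explanation and a sketch. tempfile.TemporaryFile() returns an open file object with no usable name, and ffmpy places the keys of its outputs dict directly onto the ffmpeg command line, which is why subprocess refuses to convert the file object to a string. tempfile.mktemp() returns a plain string path, so that variant runs (although mktemp neither creates nor reserves the file). A minimal sketch of a safer workaround, reusing the audioFile variable from the question and passing the .name of a NamedTemporaryFile instead:

import ffmpy
import tempfile
from scipy.io import wavfile

# audioFile is assumed to hold the path of the input mp3, as in the question.

# NamedTemporaryFile creates a real file on disk and exposes its path as the
# plain string tmp.name, which ffmpy can place on the ffmpeg command line.
tmp = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
tmp.close()  # close it so ffmpeg can write to the path itself

ff_obj = ffmpy.FFmpeg(
    global_options="-hide_banner",
    inputs={audioFile: None},
    outputs={tmp.name: "-y -acodec pcm_s16le -ac 1 -ar 44000"}
)
ff_obj.run()

fs, frames = wavfile.read(tmp.name)
print("fs is:", fs)
print("frames is:", frames)

The -y option simply lets ffmpeg overwrite the already-created temporary file; since delete=False leaves it on disk, remove tmp.name when you are done with it.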
-
ffmpeg remove Non-Monotonous DTS frames
24 March, by MiGu3X
Is it possible to stream copy a .ts file to another .ts file while removing the non-monotonous DTS frames? These frames usually also have a smaller resolution than the video I am trying to copy. I attempted this with VideoReDo but I cannot seem to make it work.

Also, the MediaInfo for the video after remuxing to Matroska shows this:
Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L3.2
Format settings : CABAC / 2 Ref Frames
Format settings, CABAC : Yes
Format settings, RefFrames : 2 frames
Codec ID : V_MPEG4/ISO/AVC
Duration : 2 h 35 min
Nominal bit rate : 6 000 kb/s
Width : 896 pixels
Original width : 1 280 pixels
Height : 504 pixels
Original height : 720 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 30.000 FPS
Original frame rate : 60.000 FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.443
Writing library : x264 core 148 r2579M 73ae2d1
Encoding settings : cabac=1 / ref=2 / deblock=1:0:0 / analyse=0x3:0x113 / me=hex / subme=2 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=0 / me_range=16 / chroma_me=1 / trellis=0 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=0 / threads=4 / lookahead_threads=1 / sliced_threads=0 / nr=250 / decimate=1 / interlaced=0 / bluray_compat=0 / stitchable=1 / constrained_intra=0 / bframes=0 / weightp=1 / keyint=122 / keyint_min=12 / scenecut=40 / intra_refresh=0 / rc_lookahead=10 / rc=2pass / mbtree=1 / bitrate=6000 / ratetol=1.0 / qcomp=0.60 / qpmin=5 / qpmax=69 / qpstep=4 / cplxblur=20.0 / qblur=0.5 / ip_ratio=1.40 / aq=1:1.00
Default : Yes
Forced : No




Thanks for the help!
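Not part of the original question, but a common first attempt with ffmpeg alone is to keep the stream copy and have the demuxer regenerate the broken timestamps rather than drop packets. This repairs the DTS/PTS sequence instead of removing the odd-resolution frames. A rough sketch, assuming ffmpeg is on the PATH and that input.ts / fixed.ts are placeholder names:

import subprocess

# Placeholder file names; replace with the real input/output paths.
src = "input.ts"
dst = "fixed.ts"

cmd = [
    "ffmpeg",
    "-fflags", "+genpts",               # regenerate missing/broken timestamps on input
    "-i", src,
    "-c", "copy",                       # stream copy: no re-encode of audio or video
    "-avoid_negative_ts", "make_zero",  # shift timestamps so the output starts at 0
    dst,
]
subprocess.run(cmd, check=True)

Actually cutting out the foreign-resolution frames while keeping -c copy is generally not possible, because a stream copy cannot drop arbitrary frames without breaking the GOP structure; that part needs an editor or a re-encode.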


-
FFMPEG Determine average color of an area of a video
12 November 2019, by Naved Khan
I have a use case where I'd want to insert one of two watermarks - one designed for a dark-ish background, the other for a light background - into a video. Let's say that I want to do this in the top right corner of the video.
How do I determine the average color of the top right section of the video? And once I have it, how do I decide which watermark to use based on that average color?
I have a solution right now where I am taking equally spaced screenshots and then measuring the average color, but it’s excruciatingly slow, especially for longer videos.
# Calculate average color
require 'streamio-ffmpeg'   # provides FFMPEG::Movie (assumed from the snippet)
require 'mini_magick'
require 'securerandom'

# Reference colours; not shown in the original snippet, assumed to be plain RGB triples.
black = [0, 0, 0]
white = [255, 255, 255]

black_distances = []
white_distances = []

# video_file is assumed to hold the path of the video being processed
movie = FFMPEG::Movie.new(video_file)

(0..movie.duration / 10).each do |second|
  # extract a frame
  filename = "tmp/watermark/#{SecureRandom.uuid}.jpg"
  movie.screenshot filename.to_s, seek_time: second

  # analyse frame for color distance
  frame = MiniMagick::Image.open(filename)
  frame.crop('20%x20%+80%+0')
  frame.resize('1x1')
  pixel = frame.get_pixels.flatten

  distance_from_black = Math.sqrt(((black[0] - pixel[0])**2 + (black[1] - pixel[1])**2 + (black[2] - pixel[2])**2))
  distance_from_white = Math.sqrt(((white[0] - pixel[0])**2 + (white[1] - pixel[1])**2 + (white[2] - pixel[2])**2))

  black_distances.push distance_from_black
  white_distances.push distance_from_white

  File.delete(filename) if File.exist?(filename)
end

average_black_distance = black_distances.reduce(:+).to_f / black_distances.size
average_white_distance = white_distances.reduce(:+).to_f / white_distances.size

I am also confused about how to use the resulting average_black_distance and average_white_distance to determine which watermark to use.
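The question ends there. As a sketch of a faster alternative (not from the original post): ffmpeg can do the sampling, cropping and averaging in a single pass and write the resulting pixels to a pipe, so no intermediate screenshot files are needed. The snippet below assumes ffmpeg is on the PATH, the file names are placeholders, and scale=1:1 is used to collapse the cropped corner to one pixel as a stand-in for its average colour:

import math
import subprocess

video_file = "input.mp4"  # placeholder path

# One frame every 10 seconds, crop the top-right 20% x 20% of the picture,
# shrink it to a single pixel and emit raw RGB bytes: 3 bytes per sampled frame.
cmd = [
    "ffmpeg", "-hide_banner", "-loglevel", "error",
    "-i", video_file,
    "-vf", "fps=1/10,crop=iw/5:ih/5:iw*4/5:0,scale=1:1",
    "-f", "rawvideo", "-pix_fmt", "rgb24",
    "-",
]
raw = subprocess.run(cmd, capture_output=True, check=True).stdout

pixels = [raw[i:i + 3] for i in range(0, len(raw) - 2, 3)]
black_distances = [math.dist(p, (0, 0, 0)) for p in pixels]
white_distances = [math.dist(p, (255, 255, 255)) for p in pixels]

average_black_distance = sum(black_distances) / len(black_distances)
average_white_distance = sum(white_distances) / len(white_distances)

# If the corner sits closer to white than to black, a dark watermark will stand
# out better there, and vice versa (watermark file names are placeholders).
watermark = "dark_watermark.png" if average_white_distance < average_black_distance else "light_watermark.png"
print(average_black_distance, average_white_distance, watermark)

This also suggests an answer to the second half of the question: comparing the two averages tells you whether the corner is closer to black or to white, and the watermark designed for that kind of background is the one to overlay.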