
Media (1)
-
Richard Stallman and Free Software
19 October 2011
Updated: May 2013
Language: French
Type: Text
Other articles (65)
-
The MediaSPIP configuration space
29 November 2010 — The MediaSPIP configuration space is reserved for administrators. An "administer" menu link is usually displayed at the top of the page [1].
It lets you configure your site in detail.
Navigation within this configuration space is divided into three parts: the general site configuration, which among other things lets you modify the main information about the site (...)
-
Participating in its translation
10 April 2011 — You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. Simply sign up to the translators' mailing list to ask for more information.
MediaSPIP is currently available only in French and (...)
-
Retrieving information from the master site when installing an instance
26 November 2010 — Purpose
On the main site, a mutualisation instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the mutualisation instance.
It can therefore make good sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)
On other sites (13013)
-
I want to save a video with effects applied to it. onSuccess is not called [duplicate]
10 November 2016, by shriya — This question already has an answer here:
This is the command string I want to process.
String[] command = ("ffmpeg -y -i " + old_video_path
        + " -strict experimental -vf curves=vintage -s 240x320 -r 30"
        + " -aspect 4:3 -ab 48000 /storage/emulated/0/video.mp4").split(" ");
executeFFmpeg(command);

private void executeFFmpeg(String[] cmd) {
    try {
        ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
            @Override
            public void onStart() {
                Log.e("Progress", "onStart");
            }

            @Override
            public void onProgress(String message) {
                Log.e("Progress", "onProgress");
            }

            @Override
            public void onSuccess(String message) {
                Log.e("Progress", "onSuccess");
            }

            @Override
            public void onFailure(String message) {
                Log.e("Progress", "onFailure " + message);
            }

            @Override
            public void onFinish() {
                Log.e("Progress", "onFinish");
            }
        });
    } catch (FFmpegCommandAlreadyRunningException e) {
        // Handle the case where FFmpeg is already running
    }
}

In the log I get the onProgress and onFinish calls, but onSuccess is never called. How do I write the code to save this video to the SD card, and where?
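For what it's worth, the filter chain itself can be sanity-checked outside Android first. Below is a minimal sketch that runs the same options on a desktop through Python's subprocess; input.mp4 and output.mp4 are placeholder paths, not anything from the question:

import subprocess

# Same options as the command string above, tokenized explicitly so that
# paths containing spaces survive; confirms curves=vintage produces output.
subprocess.run([
    "ffmpeg", "-y", "-i", "input.mp4",
    "-strict", "experimental",
    "-vf", "curves=vintage",
    "-s", "240x320", "-r", "30",
    "-aspect", "4:3", "-ab", "48000",
    "output.mp4",
], check=True)

If this succeeds on a desktop build of ffmpeg, the remaining debugging can focus on the Android callback wiring rather than the command itself.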
-
Python: ani.save is very slow. Any alternatives for creating videos?
14 November 2023, by Czesleba — I'm doing some simple diffusion calculations. I save 2 matrices to 2 datasets every so many steps (every 2 s or so) in a single .h5 file. I then load that file in another script, create some figures (2 subplots etc., see/run the code; I know it could be prettier), and use matplotlib.animation to build the animation. In the very last lines of the code below, I run matplotlib's ani.save command.


And that's where the problem is. The animation itself is built within 2 seconds, even for my longer runs (14,755 frames, done in under 2 s at 8284 it/s), but after that the ani.save call takes forever (it didn't finish overnight). It constantly reserves about 10 GB of my RAM and seemingly never finishes. If you run the code below, be sure to set frames_to_do (defined near the top) to something like 30 or 60 to see that it does in fact save an mp4 for shorter videos. You can set it higher to watch the save time grow to something unreasonable.


I've been fiddling with this for 2 days now and I can't figure it out. I guess my question is: is there any way to create the video in a reasonable time like this? Or do I need something other than animation?


You should be able to just run the code. I'll provide a diffusion_array.h5 with 140 frames so you don't have to create a dummy file, if I can figure out how to upload something like this safely. (The results use dummy numbers for now; the diffusion coefficients etc. are not right yet.)
I used Dropbox. Not sure if that's allowed; if not, I'll delete the link, so PM me or something?




Here is the code:


import h5py
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
from matplotlib.animation import FuncAnimation
from tqdm import tqdm
import numpy as np


# saving the .mp4 afterwards takes forever, though

# Create an empty figure and axis
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 9), dpi=96)

# Load all saved arrays into a list
file_name = 'diffusion_array.h5'
loaded_u_arrays = []
loaded_h_arrays = []
frames_to_do = 14755  # hard-coded for now; use the commented-out version below once the slow mp4 conversion is cleared up

# with h5py.File(file_name, 'r') as hf:
# for key in hf.keys():
# if key.startswith('u_snapshot_'):
# loaded_u_arrays.append(hf[key][:])
# elif key.startswith('h_snapshot_'):
# loaded_h_arrays.append(hf[key][:])

with h5py.File(file_name, 'r') as hf:
    for i in range(frames_to_do):
        target_key1 = f'u_snapshot_{i:05d}'
        target_key2 = f'h_snapshot_{i:05d}'
        if target_key1 in hf:
            loaded_u_arrays.append(hf[target_key1][:])
        else:
            print(f'Dataset u for time step {i} not found in the file.')
        if target_key2 in hf:
            loaded_h_arrays.append(hf[target_key2][:])
        else:
            print(f'Dataset h for time step {i} not found in the file.')

# Create "empty" imshow objects
# First one
norm1 = mcolors.Normalize(vmin=140, vmax=400)
cmap1 = plt.get_cmap('hot')
cmap1.set_under('0.85')
im1 = ax1.imshow(loaded_u_arrays[0], cmap=cmap1, norm=norm1)
ax1.set_title('Diffusion Heatmap')
ax1.set_xlabel('X')
ax1.set_ylabel('Y')
cbar_ax = fig.add_axes([0.05, 0.15, 0.03, 0.7])
cbar_ax.set_xlabel('$T$ / K', labelpad=20)
fig.colorbar(im1, cax=cbar_ax)


# Second one
ax2 = plt.subplot(1, 2, 2)
norm2 = mcolors.Normalize(vmin=-0.1, vmax=5)
cmap2 = plt.get_cmap('viridis')
cmap2.set_under('0.85')
im2 = ax2.imshow(loaded_h_arrays[0], cmap=cmap2, norm=norm2)
ax2.set_title('Diffusion Hydrogen')
ax2.set_xlabel('X')
ax2.set_ylabel('Y')
cbar_ax = fig.add_axes([0.9, 0.15, 0.03, 0.7])
cbar_ax.set_xlabel('HD in ml/100g', labelpad=20)
fig.colorbar(im2, cax=cbar_ax)

# General
fig.subplots_adjust(right=0.85)
time_text = ax2.text(-15, 0.80, f'Time: {0} s', transform=plt.gca().transAxes, color='black', fontsize=20)

# Annotations
# Heat 1
marker_style = dict(marker='o', markersize=6, markerfacecolor='black', markeredgecolor='black')
ax1.scatter(*[10, 40], s=marker_style['markersize'], c=marker_style['markerfacecolor'],
            edgecolors=marker_style['markeredgecolor'])
ann_heat1 = ax1.annotate(f'Temp: {loaded_u_arrays[0][40, 10]:.0f}', xy=[10, 40], xycoords='data',
                         xytext=([10, 40][0], [10, 40][1] + 48), textcoords='data',
                         arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color='black')
# Heat 2
ax1.scatter(*[140, 85], s=marker_style['markersize'], c=marker_style['markerfacecolor'],
            edgecolors=marker_style['markeredgecolor'])
ann_heat2 = ax1.annotate(f'Temp: {loaded_u_arrays[0][85, 140]:.0f}', xy=[140, 85], xycoords='data',
                         xytext=([140, 85][0] + 55, [140, 85][1] + 3), textcoords='data',
                         arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color='black')

# Diffusion 1
marker_style = dict(marker='o', markersize=6, markerfacecolor='black', markeredgecolor='black')
ax2.scatter(*[10, 40], s=marker_style['markersize'], c=marker_style['markerfacecolor'],
            edgecolors=marker_style['markeredgecolor'])
ann_diff1 = ax2.annotate(f'HD: {loaded_h_arrays[0][40, 10]:.0f}', xy=[10, 40], xycoords='data',
                         xytext=([10, 40][0], [10, 40][1] + 48), textcoords='data',
                         arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color='black')
# Diffusion 2
ax2.scatter(*[140, 85], s=marker_style['markersize'], c=marker_style['markerfacecolor'],
            edgecolors=marker_style['markeredgecolor'])
ann_diff2 = ax2.annotate(f'HD: {loaded_h_arrays[0][85, 140]:.0f}', xy=[140, 85], xycoords='data',
                         xytext=([140, 85][0] + 55, [140, 85][1] + 3), textcoords='data',
                         arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color='black')


# Function to update the animation
def update(frame, *args):
    loaded_u_array, loaded_h_array = args

    s_per_frame = 2  # during weld/cooling a state is saved every 2 s
    frames_to_room_temp = 7803  # that means this many frames need to be animated
    dt_big = 87  # during "just diffusion" only every 10th frame is saved, but 87 s pass between saves

    # Update the time step shown
    if frame <= frames_to_room_temp:
        im1.set_data(loaded_u_array[frame])
        im2.set_data(loaded_h_array[frame])
        time_text.set_text(f'Time: {frame * s_per_frame} s')
    else:
        im1.set_data(loaded_u_array[frame])
        im2.set_data(loaded_h_array[frame])
        calc_time = int(((2 * frames_to_room_temp) + (frame - frames_to_room_temp) * 87) / 3600)
        time_text.set_text(f'Time: {calc_time} s')

    # Annotate some points
    ann_heat1.set_text(f'Temp: {loaded_u_arrays[frame][40, 10]:.0f}')
    ann_heat2.set_text(f'Temp: {loaded_u_arrays[frame][85, 140]:.0f}')
    ann_diff1.set_text(f'HD: {loaded_h_arrays[frame][40, 10]:.0f}')
    ann_diff2.set_text(f'HD: {loaded_h_arrays[frame][85, 140]:.0f}')

    return im1, im2  # Return the updated artists


# Create the animation without displaying it
ani = FuncAnimation(fig, update, frames=frames_to_do, repeat=False, blit=True, interval=1,
                    fargs=(loaded_u_arrays, loaded_h_arrays))  # frames=len(loaded_u_arrays)

# Create the progress bar with tqdm
with tqdm(total=frames_to_do, desc='Creating Animation') as pbar:  # total=len(loaded_u_arrays)
    for i in range(frames_to_do):  # for i in range(len(loaded_u_arrays)):
        update(i, loaded_u_arrays, loaded_h_arrays)  # Manually update the frame with both datasets
        pbar.update(1)  # Update the progress bar

# Save the animation as a video file (e.g., MP4)
print("Converting to .mp4 now. This may take some time. This is normal, wait for Python to finish this process.")
ani.save('diffusion_animation.mp4', writer='ffmpeg', dpi=96, fps=60)

# Close the figure to prevent it from being displayed
plt.close(fig)
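
One alternative worth sketching here: ani.save re-invokes update for every frame while piping to ffmpeg, so the manual tqdm loop above does not pre-render anything and all the draws happen again inside save. Driving matplotlib.animation.FFMpegWriter directly writes each frame to the encoder as soon as it is drawn and gives an honest progress bar. This is a minimal sketch assuming the same fig, update, frames_to_do, and loaded arrays as above:

from matplotlib.animation import FFMpegWriter

writer = FFMpegWriter(fps=60)
with writer.saving(fig, 'diffusion_animation_writer.mp4', dpi=96):
    for i in tqdm(range(frames_to_do), desc='Writing frames'):
        update(i, loaded_u_arrays, loaded_h_arrays)  # redraw both panels for frame i
        writer.grab_frame()  # pipe the current canvas straight to ffmpeg

If this is still slow, the per-frame draw itself is the bottleneck rather than the encoder, and reducing the figure size or dpi is the usual next step.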




-
Configure FFmpeg to save the recording at specified time intervals as separate output files
27 September 2023, by John Doe — How can I configure FFmpeg to save the recording into separate output files at specific time intervals, for example every 25 seconds, without interrupting the recording? By default, FFmpeg saves the entire recorded video in a single output file, but I want a separate output file for each 25-second segment.
What steps should I follow to achieve this?


I tried Python multiprogramming to run the recording and saving tasks simultaneously, but it didn't work.

How about using the @ffmpeg.on("progress") event?
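
For the splitting itself, ffmpeg's segment muxer is the stock mechanism for cutting one continuous recording into fixed-length files without stopping it: -f segment together with -segment_time. A minimal sketch via Python's subprocess, where input.mp4 stands in for whatever capture source is actually being recorded:

import subprocess

# Cut the stream into 25-second pieces while the recording keeps running.
# -c copy avoids re-encoding (segments split on keyframes, so lengths are
# approximate); -reset_timestamps 1 makes every output file start at t=0.
subprocess.run([
    "ffmpeg", "-y",
    "-i", "input.mp4",
    "-c", "copy",
    "-f", "segment",
    "-segment_time", "25",
    "-reset_timestamps", "1",
    "out_%03d.mp4",
], check=True)

This yields out_000.mp4, out_001.mp4, and so on as the recording runs, with no second process or progress event needed.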