
Other articles (44)
-
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...) -
Farm management
2 March 2010. The farm as a whole is managed by "super admins".
Certain settings can be adjusted to meet the needs of the different channels.
Initially it uses the "Gestion de mutualisation" plugin -
Farm notifications
1 December 2010. To ensure proper management of the farm, several events must be notified, both to the user and to all of the farm's administrators, when specific actions occur.
Status change notifications
When the status of an instance changes, all of the farm's administrators must be notified of the modification, as well as the instance's administrator user.
When a channel is requested
Change to "publie" status
Change to (...)
On other sites (4540)
-
Python: ani.save very slow. Any alternatives to create videos?
14 November 2023, by Czeskleba. I'm doing some simple diffusion calculations. I save two matrices to two datasets every so many steps (every 2 s or so) in a single .h5 file. After that I load the file in another script, create some figures (two subplots etc., see/run the code; I know it could be prettier), and then use matplotlib.animation to make the animation. In the code below, in the very last lines, I run matplotlib's ani.save command.
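
For context, the loading code below expects datasets named u_snapshot_00000, h_snapshot_00000, and so on. A minimal sketch of a writer that would produce such a file (hypothetical; the original simulation code is not shown, and the array shapes here are placeholders) could look like this:

import h5py
import numpy as np

# Hypothetical writer matching the reader below: one dataset per saved
# time step, keyed 'u_snapshot_00000', 'h_snapshot_00000', ...
with h5py.File('diffusion_array.h5', 'w') as hf:
    for i in range(140):
        u = np.random.rand(100, 200)  # placeholder temperature field
        h = np.random.rand(100, 200)  # placeholder hydrogen field
        hf.create_dataset(f'u_snapshot_{i:05d}', data=u)
        hf.create_dataset(f'h_snapshot_{i:05d}', data=h)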


And that's where the problem is. The animation itself is created within 2 seconds, even for my longer runs (14,755 frames, done in under 2 s at 8284 it/s), but after that, ani.save in line 144 takes forever (it didn't finish overnight). It constantly reserves/uses about 10 GB of my RAM but seemingly never finishes. If you run the code below, be sure to set frames_to_do (line 20) to something like 30 or 60 to see that it does in fact save an mp4 for shorter videos. You can set it higher to see how quickly the save time grows to something unreasonable.


I've been fiddling with this for two days now and I can't figure it out. I guess my question is: is there any way to create the video in a reasonable time like this? Or do I need something other than animation?


You should be able to just run the code. I'll provide a diffusion_array.h5 with 140 frames so you don't have to create a dummy file, if I can figure out how to upload something like this safely. (The results use dummy numbers for now; the diffusion coefficients etc. are not right yet.)
I used Dropbox. Not sure if that's allowed; if not, I'll delete the link and, uh, PM me or something?




Here is the code:


import h5py
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
from matplotlib.animation import FuncAnimation
from tqdm import tqdm
import numpy as np


# saving the .mp4 after tho takes forever

# Create an empty figure and axis
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 9), dpi=96)

# Load all saved arrays into a list
file_name = 'diffusion_array.h5'
loaded_u_arrays = []
loaded_h_arrays = []
frames_to_do = 14755 # for now like this, use # version once the slow mp4 convert is cleared up

# with h5py.File(file_name, 'r') as hf:
#     for key in hf.keys():
#         if key.startswith('u_snapshot_'):
#             loaded_u_arrays.append(hf[key][:])
#         elif key.startswith('h_snapshot_'):
#             loaded_h_arrays.append(hf[key][:])

with h5py.File(file_name, 'r') as hf:
    for i in range(frames_to_do):
        target_key1 = f'u_snapshot_{i:05d}'
        target_key2 = f'h_snapshot_{i:05d}'
        if target_key1 in hf:
            loaded_u_arrays.append(hf[target_key1][:])
        else:
            print(f'Dataset u for time step {i} not found in the file.')
        if target_key2 in hf:
            loaded_h_arrays.append(hf[target_key2][:])
        else:
            print(f'Dataset h for time step {i} not found in the file.')

# Create "empty" imshow objects
# First one
norm1 = mcolors.Normalize(vmin=140, vmax=400)
cmap1 = plt.get_cmap('hot')
cmap1.set_under('0.85')
im1 = ax1.imshow(loaded_u_arrays[0], cmap=cmap1, norm=norm1)
ax1.set_title('Diffusion Heatmap')
ax1.set_xlabel('X')
ax1.set_ylabel('Y')
cbar_ax = fig.add_axes([0.05, 0.15, 0.03, 0.7])
cbar_ax.set_xlabel('$T$ / K', labelpad=20)
fig.colorbar(im1, cax=cbar_ax)


# Second one
ax2 = plt.subplot(1, 2, 2)
norm2 = mcolors.Normalize(vmin=-0.1, vmax=5)
cmap2 = plt.get_cmap('viridis')
cmap2.set_under('0.85')
im2 = ax2.imshow(loaded_h_arrays[0], cmap=cmap2, norm=norm2)
ax2.set_title('Diffusion Hydrogen')
ax2.set_xlabel('X')
ax2.set_ylabel('Y')
cbar_ax = fig.add_axes([0.9, 0.15, 0.03, 0.7])
cbar_ax.set_xlabel('HD in ml/100g', labelpad=20)
fig.colorbar(im2, cax=cbar_ax)

# General
fig.subplots_adjust(right=0.85)
time_text = ax2.text(-15, 0.80, f'Time: {0} s', transform=plt.gca().transAxes, color='black', fontsize=20)

# Annotations
# Heat 1
marker_style = dict(marker='o', markersize=6, markerfacecolor='black', markeredgecolor='black')
ax1.scatter(*[10, 40], s=marker_style['markersize'], c=marker_style['markerfacecolor'],
            edgecolors=marker_style['markeredgecolor'])
ann_heat1 = ax1.annotate(f'Temp: {loaded_u_arrays[0][40, 10]:.0f}', xy=[10, 40], xycoords='data',
                         xytext=([10, 40][0], [10, 40][1] + 48), textcoords='data',
                         arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color='black')
# Heat 2
ax1.scatter(*[140, 85], s=marker_style['markersize'], c=marker_style['markerfacecolor'],
            edgecolors=marker_style['markeredgecolor'])
ann_heat2 = ax1.annotate(f'Temp: {loaded_u_arrays[0][85, 140]:.0f}', xy=[140, 85], xycoords='data',
                         xytext=([140, 85][0] + 55, [140, 85][1] + 3), textcoords='data',
                         arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color='black')

# Diffusion 1
marker_style = dict(marker='o', markersize=6, markerfacecolor='black', markeredgecolor='black')
ax2.scatter(*[10, 40], s=marker_style['markersize'], c=marker_style['markerfacecolor'],
            edgecolors=marker_style['markeredgecolor'])
ann_diff1 = ax2.annotate(f'HD: {loaded_h_arrays[0][40, 10]:.0f}', xy=[10, 40], xycoords='data',
                         xytext=([10, 40][0], [10, 40][1] + 48), textcoords='data',
                         arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color='black')
# Diffusion 2
ax2.scatter(*[140, 85], s=marker_style['markersize'], c=marker_style['markerfacecolor'],
            edgecolors=marker_style['markeredgecolor'])
ann_diff2 = ax2.annotate(f'HD: {loaded_h_arrays[0][85, 140]:.0f}', xy=[140, 85], xycoords='data',
                         xytext=([140, 85][0] + 55, [140, 85][1] + 3), textcoords='data',
                         arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color='black')


# Function to update the animation
def update(frame, *args):
    loaded_u_array, loaded_h_array = args

    s_per_frame = 2  # during weld/cooling you save a state every 2 s
    frames_to_room_temp = 7803  # that means this many frames need to be animated
    dt_big = 87  # during "just diffusion" you save every 10th frame, but 87 s pass in those

    # Update the time step shown
    if frame <= frames_to_room_temp:
        im1.set_data(loaded_u_array[frame])
        im2.set_data(loaded_h_array[frame])
        time_text.set_text(f'Time: {frame * s_per_frame} s')
    else:
        im1.set_data(loaded_u_array[frame])
        im2.set_data(loaded_h_array[frame])
        calc_time = int(((2 * frames_to_room_temp) + (frame - frames_to_room_temp) * 87) / 3600)
        time_text.set_text(f'Time: {calc_time} s')

    # Annotate some points
    ann_heat1.set_text(f'Temp: {loaded_u_arrays[frame][40, 10]:.0f}')
    ann_heat2.set_text(f'Temp: {loaded_u_arrays[frame][85, 140]:.0f}')
    ann_diff1.set_text(f'HD: {loaded_h_arrays[frame][40, 10]:.0f}')
    ann_diff2.set_text(f'HD: {loaded_h_arrays[frame][85, 140]:.0f}')

    return im1, im2  # Return the updated artists


# Create the animation without displaying it
ani = FuncAnimation(fig, update, frames=frames_to_do, repeat=False, blit=True, interval=1,
                    fargs=(loaded_u_arrays, loaded_h_arrays))  # frames=len(loaded_u_arrays)

# Create the progress bar with tqdm
with tqdm(total=frames_to_do, desc='Creating Animation') as pbar:  # total=len(loaded_u_arrays)
    for i in range(frames_to_do):  # for i in range(len(loaded_u_arrays)):
        update(i, loaded_u_arrays, loaded_h_arrays)  # Manually update the frame with both datasets
        pbar.update(1)  # Update the progress bar

# Save the animation as a video file (e.g., MP4)
print("Converting to .mp4 now. This may take some time. This is normal, wait for Python to finish this process.")
ani.save('diffusion_animation.mp4', writer='ffmpeg', dpi=96, fps=60)

# Close the figure to prevent it from being displayed
plt.close(fig)
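
One possible alternative, sketched here under the assumption that the update() function above renders each frame correctly: drive the figure manually and stream frames to ffmpeg with matplotlib's FFMpegWriter, instead of letting FuncAnimation.save iterate over the whole animation. Frames are piped to ffmpeg as they are drawn, so the complete animation never has to be assembled in memory first.

from matplotlib.animation import FFMpegWriter

# Minimal sketch (assumes fig, update, frames_to_do, loaded_u_arrays and
# loaded_h_arrays from the script above): draw each frame once, then
# hand it straight to the ffmpeg pipe.
writer = FFMpegWriter(fps=60)
with writer.saving(fig, 'diffusion_animation_writer.mp4', dpi=96):
    for i in tqdm(range(frames_to_do), desc='Writing MP4'):
        update(i, loaded_u_arrays, loaded_h_arrays)
        writer.grab_frame()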




-
ffmpeg GRAY16 stream over network
28 November 2023, by Norbert P. I'm working on a school project where we need to use depth cameras. The camera produces color and depth (in other words, a 16-bit grayscale image). We decided to use ffmpeg, since compression could be very useful later on. For now we have a basic stream running from one PC to the other. The settings include (a rough command-line equivalent is sketched after the list):


- rtmp
- flv as container
- pixel format AV_PIX_FMT_YUV420P
- codec AV_CODEC_ID_H264
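
For reference, a roughly equivalent command-line version of that baseline (a sketch only: the input file is a placeholder, and the URL is the one used in the client code below) might be:

ffmpeg -re -i input.mp4 -pix_fmt yuv420p -c:v libx264 -f flv rtmp://localhost:9999/stream/stream1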










The problem we are having is with the grayscale image. Not every codec can cope with this format, and likewise not every protocol works with every codec. I got some settings "working", but the receiver side just hangs in the avformat_open_input() method.
I have also tested it from the command line, with ffmpeg listening for the connection, and the same thing happens.


I include a minimal "working" example of the client code. The server side can be tested with "ffmpeg.exe -f apng -listen 1 -i rtmp://localhost:9999/stream/stream1 -c copy -f apng -listen 1 rtmp://localhost:2222/live/l" or with the code below. I get no warnings; ffmpeg is the newest version, installed with "vcpkg install --triplet x64-windows ffmpeg[ffmpeg,ffprobe,zlib]" on Windows or via the package manager on Linux.


The question: did I miss something? How do I get this to work? If you have any better ideas, I would gladly consider them. In the end I need 16 bits of lossless transmission; it could be split between channels etc., which I also tried, with the same result.


Client code that would capture the camera and connect to the server:


extern "C" {
#include <libavutil></libavutil>opt.h>
#include <libavcodec></libavcodec>avcodec.h>
#include <libavutil></libavutil>channel_layout.h>
#include <libavutil></libavutil>common.h>
#include <libavformat></libavformat>avformat.h>
#include <libavcodec></libavcodec>avcodec.h>
#include <libavutil></libavutil>imgutils.h>
}

int main() {

    std::string container = "apng";
    AVCodecID codec_id = AV_CODEC_ID_APNG;
    AVPixelFormat pixFormat = AV_PIX_FMT_GRAY16BE;

    AVFormatContext* format_ctx;
    AVCodec* out_codec;
    AVStream* out_stream;
    AVCodecContext* out_codec_ctx;
    AVFrame* frame;
    uint8_t* data;

    std::string server = "rtmp://localhost:9999/stream/stream1";

    int width = 1280, height = 720, fps = 30, bitrate = 1000000;

    // initialize format context for output with the chosen container and no filename
    avformat_alloc_output_context2(&format_ctx, nullptr, container.c_str(), server.c_str());
    if (!format_ctx) {
        return 1;
    }

    // AVIOContext for accessing the resource indicated by url
    if (!(format_ctx->oformat->flags & AVFMT_NOFILE)) {
        int avopen_ret = avio_open(&format_ctx->pb, server.c_str(), AVIO_FLAG_WRITE);
        if (avopen_ret < 0) {
            fprintf(stderr, "failed to open stream output context, stream will not work\n");
            return 1;
        }
    }


    const AVCodec* tmp_out_codec = avcodec_find_encoder(codec_id);
    //const AVCodec* tmp_out_codec = avcodec_find_encoder_by_name("hevc");
    out_codec = const_cast<AVCodec*>(tmp_out_codec);
    if (!(out_codec)) {
        fprintf(stderr, "Could not find encoder for '%s'\n",
                avcodec_get_name(codec_id));
        return 1;
    }

    out_stream = avformat_new_stream(format_ctx, out_codec);
    if (!out_stream) {
        fprintf(stderr, "Could not allocate stream\n");
        return 1;
    }

    out_codec_ctx = avcodec_alloc_context3(out_codec);

    const AVRational timebase = { 60000, fps };
    const AVRational dst_fps = { fps, 1 };
    av_log_set_level(AV_LOG_VERBOSE);
    //codec_ctx->codec_tag = 0;
    //codec_ctx->codec_id = codec_id;
    out_codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
    out_codec_ctx->width = width;
    out_codec_ctx->height = height;
    out_codec_ctx->gop_size = 1;
    out_codec_ctx->time_base = timebase;
    out_codec_ctx->pix_fmt = pixFormat;
    out_codec_ctx->framerate = dst_fps;
    out_codec_ctx->time_base = av_inv_q(dst_fps); // overwrites the timebase assigned above
    out_codec_ctx->bit_rate = bitrate;
    //if (fctx->oformat->flags & AVFMT_GLOBALHEADER)
    //{
    //    codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    //}

    out_stream->time_base = out_codec_ctx->time_base; // will be set afterwards by avformat_write_header to 1/1000

    int ret = avcodec_parameters_from_context(out_stream->codecpar, out_codec_ctx);
    if (ret < 0)
    {
        fprintf(stderr, "Could not initialize stream codec parameters!\n");
        return 1;
    }

    AVDictionary* codec_options = nullptr;
    av_dict_set(&codec_options, "tune", "zerolatency", 0);

    // open video encoder
    ret = avcodec_open2(out_codec_ctx, out_codec, &codec_options);
    if (ret < 0)
    {
        fprintf(stderr, "Could not open video encoder!\n");
        return 1;
    }
    av_dict_free(&codec_options);

    out_stream->codecpar->extradata_size = out_codec_ctx->extradata_size;
    out_stream->codecpar->extradata = static_cast<uint8_t*>(av_mallocz(out_codec_ctx->extradata_size));
    memcpy(out_stream->codecpar->extradata, out_codec_ctx->extradata, out_codec_ctx->extradata_size);

    av_dump_format(format_ctx, 0, server.c_str(), 1);

    frame = av_frame_alloc();

    int sz = av_image_get_buffer_size(pixFormat, width, height, 32);
#ifdef _WIN32
    data = (uint8_t*)_aligned_malloc(sz, 32);
    if (data == NULL)
        return ENOMEM;
#else
    ret = posix_memalign(reinterpret_cast<void**>(&data), 32, sz);
#endif
    av_image_fill_arrays(frame->data, frame->linesize, data, pixFormat, width, height, 32);
    frame->format = pixFormat;
    frame->width = width;
    frame->height = height;
    frame->pts = 1;
    if (avformat_write_header(format_ctx, nullptr) < 0) // Header making problems!!!
    {
        fprintf(stderr, "Could not write header!\n");
        return 1;
    }

    printf("stream time base = %d / %d \n", out_stream->time_base.num, out_stream->time_base.den);

    double inv_stream_timebase = (double)out_stream->time_base.den / (double)out_stream->time_base.num;
    printf("Init OK\n");
    /* Init phase end */
    int dts = 0;
    int frameNo = 0;

    while (true) {
        // Fill the dummy frame with a moving gradient.
        // The samples are 16-bit, hence the uint16_t view of the buffer.
        for (int y = 0; y < height; y++) {
            uint16_t color = ((y + frameNo) * 256) % (256 * 256);
            for (int x = 0; x < width; x++) {
                reinterpret_cast<uint16_t*>(data)[x + y * width] = color;
            }
        }

        memcpy(frame->data[0], data, 1280 * 720 * sizeof(uint16_t));
        AVPacket* pkt = av_packet_alloc();

        int ret = avcodec_send_frame(out_codec_ctx, frame);
        if (ret < 0)
        {
            fprintf(stderr, "Error sending frame to codec context!\n");
            return ret;
        }
        while (ret >= 0) {
            ret = avcodec_receive_packet(out_codec_ctx, pkt);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                break;
            else if (ret < 0) {
                fprintf(stderr, "Error during encoding\n");
                break;
            }
            pkt->dts = dts;
            pkt->pts = dts;
            dts += 33;
            av_write_frame(format_ctx, pkt);
            frameNo++;
            av_packet_unref(pkt);
        }
        printf("Streamed %d frames\n", frameNo);
    }
    return 0;
}



And here is the part of the server that should receive; this is the code where it stops and waits:


extern "C" {
#include <libavcodec></libavcodec>avcodec.h>
#include <libavformat></libavformat>avformat.h>
#include <libavformat></libavformat>avio.h>
}

int main() {
 AVFormatContext* fmt_ctx = NULL;
 av_log_set_level(AV_LOG_VERBOSE);
 AVDictionary* options = nullptr;
 av_dict_set(&options, "protocol_whitelist", "file,udp,rtp,tcp,rtmp,rtsp,hls", 0);
 av_dict_set(&options, "timeout", "500000", 0); // Timeout in microseconds 

//Next Line hangs 
 int ret = avformat_open_input(&fmt_ctx, "rtmp://localhost:9999/stream/stream1", NULL, &options);
 if (ret != 0) {
 fprintf(stderr, "Could not open RTMP stream\n");
 return -1;
 }

 // Find the first video stream
 ret = avformat_find_stream_info(fmt_ctx, nullptr);
 if (ret < 0) {
 return ret;
 }
 //...
} 




Edit:
I tried to just create an animated PNG and stream it from one console window to another, to rule out any programming mistakes on my side. The result was the same: I could not get the 16-bit PNG-encoded stream to work. The receiver hung waiting for data and closed when the file ended, with zero frames received in total.


I managed to get something else working:
To avoid encoding gray frames as YUV420, I installed ffmpeg with libx264 support (I had thought it was the same as H264, which in the code it is, but it adds support for more pixel formats). I then used H264 again, but with GRAY8 and a doubled image width, reconstructing the 16-bit image on the other side (a minimal sketch of this byte-packing follows below).
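
A minimal numpy sketch of that byte-packing (hypothetical helper names; it assumes row-major, contiguous uint16 frames, and it only round-trips if the video path itself is lossless, e.g. libx264 with -qp 0):

import numpy as np

def pack_gray16_to_gray8(depth16):
    # View each 16-bit sample as two bytes: a (h, w) uint16 frame
    # becomes a (h, 2*w) uint8 frame that a GRAY8 pipeline accepts.
    h, w = depth16.shape
    return depth16.view(np.uint8).reshape(h, w * 2)

def unpack_gray8_to_gray16(packed8):
    # Inverse: reinterpret byte pairs as uint16 samples again.
    h, w2 = packed8.shape
    return packed8.view(np.uint16).reshape(h, w2 // 2)

frame16 = np.arange(720 * 1280, dtype=np.uint16).reshape(720, 1280)
assert np.array_equal(frame16, unpack_gray8_to_gray16(pack_gray16_to_gray8(frame16)))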


Maybe as a side note: I could not get any other formats to work. Is "flv" the only option here? Could I get more performance if I changed it to... what?


-
Why aren't the videos merging?
25 December 2023, by user9987656. I have 100 videos (total duration of 10 hours) from one author, and I'm trying to merge them into one large video, but I'm encountering an issue. ffmpeg gives me several errors with the following message:


"mp4 @ 000002067f56ecc0] Non-monotonic DTS in output stream 0:0 ; previous : 968719404, current : 434585687 ; changing to 968719405. This may result in incorrect timestamps in the output file."


As a result, I end up with a 10-hour video, but I can only view the first 3 hours and the last 2 hours.


What could be causing this problem? I'm using the stream copy command:


-f concat -safe 0 -i input.txt -c copy
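
Stream copy keeps the timestamps exactly as they appear in the input files, so if the sources have inconsistent timebases or overlapping timestamps, the concatenated output inherits the problem. One commonly suggested workaround (a sketch only; whether it is needed depends on the actual input files) is to re-encode instead of copying, for example:

-f concat -safe 0 -i input.txt -c:v libx264 -c:a aac output.mp4

which lets ffmpeg generate fresh, monotonic timestamps at the cost of a re-encode.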