
Other articles (38)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and audio both to conventional computers (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
From upload to final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behavior: retrieval of the technical information about the file's audio and video streams, and generation of a thumbnail: extraction of a (...)
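The two extra actions described above (probing the source file's audio/video stream information and extracting a thumbnail frame) can be sketched with ffmpeg's command-line tools. This is only an illustration of the workflow; SPIPMotion's actual implementation lives inside SPIP, and the file names and option choices here are hypothetical.

```python
import subprocess

def probe_cmd(src):
    """ffprobe command reporting type, codec, and size for each stream."""
    return ['ffprobe', '-v', 'error',
            '-show_entries', 'stream=codec_type,codec_name,width,height',
            '-of', 'csv=p=0', src]

def thumbnail_cmd(src, out_jpg, at_seconds=1):
    """ffmpeg command extracting a single frame as a thumbnail image."""
    return ['ffmpeg', '-y', '-ss', str(at_seconds), '-i', src,
            '-frames:v', '1', out_jpg]

# With an actual source file present, the two steps would run as:
# subprocess.run(probe_cmd('source.mp4'), capture_output=True, text=True)
# subprocess.run(thumbnail_cmd('source.mp4', 'thumb.jpg'), check=True)
```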
On other sites (4374)
-
How can I fix CalledProcessError in ffmpeg?
7 December 2020, by Mario
I hope someone can help troubleshoot this problem. I'm trying to save the loss plots from Keras in the form of the following animation.
but I have been facing the following error, and ultimately I can't save the animation:



MovieWriter stderr:
[h264_v4l2m2m @ 0x55a67176f430] Could not find a valid device
[h264_v4l2m2m @ 0x55a67176f430] can't configure encoder
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

---------------------------------------------------------------------------
BrokenPipeError Traceback (most recent call last)
~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in saving(self, fig, outfile, dpi, *args, **kwargs)
 229 try:
--> 230 yield self
 231 finally:

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in save(self, filename, writer, fps, dpi, codec, bitrate, extra_args, metadata, extra_anim, savefig_kwargs, progress_callback)
 1155 frame_number += 1
-> 1156 writer.grab_frame(**savefig_kwargs)
 1157 

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in grab_frame(self, **savefig_kwargs)
 383 self.fig.savefig(self._frame_sink(), format=self.frame_format,
--> 384 dpi=self.dpi, **savefig_kwargs)
 385 

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/figure.py in savefig(self, fname, transparent, **kwargs)
 2179 
-> 2180 self.canvas.print_figure(fname, **kwargs)
 2181 

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/backend_bases.py in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, **kwargs)
 2081 bbox_inches_restore=_bbox_inches_restore,
-> 2082 **kwargs)
 2083 finally:

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/backends/backend_agg.py in print_raw(self, filename_or_obj, *args, **kwargs)
 445 cbook.open_file_cm(filename_or_obj, "wb") as fh:
--> 446 fh.write(renderer._renderer.buffer_rgba())
 447 

BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

CalledProcessError Traceback (most recent call last)
 in <module>
 17 print(f'{model_type.upper()} Train Time: {Timer} sec')
 18 
---> 19 create_loss_animation(model_type, hist.history['loss'], hist.history['val_loss'], epoch)
 20 
 21 evaluate(model, trainX, trainY, testX, testY, scores_train, scores_test)

 in create_loss_animation(model_type, loss_hist, val_loss_hist, epoch)
 34 
 35 ani = matplotlib.animation.FuncAnimation(fig, animate, fargs=(l1, l2, loss, val_loss, title), repeat=True, interval=1000, repeat_delay=1000)
---> 36 ani.save(f'loss_animation_{model_type}_oneDataset.mp4', writer=writer)

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in save(self, filename, writer, fps, dpi, codec, bitrate, extra_args, metadata, extra_anim, savefig_kwargs, progress_callback)
 1154 progress_callback(frame_number, total_frames)
 1155 frame_number += 1
-> 1156 writer.grab_frame(**savefig_kwargs)
 1157 
 1158 # Reconnect signal for first draw if necessary

~/anaconda3/envs/CR7/lib/python3.6/contextlib.py in __exit__(self, type, value, traceback)
 97 value = type()
 98 try:
---> 99 self.gen.throw(type, value, traceback)
 100 except StopIteration as exc:
 101 # Suppress StopIteration *unless* it's the same exception that

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in saving(self, fig, outfile, dpi, *args, **kwargs)
 230 yield self
 231 finally:
--> 232 self.finish()
 233 
 234 

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in finish(self)
 365 def finish(self):
 366 '''Finish any processing for writing the movie.'''
--> 367 self.cleanup()
 368 
 369 def grab_frame(self, **savefig_kwargs):

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in cleanup(self)
 409 if self._proc.returncode:
 410 raise subprocess.CalledProcessError(
--> 411 self._proc.returncode, self._proc.args, out, err)
 412 
 413 @classmethod

CalledProcessError: Command '['/usr/bin/ffmpeg', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-s', '720x720', '-pix_fmt', 
'rgba', '-r', '5', '-loglevel', 'error', '-i', 'pipe:', '-vcodec', 'h264', '-pix_fmt', 'yuv420p', '-b', '800k', '-y', 
'loss_animation_CNN_oneDataset.mp4']' returned non-zero exit status 1.



I tried to ignore the error as suggested in this answer, but that doesn't seem to be the case here. I also checked a similar case, but its answer about getting a static git binary doesn't apply either, since I'm not just converting PNG images to MP4!



My code is as follows:



import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import animation

plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'

def animate(i, data1, data2, line1, line2):
 temp1 = data1.iloc[:int(i+1)]
 temp2 = data2.iloc[:int(i+1)]

 line1.set_data(temp1.index, temp1.value)
 line2.set_data(temp2.index, temp2.value)

 return (line1, line2)


def create_loss_animation(model_type, data1, data2):
 fig = plt.figure()
 plt.title(f'Loss on Train & Test', fontsize=25)
 plt.xlabel('Epoch', fontsize=20)
 plt.ylabel('Loss MSE for Sx-Sy & Sxy', fontsize=20)
 plt.xlim(min(data1.index.min(), data2.index.min()), max(data1.index.max(), data2.index.max()))
 plt.ylim(min(data1.value.min(), data2.value.min()), max(data1.value.max(), data2.value.max()))

 l1, = plt.plot([], [], 'o-', label='Train Loss', color='b', markevery=[-1])
 l2, = plt.plot([], [], 'o-', label='Test Loss', color='r', markevery=[-1])
 plt.legend(loc='center right', fontsize='xx-large')

 Writer = animation.writers['ffmpeg']
 writer = Writer(fps=5, bitrate=1800)

 ani = matplotlib.animation.FuncAnimation(fig, animate, fargs=(data1, data2, l1, l2), repeat=True, interval=1000, repeat_delay=1000)
 ani.save(f'{model_type}.mp4', writer=writer)

# create datasets
x = np.linspace(0,150,50)
y1 = 41*np.exp(-x/20)
y2 = 35*np.exp(-x/50)

my_data_number_1 = pd.DataFrame({'x':x, 'value':y1}).set_index('x')
my_data_number_2 = pd.DataFrame({'x':x, 'value':y2}).set_index('x')

create_loss_animation('test', my_data_number_1, my_data_number_2)
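
A hedged note on the error above: the MovieWriter stderr shows ffmpeg resolving `-vcodec h264` to the `h264_v4l2m2m` hardware encoder and failing to find a V4L2 device. One common workaround (an assumption here, not verified against this exact setup) is to request the software encoder `libx264` explicitly, which matplotlib's writer supports via its `codec` argument:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; no display needed
from matplotlib.animation import FFMpegWriter

# Passing codec='libx264' makes matplotlib emit '-vcodec libx264',
# bypassing the h264_v4l2m2m hardware encoder that the log shows failing.
writer = FFMpegWriter(fps=5, bitrate=1800, codec='libx264')
print(writer.codec)  # prints: libx264
```

The writer built this way can then be passed to `ani.save(...)` exactly as in the code above.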



-
ffmpeg in C# (ScrCpy)
23 January 2020, by RunicSheep
I'm trying to access the screen of my Android device the way ScrCpy does (https://github.com/Genymobile/scrcpy), but in C#.
What I've done so far is push the jar (server) to my device and receive the input (device resolution etc.).
But I can't reimplement the decoding process in C#; there must be an error somewhere in what I have so far.
The C# library used for ffmpeg is FFmpeg.AutoGen (https://github.com/Ruslan-B/FFmpeg.AutoGen). Here's the decoding code:
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Sockets;
using System.Runtime.InteropServices;
using System.Runtime.InteropServices.ComTypes;
using System.Threading;
using FFmpeg.AutoGen;

namespace Source.Android.Scrcpy
{
    public unsafe class Decoder
    {
        private const string LD_LIBRARY_PATH = "LD_LIBRARY_PATH";
        private AVFrame* _decodingFrame;
        private AVCodec* _codec;
        private AVCodecContext* _codec_ctx;
        private AVFormatContext* _format_ctx;

        public Decoder()
        {
            RegisterFFmpegBinaries();
            SetupLogging();
            this.InitFormatContext();
        }

        private void InitFormatContext()
        {
            _decodingFrame = ffmpeg.av_frame_alloc();
            _codec = ffmpeg.avcodec_find_decoder(AVCodecID.AV_CODEC_ID_H264);
            if (_codec == null)
            {
                throw new Exception("H.264 decoder not found"); // run_end
            }
            _codec_ctx = ffmpeg.avcodec_alloc_context3(_codec);
            if (_codec_ctx == null)
            {
                throw new Exception("Could not allocate decoder context"); // run_end
            }
            if (ffmpeg.avcodec_open2(_codec_ctx, _codec, null) < 0)
            {
                throw new Exception("Could not open H.264 codec"); // run_finally_free_codec_ctx
            }
            _format_ctx = ffmpeg.avformat_alloc_context();
            if (_format_ctx == null)
            {
                throw new Exception("Could not allocate format context"); // run_finally_close_codec
            }
        }

        private void RegisterFFmpegBinaries()
        {
            switch (Environment.OSVersion.Platform)
            {
                case PlatformID.Win32NT:
                case PlatformID.Win32S:
                case PlatformID.Win32Windows:
                    var current = Environment.CurrentDirectory;
                    var probe = Path.Combine("FFmpeg", Environment.Is64BitProcess ? "x64" : "x86");
                    while (current != null)
                    {
                        var ffmpegDirectory = Path.Combine(current, probe);
                        if (Directory.Exists(ffmpegDirectory))
                        {
                            Console.WriteLine($"FFmpeg binaries found in: {ffmpegDirectory}");
                            RegisterLibrariesSearchPath(ffmpegDirectory);
                            return;
                        }
                        current = Directory.GetParent(current)?.FullName;
                    }
                    break;
                case PlatformID.Unix:
                case PlatformID.MacOSX:
                    var libraryPath = Environment.GetEnvironmentVariable(LD_LIBRARY_PATH);
                    RegisterLibrariesSearchPath(libraryPath);
                    break;
            }
        }

        private static void RegisterLibrariesSearchPath(string path)
        {
            switch (Environment.OSVersion.Platform)
            {
                case PlatformID.Win32NT:
                case PlatformID.Win32S:
                case PlatformID.Win32Windows:
                    SetDllDirectory(path);
                    break;
                case PlatformID.Unix:
                case PlatformID.MacOSX:
                    string currentValue = Environment.GetEnvironmentVariable(LD_LIBRARY_PATH);
                    if (string.IsNullOrWhiteSpace(currentValue) == false && currentValue.Contains(path) == false)
                    {
                        string newValue = currentValue + Path.PathSeparator + path;
                        Environment.SetEnvironmentVariable(LD_LIBRARY_PATH, newValue);
                    }
                    break;
            }
        }

        [DllImport("kernel32", SetLastError = true)]
        private static extern bool SetDllDirectory(string lpPathName);

        private AVPacket GetPacket()
        {
            var packet = ffmpeg.av_packet_alloc();
            ffmpeg.av_init_packet(packet);
            packet->data = null;
            packet->size = 0;
            return *packet;
        }

        private static int read_raw_packet(void* opaque, ushort* buffer, int bufSize)
        {
            var buffSize = 1024;
            var remaining = dt.Length - dtp - 1;
            var written = 0;
            for (var i = 0; i < buffSize && i + dtp < dt.Length; i++)
            {
                buffer[i] = dt[i + dtp];
                written++;
            }
            dtp += written;
            if (written <= 0)
            {
                return ffmpeg.AVERROR_EOF;
            }
            return written;
        }

        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        public delegate int av_read_function_callback(void* opaque, ushort* endData, int bufSize);

        private static byte[] dt;
        private static int dtp;

        public Bitmap DecodeScrCpy(byte[] data)
        {
            if (data.Length == 0)
            {
                return null;
            }
            byte* _buffer;
            ulong _bufferSize = 1024 * 2;
            _buffer = (byte*)ffmpeg.av_malloc(_bufferSize);
            if (_buffer == null)
            {
                throw new Exception("Could not allocate buffer"); // run_finally_free_format_ctx
            }
            fixed (byte* dataPtr = data)
            {
                dt = data;
                dtp = 0;
                fixed (AVFormatContext** formatCtxPtr = &_format_ctx)
                {
                    var mReadCallbackFunc = new av_read_function_callback(read_raw_packet);
                    AVIOContext* avio_ctx = ffmpeg.avio_alloc_context(_buffer, (int)_bufferSize, 0, null, // TODO: use IntPtr.Zero?
                        new avio_alloc_context_read_packet_func { Pointer = Marshal.GetFunctionPointerForDelegate(mReadCallbackFunc) },
                        null, null);
                    if (avio_ctx == null)
                    {
                        ffmpeg.av_free(dataPtr);
                        throw new Exception("Could not allocate avio context"); // goto run_finally_free_format_ctx
                    }
                    _format_ctx->pb = avio_ctx;
                    if (ffmpeg.avformat_open_input(formatCtxPtr, null, null, null) < 0)
                    {
                        throw new Exception("Could not open video stream"); // goto run_finally_free_avio_ctx
                    }
                    var packet = GetPacket();
                    while (ffmpeg.av_read_frame(_format_ctx, &packet) == 0)
                    {
                        if (ffmpeg.LIBAVDEVICE_VERSION_INT >= ffmpeg.AV_VERSION_INT(57, 37, 0))
                        {
                            int ret;
                            if ((ret = ffmpeg.avcodec_send_packet(_codec_ctx, &packet)) < 0)
                            {
                                throw new Exception($"Could not send video packet: {ret}"); // goto run_quit
                            }
                            ret = ffmpeg.avcodec_receive_frame(_codec_ctx, _decodingFrame);
                            if (ret == 0)
                            {
                                // a frame was received
                            }
                            else if (ret != ffmpeg.AVERROR(ffmpeg.EAGAIN))
                            {
                                ffmpeg.av_packet_unref(&packet);
                                throw new Exception($"Could not receive video frame: {ret}"); // goto run_quit
                            }
                        }
                    }
                }
            }
            return null;
        }
    }
}

The entry point is DecodeScrCpy, called with data read from the network stream.
Things I noticed:
- read_raw_packet is called again after ffmpeg.AVERROR_EOF is returned
- ffmpeg.avformat_open_input fails

The question is: did I miss anything?
-
Anomalie #4360 (New): Error message at the start of a SPIP website installation via spip...
13 July 2019, by Vincent ROBERT
On XAMPP with Apache 2.4.39 (Win64) & PHP 7.2.19.
When starting a new installation with spiploader, I get the following error message: "Warning: Use of undefined constant _DIR_TMP - assumed '_DIR_TMP' (this will throw an Error in a future version of PHP) in C:\xampp\htdocs\news-test\pclzip.php on line 28"
This message appears just before the installation begins, on the page "Téléchargement de SPIP - Le programme va télécharger les fichiers de SPIP à l'intérieur de ce répertoire" ("Downloading SPIP - The program will download the SPIP files into this directory").