
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (110)
-
Automatic installation script for MediaSPIP
25 April 2011, by
To work around the installation difficulties caused mainly by server-side software dependencies, an all-in-one bash installation script was created to make this step easier on a server running a compatible Linux distribution.
You need SSH access to your server and a "root" account in order to use it, so that the dependencies can be installed. Contact your hosting provider if you do not have these.
The documentation on using the installation script (...) -
Adding user-specific information and other author-related behaviour changes
12 April 2011, by
The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to change certain user-related behaviours (see its documentation for more information).
Fields can also be added to authors by installing the plugins "champs extras 2" and "Interface pour champs extras". -
What exactly does this script do?
18 January 2011, by
This script is written in bash, so it is easy to run on any server; note, however, that it is only compatible with a specific list of distributions (see the list of compatible distributions).
Installing MediaSPIP's dependencies
Its main role is to install all of the software dependencies required on the server side, namely:
the basic tools needed to install the remaining dependencies; the development tools: build-essential (via APT from the official repositories); (...)
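As a rough illustration of the dependency step this excerpt mentions (a hypothetical one-liner assuming a Debian-based distribution with APT; the actual script installs far more than this):

# As root: install the build toolchain from the official repositories
apt-get update && apt-get install -y build-essential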
On other sites (4364)
-
How to feed frames one by one and grab decoded frames using LibAV?
15 September 2024, by Alvan Rahimli
I am integrating a third-party system where I send a TCP packet to request, e.g., 5 frames from a live CCTV camera stream, and it sends those 5 frames one by one. Each packet is an H.264-encoded frame, wrapped with some related data.


I need to write code using libav that can:


- Feed the frames one by one using an AVIOContext (or something similar).
- Grab the decoded frames.
- Draw them to a window (not relevant to the question; included for context).


I've been doing the same with GStreamer, using a pipeline like this:


AppSrc -> H264Parser -> H264Decoder -> FrameGrabber



The code below is what I have been able to write so far:


using System.Runtime.InteropServices;
using FFmpeg.AutoGen;

namespace AvIoCtxExperiment;

public static unsafe class Program
{
    public static void Main()
    {
        ffmpeg.avdevice_register_all();
        Console.WriteLine($"FFmpeg v: {ffmpeg.av_version_info()}");

        // Generous buffer size for I-frames (~43KB)
        const int bufferSize = 50 * 1024;

        var buff = (byte*)ffmpeg.av_malloc(bufferSize);
        if (buff == null)
            throw new Exception("Buffer is null");

        // A mock frame provider. Frames are stored in separate files and this reads & returns them one by one.
        var frameProvider = new FrameProvider(@"D:\Frames-1", 700);
        var gch = GCHandle.Alloc(frameProvider);

        var avioCtx = ffmpeg.avio_alloc_context(
            buffer: buff,
            buffer_size: bufferSize,
            write_flag: 0,
            opaque: (void*)GCHandle.ToIntPtr(gch),
            read_packet: new avio_alloc_context_read_packet(ReadFunc),
            write_packet: null,
            seek: null);

        var formatContext = ffmpeg.avformat_alloc_context();
        formatContext->pb = avioCtx;
        formatContext->flags |= ffmpeg.AVFMT_FLAG_CUSTOM_IO;

        var openResult = ffmpeg.avformat_open_input(&formatContext, null, null, null);
        if (openResult < 0)
            throw new Exception("Open Input Failed");

        if (ffmpeg.avformat_find_stream_info(formatContext, null) < 0)
            throw new Exception("Find StreamInfo Failed");

        AVPacket packet;
        while (ffmpeg.av_read_frame(formatContext, &packet) >= 0)
        {
            Console.WriteLine($"GRAB: {packet.buf->size}");
            ffmpeg.av_packet_unref(&packet);
        }
    }

    // Called by libav whenever it wants more input; must return at most bufSize bytes.
    private static int ReadFunc(void* opaque, byte* buf, int bufSize)
    {
        var frameProvider = (FrameProvider?)GCHandle.FromIntPtr((IntPtr)opaque).Target;
        if (frameProvider == null)
            return ffmpeg.AVERROR_EOF;

        var fBuff = frameProvider.NextFrame();
        if (fBuff == null || fBuff.Length == 0)
            return ffmpeg.AVERROR_EOF;

        // Never copy more than libav's buffer can hold; anything beyond
        // bufSize would have to be carried over to the next ReadFunc call.
        int bytesRead = Math.Min(fBuff.Length, bufSize);
        Marshal.Copy(fBuff, 0, (IntPtr)buf, bytesRead);
        Console.WriteLine($"READ size: {bytesRead}");
        return bytesRead;
    }
}



The second thing that confuses me is that with AppSrc I can feed a frame of any size, while libav asks for a fixed buffer size. With AppSrc I am responsible for pushing frames into the pipeline, and GStreamer tells me via signals when it has had enough; with libav, it is the library that calls the read_packet delegate whenever it wants data.
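Since each incoming packet is already one complete encoded frame, one direction I am considering is to drop AVFormatContext/AVIOContext entirely and push each buffer straight into the decoder with avcodec_send_packet / avcodec_receive_frame. This is an untested sketch: it assumes each buffer from FrameProvider (the same mock class as above) is a complete Annex-B H.264 access unit, with SPS/PPS appearing before the first IDR frame:

using FFmpeg.AutoGen;

public static unsafe class DecodeLoopSketch
{
    public static void Run(FrameProvider frameProvider)
    {
        // One decoder instance, fed packet by packet; no demuxer involved.
        var codec = ffmpeg.avcodec_find_decoder(AVCodecID.AV_CODEC_ID_H264);
        var ctx = ffmpeg.avcodec_alloc_context3(codec);
        if (ffmpeg.avcodec_open2(ctx, codec, null) < 0)
            throw new Exception("Could not open H.264 decoder");

        var packet = ffmpeg.av_packet_alloc();
        var frame = ffmpeg.av_frame_alloc();

        byte[]? data;
        while ((data = frameProvider.NextFrame()) != null)
        {
            fixed (byte* p = data)
            {
                packet->data = p;
                packet->size = data.Length;

                // Push one encoded frame in (the push model, like AppSrc).
                if (ffmpeg.avcodec_send_packet(ctx, packet) < 0)
                    continue;
            }

            // Drain every picture the decoder has ready (usually one per send).
            while (ffmpeg.avcodec_receive_frame(ctx, frame) == 0)
            {
                Console.WriteLine($"DECODED: {frame->width}x{frame->height} fmt={frame->format}");
                ffmpeg.av_frame_unref(frame);
            }
        }

        // Flush the decoder, then free everything.
        ffmpeg.avcodec_send_packet(ctx, null);
        while (ffmpeg.avcodec_receive_frame(ctx, frame) == 0)
            ffmpeg.av_frame_unref(frame);

        ffmpeg.av_frame_free(&frame);
        ffmpeg.av_packet_free(&packet);
        ffmpeg.avcodec_free_context(&ctx);
    }
}

This would mirror the AppSrc push model: the application decides when to feed a packet, and the inner loop drains whatever pictures the decoder has ready.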


Any help is much appreciated. Thanks.


P.S. I'm writing a C# app, but sample code in C is fine; I can adapt it myself.


-
Overlaying one video on another one, and making black pixels transparent
25 January 2023, by Michael A
I'm trying to use FFmpeg to create a video with one video overlaid on top of another.



I have 2 MP4s. I need to make all black pixels in the overlay video transparent so that I can see the main video underneath it.



I found two ways to overlay one video on another:



First, the following positions the overlay in the center and therefore hides that portion of the main video beneath it:



ffmpeg -i 1.mp4 -vf "movie=2.mp4 [a]; [in][a] overlay=352:0 [b]" combined.mp4 -y




And this one places the overlay video on the left, but its opacity is set to 50% so that at least the video beneath it remains visible:



ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v]setpts=PTS-STARTPTS[top]; [1:v]setpts=PTS-STARTPTS, format=yuva420p,colorchannelmixer=aa=0.5[bottom]; [top][bottom]overlay=shortest=0" -acodec libvo_aacenc -vcodec libx264 out.mp4 -y




My goal is simply to make all black pixels in the overlay (2.mp4) completely transparent. How can this be done?
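One lead I have found but not yet verified is FFmpeg's colorkey filter, which makes pixels close to a given colour transparent before the overlay is applied. A sketch, where the 0.1 similarity and blend values are guesses that would need tuning:

ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[1:v]colorkey=black:0.1:0.1[keyed];[0:v][keyed]overlay=shortest=1[out]" -map "[out]" -c:v libx264 combined.mp4 -y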