
Media (1)
-
Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
Other articles (110)
-
Automatic installation script for MediaSPIP
25 April 2011
To work around installation difficulties, due mainly to server-side software dependencies, an all-in-one bash installation script was created to make this step easier on a server running a compatible Linux distribution.
To use it you need SSH access to your server and a "root" account, which will allow the dependencies to be installed. Contact your hosting provider if you do not have these.
The documentation on using the installation script (...) -
Adding user-specific information and other author-related behaviour changes
12 April 2011
The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to change certain user-related behaviours (see its documentation for more information).
It is also possible to add fields to authors by installing the "champs extras 2" and "Interface pour champs extras" plugins. -
What exactly does this script do?
18 January 2011
This script is written in bash, so it is easy to use on almost any server.
It is only compatible with a specific list of distributions (see the list of compatible distributions).
Installing MediaSPIP's dependencies
Its main role is to install all the software dependencies required on the server side, namely:
the basic tools needed to install the remaining dependencies; the development tools: build-essential (via APT from the official repositories); (...)
On other sites (4141)
-
How to grab ffmpeg's output as binary and write it to a file on the fly such that video players can play it in real time?
29 December 2022, by Mister Mystère
I want to stream an RTSP-streaming device to a video player such as VLC, but the catch is that, in between, the binary data needs to go through a custom high-speed serial link. I control what goes into this link from a C++ program.


I was happily surprised to see that the following line allowed me to watch the RTSP stream by just opening "out.bin" in VLC, which was a good lead for fast and efficient binary transmission of the stream:


ffmpeg -i "rtsp://admin:password@X.X.X.X:554/h264Preview_01_main" -c:v copy -c:a copy -f mpegts out.bin



I had already wondered how ffmpeg manages to let VLC read that file while itself writing to it at the same time. It turns out I was right to wonder; see below.


I told myself I could make this command pipe its output to standard output, then read that output from my program (so that, later, I can slice it, transmit the chunks and reconstruct it) and finally write it to an output file. However, this does not work:


/* Headers were stripped by the page; these are the likely originals. */
#include <stdio.h>   /* printf, fopen, fread, fwrite, popen */
#include <stdlib.h>  /* abort */
#include <conio.h>   /* kbhit (Windows-specific) */

#define BUFSIZE 188 // MPEG-TS packet size

int main()
{
    char *cmd = (char *)"ffmpeg -i \"rtsp://admin:password@X.X.X.X:554/h264Preview_01_main\" -c:v copy -c:a copy -f mpegts pipe:1 -loglevel quiet";
    char buf[BUFSIZE];
    FILE *ptr, *file;

    file = fopen("./out.bin", "w");

    if (!file)
    {
        printf("Failed to open output file for writing, aborting");
        abort();
    }

    if ((ptr = popen(cmd, "r")) != NULL) {
        printf("Writing RTSP stream to file...");

        while (!kbhit())
        {
            if (fread(&buf, sizeof(char), BUFSIZE, ptr) != 0)
            {
                fwrite(buf, sizeof(char), BUFSIZE, file);
            }
            else
            {
                printf("No data\n");
            }
        }
        pclose(ptr);
    }
    else
    {
        printf("Failed to open pipe from ffmpeg command, aborting");
    }

    printf("End of program");

    fclose(file);
    return 0;
}



It does not work, since VLC says "your input can't be opened", whereas this works just fine:


ffmpeg -i "rtsp://admin:password@X.X.X.X:554/h264Preview_01_main" -c:v copy -c:a copy -f mpegts pipe:1 -loglevel quiet > out.bin



This is what ends up in the file after I close the program, versus the result of the command immediately above:

[screenshot comparing the two output files]

The file is always 2 kB regardless of how long I run the program, and "No data" is shown repeatedly in the console output.


Why doesn't it work? If it is not just a bug, how can I grab the stream as binary at some point and write it at the end to a file that VLC can read?


Update


New code after applying Craig Estey's fix to my stupid mistake (fwrite was writing BUFSIZE bytes regardless of how many bytes fread had actually returned). The end result is that the MPEG-TS frames no longer seem to shift, but the file writing stops partway into one of the first few frames (the console only shows a few ">" symbols and then stays silent, cf. code).


/* Headers were stripped by the page; these are the likely originals. */
#include <stdio.h>   /* printf, fopen, fread, fwrite, popen */
#include <stdlib.h>  /* abort */
#include <conio.h>   /* kbhit (Windows-specific) */

#define BUFSIZE 188 // MPEG-TS packet size

int main()
{
    char *cmd = (char *)"ffmpeg -i \"rtsp://127.0.0.1:8554/test.sdp\" -c:v copy -c:a copy -f mpegts pipe:1 -loglevel quiet";
    char buf[BUFSIZE];
    FILE *ptr, *file;

    file = fopen("./out.ts", "w");

    if (!file) {
        printf("Failed to open output file for writing, aborting");
        abort();
    }

    if ((ptr = popen(cmd, "r")) != NULL) {
        printf("Writing RTSP stream to file...");

        while (!kbhit()) {
            ssize_t rlen = fread(&buf, sizeof(char), BUFSIZE, ptr);
            if (rlen != 0)
            {
                printf(">");
                fwrite(buf, sizeof(char), rlen, file);
                fflush(file);
            }
        }
        pclose(ptr);
    }
    else {
        printf("Failed to open pipe from ffmpeg command, aborting");
    }

    printf("End of program");

    fclose(file);
    return 0;
}
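
One more thing worth checking, since kbhit() suggests this is built on Windows: both _popen and fopen default to text mode there, where newline translation and the treatment of byte 0x1A as end-of-file will corrupt or truncate a binary MPEG-TS stream. Below is a minimal sketch of the same loop with both streams opened in binary mode and an explicit end-of-pipe check; this is a plausible cause, not a confirmed fix.

/* Sketch: same read loop, but with both streams opened in binary mode.
 * On Windows, text mode ("r"/"w") translates line endings and stops
 * reading at the first 0x1A byte, which would explain an early stall. */
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>

#define BUFSIZE 188 /* MPEG-TS packet size */

int main(void)
{
    const char *cmd = "ffmpeg -i \"rtsp://127.0.0.1:8554/test.sdp\" "
                      "-c:v copy -c:a copy -f mpegts pipe:1 -loglevel quiet";
    char buf[BUFSIZE];

    FILE *file = fopen("./out.ts", "wb");   /* "b": no newline translation */
    if (!file) {
        printf("Failed to open output file for writing, aborting");
        abort();
    }

    FILE *ptr = _popen(cmd, "rb");          /* binary-mode pipe on Windows */
    if (!ptr) {
        printf("Failed to open pipe from ffmpeg command, aborting");
        fclose(file);
        return 1;
    }

    while (!kbhit()) {
        size_t rlen = fread(buf, 1, BUFSIZE, ptr);
        if (rlen > 0) {
            fwrite(buf, 1, rlen, file);
            fflush(file);
        } else if (feof(ptr) || ferror(ptr)) {
            /* distinguish "no data yet" from a closed or broken pipe */
            printf("Pipe closed or broken, stopping\n");
            break;
        }
    }

    _pclose(ptr);
    fclose(file);
    return 0;
}

If the stall persists, the feof()/ferror() check at least distinguishes an empty read from a pipe that has quietly died.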



This can be tested on any computer with VLC and a webcam: open VLC, open capture device, capture mode DirectShow, (switch "play" for "stream"), next, display locally, select RTSP, Add, path=/test.sdp, next, transcoding=H264+MP3 (TS), replace rtsp://:8554/ with rtsp://127.0.0.1:8554/ in the generated command line, stream.


To test that streaming is OK, you can just open a command terminal and enter "ffmpeg -i "rtsp://127.0.0.1:8554/test.sdp" -c:v copy -c:a copy -f mpegts pipe:1 -loglevel quiet"; the terminal should fill up with binary data.


To test the program, just compile, run, and open out.ts after the program has run.


-
How to feed frames one by one and grab decoded frames using LibAV?
15 September 2024, by Alvan Rahimli
I am integrating a third-party system where I send a TCP packet to request e.g. 5 frames from a live CCTV camera stream, and it sends those 5 frames one by one. Each packet is an h264-encoded frame, wrapped with some related data.


I need to write code using Libav with which I can:


- Feed the frames one by one using AVIOContext (or something similar).
- Grab the decoded frames.
- Draw them to a window (not relevant to the question, included for context).

I've been doing the same with GStreamer by creating a pipeline like this :


AppSrc -> H264Parser -> H264Decoder -> FrameGrabber



The code below is what I was able to write so far:


using System.Runtime.InteropServices;
using FFmpeg.AutoGen;

namespace AvIoCtxExperiment;

public static unsafe class Program
{
    public static void Main()
    {
        ffmpeg.avdevice_register_all();
        Console.WriteLine($"FFmpeg v: {ffmpeg.av_version_info()}");

        // Generous buffer size for I frames (~43KB)
        const int bufferSize = 50 * 1024;

        var buff = (byte*)ffmpeg.av_malloc(bufferSize);
        if (buff == null)
            throw new Exception("Buffer is null");

        // A mock frame provider. Frames are stored in separate files and this reads & returns them one-by-one.
        var frameProvider = new FrameProvider(@"D:\Frames-1", 700);
        var gch = GCHandle.Alloc(frameProvider);

        var avioCtx = ffmpeg.avio_alloc_context(
            buffer: buff,
            buffer_size: bufferSize,
            write_flag: 0,
            opaque: (void*)GCHandle.ToIntPtr(gch),
            read_packet: new avio_alloc_context_read_packet(ReadFunc),
            write_packet: null,
            seek: null);

        var formatContext = ffmpeg.avformat_alloc_context();
        formatContext->pb = avioCtx;
        formatContext->flags |= ffmpeg.AVFMT_FLAG_CUSTOM_IO;

        var openResult = ffmpeg.avformat_open_input(&formatContext, null, null, null);
        if (openResult < 0)
            throw new Exception("Open Input Failed");

        if (ffmpeg.avformat_find_stream_info(formatContext, null) < 0)
            throw new Exception("Find StreamInfo Failed");

        AVPacket packet;
        while (ffmpeg.av_read_frame(formatContext, &packet) >= 0)
        {
            Console.WriteLine($"GRAB: {packet.buf->size}");
            ffmpeg.av_packet_unref(&packet);
        }
    }

    private static int ReadFunc(void* opaque, byte* buf, int bufSize)
    {
        var frameProvider = (FrameProvider?)GCHandle.FromIntPtr((IntPtr)opaque).Target;
        if (frameProvider == null)
        {
            return 0;
        }

        byte[] managedBuffer = new byte[bufSize];

        var fBuff = frameProvider.NextFrame();
        if (fBuff == null)
        {
            return ffmpeg.AVERROR_EOF;
        }

        int bytesRead = fBuff.Length;
        fBuff.CopyTo(managedBuffer, 0);

        if (bytesRead > 0)
        {
            Marshal.Copy(managedBuffer, 0, (IntPtr)buf, bytesRead);
            Console.WriteLine($"READ size: {fBuff.Length}");
            return bytesRead;
        }

        return ffmpeg.AVERROR_EOF;
    }
}
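
For the "grab the decoded frames" step, which the code above does not reach yet, a rough C sketch of the usual libav sequence (pick the video stream, open its decoder, then pump packets through avcodec_send_packet / avcodec_receive_frame) could look like the following. It assumes fmt_ctx is the AVFormatContext opened over the custom AVIOContext above, and it trims error handling for brevity.

/* Sketch: decode loop in C, assuming fmt_ctx is the AVFormatContext
 * opened over the custom AVIOContext above. Error handling trimmed. */
#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

static int decode_all(AVFormatContext *fmt_ctx)
{
    int vidx = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (vidx < 0)
        return vidx;

    const AVCodec *codec =
        avcodec_find_decoder(fmt_ctx->streams[vidx]->codecpar->codec_id);
    AVCodecContext *dec = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(dec, fmt_ctx->streams[vidx]->codecpar);
    avcodec_open2(dec, codec, NULL);

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();

    while (av_read_frame(fmt_ctx, pkt) >= 0) {
        if (pkt->stream_index == vidx) {
            avcodec_send_packet(dec, pkt);
            /* one packet may yield zero or more decoded frames */
            while (avcodec_receive_frame(dec, frame) == 0) {
                printf("decoded frame %dx%d\n", frame->width, frame->height);
                /* hand `frame` to the window/renderer here */
            }
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&dec);
    return 0;
}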



The second thing that confuses me: with AppSrc I can feed a frame of any size, but libav asks for a buffer size up front. With AppSrc I am responsible for feeding frames into the pipeline, and GStreamer notifies me with signals when it has had enough; libav instead calls the read_packet delegate itself.
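
As for the buffer-size question: the read_packet callback is only a byte tap, so it does not have to return a whole frame per call. One common pattern, sketched in C below, is to keep a small cursor over the current frame and hand libav at most buf_size bytes per call, fetching the next frame only once the previous one is fully drained. The next_frame() provider here is hypothetical, standing in for the TCP frame source.

/* Sketch: a read_packet callback that serves frames of arbitrary size
 * through a fixed-size AVIO buffer. next_frame() is a hypothetical
 * stand-in for the TCP frame source; it returns NULL at end of stream. */
#include <stdint.h>
#include <string.h>
#include <libavformat/avformat.h>

struct frame_cursor {
    const uint8_t *data;   /* current frame bytes */
    size_t         size;   /* total size of current frame */
    size_t         pos;    /* how much of it libav has consumed */
};

extern const uint8_t *next_frame(size_t *size);   /* hypothetical provider */

static int read_packet(void *opaque, uint8_t *buf, int buf_size)
{
    struct frame_cursor *cur = opaque;

    if (cur->pos >= cur->size) {              /* current frame drained */
        cur->data = next_frame(&cur->size);
        cur->pos  = 0;
        if (!cur->data)
            return AVERROR_EOF;
    }

    size_t remaining = cur->size - cur->pos;
    int n = buf_size < (int)remaining ? buf_size : (int)remaining;
    memcpy(buf, cur->data + cur->pos, n);
    cur->pos += n;
    return n;                                  /* may be < buf_size */
}

With this shape, the buffer size passed to avio_alloc_context only bounds how much libav asks for per call, not how large a frame may be.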


Any help is much appreciated. Thanks.


P.S. I'm writing a C# app, but sample code in C is fine; I can adapt the code myself.

