
Other articles (38)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First of all, a SPIP article has to be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (4374)

  • Play video using MSE (Media Source Extensions) in Google Chrome

    23 August 2019, by liyuqihxc

    I’m working on a project that converts an rtsp stream (ffmpeg) and plays it on a web page (signalr + mse).

    So far it works pretty much as I expected on the latest versions of Edge and Firefox, but not Chrome.

    Here’s the code:

    public class WebmMediaStreamContext
    {
       private Process _ffProcess;
       private readonly string _cmd;
       private byte[] _initSegment;
       private Task _readMediaStreamTask;
       private CancellationTokenSource _cancellationTokenSource;

       private const string _CmdTemplate = "-i {0} -c:v libvpx -tile-columns 4 -frame-parallel 1 -keyint_min 90 -g 90 -f webm -dash 1 pipe:";

       public static readonly byte[] ClusterStart = { 0x1F, 0x43, 0xB6, 0x75, 0x01, 0x00, 0x00, 0x00 };

       public event EventHandler<ClusterReadyEventArgs> ClusterReadyEvent;

       public WebmMediaStreamContext(string rtspFeed)
       {
           _cmd = string.Format(_CmdTemplate, rtspFeed);
       }

       public async Task StartConverting()
       {
           if (_ffProcess != null)
               throw new InvalidOperationException();

           _ffProcess = new Process();
           _ffProcess.StartInfo = new ProcessStartInfo
           {
               FileName = "ffmpeg/ffmpeg.exe",
               Arguments = _cmd,
               UseShellExecute = false,
               CreateNoWindow = true,
               RedirectStandardOutput = true
           };
           _ffProcess.Start();

           _initSegment = await ParseInitSegmentAndStartReadMediaStream();
       }

       public byte[] GetInitSegment()
       {
           return _initSegment;
       }

       // Find the first cluster, and everything before it is the InitSegment
       private async Task ParseInitSegmentAndStartReadMediaStream()
       {
           Memory<byte> buffer = new byte[10 * 1024];
           int length = 0;
           while (length != buffer.Length)
           {
               length += await _ffProcess.StandardOutput.BaseStream.ReadAsync(buffer.Slice(length));
               int cluster = buffer.Span.IndexOf(ClusterStart);
               if (cluster >= 0)
               {
                   _cancellationTokenSource = new CancellationTokenSource();
                   _readMediaStreamTask = new Task(() => ReadMediaStreamProc(buffer.Slice(cluster, length - cluster).ToArray(), _cancellationTokenSource.Token), _cancellationTokenSource.Token, TaskCreationOptions.LongRunning);
                   _readMediaStreamTask.Start();
                   return buffer.Slice(0, cluster).ToArray();
               }
           }

           throw new InvalidOperationException();
       }

       private void ReadMoreBytes(Span<byte> buffer)
       {
           int size = buffer.Length;
           while (size > 0)
           {
               int len = _ffProcess.StandardOutput.BaseStream.Read(buffer.Slice(buffer.Length - size));
               size -= len;
           }
       }

       // Parse every single cluster and fire ClusterReadyEvent
       private void ReadMediaStreamProc(byte[] bytesRead, CancellationToken cancel)
       {
           Span<byte> buffer = new byte[5 * 1024 * 1024];
           bytesRead.CopyTo(buffer);
           int bufferEmptyIndex = bytesRead.Length;

           do
           {
                if (bufferEmptyIndex < ClusterStart.Length + 4)
               {
                   ReadMoreBytes(buffer.Slice(bufferEmptyIndex, 1024));
                   bufferEmptyIndex += 1024;
               }

               int clusterDataSize = BitConverter.ToInt32(
                   buffer.Slice(ClusterStart.Length, 4)
                   .ToArray()
                   .Reverse()
                   .ToArray()
               );
               int clusterSize = ClusterStart.Length + 4 + clusterDataSize;
               if (clusterSize > buffer.Length)
               {
                   byte[] newBuffer = new byte[clusterSize];
                   buffer.Slice(0, bufferEmptyIndex).CopyTo(newBuffer);
                   buffer = newBuffer;
               }

                if (bufferEmptyIndex < clusterSize)
               {
                   ReadMoreBytes(buffer.Slice(bufferEmptyIndex, clusterSize - bufferEmptyIndex));
                   bufferEmptyIndex = clusterSize;
               }

               ClusterReadyEvent?.Invoke(this, new ClusterReadyEventArgs(buffer.Slice(0, bufferEmptyIndex).ToArray()));

               bufferEmptyIndex = 0;
           } while (!cancel.IsCancellationRequested);
       }
    }

    I use ffmpeg to convert the rtsp stream to a VP8 WebM byte stream, parse it into the "Init Segment" (EBML head, Info, Tracks, ...) and "Media Segments" (clusters), then send them to the browser via SignalR.
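
    For reference, the SignalR hub method the client subscribes to below is not shown here. A minimal sketch of what it could look like, assuming WebmMediaStreamContext is registered as a singleton and ClusterReadyEventArgs exposes a ClusterData byte array (both assumptions; only the InitVideoReceive name appears in the client code):

     using System;
     using System.Threading.Channels;
     using Microsoft.AspNetCore.SignalR;

     public class VideoHub : Hub // hypothetical name, not from the original post
     {
         private readonly WebmMediaStreamContext _context; // assumed injected via DI

         public VideoHub(WebmMediaStreamContext context) => _context = context;

         // ASP.NET Core SignalR streams a ChannelReader<T> to a subscribing client.
         public ChannelReader<string> InitVideoReceive()
         {
             var channel = Channel.CreateUnbounded<string>();

             // First the init segment (EBML head, Info, Tracks) ...
             channel.Writer.TryWrite(Convert.ToBase64String(_context.GetInitSegment()));

             // ... then every cluster as it is parsed out of the ffmpeg output.
             _context.ClusterReadyEvent += (s, e) =>
                 channel.Writer.TryWrite(Convert.ToBase64String(e.ClusterData));

             return channel.Reader;
         }
     }

    A real hub would also have to unsubscribe the handler and complete the channel when the client disconnects; the sketch only shows the shape of the stream.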

    $(function () {

       var mediaSource = new MediaSource();
       var mimeCodec = 'video/webm; codecs="vp8"';

       var video = document.getElementById('video');

       mediaSource.addEventListener('sourceopen', callback, false);
       function callback(e) {
           var sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
           var queue = [];

           sourceBuffer.addEventListener('updateend', function () {
               if (queue.length === 0) {
                   return;
               }

               var base64 = queue[0];
               if (base64.length === 0) {
                   mediaSource.endOfStream();
                   queue.shift();
                   return;
               } else {
                   var buffer = new Uint8Array(atob(base64).split("").map(function (c) {
                       return c.charCodeAt(0);
                   }));
                   sourceBuffer.appendBuffer(buffer);
                   queue.shift();
               }
           }, false);

           var connection = new signalR.HubConnectionBuilder()
               .withUrl("/signalr-video")
               .configureLogging(signalR.LogLevel.Information)
               .build();
           connection.start().then(function () {
               connection.stream("InitVideoReceive")
                   .subscribe({
                       next: function(item) {
                            if (queue.length === 0 && !sourceBuffer.updating) {
                               var buffer = new Uint8Array(atob(item).split("").map(function (c) {
                                   return c.charCodeAt(0);
                               }));
                               sourceBuffer.appendBuffer(buffer);
                               console.log(blockindex++ + " : " + buffer.byteLength);
                           } else {
                               queue.push(item);
                           }
                       },
                       complete: function () {
                           queue.push('');
                       },
                       error: function (err) {
                           console.error(err);
                       }
                   });
           });
       }
       video.src = window.URL.createObjectURL(mediaSource);
    })

    Chrome plays the video for only 3 to 5 seconds and then stalls to buffer, even though plenty of clusters have been transferred and appended to the SourceBuffer.

    Here’s the information from chrome://media-internals:

    Player Properties:

    render_id: 217
    player_id: 1
    origin_url: http://localhost:52531/
    frame_url: http://localhost:52531/
    frame_title: Home Page
    url: blob:http://localhost:52531/dcb25d89-9830-40a5-ba88-33c13b5c03eb
    info: Selected FFmpegVideoDecoder for video decoding, config: codec: vp8 format: 1 profile: vp8 coded size: [1280,720] visible rect: [0,0,1280,720] natural size: [1280,720] has extra data? false encryption scheme: Unencrypted rotation: 0°
    pipeline_state: kSuspended
    found_video_stream: true
    video_codec_name: vp8
    video_dds: false
    video_decoder: FFmpegVideoDecoder
    duration: unknown
    height: 720
    width: 1280
    video_buffering_state: BUFFERING_HAVE_NOTHING
    for_suspended_start: false
    pipeline_buffering_state: BUFFERING_HAVE_NOTHING
    event: PAUSE

    Log

    Timestamp       Property            Value
    00:00:00 00     origin_url          http://localhost:52531/
    00:00:00 00     frame_url           http://localhost:52531/
    00:00:00 00     frame_title         Home Page
    00:00:00 00     url                 blob:http://localhost:52531/dcb25d89-9830-40a5-ba88-33c13b5c03eb
    00:00:00 00     info                ChunkDemuxer: buffering by DTS
    00:00:00 35     pipeline_state      kStarting
    00:00:15 213    found_video_stream  true
    00:00:15 213    video_codec_name    vp8
    00:00:15 216    video_dds           false
    00:00:15 216    video_decoder       FFmpegVideoDecoder
    00:00:15 216    info                Selected FFmpegVideoDecoder for video decoding, config: codec: vp8 format: 1 profile: vp8 coded size: [1280,720] visible rect: [0,0,1280,720] natural size: [1280,720] has extra data? false encryption scheme: Unencrypted rotation: 0°
    00:00:15 216    pipeline_state      kPlaying
    00:00:15 213    duration            unknown
    00:00:16 661    height              720
    00:00:16 661    width               1280
    00:00:16 665    video_buffering_state       BUFFERING_HAVE_ENOUGH
    00:00:16 665    for_suspended_start         false
    00:00:16 665    pipeline_buffering_state    BUFFERING_HAVE_ENOUGH
    00:00:16 667    pipeline_state      kSuspending
    00:00:16 670    pipeline_state      kSuspended
    00:00:52 759    info                Effective playback rate changed from 0 to 1
    00:00:52 759    event               PLAY
    00:00:52 759    pipeline_state      kResuming
    00:00:52 760    video_dds           false
    00:00:52 760    video_decoder       FFmpegVideoDecoder
    00:00:52 760    info                Selected FFmpegVideoDecoder for video decoding, config: codec: vp8 format: 1 profile: vp8 coded size: [1280,720] visible rect: [0,0,1280,720] natural size: [1280,720] has extra data? false encryption scheme: Unencrypted rotation: 0°
    00:00:52 760    pipeline_state      kPlaying
    00:00:52 793    height              720
    00:00:52 793    width               1280
    00:00:52 798    video_buffering_state       BUFFERING_HAVE_ENOUGH
    00:00:52 798    for_suspended_start         false
    00:00:52 798    pipeline_buffering_state    BUFFERING_HAVE_ENOUGH
    00:00:56 278    video_buffering_state       BUFFERING_HAVE_NOTHING
    00:00:56 295    for_suspended_start         false
    00:00:56 295    pipeline_buffering_state    BUFFERING_HAVE_NOTHING
    00:01:20 717    event               PAUSE
    00:01:33 538    event               PLAY
    00:01:35 94     event               PAUSE
    00:01:55 561    pipeline_state      kSuspending
    00:01:55 563    pipeline_state      kSuspended

    Can someone tell me what’s wrong with my code, or does Chrome require some magic configuration to work?

    Thanks 

    Please excuse my English :)

  • Injecting Metadata into an .mp4 file

    21 August 2019, by TrueCP5

    I’m working on a library that injects metadata into a .mp4 file to allow the video to be displayed correctly as a 360 video. The input file is a standard .mp4 in the equirectangular format. I know what metadata needs to be injected; I just do not know how to inject it.

    I spent some time looking around for libraries that can do this but could only find ones for extracting metadata, not injecting/embedding/writing it. The alternative I found was to use Spatial Media as a command-line application to inject the metadata more easily. The problem is I know zero Python whatsoever, so I’m leaning towards a library/nuget package/ffmpeg script.

    Does a good nuget package/library exist that can do this, or should I go for the alternative option?

    Edit 1

    I have tried just pasting the metadata into the correct place in the file, just in case it might work, but it didn’t.

    Edit 2

    This is the metadata injected by Google’s Spatial Media Tool, which is what I am trying to achieve:

     <?xml version="1.0"?><rdf:SphericalVideo
     xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:GSpherical="http://ns.google.com/videos/1.0/spherical/"><GSpherical:Spherical>true</GSpherical:Spherical><GSpherical:Stitched>true</GSpherical:Stitched><GSpherical:StitchingSoftware>Spherical Metadata Tool</GSpherical:StitchingSoftware><GSpherical:ProjectionType>equirectangular</GSpherical:ProjectionType></rdf:SphericalVideo>

    Edit 3

    I’ve also tried to do it with ffmpeg, like so: ffmpeg -i input.mp4 -movflags use_metadata_tags -metadata Spherical=true -metadata Stitched=true -metadata ProjectionType=equirectangular -metadata StitchingSoftware=StreetviewJourney -codec copy output.mp4

    I think the issue with the ffmpeg method is that it does not produce the rdf:SphericalVideo wrapper that allows the spherical video tags to be used.
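
    For reference, in Google’s spatial-media spec that RDF/XML is not free-floating: the Spherical Video V1 metadata is carried in a uuid box appended inside the video trak box. A sketch of building such a box (the UUID is the spherical-video one from that spec; treat the details as assumptions to verify against it):

     using System.Buffers.Binary;
     using System.Text;

     static class SphericalBox
     {
         // An MP4 box is a 4-byte big-endian size followed by a 4-byte type;
         // for 'uuid' boxes a 16-byte UUID precedes the payload.
         public static byte[] Build(string rdfXml)
         {
             byte[] uuid =
             {
                 0xFF, 0xCC, 0x82, 0x63, 0xF8, 0x55, 0x4A, 0x93,
                 0x88, 0x14, 0x58, 0x7A, 0x02, 0x52, 0x1F, 0xDD
             };
             byte[] payload = Encoding.UTF8.GetBytes(rdfXml);
             byte[] box = new byte[4 + 4 + 16 + payload.Length];
             BinaryPrimitives.WriteUInt32BigEndian(box, (uint)box.Length);
             Encoding.ASCII.GetBytes("uuid").CopyTo(box, 4);
             uuid.CopyTo(box, 8);
             payload.CopyTo(box, 24);
             return box;
         }
     }

    That would also explain why just pasting the bare XML into the file (Edit 1) did not work: without a correctly sized enclosing box the file structure becomes invalid.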

    Edit 4

    When I extract the metadata using ffmpeg, the spherical tag shows up in the logs but not in the ffmetadata output file. This was the command I used: ffmpeg -i injected.mp4 -map_metadata -1 -f ffmetadata data.txt

    This is the output of the log:

    fps, 60 tbr, 15360 tbn, 120 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Side data:
         spherical: equirectangular (0.000000/0.000000/0.000000)

    Edit 5

    I also tried to get the metadata using this command: ffprobe -v error -select_streams v:0 -show_streams -of default=noprint_wrappers=1 injected.mp4

    This is the log it outputted:

    TAG:handler_name=VideoHandler
    side_data_type=Spherical Mapping
    projection=equirectangular
    yaw=0
    pitch=0
    roll=0

    I then tried to use this command, but it didn’t work: ffmpeg -i chapmanspeak.mp4 -movflags use_metadata_tags -metadata side_metadata_type="Spherical Mapping" -metadata projection=equirectangular -metadata yaw=0 -metadata pitch=0 -metadata roll=0 -codec copy output.mp4

    Edit 6

    I tried @VC.One’s method, but I must be doing something wrong because the output file is unplayable. Here is my code:

           public static void Metadata(string inputFile, string outputFile)
           {
               byte[] metadata = HexStringToByteArray("3C 3F 78 6D 6C 20 76 65 72 73 69 6F 6E 3D 22 31 2E 30 22 3F 3E 3C 72 64 66 3A 53 70 68 65 72 69 63 61 6C 56 69 64 65 6F 0A 78 6D 6C 6E 73 3A 72 64 66 3D 22 68 74 74 70 3A 2F 2F 77 77 77 2E 77 33 2E 6F 72 67 2F 31 39 39 39 2F 30 32 2F 32 32 2D 72 64 66 2D 73 79 6E 74 61 78 2D 6E 73 23 22 0A 78 6D 6C 6E 73 3A 47 53 70 68 65 72 69 63 61 6C 3D 22 68 74 74 70 3A 2F 2F 6E 73 2E 67 6F 6F 67 6C 65 2E 63 6F 6D 2F 76 69 64 65 6F 73 2F 31 2E 30 2F 73 70 68 65 72 69 63 61 6C 2F 22 3E 3C 47 53 70 68 65 72 69 63 61 6C 3A 53 70 68 65 72 69 63 61 6C 3E 74 72 75 65 3C 2F 47 53 70 68 65 72 69 63 61 6C 3A 53 70 68 65 72 69 63 61 6C 3E 3C 47 53 70 68 65 72 69 63 61 6C 3A 53 74 69 74 63 68 65 64 3E 74 72 75 65 3C 2F 47 53 70 68 65 72 69 63 61 6C 3A 53 74 69 74 63 68 65 64 3E 3C 47 53 70 68 65 72 69 63 61 6C 3A 53 74 69 74 63 68 69 6E 67 53 6F 66 74 77 61 72 65 3E 53 70 68 65 72 69 63 61 6C 20 4D 65 74 61 64 61 74 61 20 54 6F 6F 6C 3C 2F 47 53 70 68 65 72 69 63 61 6C 3A 53 74 69 74 63 68 69 6E 67 53 6F 66 74 77 61 72 65 3E 3C 47 53 70 68 65 72 69 63 61 6C 3A 50 72 6F 6A 65 63 74 69 6F 6E 54 79 70 65 3E 65 71 75 69 72 65 63 74 61 6E 67 75 6C 61 72 3C 2F 47 53 70 68 65 72 69 63 61 6C 3A 50 72 6F 6A 65 63 74 69 6F 6E 54 79 70 65 3E 3C 2F 72 64 66 3A 53 70 68 65 72 69 63 61 6C 56 69 64 65 6F 3E");
               byte[] stco = HexStringToByteArray("73 74 63 6F");
               byte[] moov = HexStringToByteArray("6D 6F 6F 76");
               byte[] trak = HexStringToByteArray("74 72 61 6B");

               byte[] file = File.ReadAllBytes(inputFile);

               //find trak
               int trakPosition = 0;
                for (int a = 0; a < file.Length - trak.Length; a++)
                {
                    for (int b = 0; b < trak.Length; b++)
                   {
                       if (file[a + b] != trak[b])
                           break;
                       if (b == trak.Length - 1)
                           trakPosition = a;
                   }
               }
               if (trakPosition == 0)
                   throw new FileLoadException();

               //add metadata
               int trakLength = BitConverter.ToInt32(new ArraySegment<byte>(file, trakPosition - 4, 4).Reverse().ToArray(), 0);
               var fileList = file.ToList();
               fileList.InsertRange(trakPosition - 4 + trakLength, metadata);
               file = fileList.ToArray();

               ////change length - tried this as well
               //byte[] trakBytes = BitConverter.GetBytes(trakLength + metadata.Length).Reverse().ToArray();
                //for (int i = 0; i < 4; i++)
               //    file[trakPosition - 4 + i] = trakBytes[i];

               //find moov
               int moovPosition = 0;
                for (int a = 0; a < file.Length - moov.Length; a++)
                {
                    for (int b = 0; b < moov.Length; b++)
                   {
                       if (file[a + b] != moov[b])
                           break;
                       if (b == moov.Length - 1)
                           moovPosition = a;
                   }
               }
               if (moovPosition == 0)
                   throw new FileLoadException();

               //change length
               int moovLength = BitConverter.ToInt32(new ArraySegment<byte>(file, moovPosition - 4, 4).Reverse().ToArray(), 0);
               byte[] moovBytes = BitConverter.GetBytes(moovLength + metadata.Length).Reverse().ToArray();
                for (int i = 0; i < 4; i++)
                   file[moovPosition - 4 + i] = moovBytes[i];

               //find stco
               int stcoPosition = 0;
                for (int a = 0; a < file.Length - stco.Length; a++)
                {
                    for (int b = 0; b < stco.Length; b++)
                   {
                       if (file[a + b] != stco[b])
                           break;
                       if (b == stco.Length - 1)
                           stcoPosition = a;
                   }
               }
               if (stcoPosition == 0)
                   throw new FileLoadException();

               //modify entries
               int stcoEntries = BitConverter.ToInt32(new ArraySegment<byte>(file, stcoPosition + 8, 4).Reverse().ToArray(), 0);
                for (int a = stcoPosition + 12; a < stcoPosition + 12 + stcoEntries * 4; a += 4)
               {
                   int entryLength = BitConverter.ToInt32(new ArraySegment<byte>(file, a, 4).Reverse().ToArray(), 0);
                   byte[] newEntry = BitConverter.GetBytes(entryLength + metadata.Length).Reverse().ToArray();
                    for (int b = 0; b < 4; b++)
                       file[a + b] = newEntry[b];
               }

               File.WriteAllBytes(outputFile, file);
           }

           private static byte[] HexStringToByteArray(string hex)
           {
               hex = hex.Replace(" ", "");
               return Enumerable.Range(0, hex.Length)
                                .Where(x => x % 2 == 0)
                                .Select(x => Convert.ToByte(hex.Substring(x, 2), 16))
                                .ToArray();
           }

    The bytes are reversed because box sizes in .mp4 files are big-endian, while BitConverter on x86 is little-endian. I tried to also update the length of trak but that didn’t work either.
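
    Side note: since .NET Core 2.1 the same big-endian reads and writes can be done with System.Buffers.Binary.BinaryPrimitives, without the Reverse()/ToArray() round-trips. A small sketch (not part of the code above):

     using System;
     using System.Buffers.Binary;

     static class BoxSize
     {
         // MP4 box sizes are 32-bit big-endian integers.
         public static int Read(byte[] file, int offset) =>
             (int)BinaryPrimitives.ReadUInt32BigEndian(file.AsSpan(offset, 4));

         public static void Write(byte[] file, int offset, int value) =>
             BinaryPrimitives.WriteUInt32BigEndian(file.AsSpan(offset, 4), (uint)value);
     }

     // e.g. growing the moov box after inserting the metadata:
     // int moovLength = BoxSize.Read(file, moovPosition - 4);
     // BoxSize.Write(file, moovPosition - 4, moovLength + metadata.Length);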

  • FFmpeg - feed raw frames via pipe - FFmpeg does not detect pipe closure

    8 September 2018, by Rumble

    I’m trying to follow these examples in C++ on Windows: Python example, C# example.

    I have an application that produces raw frames that shall be encoded with FFmpeg.
    The raw frames are transferred via an IPC pipe to FFmpeg’s STDIN. That is working as expected; FFmpeg even displays the number of frames currently available.

    The problem occurs when we are done sending frames. When I close the write end of the pipe I would expect FFmpeg to detect that, finish up and output the video. But that does not happen. FFmpeg stays open and seems to wait for more data.

    I made a small test project in Visual Studio.

    #include "stdafx.h"
    //// stdafx.h
    //#include "targetver.h"
    //#include
    //#include
    //#include <iostream>

    #include "Windows.h"
    #include <cstdlib>

    using namespace std;

    bool WritePipe(void* WritePipe, const UINT8 *const Buffer, const UINT32 Length)
    {
       if (WritePipe == nullptr || Buffer == nullptr || Length == 0)
       {
           cout << __FUNCTION__ << ": Some input is useless";
           return false;
       }

       // Write to pipe
       UINT32 BytesWritten = 0;
       UINT8 newline = '\n';
       bool bIsWritten = WriteFile(WritePipe, Buffer, Length, (::DWORD*)&BytesWritten, nullptr);
       cout << __FUNCTION__ << " Bytes written to pipe " << BytesWritten << endl;
       //bIsWritten = WriteFile(WritePipe, &newline, 1, (::DWORD*)&BytesWritten, nullptr); // Do we need this? Actually this should destroy the image.

       FlushFileBuffers(WritePipe); // Do we need this?

       return bIsWritten;
    }

    #define PIXEL 80 // must be multiple of 8. Otherwise we get warning: Bytes are not aligned

    int main()
    {
       HANDLE PipeWriteEnd = nullptr;
       HANDLE PipeReadEnd = nullptr;
       {
           // create us a pipe for inter process communication
           SECURITY_ATTRIBUTES Attr = { sizeof(SECURITY_ATTRIBUTES), NULL, true };
           if (!CreatePipe(&PipeReadEnd, &PipeWriteEnd, &Attr, 0))
           {
               cout << "Could not create pipes" << ::GetLastError() << endl;
               system("Pause");
               return 0;
           }
       }

       // Setup the variables needed for CreateProcess
       // initialize process attributes
       SECURITY_ATTRIBUTES Attr;
       Attr.nLength = sizeof(SECURITY_ATTRIBUTES);
       Attr.lpSecurityDescriptor = NULL;
       Attr.bInheritHandle = true;

       // initialize process creation flags
       UINT32 CreateFlags = NORMAL_PRIORITY_CLASS;
       CreateFlags |= CREATE_NEW_CONSOLE;

       // initialize window flags
       UINT32 dwFlags = 0;
       UINT16 ShowWindowFlags = SW_HIDE;

       if (PipeWriteEnd != nullptr || PipeReadEnd != nullptr)
       {
           dwFlags |= STARTF_USESTDHANDLES;
       }

       // initialize startup info
       STARTUPINFOA StartupInfo = {
           sizeof(STARTUPINFO),
           NULL, NULL, NULL,
           (::DWORD)CW_USEDEFAULT,
           (::DWORD)CW_USEDEFAULT,
           (::DWORD)CW_USEDEFAULT,
           (::DWORD)CW_USEDEFAULT,
           (::DWORD)0, (::DWORD)0, (::DWORD)0,
           (::DWORD)dwFlags,
           ShowWindowFlags,
           0, NULL,
           HANDLE(PipeReadEnd),
           HANDLE(nullptr),
           HANDLE(nullptr)
       };

       LPSTR ffmpegURL = "\"PATHTOFFMPEGEXE\" -y -loglevel verbose -f rawvideo -vcodec rawvideo -framerate 1 -video_size 80x80 -pixel_format rgb24 -i - -vcodec mjpeg -framerate 1/4 -an \"OUTPUTDIRECTORY\"";

       // Finally create the process
       PROCESS_INFORMATION ProcInfo;
       if (!CreateProcessA(NULL, ffmpegURL, &Attr, &Attr, true, (::DWORD)CreateFlags, NULL, NULL, &StartupInfo, &ProcInfo))
       {
           cout << "CreateProcess failed " << ::GetLastError() << endl;
       }
       //CloseHandle(ProcInfo.hThread);


           // Create images and write to pipe
    #define MYARRAYSIZE (PIXEL*PIXEL*3) // each pixel has 3 bytes

       UINT8* Bitmap = new UINT8[MYARRAYSIZE];

       for (INT32 outerLoopIndex = 9; outerLoopIndex >= 0; --outerLoopIndex)   // frame loop
       {
           for (INT32 innerLoopIndex = MYARRAYSIZE - 1; innerLoopIndex >= 0; --innerLoopIndex) // create the pixels for each frame
           {
               Bitmap[innerLoopIndex] = (UINT8)(outerLoopIndex * 20); // some gray color
           }
           system("pause");
           if (!WritePipe(PipeWriteEnd, Bitmap, MYARRAYSIZE))
           {
           cout << "Failed writing to pipe" << endl;
           }
       }

       // Done sending images. Tell the other process. IS THIS NEEDED? HOW TO TELL FFmpeg WE ARE DONE?
       //UINT8 endOfFile = 0xFF; // EOF = -1 == 1111 1111 for uint8
       //if (!WritePipe(PipeWriteEnd, &endOfFile, 1))
       //{
       //  cout << "Failed writing to pipe" << endl;
       //}
       //FlushFileBuffers(PipeReadEnd); // Do we need this?

       delete[] Bitmap; // Bitmap was allocated with new[], so delete[] is required


       system("pause");

       // clean stuff up
       FlushFileBuffers(PipeWriteEnd); // Do we need this?

       if (PipeWriteEnd != NULL && PipeWriteEnd != INVALID_HANDLE_VALUE)
       {
           CloseHandle(PipeWriteEnd);
       }

       // We do not want to destroy the read end of the pipe? Should not as that belongs to FFmpeg
       //if (PipeReadEnd != NULL &amp;&amp; PipeReadEnd != INVALID_HANDLE_VALUE)
       //{
       //  ::CloseHandle(PipeReadEnd);
       //}


       return 0;

    }

    And here is the output of FFmpeg:

    ffmpeg version 3.4.1 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 7.2.0 (GCC)
     configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth --enable-libmfx
     libavutil      55. 78.100 / 55. 78.100
     libavcodec     57.107.100 / 57.107.100
     libavformat    57. 83.100 / 57. 83.100
     libavdevice    57. 10.100 / 57. 10.100
     libavfilter     6.107.100 /  6.107.100
     libswscale      4.  8.100 /  4.  8.100
     libswresample   2.  9.100 /  2.  9.100
     libpostproc    54.  7.100 / 54.  7.100
    [rawvideo @ 00000221ff992120] max_analyze_duration 5000000 reached at 5000000 microseconds st:0
    Input #0, rawvideo, from 'pipe:':
     Duration: N/A, start: 0.000000, bitrate: 153 kb/s
       Stream #0:0: Video: rawvideo, 1 reference frame (RGB[24] / 0x18424752), rgb24, 80x80, 153 kb/s, 1 fps, 1 tbr, 1 tbn, 1 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo (native) -> mjpeg (native))
    [graph 0 input from stream 0:0 @ 00000221ff999c20] w:80 h:80 pixfmt:rgb24 tb:1/1 fr:1/1 sar:0/1 sws_param:flags=2
    [auto_scaler_0 @ 00000221ffa071a0] w:iw h:ih flags:'bicubic' interl:0
    [format @ 00000221ffa04e20] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_null_0' and the filter 'format'
    [swscaler @ 00000221ffa0a780] deprecated pixel format used, make sure you did set range correctly
    [auto_scaler_0 @ 00000221ffa071a0] w:80 h:80 fmt:rgb24 sar:0/1 -> w:80 h:80 fmt:yuvj444p sar:0/1 flags:0x4
    Output #0, mp4, to 'c:/users/vr3/Documents/Guenni/sometest.mp4':
     Metadata:
       encoder         : Lavf57.83.100
       Stream #0:0: Video: mjpeg, 1 reference frame (mp4v / 0x7634706D), yuvj444p(pc), 80x80, q=2-31, 200 kb/s, 1 fps, 16384 tbn, 1 tbc
       Metadata:
         encoder         : Lavc57.107.100 mjpeg
       Side data:
         cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
    frame=   10 fps=6.3 q=1.6 size=       0kB time=00:00:09.00 bitrate=   0.0kbits/s speed=5.63x

    As you can see in the last line of the FFmpeg output, the images got through. 10 frames are available. But after closing the pipe, FFmpeg does not close, still expecting input.

    As the linked examples show, this should be a valid method.

    Trying for a week now...
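
    For comparison, a minimal C# sketch of the piping approach the linked C# example takes (a reconstruction under assumptions, since the linked code is not quoted here). With System.Diagnostics.Process, closing the redirected stdin stream is what delivers EOF to FFmpeg so it can finalize the file:

     using System;
     using System.Diagnostics;

     class RawFramePipe
     {
         static void Main()
         {
             const int pixel = 80; // must match -video_size below
             var ffmpeg = new Process
             {
                 StartInfo = new ProcessStartInfo
                 {
                     FileName = "ffmpeg", // assumed to be on PATH
                     Arguments = "-y -f rawvideo -pixel_format rgb24 -video_size 80x80 " +
                                 "-framerate 1 -i - -vcodec mjpeg -an out.mp4",
                     UseShellExecute = false,
                     RedirectStandardInput = true
                 }
             };
             ffmpeg.Start();

             var frame = new byte[pixel * pixel * 3]; // rgb24: 3 bytes per pixel
             for (int i = 9; i >= 0; --i)
             {
                 Array.Fill(frame, (byte)(i * 20)); // some gray shade
                 ffmpeg.StandardInput.BaseStream.Write(frame, 0, frame.Length);
             }

             // Closing stdin delivers EOF; FFmpeg then finishes and exits.
             ffmpeg.StandardInput.BaseStream.Close();
             ffmpeg.WaitForExit();
         }
     }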