
Other articles (49)

  • The plugin: Podcasts.

    14 July 2010, by

    The problem of podcasting is once again one that reveals the state of standardization of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, strongly geared toward iTunes, whose SPEC is here; the "Media RSS Module" format, which is more "open" and backed notably by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Installation prerequisites

    31 January 2010, by

    Preamble
    This article is not meant to detail the installation of these programs, but rather to give information about their specific configuration.
    First of all, SPIPMotion, like MediaSPIP, is designed to run on Debian-type Linux distributions or derivatives (Ubuntu...). The documentation on this site therefore refers to these distributions. It can also be used on other Linux distributions, but proper operation cannot be guaranteed.
    It (...)

On other sites (5852)

  • Can't set seeker in GSTREAMER cv2, python

    29 April, by Alperen Ölçer

    I want to skip n seconds forward and backward in a GStreamer cv2 capture of recorded videos. But when I use cap_gstreamer.set(cv2.CAP_PROP_POS_FRAMES, fps*skip_second), it resets the seeker to the beginning of the video. How can I solve it? I wrote an example using a recorded clock video.

    


    import cv2

video_p = '/home/alperenlcr/Videos/clock.mp4'

cap_gstreamer = cv2.VideoCapture(video_p, cv2.CAP_GSTREAMER)
cap_ffmpeg = cv2.VideoCapture(video_p, cv2.CAP_FFMPEG)

fps = cap_gstreamer.get(cv2.CAP_PROP_FPS)
skip_second = 100

im1 = cv2.resize(cap_gstreamer.read()[1], (960, 540))
im1_ffmpeg = cv2.resize(cap_ffmpeg.read()[1], (960, 540))

cap_gstreamer.set(cv2.CAP_PROP_POS_FRAMES, fps*skip_second)
cap_ffmpeg.set(cv2.CAP_PROP_POS_FRAMES, fps*skip_second)

im2 = cv2.resize(cap_gstreamer.read()[1], (960, 540))
im2_ffmpeg = cv2.resize(cap_ffmpeg.read()[1], (960, 540))

merge_gstreamer = cv2.hconcat([im1, im2])
merge_ffmpeg = cv2.hconcat([im1_ffmpeg, im2_ffmpeg])

cv2.imshow(str(skip_second) + ' gstreamer', merge_gstreamer)
cv2.imshow(str(skip_second) + ' ffmpeg', merge_ffmpeg)
cv2.waitKey(0)
cv2.destroyAllWindows()

cap_gstreamer.release()
cap_ffmpeg.release()
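    A common workaround when a backend's frame-accurate seek is unreliable is to advance the capture manually with grab(), which decodes a frame without the colour-conversion cost of read(). This is only a sketch (skip_frames is a hypothetical helper, not an OpenCV API), and it only seeks forward from the current position:

```python
def skip_frames(cap, frame_count):
    """Advance a cv2.VideoCapture-like object by frame_count frames.

    Uses grab(), which decodes a frame without returning it, as a
    fallback when setting CAP_PROP_POS_FRAMES is unreliable.
    Returns the number of frames actually skipped.
    """
    skipped = 0
    for _ in range(int(frame_count)):
        if not cap.grab():  # grab() returns False at end of stream
            break
        skipped += 1
    return skipped
```

    For backward skips this approach requires reopening the capture and skipping forward from the start, which is slower but behaves predictably across backends.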



    


    Result: (screenshot comparing the GStreamer and FFmpeg frames; image not reproduced here)

    


    My cv2 build looks like this:

    


    >>> print(cv2.getBuildInformation())

General configuration for OpenCV 4.8.1 =====================================
  Version control:               4.8.1-dirty

  Extra modules:
    Location (extra):            /home/alperenlcr/SourceInstalls/opencv_contrib/modules
    Version control (extra):     4.8.1

  Platform:
    Timestamp:                   2024-12-02T13:44:58Z
    Host:                        Linux 6.8.0-49-generic x86_64
    CMake:                       3.22.1
    CMake generator:             Unix Makefiles
    CMake build tool:            /usr/bin/gmake
    Configuration:               RELEASE

  CPU/HW features:
    Baseline:                    SSE SSE2 SSE3
      requested:                 SSE3
    Dispatched code generation:  SSE4_1 SSE4_2 FP16 AVX AVX2 AVX512_SKX
      requested:                 SSE4_1 SSE4_2 AVX FP16 AVX2 AVX512_SKX
      SSE4_1 (18 files):         + SSSE3 SSE4_1
      SSE4_2 (2 files):          + SSSE3 SSE4_1 POPCNT SSE4_2
      FP16 (1 files):            + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX
      AVX (8 files):             + SSSE3 SSE4_1 POPCNT SSE4_2 AVX
      AVX2 (37 files):           + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2
      AVX512_SKX (8 files):      + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2 AVX_512F AVX512_COMMON AVX512_SKX

  C/C++:
    Built as dynamic libs?:      NO
    C++ standard:                11
    C++ Compiler:                /usr/bin/c++  (ver 10.5.0)
    C++ flags (Release):         -fsigned-char -ffast-math -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG  -DNDEBUG
    C++ flags (Debug):           -fsigned-char -ffast-math -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -g  -O0 -DDEBUG -D_DEBUG
    C Compiler:                  /usr/bin/cc
    C flags (Release):           -fsigned-char -ffast-math -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -O3 -DNDEBUG  -DNDEBUG
    C flags (Debug):             -fsigned-char -ffast-math -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -g  -O0 -DDEBUG -D_DEBUG
    Linker flags (Release):      -Wl,--exclude-libs,libippicv.a -Wl,--exclude-libs,libippiw.a   -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined  
    Linker flags (Debug):        -Wl,--exclude-libs,libippicv.a -Wl,--exclude-libs,libippiw.a   -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined  
    ccache:                      NO
    Precompiled headers:         NO
    Extra dependencies:          /usr/lib/x86_64-linux-gnu/libjpeg.so /usr/lib/x86_64-linux-gnu/libpng.so /usr/lib/x86_64-linux-gnu/libtiff.so /usr/lib/x86_64-linux-gnu/libz.so /usr/lib/x86_64-linux-gnu/libfreetype.so /usr/lib/x86_64-linux-gnu/libharfbuzz.so Iconv::Iconv m pthread cudart_static dl rt nppc nppial nppicc nppidei nppif nppig nppim nppist nppisu nppitc npps cublas cudnn cufft -L/usr/lib/x86_64-linux-gnu -L/usr/lib/cuda/lib64
    3rdparty dependencies:       libprotobuf ade ittnotify libwebp libopenjp2 IlmImf quirc ippiw ippicv

  OpenCV modules:
    To be built:                 aruco bgsegm bioinspired calib3d ccalib core cudaarithm cudabgsegm cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev datasets dnn dnn_objdetect dnn_superres dpm face features2d flann freetype fuzzy gapi hfs highgui img_hash imgcodecs imgproc intensity_transform line_descriptor mcc ml objdetect optflow phase_unwrapping photo plot python3 quality rapid reg rgbd saliency shape stereo stitching structured_light superres surface_matching text tracking ts video videoio videostab wechat_qrcode xfeatures2d ximgproc xobjdetect xphoto
    Disabled:                    cudacodec world
    Disabled by dependency:      -
    Unavailable:                 alphamat cvv hdf java julia matlab ovis python2 sfm viz
    Applications:                tests perf_tests examples apps
    Documentation:               NO
    Non-free algorithms:         NO

  GUI:                           GTK2
    QT:                          NO
    GTK+:                        YES (ver 2.24.33)
      GThread :                  YES (ver 2.72.4)
      GtkGlExt:                  NO
    OpenGL support:              NO
    VTK support:                 NO

  Media I/O: 
    ZLib:                        /usr/lib/x86_64-linux-gnu/libz.so (ver 1.2.11)
    JPEG:                        /usr/lib/x86_64-linux-gnu/libjpeg.so (ver 80)
    WEBP:                        build (ver encoder: 0x020f)
    PNG:                         /usr/lib/x86_64-linux-gnu/libpng.so (ver 1.6.37)
    TIFF:                        /usr/lib/x86_64-linux-gnu/libtiff.so (ver 42 / 4.3.0)
    JPEG 2000:                   build (ver 2.5.0)
    OpenEXR:                     build (ver 2.3.0)
    HDR:                         YES
    SUNRASTER:                   YES
    PXM:                         YES
    PFM:                         YES

  Video I/O:
    DC1394:                      NO
    FFMPEG:                      YES
      avcodec:                   YES (58.134.100)
      avformat:                  YES (58.76.100)
      avutil:                    YES (56.70.100)
      swscale:                   YES (5.9.100)
      swresample:                YES (3.9.100)
    GStreamer:                   YES (1.20.3)
    v4l/v4l2:                    YES (linux/videodev2.h)

  Parallel framework:            TBB (ver 2021.5 interface 12050)

  Trace:                         YES (with Intel ITT)

  Other third-party libraries:
    Intel IPP:                   2021.8 [2021.8.0]
           at:                   /home/alperenlcr/SourceInstalls/opencv/build/3rdparty/ippicv/ippicv_lnx/icv
    Intel IPP IW:                sources (2021.8.0)
              at:                /home/alperenlcr/SourceInstalls/opencv/build/3rdparty/ippicv/ippicv_lnx/iw
    VA:                          NO
    Lapack:                      NO
    Eigen:                       NO
    Custom HAL:                  NO
    Protobuf:                    build (3.19.1)
    Flatbuffers:                 builtin/3rdparty (23.5.9)

  NVIDIA CUDA:                   YES (ver 11.5, CUFFT CUBLAS NVCUVID NVCUVENC FAST_MATH)
    NVIDIA GPU arch:             86
    NVIDIA PTX archs:

  cuDNN:                         YES (ver 8.6.0)

  OpenCL:                        YES (no extra features)
    Include path:                /home/alperenlcr/SourceInstalls/opencv/3rdparty/include/opencl/1.2
    Link libraries:              Dynamic load

  ONNX:                          NO

  Python 3:
    Interpreter:                 /usr/bin/python3 (ver 3.10.12)
    Libraries:                   /usr/lib/x86_64-linux-gnu/libpython3.10.so (ver 3.10.12)
    numpy:                       /usr/lib/python3/dist-packages/numpy/core/include (ver 1.21.5)
    install path:                lib/python3.10/dist-packages/cv2/python-3.10

  Python (for build):            /usr/bin/python3

  Java:                          
    ant:                         NO
    Java:                        NO
    JNI:                         NO
    Java wrappers:               NO
    Java tests:                  NO

  Install to:                    /usr/local
-----------------------------------------------------------------



    


  • Send image and audio data to FFmpeg via named pipes

    5 May, by Nicke Manarin

    I'm able to send frames one by one to FFmpeg via a named pipe to create a video out of them, but if I try sending audio to a second named pipe, FFmpeg only accepts one frame in the frame pipe and starts reading from the audio pipe soon after.

    


    ffmpeg.exe -loglevel debug -hwaccel auto 
-f:v rawvideo -r 25 -pix_fmt bgra -video_size 782x601 -i \\.\pipe\video_to_ffmpeg 
-f:a s16le -ac 2 -ar 48000 -i \\.\pipe\audio_to_ffmpeg 
-c:v libx264 -preset fast -pix_fmt yuv420p 
-vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -crf 23 -f:v mp4 -vsync vfr 
-c:a aac -b:a 128k -ar 48000 -ac 2 
-y "C:\Users\user\Desktop\video.mp4"


    


    I start both pipes like so:

    


    _imagePipeServer = new NamedPipeServerStream(ImagePipeName, PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous);
_imagePipeStreamWriter = new StreamWriter(_imagePipeServer);
_imagePipeServer.BeginWaitForConnection(null, null);

_audioPipeServer = new NamedPipeServerStream(AudioPipeName, PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous);
_audioPipeStreamWriter = new StreamWriter(_audioPipeServer);
_audioPipeServer.BeginWaitForConnection(null, null);


    


    And send the data to the pipes using these methods:

    


    public void EncodeFrame(byte[] data)
{
    if (_imagePipeServer?.IsConnected != true)
        throw new FFmpegException("Pipe not connected", Arguments, Output);

    _imagePipeStreamWriter?.BaseStream.Write(data, 0, data.Length);
}


    


    public void EncodeAudio(ISampleProvider provider, long length)
{
    if (_audioPipeServer?.IsConnected != true)
        throw new FFmpegException("Pipe not connected", Arguments, Output);

    var buffer = new byte[provider.WaveFormat.AverageBytesPerSecond * length / TimeSpan.TicksPerSecond];
    var bytesRead = provider.ToWaveProvider().Read(buffer, 0, buffer.Length);

    if (bytesRead < 1)
        return;

    _audioPipeStreamWriter?.BaseStream.Write(buffer, 0, bytesRead);
    _audioPipeStreamWriter?.BaseStream.Flush();
}
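    The buffer sizing above turns a duration in .NET ticks into a byte count. As a quick sanity check (illustrative Python, not the post's code; assumes 48 kHz stereo s16le, i.e. 192,000 average bytes per second):

```python
TICKS_PER_SECOND = 10_000_000  # .NET TimeSpan.TicksPerSecond

def audio_bytes_for(duration_ticks, avg_bytes_per_second):
    """Bytes of audio covering duration_ticks of wall-clock time."""
    return avg_bytes_per_second * duration_ticks // TICKS_PER_SECOND

# One 25 fps video frame lasts 400_000 ticks (40 ms); at 48 kHz stereo
# s16le (48_000 Hz * 2 channels * 2 bytes = 192_000 B/s) that is 7_680 bytes.
```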


    


    Not sending the audio (and thus not creating the audio pipe) works, with FFmpeg taking one frame at a time and creating the video normally.

    


    But if I try sending the audio via a secondary pipe, I can only send one frame. This is the output when that happens (by the way, this is FFmpeg v7.1):

    


    Splitting the commandline.
Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
Reading option '-hwaccel' ... matched as option 'hwaccel' (use HW accelerated decoding) with argument 'auto'.
Reading option '-f:v' ... matched as option 'f' (force container format (auto-detected otherwise)) with argument 'rawvideo'.
Reading option '-r' ... matched as option 'r' (override input framerate/convert to given output framerate (Hz value, fraction or abbreviation)) with argument '25'.
Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) with argument 'bgra'.
Reading option '-video_size' ... matched as AVOption 'video_size' with argument '782x601'.
Reading option '-i' ... matched as input url with argument '\\.\pipe\video_to_ffmpeg'.
Reading option '-f:a' ... matched as option 'f' (force container format (auto-detected otherwise)) with argument 's16le'.
Reading option '-ac' ... matched as option 'ac' (set number of audio channels) with argument '2'.
Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '48000'.
Reading option '-i' ... matched as input url with argument '\\.\pipe\audio_to_ffmpeg'.
Reading option '-c:v' ... matched as option 'c' (select encoder/decoder ('copy' to copy stream without reencoding)) with argument 'libx264'.
Reading option '-preset' ... matched as AVOption 'preset' with argument 'fast'.
Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) with argument 'yuv420p'.
Reading option '-vf' ... matched as option 'vf' (alias for -filter:v (apply filters to video streams)) with argument 'scale=trunc(iw/2)*2:trunc(ih/2)*2'.
Reading option '-crf' ... matched as AVOption 'crf' with argument '23'.
Reading option '-f:v' ... matched as option 'f' (force container format (auto-detected otherwise)) with argument 'mp4'.
Reading option '-fps_mode' ... matched as option 'fps_mode' (set framerate mode for matching video streams; overrides vsync) with argument 'vfr'.
Reading option '-c:a' ... matched as option 'c' (select encoder/decoder ('copy' to copy stream without reencoding)) with argument 'aac'.
Reading option '-b:a' ... matched as option 'b' (video bitrate (please use -b:v)) with argument '128k'.
Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '48000'.
Reading option '-ac' ... matched as option 'ac' (set number of audio channels) with argument '2'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option 'C:\Users\user\Desktop\video.mp4' ... matched as output url.
Finished splitting the commandline.

Parsing a group of options: global.
Applying option loglevel (set logging level) with argument debug.
Applying option y (overwrite output files) with argument 1.
Successfully parsed a group of options.

Parsing a group of options: input url \\.\pipe\video_to_ffmpeg.
Applying option hwaccel (use HW accelerated decoding) with argument auto.
Applying option f:v (force container format (auto-detected otherwise)) with argument rawvideo.
Applying option r (override input framerate/convert to given output framerate (Hz value, fraction or abbreviation)) with argument 25.
Applying option pix_fmt (set pixel format) with argument bgra.
Successfully parsed a group of options.

Opening an input file: \\.\pipe\video_to_ffmpeg.
[rawvideo @ 000001c302ee08c0] Opening '\\.\pipe\video_to_ffmpeg' for reading
[file @ 000001c302ee1000] Setting default whitelist 'file,crypto,data'
[rawvideo @ 000001c302ee08c0] Before avformat_find_stream_info() pos: 0 bytes read:65536 seeks:0 nb_streams:1
[rawvideo @ 000001c302ee08c0] All info found
[rawvideo @ 000001c302ee08c0] After avformat_find_stream_info() pos: 1879928 bytes read:1879928 seeks:0 frames:1
Input #0, rawvideo, from '\\.\pipe\video_to_ffmpeg':
  Duration: N/A, start: 0.000000, bitrate: 375985 kb/s
  Stream #0:0, 1, 1/25: Video: rawvideo, 1 reference frame (BGRA / 0x41524742), bgra, 782x601, 0/1, 375985 kb/s, 25 tbr, 25 tbn
Successfully opened the file.

Parsing a group of options: input url \\.\pipe\audio_to_ffmpeg.
Applying option f:a (force container format (auto-detected otherwise)) with argument s16le.
Applying option ac (set number of audio channels) with argument 2.
Applying option ar (set audio sampling rate (in Hz)) with argument 48000.
Successfully parsed a group of options.

Opening an input file: \\.\pipe\audio_to_ffmpeg.
[s16le @ 000001c302ef5380] Opening '\\.\pipe\audio_to_ffmpeg' for reading
[file @ 000001c302ef58c0] Setting default whitelist 'file,crypto,data'


    


    If I instead try sending 1 frame and then some bytes of audio (an arbitrary length based on fps), the difference is that I get this extra line at the end:

    


    [s16le @ 0000025948c96d00] Before avformat_find_stream_info() pos: 0 bytes read:15360 seeks:0 nb_streams:1


    


    Extra calls to EncodeFrame() hang forever at the BaseStream.Write(frameBytes, 0, frameBytes.Length) call, suggesting that FFmpeg is no longer reading the data.

    


    Something is causing FFmpeg to close or stop reading the first pipe and only accept data from the second one.

    


    Perhaps the command is missing something?

    



    


    🏆 Working solution

    


    I started using two BlockingCollection objects, with the consumers running in separate tasks.
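    The idea can be sketched in Python as an illustrative analogue (not the poster's C# code): a queue.Queue stands in for each BlockingCollection, and a dedicated thread plays the role of each consumer task, so a blocking write to one pipe never stalls the producer or the other pipe. PipeFeeder is a hypothetical name:

```python
import queue
import threading

class PipeFeeder:
    """Decouple a producer from a blocking sink.

    Each sink gets its own queue and consumer thread, mirroring the
    BlockingCollection + Task pattern described above.
    """

    def __init__(self, sink_write):
        self._queue = queue.Queue()
        self._sink_write = sink_write  # blocking write, e.g. a pipe's write
        self._thread = threading.Thread(target=self._drain, daemon=True)
        self._thread.start()

    def _drain(self):
        while True:
            chunk = self._queue.get()
            if chunk is None:  # sentinel: producer is done (CompleteAdding)
                break
            self._sink_write(chunk)

    def add(self, chunk):
        """Queue a chunk without blocking on the sink."""
        self._queue.put(chunk)

    def complete(self):
        """Signal end of data and wait for the consumer to drain."""
        self._queue.put(None)
        self._thread.join()
```

    Here complete() plays the role of CompleteAdding() followed by waiting for the consumer to finish draining.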

    


    Start the process, setting up the pipes:

    


    private Process? _process;
private NamedPipeServerStream? _imagePipeServer;
private NamedPipeServerStream? _audioPipeServer;
private StreamWriter? _imagePipeStreamWriter;
private StreamWriter? _audioPipeStreamWriter;
private readonly BlockingCollection<byte[]> _videoCollection = new();
private readonly BlockingCollection<byte[]> _audioCollection = new();

private const string ImagePipeName = "video_to_ffmpeg";
private const string AudioPipeName = "audio_to_ffmpeg";
private const string PipeStructure = @"\\.\pipe\"; //This part is only sent to FFmpeg, not to the .NET pipe creation.

public void StartEncoding(string arguments)
{
    _process = new Process
    {
        StartInfo = new ProcessStartInfo
        {
            FileName = "path to ffmpeg",
            Arguments = arguments.Replace("{image}", PipeStructure + ImagePipeName).Replace("{audio}", PipeStructure + AudioPipeName),
            RedirectStandardInput = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            UseShellExecute = false,
            CreateNoWindow = true
        }
    };

    StartFramePipeConnection();
    StartAudioPipeConnection();

    _process.Start();
    _process.BeginErrorReadLine();
    _process.BeginOutputReadLine();
}

private void StartFramePipeConnection()
{
    if (_imagePipeServer != null)
    {
        if (_imagePipeServer.IsConnected)
            _imagePipeServer.Disconnect();

        _imagePipeServer.Dispose();
    }

    _imagePipeServer = new NamedPipeServerStream(ImagePipeName, PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous);
    _imagePipeStreamWriter = new StreamWriter(_imagePipeServer);
    _imagePipeServer.BeginWaitForConnection(VideoPipe_Connected, null);
}

private void StartAudioPipeConnection()
{
    if (_audioPipeServer != null)
    {
        if (_audioPipeServer.IsConnected)
            _audioPipeServer.Disconnect();

        _audioPipeServer.Dispose();
    }

    _audioPipeServer = new NamedPipeServerStream(AudioPipeName, PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous);
    _audioPipeStreamWriter = new StreamWriter(_audioPipeServer);
    _audioPipeServer.BeginWaitForConnection(AudioPipe_Connected, null);
}


    


    Start sending the data as soon as the pipe gets connected. Once the BlockingCollection is signaled that no more data will be added, the consumer leaves the foreach block and waits for the pipe to drain its data.

    


    private void VideoPipe_Connected(IAsyncResult ar)
{
    Task.Run(() =>
    {
        try
        {
            foreach (var frameBytes in _videoCollection.GetConsumingEnumerable())
            {                    
                _imagePipeStreamWriter?.BaseStream.Write(frameBytes, 0, frameBytes.Length);
            }

            _imagePipeServer?.WaitForPipeDrain();
            _imagePipeStreamWriter?.Close();
        }
        catch (Exception e)
        {
            //Logging
            throw;
        }
    });
}

private void AudioPipe_Connected(IAsyncResult ar)
{
    Task.Run(() =>
    {
        try
        {
            foreach (var audioChunk in _audioCollection.GetConsumingEnumerable())
            {
                _audioPipeStreamWriter?.BaseStream.Write(audioChunk, 0, audioChunk.Length);
            }

            _audioPipeServer?.WaitForPipeDrain();
            _audioPipeStreamWriter?.Close();
        }
        catch (Exception e)
        {
            //Logging
            throw;
        }
    });
}


    


    You can start queueing image and audio data as soon as the BlockingCollections are initialized; there's no need to wait for the pipes to connect.

    


    public void EncodeImage(byte[] data)
{
    _videoCollection.Add(data);
}

public void EncodeAudio(ISampleProvider provider, long length)
{
    var sampleCount = (int)(provider.WaveFormat.SampleRate * ((double)length / TimeSpan.TicksPerSecond) * provider.WaveFormat.Channels);
    var floatBuffer = new float[sampleCount];

    var samplesRead = provider.Read(floatBuffer, 0, sampleCount);

    if (samplesRead < 1)
        return;

    var byteBuffer = new byte[samplesRead * 4]; //4 bytes per float, f32le.
    Buffer.BlockCopy(floatBuffer, 0, byteBuffer, 0, byteBuffer.Length);

    _audioCollection.Add(byteBuffer);
}
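    The float-to-f32le packing that Buffer.BlockCopy performs above can be illustrated in Python with struct (an analogue, not the post's code). FFmpeg's -f:a f32le input expects exactly this little-endian 32-bit float layout:

```python
import struct

def samples_to_f32le(samples):
    """Pack float samples as little-endian 32-bit floats (f32le)."""
    return struct.pack('<%df' % len(samples), *samples)

def f32le_to_samples(data):
    """Unpack f32le bytes back into a list of floats (4 bytes each)."""
    return list(struct.unpack('<%df' % (len(data) // 4), data))
```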


    


    Once you have finished producing data, make sure to signal the BlockingCollections:

    


    public void FinishEncoding()
{
    //Signal the end of the video/audio producers.
    _videoCollection.CompleteAdding();
    _audioCollection.CompleteAdding();

    //Wait up to 20 seconds for the encoding to finish.
    _process?.WaitForExit(20_000);
}


    


    The FFmpeg arguments were changed slightly:

    


    -loglevel trace -hwaccel auto 
-f:v rawvideo -probesize 32 -r 25 -pix_fmt bgra -video_size 1109x627 -i {image} 
-f:a f32le -ac 2 -ar 48000 -probesize 32 -i {audio} 
-c:v libx264 -preset fast -pix_fmt yuv420p 
-vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -crf 23 -f:v mp4 -fps_mode vfr 
-c:a aac -b:a 128k -ar 48000 -ac 2 
-y "C:\Users\user\Desktop\Video.mp4"


    


  • avcodec/idctdsp: Transmit studio_profile to init instead of using AVCodecContext...

    28 May 2018, by Michael Niedermayer
    avcodec/idctdsp: Transmit studio_profile to init instead of using AVCodecContext profile
    

    These 2 fields are not always the same, it is simpler to always use the same field
    for detecting studio profile

    Fixes: null pointer dereference
    Fixes: ffmpeg_crash_3.avi

    Found-by: Thuan Pham <thuanpv@comp.nus.edu.sg>, Marcel Böhme, Andrew Santosa and Alexandru RazvanCaciulescu with AFLSmart
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

    • [DH] libavcodec/idctdsp.c
    • [DH] libavcodec/idctdsp.h
    • [DH] libavcodec/mpegvideo.c