
Other articles (51)

  • Customising categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Texte
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not shown by default are: Descriptif rapide
    It is also in this configuration area that you can specify the (...)

  • Adding notes and captions to images

    7 February 2011

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area in order to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Contribute to translation

    13 avril 2011

    You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
    To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

On other sites (11086)

  • How to write a video file using FFmpeg

    15 January 2024, by Summit

    I am trying to write a video file using FFmpeg, but I get the following errors:

    


    [libx264 @ 000002bdf90c3c00] broken ffmpeg default settings detected
    [libx264 @ 000002bdf90c3c00] use an encoding preset (e.g. -vpre medium)
    [libx264 @ 000002bdf90c3c00] preset usage: -vpre <speed> -vpre <profile>
    [libx264 @ 000002bdf90c3c00] speed presets are listed in x264 --help
    [libx264 @ 000002bdf90c3c00] profile is optional; x264 defaults to high

    This is my code


    #pragma warning(disable : 4996)

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>
    #include <libavutil/mathematics.h>
    #include <libswscale/swscale.h>
    }

    int main() {
        av_register_all();
        AVFormatContext* formatContext = nullptr;
        AVOutputFormat* outputFormat = nullptr;
        AVStream* videoStream = nullptr;

        const char* filename = "output.mp4";

        // Open the output file
        if (avformat_alloc_output_context2(&formatContext, nullptr, nullptr, filename) < 0) {
            fprintf(stderr, "Error allocating output format context\n");
            return -1;
        }

        outputFormat = formatContext->oformat;

        // Add a video stream
        videoStream = avformat_new_stream(formatContext, nullptr);
        if (!videoStream) {
            fprintf(stderr, "Error creating video stream\n");
            return -1;
        }

        // Set codec parameters, you may need to adjust these based on your needs
        AVCodecContext* codecContext = avcodec_alloc_context3(nullptr);
        codecContext->codec_id = outputFormat->video_codec;
        codecContext->codec_type = AVMEDIA_TYPE_VIDEO;
        codecContext->pix_fmt = AV_PIX_FMT_YUV420P;
        codecContext->width = 640;
        codecContext->height = 480;
        codecContext->time_base = { 1, 25 };

        // Open the video codec
        AVCodec* videoCodec = avcodec_find_encoder(codecContext->codec_id);
        if (!videoCodec) {
            fprintf(stderr, "Error finding video codec\n");
            return -1;
        }

        if (avcodec_open2(codecContext, videoCodec, nullptr) < 0) {
            fprintf(stderr, "Error opening video codec\n");
            return -1;
        }

        videoStream->codecpar->codec_id = codecContext->codec_id;
        videoStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        videoStream->codecpar->format = codecContext->pix_fmt;
        videoStream->codecpar->width = codecContext->width;
        videoStream->codecpar->height = codecContext->height;

        if (avformat_write_header(formatContext, nullptr) < 0) {
            fprintf(stderr, "Error writing header\n");
            return -1;
        }

        // Create a frame
        AVFrame* frame = av_frame_alloc();
        frame->format = codecContext->pix_fmt;
        frame->width = codecContext->width;
        frame->height = codecContext->height;
        av_frame_get_buffer(frame, 32);

        // Fill the frame with red color
        for (int y = 0; y < codecContext->height; ++y) {
            for (int x = 0; x < codecContext->width; ++x) {
                frame->data[0][y * frame->linesize[0] + x * 3] = 255;     // Red component
                frame->data[0][y * frame->linesize[0] + x * 3 + 1] = 0;   // Green component
                frame->data[0][y * frame->linesize[0] + x * 3 + 2] = 0;   // Blue component
            }
        }

        // Write video frames
        AVPacket packet;
        for (int i = 0; i < 100; ++i) {
            // Send the frame for encoding
            if (avcodec_send_frame(codecContext, frame) < 0) {
                fprintf(stderr, "Error sending a frame for encoding\n");
                return -1;
            }

            // Receive the encoded packet
            while (avcodec_receive_packet(codecContext, &packet) == 0) {
                // Write the packet to the output file
                if (av_write_frame(formatContext, &packet) != 0) {
                    fprintf(stderr, "Error writing video frame\n");
                    return -1;
                }
                av_packet_unref(&packet);
            }
        }

        // Write the trailer
        if (av_write_trailer(formatContext) != 0) {
            fprintf(stderr, "Error writing trailer\n");
            return -1;
        }

        // Clean up resources
        av_frame_free(&frame);
        avcodec_free_context(&codecContext);
        avformat_free_context(formatContext);

        return 0;
    }

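    The libx264 messages above usually appear when the encoder context never receives libx264's own defaults: here the context is allocated with avcodec_alloc_context3(nullptr) rather than with the encoder that is found afterwards, and no preset is configured. Below is a minimal sketch of the usual setup order, not a drop-in fix for the full program; the "medium" preset and the helper name are only illustrative assumptions.

    // Sketch: allocate the codec context from the encoder itself and choose a
    // preset before avcodec_open2(), so libx264 starts from its own defaults.
    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>
    }

    static AVCodecContext* open_h264_encoder(int width, int height)
    {
        const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        if (!codec)
            return nullptr;

        // Passing the codec here (instead of nullptr) installs its default settings.
        AVCodecContext* ctx = avcodec_alloc_context3(codec);
        if (!ctx)
            return nullptr;

        ctx->width = width;
        ctx->height = height;
        ctx->pix_fmt = AV_PIX_FMT_YUV420P;
        ctx->time_base = { 1, 25 };

        // libx264 private option; "medium" is only an example preset.
        av_opt_set(ctx->priv_data, "preset", "medium", 0);

        if (avcodec_open2(ctx, codec, nullptr) < 0) {
            avcodec_free_context(&ctx);
            return nullptr;
        }
        return ctx;
    }

    Separately, the red-fill loop in the question treats the planar YUV frame as packed RGB; with AV_PIX_FMT_YUV420P the Y, U and V samples live in separate planes, frame->data[0], data[1] and data[2].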

  • Reading FFmpeg bytes from named pipes, extracted NAL units are bad/corrupted

    12 April 2023, by Mr Squidr

    I'm trying to read an .mp4 file with ffmpeg and read bytes from the named pipe, which I then want to package into an RTP stream and send those packets over WebRTC.


    What I learned is that an H264 video consists of many NAL units. So what I do in my code is read the bytes from the named pipe and try to extract NAL units. The problem is that the bytes I get seem to make no real sense, as NAL unit starts are sometimes only a few bytes apart.


    I tested on multiple different mp4 files and on multiple h264 files; all have the same issue. Starts of NAL units are found, but they aren't separated properly, or what I'm reading aren't NAL units at all. For example, the NAL unit start offsets from reading a sample .h264 file would be: 4, 32, 41, 717. This does not make a lot of sense if these are NAL units; some are too close together and some are far apart. I'm lost at what I'm doing wrong.


    The issue might also be in the ffmpeg command itself. I do think I need the "-c:v libx264 -bsf:v h264_mp4toannexb" arguments for the output to be in the correct format, but I'm not certain.


    I did try sending NAL units that seemed OK over WebRTC, but nothing was displayed on the receiving end (probably because of how H264 works by needing previous frames, but I'm not sure).


    I have been struggling with this issue for the past few days now, and no matter what I tried, the NAL units were never as they should be.


    Code to start the ffmpeg process from C#:


    var proc = new Process()
    {
        StartInfo =
        {
            FileName = FFMPEG_LIB_PATH,
            Arguments = "-y -re -i input.mp4 -an -c:v libx264 -bsf:v h264_mp4toannexb -f image2pipe ffmpeg_rec_stream",
            UseShellExecute = false,
            CreateNoWindow = true,
            RedirectStandardInput = false,
            RedirectStandardOutput = true,
        }
    };

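    If the goal is a raw Annex-B H.264 elementary stream in the pipe, the -f image2pipe muxer may itself be part of the problem, since it is intended for sequences of individual images; a raw H.264 elementary stream is normally requested with -f h264 instead, and when re-encoding with libx264 the h264_mp4toannexb bitstream filter should not be needed. A possible variant of the same command, where the \\.\pipe\ prefix for the Windows named pipe and the exact option set are assumptions rather than a tested fix:

    ffmpeg -y -re -i input.mp4 -an -c:v libx264 -f h264 \\.\pipe\ffmpeg_rec_stream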

    Code to connect to the named pipe:


    var mOutputPipe = new NamedPipeServerStream($"ffmpeg_rec_stream", PipeDirection.InOut, 1,
        PipeTransmissionMode.Byte, PipeOptions.Asynchronous, 102400, 102400);
    mOutputPipe.BeginWaitForConnection(OnOutputPipeConnected, null);


    Code for OnOutputPipeConnected:


    private void OnOutputPipeConnected(IAsyncResult ar)
    {
        try
        {
            mOutputPipe.EndWaitForConnection(ar);
            var buffer = new byte[65536];
            while (true)
            {
                int bytesRead = mOutputPipe.Read(buffer, 0, buffer.Length);
                if (bytesRead == 0)
                {
                    break;
                }

                var nalUnitStarts = FindAllNalUnitIndexes(buffer, bytesRead);
                for (int i = 0; i < nalUnitStarts.Count - 1; i++)
                {
                    int nalStartIndex = nalUnitStarts[i];
                    int nalEndIndex = nalUnitStarts[i + 1] - 1;
                    int nalLength = nalEndIndex - nalStartIndex + 1;
                    byte[] nalUnit = new byte[nalLength];
                    Buffer.BlockCopy(buffer, nalStartIndex, nalUnit, 0, nalLength);

                    // send nalUnit over to webrtc client
                    var rtpPacket = new RTPPacket(nalUnit);
                    RecordingSession?.RTCPeer.SendRtpRaw(SDPMediaTypesEnum.video, rtpPacket.Payload, rtpPacket.Header.Timestamp, rtpPacket.Header.MarkerBit, 100);
                }
            }
        }
        catch (Exception e)
        {
        }
    }


    Code for finding NAL units:


    private static List<int> FindAllNalUnitIndexes(byte[] buffer, int length)
    {
        var indexes = new List<int>();
        int i = 0;

        while (i < length - 4)
        {
            int nalStart = FindNextNalUnit(buffer, i, length);
            if (nalStart == -1)
            {
                break;
            }
            else
            {
                indexes.Add(nalStart);
                i = nalStart + 1;
            }
        }

        return indexes;
    }

    private static int FindNextNalUnit(byte[] buffer, int startIndex, int length)
    {
        for (int i = startIndex; i < length - 4; i++)
        {
            if (buffer[i] == 0 && buffer[i + 1] == 0 && (buffer[i + 2] == 1 || (buffer[i + 2] == 0 && buffer[i + 3] == 1)))
            {
                return i + (buffer[i + 2] == 1 ? 3 : 4);
            }
        }
        return -1;
    }


  • Android Camera Video frames decoding coming out distorted with horizontal lines

    13 November 2018, by Iain Stanford

    I’ve been porting over the following Test Android example to run in a simple Xamarin Android project.

    https://bigflake.com/mediacodec/ExtractMpegFramesTest_egl14.java.txt

    I’m running a video captured by the camera (on the same device) through this pipeline but the PNGs I’m getting out the other end are distorted, I assume due to the minefield of Android Camera color spaces.

    Here are the images I’m getting running a Camera Video through the pipeline...

    https://imgur.com/a/nrOVBPk

    It’s hard to tell, but it ’kinda’ looks like it is a single line of the actual image, stretched across. But I honestly wouldn’t want to bank on that being the issue, as it could be a red herring.

    However, when I run a ’normal’ video that I grabbed online through the same pipeline, it works completely fine.

    I used the first video found on here (the lego one) http://techslides.com/sample-webm-ogg-and-mp4-video-files-for-html5

    And I get frames like this...

    https://imgur.com/a/yV2vMMd

    Checking out some of the ffmpeg probe data, both this video and my camera video have the same pixel format (pix_fmt=yuv420p), but there are differences in color_range and the related color metadata.

    The video that works has,

    color_range=tv
    color_space=bt709
    color_transfer=bt709
    color_primaries=bt709

    And the camera video just has...

    color_range=unknown
    color_space=unknown
    color_transfer=unknown
    color_primaries=unknown
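
    For reference, the color fields quoted above can be queried directly with ffprobe; a typical invocation (the file name is a placeholder) looks like this:

    ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt,color_range,color_space,color_transfer,color_primaries -of default=noprint_wrappers=1 camera_video.mp4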

    The media format of the camera video appears to be in SemiPlanar YUV, the codec output gets updated to that at least. I get an OutputBuffersChanged message which sets the output buffer of the MediaCodec to the following,

    {
       mime=video/raw,
       crop-top=0,
       crop-right=639,
       slice-height=480,
       color-format=21,
       height=480,
       width=640,
       what=1869968451,
       crop-bottom=479,
       crop-left=0,
       stride=640
    }

    I can also point the codec output to a TextureView as opposed to an OpenGL surface, and just grab the Bitmap that way (obviously slower), and these frames look fine. So maybe it’s the OpenGL display of the raw codec output? Does Android TextureView do its own decoding?

    Note - The reason I’m looking into all this is that I need to run some form of image processing on a raw camera feed at as close to 30fps as possible. Obviously, this is not possible on some devices, but recording a video at 30fps and then processing the video after the fact is a possible workaround I’m investigating. I’d rather process the image in OpenGL for the improved speed than take each frame as a Bitmap from the TextureView output.

    In researching this I’ve seen someone else with pretty much the exact same issue here: "How to properly save frames from mp4 as png files using ExtractMpegFrames.java?", although he didn’t seem to have much luck finding out what might be going wrong.

    EDIT - FFmpeg probe outputs for both videos...

    Video that works - https://justpaste.it/484ec
    Video that fails - https://justpaste.it/55in0