
Other articles (38)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes when upgrading MediaSPIP from version 0.1 to version 0.2. What's new?
    Software dependencies: use of the latest versions of FFmpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your own logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, through the news section.
    In spipeo, the default MediaSPIP theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of type news item, the default fields are: publication date (customise the publication date) (...)

On other sites (8711)

  • How do I toggle individual codec options in libavcodec (specifically h264_options)

    2 July 2020, by John Allard

    I'm trying to figure out how to enable the enable_er option defined in h264dec.c in libavcodec. It is declared as an AVOption in the AVCodec.priv_class option table. I can't figure out whether this is some sort of compile-time option, or an option that I can enable via av_dict_set in the options dictionary passed to avcodec_open2 when initializing the codec.

    I'm talking about these options in h264dec.c:

#define OFFSET(x) offsetof(H264Context, x)
#define VD AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_DECODING_PARAM
static const AVOption h264_options[] = {
    { "is_avc", "is avc", OFFSET(is_avc), AV_OPT_TYPE_BOOL, {.i64 = 0}, 0, 1, 0 },
    { "nal_length_size", "nal_length_size", OFFSET(nal_length_size), AV_OPT_TYPE_INT, {.i64 = 0}, 0, 4, 0 },
    { "enable_er", "Enable error resilience on damaged frames (unsafe)", OFFSET(enable_er), AV_OPT_TYPE_BOOL, { .i64 = -1 }, -1, 1, VD },
    { NULL },
};

static const AVClass h264_class = {
    .class_name = "H264 Decoder",
    .item_name  = av_default_item_name,
    .option     = h264_options,
    .version    = LIBAVUTIL_VERSION_INT,
};

AVCodec ff_h264_decoder = {
    .name                  = "h264",
    .long_name             = NULL_IF_CONFIG_SMALL("H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10"),
    .type                  = AVMEDIA_TYPE_VIDEO,
    .id                    = AV_CODEC_ID_H264,
    .priv_data_size        = sizeof(H264Context),
    .init                  = h264_decode_init,
    .close                 = h264_decode_end,
    .decode                = h264_decode_frame,
    .capabilities          = /*AV_CODEC_CAP_DRAW_HORIZ_BAND |*/ AV_CODEC_CAP_DR1 |
                             AV_CODEC_CAP_DELAY | AV_CODEC_CAP_SLICE_THREADS |
                             AV_CODEC_CAP_FRAME_THREADS,
    .caps_internal         = FF_CODEC_CAP_INIT_THREADSAFE | FF_CODEC_CAP_EXPORTS_CROPPING,
    .flush                 = flush_dpb,
    .init_thread_copy      = ONLY_IF_THREADS_ENABLED(decode_init_thread_copy),
    .update_thread_context = ONLY_IF_THREADS_ENABLED(ff_h264_update_thread_context),
    .profiles              = NULL_IF_CONFIG_SMALL(ff_h264_profiles),
    .priv_class            = &h264_class,
};

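    A reading of the table above, not a confirmed answer: enable_er is declared with the VD flags (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_DECODING_PARAM), which marks it as a runtime decoding option rather than a compile-time switch, so it can be set through the options dictionary passed to avcodec_open2(). The is_avc and nal_length_size entries, by contrast, carry a zero flags field and are not meant to be set by the caller. A minimal sketch under that reading (the helper name open_h264_with_er is made up for illustration):

#include <libavcodec/avcodec.h>
#include <libavutil/dict.h>

/* Open the h264 decoder with error resilience forced on.
 * Error handling is trimmed to a single check for brevity. */
static AVCodecContext *open_h264_with_er(void)
{
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext *ctx  = avcodec_alloc_context3(codec);
    AVDictionary *opts   = NULL;

    /* Private (priv_class) options are looked up in this dictionary by
     * avcodec_open2(); "enable_er" matches the AVOption table above. */
    av_dict_set(&opts, "enable_er", "1", 0);

    if (avcodec_open2(ctx, codec, &opts) < 0)
        avcodec_free_context(&ctx);

    /* Any entry still left in opts was not recognized by the codec. */
    av_dict_free(&opts);
    return ctx;
}

    The same value can also be set before opening with av_opt_set(ctx, "enable_er", "1", AV_OPT_SEARCH_CHILDREN) from libavutil/opt.h, which searches the codec context's private options.
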
  • Setting individual pixels of an RGB frame for ffmpeg encoding

    15 May 2013, by Camille Goudeseune

    I'm trying to change the test pattern of an ffmpeg streamer, Trouble syncing libavformat/ffmpeg with x264 and RTP, into familiar RGB format. My broader goal is to compute frames of a streamed video on the fly.

    So I replaced its AV_PIX_FMT_MONOWHITE with AV_PIX_FMT_RGB24, which is "packed RGB 8:8:8, 24bpp, RGBRGB..." according to http://libav.org/doxygen/master/pixfmt_8h.html .

    To stuff its pixel array called data, I've tried many variations on

    for (int y=0; y<HEIGHT; ++y) {
      for (int x=0; x<WIDTH; ++x) {
        uint8_t* rgb = data + ((y*WIDTH + x) * 3);
        const double i = x/double(WIDTH);
        const double j = y/double(HEIGHT);
        rgb[0] = 255*i;
        rgb[1] = 0;
        rgb[2] = 255*(1-i);
      }
    }

    At HEIGHT x WIDTH = 80x60, this version yields a screenshot of red-to-blue stripes, when I expect a single blue-to-red horizontal gradient.

    640x480 yields the same 4-column pattern, but with far more horizontal stripes.

    640x640, 160x160, etc., yield three columns, cyan-ish / magenta-ish / yellow-ish, with the same kind of horizontal stripiness.

    Vertical gradients behave even more weirdly.

    Appearance was unaffected by an AV_PIX_FMT_RGBA attempt (4 not 3 bytes per pixel, alpha=255). Also unaffected by a port from C to C++.

    The argument srcStrides passed to sws_scale() is a length-1 array, containing the single int HEIGHT.

    Access each Pixel of AVFrame asks the same question in less detail, so far unanswered.

    The streamer emits one warning, which I doubt affects appearance:

    [rtp @ 0x269c0a0] Encoder did not produce proper pts, making some up.

    So. How do you set the RGB value of a pixel in a frame to be sent to sws_scale() (and then to x264_encoder_encode() and av_interleaved_write_frame())?
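
    For what it's worth, the column-wise striping described above is the classic symptom of a wrong stride: for packed RGB24, the srcStride entry passed to sws_scale() must be the number of bytes per row, WIDTH*3, not HEIGHT. A minimal sketch of the fill-and-convert step under that assumption (the function name and the destination arrays are illustrative):

#include <stdint.h>
#include <libswscale/swscale.h>

/* Fill a packed RGB24 buffer with a horizontal blue-to-red gradient, then
 * convert it with an already-created SwsContext, e.g. from
 * sws_getContext(WIDTH, HEIGHT, AV_PIX_FMT_RGB24,
 *                WIDTH, HEIGHT, AV_PIX_FMT_YUV420P, SWS_BICUBIC, 0, 0, 0). */
static void fill_and_scale(struct SwsContext *sws, uint8_t *data,
                           uint8_t *const dst[], const int dstStride[],
                           int WIDTH, int HEIGHT)
{
    for (int y = 0; y < HEIGHT; ++y) {
        for (int x = 0; x < WIDTH; ++x) {
            uint8_t *rgb = data + 3 * (y * WIDTH + x);
            rgb[0] = (uint8_t)(255.0 * x / WIDTH);               /* R ramps up   */
            rgb[1] = 0;                                          /* G stays zero */
            rgb[2] = (uint8_t)(255.0 * (WIDTH - 1 - x) / WIDTH); /* B ramps down */
        }
    }

    /* The stride is bytes per row of the packed source, not the height. */
    const uint8_t *const src[1] = { data };
    const int srcStride[1]      = { 3 * WIDTH };
    sws_scale(sws, src, srcStride, 0, HEIGHT, dst, dstStride);
}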

  • Pass individual frames as BGRA byte array and set the timestamps via pipe to FFmpeg

    30 July 2023, by Nicke Manarin

    I have a set of images (as BGRA byte[]) with their respective timestamps in milliseconds, and I want to pass them to FFmpeg to build an animation.

    I'm using FFmpeg v6 right now and in this example I'm expecting a GIF as output, but I'm going to export to multiple formats later.

    var arguments = "-vsync passthrough " +
        "-f rawvideo " +
        "-pix_fmt bgra " +
        "-video_size {width}x{height} " +
        "-i - " +
        "-loop 0 " +
        "-lavfi palettegen=stats_mode=diff[pal],[0:v][pal]paletteuse=new=1:dither=sierra2_4a:diff_mode=rectangle " +
        "-f gif " +
        "-y \"C:\\Users\\User\\Desktop\\test.gif\"";

_process = new Process
{
    StartInfo = new ProcessStartInfo
    {
        FileName = "./ffmpeg.exe",
        Arguments = arguments.Replace("{width}", width.ToString()).Replace("{height}", height.ToString()),
        RedirectStandardInput = true,
        RedirectStandardOutput = true,
        UseShellExecute = false,
        CreateNoWindow = true
    }
};

_process.Start();

    Then, in my render loop, I try to send the frames and their timestamps one by one.

    public void EncodeFrame(IntPtr bufferAddress, int bufferStride, int width, int height, int index, long timestamp, int delay)
{
    var frameSize = height * bufferStride;
    var frameBytes = new byte[frameSize];
    System.Runtime.InteropServices.Marshal.Copy(bufferAddress, frameBytes, 0, frameSize);

    _process.StandardInput.BaseStream.Write(frameBytes, 0, frameSize);
    _process.StandardInput.BaseStream.Write(_delimiter, 0, _delimiter.Length);
    _process.StandardInput.BaseStream.Write(BitConverter.GetBytes(timestamp), 0, sizeof(long));
}

    The issue is that I'm getting an IOException ("The pipe has been ended"), so presumably I'm not sending the frames correctly (omitting the delimiter and timestamp doesn't help).

    Is this even possible?
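
    A guess at the failure mode, rather than a confirmed diagnosis: with -f rawvideo, ffmpeg expects the pipe to carry nothing but back-to-back frames of exactly width * height * 4 bytes each, so the extra delimiter and timestamp bytes shift every subsequent frame, ffmpeg aborts on the malformed input, and the writing side then sees "The pipe has been ended". Rawvideo has no side channel for timestamps: timing has to come from -framerate on the input (or be imposed afterwards with a setpts filter), and truly variable per-frame delays are usually handled by writing the frames to files and timing them with the concat demuxer's duration entries instead. A minimal C sketch of a conforming producer, with illustrative frame size, count, and rate:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Write COUNT raw BGRA frames to stdout, suitable for piping into e.g.:
 *   ./producer | ffmpeg -f rawvideo -pix_fmt bgra -video_size 320x240 \
 *                       -framerate 30 -i - -f gif -y test.gif
 * Each frame is exactly W*H*4 bytes: no delimiters, no timestamps. */
enum { W = 320, H = 240, COUNT = 90 };

int main(void)
{
    uint8_t *frame = malloc((size_t)W * H * 4);
    if (!frame)
        return 1;

    for (int n = 0; n < COUNT; ++n) {
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                uint8_t *p = frame + 4 * ((size_t)y * W + x);
                p[0] = (uint8_t)(x + n); /* B */
                p[1] = (uint8_t)(y + n); /* G */
                p[2] = 0;                /* R */
                p[3] = 255;              /* A */
            }
        }
        /* One frame = one fixed-size write; the rawvideo demuxer slices the
         * byte stream back into frames purely by size. */
        if (fwrite(frame, 1, (size_t)W * H * 4, stdout) != (size_t)W * H * 4)
            break;
    }

    free(frame);
    return 0;
}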