
Media (91)

Other articles (44)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language. Once that happens, the language is greyed out in the configuration and (...)

  • Apache-specific configuration

    4 February 2011

    Specific modules
    For the Apache configuration, it is advisable to enable certain modules that are not specific to MediaSPIP but that improve performance: mod_deflate and mod_headers, so that Apache compresses pages automatically (see this tutorial); mod_expires, to handle the expiry of hits correctly (see this tutorial).
    It is also advisable to add Apache support for the WebM MIME type, as explained in this tutorial; see the configuration sketch after this list.
    Creation of a (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
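
    As referenced in the Apache configuration item above, here is a minimal configuration sketch for the modules it mentions, assuming a Debian-style Apache 2 layout; the directives are standard Apache ones, but the content types and expiry durations are illustrative rather than taken from the article:

    # Enable the modules (Debian/Ubuntu): a2enmod deflate headers expires
    <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/html text/css application/javascript
    </IfModule>
    <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/png "access plus 1 month"
    </IfModule>
    # Serve WebM files with the correct MIME type
    AddType video/webm .webm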

On other sites (5732)

  • FFmpeg Autogen and Unity C# to generate video from screenshots (FFmpeg.Autogen)

    1 June 2022, by cameron gibbs

    I've taken the FFmpegHelper, VideoFrameConverter, and H264VideoStreamEncoder classes straight from the FFmpeg.AutoGen.Example, rolled my own FFmpegBinariesHelper class and Size struct, and mangled EncodeImagesToH264 from Program.cs to look like the code below. I capture a bunch of frames into textures and feed them into Encoder.EncodeImagesToH264. It produces a file I'm calling outputFileName.h264 just fine, with no errors. I've changed H264VideoStreamEncoder a little based on FFmpeg's own C++ examples, because they had a few things the C# example seemed to be missing, but that hasn't made any difference.

    


    The video is weird:

    1. It only plays in VLC. Is there another AVPixelFormat I should be using for the destinationPixelFormat so that anything can play it?
    2. VLC is unable to detect the video length or show the current time.
    3. It plays back weirdly, as if the first few seconds are all the same frame, and then it starts playing what appears to be some of the frames I'd expect.
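
    For context on points 1 and 2: the file written by the code below is a raw H.264 elementary stream with no container, so players get no duration or seeking information from it. A hedged remux sketch using the ffmpeg command-line tool, assuming a 30 fps capture (the -framerate value should match the fps passed to the encoder):

    ffmpeg -framerate 30 -i outputFileName.h264 -c copy outputFileName.mp4
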
    public static class Encoder
    {
        public static unsafe void EncodeImagesToH264(Texture2D[] images, int fps, string outputFileName)
        {
            FFmpegBinariesHelper.RegisterFFmpegBinaries();

            var fistFrameImage = images[0];
            outputFileName = Path.ChangeExtension(outputFileName, ".h264");
            var sourceSize = new Size(fistFrameImage.width, fistFrameImage.height);
            var sourcePixelFormat = AVPixelFormat.AV_PIX_FMT_RGB24;
            var destinationSize = sourceSize;
            var destinationPixelFormat = AVPixelFormat.AV_PIX_FMT_YUV420P;

            try
            {
                using (var vfc = new VideoFrameConverter(
                    sourceSize,
                    sourcePixelFormat,
                    destinationSize,
                    destinationPixelFormat))
                {
                    using var fs = File.Open(outputFileName, FileMode.Create);
                    using var vse = new H264VideoStreamEncoder(fs, fps, destinationSize);
                    var frameNumber = 0;
                    foreach (var frameFile in images)
                    {
                        var bitmapData = GetBitmapData(frameFile);

                        //var pBitmapData = (byte*)NativeArrayUnsafeUtility
                        //    .GetUnsafeBufferPointerWithoutChecks(bitmapData);

                        fixed (byte* pBitmapData = bitmapData)
                        {
                            var data = new byte_ptrArray8 { [0] = pBitmapData };
                            var linesize = new int_array8 { [0] = bitmapData.Length / sourceSize.Height };
                            var frame = new AVFrame
                            {
                                data = data,
                                linesize = linesize,
                                height = sourceSize.Height
                            };

                            var convertedFrame = vfc.Convert(frame);
                            convertedFrame.pts = frameNumber;

                            vse.Encode(convertedFrame);

                            Debug.Log($"frame: {frameNumber}");
                            frameNumber++;
                        }
                    }
                    byte[] endcode = { 0, 0, 1, 0xb7 };
                    fs.Write(endcode, 0, endcode.Length);
                }
                Debug.Log(outputFileName);
            }
            catch (Exception ex)
            {
                Debug.LogException(ex);
            }
        }

        private static byte[] GetBitmapData(Texture2D frameBitmap)
        {
            return frameBitmap.GetRawTextureData();
        }
    }

    public sealed unsafe class H264VideoStreamEncoder : IDisposable
    {
        private readonly Size _frameSize;
        private readonly int _linesizeU;
        private readonly int _linesizeV;
        private readonly int _linesizeY;
        private readonly AVCodec* _pCodec;
        private readonly AVCodecContext* _pCodecContext;
        private readonly Stream _stream;
        private readonly int _uSize;
        private readonly int _ySize;

        public H264VideoStreamEncoder(Stream stream, int fps, Size frameSize)
        {
            _stream = stream;
            _frameSize = frameSize;

            var codecId = AVCodecID.AV_CODEC_ID_H264;
            _pCodec = ffmpeg.avcodec_find_encoder(codecId);
            if (_pCodec == null)
                throw new InvalidOperationException("Codec not found.");

            _pCodecContext = ffmpeg.avcodec_alloc_context3(_pCodec);
            _pCodecContext->bit_rate = 400000;
            _pCodecContext->width = frameSize.Width;
            _pCodecContext->height = frameSize.Height;
            _pCodecContext->time_base = new AVRational { num = 1, den = fps };
            _pCodecContext->gop_size = 10;
            _pCodecContext->max_b_frames = 1;
            _pCodecContext->pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P;

            if (codecId == AVCodecID.AV_CODEC_ID_H264)
                ffmpeg.av_opt_set(_pCodecContext->priv_data, "preset", "veryslow", 0);

            ffmpeg.avcodec_open2(_pCodecContext, _pCodec, null).ThrowExceptionIfError();

            _linesizeY = frameSize.Width;
            _linesizeU = frameSize.Width / 2;
            _linesizeV = frameSize.Width / 2;

            _ySize = _linesizeY * frameSize.Height;
            _uSize = _linesizeU * frameSize.Height / 2;
        }

        public void Dispose()
        {
            ffmpeg.avcodec_close(_pCodecContext);
            ffmpeg.av_free(_pCodecContext);
        }

        public void Encode(AVFrame frame)
        {
            if (frame.format != (int)_pCodecContext->pix_fmt)
                throw new ArgumentException("Invalid pixel format.", nameof(frame));
            if (frame.width != _frameSize.Width)
                throw new ArgumentException("Invalid width.", nameof(frame));
            if (frame.height != _frameSize.Height)
                throw new ArgumentException("Invalid height.", nameof(frame));
            if (frame.linesize[0] < _linesizeY)
                throw new ArgumentException("Invalid Y linesize.", nameof(frame));
            if (frame.linesize[1] < _linesizeU)
                throw new ArgumentException("Invalid U linesize.", nameof(frame));
            if (frame.linesize[2] < _linesizeV)
                throw new ArgumentException("Invalid V linesize.", nameof(frame));
            if (frame.data[1] - frame.data[0] < _ySize)
                throw new ArgumentException("Invalid Y data size.", nameof(frame));
            if (frame.data[2] - frame.data[1] < _uSize)
                throw new ArgumentException("Invalid U data size.", nameof(frame));

            var pPacket = ffmpeg.av_packet_alloc();
            try
            {
                int error;
                do
                {
                    ffmpeg.avcodec_send_frame(_pCodecContext, &frame).ThrowExceptionIfError();
                    ffmpeg.av_packet_unref(pPacket);
                    error = ffmpeg.avcodec_receive_packet(_pCodecContext, pPacket);
                } while (error == ffmpeg.AVERROR(ffmpeg.EAGAIN));

                error.ThrowExceptionIfError();

                using var packetStream = new UnmanagedMemoryStream(pPacket->data, pPacket->size);
                packetStream.CopyTo(_stream);
            }
            finally
            {
                ffmpeg.av_packet_free(&pPacket);
            }
        }
    }
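
    One API detail worth noting next to the code above: with max_b_frames > 0, libavcodec buffers frames internally and only returns them after a final drain pass (a null frame sent to avcodec_send_frame), which the posted Encode method never performs; that may be related to the odd frames described in point 3. A hedged sketch of a flush method that could be added to H264VideoStreamEncoder, reusing its existing fields (this helper is not part of the original code):

        // Hypothetical drain helper: ask the encoder to emit its buffered packets.
        public void Flush()
        {
            var pPacket = ffmpeg.av_packet_alloc();
            try
            {
                // A null frame puts the encoder into draining mode.
                ffmpeg.avcodec_send_frame(_pCodecContext, null).ThrowExceptionIfError();
                while (true)
                {
                    ffmpeg.av_packet_unref(pPacket);
                    var error = ffmpeg.avcodec_receive_packet(_pCodecContext, pPacket);
                    if (error == ffmpeg.AVERROR_EOF)
                        break; // no more buffered packets
                    error.ThrowExceptionIfError();
                    using var packetStream = new UnmanagedMemoryStream(pPacket->data, pPacket->size);
                    packetStream.CopyTo(_stream);
                }
            }
            finally
            {
                ffmpeg.av_packet_free(&pPacket);
            }
        }

    Such a method would be called once after the frame loop, before writing the endcode bytes and disposing the encoder.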


    


  • TV box fails to open videos encoded with the x265 HEVC encoder

    3 June 2022, by Constadinos Chatzis

    I have a Zidoo X9S TV box, and it has support for the x265 HEVC codec. Every single video I throw at it, HDR 4K or standard x264, plays perfectly, except my own x265 encode: when I select it to play, it freezes and doesn't load. I believe it has something to do with my encoding settings. The encoding command I use with ffmpeg is this:

    


    -c:v libx265 -crf 10 -preset veryfast output.mkv


    


    Any ideas why this is happening?
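
    One hedged way to narrow this down: compare the stream parameters (profile, level, bit depth, pixel format) of the failing file with those of an HEVC file the box does play, for example with ffprobe:

    ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,profile,level,pix_fmt,width,height -of default=noprint_wrappers=1 output.mkv

    A mismatch there, e.g. a 10-bit pixel format or a level the box does not support, would point at the libx265 settings rather than the container.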

    


  • Cannot concatenate videos ffmpeg [on hold]

    1 May 2014, by Paul Prescod

    I have a bitmap that I would like to concatenate to the front of many videos as a sort of title screen or disclaimer screen.

    I try to turn it into a video with the same attributes as the rest of the video. So first I introspect the video:

    ffmpeg version 2.2.1 Copyright (c) 2000-2014 the FFmpeg developers
     built on Apr 11 2014 22:50:38 with Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/2.2.1 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-avresample --enable-vda --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-libxvid
     libavutil      52. 66.100 / 52. 66.100
     libavcodec     55. 52.102 / 55. 52.102
     libavformat    55. 33.100 / 55. 33.100
     libavdevice    55. 10.100 / 55. 10.100
     libavfilter     4.  2.100 /  4.  2.100
     libavresample   1.  2.  0 /  1.  2.  0
     libswscale      2.  5.102 /  2.  5.102
     libswresample   0. 18.100 /  0. 18.100
     libpostproc    52.  3.100 / 52.  3.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'EO1.mp4':
     Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    creation_time   : 1970-01-01 00:00:00
    encoder         : Lavf52.78.3
     Duration: 00:00:17.77, start: 0.000000, bitrate: 582 kb/s
    Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 584x328 [SAR 1:1 DAR 73:41], 512 kb/s, 23.98 fps, 23.98 tbr, 1199 tbn, 47.96 tbc (default)
    Metadata:
     creation_time   : 1970-01-01 00:00:00
     handler_name    : VideoHandler
    Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 64 kb/s (default)
    Metadata:
     creation_time   : 1970-01-01 00:00:00
     handler_name    : SoundHandler

    Then I try to create a similar file:

    /usr/local/Cellar/ffmpeg/2.2.1/bin/ffmpeg -y -loop 1 -i Disclaimer.png -c:v libx264 -r 23.98 -t 5 -pix_fmt yuv420p -profile:v main disclaimer.mp4

    It seems to work okay. The video plays as I would expect it to. The attributes turn out very similar. Here is a diff:

    < Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'disclaimer.mp4':
    ---
    > Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'EO1.mp4':
    18,20c18,21
    <     encoder         : Lavf55.33.100
    <   Duration: 00:00:05.01, start: 0.000000, bitrate: 21 kb/s
    <     Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 584x328 [SAR 1:1 DAR 73:41], 17 kb/s, 23.98 fps, 23.98 tbr, 19184 tbn, 47.96 tbc (default)
    ---
    >     creation_time   : 1970-01-01 00:00:00
    >     encoder         : Lavf52.78.3
    >   Duration: 00:00:17.77, start: 0.000000, bitrate: 582 kb/s
    >     Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 584x328 [SAR 1:1 DAR 73:41], 512 kb/s, 23.98 fps, 23.98 tbr, 1199 tbn, 47.96 tbc (default)
    21a23
    22a25,28
    >     Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 64 kb/s (default)
    >     Metadata:
    >       handler_name    : SoundHandler

    But when I try to concatenate, I get errors.

    > $ cat temporary.txt

       file disclaimer.mp4
       file EO1.mp4

    /usr/local/Cellar/ffmpeg/2.2.1/bin/ffmpeg -y -f concat -i temporary.txt -c copy output.mp4

    [concat @ 0x7fd880806600] Estimating duration from bitrate, this may be inaccurate
    Input #0, concat, from 'temporary.txt':
     Duration: 00:00:00.02, start: 0.000000, bitrate: 17 kb/s
    Stream #0:0: Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 584x328 [SAR 1:1 DAR 73:41], 17 kb/s, 23.98 fps, 23.98 tbr, 19184 tbn, 47.96 tbc
    Output #0, mp4, to 'output.mp4':
     Metadata:
    encoder         : Lavf55.33.100
    Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 584x328 [SAR 1:1 DAR 73:41], q=2-31, 17 kb/s, 23.98 fps, 19184 tbn, 19184 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    [mp4 @ 0x7fd880829a00] Non-monotonous DTS in output stream 0:0; previous: 93600, current: 5951; changing to 93601. This may result in incorrect timestamps in the output file.
    [concat @ 0x7fd880806600] Invalid stream index 1
    [mp4 @ 0x7fd880829a00] Non-monotonous DTS in output stream 0:0; previous: 93601, current: 6001; changing to 93602. This may result in incorrect timestamps in the output file.
    [concat @ 0x7fd880806600] Invalid stream index 1
    [mp4 @ 0x7fd880829a00] Non-monotonous DTS in output stream 0:0; previous: 93602, current: 6051; changing to 93603. This may result in incorrect timestamps in the output file.

    ...

    frame=  546 fps=0.0 q=-1.0 Lsize=    1127kB time=00:00:04.90 bitrate=1882.9kbits/s    
    video:1123kB audio:0kB subtitle:0 data:0 global headers:0kB muxing overhead 0.349865%

    The output looks like it only has my disclaimer file in it, not the rest of the video.

    I’m also confused about why it feels like it needs to "estimate" anything. It knows the input FPS and the input durations. I’m not sure if this is the problem or not. Maybe it’s just a bug.
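
    One hedged observation from the diff above: disclaimer.mp4 has no audio stream while EO1.mp4 does, which is consistent with the "Invalid stream index 1" messages from the concat demuxer, since stream-copy concatenation expects matching stream layouts. A sketch of the same disclaimer command with a matching silent stereo track added via the anullsrc lavfi source (the audio codec and bitrate here are illustrative, chosen to mirror EO1.mp4):

    /usr/local/Cellar/ffmpeg/2.2.1/bin/ffmpeg -y -loop 1 -i Disclaimer.png -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=48000 -c:v libx264 -r 23.98 -t 5 -pix_fmt yuv420p -profile:v main -c:a libfaac -b:a 64k disclaimer.mp4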