Advanced search

Media (0)

Word: - Tags -/signalement

No media matching your criteria is available on this site.

Other articles (111)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can edit their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Submitting bugs and patches

    10 April 2011

    Unfortunately, no software is ever perfect...
    If you think you have found a bug, report it in our ticket system, taking care to include the relevant information: the type and exact version of the browser with which you encounter the anomaly; as precise an explanation as possible of the problem encountered; if possible, the steps to reproduce the problem; a link to the site / page in question;
    If you think you have fixed the bug yourself (...)

On other sites (11560)

  • How to overlay a sequence of frames on a video using ffmpeg-python?

    19 November 2022, by Yogesh Yadav

    I tried the code below, but it only shows the background video.

    


    background_video = ffmpeg.input("input.mp4")
    overlay_video = ffmpeg.input(f'{frames_folder}*.png', pattern_type='glob', framerate=25)
    subprocess = ffmpeg.overlay(
        background_video,
        overlay_video,
    ).filter("setsar", sar=1)
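
    As written, the snippet only builds a filter graph; nothing is encoded until an output is attached and run. A minimal sketch of a complete pipeline, assuming the frames live in ./frames/ (as in the log further down) and an illustrative output name of overlaid.mp4:

    import ffmpeg

    frames_folder = "./frames/"  # assumed location, matching the log below

    background_video = ffmpeg.input("input.mp4")
    overlay_video = ffmpeg.input(f"{frames_folder}*.png", pattern_type="glob", framerate=25)

    (
        ffmpeg
        .overlay(background_video, overlay_video)  # background first, frames composited on top
        .filter("setsar", sar=1)
        .output("overlaid.mp4", pix_fmt="yuv420p")  # "overlaid.mp4" is just an example name
        .run()
    )

    If the overlay still does not show with a complete pipeline, the PNGs themselves may be fully transparent; see the note after the drawing code below.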


    


    I also tried assembling the sequence of frames into a .webm/.mov video, but transparency is lost; the video uses black as the background.

    


    P.S. The frame size is the same as the background video size, so no scaling is needed.

    


    Edit

    


    I tried @Rotem's suggestions:

    


    


    Try using a single PNG image first

    


    


    overlay_video =  ffmpeg.input('test-frame.png')


    


    It's not working for the frames generated by OpenCV, but it works for any other PNG image. This is weird: when I view the frames folder manually, it shows blank images (Link to my frames folder).
But if I convert these frames into a video (see below), it correctly shows what I drew on each frame.

    


    output_options = {
        'crf': 20,
        'preset': 'slower',
        'movflags': 'faststart',
        'pix_fmt': 'yuv420p'
    }
    ffmpeg.input(f'{frames_folder}*.png', pattern_type='glob', framerate=25, reinit_filter=0).output(
        'movie.avi',
        **output_options
    ).global_args('-report').run()


    


    


    Try creating a video from all the PNG images without overlay

    


    


    It works as expected; the only issue is transparency. Is there a way to create a video with a transparent background? I tried .webm/.mov/.avi but had no luck.
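
    On the transparency point: alpha survives only when both the codec and the pixel format carry it, and the yuv420p/mpeg4 combination used for the .avi above discards it. A minimal sketch, assuming VP9 in a WebM container with yuva420p (the output name is illustrative):

    import ffmpeg

    frames_folder = "./frames/"  # assumed location, matching the log below

    (
        ffmpeg
        .input(f"{frames_folder}*.png", pattern_type="glob", framerate=25)
        .output("movie_alpha.webm", vcodec="libvpx-vp9", pix_fmt="yuva420p")  # keeps the alpha plane
        .run()
    )

    A .mov written with qtrle or prores_ks (pix_fmt yuva444p10le) is another common choice when an alpha channel has to survive.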

    


    


    Add .global_args('-report') and check the log file

    


    


    Report written to "ffmpeg-20221119-110731.log"
Log level: 48
ffmpeg version 5.1 Copyright (c) 2000-2022 the FFmpeg developers
  built with Apple clang version 13.1.6 (clang-1316.0.21.2.5)
  configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/5.1 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-neon
  libavutil      57. 28.100 / 57. 28.100
  libavcodec     59. 37.100 / 59. 37.100
  libavformat    59. 27.100 / 59. 27.100
  libavdevice    59.  7.100 / 59.  7.100
  libavfilter     8. 44.100 /  8. 44.100
  libswscale      6.  7.100 /  6.  7.100
  libswresample   4.  7.100 /  4.  7.100
  libpostproc    56.  6.100 / 56.  6.100
Input #0, image2, from './frames/*.png':
  Duration: 00:00:05.00, start: 0.000000, bitrate: N/A
  Stream #0:0: Video: png, rgba(pc), 1920x1080, 25 fps, 25 tbr, 25 tbn
Codec AVOption crf (Select the quality for constant quality mode) specified for output file #0 (movie.avi) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
Codec AVOption preset (Configuration preset) specified for output file #0 (movie.avi) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
Stream mapping:
  Stream #0:0 -> #0:0 (png (native) -> mpeg4 (native))
Press [q] to stop, [?] for help
Output #0, avi, to 'movie.avi':
  Metadata:
    ISFT            : Lavf59.27.100
  Stream #0:0: Video: mpeg4 (FMP4 / 0x34504D46), yuv420p(tv, progressive), 1920x1080, q=2-31, 200 kb/s, 25 fps, 25 tbn
    Metadata:
      encoder         : Lavc59.37.100 mpeg4
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: N/A
frame=  125 fps= 85 q=31.0 Lsize=     491kB time=00:00:05.00 bitrate= 804.3kbits/s speed=3.39x    
video:482kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.772174%
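
    The two "Codec AVOption ... has not been used" lines in this report appear because the .avi output fell back to the mpeg4 encoder, which has no crf or preset options; those belong to encoders such as libx264. A sketch of the same command written for libx264/MP4, assuming that is what was intended:

    import ffmpeg

    frames_folder = "./frames/"  # assumed location, matching the log above

    (
        ffmpeg
        .input(f"{frames_folder}*.png", pattern_type="glob", framerate=25, reinit_filter=0)
        .output("movie.mp4", vcodec="libx264", crf=20, preset="slower",
                movflags="faststart", pix_fmt="yuv420p")
        .global_args("-report")
        .run()
    )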


    


    To draw the frames I used the code below.

    


    for i in range(num_frames):
        transparent_img = np.zeros((height, width, 4), dtype=np.uint8)
        cv2.line(transparent_img, (x1, y1), (x2, y2), (255, 255, 255), thickness=1, lineType=cv2.LINE_AA)
        self.frames.append(transparent_img)

    ## To save each frame of the video in the given folder
    for i, f in enumerate(frames):
        cv2.imwrite("{}/{:0{n}d}.png".format(path_to_frames, i, n=num_digits), f)
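
    One plausible explanation for the blank-looking frames: cv2.line is given a three-component colour on a four-channel (BGRA) image, so the missing alpha component defaults to 0 and the drawn pixels stay fully transparent. Once the frames are encoded to yuv420p the alpha plane is dropped and the line becomes visible again, which matches the behaviour described above. A minimal sketch with an explicit opaque alpha (size, coordinates and file name are illustrative):

    import cv2
    import numpy as np

    height, width = 1080, 1920  # assumed to match the background video

    # Four colour components: B, G, R and alpha; alpha=255 makes the line opaque.
    transparent_img = np.zeros((height, width, 4), dtype=np.uint8)
    cv2.line(transparent_img, (100, 100), (800, 600), (255, 255, 255, 255),
             thickness=1, lineType=cv2.LINE_AA)

    cv2.imwrite("0000.png", transparent_img)  # PNG keeps the alpha channel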





    


  • Use C# to convert apng to webm with ffmpeg from pipe input and output

    30 November 2022, by martin wang

    I was using ffmpeg to convert LINE stickers from APNG files to WebM files.
The result is odd: some of them converted successfully and some failed.
I'm not sure what happened with the failed conversions.

    


    Here is my C# code to convert LINE stickers to WebM;
I use CliWrap to run the ffmpeg command line.

    


    async Task Main()
{

    var downloadUrl = @"http://dl.stickershop.LINE.naver.jp/products/0/0/1/23303/iphone/stickerpack@2x.zip";
    var arg = @$"-i pipe:.png -vf scale=512:512:force_original_aspect_ratio=decrease:flags=lanczos -pix_fmt yuva420p -c:v libvpx-vp9 -cpu-used 5 -minrate 50k -b:v 350k -maxrate 450k -to 00:00:02.900 -an -y -f webm pipe:1";

    var errorCount = 0;
    try
    {
        using (var hc = new HttpClient())
        {
            var imgsZip = await hc.GetStreamAsync(downloadUrl);

            using (ZipArchive zipFile = new ZipArchive(imgsZip))
            {
                var files = zipFile.Entries.Where(entry => Regex.IsMatch(entry.FullName, @"animation@2x\/\d+\@2x.png"));
                foreach (var entry in files)
                {
                    try
                    {
                        using (var fileStream = File.Create(Path.Combine("D:", "Projects", "ffmpeg", "Temp", $"{Path.GetFileNameWithoutExtension(entry.Name)}.webm")))
                        using (var pngFileStream = File.Create(Path.Combine("D:", "Projects", "ffmpeg", "Temp", $"{entry.Name}")))
                        using (var entryStream = entry.Open())
                        using (MemoryStream ms = new MemoryStream())
                        {
                            entry.Open().CopyTo(pngFileStream);

                            var result = await Cli.Wrap("ffmpeg")
                                         .WithArguments(arg)
                                         .WithStandardInputPipe(PipeSource.FromStream(entryStream))
                                         .WithStandardOutputPipe(PipeTarget.ToStream(ms))
                                         .WithStandardErrorPipe(PipeTarget.ToFile(Path.Combine("D:", "Projects", "ffmpeg", "Temp", $"{Path.GetFileNameWithoutExtension(entry.Name)}Info.txt")))
                                         .WithValidation(CommandResultValidation.ZeroExitCode)
                                         .ExecuteAsync();
                            ms.Seek(0, SeekOrigin.Begin);
                            ms.WriteTo(fileStream);
                        }
                    }
                    catch (Exception ex)
                    {
                        entry.FullName.Dump();
                        ex.Dump();
                        errorCount++;
                    }
                }
            }

        }
    }
    catch (Exception ex)
    {
        ex.Dump();
    }
    $"Error Count:{errorCount.Dump()}".Dump();

}


    


    This is the error information from ffmpeg for a file that failed to convert:

    


    (screenshot omitted)

    


    And here is the ffmpeg information for a file that converted successfully:
(screenshot omitted)

    


    Strangely, when I convert these failed files manually from the command line, they convert successfully.
(screenshot omitted)

    


    The odd thing is that the images all come from the same set of APNG files,
so I just can't understand why some files fail to convert from my C# code
but convert successfully when I use the command line manually.

    



    


    I have rewritten the same example from C# to Python...
and here is the Python code:

    


    from io import BytesIO
import os
import re
import subprocess
import zipfile

import requests


downloadUrl = "http://dl.stickershop.LINE.naver.jp/products/0/0/1/23303/iphone/stickerpack@2x.zip"
args = [
    'ffmpeg',
    '-i', 'pipe:',
    '-vf', 'scale=512:512:force_original_aspect_ratio=decrease:flags=lanczos',
    '-pix_fmt', 'yuva420p',
    '-c:v', 'libvpx-vp9',
    '-cpu-used', '5',
    '-minrate', '50k',
    '-b:v', '350k',
    '-maxrate', '450k', '-to', '00:00:02.900', '-an', '-y', '-f', 'webm', 'pipe:1'
]


imgsZip = requests.get(downloadUrl)
with zipfile.ZipFile(BytesIO(imgsZip.content)) as archive:
    files = [file for file in archive.infolist() if re.match(
        "animation@2x\/\d+\@2x.png", file.filename)]
    for entry in files:
        fileName = entry.filename.replace(
            "animation@2x/", "").replace(".png", "")
        rootPath = 'D:\\' + os.path.join("Projects", "ffmpeg", "Temp")
        # original file
        apngFile = os.path.join(rootPath, fileName+'.png')
        # output file
        webmFile = os.path.join(rootPath, fileName+'.webm')
        # output info
        infoFile = os.path.join(rootPath, fileName+'info.txt')

        with archive.open(entry) as file, open(apngFile, 'wb') as output_apng, open(webmFile, 'wb') as output_webm, open(infoFile, 'wb') as output_info:
            p = subprocess.Popen(args, stdin=subprocess.PIPE,
                                 stdout=subprocess.PIPE, stderr=output_info)
            outputBytes = p.communicate(input=file.read())[0]

            output_webm.write(outputBytes)
            file.seek(0)
            output_apng.write(file.read())



    


    You can try it; the result will be the same as with C#.
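
    One difference worth ruling out between the piped runs and the manual command line is input probing: ffmpeg cannot seek on a pipe, so it has to detect the input format from the first buffered bytes only, while it can seek freely in a named file. A hedged sketch (Python, mirroring the code above) that forces the APNG demuxer on the piped input so the result no longer depends on probing:

    import subprocess

    # Same arguments as the question, with '-f apng' added before '-i pipe:' to
    # force the APNG demuxer (assumption: probing on the pipe is the difference).
    args = [
        'ffmpeg',
        '-f', 'apng',
        '-i', 'pipe:',
        '-vf', 'scale=512:512:force_original_aspect_ratio=decrease:flags=lanczos',
        '-pix_fmt', 'yuva420p',
        '-c:v', 'libvpx-vp9',
        '-cpu-used', '5',
        '-minrate', '50k', '-b:v', '350k', '-maxrate', '450k',
        '-to', '00:00:02.900', '-an', '-y', '-f', 'webm', 'pipe:1',
    ]

    def convert(apng_bytes: bytes) -> bytes:
        """Pipe one APNG in, return the encoded WebM bytes."""
        result = subprocess.run(args, input=apng_bytes, stdout=subprocess.PIPE,
                                stderr=subprocess.DEVNULL, check=True)
        return result.stdout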

    


  • File encoded as FFV1 with libavcodec is unplayable

    19 November 2016, by Ali Alidoust

    I'm using the following code to encode a series of frames into an MKV or AVI file with FFV1 encoding:

    HRESULT Session::createContext(LPCSTR filepath, UINT width, UINT height, UINT fps_num, UINT fps_den) {
       LOG(filepath);

       this->codec = avcodec_find_encoder(AV_CODEC_ID_FFV1);
       RET_IF_NULL(this->codec, "Could not create codec", E_FAIL);

       this->oformat = av_guess_format(NULL, filepath, NULL);
       RET_IF_NULL(this->oformat, "Could not create format", E_FAIL);
       this->oformat->video_codec = AV_CODEC_ID_FFV1;
       this->width = width;
       this->height = height;
       this->context = avcodec_alloc_context3(this->codec);
       RET_IF_NULL(this->context, "Could not allocate context for the codec", E_FAIL);

       this->context->codec = this->codec;
       this->context->codec_id = AV_CODEC_ID_FFV1;
       this->context->codec_type = AVMEDIA_TYPE_VIDEO;

       this->context->pix_fmt = AV_PIX_FMT_YUV420P;
       this->context->width = this->width;
       this->context->height = this->height;
       this->context->time_base.num = fps_den;
       this->context->time_base.den = fps_num;

       this->context->gop_size = 1;

       av_opt_set_int(this->context->priv_data, "coder", 0, 0);
       av_opt_set_int(this->context->priv_data, "context", 1, 0);
       av_opt_set_int(this->context->priv_data, "slicecrc", 1, 0);

       avformat_alloc_output_context2(&fmtContext, NULL, NULL, filepath);

       this->fmtContext->oformat = this->oformat;
       this->fmtContext->video_codec_id = AV_CODEC_ID_FFV1;

       this->stream = avformat_new_stream(this->fmtContext, this->codec);
       RET_IF_NULL(this->fmtContext, "Could not create new stream", E_FAIL);
       avcodec_parameters_from_context(this->stream->codecpar, this->context);
       this->stream->codecpar->level = 3;

       /*if (this->fmtContext->oformat->flags & AVFMT_GLOBALHEADER)
       {*/
           this->context->flags |= CODEC_FLAG_GLOBAL_HEADER;
       /*}*/

       RET_IF_FAILED_AV(avcodec_open2(this->context, this->codec, NULL), "Could not open codec", E_FAIL);
       RET_IF_FAILED_AV(avio_open(&this->fmtContext->pb, filepath, AVIO_FLAG_READ_WRITE), "Could not open output file", E_FAIL);
       RET_IF_NULL(this->fmtContext->pb, "Could not open output file", E_FAIL);
       RET_IF_FAILED_AV(avformat_write_header(this->fmtContext, NULL), "Could not write header", E_FAIL);

       frame = av_frame_alloc();
       RET_IF_NULL(frame, "Could not allocate frame", E_FAIL);
       frame->format = this->context->pix_fmt;
       frame->width = width;
       frame->height = height;
       // RET_IF_FAILED(av_image_alloc(frame->data, frame->linesize, context->width, context->height, context->pix_fmt, 32), "Could not allocate image", E_FAIL);
       return S_OK;
    }

    HRESULT Session::writeFrame(IMFSample * pSample) {
       IMFMediaBuffer *mediaBuffer = NULL;
       BYTE *pData = NULL;
       DWORD length;

       RET_IF_FAILED(pSample->ConvertToContiguousBuffer(&mediaBuffer), "Could not convert IMFSample to contagiuous buffer", E_FAIL);
       RET_IF_FAILED(mediaBuffer->GetCurrentLength(&length), "Could not get buffer length", E_FAIL);
       RET_IF_FAILED(mediaBuffer->Lock(&pData, NULL, NULL), "Could not lock the buffer", E_FAIL);
       RET_IF_FAILED(av_image_fill_arrays(frame->data, frame->linesize, pData, AV_PIX_FMT_YUV420P, this->width, this->height, 1), "Could not fill the frame with data from the buffer", E_FAIL);
       LOG_IF_FAILED(mediaBuffer->Unlock(), "Could not unlock the buffer");

       frame->pts = this->pts++;

       AVPacket pkt;

       av_init_packet(&pkt);
       pkt.data = NULL;
       pkt.size = 0;

       RET_IF_FAILED_AV(avcodec_send_frame(this->context, frame), "Could not send the frame to the encoder", E_FAIL);
       if (SUCCEEDED(avcodec_receive_packet(this->context, &pkt))) {
           RET_IF_FAILED_AV(av_interleaved_write_frame(this->fmtContext, &pkt), "Could not write the received packet.", E_FAIL);
       }

       av_packet_unref(&pkt);

       return S_OK;
    }

    HRESULT Session::endSession() {
       LOG("Ending session...");
       LOG("Closing files...");
       av_write_trailer(this->fmtContext);
       avio_close(this->fmtContext->pb);
       avcodec_close(this->context);
       av_free(this->context);
       //fclose(this->file);
       LOG("Done.");
       return S_OK;
    }

    The problem is that the generated file is not playable in either VLC or MPC-HC. However, MPC-HC reports the following info in the file properties:

    General
    Unique ID                      : 202978442142665779317960161865934977227 (0x98B439D9BE859109BD5EC00A62A238CB)
    Complete name                  : T:\Test.mkv
    Format                         : Matroska
    Format version                 : Version 4 / Version 2
    File size                      : 24.6 MiB
    Duration                       : 147ms
    Overall bit rate               : 1 401 Mbps
    Writing application            : Lavf57.57.100
    Writing library                : Lavf57.57.100

    Video
    ID                             : 1
    Format                         : FFV1
    Format version                 : Version 0
    Codec ID                       : V_MS/VFW/FOURCC / FFV1
    Duration                       : 147ms
    Width                          : 1 280 pixels
    Height                         : 720 pixels
    Display aspect ratio           : 16:9
    Frame rate mode                : Constant
    Frame rate                     : 1 000.000 fps
    Color space                    : YUV
    Chroma subsampling             : 4:2:0
    Bit depth                      : 8 bits
    Compression mode               : Lossless
    Default                        : Yes
    Forced                         : No
    DURATION                       : 00:00:00.147000000
    coder_type                     : Golomb Rice

    Something to note is that it reports 1000 fps, which is odd since I've set AVCodecContext::time_base in the code.

    UPDATE:

    I managed to set the correct fps by setting the time_base property of the stream:

    this->stream->time_base.den = fps_num;
    this->stream->time_base.num = fps_den;

    VLC plays the output file, but it shows the VLC logo instead of the video, as if there were no video stream in the file.