Advanced search

Media (1)

Word: - Tags -/berlin

Other articles (67)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • What is an editorial

    21 June 2013, by

    Write your point of view in an article. It will be filed in a section set aside for this purpose.
    An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. A single editorial is featured on the home page. To read the previous ones, see the dedicated section.
    You can customize the form used to create an editorial.
    Editorial creation form: in the case of a document of the editorial type, the (...)

  • Websites built with MediaSPIP

    2 May 2011, by

    This page presents some of the websites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

On other sites (14638)

  • Display ffmpeg's AVFrame in DirectX (in C# with SlimDX)

    14 March 2012, by Sinan

    I have a C++ DLL which uses ffmpeg to read a video. This DLL is used by a C# program.
    I want to display the AVFrame in DirectX with SlimDX.
    When the ffmpeg thread gets a picture, it converts the AVFrame to an RGB24 bmp and passes it to the C# code via a callback.
    It works, but because of the bmp format I lose the alpha channel of the image.

    I am trying to display the AVFrame (keyframe) in DirectX (9c) as a picture and overlay the other frames using opacity.
    Here is my source code for when a new picture (videoByte) is received:

    Texture texture = Texture.FromMemory(device, videoByte, width, height, 0,
        Usage.None, Format.A8B8G8R8, Pool.Default, Filter.None, Filter.None, 0);
    device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, System.Drawing.Color.DarkOrange, 1.0f, 0);
    device.BeginScene();
    if (isfKey)
    {
        sprite.Begin(SpriteFlags.None);        // keyframe: drawn opaque
    }
    else
    {
        sprite.Begin(SpriteFlags.AlphaBlend);  // later frames: blended by alpha
    }
    sprite.Draw(texture, System.Drawing.Color.White);
    sprite.End();
    device.EndScene();
    device.Present();

    Does anyone know how to display an AVFrame using DirectX?
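    The keyframe-plus-opacity idea above amounts to a standard "source-over" alpha blend, which is what SpriteFlags.AlphaBlend performs on the GPU. A minimal CPU sketch of that blend in Python with NumPy (assuming BGRA frames as HxWx4 uint8 arrays; `overlay_with_alpha` is a hypothetical helper, not part of SlimDX or ffmpeg):

```python
import numpy as np

def overlay_with_alpha(base, frame):
    """Source-over blend: composite `frame` onto `base` using the frame's
    alpha channel. Both are HxWx4 uint8 BGRA arrays."""
    alpha = frame[..., 3:4].astype(np.float32) / 255.0
    out = base.copy()
    # blend only the color channels; the base keeps its own alpha
    out[..., :3] = (frame[..., :3] * alpha
                    + base[..., :3] * (1.0 - alpha)).astype(np.uint8)
    return out

# keyframe drawn opaque, a later frame blended over it
key = np.zeros((2, 2, 4), dtype=np.uint8)
key[..., 3] = 255                      # opaque black keyframe
frame = np.zeros((2, 2, 4), dtype=np.uint8)
frame[..., :3] = 255                   # white overlay...
frame[0, 0, 3] = 255                   # ...but only one pixel is opaque
composite = overlay_with_alpha(key, frame)
```

    This only makes the blending rule explicit; in the DirectX path the same math happens per pixel in hardware. Note that the RGB24 bmp round trip discards the alpha plane before the frame ever reaches C#, so the conversion in the DLL would have to output a four-channel format (e.g. BGRA) for AlphaBlend to have anything to work with.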

  • How to get audio peaks with ffmpeg in json or txt?

    7 January 2021, by Sagar Kardani

    I want to generate audio waveform data in a Flutter app with the flutter-ffmpeg wrapper, which is already available and working fine.

    The post mentioned below describes how to draw the waveform data in a Flutter app with canvas. It uses data precalculated server-side. It also mentions that ffmpeg can be used to generate the data.

    Precalculated Data
    Drawing waveforms

    How can I get audio peaks with ffmpeg in JSON format, as an array, or in some other format that can later be used to draw a waveform?

    PS - I am completely new to ffmpeg and audio processing.
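    ffmpeg has no built-in "peaks to JSON" output, so one common approach is to have ffmpeg decode the audio to raw PCM and compute per-window peaks yourself. A stdlib-only Python sketch of that post-processing step (the ffmpeg command in the docstring is illustrative; sample rate, channel count and window size are assumptions to tune):

```python
import json
import struct

def pcm_peaks(pcm_bytes, samples_per_window=800):
    """Return one normalized peak (0..1) per window of signed 16-bit
    little-endian mono PCM, e.g. produced by:
        ffmpeg -v error -i input.mp3 -f s16le -ac 1 -ar 8000 pipe:1
    """
    n = len(pcm_bytes) // 2
    samples = struct.unpack('<%dh' % n, pcm_bytes[:2 * n])
    peaks = []
    for i in range(0, n, samples_per_window):
        window = samples[i:i + samples_per_window]
        peaks.append(max(abs(s) for s in window) / 32768.0)
    return peaks

# tiny synthetic example: one loud window, one quieter window
data = struct.pack('<4h', 0, 16384, 0, -8192)
print(json.dumps(pcm_peaks(data, samples_per_window=2)))  # [0.5, 0.25]
```

    ffmpeg's astats filter can also print per-frame peak levels as metadata, which some people parse instead; the raw-PCM route just keeps the output format entirely under your control.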

  • Workflow for creating animated hand-drawn videos - encoding difficulties

    8 December 2017, by Mircode

    I want to create YouTube videos, kind of in the style of a white-board animation.

    Tl;dr question: How can I encode into a lossless RGB video format with ffmpeg, including an alpha channel?

    In more detail:
    My current workflow looks like this:

    I draw the slides in Inkscape and group all the paths that are to be drawn in one go (one scene, so to say), then store the slide as an svg. Then I run a custom python script over it, which animates the slides as described here: https://codepen.io/MyXoToD/post/howto-self-drawing-svg-animation. Each frame is exported as svg, converted to png and fed to ffmpeg to make a video from it.

    For every scene (a couple of paths being drawn; there are several scenes per slide) I create a separate video file, and I also store a png file containing the last frame of that video.

    I then use kdenlive to join it all together: a video containing the drawing of the first scene, then a png which holds the last image of the video while I talk about the drawing, then the next animated drawing, then the next still image where I continue talking, and so on. I use these intermediate images because freezing the last frame is tedious in kdenlive and I have around 600 scenes. Here I do the editing, adjust the duration of the still images and render the final video.

    The background of the video is a photo of a blackboard which never changes, the strokes are paths with a filter to make it look like chalk.

    So far so good, everything almost works.

    My problem is: whenever there is a transition between an animation and a still image, it is visible in the final result. I have tried several approaches to make this work, but none of them is without flaws.

    My first approach was to encode the animations as mp4, like this:

    p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps),
               '-i', '-', '-vcodec', 'libx264', '-crf', '21', '-bf', '2',
               '-flags', '+cgop', '-pix_fmt', 'yuv420p', '-movflags', 'faststart',
               '-r', str(fps), videofile], stdin=PIPE)

    which is recommended for YouTube. But then there is a slight brightness difference between the video and the still image.
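    A brightness mismatch like this is commonly traced to the RGB-to-yuv420p conversion (limited range, or a BT.601 vs BT.709 color-matrix mismatch in the player). One pragmatic workaround, sketched here under those assumptions, is to extract the freeze frame from the already-encoded video instead of saving the original png, so the still has gone through exactly the same conversion. The helper and file names below are illustrative, in the same Popen style as the snippets in this post:

```python
def last_frame_cmd(videofile, pngfile):
    """Build an ffmpeg command that seeks to roughly the last second of
    an encoded video and writes its final decoded frame as a png."""
    return ['ffmpeg', '-y', '-sseof', '-1', '-i', videofile,
            '-update', '1', '-frames:v', '1', pngfile]

# usage (untested sketch):
#   from subprocess import Popen
#   Popen(last_frame_cmd('scene042.mp4', 'scene042_last.png')).wait()
```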

    Then I tried mov with the png codec:

    p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps),
               '-i', '-', '-vcodec', 'png', '-r', str(fps), videofile], stdin=PIPE)

    I think this encodes every frame as a png inside the video. It creates much bigger files, since every frame is encoded separately, but that's ok, since I can use transparency for the background and store just the chalk strokes. However, sometimes I want to wipe parts of the chalk off a slide, which I do by drawing background over it. That would work if the overlaid, animated background chunks stored in the video looked exactly like the underlying png in the background. But they don't: they are slightly more blurry, and I believe the color changes a tiny bit as well. I don't understand this, since I thought the video just stores a sequence of pngs... Is there some quality setting I'm missing here?

    Then I read about ProRes 4444 and tried that:

    p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps),
               '-i', '-', '-c:v', 'prores_ks', '-pix_fmt', 'yuva444p10le',
               '-alpha_bits', '8', '-profile:v', '4444', '-r', str(fps),
               videofile], stdin=PIPE)

    and this actually seems to work. However, the animation files become larger than the bunch of png files they contain, probably because this format stores 10 bits per channel. This is not that horrible, since only the intermediate videos grow big; the final result is still ok.

    But ideally there would be a lossless codec which stores in RGB color space with 8 bits per channel plus 8 bits for alpha, and encodes only the difference to the previous frame (because all that changes from frame to frame is a tiny bit of chalk drawing). Is there such a thing? Alternatively, I'd also be ok without transparency, but then I have to store the background in every scene. Still, if only the changes from frame to frame within one scene are stored, that should be manageable.
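    One candidate worth testing for that wish list is FFV1: it is lossless, and recent ffmpeg builds accept 8-bit RGB with alpha for it (e.g. -pix_fmt bgra; verify with `ffmpeg -h encoder=ffv1` on your build). It is intra-only rather than delta-coded, but its context modeling usually compresses near-static frames far better than per-frame png. A sketch mirroring the argument lists in this post; all flags are assumptions to check against your ffmpeg version:

```python
def ffv1_args(fps, videofile):
    """Argument list for piping png frames to ffmpeg and encoding them
    losslessly with FFV1, keeping 8-bit RGB plus alpha."""
    return ['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png',
            '-r', str(fps), '-i', '-',
            '-c:v', 'ffv1', '-level', '3', '-pix_fmt', 'bgra',
            '-r', str(fps), videofile]

# usage (untested sketch):
#   from subprocess import Popen, PIPE
#   p = Popen(ffv1_args(fps, videofile), stdin=PIPE)
```

    QuickTime Animation (qtrle) with -pix_fmt argb is another option sometimes suggested: it does encode only the differences between consecutive frames, at the cost of a much cruder compression scheme, so it is worth comparing both on real footage.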

    Or should I make some fundamental changes to my workflow altogether?

    Sorry that this is rather lengthy, I appreciate any help.

    Cheers!