
Other articles (20)

  • Customising categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of the category type, the fields offered by default are: Text
    This form can be edited under:
    Administration > Configuration des masques de formulaire.
    For a document of the media type, the fields not displayed by default are: Short description (Descriptif rapide)
    It is also in this configuration area that you can indicate the (...)

  • Managing the farm

    2 March 2010

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to regulate the needs of the different channels.
    To begin with, it uses the "Gestion de mutualisation" plugin

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFMpeg: the main encoder, which transcodes almost every type of video and audio file into formats playable on the Internet (see this tutorial for its installation); Oggz-tools: inspection tools for ogg files; Mediainfo: retrieves information from most video and audio formats;
    Complementary, optional binaries: flvtool2: (...)

On other sites (3862)

  • ffmpeg/libav - how to write video files with valid pts

    9 May 2017, by Kai Rohmer

    I’m currently trying to write out real-time rendered video into an h264-encoded file. After reading a lot of (mostly) old samples and the few class references they call a documentation, I managed to write my video file and I’m also able to read it. Unfortunately, I need some metadata for each frame, and I don’t have a constant frame rate, so my intention was to start by using the presentation timestamps as the per-frame time while recording. But after everything I tried, I get no pts when reading the file (pts stays -9223372036854775808). Instead of a lot of code, here are the basic steps I’m doing. I’m probably using the wrong container, or I’m missing a flag and you will notice it right away.

    // open a AVFormatContext
    avformat_alloc_output_context2(&m_FormatContext, nullptr, "avi", m_FileName.c_str());

    // open a stream
    m_VideoStream = avformat_new_stream(m_FormatContext, avcodec_find_encoder(AV_CODEC_ID_H264));

    // setup the codec context (including bitrate, frame size, ...)
    m_CodecContext = m_VideoStream->codec;
    m_CodecContext->coder_type = FF_CODER_TYPE_VLC;
    m_CodecContext->time_base = AVRational{1, 120}; // I expect 20-60 Hz
    m_CodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
    m_CodecContext->color_range = AVCOL_RANGE_JPEG;
    ...
    av_opt_set(m_CodecContext->priv_data, "preset", "ultrafast", 0);
    av_opt_set(m_CodecContext->priv_data, "tune", "zerolatency,fastdecode", 0);


    // set the same time_base to the stream
    m_VideoStream->time_base = m_CodecContext->time_base;

    // open the codec
    avcodec_open2(m_CodecContext, m_CodecContext->codec, nullptr);

    // open file and write header
    avio_open(&m_FormatContext->pb, m_FileName.c_str(), AVIO_FLAG_WRITE);
    avformat_write_header(m_FormatContext, nullptr);

    // then in a loop:
    // render the frame, convert RGBA to YUV, set the frame's pts (timestamp is the application time in seconds, as a double)
    frameToEncode.pts = int64_t(timestamp / av_q2d(m_VideoStream->time_base));
    av_init_packet(m_EncodedPacket);
    avcodec_encode_video2(m_CodecContext, m_EncodedPacket, &frameToEncode, &got_output);

    // check packet infos
    //m_EncodedPacket->pts equals frameToEncode.pts
    m_EncodedPacket->dts = AV_NOPTS_VALUE; // also tried incrementing numbers, or zero
    m_EncodedPacket->stream_index = m_VideoStream->index;
    m_EncodedPacket->duration = 0;
    m_EncodedPacket->pos = -1;
    m_EncodedPacket->flags = 0;
    m_EncodedPacket->flags |= AV_PKT_FLAG_KEY; // read that somewhere

    // write the packet to stream
    av_interleaved_write_frame(m_FormatContext, m_EncodedPacket);


    // after the loop
    // I encode delayed frames and write the trailer
    av_write_trailer(m_FormatContext);

    That’s pretty much it, and I don’t see what is missing. Since I have some metadata per frame, I tried to add side data to each packet, but this data also disappeared after reading from the file. If I decode the packets directly (instead of writing them to a file), the data is there.

    I’m quite sure the problem is with the encoding. I managed to decode the Big Buck Bunny movie, and in that case I got valid pts values.

    Thanks a lot for your help!
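    One point worth checking, sketched below under the assumption that the member names match those in the question: avformat_write_header() is allowed to rewrite the stream time base, and AVI in particular is a constant-frame-rate container without real per-frame presentation timestamps (MP4 or Matroska keep them), so the encoder’s packet timestamps should be rescaled into the stream time base rather than cleared before muxing.

    // Hedged sketch, not from the original post: keep the timestamps the encoder
    // produced and rescale them from the codec time base into the (possibly
    // rewritten) stream time base before writing the packet.
    avcodec_encode_video2(m_CodecContext, m_EncodedPacket, &frameToEncode, &got_output);
    if (got_output)
    {
        av_packet_rescale_ts(m_EncodedPacket, m_CodecContext->time_base, m_VideoStream->time_base);
        m_EncodedPacket->stream_index = m_VideoStream->index;
        // do not overwrite dts or flags here; the encoder has already filled them in
        av_interleaved_write_frame(m_FormatContext, m_EncodedPacket);
    }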

  • How to use ffmpeg to create a video from images with timestamps

    16 June 2023, by 肉蛋充肌

    I am attempting to record video from a camera (a Basler pylon). Each frame has a timestamp provided by the camera. I want to write every frame into a video file one by one and make the interval between two consecutive frames (in other words, the fps) match the interval between their timestamps. I found example code that uses ffmpeg to write frames at a specified fps, but I am unsure how to modify it so that it uses each frame’s timestamp instead of a fixed fps. As I don’t know anything about ffmpeg, could someone show me how to modify the code to achieve this? Note that since I write every image into the video as soon as I obtain it, a solution that first collects all the images and then merges them into a video is not feasible. (One possible approach is sketched after the code below.)

import os
import subprocess as sp


class FFMPEG_VideoWriter:
    def __init__(
            self,
            filename,  # Use avi format if possible
            size,  # (height, width)
            fps,  # Frame rate
            codec="libx264",  # Codec
            audiofile=None,  # Audio file
            preset="medium",  # Compression rate; slower is better
            bitrate=None,  # Bitrate (set only when codec supports bitrate)
            pixfmt="rgba",
            logfile=None,
            threads=None,
            ffmpeg_params=None,
    ):
        if logfile is None:
            logfile = sp.PIPE

        self.filename = filename
        self.codec = codec
        self.ext = self.filename.split(".")[-1]

        # order is important
        cmd = [
            "ffmpeg",
            "-y",
            "-loglevel",
            "error" if logfile == sp.PIPE else "info",
            "-f",
            "rawvideo",
            "-vcodec",
            "rawvideo",
            "-s",
            "%dx%d" % (size[1], size[0]),
            "-pix_fmt",
            pixfmt,
            "-r",
            "%.02f" % fps,
            "-i",
            "-",
            "-an",
        ]
        cmd.extend(
            [
                "-vcodec",
                codec,
                "-preset",
                preset,
            ]
        )
        if ffmpeg_params is not None:
            cmd.extend(ffmpeg_params)
        if bitrate is not None:
            cmd.extend(["-b", bitrate])
        if threads is not None:
            cmd.extend(["-threads", str(threads)])

        if (codec == "libx264") and (size[0] % 2 == 0) and (size[1] % 2 == 0):
            cmd.extend(["-pix_fmt", "yuv420p"])
        cmd.extend([filename])

        popen_params = {"stdout": sp.DEVNULL, "stderr": logfile, "stdin": sp.PIPE}

        if os.name == "nt":
            popen_params["creationflags"] = 0x08000000  # CREATE_NO_WINDOW

        self.proc = sp.Popen(cmd, **popen_params)

    def write_frame(self, img_array):
        """Writes one frame in the file."""
        try:
            self.proc.stdin.write(img_array.tobytes())
        except IOError as err:
            raise IOError(str(err))

    def close(self):
        if self.proc:
            self.proc.stdin.close()
            if self.proc.stderr is not None:
                self.proc.stderr.close()
            self.proc.wait()
        self.proc = None

    # Support the Context Manager protocol, to ensure that resources are cleaned up.

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()

# Use Example
with FFMPEG_VideoWriter(video_path, (height, width), fps=120, pixfmt="rgb24") as writer:
    while self.isRecording or not self.frame_queue.empty():
        if not self.frame_queue.empty():
            writer.write_frame(self.frame_queue.get())
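    One way to honour the camera timestamps, sketched below as an assumption rather than a tested recipe: instead of piping raw frames into an ffmpeg process at a fixed -r, the PyAV bindings expose the encoder directly, so each frame’s pts can be set from the camera timestamp while frames are still written one by one. The file name, resolution and time base in the sketch are illustrative.

# Hedged sketch (not from the original post): write frames one at a time with
# PyAV and set each frame's pts from the camera timestamp, so the resulting
# file is variable-frame-rate instead of a fixed fps.
from fractions import Fraction

import av

container = av.open("out.mp4", mode="w")
stream = container.add_stream("libx264", rate=120)  # nominal rate; real timing comes from pts
stream.width, stream.height = 640, 480
stream.pix_fmt = "yuv420p"
stream.codec_context.time_base = Fraction(1, 90000)  # fine-grained clock for per-frame timestamps

def write_frame_with_timestamp(img_rgb, timestamp_s):
    """img_rgb: HxWx3 uint8 array; timestamp_s: camera timestamp in seconds."""
    frame = av.VideoFrame.from_ndarray(img_rgb, format="rgb24")
    frame.pts = int(round(timestamp_s * 90000))  # timestamp expressed in the codec time base
    for packet in stream.encode(frame):
        container.mux(packet)

# ... call write_frame_with_timestamp() for every captured frame, then flush:
for packet in stream.encode():
    container.mux(packet)
container.close()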


  • C# on Linux: FFmpeg (FFMediaToolkit) on Linux System.IO.DirectoryNotFoundException: Cannot found the default FFmpeg directory

    6 May 2021, by Jan Černý

    I have a C# project in Rider and FFMediaToolkit installed via NuGet. I made an instance of MediaBuilder. When I hit run, I get this error message:

    /home/john/Projects/Slimulator/bin/Debug/net5.0/Slimulator /home/john/Projects/Slimulator/test_mazes/small-maze-food2.png
Loading file /home/john/Projects/Slimulator/test_mazes/small-maze-food2.png
Unhandled exception. System.IO.DirectoryNotFoundException: Cannot found the default FFmpeg directory.
On Windows you have to set "FFmpegLoader.FFmpegPath" with full path to the directory containing FFmpeg shared build ".dll" files
For more informations please see https://github.com/radek-k/FFMediaToolkit#setup
   at FFMediaToolkit.FFmpegLoader.LoadFFmpeg()
   at FFMediaToolkit.Encoding.Internal.OutputContainer.Create(String extension)
   at FFMediaToolkit.Encoding.MediaBuilder..ctor(String path, Nullable`1 format)
   at FFMediaToolkit.Encoding.MediaBuilder.CreateContainer(String path)
   at Slimulator.AnimationBuffer..ctor(String videoPath, Int32 height, Int32 width, Int32 frameRate) in /home/john/Projects/Slimulator/AnimationBuffer.cs:line 11
   at Slimulator.Simulation..ctor(Space space, String seed, String outputVideoPath) in /home/john/Projects/Slimulator/Simulation.cs:line 12
   at Slimulator.Launcher.Main(String[] args) in /home/john/Projects/Slimulator/Launcher.cs:line 8

Process finished with exit code 134.

    When I go to https://github.com/radek-k/FFMediaToolkit#setup I find just this:

    Linux - Download FFmpeg using your package manager.

    You need to set FFmpegLoader.FFmpegPath with a full path to FFmpeg libraries.

    If you want to use 64-bit FFmpeg, you have to disable the Build -> Prefer 32-bit option in Visual Studio project properties.

    I have already installed the FFmpeg package via pacman and I am still getting this error.

    How can I fix this so I can use FFMediaToolkit without problems on Linux?

    Thank you for your help.

    EDIT 1: I use Arch Linux.
    EDIT 2: There is a related issue on GitHub: https://github.com/radek-k/FFMediaToolkit/issues/80
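    The README’s Linux note boils down to setting FFmpegLoader.FFmpegPath before the first call into the library. A minimal sketch follows; the library directory /usr/lib, the output path and the program structure are illustrative assumptions (on Arch the pacman ffmpeg package installs libavcodec.so and friends into /usr/lib, but verify the path on your system).

    // Hedged sketch, not from the question: point FFMediaToolkit at the FFmpeg
    // shared libraries before MediaBuilder (or any other encoding type) is used.
    using FFMediaToolkit;
    using FFMediaToolkit.Encoding;

    class Program
    {
        static void Main()
        {
            // Assumed location of libavcodec.so etc.; adjust for your distribution.
            FFmpegLoader.FFmpegPath = "/usr/lib";

            // Only after the path is set can the container be created as before.
            var builder = MediaBuilder.CreateContainer("/tmp/out.mp4");
            // ... configure video settings and create the file as in the original code
        }
    }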