Advanced search

Media (1)

Keyword: - Tags - / lev manovitch

Other articles (103)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    The user can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)

  • A selection of projects using MediaSPIP

    29 April 2011, by

    The examples below are representative of specific uses of MediaSPIP in certain projects.
    Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
    Ferme MediaSPIP @ Infini
    The Infini association develops reception activities, a public internet access point, training, and the management of innovative projects in the field of Information and Communication Technologies, as well as website hosting. It plays a unique role in this area (...)

On other sites (12015)

  • FFmpeg and Python: Stream a frame into video

    17 August 2023, by Vasilis Lemonidis

    Old approach

    I have created a small class for the job. After streaming the third frame, I get an error from FFmpeg:

    pipe:0: Invalid data found when processing input

    and then I get a broken pipe.

    I have a feeling my ffmpeg input arguments are incorrect; I have little experience with the tool. Here is the code:

import subprocess
import os
import sys
import shutil

import cv2
import numpy as np


class VideoUpdater:
    def __init__(self, video_path: str, framerate: int):
        assert video_path.endswith(".flv")
        self._ps = None

        self.video_path = video_path
        self.framerate = framerate
        self._video = None
        self.curr_frame = None
        if os.path.isfile(self.video_path):
            # replay the frames of the existing file into the new stream
            shutil.copyfile(self.video_path, self.video_path + ".old")
            cap = cv2.VideoCapture(self.video_path + ".old")
            while cap.isOpened():
                ret, self.curr_frame = cap.read()
                if not ret:
                    break
                if len(self.curr_frame.shape) == 2:
                    self.curr_frame = cv2.cvtColor(self.curr_frame, cv2.COLOR_GRAY2RGB)
                self.ps.stdin.write(self.curr_frame.tobytes())
            cap.release()

    @property
    def ps(self) -> subprocess.Popen:
        if self._ps is None:
            framesize = self.curr_frame.shape[0] * self.curr_frame.shape[1] * 3 * 8
            self._ps = subprocess.Popen(
                f"ffmpeg  -i pipe:0 -vcodec mpeg4 -s qcif -frame_size {framesize} -y {self.video_path}",
                shell=True,
                stdin=subprocess.PIPE,
                stdout=sys.stdout,
            )
        return self._ps

    def update(self, frame: np.ndarray):
        if len(frame.shape) == 2:
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2RGB)
        self.curr_frame = frame
        self.ps.stdin.write(frame.tobytes())


    and here is a script I use to test it:

import numpy as np

size = (300, 300, 3)
img_array = [np.random.randint(255, size=size, dtype=np.uint8) for c in range(50)]

tmp_path = "tmp.flv"
out = VideoUpdater(tmp_path, 1)

for i in range(len(img_array)):
    out.update(img_array[i])
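    For context, when raw frames are piped into ffmpeg the input format usually has to be declared explicitly, since raw bytes carry no header for ffmpeg to probe. Below is a minimal sketch of such a command line; the flag values are assumptions based on the 300x300 BGR frames above, not taken from the original post:

```python
def rawvideo_pipe_cmd(width: int, height: int, fps: int, out_path: str) -> list:
    """Build an ffmpeg command that reads raw BGR frames from stdin.

    The -f rawvideo, -pix_fmt and -s flags tell ffmpeg how to interpret
    the incoming bytes; without them, probing pipe:0 tends to fail with
    "Invalid data found when processing input".
    """
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo",
        "-pix_fmt", "bgr24",
        "-s", f"{width}x{height}",
        "-r", str(fps),
        "-i", "pipe:0",
        "-vcodec", "mpeg4",
        out_path,
    ]
```

    The list form also avoids shell=True; it can be passed directly to subprocess.Popen with stdin=subprocess.PIPE, after which frame.tobytes() can be written to the pipe as in the class above.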


    Update: closer to what I want

    Having studied further how ffmpeg works internally, I went for an approach without pipes, in which a one-frame video is created and appended to the .ts file on every update:

import os
import shutil
import subprocess
import logging
from tempfile import NamedTemporaryFile

import cv2
import numpy as np

LOGGER = logging.getLogger(__name__)


class VideoUpdater:
    def __init__(self, video_path: str, framerate: int):
        if not video_path.endswith(".mp4"):
            LOGGER.warning(
                f"File type {os.path.splitext(video_path)[1]} not supported for streaming, switching to mp4"
            )
            video_path = os.path.splitext(video_path)[0] + ".mp4"

        self._ps = None
        self.env = {}
        self.ffmpeg = "ffmpeg "
        self.video_path = video_path
        self.ts_path = video_path.replace(".mp4", ".ts")
        self.tfile = None
        self.framerate = framerate
        self._video = None
        self.last_frame = None
        self.curr_frame = None

    def update(self, frame: np.ndarray):
        if len(frame.shape) == 2:
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
        else:
            frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        self.writeFrame(frame)

    def writeFrame(self, frame: np.ndarray):
        tImLFrame = NamedTemporaryFile(suffix=".png")
        tVidLFrame = NamedTemporaryFile(suffix=".ts")

        # encode the single frame as a short .ts segment
        cv2.imwrite(tImLFrame.name, frame)
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-loop 1 -r {self.framerate} -i {tImLFrame.name} -t {self.framerate} -vcodec libx264 -pix_fmt yuv420p -y {tVidLFrame.name}",
            env=self.env,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        ps.communicate()
        if os.path.isfile(self.ts_path):
            # this does not work to watch, as timestamps are not updated
            ps = subprocess.Popen(
                self.ffmpeg
                + rf'-i "concat:{self.ts_path}|{tVidLFrame.name}" -c copy -y {self.ts_path.replace(".ts", ".bak.ts")}',
                env=self.env,
                shell=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )
            ps.communicate()
            shutil.move(self.ts_path.replace(".ts", ".bak.ts"), self.ts_path)
        else:
            shutil.copyfile(tVidLFrame.name, self.ts_path)
        # fixing timestamps; we don't have to wait for this operation
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-i {self.ts_path} -fflags +genpts -c copy -y {self.video_path}",
            env=self.env,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        tImLFrame.close()
        tVidLFrame.close()

    As you may notice, a timestamp correction needs to be performed. However, reading the final mp4 file, I saw that it consistently has 3 fewer frames than the ts file: the first 3 frames are missing. Does anyone have an idea why this is happening?
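    For what it's worth, the frame counts of the two files can be compared directly with ffprobe. A small helper sketch (assumes ffprobe is on PATH; the file names are illustrative):

```python
import subprocess

def count_frames_cmd(path: str) -> list:
    """Build an ffprobe command that decodes the first video stream and
    prints the number of frames actually read (nb_read_frames)."""
    return [
        "ffprobe", "-v", "error",
        "-count_frames",
        "-select_streams", "v:0",
        "-show_entries", "stream=nb_read_frames",
        "-of", "csv=p=0",
        path,
    ]

def count_frames(path: str) -> int:
    out = subprocess.run(
        count_frames_cmd(path), capture_output=True, text=True, check=True
    )
    return int(out.stdout.strip())

# e.g. count_frames("tmp.ts") - count_frames("tmp.mp4") would show the gap
```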

  • FFmpeg: Combining Video with Transparent Image Results in 0-Byte Output

    14 July 2023, by hello world

    I'm trying to combine a video with a transparent image using FFmpeg in my C# application. However, I'm encountering an issue where the resulting output video is consistently 0 bytes in size. I have reviewed the code and made several attempts to troubleshoot the problem, but I haven't been able to find a solution.

    Here is the relevant code snippet I am using:

    private async void button6_Click(object sender, EventArgs e)
{
    string ffmpegPath = "ffmpeg.exe"; // Path to the FFmpeg executable
    string videoPath = "input.mp4"; // Path to the input video
    string imagePath = "image.png"; // Path to the transparent image
    string outputPath = "output.mp4"; // Path for the output video

    // Build the FFmpeg command with the image information
    string command = $"-i \"{videoPath}\" -loop 1 -t 5 -i \"{imagePath}\" -filter_complex \"[1:v]colorkey=0x000000:0.1:0.1[fg];[0:v][fg]overlay[outv]\" -map \"[outv]\" -map 0:a? -c:a copy \"{outputPath}\"";

    // Execute the FFmpeg command
    ProcessStartInfo startInfo = new ProcessStartInfo
    {
        FileName = ffmpegPath,
        Arguments = command,
        UseShellExecute = false,
        RedirectStandardOutput = true,
        RedirectStandardError = true,
        CreateNoWindow = true
    };

    using (Process process = new Process())
    {
        process.StartInfo = startInfo;
        process.Start();
        process.WaitForExit();
    }
}


    I have confirmed that the input video and transparent image files are valid and located in the correct paths. The code runs without any exceptions, but the resulting output video is always 0 bytes in size.

    I have also tried different approaches, such as adjusting the alpha channel blending and preprocessing the image to ensure the transparent areas are fully opaque. However, none of these attempts have resolved the issue.

    Is there anything I might be missing, or any additional steps I should take, to ensure that the video is generated correctly with the transparent image properly overlaid?

    Any suggestions or insights would be greatly appreciated. Thank you!
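    As a side note, two frequent causes of empty output with a command like this are (a) redirecting both stdout and stderr without reading them, so the pipe buffers fill and the process stalls, and (b) the output file already existing, in which case ffmpeg blocks waiting on an overwrite prompt. A diagnostic sketch of the same command run from Python (the filter graph is copied from the snippet above; the paths and the added -y flag are assumptions, not the poster's code):

```python
import subprocess

def overlay_cmd(video: str, image: str, output: str) -> list:
    # Same filter graph as in the C# snippet: colorkey the image, then
    # overlay it on the video. -y is added so ffmpeg never blocks asking
    # whether to overwrite an existing output file.
    return [
        "ffmpeg", "-y",
        "-i", video,
        "-loop", "1", "-t", "5", "-i", image,
        "-filter_complex",
        "[1:v]colorkey=0x000000:0.1:0.1[fg];[0:v][fg]overlay[outv]",
        "-map", "[outv]", "-map", "0:a?",
        "-c:a", "copy",
        output,
    ]

def run_overlay(video: str, image: str, output: str) -> int:
    # capture_output drains stdout/stderr, so the process cannot stall
    # on a full pipe; stderr then shows why ffmpeg produced no output.
    result = subprocess.run(
        overlay_cmd(video, image, output), capture_output=True, text=True
    )
    if result.returncode != 0:
        print(result.stderr)
    return result.returncode
```

    The same idea applies in C#: read the redirected streams (or attach async handlers) before calling WaitForExit, and check the process exit code rather than only the output file size.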

  • lavfi/qsvvpp: check the parameters before initializing vpp session

    12 June 2023, by Haihao Xiang
    lavfi/qsvvpp: check the parameters before initializing vpp session
    According to the description of MFXVideoVPP_Query [1], we may call
    MFXVideoVPP_Query to check the validity of the parameters for a vpp
    session, then use the corrected values to initialize the session.

    [1] https://spec.oneapi.io/versions/latest/elements/oneVPL/source/API_ref/VPL_func_vid_vpp.html#mfxvideovpp-query

    Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>

    • [DH] libavfilter/qsvvpp.c