Advanced search

Media (91)

Other articles (72)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the following two images for a comparison.
    All you need to do is activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to improve, for example select[multiple] for multiple-selection lists (...)

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators configure those menus in detail.
    Menus created when the site is initialised
    By default, three menus are created automatically when the site is initialised: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with templates based on Zpip; (...)

On other sites (10640)

  • FFmpeg : Combining Video with Transparent Image Results in 0-Byte Output

    14 July 2023, by hello world

    I'm trying to combine a video with a transparent image using FFmpeg in my C# application. However, the resulting output video is consistently 0 bytes in size. I have reviewed the code and made several attempts to troubleshoot the problem, but I haven't been able to find a solution.

    Here is the relevant code snippet I am using:

    private async void button6_Click(object sender, EventArgs e)
{
    string ffmpegPath = "ffmpeg.exe"; // Path to the FFmpeg executable
    string videoPath = "input.mp4"; // Path to the input video
    string imagePath = "image.png"; // Path to the transparent image
    string outputPath = "output.mp4"; // Path for the output video

    // Build the FFmpeg command with the image information
    string command = $"-i \"{videoPath}\" -loop 1 -t 5 -i \"{imagePath}\" -filter_complex \"[1:v]colorkey=0x000000:0.1:0.1[fg];[0:v][fg]overlay[outv]\" -map \"[outv]\" -map 0:a? -c:a copy \"{outputPath}\"";

    // Execute the FFmpeg command
    ProcessStartInfo startInfo = new ProcessStartInfo
    {
        FileName = ffmpegPath,
        Arguments = command,
        UseShellExecute = false,
        RedirectStandardOutput = true,
        RedirectStandardError = true,
        CreateNoWindow = true
    };

    using (Process process = new Process())
    {
        process.StartInfo = startInfo;
        process.Start();
        process.WaitForExit();
    }
}


    I have confirmed that the input video and the transparent image are valid and located at the correct paths. The code runs without any exceptions, but the output video is always 0 bytes in size.

    I have also tried different approaches, such as adjusting the alpha-channel blending and preprocessing the image so that the transparent areas are fully opaque, but none of these attempts resolved the issue.

    Is there anything I might be missing, or any additional steps I should take, to ensure that the video is generated correctly with the transparent image properly overlaid?

    Any suggestions or insights would be greatly appreciated. Thank you!
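    One thing worth checking (my note, not from the original post): the snippet redirects both standard output and standard error but never reads them. ffmpeg writes its progress log to stderr; once the OS pipe buffer fills, ffmpeg blocks before finishing its output and WaitForExit never returns, which can leave a 0-byte output file. The same pipe-deadlock mechanism, and the fix of draining the pipe while waiting, sketched in Python:

```python
import subprocess
import sys

# Child process that writes far more to stderr than an OS pipe buffer
# holds (ffmpeg's progress log on stderr behaves the same way when it
# is redirected and never read).
child = "import sys\nsys.stderr.write('x' * 1_000_000)"

p = subprocess.Popen([sys.executable, "-c", child], stderr=subprocess.PIPE)

# communicate() drains the pipe while waiting for the child to exit;
# calling p.wait() here without reading stderr would deadlock as soon
# as the pipe buffer fills.
_, err = p.communicate()
print(len(err))
```

    In the C# snippet the equivalent fix is to read Process.StandardError (e.g. via BeginErrorReadLine or StandardError.ReadToEnd()) before calling WaitForExit, or simply not to redirect streams you do not consume.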
  • FFmpeg - Error submitting a packet to the muxer [closed]

    26 November 2023, by undercash

    I'm having an issue with my self-compiled ffmpeg since 2020. Here is my ffmpeg config:

built with gcc 11 (Ubuntu 11.4.0-1ubuntu1 22.04)
configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libfdk_aac --enable-libtheora --enable-libvorbis --enable-libmp3lame --enable-libx264 --enable-libx265 --enable-libvpx --enable-librtmp --enable-libass --enable-libfreetype --enable-libdav1d --extra-libs='-lpthread -lm' --enable-openssl --enable-cuda-nvcc --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64 --disable-static --enable-shared

I know this is a build with NVIDIA support, but that is not relevant to the problem; it also happens on real servers without a GPU, on Ubuntu 20 or 22.

I use ffmpeg to stream to a remote RTMP server (-f flv rtmp://xx). Depending on the file, it will consistently fail at the same point in the file with the warning below. If I just transcode a video locally, there is no issue at all. I have tested streaming to YouTube and other big sites, so I can rule out my own RTMP servers being misconfigured.

[vost#0:0/h264 @ 0x55de16e8ebc0] Error submitting a packet to the muxer: End of fileop=1 speed=   1x
[flv @ 0x55de16e78980] Failed to update header with correct duration.
[flv @ 0x55de16e78980] Failed to update header with correct filesize.
[out#0/flv @ 0x55de16e83640] Error writing trailer: End of file


    [vost#0:0/h264 @ 0x55ca300a0200] Error submitting a packet to the muxer: End of file
    [out#0/flv @ 0x55ca3019f6c0] Muxer returned EOF
    [out#0/flv @ 0x55ca3019f6c0] Terminating muxer thread
    [out#0/flv @ 0x55ca3019f6c0] sq: send 1 ts 1676.94
    [out#0/flv @ 0x55ca3019f6c0] sq: receive 1 ts 1676.93 queue head -1 ts N/A
    [NULL @ 0x55ca30076180] ct_type:0 pic_struct:3
    Last message repeated 2 times
    [out#0/flv @ 0x55ca3019f6c0] sq: send 0 ts 1676.72
    [out#0/flv @ 0x55ca3019f6c0] sq: receive 0 ts 1676.72 queue head -1 ts N/A
    No more output streams to write to, finishing.
    [vist#0:0/h264 @ 0x55ca301a3b40] Decoder thread received EOF packet
    [vist#0:0/h264 @ 0x55ca301a3b40] Decoder returned EOF, finishing
    [vist#0:0/h264 @ 0x55ca301a3b40] Terminating decoder thread
    [aist#0:1/ac3 @ 0x55ca301a3680] Decoder thread received EOF packet
    [aist#0:1/ac3 @ 0x55ca301a3680] Decoder returned EOF, finishing
    [aist#0:1/ac3 @ 0x55ca301a3680] Terminating decoder thread


As a workaround, and since I have not seen any threads on the internet about this issue, I have been using static builds from https://johnvansickle.com/ffmpeg/ since 2020. They work fine (once you enable DNS resolution with them), but since I want to use an NVIDIA card I need my own ffmpeg built with NVIDIA support.

Thanks.

  • FFMPEG and Python : Stream a frame into video

    17 August 2023, by Vasilis Lemonidis

    Old approach

    I have created a small class for the job. After streaming the third frame I get an error from FFmpeg:

    pipe:0: Invalid data found when processing input

    and then a broken pipe.

    I have a feeling my ffmpeg input arguments are incorrect; I have little experience with the tool. Here is the code:

import subprocess
import os
import sys
import cv2
import numpy as np
import shutil
class VideoUpdater:
    def __init__(self, video_path: str, framerate: int):
        assert video_path.endswith(".flv")
        self._ps = None

        self.video_path = video_path
        self.framerate = framerate
        self._video = None
        self.curr_frame = None
        if os.path.isfile(self.video_path):
            shutil.copyfile(self.video_path, self.video_path + ".old")
            cap = cv2.VideoCapture(self.video_path + ".old")
            while cap.isOpened():
                ret, self.curr_frame = cap.read()
                if not ret:
                    break
                if len(self.curr_frame.shape) == 2:
                    self.curr_frame = cv2.cvtColor(self.curr_frame, cv2.COLOR_GRAY2RGB)
                self.ps.stdin.write(self.curr_frame.tobytes())

    @property
    def ps(self) -> subprocess.Popen:
        if self._ps is None:
            framesize = self.curr_frame.shape[0] * self.curr_frame.shape[1] * 3 * 8
            self._ps = subprocess.Popen(
                f"ffmpeg  -i pipe:0 -vcodec mpeg4 -s qcif -frame_size {framesize} -y {self.video_path}",
                shell=True,
                stdin=subprocess.PIPE,
                stdout=sys.stdout,
            )
        return self._ps

    def update(self, frame: np.ndarray):
        if len(frame.shape) == 2:
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2RGB)
        self.curr_frame = frame
        self.ps.stdin.write(frame.tobytes())


    and here is a script I use to test it:

    import os
    import numpy as np
    import cv2

    size = (300, 300, 3)
    img_array = [np.random.randint(255, size=size, dtype=np.uint8) for c in range(50)]

    tmp_path = "tmp.flv"
    tmp_path = str(tmp_path)
    out = VideoUpdater(tmp_path, 1)

    for i in range(len(img_array)):
        out.update(img_array[i])


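    For comparison (my sketch, not from the original post): the usual way to pipe frames into ffmpeg is to declare the stdin format explicitly with the rawvideo demuxer (-f rawvideo -pix_fmt ... -s WxH), so ffmpeg does not have to probe the incoming bytes as a container; that probing is what fails with "pipe:0: Invalid data found when processing input". A minimal sketch, assuming BGR24 frames; the output name tmp_out.avi and the mpeg4 encoder are arbitrary choices:

```python
import shutil
import subprocess

def stream_raw_frames(frames, width, height, out_path, fps=1):
    """Pipe raw BGR24 frames (bytes objects of length width*height*3) into ffmpeg."""
    cmd = [
        "ffmpeg", "-y",
        # Describe the raw bytes arriving on stdin so no probing is needed.
        "-f", "rawvideo", "-pix_fmt", "bgr24",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "pipe:0",
        "-c:v", "mpeg4", out_path,
    ]
    ps = subprocess.Popen(cmd, stdin=subprocess.PIPE, stderr=subprocess.DEVNULL)
    for frame in frames:
        ps.stdin.write(frame)
    ps.stdin.close()  # EOF tells ffmpeg the stream is finished
    return ps.wait()

# 10 solid gray 300x300 frames, 3 bytes per pixel
frames = [bytes([128]) * (300 * 300 * 3) for _ in range(10)]
if shutil.which("ffmpeg"):
    stream_raw_frames(frames, 300, 300, "tmp_out.avi")
```

    With cv2 frames, `frame.tobytes()` produces exactly this byte layout, since OpenCV stores images as BGR.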
    Update closer to what I want

    Having further studied how ffmpeg works internally, I went for an approach without pipes, where a one-frame video is created and appended to the .ts file on every update:

import cv2
import logging
import numpy as np
from tempfile import NamedTemporaryFile
import subprocess
import shutil
import os

LOGGER = logging.getLogger(__name__)


class VideoUpdater:
    def __init__(self, video_path: str, framerate: int):
        if not video_path.endswith(".mp4"):
            LOGGER.warning(
                f"File type {os.path.splitext(video_path)[1]} not supported for streaming, switching to mp4"
            )
            video_path = os.path.splitext(video_path)[0] + ".mp4"

        self._ps = None
        self.env = {}
        self.ffmpeg = "ffmpeg "
        self.video_path = video_path
        self.ts_path = video_path.replace(".mp4", ".ts")
        self.tfile = None
        self.framerate = framerate
        self._video = None
        self.last_frame = None
        self.curr_frame = None

    def update(self, frame: np.ndarray):
        if len(frame.shape) == 2:
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
        else:
            frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        self.writeFrame(frame)

    def writeFrame(self, frame: np.ndarray):
        tImLFrame = NamedTemporaryFile(suffix=".png")
        tVidLFrame = NamedTemporaryFile(suffix=".ts")

        cv2.imwrite(tImLFrame.name, frame)
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-loop 1 -r {self.framerate} -i {tImLFrame.name} -t {self.framerate} -vcodec libx264 -pix_fmt yuv420p -y {tVidLFrame.name}",
            env=self.env,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        ps.communicate()
        if os.path.isfile(self.ts_path):
            # this does not work to watch, as timestamps are not updated
            ps = subprocess.Popen(
                self.ffmpeg
                + rf'-i "concat:{self.ts_path}|{tVidLFrame.name}" -c copy -y {self.ts_path.replace(".ts", ".bak.ts")}',
                env=self.env,
                shell=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )
            ps.communicate()
            shutil.move(self.ts_path.replace(".ts", ".bak.ts"), self.ts_path)

        else:
            shutil.copyfile(tVidLFrame.name, self.ts_path)
        # fixing timestamps; we don't have to wait for this operation
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-i {self.ts_path} -fflags +genpts -c copy -y {self.video_path}",
            env=self.env,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        tImLFrame.close()
        tVidLFrame.close()



    As you may notice, a timestamp correction needs to be performed at the end. Reading the final mp4 file, however, I saw that it consistently has 3 fewer frames than the ts file: the first 3 frames are missing. Does anyone have an idea why this is happening?
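    To narrow down where the frames disappear (my suggestion, assuming ffprobe is available on PATH), the decoded frame count of each intermediate file can be compared with ffprobe: if the concatenated .ts already lacks the 3 frames, the concat step drops them; if only the final mp4 does, the remux with -fflags +genpts is at fault. A small helper:

```python
import shutil
import subprocess

def count_frames(path):
    """Decode the first video stream of `path` and return its frame count."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-count_frames",
         "-select_streams", "v:0",
         "-show_entries", "stream=nb_read_frames",
         "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    )
    return int(result.stdout.strip())

if shutil.which("ffprobe"):
    # e.g. compare count_frames("tmp.ts") with count_frames("tmp.mp4")
    pass
```

    -count_frames makes ffprobe actually decode the stream, so nb_read_frames reflects playable frames rather than container metadata.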