Advanced search

Media (91)

Other articles (63)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013 and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
    It is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as for the installation (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, all software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further changes (...)

On other sites (11939)

  • Python ffmpeg issue

    7 March 2024, by SDailey

    I have a Python script which takes short videos (somewhere just over a minute long) and does the following:

    1. Strip the original audio from the mp4 video file:
       ffmpeg -i 1.mp4 -vcodec copy -an nosound_1.mp4

    2. Get the length in seconds of an existing mp3 that will be inserted:
       ffmpeg.probe('1.mp3')['format']['duration']

    3. If the mp3 length is 57 seconds or above, trim the mp4 video to 58 seconds:
       ffmpeg -i nosound_1.mp4 -ss 00:00:00 -t 00:00:58 -c copy trimmed_1.mp4
       else trim the mp4 video to the mp3 length + 2:
       ffmpeg -i nosound_1.mp4 -ss 00:00:00 -t 00:00:length+2 -c copy trimmed_1.mp4

    4. Insert the mp3 into the newly trimmed video:
       input_video = ffmpeg.input('/path/to/trimmed_1.mp4')
       input_audio = ffmpeg.input('/path/to/1.mp3')
       ffmpeg.concat(input_video, input_audio, v=1, a=1).output('new_1.mp4').run()
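
    Taken together, the steps above amount to the minimal sketch below (assuming the ffmpeg CLI is on PATH and the ffmpeg-python package for probe(); file names reuse the examples from the steps):

# Minimal sketch of the four steps above (a reconstruction, not the original script).
# Assumes the ffmpeg CLI is on PATH and ffmpeg-python is installed for probe().
import subprocess
import ffmpeg

def process(video="1.mp4", audio="1.mp3"):
    # 1. strip the original audio track, copying the video stream as-is
    subprocess.run(["ffmpeg", "-y", "-i", video, "-vcodec", "copy", "-an",
                    "nosound_1.mp4"], check=True)

    # 2. length in seconds of the mp3 that will be inserted
    mp3_len = float(ffmpeg.probe(audio)["format"]["duration"])

    # 3. trim to 58 s, or to the mp3 length + 2 s when the mp3 is shorter than 57 s
    trim_to = 58 if mp3_len >= 57 else mp3_len + 2
    subprocess.run(["ffmpeg", "-y", "-i", "nosound_1.mp4", "-ss", "0",
                    "-t", str(trim_to), "-c", "copy", "trimmed_1.mp4"], check=True)

    # 4. mux the mp3 into the trimmed video (the same concat call as in the question)
    input_video = ffmpeg.input("trimmed_1.mp4")
    input_audio = ffmpeg.input(audio)
    ffmpeg.concat(input_video, input_audio, v=1, a=1).output("new_1.mp4").run(overwrite_output=True)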


    


    However, about a third of the time, the final video stream seems to have been compressed to finish at about 54 seconds - so although the video continues "playing" and the audio plays correctly until the end (usually 58 seconds), the video stream freezes on the last video frame at 54 seconds.

    


    So if I watch trimmed_1.mp4, the video plays for 58 seconds, and let's say the last frame of the video is someone raising their hand.
When new_1.mp4 (trimmed_1 combined with 1.mp3) is played, it hits that last frame of someone raising their hand at the 54-second mark, as if the video stream had been slightly sped up to finish at 54 seconds; for the last 4 seconds the video stream looks frozen, because it keeps showing the last frame while the audio continues to play.
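
    One way to make the mismatch concrete is to compare the per-stream durations that ffprobe reports for the muxed output; the sketch below does this via ffmpeg-python's probe() (new_1.mp4 is the output name used above):

# Compare per-stream durations in the muxed output; a video stream that is
# shorter than the audio stream would match the frozen-last-frame symptom.
import ffmpeg

info = ffmpeg.probe("new_1.mp4")
for stream in info["streams"]:
    # some containers omit per-stream duration, so fall back to the format duration
    duration = stream.get("duration", info["format"]["duration"])
    print(stream["codec_type"], duration)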

    


    What is going wrong, and how can I fix it?

    


  • FFMpeg clip broken, generated from images [closed]

    5 February 2024, by thomassss

    I hope someone can help me out here; I would really appreciate it :).

    


    I created a video out of images and then merged it with other videos.
The problem is that the final result is kind of broken: when I add background music to it, the music is not played during the parts that come from the image-generated video, and on mobile phones it seems even more broken.

    


    The commands I used:

    


    Extracting the images from the video:

    


    ffmpeg.exe -i E:\dev\chatgpt\temp11\peng\final\1_runway.mp4 -vsync 0 -f image2 E:\dev\chatgpt\temp9\peng\final/1_runway-%06d.png


    


    Then I edit the images and merge them back with:

    


    ffmpeg.exe  -i  E:\dev\chatgpt\temp11\peng\final\2_runway_final-%06d.png E:\dev\chatgpt\temp9\peng\final\5_runway_final.mp4
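
    For reference, image-sequence inputs are often given an explicit input framerate and output pixel format; the sketch below is only an illustrative variant of this step (the 25 fps and yuv420p values are assumptions, not taken from the post):

# Illustrative variant of the images-to-video step, with an explicit input
# framerate and pixel format. The 25 fps and yuv420p values are assumptions.
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-framerate", "25",                      # rate at which the PNG sequence is read
    "-i", r"E:\dev\chatgpt\temp11\peng\final\2_runway_final-%06d.png",
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",                   # widely compatible pixel format
    r"E:\dev\chatgpt\temp9\peng\final\5_runway_final.mp4",
], check=True)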



    


    This file now seems to be broken.

    


    Now when I merge this file with other mp4s via

    


    ffmpeg.exe -f concat   -i E:\dev\chatgpt\temp11\peng\\final\ENGLISH_ffmpeg_input.txt  -vcodec copy -acodec copy E:\dev\chatgpt\temp9\peng\\final\ENGLISH_output.mp4
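
    The concat demuxer invoked here reads a plain text list of inputs and, with -vcodec copy -acodec copy, expects all listed files to share the same codec parameters. The sketch below only illustrates the list format the demuxer expects; the entries are placeholders, not the actual contents of ENGLISH_ffmpeg_input.txt:

# Illustrative only: write a concat-demuxer list file. The entries are
# placeholders; the real ENGLISH_ffmpeg_input.txt is not shown in the post.
clips = ["clip_001.mp4", "clip_002.mp4", "clip_003.mp4"]
with open("ffmpeg_input.txt", "w") as f:
    for name in clips:
        f.write(f"file '{name}'\n")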


    


    And then I add background music via

    


    ffmpeg.exe -i "E:\dev\chatgpt\temp11\peng\final\ENGLISH_output.mp4" -i "E:\dev\chatgpt\background.mp3" -filter_complex "[1:a]volume=0.2[a1];[0:a][a1]amix=inputs=2:duration=longest[a]" -map 0:v -map "[a]" -c:v copy -c:a aac -shortest "E:\dev\chatgpt\temp10\peng\final\ENGLISH_output_background.mp4"


    


    the background music isn't played in the parts that come from the "video generated from images".
Even VLC sometimes has problems displaying all parts correctly; that's why I added the YouTube links, which seem to work somehow.

    


    Does anyone have a clue how I can investigate further?

    


    I tried converting both the final video file and the clip generated from images into different formats with different framerates, but nothing worked.

    


    Thanks in advance!

    


    Final file with background music
Dropbox:
https://www.dropbox.com/scl/fi/peouyis2eezakc91mzfvt/ENGLISH_output_background.mp4?rlkey=4yfcyxzvh6fa3w0qxbiaz4auf&dl=1
YouTube:
https://www.youtube.com/watch?v=9gV6wP08lWA

    


    Without background music
https://www.dropbox.com/scl/fi/7543al9kngtkdy92rqhe3/ENGLISH_output.mp4?rlkey=og6jnxgbqwc2r6gxg2xpphfwc&dl=1
YouTube:
https://www.youtube.com/shorts/1zIRFs6bYgU

    


    Standalone file - generated from images:
https://www.dropbox.com/scl/fi/cy46ngkvoofnf7mur420i/3_runway_final.mp4?rlkey=70emr5t9dv53s5rcz4qaygwla&dl=1

    


    Edit:

    


    Download Single Images of one Video:
https://www.dropbox.com/scl/fi/817kakf1sksqfnulv1cja/singleimagesofonevideo.7z?rlkey=xkc73s7z8rfkp1js2epuzebs0&dl=1

    


    All Videos:
https://www.dropbox.com/scl/fi/817kakf1sksqfnulv1cja/singleimagesofonevideo.7z?rlkey=xkc73s7z8rfkp1js2epuzebs0&dl=1

    


    ffprobe of the "normal" video:

    


    ffprobe.exe ENGLISH_0.mp4
ffprobe version 6.1.1-full_build-www.gyan.dev Copyright (c) 2007-2023 the FFmpeg developers
built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
configuration: --enable-gpl --enable-version3 --enable-static --pkg-config=pkgconf --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-dxva2 --enable-d3d11va --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
libavutil 58. 29.100 / 58. 29.100
libavcodec 60. 31.102 / 60. 31.102
libavformat 60. 16.100 / 60. 16.100
libavdevice 60. 3.100 / 60. 3.100
libavfilter 9. 12.100 / 9. 12.100
libswscale 7. 5.100 / 7. 5.100
libswresample 4. 12.100 / 4. 12.100
libpostproc 57. 3.100 / 57. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'ENGLISH_0.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands : isomiso2avc1mp41
encoder : Lavf60.16.100
Duration: 00:00:03.38, start: 0.000000, bitrate: 238 kb/s
Stream #0:0[0x1]: Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(progressive), 1080x1920, 160 kb/s, 25 fps, 25 tbr, 12800 tbn (default)
Metadata:
handler_name : VideoHandler
vendor_id : [0][0][0][0]
encoder : Lavc60.31.102 libx264
Stream #0:1[0x2]: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
Metadata:
handler_name : SoundHandler
vendor_id : [0][0][0][0]

    


  • FFMPEG and Python: Stream a frame into video

    17 August 2023, by Vasilis Lemonidis

    Old approach

    


    I have created a small class for the job. After streaming the third frame I get an error from FFmpeg:

    


    pipe:0: Invalid data found when processing input

    


    and then I get a broken pipe.

    


    I have a feeling my ffmpeg input arguments are incorrect; I have little experience with the tool. Here is the code:

    


import subprocess
import os
import sys
import shutil
import cv2
import numpy as np


class VideoUpdater:
    def __init__(self, video_path: str, framerate: int):
        assert video_path.endswith(".flv")
        self._ps = None

        self.video_path = video_path
        self.framerate = framerate
        self._video = None
        self.curr_frame = None
        if os.path.isfile(self.video_path):
            # keep a copy of the existing file and replay its frames into the pipe
            shutil.copyfile(self.video_path, self.video_path + ".old")
            cap = cv2.VideoCapture(self.video_path + ".old")
            while cap.isOpened():
                ret, self.curr_frame = cap.read()
                if not ret:
                    break
                if len(self.curr_frame.shape) == 2:
                    self.curr_frame = cv2.cvtColor(self.curr_frame, cv2.COLOR_GRAY2RGB)
                self.ps.stdin.write(self.curr_frame.tobytes())

    @property
    def ps(self) -> subprocess.Popen:
        # lazily start ffmpeg, reading raw frames from stdin
        if self._ps is None:
            framesize = self.curr_frame.shape[0] * self.curr_frame.shape[1] * 3 * 8
            self._ps = subprocess.Popen(
                f"ffmpeg  -i pipe:0 -vcodec mpeg4 -s qcif -frame_size {framesize} -y {self.video_path}",
                shell=True,
                stdin=subprocess.PIPE,
                stdout=sys.stdout,
            )
        return self._ps

    def update(self, frame: np.ndarray):
        # push one frame (converted to 3 channels if grayscale) into the pipe
        if len(frame.shape) == 2:
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2RGB)
        self.curr_frame = frame
        self.ps.stdin.write(frame.tobytes())


    


    and here is a script I use to test it:

    


    import os
    import numpy as np
    import cv2

    size = (300, 300, 3)
    img_array = [np.random.randint(255, size=size, dtype=np.uint8) for c in range(50)]

    tmp_path = "tmp.flv"
    tmp_path = str(tmp_path)
    out = VideoUpdater(tmp_path, 1)

    for i in range(len(img_array)):
        out.update(img_array[i])
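
    For contrast with the pipe-based attempt above, a common pattern for feeding raw frames to ffmpeg over stdin is to declare the raw format explicitly on the input side. The sketch below is illustrative only, not the author's code; the frame size, pixel format, framerate and output name are assumptions:

# Illustrative sketch: pipe raw BGR frames into ffmpeg, declaring the raw
# format on the input side. Frame size (300x300), bgr24, 1 fps and the
# output name are assumptions.
import subprocess
import numpy as np

width, height, fps = 300, 300, 1
ps = subprocess.Popen(
    [
        "ffmpeg", "-y",
        "-f", "rawvideo",            # the input is raw, headerless frames
        "-pix_fmt", "bgr24",         # matches OpenCV's default channel order
        "-s", f"{width}x{height}",
        "-r", str(fps),
        "-i", "pipe:0",
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",
        "out.mp4",
    ],
    stdin=subprocess.PIPE,
)

for _ in range(50):
    frame = np.random.randint(255, size=(height, width, 3), dtype=np.uint8)
    ps.stdin.write(frame.tobytes())

ps.stdin.close()
ps.wait()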


    


    Update closer to what I want

    


    Having further studied how ffmpeg internals work, I went for an approach without pipes, where a video of a single frame is made and appended to the .ts file at every update:

    


import cv2
import numpy as np
import subprocess
import shutil
import os
import logging
from tempfile import NamedTemporaryFile

LOGGER = logging.getLogger(__name__)


class VideoUpdater:
    def __init__(self, video_path: str, framerate: int):
        if not video_path.endswith(".mp4"):
            LOGGER.warning(
                f"File type {os.path.splitext(video_path)[1]} not supported for streaming, switching to ts"
            )
            video_path = os.path.splitext(video_path)[0] + ".mp4"

        self._ps = None
        self.env = {
        }
        self.ffmpeg = "ffmpeg "
        self.video_path = video_path
        self.ts_path = video_path.replace(".mp4", ".ts")
        self.tfile = None
        self.framerate = framerate
        self._video = None
        self.last_frame = None
        self.curr_frame = None

    def update(self, frame: np.ndarray):
        if len(frame.shape) == 2:
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
        else:
            frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        self.writeFrame(frame)

    def writeFrame(self, frame: np.ndarray):
        # write the frame to a temporary png, turn it into a one-frame .ts clip,
        # then concatenate that clip onto the running .ts file
        tImLFrame = NamedTemporaryFile(suffix=".png")
        tVidLFrame = NamedTemporaryFile(suffix=".ts")

        cv2.imwrite(tImLFrame.name, frame)
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-loop 1 -r {self.framerate} -i {tImLFrame.name} -t {self.framerate} -vcodec libx264 -pix_fmt yuv420p -y {tVidLFrame.name}",
            env=self.env,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        ps.communicate()
        if os.path.isfile(self.ts_path):
            # this does not work to watch, as timestamps are not updated
            ps = subprocess.Popen(
                self.ffmpeg
                + rf'-i "concat:{self.ts_path}|{tVidLFrame.name}" -c copy -y {self.ts_path.replace(".ts", ".bak.ts")}',
                env=self.env,
                shell=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )
            ps.communicate()
            shutil.move(self.ts_path.replace(".ts", ".bak.ts"), self.ts_path)

        else:
            shutil.copyfile(tVidLFrame.name, self.ts_path)
        # fixing timestamps, we dont have to wait for this operation
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-i {self.ts_path} -fflags +genpts -c copy -y {self.video_path}",
            env=self.env,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        tImLFrame.close()
        tVidLFrame.close()



    


    As you may notice, a timestamp correction needs to be performed. By reading the final mp4 file, however, I saw that it consistently has 3 fewer frames than the ts file: the first 3 frames are missing. Does anyone have an idea why this is happening?
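
    One way to quantify the discrepancy described above is to count decoded video frames in both containers; the sketch below uses ffprobe via subprocess (ffprobe on PATH is assumed, and the file names are placeholders):

# Count decoded video frames in the .ts and .mp4 outputs with ffprobe.
# The file names below are placeholders.
import subprocess

def count_frames(path: str) -> int:
    out = subprocess.run(
        [
            "ffprobe", "-v", "error",
            "-count_frames",                     # decode the stream to get an exact count
            "-select_streams", "v:0",
            "-show_entries", "stream=nb_read_frames",
            "-of", "default=nokey=1:noprint_wrappers=1",
            path,
        ],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

print(count_frames("output.ts"), count_frames("output.mp4"))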