
Other articles (54)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or later. If needed, contact the administrator of your MédiaSpip to find out.

  • Encoding and conversion into formats readable on the Internet

    10 April 2011

    MediaSPIP converts and re-encodes uploaded documents in order to make them readable on the Internet and automatically usable without any intervention from the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used for the fallback Flash player required by older browsers.
    Audio documents are likewise re-encoded into the two formats usable by HTML5: MP3 and Ogg. The "MP3" version (...)

On other sites (15107)

  • Broken pipe when closing subprocess pipe with FFmpeg

    5 April 2021, by Shawn

    First, I'm a total noob with ffmpeg. That said, similar to this question, I'm trying to read a video file as bytes and extract 1 frame as a thumbnail image, as bytes, so I can upload the thumbnail to AWS S3. I don't want to save files to disk and then have to delete them. I modified the accepted answer in the aforementioned question for my purposes, which is to handle different file formats, not just video. Image files work just fine with this code, but an mp4 breaks the pipe at byte_command.stdin.close(). I'm sure I'm missing something simple, but can't figure it out.

    The input bytes are a valid mp4, as I'm getting the following in the Terminal:

      Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42isom
  Duration: 00:02:48.48, start: 0.000000, bitrate: N/A
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 640x480, 486 kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc (default)
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 125 kb/s (default)
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))

    from FFmpeg when I write to stdin.

    The FFmpeg command I'm passing in:
ffmpeg -i /dev/stdin -f image2pipe -frames:v 1 -
    I've tried numerous variations of this command, with -f nut, -f ... etc, to no avail.
    At the command line, without using Python or subprocess, I've tried:
ffmpeg -i /var/www/app/thumbnail/movie.mp4 -frames:v 1 output.png and I get a nice png image of the video.
    My method:

    def get_converted_bytes_from_bytes(input_bytes: bytes, command: str) -> bytes or None:
    byte_command = subprocess.Popen(
        shlex.split(command),
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        shell=False,
        bufsize=10 ** 8,
    )
    b = b""

    byte_command.stdin.write(input_bytes)
    byte_command.stdin.close()
    while True:
        output = byte_command.stdout.read()
        if len(output) > 0:
            b += output
        else:
            error_msg = byte_command.poll()
            if error_msg is not None:
                break
    return b
    What am I missing? Thank you!

    UPDATE, AS REQUESTED:

    Code Sample:

    import shlex
import subprocess


def get_converted_bytes_from_bytes(input_bytes: bytes, command: str) -> bytes or None:
    byte_command = subprocess.Popen(
        shlex.split(command),
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        shell=False,
        bufsize=10 ** 8,
    )
    b = b""
    # write bytes to the process's stdin and close the pipe to pass
    # data to the piped process
    byte_command.stdin.write(input_bytes)
    byte_command.stdin.close()
    while True:
        output = byte_command.stdout.read()
        if len(output) > 0:
            b += output
        else:
            error_msg = byte_command.poll()
            if error_msg is not None:
                break
    return b


def run():
    subprocess.run(
        shlex.split(
            "ffmpeg -y -f lavfi -i testsrc=size=640x480:rate=1 -vcodec libx264 -pix_fmt yuv420p -crf 23 -t 5 test.mp4"
        )
    )
    with open("test.mp4", "rb") as mp4:
        b1 = mp4.read()
        b = get_converted_bytes_from_bytes(
            b1,
            "ffmpeg -y -loglevel error -i /dev/stdin -f image2pipe -frames:v 1 -",
        )
        print(b)


if __name__ == "__main__":
    run()
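For what it's worth, a common cause of this symptom is that `stdin.write()` fills the OS pipe buffer while the child's stdout is never drained, so both processes stall and ffmpeg eventually dies, producing the broken pipe on close. A sketch of a variant that sidesteps this by letting `communicate()` feed stdin and drain stdout concurrently (the `Optional` return type and the return-code check are my additions, not part of the original question):

```python
import shlex
import subprocess
from typing import Optional


def get_converted_bytes_from_bytes(input_bytes: bytes, command: str) -> Optional[bytes]:
    proc = subprocess.Popen(
        shlex.split(command),
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    # communicate() writes stdin and reads stdout/stderr concurrently,
    # so neither pipe can fill up and stall the child; it also closes
    # stdin and waits for the process to exit.
    stdout, _stderr = proc.communicate(input=input_bytes)
    if proc.returncode != 0:
        return None
    return stdout
```

Note that even with this change, ffmpeg can still reject an MP4 arriving on a non-seekable pipe when its moov atom is stored at the end of the file; re-muxing the source with `-movflags faststart` (so the moov atom comes first) is a common workaround in that case.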
  • How to pipe multiple inputs to ffmpeg through python subprocess ?

    14 April 2021, by D. Ramsook

    I have a ffmpeg command that uses two different input files (stored in 'input_string'). In ffmpeg '-' after the '-i' represents reading from stdin.
    I currently use the setup below to collect the output of the command. How can it be modified so that fileA and fileB, which are stored as bytes, are piped as the inputs to the ffmpeg command?
    from subprocess import run, PIPE

with open('fileA.yuv', 'rb') as f:
    fileA = f.read()

with open('fileB.yuv', 'rb') as f:
    fileB = f.read()   


input_string = """ffmpeg -f rawvideo -s:v 1280x720 -pix_fmt yuv420p -r 24 -i - -f rawvideo -s:v 50x50 -pix_fmt yuv420p -r 24 -i - -lavfi  ...#remainder of command"""
result = run(input_string, stdout=PIPE, stderr=PIPE, universal_newlines=True)
result = result.stderr.splitlines()
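A process has only one stdin, so two `-i -` inputs cannot both be fed through it. One workaround is to give each in-memory input its own named pipe (FIFO) and pass the pipe paths to ffmpeg in place of `-`. A POSIX-only sketch (the helper name and the `{0}`/`{1}` placeholder convention are my own, not ffmpeg syntax):

```python
import os
import shlex
import subprocess
import tempfile
import threading


def run_with_piped_inputs(command_template: str, inputs: list) -> bytes:
    """Run a command whose {0}, {1}, ... placeholders are replaced by
    named pipes (FIFOs), each fed from an in-memory bytes object."""
    with tempfile.TemporaryDirectory() as tmpdir:
        fifos = []
        for i in range(len(inputs)):
            path = os.path.join(tmpdir, "input%d" % i)
            os.mkfifo(path)
            fifos.append(path)

        # Opening a FIFO for writing blocks until the reader opens it,
        # so each input gets its own writer thread.
        def feed(path: str, data: bytes) -> None:
            with open(path, "wb") as f:
                f.write(data)

        writers = [
            threading.Thread(target=feed, args=(p, d))
            for p, d in zip(fifos, inputs)
        ]
        for t in writers:
            t.start()

        result = subprocess.run(
            shlex.split(command_template.format(*fifos)),
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        for t in writers:
            t.join()
        return result.stdout
```

The question's command would then look something like `ffmpeg -f rawvideo -s:v 1280x720 -pix_fmt yuv420p -r 24 -i {0} -f rawvideo -s:v 50x50 -pix_fmt yuv420p -r 24 -i {1} ...` with `[fileA, fileB]` as the inputs. One caveat: if the command exits without ever opening a FIFO, its writer thread blocks on `open()` and the `join()` hangs, so production code would want a timeout there.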
  • Pipe video frames from ffmpeg to numpy array without loading whole movie into memory

    2 May 2021, by marcman

    I'm not sure whether what I'm asking is feasible or functional, but I'm experimenting with trying to load frames from a video in an ordered, but "on-demand," fashion.
    Basically, what I have now reads the entire uncompressed video into a buffer by piping through stdout, e.g.:
    H, W = 1080, 1920 # video dimensions
video = '/path/to/video.mp4' # path to video

# ffmpeg command
command = [ "ffmpeg",
            '-i', video,
            '-pix_fmt', 'rgb24',
            '-f', 'rawvideo',
            'pipe:1' ]

# run ffmpeg and load all frames into numpy array (num_frames, H, W, 3)
pipe = subprocess.run(command, stdout=subprocess.PIPE, bufsize=10**8)
video = np.frombuffer(pipe.stdout, dtype=np.uint8).reshape(-1, H, W, 3)

# or alternatively load individual frames in a loop
nb_img = H*W*3 # H * W * 3 channels * 1-byte/channel
for i in range(0, len(pipe.stdout), nb_img):
    img = np.frombuffer(pipe.stdout, dtype=np.uint8, count=nb_img, offset=i).reshape(H, W, 3)
    I'm wondering if it's possible to do this same process, in Python, but without first loading the entire video into memory. In my mind, I'm picturing something like:

    1. open a buffer
    2. seek to memory locations on demand
    3. load frames into numpy arrays

    I know there are other libraries, like OpenCV for example, that enable this same sort of behavior, but I'm wondering:

    • Is it possible to do this operation efficiently using this sort of ffmpeg-pipe-to-numpy-array approach?
    • Does this defeat the speed-up benefit of using ffmpeg directly, rather than seeking/loading through OpenCV or first extracting frames and then loading individual files?
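The sequential, on-demand part of this is achievable: keep the `Popen` handle open and read exactly one frame's worth of bytes per iteration, so only a single frame is ever resident. A minimal sketch (the generator name and the short-read loop are my own; the frame-size arithmetic matches the question's `H*W*3` for rgb24):

```python
import subprocess

import numpy as np


def iter_frames(command, height, width):
    """Yield (height, width, 3) uint8 arrays from a process writing raw
    rgb24 video to stdout, one frame at a time, without buffering the
    whole stream in memory."""
    frame_size = height * width * 3
    proc = subprocess.Popen(command, stdout=subprocess.PIPE)
    try:
        while True:
            # Pipe reads can return fewer bytes than requested, so
            # accumulate until a full frame is available.
            buf = proc.stdout.read(frame_size)
            while 0 < len(buf) < frame_size:
                chunk = proc.stdout.read(frame_size - len(buf))
                if not chunk:
                    break
                buf += chunk
            if len(buf) < frame_size:
                break  # end of stream (a trailing partial frame is dropped)
            yield np.frombuffer(buf, dtype=np.uint8).reshape(height, width, 3)
    finally:
        proc.stdout.close()
        proc.wait()
```

With ffmpeg this would be driven as `iter_frames(["ffmpeg", "-i", video, "-pix_fmt", "rgb24", "-f", "rawvideo", "pipe:1"], H, W)`. Note this gives sequential streaming, not random access: to seek, one would restart ffmpeg with `-ss` at the desired timestamp, at which point OpenCV's `VideoCapture` may be the simpler tool.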