
Other articles (100)

  • Automated installation script of MediaSPIP

    25 April 2011, by

    To overcome the difficulties caused mainly by the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to simplify this step on a server running a compatible Linux distribution.
    You must have SSH access to your server and a root account to run it; the script will install the dependencies. Contact your provider if you do not have these.
    The documentation for using this installation script is available here.
    The code of this (...)

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • What exactly does this script do?

    18 January 2011, by

    This script is written in bash, so it can easily be used on any server.
    It is only compatible with a specific list of distributions (see List of compatible distributions).
    Installing MediaSPIP dependencies
    Its main role is to install all of the software dependencies needed on the server side, namely:
    The basic tools required to install the remaining dependencies; the development tools: build-essential (via APT from the official repositories); (...)

On other sites (10494)

  • Pipe opencv frames into ffmpeg

    25 June 2023, by Dmytro Soltusyuk

    I am trying to pipe opencv frames into ffmpeg, but it does not work.

    


    After some research, I found this answer (https://stackoverflow.com/a/62807083/10676682) to work best for me, so I have the following:

    


    def start_streaming_process(rtmp_url, width, height, fps):
        # fmt: off
        cmd = ['ffmpeg',
               '-y',
               '-f', 'rawvideo',
               '-vcodec', 'rawvideo',
               '-pix_fmt', 'bgr24',
               '-s', "{}x{}".format(width, height),
               '-r', str(fps),
               '-i', '-',
               '-c:v', 'libx264',
               '-pix_fmt', 'yuv420p',
               '-preset', 'ultrafast',
               '-f', 'flv',
               '-flvflags', 'no_duration_filesize',
               rtmp_url]
        # fmt: on

        return subprocess.Popen(cmd, stdin=subprocess.PIPE)


    


    def main():
        width, height, fps = get_video_size(SOURCE_VIDEO_PATH)
        streaming_process = start_streaming_process(
            TARGET_VIDEO_PATH,
            width,
            height,
            fps,
        )

        model = load_yolo(WEIGHTS_PATH)
        frame_iterator = read_frames(video_source=SOURCE_VIDEO_PATH)
        processed_frames_iterator = process_frames(
            model, frame_iterator, ball_target_area=400
        )

        for processed_frame in processed_frames_iterator:
            streaming_process.communicate(processed_frame.tobytes())

        streaming_process.kill()


    


    processed_frame here is an annotated OpenCV frame.

    


    However, after my first streaming_process.communicate call, the ffmpeg process exits with code 0 (as if everything were fine), but it is not: I cannot feed the rest of the frames into ffmpeg because the process has already exited.

    


    Here are the logs:

    


    Input #0, rawvideo, from 'fd:':
  Duration: N/A, start: 0.000000, bitrate: 663552 kb/s
  Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 1280x720, 663552 kb/s, 30 tbr, 30 tbn
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
[libx264 @ 0x132e05570] using cpu capabilities: ARMv8 NEON
[libx264 @ 0x132e05570] profile High, level 3.1, 4:2:0, 8-bit
[libx264 @ 0x132e05570] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - h
ttp://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme
=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 
fast_pskip=1 chroma_qp_offset=-2 threads=15 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 inter
laced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=
1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbt
ree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, flv, to 'rtmp://global-live.mux.com:5222/app/9428e064-e5d3-0bee-dc67-974ba53ce164':
  Metadata:
    encoder         : Lavf60.3.100
  Stream #0:0: Video: h264 ([7][0][0][0] / 0x0007), yuv420p(tv, progressive), 1280x720, q=2-31, 30 fps
, 1k tbn
    Metadata:
      encoder         : Lavc60.3.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame=    1 fps=0.0 q=29.0 Lsize=      41kB time=00:00:00.00 bitrate=N/A speed=   0x    eed=N/A    
video:40kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.678311%
[libx264 @ 0x132e05570] frame I:1     Avg QP:25.22  size: 40589
[libx264 @ 0x132e05570] mb I  I16..4: 37.7% 33.4% 28.9%
[libx264 @ 0x132e05570] 8x8 transform intra:33.4%
[libx264 @ 0x132e05570] coded y,uvDC,uvAC intra: 51.1% 53.2% 14.4%
[libx264 @ 0x132e05570] i16 v,h,dc,p: 32% 38% 20% 10%
[libx264 @ 0x132e05570] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 16% 36% 28%  3%  2%  2%  3%  3%  6%
[libx264 @ 0x132e05570] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 18% 37% 17%  4%  4%  4%  5%  4%  7%
[libx264 @ 0x132e05570] i8c dc,h,v,p: 46% 37% 12%  4%
[libx264 @ 0x132e05570] kb/s:9741.36


    


    That's all. Exit code 0.
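A likely explanation, offered here as an assumption since the logs alone don't prove it: Popen.communicate sends its input, closes stdin, and then waits for the process to exit, so ffmpeg sees EOF after the first frame and shuts down cleanly (hence exit code 0). Writing to stdin directly keeps the pipe open across frames; a minimal sketch with a hypothetical stream_frames helper:

```python
import subprocess

def stream_frames(cmd, frames):
    # Popen.communicate() sends one payload, closes stdin, and waits for
    # the process to exit, so ffmpeg receives EOF after the first frame.
    # Writing to stdin directly keeps the pipe open for every frame.
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    try:
        for frame in frames:
            proc.stdin.write(frame)  # raw bgr24 bytes, one frame at a time
    finally:
        proc.stdin.close()  # EOF lets ffmpeg flush its encoder and exit
        proc.wait()
    return proc.returncode
```

Closing stdin explicitly is what signals end-of-stream, so ffmpeg can finalize the FLV output and exit normally only after every frame has actually been sent.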

    


  • can't pipe in numpy arrays (images) to ffmpeg subprocess in python

    21 June 2023, by Kevin

    I'm trying to capture a webcam video stream using opencv, pipe raw frames into an ffmpeg subprocess, apply a 3D .cube LUT, bring the LUT-applied frames back into opencv, and display them using cv2.imshow.

    


    This is my code:

    


    import cv2
import subprocess as sp
import numpy as np

lut_cmd = [
            'ffmpeg', '-f', 'rawvideo', '-pixel_format', 'bgr24', '-s', '1280x720', '-framerate', '30', '-i', '-', '-an', '-vf',
            'lut3d=file=lut/luts/lut.cube', '-f', 'rawvideo', 'pipe:1'
        ]

lut_process = sp.Popen(lut_cmd, stdin=sp.PIPE, stdout=sp.PIPE)

width = 1280
height = 720

video_capture = cv2.VideoCapture(0)

while True:
    ret, frame = video_capture.read()

    if not ret:
        break
 
    # Write raw video frame to input stream of ffmpeg sub-process.
    lut_process.stdin.write(frame.tobytes())
    lut_process.stdin.flush()
    print("flushed")

    # Read the processed frame from the ffmpeg subprocess
    raw_frame = lut_process.stdout.read(width * height * 3)
    print("read")
    frame = np.frombuffer(raw_frame, dtype=np.uint8).reshape(height, width, 3)

    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

lut_process.terminate()
video_capture.release()

cv2.destroyAllWindows()



    


    The code gets stuck at the part that reads from ffmpeg:
raw_frame = lut_process.stdout.read(width * height * 3)

    


    This is what I get when I run the code:

    


    flushed
Input #0, rawvideo, from 'fd:':
  Duration: N/A, start: 0.000000, bitrate: 663552 kb/s
  Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 1280x720, 663552 kb/s, 30 tbr, 30 tbn
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> rawvideo (native))
Output #0, rawvideo, to 'pipe:1':
  Metadata:
    encoder         : Lavf60.3.100
  Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24(progressive), 1280x720, q=2-31, 663552 kb/s, 30 fps, 30 tbn
    Metadata:
      encoder         : Lavc60.3.100 rawvideo
frame=    0 fps=0.0 q=0.0 size=       0kB time=-577014:32:22.77 bitrate=  -0.0kbits/s speed=N/A  


    


    "read" never gets printed. ffmpeg is stuck at 0fps. cv2.imshow doesn't show up.

    


    I tried changing lut_process.stdin.write(frame.tobytes())
to lut_process.stdin.write(frame.tostring()), but the result was the same.

    


    I tried adding a 3-second pause before the first write to ffmpeg, thinking maybe ffmpeg was not yet ready to process frames, but the result was the same.

    


    I'm sure my webcam is working, and I know its video stream is 1280x720 at 30 fps.

    


    I was successful at:
displaying the webcam stream using opencv alone;
setting FFmpeg's input directly to my webcam, reading the output with stdout.read, and displaying it using opencv.

    


    I have no idea what I should try next.

    


    I am using macOS 12.6, OpenCV 4.7.0, FFmpeg 6.0, Python 3.10.11, and Visual Studio Code.

    


    Any help would be greatly appreciated.
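A frequent cause of this symptom, offered as a hypothesis rather than a confirmed diagnosis: writing a frame and then blocking on a full-frame read in the same thread can deadlock, because ffmpeg may buffer several input frames before emitting any output, while the OS pipe buffers fill up in the meantime. Draining stdout on a background thread sidesteps this; a minimal sketch with a hypothetical pipe_through helper:

```python
import subprocess
import threading

def pipe_through(cmd, frames, frame_size):
    # Write-then-read in one thread can deadlock: the subprocess may not
    # produce output until it has buffered several input frames, while the
    # OS pipe buffers fill up. A background thread drains stdout instead.
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out_frames = []

    def drain():
        while True:
            chunk = proc.stdout.read(frame_size)
            if len(chunk) < frame_size:
                break  # EOF (any trailing partial chunk is dropped here)
            out_frames.append(chunk)

    reader = threading.Thread(target=drain)
    reader.start()
    for frame in frames:
        proc.stdin.write(frame)  # raw bgr24 bytes, width * height * 3 each
    proc.stdin.close()  # signal EOF so the subprocess flushes and exits
    reader.join()
    proc.wait()
    return out_frames
```

In the display loop, the main thread would then pull completed frames from out_frames (or a queue.Queue) for cv2.imshow, instead of blocking on stdout.read between writes.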

    

