Advanced search

Media (0)

Word: - Tags -/logo

No media matching your criteria is available on this site.

Other articles (65)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used as a fallback.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (9456)

  • youtube-dl streaming to pipe, how to add container

    24 December 2017, by MetNP

    I use this command on a Raspberry Pi:

    youtube-dl -o- [youtubeurl] | omxplayer --no-keys pipe:0

    and it works great for about 50% of YouTube content. For the non-working videos, omxplayer simply won't play them. This is not a Raspberry Pi or omxplayer specific problem: all these players are ffmpeg based, and for those videos the same failure can be reproduced on any platform with ffplay or a live ffmpeg transcode...

    When I download that kind of video separately:

    youtube-dl -o name.mp4 [url]
    ffplay name.mp4                   ... works OK
    cat name.mp4 | ffplay -           ... does NOT work (input stream not recognized well)

    The reason for this problem is well explained here. MP4 by itself is not well suited to streaming, and I just want to wrap it inside some container (TS, MKV, or anything else) that will help the player.

    Does someone have an idea how to do it? Can youtube-dl be forced to do it itself, or can some intermediate ffmpeg command help?
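    A sketch of such an intermediate step, assuming ffmpeg is available and using the name.mp4 file downloaded above: remux (no re-encode) into MPEG-TS, a container designed to be consumed as a stream:

```shell
# Remux the downloaded MP4 into MPEG-TS without re-encoding;
# the h264_mp4toannexb bitstream filter converts the H.264 stream
# to the Annex B form that MPEG-TS expects.
ffmpeg -i name.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts name.ts
cat name.ts | ffplay -
```

    Unlike MP4, MPEG-TS carries no global index, so a player can pick it up from a pipe without seeking.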

    Update: thanks to directions from Mulvya's comment, it seems that all videos work well with the -f mp4 addition:

    youtube-dl -o- -f mp4 [youtubeurl] | omxplayer --no-keys pipe:0

    Selecting a specific format like -f 135+140 breaks pipe usability and will not work without some code changes. Until I reach some other problematic video content, it seems that -f mp4 has solved everything.

  • Pipe video frame to OpenCV image and then to FFmpeg

    8 February 2018, by Pureheart

    There is a similar question here:
    Getting 'av_interleaved_write_frame(): Broken pipe' error

    But what should I do if I want to write the data ?

    I put pipe_out.stdin.write(image.tostring()) in the while loop, like this:

    FFMPEG_BIN = "/home/media/Downloads/ffmpeg"
    import subprocess as sp
    import sys
    width = 360
    height = 240
    command_in = [ FFMPEG_BIN,
               '-i', '/home/media/Videos/mytestvideo/zhou.avi',
               '-f', 'image2pipe',
               '-pix_fmt', 'bgr24',
               '-vcodec', 'rawvideo', '-']
    pipe_in = sp.Popen(command_in, stdout = sp.PIPE, bufsize = 10**8)

    command_out = [ FFMPEG_BIN,
           '-y', # (optional) overwrite output file if it exists
           '-f', 'rawvideo',
           '-vcodec','rawvideo',
           '-s', '360x240', # size of one frame
           '-pix_fmt', 'bgr24',
           '-r', '28', # frames per second
           '-i', '-', # The input comes from a pipe
           '-an', # Tells FFMPEG not to expect any audio
           #'-vcodec', 'mpeg',
           'my_output_videofile.mp4' ]

    pipe_out = sp.Popen( command_out, stdin=sp.PIPE, stderr=sp.PIPE)

    import numpy
    import cv2
    import pylab
    # read width*height*3 bytes (= 1 frame)
    while True:
       raw_image = pipe_in.stdout.read(width*height*3)
       image =  numpy.fromstring(raw_image, dtype='uint8')
       image = image.reshape((height,width,3))
       pipe_in.communicate()
       pipe_out.stdin.write(image.tostring())
       pipe_out.communicate()
       pipe_in.stdout.flush()
       #cv2.imshow('image',image)
       #cv2.waitKey(0)
       # throw away the data in the pipe's buffer.


    '''
    pipe_in.stdin.close()
    pipe_in.stderr.close()
    pipe_in.wait()
    pipe_out.stdout.close()
    pipe_out.stderr.close()
    pipe_out.wait()
    '''
    #pipe_out.stdin.write(image.tostring())

    However, the output video has only 1 frame (the first frame of the input video).

    Any ideas?

    Thanks!

  • Capture Video with opencv, save to ffmpeg pipe and live stream

    9 March 2018, by Chris

    The goal is to stream an analysed live video over RTSP to some media server. To make the edits/analysis I use OpenCV, save the edited frames as JPEG into an FFmpeg image pipe, and use the same FFmpeg process to create an RTSP stream. Sorry if the terminology is not that accurate; I still find it quite confusing.

    After quite some struggle, I have the following code:

    import cv2
    from subprocess import Popen, PIPE
    from PIL import Image

    # open pipe
    p = Popen('ffmpeg -y -f image2pipe -vcodec mjpeg -r 24 -i - -vcodec h264 -f rtsp -rtsp_transport tcp rtsp://localhost:8081/test.sdp', stdin=PIPE)

    video = cv2.VideoCapture(0)
    i = 0
    while video.isOpened():
       i=i+1
       ret, frame = video.read()
       if ret:
           #[...do some analysis stuff]
           im = Image.fromarray(frame)
           im.save(p.stdin, 'JPEG')

           """
           alternatively
           img_str = cv2.imencode('.jpg', frame)[1].tostring()
           p.stdin.write(img_str)
           """

       else:
           break

       print (i)
       if(i==1000):
           break


    p.stdin.close()
    p.wait()
    video.release()
    cv2.destroyAllWindows()
    print("done streaming video")

    This runs for 124 frames (i=124), then the loop hangs and I get some message from ffmpeg which I am not sure about; however, it does not look like an error:

    push frame
    122
    push frame
    123
    push frame
    124
    Input #0, image2pipe, from 'pipe:':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 640x480 [SAR 1:1 DAR 4:3], 24 fps, 24 tbr, 24 tbn, 24 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (mjpeg (native) -> h264 (libx264))
    [libx264 @ 000002076650d980] using SAR=1/1
    [libx264 @ 000002076650d980] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
    [libx264 @ 000002076650d980] profile High, level 3.0
    [libx264 @ 000002076650d980] 264 - core 155 r2893 b00bcaf - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=24 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    push frame

    The webcam seems to keep running, but no more frames are pushed into the pipe. It looks like some buffer is filling up. If I write directly to a video file instead of RTSP, it works. If I open the RTSP stream simultaneously with ffplay, it also works (although with a 5 second lag).
    Does anyone have an idea where this comes from and how to solve it?
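    As a general note (a sketch of the mechanism, not a diagnosis of this exact setup): a classic way such pipelines freeze is an undrained pipe. If a child's stderr, or an output with no consumer such as an RTSP URL nobody is reading, is never drained, its OS buffer fills after a bounded number of frames and both processes block on their next write. The usual cure is a background drain thread; here a stand-in child process floods stderr in place of ffmpeg:

```python
import subprocess
import sys
import threading

def drain(stream, sink):
    # Read the pipe to exhaustion so the child never blocks on a full buffer.
    for chunk in iter(lambda: stream.read(4096), b""):
        sink.append(chunk)

# Stand-in for ffmpeg: a child that writes far more to stderr
# than a typical OS pipe buffer (around 64 KiB) can hold.
child = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stderr.write('x' * 200000)"],
    stdin=subprocess.PIPE,
    stderr=subprocess.PIPE,
)

captured = []
t = threading.Thread(target=drain, args=(child.stderr, captured), daemon=True)
t.start()

child.stdin.close()  # nothing to feed in this demo
child.wait()         # would deadlock here without the drain thread
t.join()
```

    The same pattern applies whenever a subprocess is opened with stderr=PIPE (as in the previous question's pipe_out) but that pipe is never read.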