Media (91)

Other articles (93)

  • MediaSPIP 0.1 Beta version

25 April 2011

MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • HTML5 audio and video support

13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

APPENDIX: Plugins used specifically for the farm

5 March 2010

The central/master site of the farm needs several additional plugins, beyond those used by the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared (mutualisation) instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (7805)

  • Sending an MP4 video over the network with the avconv command to a Raspberry Pi in a PiWall video wall

    21 July 2017, by noswoscar

    I would like to send an MP4 video from one Raspberry Pi to another Raspberry Pi (in order to build a PiWall), but the command I am using to send the video stream (tested successfully on other devices) doesn't work.

    Details:
    I am using the avconv command from libav.org to send an MP4 video from a "master" Raspberry Pi over the network to my "slave" Raspberry Pi. The Raspberry Pi receiving the video runs the Raspbian Jessie operating system. The goal of my experiment is to use the pwomxplayer command to display the received video on the "slave" Raspberry Pi.

    The full command I am using to send the MP4 video stream is as follows:
    avconv -re -i input.mp4 -vcodec libx264 -f h264 -an udp://224.0.0.1:1234
    (this works when sending to my Raspberry Pi from my Dell laptop)

    It is interesting to note that when using the aforementioned avconv command on my Dell laptop, which has an Intel processor and Debian installed, the video stream is sent properly. However, the same command doesn't work when sending the video stream from a Raspberry Pi, which has an ARM processor and Raspbian installed.
    I wonder why this is!

    Am I right in thinking that the ARM processor is not interpreting my avconv command as well as the Intel processor does? Or is it more a question of architecture? Or is my avconv command syntax incorrect?

    Thank you for your help!

    More info:
    IP address of the master Raspberry Pi: 192.168.72.10
    IP address of the slave Raspberry Pi: 192.168.72.11
    I am using the route add -net 224.0.0.0 netmask 224.0.0.0 eth0 command on the master Pi to send the video stream to all laptops connected to my Ethernet port.
    The following command works for files ending in .h264:
    avconv -re -i input.h264 -vcodec libx264 -f h264 -an udp://224.0.0.1:1234
    (but I would like to send .mp4 files encoded with H.264, not just .h264 files)
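    For reference, the sender invocation from the question can be built as an argument list for Python's subprocess module (a sketch using the question's multicast group and port; note that -f h264 emits a raw H.264 elementary stream, which matches the .h264 case that already works):

    ```python
    import subprocess

    def avconv_sender_cmd(src='input.mp4', group='224.0.0.1', port=1234):
        """Sender command from the question, as an argument list.

        -re paces reading at the input's native frame rate, and
        -f h264 wraps the output as a raw H.264 elementary stream.
        """
        return [
            'avconv',
            '-re',                 # read input at its native rate
            '-i', src,
            '-vcodec', 'libx264',  # (re-)encode the video to H.264
            '-f', 'h264',          # raw H.264 elementary-stream output
            '-an',                 # no audio (a raw video stream cannot carry it)
            'udp://%s:%d' % (group, port),
        ]

    # To actually launch it (requires avconv on the machine):
    # subprocess.run(avconv_sender_cmd())
    ```

    One hedged observation: software x264 encoding is far slower on the Pi's ARM CPU than on an Intel laptop, so a stream that silently stalls rather than erroring may point at encoder speed rather than command syntax.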

  • ffmpeg is reading the SDP from an RTSP stream but is unable to save a screenshot. Is it a network or a utility issue?

    8 July 2017, by Paul Serikov

    My task is to grab a screenshot from an IP camera's RTSP stream via ffmpeg.
    I get the following error when I try to do that on a DigitalOcean droplet:

    root@docker-512mb-fra1-01:~# ffmpeg -hide_banner -loglevel debug -i rtsp://10.132.193.9//ch0.h264 -f image2 latest.jpg
    Splitting the commandline.
    Reading option '-hide_banner' ... matched as option 'hide_banner' (do not show program banner) with argument '1'.
    Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
    Reading option '-i' ... matched as input url with argument 'rtsp://10.132.193.9//ch0.h264'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'image2'.
    Reading option 'latest.jpg' ... matched as output url.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option hide_banner (do not show program banner) with argument 1.
    Applying option loglevel (set logging level) with argument debug.
    Successfully parsed a group of options.
    Parsing a group of options: input url rtsp://10.132.193.9//ch0.h264.
    Successfully parsed a group of options.
    Opening an input file: rtsp://10.132.193.9//ch0.h264.
    [rtsp @ 0x1298440] SDP:
    v=0
    o=- 1499314217993040 1 IN IP4 192.168.1.128
    s=H.264 Program Stream, streamed by the LIVE555 Media Server
    i=ch0.h264
    t=0 0
    a=DevVer:pusher2
    a=GroupName:IPCAM
    a=NickName:CIF
    a=CfgSection:PROG_CHN0
    a=tool:LIVE555 Streaming Media v2011.08.13
    a=type:broadcast
    a=control:*
    a=range:npt=0-
    a=x-qt-text-nam:H.264 Program Stream, streamed by the LIVE555 Media Server
    a=x-qt-text-inf:ch0.h264
    m=video 0 RTP/AVP 96
    c=IN IP4 0.0.0.0
    b=AS:4000
    a=rtpmap:96 H264/90000
    a=control:trackID=1
    a=fmtp:96 packetization-mode=1;profile-level-id=64001F;sprop-parameter-sets=Z2QAH6wrUCgC3IA=,aO48MA==
    a=framesize:96 1280-720
    a=cliprect:0,0,1280,720
    m=audio 0 RTP/AVP 97
    a=rtpmap:97 mpeg4-generic/8000/2
    a=fmtp:97 streamtype=5;profile-level-id=1;cpresent=0;mode=AAC-hbr;sizelength=13;indexlength=3;indexdeltalength=3;config=1590
    a=control:trackID=2

    Failed to parse interval end specification ''
    [rtsp @ 0x1298440] video codec set to: h264
    [rtsp @ 0x1298440] RTP Packetization Mode: 1
    [rtsp @ 0x1298440] RTP Profile IDC: 64 Profile IOP: 0 Level: 1f
    [rtsp @ 0x1298440] Extradata set to 0x1298a20 (size: 23)
    [rtsp @ 0x1298440] audio codec set to: aac
    [rtsp @ 0x1298440] audio samplerate set to: 8000
    [rtsp @ 0x1298440] audio channels set to: 2
    [udp @ 0x129e7e0] end receive buffer size reported is 131072
    [udp @ 0x129e680] end receive buffer size reported is 131072
    [udp @ 0x12bf380] end receive buffer size reported is 131072
    [udp @ 0x12bf1c0] end receive buffer size reported is 131072
    [rtsp @ 0x1298440] hello state=0
    [rtsp @ 0x1298440] UDP timeout, retrying with TCP
    [rtsp @ 0x1298440] hello state=0
    [rtsp @ 0x1298440] Could not find codec parameters for stream 0 (Video: h264, 1 reference frame, none(left), 1280x720, 1/180000): unspecified pixel format
    Consider increasing the value for the 'analyzeduration' and 'probesize' options
    Input #0, rtsp, from 'rtsp://10.132.193.9//ch0.h264':
     Metadata:
       title           : H.264 Program Stream, streamed by the LIVE555 Media Server
       comment         : ch0.h264
     Duration: N/A, start: 0.000000, bitrate: N/A
       Stream #0:0, 0, 1/90000: Video: h264, 1 reference frame, none(left), 1280x720, 1/180000, 90k tbr, 90k tbn, 180k tbc
       Stream #0:1, 0, 1/8000: Audio: aac, 8000 Hz, stereo, fltp
    Successfully opened the file.
    Parsing a group of options: output url latest.jpg.
    Applying option f (force format) with argument image2.
    Successfully parsed a group of options.
    Opening an output file: latest.jpg.
    Successfully opened the file.
    detected 1 logical cores
    [graph 0 input from stream 0:0 @ 0x1298280] Setting 'video_size' to value '1280x720'
    [graph 0 input from stream 0:0 @ 0x1298280] Setting 'pix_fmt' to value '-1'
    [buffer @ 0x12f9680] Unable to parse option value "-1" as pixel format
    [graph 0 input from stream 0:0 @ 0x1298280] Setting 'time_base' to value '1/90000'
    [graph 0 input from stream 0:0 @ 0x1298280] Setting 'pixel_aspect' to value '0/1'
    [graph 0 input from stream 0:0 @ 0x1298280] Setting 'sws_param' to value 'flags=2'
    [graph 0 input from stream 0:0 @ 0x1298280] Setting 'frame_rate' to value '180000/2'
    [buffer @ 0x12f9680] Unable to parse option value "-1" as pixel format
    [buffer @ 0x12f9680] Error setting option pix_fmt to value -1.
    [graph 0 input from stream 0:0 @ 0x1298280] Error applying options to the filter.
    Error opening filters!
    Exiting normally, received signal 2.

    As you can see, ffmpeg is able to read the SDP metadata, but for some reason it is unable to save a screenshot.

    Also, the same command works fine on my laptop with the same VPN configuration!

    Just in case: the IP camera doesn't have a public IP address and is only accessible via VPN.

    What could be wrong, and how can I debug it?

    I tried increasing the -analyzeduration and -probesize options from the default 5 s to 30 s, but it didn't help.
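    One avenue suggested by the log itself: the "UDP timeout, retrying with TCP" line means the RTP packets over UDP never arrived (plausible across a VPN), so forcing TCP interleaving from the start is worth trying. A sketch of such an invocation as an argument list (the probe values here are illustrative assumptions, not known-good settings):

    ```python
    import subprocess

    def rtsp_snapshot_cmd(url, out_path='latest.jpg'):
        """Single-frame grab that skips the failing UDP transport."""
        return [
            'ffmpeg', '-hide_banner',
            '-rtsp_transport', 'tcp',   # interleave RTP over the RTSP TCP socket
            '-analyzeduration', '10M',  # microseconds of data to analyze (assumed value)
            '-probesize', '10M',        # bytes to probe (assumed value)
            '-i', url,
            '-frames:v', '1',           # stop after one decoded frame
            '-y', out_path,
        ]

    # e.g. subprocess.run(rtsp_snapshot_cmd('rtsp://10.132.193.9//ch0.h264'))
    ```

    If the TCP-only run still cannot determine the pixel format, that would point at the camera's stream rather than the droplet's network path.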

  • How to pipe Picamera video to FFMPEG with subprocess (Python)

    31 July 2017, by VeniVidiReliqui

    I see a ton of info about piping a raspivid stream directly to FFMPEG for encoding, muxing, and restreaming, but these use cases are mostly from bash, similar to:

    raspivid -n -w 480 -h 320 -b 300000 -fps 15 -t 0 -o - | ffmpeg -i - -f mpegts udp://192.168.1.2:8090

    I'm hoping to utilize the functionality of the Picamera library so I can do concurrent processing with OpenCV and the like while still streaming with FFMPEG. But I can't figure out how to properly open FFMPEG as a subprocess and pipe video data to it. I've seen plenty of attempts, unanswered posts, and people claiming to have done it, but none of it seems to work on my Pi.

    Should I create a video buffer with Picamera and pipe that raw video to FFMPEG? Can I use camera.capture_continuous() and pass FFMPEG the bgr24 images I'm using for my OpenCV calculations?

    I've tried all sorts of variations, and I'm not sure whether I'm misunderstanding how to use the subprocess module or FFMPEG, or simply missing a few settings. I understand the raw stream won't have any metadata, but I'm not completely sure what settings I need to give FFMPEG for it to understand what I'm sending it.

    I have a Wowza server I'll eventually be streaming to, but I'm currently testing by streaming to VLC on my laptop. I've currently tried this:

    import subprocess as sp
    import picamera
    import picamera.array
    import numpy as np

    npimage = np.empty(
           (480, 640, 3),
           dtype=np.uint8)
    with picamera.PiCamera() as camera:
       camera.resolution = (640, 480)
       camera.framerate = 24

       camera.start_recording('/dev/null', format='h264')
       command = [
           'ffmpeg',
           '-y',
           '-f', 'rawvideo',
           '-video_size', '640x480',
           '-pix_fmt', 'bgr24',
           '-framerate', '24',
           '-an',
           '-i', '-',
           '-f', 'mpegts', 'udp://192.168.1.54:1234']
       pipe = sp.Popen(command, stdin=sp.PIPE,
                       stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10**8)
       if pipe.poll() is not None and pipe.returncode != 0:  # has ffmpeg already exited?
           output, error = pipe.communicate()
           print('Pipe failed: %d %s %s' % (pipe.returncode, output, error))
           raise sp.CalledProcessError(pipe.returncode, command)

       while True:
           camera.wait_recording(0)
           for i, image in enumerate(
                           camera.capture_continuous(
                               npimage,
                               format='bgr',  # picamera's raw BGR format
                               use_video_port=True)):
           pipe.stdin.write(npimage.tostring())  # write to ffmpeg's stdin
       camera.stop_recording()

    I've also tried writing the stream to a file-like object that simply creates the FFMPEG subprocess and writes to its stdin (camera.start_recording() can be given an object like this when you initialize the Picamera):

    class PipeClass():
       """Start pipes and load ffmpeg."""

       def __init__(self):
           """Create FFMPEG subprocess."""
           self.size = 0
           command = [
               'ffmpeg',
               '-f', 'rawvideo',
               '-s', '640x480',
               '-r', '24',
               '-i', '-',
               '-an',
               '-f', 'mpegts', 'udp://192.168.1.54:1234']

           self.pipe = sp.Popen(command, stdin=sp.PIPE,
                            stdout=sp.PIPE, stderr=sp.PIPE)

           if self.pipe.poll() is not None and self.pipe.returncode != 0:
               raise sp.CalledProcessError(self.pipe.returncode, command)

       def write(self, s):
           """Write to the pipe."""
           self.pipe.stdin.write(s)

       def flush(self):
           """Flush pipe."""
           print("Flushed")

    usage:
    (...)
    with picamera.PiCamera() as camera:
       p = PipeClass()
       camera.start_recording(p, format='h264')
    (...)

    Any assistance with this would be amazing!
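    A minimal sketch of the stdin-piping approach, with the command construction separated out so it can be checked on its own. The destination URL, frame size, and rate are the question's values; picamera and ffmpeg on the Pi are assumed:

    ```python
    import subprocess as sp

    def rawvideo_command(width=640, height=480, fps=24,
                         dest='udp://192.168.1.54:1234'):
        """ffmpeg arguments for reading headerless bgr24 frames on stdin.

        Raw video carries no metadata, so pixel format, geometry, and
        frame rate must all be declared before '-i -'.
        """
        return [
            'ffmpeg', '-y',
            '-f', 'rawvideo',
            '-pix_fmt', 'bgr24',
            '-s', '%dx%d' % (width, height),
            '-r', str(fps),
            '-i', '-',              # frames arrive on stdin
            '-an',
            '-f', 'mpegts', dest,
        ]

    def stream_frames(frames, command=None):
        """Feed an iterable of numpy BGR frames to ffmpeg's stdin.

        stdout/stderr are deliberately left unredirected: capturing them
        with PIPE and never draining them can deadlock the encoder.
        """
        pipe = sp.Popen(command or rawvideo_command(), stdin=sp.PIPE)
        for frame in frames:
            pipe.stdin.write(frame.tobytes())  # note: stdin, not stdout
        pipe.stdin.close()
        return pipe.wait()
    ```

    With picamera this would be driven by something like stream_frames(camera.capture_continuous(npimage, format='bgr', use_video_port=True)), writing each captured buffer as it arrives; whether the Pi can sustain 24 fps through a Python copy loop is exactly the open question the post raises.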