
Media (0)


No media matching your criteria is available on the site.

Other articles (46)

  • Installation in farm mode

    4 February 2011

    Farm mode makes it possible to host several MediaSPIP sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really call for any specific knowledge since SPIP's usual private area is no longer used.
    To begin with, you must have installed the same files as the installation (...)

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8297)

  • FFSERVER - streaming an ASF video as Webm output

    30 May 2014, by Emmanuel Brunet

    I’m trying to stream an IP webcam’s ASF live stream through ffserver to produce WebM output. The server starts successfully, but the ffmpeg command used to feed ffserver fails and generates a core dump.
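
    For reference, the overall ffserver workflow looks roughly like the sketch below. This is not taken from the original post; host, port and feed name follow the configuration shown further down, the config path is an assumption, and the playback URL depends on the name declared in the <Stream> section.

    # Sketch of the usual ffserver workflow (config path assumed):
    $ ffserver -f /etc/ffserver.conf                          # start the server with the config below
    $ ffmpeg -i http://account:password@webcam/videostream.asf \
             http://192.168.1.62:8091/webcam.ffm              # push the source into the feed
    # clients then pull http://192.168.1.62:8091/<name declared in the <Stream> section>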

    Input stream

    $ ffprobe http://account:password@webcam/videostream.asf

    Input #0, asf, from 'http://account:password@webcam/videostream.asf':
     Duration: N/A, start: 0.000000, bitrate: 32 kb/s
       Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc
       Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, 1 channels, s16p, 32 kb/s

    ffserver configuration

    My ffserver configuration is:

    Port 8091
    RTSPPort 554
    BindAddress 192.168.1.62
    MaxHTTPConnections 1000
    MaxClients 100
    MaxBandwidth 1000
    CustomLog -

    <Feed webcam.ffm>
           File /tmp/webcam.ffm
           FileMaxSize 500M
           ACL allow localhost
           ACL allow 192.168.0.0 192.168.255.255
    </Feed>

    <Stream>              # Output stream URL definition
      Feed webcam.ffm              # Feed from which to receive video
      Format webm

      # Audio settings
      AudioCodec vorbis
      AudioBitRate 64             # Audio bitrate

      # Video settings
      VideoCodec libvpx
      VideoSize 640x480           # Video resolution
      VideoFrameRate 25           # Video FPS
      AVOptionVideo flags +global_header  # Parameters passed to encoder
                                          # (same as ffmpeg command-line parameters)
      AVOptionVideo cpu-used 0
      AVOptionVideo qmin 10
      AVOptionVideo qmax 42
      AVOptionVideo quality good
      AVOptionAudio flags +global_header
      PreRoll 15
      StartSendOnKey
      # VideoBitRate 32            # Video bitrate
    </Stream>

    <Stream>
           Format status
           # Only allow local people to get the status
           ACL allow localhost
           ACL allow 192.168.0.0 192.168.255.255
    </Stream>
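
    Note that in ffserver's configuration syntax each <Stream> header also carries the URL name under which clients reach it, and <Feed> carries the feed name; the page rendering appears to have stripped the stream names here. A minimal sketch with assumed names:

    <Stream webcam.webm>              # output URL name, an assumption for illustration
       Feed webcam.ffm
       Format webm
    </Stream>

    <Stream stat.html>                # status page name, also assumed
       Format status
    </Stream>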

    ffmpeg feed

    I run the following command, which fails:

    $ ffmpeg  -i http://account:password@webcam/videostream.asf http://192.168.1.62:8091/webcam.ffm
    Input #0, asf, from 'http://account:password@webcam/videostream.asf':
     Duration: N/A, start: 0.000000, bitrate: 32 kb/s
       Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc
       Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s
    [swscaler @ 0x36a80c0] deprecated pixel format used, make sure you did set range correctly
    Segmentation fault

    I tried:

    $ ffmpeg  -i http://account:password@webcam/videostream.asf -pix_fmt yuv420p  http://192.168.1.62:8091/webcam.ffm

    But it raises the same error.
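
    Not part of the original post, but for reference, two variations sometimes tried in this situation are forcing the conversion through the format filter instead of -pix_fmt, and overriding the encoding parameters requested by ffserver with -override_ffserver (assuming an ffmpeg build with libvpx and libvorbis). A rough sketch:

    # Hypothetical variations, not from the original post:
    $ ffmpeg -i http://account:password@webcam/videostream.asf \
             -vf format=yuv420p http://192.168.1.62:8091/webcam.ffm

    $ ffmpeg -i http://account:password@webcam/videostream.asf -override_ffserver \
             -c:v libvpx -c:a libvorbis -pix_fmt yuv420p \
             http://192.168.1.62:8091/webcam.ffm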

    Thanks for your help

    Edit

    For easier testing (or so I thought), I tried to publish the whole ASF stream as is, i.e. connecting the webcam's ASF output stream to an ffserver output in ASF format too, with mirrored encoding. So I changed the ffserver configuration to:

    ...
    <Stream>
       Feed webcam.ffm
       Format asf
       VideoFrameRate 25
       VideoSize 640x480
       VideoBitRate 256
       VideoBufferSize 1000
       VideoGopSize 30
       AudioBitRate 32
       StartSendOnKey
    </Stream>
    ...

    And the output is now:

    Input #0, asf, from 'http://account:password@webcam/videostream.asf':
     Duration: N/A, start: 0.000000, bitrate: 32 kb/s
       Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 1k tbr, 1k tbn, 1k tbc
       Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s
    [swscaler @ 0x3d620c0] deprecated pixel format used, make sure you did set range correctly
    Output #0, ffm, to 'http://192.168.1.62:8091/webcam.ffm':
     Metadata:
       creation_time   : now
       encoder         : Lavf55.40.100
       Stream #0:0: Audio: wmav2, 22050 Hz, mono, fltp, 32 kb/s
       Metadata:
         encoder         : Lavc55.64.100 wmav2
       Stream #0:1: Video: msmpeg4v3 (msmpeg4), yuv420p, 640x480, q=2-31, 256 kb/s, 1k fps, 1000k tbn, 1k tbc
       Metadata:
    Stream mapping:
     Stream #0:1 -> #0:0 (adpcm_ima_wav -> wmav2)
     Stream #0:0 -> #0:1 (mjpeg -> msmpeg4)
    Press [q] to stop, [?] for help
    Segmentation fault

    I can’t even forward the stream.
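
    As an aside (not from the original post), when ffmpeg itself segfaults the usual next step is a backtrace from the feeding command; a minimal sketch, assuming gdb and an ffmpeg build with debug symbols are available:

    $ gdb --args ffmpeg -i http://account:password@webcam/videostream.asf \
                        http://192.168.1.62:8091/webcam.ffm
    (gdb) run
    (gdb) bt          # backtrace at the point of the segmentation fault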

  • How to save a DShow input to a file and simultaneously publish it on an RTSP stream?

    18 November 2020, by user2586476

    Piping does not work properly when the input comes from a device (e.g. a webcam), so you cannot work with two outputs that way. I need to save the webcam stream to a file (.mkv) and, at the same time, publish it to an RTSP server. I tried the following:


    ffmpeg -y -f dshow -loglevel info -rtbufsize 2147.48M -i "video=my_camera" -vf hflip,rotate=PI ^
    -c:v libx264 -preset fast -crf 25 -pix_fmt yuv420p ^
    -minrate 2M -maxrate 4M -bufsize 3.5M -s 1920x1080 -f tee "[f=mkv]'C:\test.mkv'|[f=rtsp]rtsp://localhost:8554/mystream"

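    For reference (not from the original question), the tee muxer expects real muxer names such as matroska and is usually combined with explicit -map options; a minimal sketch along those lines, assuming an RTSP server (e.g. MediaMTX) is already listening on rtsp://localhost:8554/mystream:

    rem Hypothetical variant with assumed names; an RTSP server must already be listening:
    ffmpeg -y -f dshow -rtbufsize 2147.48M -i "video=my_camera" -vf hflip,rotate=PI ^
    -c:v libx264 -preset fast -crf 25 -pix_fmt yuv420p -s 1920x1080 ^
    -map 0:v -f tee "[f=matroska]C:\test.mkv|[f=rtsp:onfail=ignore]rtsp://localhost:8554/mystream"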

  • FFMPEG add/remove inputs on the fly / dynamically

    27 November 2022, by Petro O. Bochan

    We are looking for a way to add/remove video/audio inputs on the fly and apply filters without restarting the command, so that we can keep manipulating the output, both the recorded file and the stream, without interrupting it. Is this feasible?


    We tried the approach from "Switch video from webcam while stream live", but it only manipulates existing inputs.
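
    For reference (not from the original question), the closest built-in mechanism appears to be sending commands to a running filter graph, for instance through the zmq filter (requires an FFmpeg build with --enable-libzmq); it can change filter parameters at runtime but does not add or remove inputs. A rough sketch:

    # Rough sketch with assumed input files; needs --enable-libzmq and the zmqsend
    # helper from the FFmpeg source tree. It retunes filters in a running graph,
    # it cannot add or remove inputs.
    ffmpeg -i main.mp4 -i logo.png \
           -filter_complex "[0:v][1:v]overlay@top=x=0:y=0,zmq[v]" \
           -map "[v]" -an -c:v libx264 out.mkv

    # From another shell, move the overlay while the command above keeps running:
    echo "overlay@top x 200" | tools/zmqsend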
