Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (52)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site / page in question.
    If you think you have fixed the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the following two images to compare.
    To use it, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators fine-tune the configuration of these menus.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: the main menu; identifier: barrenav; this menu is usually inserted at the top of the page after the header block, and its identifier makes it compatible with templates based on Zpip; (...)

On other sites (3715)

  • Write, with ffmpeg, a WAV file from an IP stream every 30 seconds to file.temp, then rename it to timestamp.wav

    18 July 2018, by Eliya

    I'm using ffmpeg to record an IP stream and write a WAV file every 30 seconds.

    Here is my bash script:

    #!/bin/bash
    function start_ffmpeg_stream ()
    {
       address=$1        # IP stream address
       ffmpeg_option=$2  # e.g. "?overrun_nonfatal=1&fifo_size=250000"
       folder_name=$3    # output folder
       channel_number=$4 # channel index
       # Segment the incoming stream into 30-second mono PCM WAV files whose
       # names carry a unix timestamp (-strftime 1 with %s in the pattern).
       ffmpeg -loglevel 8 -thread_queue_size 1500 -i "$address$ffmpeg_option" \
           -vn -f segment -segment_time 30 -ar 8000 -acodec pcm_s32le -ac 1 \
           -strftime 1 "$folder_name"/"X$channel_number""_""%s.wav" &
       pid=$!
       echo "Start ffmpeg, pid is - $pid"
       __="$pid" # return the pid to the caller via $__
    }
    ffmpegOptions="?overrun_nonfatal=1&fifo_size=250000"
    folderName="/wav_files"
    start_ffmpeg_stream "udp://224.10.0.1"  "$ffmpegOptions" "$folderName" "1"

    Currently the WAV file names look like "X000001_unix_time_stamp.wav".

    While a file is still being written, I want it to have a temporary name, something like "X000001_unix_time_stamp.temp".

    When the 30 seconds are up and FFmpeg has finished writing that segment, I want FFmpeg to rename it to "X000001_unix_time_stamp.wav"

    and keep writing the next 30 seconds.

    The only change I want is that while FFmpeg is writing, it writes under a temp name, and after it finishes writing it changes the name.

    It's similar to downloading a file: until the download finishes, the file has a temp name, and when it's done the name changes to the final one.
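
    A sketch of one way to get this behavior (not from the post: it assumes the inotify-tools package is available, and that the -strftime pattern is changed to end in "%s.temp") is to let ffmpeg write each segment under a .temp name and rename it to .wav the moment ffmpeg closes the file:

    #!/bin/bash
    # Sketch: rename finished segments from .temp to .wav.
    # Assumes inotify-tools is installed and ffmpeg writes "...%s.temp".
    watch_dir="/wav_files"

    # close_write fires when ffmpeg finishes a 30-second segment and
    # closes the file; only then is the .temp suffix stripped.
    inotifywait -m -e close_write --format '%f' "$watch_dir" |
    while read -r name; do
        if [[ "$name" == *.temp ]]; then
            mv "$watch_dir/$name" "$watch_dir/${name%.temp}.wav"
        fi
    done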

  • Getting ffserver && ffmpeg to work on Debian with a Logitech C270 [closed]

    14 March 2013, by Joseph Baldwin Roberts

    I'm having real trouble getting my webcam to work and I wonder if anyone can help. I'm running Raspbian on a Raspberry Pi.

    When I run lsusb I get:

    pi@raspberrycar ~ $ lsusb
    Bus 001 Device 002: ID 0424:9512 Standard Microsystems Corp.
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp.
    Bus 001 Device 004: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB
    Bus 001 Device 005: ID 0bda:8176 Realtek Semiconductor Corp. RTL8188CUS 802.11n WLAN Adapter
    Bus 001 Device 006: ID 046d:0825 Logitech, Inc. Webcam C270

    When I run v4l2-ctl --all I get:

    pi@raspberrycar ~ $ v4l2-ctl --all
    Driver Info (not using libv4l2):
           Driver name   : uvcvideo
           Card type     : UVC Camera (046d:0825)
           Bus info      : usb-bcm2708_usb-1.2.4
           Driver version: 3.6.11
           Capabilities  : 0x04000001
                   Video Capture
                   Streaming
    Format Video Capture:
           Width/Height  : 640/480
           Pixel Format  : 'YUYV'
           Field         : None
           Bytes per Line: 1280
           Size Image    : 614400
           Colorspace    : SRGB
    Crop Capability Video Capture:
           Bounds      : Left 0, Top 0, Width 640, Height 480
           Default     : Left 0, Top 0, Width 640, Height 480
           Pixel Aspect: 1/1
    Video input : 0 (Camera 1: ok)
    Streaming Parameters Video Capture:
           Capabilities     : timeperframe
           Frames per second: 5.000 (5/1)
           Read buffers     : 0

    When I run v4l2-ctl --list-formats I get:

    pi@raspberrycar ~ $ v4l2-ctl --list-formats
    ioctl: VIDIOC_ENUM_FMT
           Index       : 0
           Type        : Video Capture
           Pixel Format: 'YUYV'
           Name        : YUV 4:2:2 (YUYV)

           Index       : 1
           Type        : Video Capture
           Pixel Format: 'MJPG' (compressed)
           Name        : MJPEG

    My ffserver settings file is /etc/ffserver.conf:

    Port 80
    BindAddress 0.0.0.0
    MaxClients 10
    MaxBandwidth 50000
    NoDaemon

    <Feed webcam.ffm>
    file /tmp/webcam.ffm
    FileMaxSize 10M
    </Feed>

    <stream>
    Feed webcam.ffm
    Format mpeg
    VideoSize 640x480
    VideoFrameRate 10
    VideoBitRate 2000
    VideoQMin 1
    VideoQMax 10
    </stream>

    <stream>
    Format status

    ACL allow 192.168.2.0 192.168.2.255
    </stream>

    My ffmpeg launch script is /usr/sbin/webcam.sh:

    ffserver -f /etc/ffserver.conf & ffmpeg -v verbose -r 5 -s 640x480 -f video4linux2 -i /dev/video0 http://localhost/webcam.ffm

    When I run it, this is the output:

    pi@raspberrycar ~ $ sudo /usr/sbin/webcam.sh
    ffmpeg version 1.0.5ffserver version 1.0.5 Copyright (c) 2000-2012 the FFmpeg developers
     built on Mar 14 2013 18:37:40 with gcc 4.6 (Debian 4.6.3-14+rpi1)
    Copyright (c) 2000-2012 the FFmpeg developers
     built on Mar 14 2013 18:37:40 with gcc 4.6 (Debian 4.6.3-14+rpi1)
     configuration:
     libavutil      51. 73.101 / 51. 73.101
     configuration:
     libavutil      51. 73.101 / 51. 73.101
     libavcodec     54. 59.100 / 54. 59.100
     libavformat    54. 29.104 / 54. 29.104
     libavdevice    54.  2.101 / 54.  2.101
     libavfilter     3. 17.100 /  3. 17.100
     libswscale      2.  1.101 /  2.  1.101
     libavcodec     54. 59.100 / 54. 59.100
     libavformat    54. 29.104 / 54. 29.104
     libavdevice    54.  2.101 / 54.  2.101
     libavfilter     3. 17.100 /  3. 17.100
     libswscale      2.  1.101 /  2.  1.101
     libswresample   0. 15.100 /  0. 15.100
     libswresample   0. 15.100 /  0. 15.100
    [video4linux2,v4l2 @ 0xfa5620] [3]Capabilities: 4000001

    I can see the stream on the status page and my webcam light is on, but it never loads. Does anyone know what I am doing wrong?

    Thanks in advance

    Joe
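
    One detail worth checking (an observation from the output above, not a confirmed fix): the camera is delivering 5 fps in YUYV while the ffserver stream asks for VideoFrameRate 10, and such mismatches can starve the encoder. A hedged variant of the capture command, assuming an ffmpeg build whose video4linux2 demuxer supports the input_format/framerate/video_size options, might look like:

    # Sketch only: ask the driver for MJPEG (cheaper to read than raw
    # YUYV) and pin the size and rate that v4l2-ctl reported.
    ffmpeg -v verbose -f video4linux2 -input_format mjpeg \
           -framerate 5 -video_size 640x480 -i /dev/video0 \
           http://localhost/webcam.ffm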

  • How to Use FFmpeg to Fetch Audio From the Local Network and Decode It to PCM?

    26 May 2020, by Yousef Alaqra

    Currently, I have a Node.js server connected to a specific IP address on the local network (the source of the audio) to receive the audio using the VBAN protocol, which essentially uses UDP to send audio over the local network.


    Node.js implementation:


    http.listen(3000, () => {
      console.log("Server running on port 3000");
    });

    let PORT = 6980;
    let HOST = "192.168.1.244";

    io.on("connection", (socket) => {
      console.log("a user connected");
      socket.on("disconnect", () => {
        console.log("user disconnected");
      });
    });

    io.on("connection", () => {
      let dgram = require("dgram");
      let server = dgram.createSocket("udp4");

      server.on("listening", () => {
        let address = server.address();
        console.log("server host", address.address);
        console.log("server port", address.port);
      });

      server.on("message", function (message, remote) {
        let audioData = vban.ProcessPacket(message);
        io.emit("audio", audioData); // console.log(`Received packet: ${remote.address}:${remote.port}`)
      });

      server.bind({
        address: "192.168.1.230",
        port: PORT,
        exclusive: false,
      });
    });


    Once the server receives a packet from the local network, it processes it, then emits the processed data to the client using socket.io.


    An example of the processed audio data that's emitted from the socket and received in the Angular client:


    audio {
      format: {
        channels: 2,
        sampleRate: 44100,
        interleaved: true,
        float: false,
        signed: true,
        bitDepth: 16,
        byteOrder: 'LE'
      },
      sampleRate: 44100,
      buffer: <Buffer 2e 00 ce ff 3d bd 44 b6 48 c3 32 d3 31 d4 30 dd 38 34 e5 1d c6 25 ... 974 more bytes>,
      channels: 2,
    }


    On the client side (Angular), after receiving a packet via socket.io-client, an AudioContext is used to decode the audio and play it:


    playAudio(audioData) {
      let audioCtx = new AudioContext();
      let count = 0;
      let offset = 0;
      let msInput = 1000;
      let msToBuffer = Math.max(50, msInput);
      let bufferX = 0;
      let audioBuffer;
      let prevFormat = {};
      let source;

      if (!audioBuffer || Object.keys(audioData.format).some((key) => prevFormat[key] !== audioData.format[key])) {
        prevFormat = audioData.format;
        bufferX = Math.ceil(((msToBuffer / 1000) * audioData.sampleRate) / audioData.samples);
        if (bufferX < 3) {
          bufferX = 3;
        }
        audioBuffer = audioCtx.createBuffer(audioData.channels, audioData.samples * bufferX, audioData.sampleRate);
        if (source) {
          source.disconnect();
        }
        source = audioCtx.createBufferSource();
        console.log("source", source);
        source.connect(audioCtx.destination);
        source.loop = true;
        source.buffer = audioBuffer;
        source.start();
      }
    }


    Leaving aside that the audio isn't playing on the client side and that something is wrong there, this isn't the correct implementation anyway.


    Brad mentioned in the comments below that I could implement this correctly and with less complexity using an FFmpeg child process, and I'm very interested to know how to fetch the audio locally using FFmpeg.

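    A minimal sketch of the FFmpeg side (hedged: ffmpeg has no VBAN demuxer, so this assumes either that the VBAN packet header is stripped upstream or that the source can emit a stream ffmpeg understands, such as plain RTP) is to decode the network audio straight to raw 16-bit PCM on stdout:

    # Sketch: fetch audio from the local network and decode it to raw PCM
    # on stdout (pipe:1). The udp:// URL and the s16le/44100/stereo layout
    # are assumptions taken from the format object above; VBAN framing
    # itself is not something ffmpeg can parse.
    ffmpeg -i udp://192.168.1.230:6980 \
           -f s16le -ar 44100 -ac 2 \
           pipe:1

    A child process spawned from Node could then read pipe:1 and emit the PCM chunks over socket.io, instead of parsing VBAN by hand.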