Other articles (59)

  • MediaSPIP 0.1 Beta version

25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection form fields. See the following two images to compare.
    To use it, simply activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling the use of Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
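    Under the hood, the Chosen jQuery plugin is initialized on a selector like the one configured above. A minimal sketch (assuming jQuery and the Chosen library are already loaded on the page, which the SPIP plugin handles for you):

```javascript
// Enhance every native multi-select on the page with Chosen's
// searchable, tag-style widget. The selector matches the example
// configuration value above; the width option is illustrative.
jQuery(function ($) {
  $("select[multiple]").chosen({ width: "100%" });
});
```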

On other sites (6490)

  • ffmpeg doesn't seem to be working with multiple audio streams correctly

21 June 2017, by Caius Jard

    I’m having an issue with ffmpeg 3.2.2; ordinarily I ask it to make an MP4 video file with 2 audio streams. The command line looks like this:

    ffmpeg.exe
    -rtbufsize 256M
    -f dshow -i video="screen-capture-recorder" -thread_queue_size 512
    -f dshow -i audio="Line 2 (Virtual Audio Cable)"
    -f dshow -i audio="Line 3 (Virtual Audio Cable)"
    -map 0:v -map 1:a -map 2:a
    -af silencedetect=n=-50dB:d=60 -pix_fmt yuv420p -y "c:\temp\2channelvideo.mp4"

    I’ve wrapped it for legibility. This once worked fine, but something has gone wrong lately: it doesn’t seem to record any audio, even though I can use other tools like Audacity to record audio from these devices just fine.

    I’m trying to diagnose it by dropping the video component and asking ffmpeg to record the two audio devices to two separate files:

    ffmpeg.exe
    -f dshow -i audio="Line 2 (Virtual Audio Cable)" "c:\temp\line2.mp3"
    -f dshow -i audio="Line 3 (Virtual Audio Cable)" "c:\temp\line3.mp3"

    ffmpeg’s console output looks like :

    Guessed Channel Layout for Input Stream #0.0 : stereo
    Input #0, dshow, from 'audio=Line 2 (Virtual Audio Cable)':
     Duration: N/A, start: 5935.810000, bitrate: 1411 kb/s
       Stream #0:0: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
    Guessed Channel Layout for Input Stream #1.0 : stereo
    Input #1, dshow, from 'audio=Line 3 (Virtual Audio Cable)':
     Duration: N/A, start: 5936.329000, bitrate: 1411 kb/s
       Stream #1:0: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
    Output #0, mp3, to 'c:\temp\line2.mp3':
     Metadata:
       TSSE            : Lavf57.56.100
       Stream #0:0: Audio: mp3 (libmp3lame), 44100 Hz, stereo, s16p
       Metadata:
         encoder         : Lavc57.64.101 libmp3lame
    Output #1, mp3, to 'c:\temp\line3.mp3':
     Metadata:
       TSSE            : Lavf57.56.100
       Stream #1:0: Audio: mp3 (libmp3lame), 44100 Hz, stereo, s16p
       Metadata:
         encoder         : Lavc57.64.101 libmp3lame
    Stream mapping:
     Stream #0:0 -> #0:0 (pcm_s16le (native) -> mp3 (libmp3lame))
     Stream #0:0 -> #1:0 (pcm_s16le (native) -> mp3 (libmp3lame))
    Press [q] to stop, [?] for help

    The problem I’m currently having is that the produced MP3 files are identical copies of line 2 only; line 3 audio is not recorded. The last line is of concern; it seems to be saying that stream 0 is being mapped to both output 0 and 1. Do I need a map command for each file also? I thought it would be implicit due to the way I specified the arguments.
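    For what it’s worth, when no -map is given, ffmpeg’s default stream selection picks a single “best” audio stream for each output, which matches the mapping shown in the log above (stream #0:0 going to both outputs). Adding an explicit -map per output file should record each device to its own file; a sketch reusing the device names from the question:

```shell
ffmpeg.exe ^
    -f dshow -i audio="Line 2 (Virtual Audio Cable)" ^
    -f dshow -i audio="Line 3 (Virtual Audio Cable)" ^
    -map 0:a "c:\temp\line2.mp3" ^
    -map 1:a "c:\temp\line3.mp3"
```

    With this, the "Stream mapping" section should show #0:0 -> #0:0 and #1:0 -> #1:0, one input device per output file.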

  • Low latency video shared in local gigabit network using linux [on hold]

6 May 2017, by user3387542

    For a robotics task we need to share the video (Webcam) live to about 6 or 7 users in the same room. OpenCV will be used on the clients to read the situation and send new tasks to the robots. Latency should not be much more than one second, the lower the better. What commands would you recommend for this ?

    We have one camera on a Linux host that needs to share its video with about 6 other machines just a few meters away.

    I have already experimented with different setups. While raw video looks perfectly latency-free on the local loopback (there the issue is the sheer amount of data), any compression suddenly adds about a second of delay.
    And how should we share this over the network? Is broadcasting the right approach? How can it be so hard when they are right next to each other?

    Works locally, issues over the network.

    #server
    ffmpeg -f video4linux2 -r 10 -s 1280x720 -i /dev/video0 -c:v libx264 -preset veryfast -tune zerolatency -pix_fmt yuv420p -f mpegts - | socat - udp-sendto:192.168.0.255:12345,broadcast
    #client
    socat -u udp-recv:12345,reuseaddr - | vlc --live-caching=0 --network-caching=0 --file-caching=0 -

    Raw video: perfectly fine like this, but the video shows many artefacts if sent over the network.

    ffmpeg -f video4linux2 -r 10 -s 1280x720 -i /dev/video0 -c:v rawvideo -f rawvideo -pix_fmt yuv420p - | vlc --demux rawvideo --rawvid-fps 10 --rawvid-width 1280 --rawvid-height 720 --rawvid-chroma I420 -

    The technology used doesn’t matter, and we don’t care about network load either. We just want to use OpenCV on different clients with live data.
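    One simplification worth trying (a sketch, not tested on this setup): ffmpeg’s own udp protocol can write the MPEG-TS to the broadcast address directly, removing the socat hop from the chain. The address and rate below are assumptions copied from the socat example above; a short GOP (-g) is added because clients can only join the stream at a keyframe, so long GOPs show up as startup delay:

```shell
# server: encode and broadcast directly from ffmpeg
ffmpeg -f video4linux2 -framerate 10 -video_size 1280x720 -i /dev/video0 \
    -c:v libx264 -preset ultrafast -tune zerolatency -g 10 -pix_fmt yuv420p \
    -f mpegts "udp://192.168.0.255:12345?broadcast=1&pkt_size=1316"

# client: ffplay with buffering disabled, as a low-latency alternative to vlc
ffplay -fflags nobuffer -i udp://0.0.0.0:12345
```

    pkt_size=1316 keeps each UDP datagram to 7 whole 188-byte TS packets, which avoids fragmentation on a standard-MTU LAN and may help with the artefacts seen over the network.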

  • fixing (with ffmpeg) the chrominance position on a video after capturing

23 May 2016, by APLU

    I’m trying to convert some videos from VHS to digital using an (old) video capture card (and obviously an old VHS player). Given the inputs on my video capture card and the outputs available on the VHS player, I have no choice but to capture over an S-Video cable to a computer.

    Almost everything works, except for a slight mis-synchronization between chroma and luma which does not happen on TV.

    For example, in the original video, I have something like this:
    good position of color

    After capturing, the video looks like this:
    bad position of color

    As you can see, there is a slight desynchronization of the chroma against the luma channel (I would say an error of about 10 lines).

    I’m capturing with ffmpeg on a Linux system with the following commands :

    $ v4lctl setnorm PAL-BG

    $ v4lctl setinput S-video

    $ ffmpeg -y -f alsa -ac 2 -i pulse -f video4linux2 -i /dev/video0 -c:a pcm_s16le -vcodec rawvideo -t $duration -r 25 -loglevel error -stats /tmp/tmp.mkv

    I tried other input norms in v4l, tried another VHS player, and tried another SCART-to-S-Video conversion cable, but nothing changed.

    My question is simple: is there a way to fix this with a post-processing video filter in ffmpeg?

    I have already looked through the long list of video filters available in ffmpeg, but I didn’t find anything suitable.

    Also, please note that I can’t apply filters during the capture command (old capture card, old CPU, ...); this is why I capture raw video and native audio. Once the capture is done, I convert the video/audio to H.264/Vorbis, and at that step I can apply as much audio/video filtering as needed (even if it includes extracting chroma and luma to new files, fixing them, and merging again).
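    For the record, ffmpeg releases from 4.1 onward (so after this question was asked) ship a chromashift filter that performs exactly this kind of post-processing shift of the chroma planes, with no need to extract and merge planes by hand. A sketch for the roughly 10-line vertical offset described above, applied at the post-capture conversion step; the sign and amount of the shift are assumptions to be found by experiment:

```shell
# Shift both chroma planes (Cb and Cr) up by 10 lines while re-encoding;
# cbv/crv are vertical shifts, cbh/crh would be the horizontal equivalents.
ffmpeg -i /tmp/tmp.mkv -vf "chromashift=cbv=-10:crv=-10" \
    -c:v libx264 -c:a copy fixed.mkv
```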

    Thanks!