
Other articles (111)

  • Automatic installation script for MediaSPIP

    25 April 2011

    To work around installation difficulties, mainly due to server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it you need SSH access to your server and a "root" account, which allows the dependencies to be installed. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the "champs extras 2" and "Interface pour champs extras" plugins.

  • What exactly does this script do?

    18 January 2011

    This script is written in bash, so it is easy to use on any server.
    It is only compatible with a specific list of distributions (see List of compatible distributions).
    Installing MediaSPIP's dependencies
    Its main role is to install all the required server-side software dependencies, namely:
    the basic tools needed to install the remaining dependencies; the development tools: build-essential (via APT from the official repositories); (...)

On other sites (9548)

  • C++ Parse data from FFMPEG pipe output

    5 January 2017, by Simon

    I want to play around with data coming from an RTSP stream (e.g., do motion detection). Instead of using the cumbersome ffmpeg C API for decoding the stream, I decided to try something different. FFmpeg offers the possibility to pipe its output to stdout. Therefore, I wrote a C++ program which calls popen() to make an external call to ffmpeg and grabs the output. My aim is now to extract the YUV images from the resulting stream. As a first test, I decoded an h264 video file and wrote the stdout output coming from the ffmpeg call to a file.

    #include <cstdio>   // popen, pclose, fread
    #include <fstream>

    using namespace std;

    int main()
    {
     FILE *in;
     char buff[512];

     if(!(in = popen("ffmpeg -i input.mp4 -c:v rawvideo -an -r 1 -f rawvideo pipe:1", "r")))
     {
       return 1;
     }

     // Open the file in binary mode and copy with fread/write: the raw YUV
     // stream contains '\0' and '\n' bytes, which fgets and operator<< on a
     // char* would truncate or mangle.
     ofstream outputFile("output.yuv", ios::binary);
     size_t n;
     while((n = fread(buff, 1, sizeof(buff), in)) > 0)
     {
       outputFile.write(buff, n);
     }

     outputFile.close();
     pclose(in);
     return 0;
    }

    The resulting raw video can be played with VLC afterwards:

    vlc --rawvid-fps 1 --rawvid-width 1280 --rawvid-height 544 --rawvid-chroma I420 output.yuv

    Here, I chose the width and height from the video (a trailer of the Simpsons movie from http://www.dvdloc8.com/clip.php?movieid=12167&clipid=3). This first test worked very well. The resulting file is the same as when calling the ffmpeg binary directly with

    ffmpeg -i input.mp4 -c:v rawvideo -an -f rawvideo output_ffmpeg.yuv

    Now I want to do some processing with the images coming from the ffmpeg output instead of dumping it to a file. My question is: is there a clever way of parsing the stdout data from ffmpeg? Ideally, I want to parse the stream into a sequence of instances of a YUVImage class (for example).
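    Since the pipe carries raw frames of a fixed, known size, one approach is to read exactly width × height × 3/2 bytes per frame. The sketch below assumes the output is forced to I420 with -pix_fmt yuv420p and that the dimensions are known in advance; YUVImage and readFrame are illustrative names, not an existing API.

    ```cpp
    #include <cstdio>
    #include <cstdint>
    #include <vector>

    // Hypothetical frame holder: I420 stores a full-resolution Y plane
    // followed by quarter-resolution U and V planes, width*height*3/2 bytes.
    struct YUVImage {
        int width;
        int height;
        std::vector<uint8_t> data;
    };

    // Read exactly one I420 frame from the stream; false on EOF or short read.
    bool readFrame(std::FILE *in, YUVImage &img) {
        const std::size_t frameSize =
            static_cast<std::size_t>(img.width) * img.height * 3 / 2;
        img.data.resize(frameSize);
        return std::fread(img.data.data(), 1, frameSize, in) == frameSize;
    }

    int main() {
        // -pix_fmt yuv420p pins the pixel format so the frame size is known.
        std::FILE *in = popen(
            "ffmpeg -i input.mp4 -c:v rawvideo -an -f rawvideo -pix_fmt yuv420p pipe:1",
            "r");
        if (!in) return 1;

        YUVImage img{1280, 544, {}};
        int frames = 0;
        while (readFrame(in, img)) {
            ++frames;  // process img.data here (motion detection, etc.)
        }
        pclose(in);
        std::printf("read %d frames\n", frames);
        return 0;
    }
    ```

    The dimensions could also be queried up front (e.g. with ffprobe) instead of being hard-coded; the key point is that with rawvideo output there are no headers to parse, only fixed-size frames back to back.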

  • Can't use dshow devices with nodejs fluent-ffmpeg

    18 January 2017, by vivien anglesio

    I’m working on Windows with Electron, on a program that streams raw video plus desktop-captured audio via ffmpeg (using fluent-ffmpeg).

    I’m having trouble when I try to use a dshow capture device as the audio input: ffmpeg tells me that it can’t find the device. But when I run exactly the same command directly from the command line, it works...

    Dump using fluent-ffmpeg:

    started : ffmpeg -vcodec rawvideo -f rawvideo -pix_fmt rgb32 -framerate 30 -s 1280x720 -rtbufsize 1500M -vsync 0 -fflags genpts -i pipe:0 -f dshow -i audio="virtual-audio-capturer" -acodec aac -b:a 128k -ac 1 -vcodec h264_nvenc -r 30 -preset fast -pix_fmt yuv420p -crf 20 -maxrate 4000k -bufsize 8000k -g 30 -flvflags no_duration_filesize -f mpegts udp://127.0.0.1:8888
    Stderr output: ffmpeg version N-82966-g6993bb4 Copyright (c) 2000-2016 the FFmpeg developers
    Stderr output:   built with gcc 5.4.0 (GCC)
    Stderr output:   configuration: --enable-gpl --enable-version3 --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
    Stderr output:   libavutil      55. 43.100 / 55. 43.100
    Stderr output:   libavcodec     57. 70.100 / 57. 70.100
    Stderr output:   libavformat    57. 61.100 / 57. 61.100
    Stderr output:   libavdevice    57.  2.100 / 57.  2.100
    Stderr output:   libavfilter     6. 68.100 /  6. 68.100
    Stderr output:   libswscale      4.  3.101 /  4.  3.101
    Stderr output:   libswresample   2.  4.100 /  2.  4.100
    Stderr output:   libpostproc    54.  2.100 / 54.  2.100
    Stderr output: Input #0, rawvideo, from 'pipe:0':
    Stderr output:   Duration: N/A, start: 0.000000, bitrate: 884736 kb/s
    Stderr output:     Stream #0:0: Video: rawvideo (BGRA / 0x41524742), bgra, 1280x720, 884736 kb/s, 30 tbr, 30 tbn, 30 tbc
    Stderr output: [dshow @ 000000000070ac20] Could not find audio only device with name ["virtual-audio-capturer"] among source devices of type audio.
    Stderr output: [dshow @ 000000000070ac20] Searching for audio device within video devices for "virtual-audio-capturer"
    Stderr output: [dshow @ 000000000070ac20] Could not find audio only device with name ["virtual-audio-capturer"] among source devices of type video.
    Stderr output: audio="virtual-audio-capturer": I/O error

    Dump using cmd.exe:

    ffmpeg -vcodec rawvideo -f rawvideo -pix_fmt rgb32 -framerate 30 -s 1280x720 -rtbufsize 1500M -vsync 0 -fflags genpts -i pipe:0 -f dshow -i audio="virtual-audio-capturer" -acodec aac -b:a 128k -ac 1 -vcodec h264_nvenc -r 30 -preset fast -pix_fmt yuv420p -crf 20 -maxrate 4000k -bufsize 8000k -g 30 -flvflags no_duration_filesize -f mpegts udp://127.0.0.1:8888
    ffmpeg version N-82966-g6993bb4 Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 5.4.0 (GCC)
     configuration: --enable-gpl --enable-version3 --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
     libavutil      55. 43.100 / 55. 43.100
     libavcodec     57. 70.100 / 57. 70.100
     libavformat    57. 61.100 / 57. 61.100
     libavdevice    57.  2.100 / 57.  2.100
     libavfilter     6. 68.100 /  6. 68.100
     libswscale      4.  3.101 /  4.  3.101
     libswresample   2.  4.100 /  2.  4.100
     libpostproc    54.  2.100 / 54.  2.100
    Input #0, rawvideo, from 'pipe:0':
     Duration: N/A, bitrate: 884736 kb/s
       Stream #0:0: Video: rawvideo (BGRA / 0x41524742), bgra, 1280x720, 884736 kb/s, 30 tbr, 30 tbn, 30 tbc
    Guessed Channel Layout for Input Stream #1.0 : stereo
    Input #1, dshow, from 'audio=virtual-audio-capturer':
     Duration: N/A, bitrate: N/A
       Stream #1:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
    Using -vsync 0 and -r can produce invalid output files
    Codec AVOption crf (Select the quality for constant quality mode) specified for output file #0 (udp://127.0.0.1:8888) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
    Output #0, mpegts, to 'udp://127.0.0.1:8888':
     Metadata:
       encoder         : Lavf57.61.100
       Stream #0:0: Video: h264 (h264_nvenc) (Main), yuv420p, 1280x720, q=-1--1, 2000 kb/s, 30 fps, 90k tbn, 30 tbc
       Metadata:
         encoder         : Lavc57.70.100 h264_nvenc
       Side data:
         cpb: bitrate max/min/avg: 4000000/0/2000000 buffer size: 8000000 vbv_delay: -1
       Stream #0:1: Audio: aac (LC), 48000 Hz, mono, fltp, 128 kb/s
       Metadata:
         encoder         : Lavc57.70.100 aac
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (h264_nvenc))
     Stream #1:0 -> #0:1 (pcm_s16le (native) -> aac (native))
    frame=    0 fps=0.0 q=0.0 Lsize=       0kB time=00:00:00.00 bitrate=N/A speed=N/A

    If anybody has an idea of what’s happening, that would be great!

    Thanks

  • ffmpeg webm encode for low powered devices

    21 February 2017, by Max Tkachenko

    I want to play a transparent video in my app, using the built-in player, over the phone’s camera capture. I encode my video with an alpha channel for an Android device:

    ffmpeg -i "Comp.avi" -c:v libvpx -pix_fmt yuva420p -metadata:s:v:0 alpha_mode="1" output.webm

    The result is pretty good, but I get lags (the video freezes from time to time) while playing it on my Android phone. Are there any options to improve decoding performance?

    Some console output:

    D:\SOFT\ffmpeg-20160207-git-9ee4c89-win64-static\bin>ffmpeg -i "d:\temp\cherti\Comp 1.avi" -c:v libvpx -pix_fmt yuva420p -metadata:s:v:0 alpha_mode="1" d:\temp\cherti\output.webm
    ffmpeg version N-80386-g5f5a97d Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 5.4.0 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmfx --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
     libavutil      55. 24.100 / 55. 24.100
     libavcodec     57. 46.100 / 57. 46.100
     libavformat    57. 38.100 / 57. 38.100
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 46.101 /  6. 46.101
     libswscale      4.  1.100 /  4.  1.100
     libswresample   2.  1.100 /  2.  1.100
     libpostproc    54.  0.100 / 54.  0.100
    Input #0, avi, from 'd:\temp\cherti\Comp 1.avi':
     Metadata:
       date            : 2017-02-18T14:10:42.00916
       encoder         : Adobe After Effects CC 2015 (Windows)
     Duration: 00:00:05.00, start: 0.000000, bitrate: 1592542 kb/s
       Stream #0:0: Video: rawvideo, bgra, 1080x1920, 1605907 kb/s, 24 fps, 24 tbr, 24 tbn, 24 tbc
    File 'd:\temp\cherti\output.webm' already exists. Overwrite ? [y/N] y
    [libvpx @ 0000000002593640] v1.5.0
    [webm @ 00000000025a54e0] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
    Output #0, webm, to 'd:\temp\cherti\output.webm':
     Metadata:
       date            : 2017-02-18T14:10:42.00916
       encoder         : Lavf57.38.100
       Stream #0:0: Video: vp8 (libvpx), yuva420p, 1080x1920, q=-1--1, 200 kb/s, 24 fps, 1k tbn, 24 tbc
       Metadata:
         alpha_mode      : 1
         encoder         : Lavc57.46.100 libvpx
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo (native) -> vp8 (libvpx))