
Other articles (46)

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real specific knowledge since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8297)

  • ffmpeg flip horizontally webcam to virtual video camera

    30 May 2023, by Kaiser Schwarcz

    I need to horizontally flip my webcam image for a meeting.
I tried the instructions on this site https://wiki.archlinux.org/index.php/Webcam_setup#Applications which use v4l2 and v4l2loopback to generate a virtual camera.

    


    # modprobe v4l2loopback


    


    Check the name of the newly created camera:

    


    $ v4l2-ctl --list-devices

Dummy video device (0x0000) (platform:v4l2loopback-000):
       /dev/video1


    


    Then you can run ffmpeg to read from your actual webcam (here /dev/video0), invert it, and feed it to the virtual camera:

    


    $ ffmpeg -f v4l2 -i /dev/video0 -vf "vflip" -f v4l2 /dev/video1


    


    You can use the "Dummy" camera in your applications instead of the "Integrated" camera.

    


    With these settings I was successful in vertically flipping my video. But that is not what I want. I want it to be flipped horizontally.

    


    So I tried this:

    


    $ ffmpeg -f v4l2 -i /dev/video0 -vf "hflip" -f v4l2 /dev/video1


    


    But then I get no image from my cam.

    


    What am I doing wrong?

    


    I'm using Fedora 31 on a desktop.

    


    COMPLETE LOG:

    


    ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers

  built with gcc 9 (GCC)

  configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --docdir=/usr/share/doc/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' --extra-ldflags='-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld ' --extra-cflags=' ' --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvo-amrwbenc --enable-version3 --enable-bzlib --disable-crystalhd --enable-fontconfig --enable-frei0r --enable-gcrypt --enable-gnutls --enable-ladspa --enable-libaom --enable-libdav1d --enable-libass --enable-libbluray --enable-libcdio --enable-libdrm --enable-libjack --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libmp3lame --enable-nvenc --enable-openal --enable-opencl --enable-opengl --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librsvg --enable-libsrt --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg --enable-libzvbi --enable-avfilter --enable-avresample --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-libmfx --enable-runtime-cpudetect

  libavutil      56. 31.100 / 56. 31.100

  libavcodec     58. 54.100 / 58. 54.100

  libavformat    58. 29.100 / 58. 29.100

  libavdevice    58.  8.100 / 58.  8.100

  libavfilter     7. 57.100 /  7. 57.100

  libavresample   4.  0.  0 /  4.  0.  0

  libswscale      5.  5.100 /  5.  5.100

  libswresample   3.  5.100 /  3.  5.100

  libpostproc    55.  5.100 / 55.  5.100

Input #0, video4linux2,v4l2, from '/dev/video0':

  Duration: N/A, start: 233168.222502, bitrate: 147456 kb/s

    Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 147456 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc

Stream mapping:

  Stream #0:0 -> #0:0 (rawvideo (native) -> rawvideo (native))

Press [q] to stop, [?] for help

Output #0, video4linux2,v4l2, to '/dev/video2':

  Metadata:

    encoder         : Lavf58.29.100

    Stream #0:0: Video: rawvideo (Y42B / 0x42323459), yuv422p, 640x480, q=2-31, 147456 kb/s, 30 fps, 30 tbn, 30 tbc

    Metadata:

    encoder         : Lavc58.54.100 rawvideo

frame=   31 fps=0.0 q=-0.0 size=N/A time=00:00:01.03 bitrate=N/A dup=16 drop=0 s
frame=   46 fps= 46 q=-0.0 size=N/A time=00:00:01.53 bitrate=N/A dup=16 drop=0 s
frame=   61 fps= 40 q=-0.0 size=N/A time=00:00:02.03 bitrate=N/A .....
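
    For what it's worth, hflip and vflip differ only in the axis they mirror; neither filter changes the stream's pixel format. The per-frame effect of the two filters can be sketched on a toy frame buffer (numpy here is only a stand-in for the filter, not part of the v4l2 pipeline):

    import numpy as np

    # A tiny 2x3 "frame" (rows x columns) standing in for one video frame.
    frame = np.array([[1, 2, 3],
                      [4, 5, 6]])

    # hflip: reverse the column order (mirror left-to-right).
    hflipped = frame[:, ::-1]

    # vflip: reverse the row order (mirror top-to-bottom).
    vflipped = frame[::-1, :]

    print(hflipped.tolist())  # [[3, 2, 1], [6, 5, 4]]
    print(vflipped.tolist())  # [[4, 5, 6], [1, 2, 3]]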


    


  • Get an image from the webcam, convert it into something else, and return it back to the client

    29 January 2023, by immigration9

    I have some questions on choosing the right architectural decision to solve my problem.

    


    I am planning to create an app which involves:

    1. taking input from a client's (a browser) webcam,
    2. sending the input to the server (whether frame by frame, or as a live video stream),
    3. getting each frame from the server (into an image),
    4. converting the image using some technology (say, a TikTok-style filter),
    5. returning the image back to the client in real time.


    Except for phase 4, where the technology can only be applied to an image, everything else can be changed.
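
    The per-frame conversion stage (phase 4) can be sketched independently of the transport choice. In the minimal illustration below, apply_filter is a hypothetical stand-in for whatever effect is used, and the mirror/invert operations are only placeholders:

    import numpy as np

    def apply_filter(frame: np.ndarray) -> np.ndarray:
        """Hypothetical per-frame filter: mirror the frame, then invert brightness.

        `frame` is an H x W x 3 uint8 RGB image, as a decoder would hand over.
        The mirror/invert combination is only a placeholder for a real effect.
        """
        mirrored = frame[:, ::-1, :]   # mirror left-to-right
        return 255 - mirrored          # invert brightness

    # One synthetic all-black 2x2 RGB frame as a stand-in for a decoded webcam frame.
    frame = np.zeros((2, 2, 3), dtype=np.uint8)
    out = apply_filter(frame)
    print(out[0, 0].tolist())  # [255, 255, 255]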

    


    I'm targeting 30fps (or at least 20) with 1080p quality.

    


    I am completely agnostic about the language or framework. Right now I am thinking of using React with Node, but I'm open to other options as well (e.g. Python; the language doesn't matter).

    


    If anyone has prior experience with this, can you teach me the best way?

    


    I've tried creating an image blob on the client and sending it to the server using socket.io, but it seemed too slow when targeting 30fps with 1080p images.

    


    I'm currently looking at WebRTC with fluent-ffmpeg, but not sure if it's the right way.

    


    Any kind of help will be appreciated.

    


  • How to control a webcam's exposure time (using V4L2) based on a certain pixel's value?

    16 April 2022, by Garid

    I'd like to control the exposure time such that certain average value of certain window (e.g. 10x10+230+70) always fall between 100 to 220.

    


    P.S. The camera is monochrome.

    


    Something like the following:

    


    loop:
        if average value > 220:
            v4l2: lower the exposure
        else if average value < 100:
            v4l2: raise the exposure
        else:  # 100 <= average value <= 220
            break the loop
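
    A runnable sketch of that control loop, where read_window_average, get_exposure, and set_exposure are hypothetical helpers standing in for the real camera I/O (e.g. reading the 10x10+230+70 window and driving a V4L2 exposure control); only the control logic is shown, with a simulated camera for testing:

    def adjust_exposure(read_window_average, get_exposure, set_exposure,
                        low=100, high=220, step=10, max_iters=100):
        """Step the exposure until the window average falls inside [low, high].

        The three callables are stand-ins for the real camera I/O.
        """
        for _ in range(max_iters):
            avg = read_window_average()
            if avg > high:
                set_exposure(get_exposure() - step)   # too bright: lower exposure
            elif avg < low:
                set_exposure(get_exposure() + step)   # too dark: raise exposure
            else:
                return avg                            # inside [low, high]: done
        return None                                   # gave up

    # Simulated camera whose window average simply equals its exposure value.
    state = {"exposure": 300}
    result = adjust_exposure(
        read_window_average=lambda: state["exposure"],
        get_exposure=lambda: state["exposure"],
        set_exposure=lambda v: state.update(exposure=v),
    )
    print(result)  # 220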


    


    I can do this in Python with OpenCV, but I'm looking for other solutions.

    


    Are there any solutions with ffmpeg or ImageMagick?