
Other articles (67)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, no software is ever perfect.
    If you think you have found a bug, report it using our ticket system. To help us fix it, please provide the following information: the browser you are using, including the exact version; as precise a description of the problem as possible; if possible, the steps taken that resulted in the problem; and a link to the site / page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Farm deployment option

    12 April 2011, by

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share the setup costs between several projects / individuals; to deploy a multitude of unique sites quickly; and to avoid having to put every creation into a digital catch-all, as is the case with the large general-public platforms scattered across the (...)

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.

On other sites (9302)

  • RaspberryPi HLS streaming with nginx and ffmpeg; v4l2 error: ioctl(VIDIOC_STREAMON): Protocol error

    22 January 2021, by Mirco Weber

    I'm trying to set up a baby monitor with a Raspberry Pi (Model 4B, 4 GB RAM) and an ordinary webcam (with integrated mic).
I followed this tutorial: https://github.com/DeTeam/webcam-stream/blob/master/Tutorial.md

    


    Shortly described:

    1. I installed and configured an nginx server with the rtmp module enabled.
    2. I installed ffmpeg with this configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi (see the build sketch after this list).
    3. I tried to stream ;)
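
    For reference, a build along those lines would look roughly like the following; only the --enable-* flags come from the question, while the source directory, job count, and install step are assumptions:

    # Hypothetical build sketch (paths and -j4 are assumptions; flags are from the question)
    cd ~/ffmpeg
    ./configure --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi
    make -j4
    sudo make install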


    


    The nginx configuration seems to be working (streaming sometimes works, the server starts without any complications, and when the server is up and running, the webpage is displayed).
The ffmpeg configuration seems to be fine as well, since streaming sometimes works...

    


    I have tried a couple of different ffmpeg commands; all of them sometimes work and sometimes result in an error.
The command looks like the following:

    


    ffmpeg -re \
-f v4l2 \
-i /dev/video0 \
-f alsa \
-ac 1 \
-thread_queue_size 4096 \
-i hw:CARD=Camera,DEV=0 \
-profile:v high \
-level:v 4.1 \
-vcodec h264_omx \
-r 10 \
-b:v 512k \
-s 640x360 \
-acodec aac \
-strict -2 \
-ac 2 \
-ab 32k \
-ar 44100 \
-f flv \
rtmp://localhost/show/stream


    


    Note: I rearranged the command to make it easier to read; in the terminal, it is all on one line.
Note: There is no difference when using -f video4linux2 instead of -f v4l2.

    


    The camera is recognized by the system:

    


    pi@raspberrypi:~ $ v4l2-ctl --list-devices
bcm2835-codec-decode (platform:bcm2835-codec):
    /dev/video10
    /dev/video11
    /dev/video12

bcm2835-isp (platform:bcm2835-isp):
    /dev/video13
    /dev/video14
    /dev/video15
    /dev/video16

HD Web Camera: HD Web Camera (usb-0000:01:00.0-1.2):
    /dev/video0
    /dev/video1


    


    When only using -i /dev/video0, audio transmission never worked.
The output of arecord -L was:

    


    pi@raspberrypi:~ $ arecord -L
default
    Playback/recording through the PulseAudio sound server
null
    Discard all samples (playback) or generate zero samples (capture)
jack
    JACK Audio Connection Kit
pulse
    PulseAudio Sound Server
usbstream:CARD=Headphones
    bcm2835 Headphones
    USB Stream Output
sysdefault:CARD=Camera
    HD Web Camera, USB Audio
    Default Audio Device
front:CARD=Camera,DEV=0
    HD Web Camera, USB Audio
    Front speakers
surround21:CARD=Camera,DEV=0
    HD Web Camera, USB Audio
    2.1 Surround output to Front and Subwoofer speakers
surround40:CARD=Camera,DEV=0
    HD Web Camera, USB Audio
    4.0 Surround output to Front and Rear speakers
surround41:CARD=Camera,DEV=0
    HD Web Camera, USB Audio
    4.1 Surround output to Front, Rear and Subwoofer speakers
surround50:CARD=Camera,DEV=0
    HD Web Camera, USB Audio
    5.0 Surround output to Front, Center and Rear speakers
surround51:CARD=Camera,DEV=0
    HD Web Camera, USB Audio
    5.1 Surround output to Front, Center, Rear and Subwoofer speakers
surround71:CARD=Camera,DEV=0
    HD Web Camera, USB Audio
    7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
iec958:CARD=Camera,DEV=0
    HD Web Camera, USB Audio
    IEC958 (S/PDIF) Digital Audio Output
dmix:CARD=Camera,DEV=0
    HD Web Camera, USB Audio
    Direct sample mixing device
dsnoop:CARD=Camera,DEV=0
    HD Web Camera, USB Audio
    Direct sample snooping device
hw:CARD=Camera,DEV=0
    HD Web Camera, USB Audio
    Direct hardware device without any conversions
plughw:CARD=Camera,DEV=0
    HD Web Camera, USB Audio
    Hardware device with all software conversions
usbstream:CARD=Camera
    HD Web Camera
    USB Stream Output


    


    That's why I added -i hw:CARD=Camera,DEV=0.
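
    As a quick sanity check of that ALSA device name on its own, something along these lines can record a short clip (the sample format, rate, and duration below are assumptions, not from the question):

    # Hypothetical test capture from the webcam microphone (values are assumptions)
    arecord -D hw:CARD=Camera,DEV=0 -f S16_LE -r 44100 -c 1 -d 3 /tmp/mic-test.wav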

    


    As mentioned above, it worked very well a couple of times with this configuration and these commands.
But very often, I get the following error message when starting to stream:

    


    pi@raspberrypi:~ $ ffmpeg -re -f video4linux2 -i /dev/video0 -f alsa -ac 1 -thread_queue_size 4096 -i hw:CARD=Camera,DEV=0 -profile:v high -level:v 4.1 -vcodec h264_omx -r 10 -b:v 512k -s 640x360 -acodec aac -strict -2 -ac 2 -ab 32k -ar 44100 -f flv rtmp://localhost/show/stream
ffmpeg version N-100673-g553eb07737 Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 8 (Raspbian 8.3.0-6+rpi1)
  configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi --extra-ldflags=-latomic
  libavutil      56. 63.101 / 56. 63.101
  libavcodec     58.117.101 / 58.117.101
  libavformat    58. 65.101 / 58. 65.101
  libavdevice    58. 11.103 / 58. 11.103
  libavfilter     7. 96.100 /  7. 96.100
  libswscale      5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc    55.  8.100 / 55.  8.100
[video4linux2,v4l2 @ 0x2ea4600] ioctl(VIDIOC_STREAMON): Protocol error
/dev/video0: Protocol error


    


    And when I switch to /dev/video1 (since this was also listed by v4l2-ctl --list-devices), I get the following error message:

    


    pi@raspberrypi:~ $ ffmpeg -re -f v4l2 -i /dev/video1 -f alsa -ac 1 -thread_queue_size 4096 -i hw:CARD=Camera,DEV=0 -profile:v high -level:v 4.1 -vcodec h264_omx -r 10 -b:v 512k -s 640x360 -acodec aac -strict -2 -ac 2 -ab 32k -ar 44100 -f flv rtmp://localhost/show/stream
ffmpeg version N-100673-g553eb07737 Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 8 (Raspbian 8.3.0-6+rpi1)
  configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi --extra-ldflags=-latomic
  libavutil      56. 63.101 / 56. 63.101
  libavcodec     58.117.101 / 58.117.101
  libavformat    58. 65.101 / 58. 65.101
  libavdevice    58. 11.103 / 58. 11.103
  libavfilter     7. 96.100 /  7. 96.100
  libswscale      5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc    55.  8.100 / 55.  8.100
[video4linux2,v4l2 @ 0x1aa4610] ioctl(VIDIOC_G_INPUT): Inappropriate ioctl for device
/dev/video1: Inappropriate ioctl for device


    


    When using the video0 input, the webcam's LED that indicates access is constantly on. When using video1, it is not.

    


    After hours and days of googling and tears and whiskey, for the sake of my liver, my marriage and my physical and mental health, I'm very sincerely asking for your help...
What the f**k is happening, and what can I do to make it work???

    


    Thanks everybody :)

    


    UPDATE 1:

    1. Using the full path to ffmpeg does not change anything...
    2. /dev/video0 and /dev/video1 have access rights for everybody.
    3. sudo ffmpeg ... does not change anything either.
    4. The problem seems to be at an "early stage": stripping the command down to ffmpeg -i /dev/video0 results in the same problem.


    


    UPDATE 2:

    It seems that everything works when I first start another application that needs access to the webcam and only then start ffmpeg...
Might be some driver issue, but when I look at the loaded modules with lsmod, there is absolutely no change before and after starting that application...
Any help is still appreciated...
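
    A minimal sketch of that workaround, assuming that any program which briefly opens the device has the same effect (here v4l2-ctl grabs a single frame and discards it):

    # Hypothetical workaround: open /dev/video0 once before streaming,
    # mimicking the "start another application first" behaviour described above.
    v4l2-ctl --device=/dev/video0 --stream-mmap --stream-count=1 --stream-to=/dev/null
    # ...then run the ffmpeg command shown above.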

    


    UPDATE 3:

I was checking the output of dmesg.
When I started the first application, I received this message:

uvcvideo: Failed to query (GET_DEF) UVC control 12 on unit 2: -32 (exp. 4).

    And when I started ffmpeg, nothing happened, but everything worked...

    


  • How to make mpv more compatible with ffmpeg filters like minterpolate?

    1 October 2020, by F usedEmacs -con fused

    ffmpeg filter minterpolate (motion interpolation) does not work in MPV.

    



    (Nevertheless, the file is then played normally, without minterpolate.)
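
    For reference, minterpolate is an ordinary libavfilter filter, so the same filter string can be exercised with the ffmpeg CLI outside mpv; a minimal sketch (file names are placeholders):

    # Sketch: the same filter applied with the ffmpeg CLI (file names are placeholders)
    ffmpeg -i input.mp4 -vf "minterpolate=fps=60000/1001:mi_mode=mci" -c:a copy output.mp4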

    



    (I researched using search engines and the documentation, troubleshot to make use of opengl, and generally tried everything apart from asking for help and learning to understand more of the source code, and I'm not a programmer)…

    



    --gpu-context=angle --gpu-api=opengl also does not make opengl work. (I'm guessing opengl could help, from seeing its use in the documentation.)

    



    


    Note

    
 


    To get a full list of available video filters, see --vf=help and
 http://ffmpeg.org/ffmpeg-filters.html .

    
 


    Also, keep in mind that most actual filters are available via the
 lavfi wrapper, which gives you access to most of libavfilter's
 filters. This includes all filters that have been ported from MPlayer
 to libavfilter.

    
 


    Most builtin filters are deprecated in some ways, unless they're only
 available in mpv (such as filters which deal with mpv specifics, or
 which are implemented in mpv only).

    
 


    If a filter is not builtin, the lavfi-bridge will be automatically
 tried. This bridge does not support help output, and does not verify
 parameters before the filter is actually used. Although the mpv syntax
 is rather similar to libavfilter's, it's not the same. (Which means
 not everything accepted by vf_lavfi's graph option will be accepted by
 --vf.)

    
 


    You can also prefix the filter name with lavfi- to force the wrapper.
 This is helpful if the filter name collides with a deprecated mpv
 builtin filter. For example --vf=lavfi-scale=args would use
 libavfilter's scale filter over mpv's deprecated builtin one.

    


    



    I expect MPV to play with minterpolate (one of several filters that MPV can use, listed in http://ffmpeg.org/ffmpeg-filters.html) enabled. But this is what happens:

    



    Input: "--vf=lavfi=[minterpolate=fps=60000/1001:mi_mode=mci]"

    



    Output:

    



       cplayer:  (+) Video --vid=1 (*) (h264 1280x720 29.970fps)
   cplayer:  (+) Audio --aid=1 (*) (aac 2ch 44100Hz)
        vd: Using hardware decoding (d3d11va).
    ffmpeg: Impossible to convert between the formats supported by the filter 'mpv_src_in0' and the filter 'auto_scaler_0'
     lavfi: failed to configure the filter graph
        vf: Disabling filter lavfi.00 because it has failed.


    



    (It is also interesting that --gpu-api=opengl does not work (even though, according to the specification, my HD Graphics 400 Braswell, not to brag, supports OpenGL 4.2)… Also, aresample seems to have no effect, and with the few audio filters I selected, playback often doesn't start and no errors are output.)

    


  • Building FFmpeg for use in Swift

    7 juillet 2021, par NCrusher

    With the new Xcode/Swift release comes the ability to use binary dependencies. This seems to me to be an ideal time to create an SPM package for FFmpeg.
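
    For context, SwiftPM binary dependencies are distributed as XCFrameworks, and once the libraries are built, bundling one is a single xcodebuild call. A rough sketch, assuming a static libavcodec and its headers already exist (the paths are placeholders, not from the question):

    # Hypothetical packaging step (library and header paths are placeholders)
    xcodebuild -create-xcframework \
      -library build/macos/libavcodec.a -headers build/macos/include \
      -output FFmpegAVCodec.xcframework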

    


    However, while I've spent the last year learning to code in Swift, I'm actually not all that familiar with how to build libraries, especially ones as complex as FFmpeg, with all its configurable libraries and third-party dependencies.

    


    There are kewlbear's iOS build scripts, but these are for iOS/tvOS, and ideally an FFmpeg SPM package would be usable for macOS as well. They're also not updated for the newest Xcode and Swift versions.

    


    My personal interest is simply in audio and I don't need a lot of bells and whistles, but I figure the ideal situation would be a full package with the entire source and whatever dependencies it needs, and then when it's used as a package dependency, the compiler will just use the parts it needs.

    


    I guess my question is: how would I ideally compile the ffmpeg code for this purpose? I'm trying to follow the directions for compiling it yourself, but I'm stuck at the point of compiling gettext, because I'm not sure whether I should follow the directions (in the gettext source code) for compiling a fat binary for multiple architectures. When I try to run:

    


          ./configure CC="gcc -arch i386 -arch x86_64 -arch ppc -arch ppc64" \
                  CXX="g++ -arch i386 -arch x86_64 -arch ppc -arch ppc64" \
                  CPP="gcc -E" CXXCPP="g++ -E"



    


    I get the following error:

    


     checking whether the C compiler works... no
 configure: error: in `/Users/nolainecrusher/Downloads/FFMpeg-source/gettext-0.21/gettext-runtime':
 configure: error: C compiler cannot create executables
 See `config.log' for more details
 configure: error: ./configure failed for gettext-runtime


    


    and config.log doesn't really tell me anything useful:

    This is what I see at the end of the log:

    


     mkdir_p='$(MKDIR_P)' oldincludedir='/usr/include' pdfdir='${docdir}'
 prefix='/usr/local' program_transform_name='s,x,x,' psdir='${docdir}'
 sbindir='${exec_prefix}/sbin' sharedstatedir='${prefix}/com' subdirs='
 gettext-runtime libtextstyle gettext-tools' sysconfdir='${prefix}/etc'
 target_alias=''


    


    I feel like maybe I'm going about this the wrong way, but I'm not sure what the right way is.