
Other articles (98)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several additional plugins, beyond those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared-hosting instance when users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (7887)

  • How to add and keep moving my custom image throughout the video with ffmpeg in nodejs [closed]

    2 April 2023, by Darwin Swartz

    I need some guidance on how to implement this:

    I recorded the user's tab screen (without the default mouse cursor), and now I want to:

      • Capture the user's mouse movements with my Chrome extension throughout the recording session.

      • Send the recorded screen video and the stored mouse movements to the Node.js backend.

      • Add my own custom mouse cursor on top of the recorded video, following the stored mouse movements captured during the recording session.
    So, something like drawing a custom mouse image on top of a video, i.e. making the custom cursor move throughout the video based on the actual mouse events captured on the client side.

    


    I did some research and found that I can use FFmpeg to edit videos, but I'm not really sure how to implement this.

    


    Any advice would be greatly appreciated.
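One possible approach (a sketch, not an answer from the thread): ffmpeg's overlay filter accepts the runtime commands x and y, which can be driven from a script via the sendcmd filter. So the stored mouse events could be converted into a sendcmd script that repositions a cursor image over time. The event shape ({ t: seconds, x, y } in video pixels) and the overlay@mouse label are assumptions for illustration.

```javascript
// Sketch: turn recorded mouse events into an ffmpeg sendcmd script that
// repositions an overlaid cursor image over time.
// events: [{ t: seconds, x: px, y: px }] — an assumed format from the extension.
function buildSendcmd(events) {
  return events
    .map(e => `${e.t.toFixed(3)} overlay@mouse x ${e.x}, overlay@mouse y ${e.y};`)
    .join('\n');
}

// Example: two events at t=0s and t=0.5s
const script = buildSendcmd([
  { t: 0, x: 140, y: 92 },
  { t: 0.5, x: 151, y: 95 },
]);
```

The resulting file would then feed a command along the lines of `ffmpeg -i screen.webm -i cursor.png -filter_complex "[0:v]sendcmd=f=moves.cmd[v];[v][1:v]overlay@mouse=x=0:y=0:eval=frame" out.mp4` (untested here; eval=frame makes overlay re-evaluate x/y every frame so the commands take effect).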

    


  • How to extract frames in real time from the MediaStream object returned from the frontend in backend

    23 May 2023, by Darwin Swartz

    Is it possible to extract frames in real time on the backend from a MediaStream object returned from the frontend? Something like: instead of extracting frames from a canvas element on the frontend and sending those frames to the backend in real time, can we send just the stream instance to the backend and extract frames there in real time until the user stops the recording?

    


    chrome.tabCapture.capture({ audio: false, video: true }, function(stream) {
  // Use the media stream object here
});


    


    I am using the tabCapture API, which returns a stream. Now I want to send this MediaStream instance in real time to the backend, extract frames there, and edit them in real time using OpenCV or FFmpeg. Is this technically possible?

    


    One approach I have seen is

    


    chrome.tabCapture.capture({ audio: false, video: true }, function(stream) {
  video.srcObject = stream;
  // Wait until the video element has frames, and size the canvas to the
  // video; otherwise drawImage captures nothing.
  video.onloadedmetadata = function() {
    const canvas = document.createElement('canvas');
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    const ctx = canvas.getContext('2d');
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const imageData = canvas.toDataURL('image/jpeg');
  };
});


    


    drawing each frame onto a canvas, capturing those frames from it (in the frontend itself), and sending them in real time to the backend using WebSockets. I am not sure about this approach, as it might be bad for frontend memory.

    


    What could be a more efficient way of implementing real-time frame editing with frame-manipulation libraries like OpenCV and FFmpeg?
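For what it's worth, one common pattern (a sketch, not the thread's answer): a MediaStream object itself cannot leave the browser, but MediaRecorder can turn it into WebM chunks that are streamed over a WebSocket and piped into ffmpeg's stdin on the Node.js side, with frames read back from stdout. The helper below only builds the ffmpeg argument list; the wiring is shown in comments, and the flag choices (mjpeg frames, a fixed sample rate) are assumptions.

```javascript
// Sketch: argument list for an ffmpeg process that reads a WebM stream on
// stdin and emits individual JPEG frames on stdout at `fps` frames/second.
function frameExtractionArgs(fps) {
  return [
    '-i', 'pipe:0',          // WebM chunks written to ffmpeg's stdin
    '-vf', `fps=${fps}`,     // sample the stream at a fixed frame rate
    '-f', 'image2pipe',      // stream frames instead of writing files
    '-c:v', 'mjpeg',         // one JPEG per frame on stdout
    'pipe:1',
  ];
}

// Hypothetical wiring on the backend:
//   const { spawn } = require('child_process');
//   const ff = spawn('ffmpeg', frameExtractionArgs(5));
//   ws.on('message', chunk => ff.stdin.write(chunk));  // MediaRecorder chunks
//   ff.stdout.on('data', jpeg => { /* feed OpenCV, etc. */ });
```

This keeps the browser's work down to MediaRecorder encoding (which is hardware-assisted on most platforms), avoiding the per-frame canvas round trip entirely.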

    


  • ffmpeg vaapi (intel) hardware decode, drawbox, hardware encode

    11 March 2023, by Tom

    So I am running a go2rtc server, I'm receiving an RTSP stream from a camera, and I want to draw a box on top of the video. The system has a Pentium Silver J5005 with an iGPU. From what I understand, I should be able to use hwmap instead of hwdownload/hwupload in this case, because the iGPU and CPU share the same system memory. Anyway, leaving out the drawing part, I can tell that hardware decoding and encoding are working because ffmpeg only uses about 8% CPU. This is the ffmpeg command that I have working, but it only decodes and re-encodes the video:

    


    ffmpeg -hide_banner -v error -allowed_media_types video -loglevel verbose -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 \
  -i rtsp://... -c:v h264_vaapi -g 50 -bf 0 -profile:v high -level:v 4.1 -sei:v 0 -an \
  -filter_complex "[0:v]scale_vaapi,hwmap=mode=read+write+direct,format=nv12[in];\
    [in]format=vaapi|nv12,hwmap[out]" -map "[out]" \
  -c:v h264_vaapi -an -user_agent ffmpeg/go2rtc -rtsp_transport tcp -f rtsp rtsp://...


    


    Now I'm trying to insert a drawbox filter:

    


    ffmpeg -hide_banner -v error -allowed_media_types video -loglevel verbose -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 \
  -i rtsp://... -c:v h264_vaapi -g 50 -bf 0 -profile:v high -level:v 4.1 -sei:v 0 -an \
  -filter_complex "[0:v]scale_vaapi,hwmap=mode=read+write+direct,format=nv12[in];\
    [in]drawbox=x=10:y=10:w=100:h=100:color=pink@0.5:t=fill[in2];\
    [in2]format=vaapi|nv12,hwmap[out]" -map "[out]" \
  -c:v h264_vaapi -an -user_agent ffmpeg/go2rtc -rtsp_transport tcp -f rtsp rtsp://...



    


    But this fails immediately:

    


    [h264 @ 0x55bf016ffc40] Reinit context to 2304x1296, pix_fmt: vaapi
[graph 0 input from stream 0:0 @ 0x55bf01fe9100] w:2304 h:1296 pixfmt:vaapi tb:1/90000 fr:20/1 sar:0/1
[auto_scale_0 @ 0x55bf01fee800] w:iw h:ih flags:'' interl:0
[Parsed_drawbox_3 @ 0x55bf01fe8180] auto-inserting filter 'auto_scale_0' between the filter 'Parsed_format_2' and the filter 'Parsed_drawbox_3'
[auto_scale_1 @ 0x55bf01feffc0] w:iw h:ih flags:'' interl:0
[Parsed_format_4 @ 0x55bf01fe8780] auto-inserting filter 'auto_scale_1' between the filter 'Parsed_drawbox_3' and the filter 'Parsed_format_4'
[auto_scale_0 @ 0x55bf01fee800] w:2304 h:1296 fmt:nv12 sar:0/1 -> w:2304 h:1296 fmt:yuv420p sar:0/1 flags:0x0
[Parsed_drawbox_3 @ 0x55bf01fe8180] x:10 y:10 w:100 h:100 color:0xC67B9B7F
[auto_scale_1 @ 0x55bf01feffc0] w:2304 h:1296 fmt:yuv420p sar:0/1 -> w:2304 h:1296 fmt:nv12 sar:0/1 flags:0x0
    Last message repeated 3 times
[Parsed_hwmap_5 @ 0x55bf01fe8bc0] Failed to map frame: -38.
Error while filtering: Function not implemented
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0


    


    I found a similar question, but its solution of setting -hwaccel_output_format nv12 makes ffmpeg fail even if I don't include the drawbox step:

    


    [Parsed_scale_vaapi_0 @ 0x55ab1c1d2540] auto-inserting filter 'auto_scale_0' between the filter 'graph 0 input from stream 0:0' and the filter 'Parsed_scale_vaapi_0'
Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scale_0'


    


    It seems like the problem is the nv12 pixel format. I tried countless ways to convert to e.g. rgb24, but everything I tried just made ffmpeg fail.
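    One workaround worth trying (a sketch, not a verified fix): drawbox has no nv12 path, so ffmpeg auto-inserts nv12↔yuv420p conversions, and the converted frames are no longer mapped frames, which is what the reverse hwmap trips over (error -38, ENOSYS). Falling back to the classic hwdownload/hwupload round trip, and pinning the format back to nv12 before the upload, sidesteps the mapping entirely, at the cost of a copy that should be cheap on a shared-memory iGPU like the J5005:

```shell
ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 \
  -i rtsp://... \
  -vf "hwdownload,format=nv12,drawbox=x=10:y=10:w=100:h=100:color=pink@0.5:t=fill,format=nv12,hwupload" \
  -c:v h264_vaapi -an -f rtsp rtsp://...
```

    If that works, CPU usage will show whether the extra copy actually matters; the hwmap route can then be revisited once the format conversions are out of the graph.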