
Media (91)

Other articles (73)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Support for all types of media

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); text content, code or other (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (11340)

  • FFmpeg, the penultimate image from the .txt file doesn’t show in the video

    25 October 2019, by ArmKh

    I’m trying to create a video from images using ffmpeg. Creating the video actually works, but there is a small problem. I have a text file with the names (paths) of the images which I’m using in the video. The file looks like this:

    file 'image1.jpg'
    file 'image2.jpg'
    file 'image3.jpg'
    file 'image4.jpg'
    file 'image5.jpg'

    And the ffmpeg command is the following:

    ffmpeg -y -r 1/5 -f concat -safe 0 -i imagenames.txt -i some_audio.mp3 -c:v libx264 -vf fps=30 -pix_fmt yuv420p -t 30 output.mp4

    The video should hold each image for 5 seconds. But the problem is that the penultimate image (image4 in this case) is not shown in the video. So it stays on image3 for 10 seconds and then moves to image5.

    So, in seconds, the video looks like this:

    [image1] -> [image2] -> [image3] -> [image4] -> [image5]
      5sec        5sec       10sec        0sec        5sec

    And the problem is not with image4 specifically: if image3 and image4 are swapped, the video skips image3 instead:

    [image1] -> [image2] -> [image4] -> [image3] -> [image5]
      5sec        5sec       10sec        0sec        5sec

    I hope you’ll have some suggestions to fix this issue.
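
    For context, a commonly used variant of this setup (the pattern from the FFmpeg slideshow documentation; offered here only as a hedged sketch, not a confirmed fix for this exact symptom) drops the input -r option and instead gives every entry in the list file an explicit duration directive, repeating the last image once because the concat demuxer ignores the duration of the final entry:

    file 'image1.jpg'
    duration 5
    file 'image2.jpg'
    duration 5
    file 'image3.jpg'
    duration 5
    file 'image4.jpg'
    duration 5
    file 'image5.jpg'
    duration 5
    file 'image5.jpg'

    The command then becomes:

    ffmpeg -y -f concat -safe 0 -i imagenames.txt -i some_audio.mp3 -c:v libx264 -vf fps=30 -pix_fmt yuv420p -t 30 output.mp4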

  • How to use ffmpeg with hardware acceleration and multiple inputs?

    20 March 2019, by Cole

    I’m trying to speed up the rendering of a video by using the GPU instead of the CPU. This code works, but I don’t know if I’m doing it correctly.

    ffmpeg -hwaccel cuvid -c:v hevc_cuvid \
    -i video.mp4 \
    -i logo.png \
    -i text.mov \
    -c:v h264_nvenc \
    -filter_complex " \
    [0]scale_npp=1920:1080,hwdownload,format=nv12[bg0]; \
    [bg0]trim=0.00:59.460,setpts=PTS-STARTPTS[bg0]; \
    [1]scale=150:-1[logo1];[bg0][logo1]overlay=(W-w)-10:(H-h)-10[bg0]; \
    [2]scale=500:-1[logo2];[logo2]setpts=PTS-STARTPTS[logo2]; \
    [bg0][logo2]overlay=-150:-100[bg0]; \
    [bg0]fade=in:00:30,fade=out:1750:30[bg0]" \
    -map "[bg0]" -preset fast -y output.mp4

    I feel like I need to be using hwupload somewhere in there, but I’m not totally sure. Any help would be appreciated.

    Log from run:

    ffmpeg version 4.0 Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.11) 20160609
     configuration: --enable-gpl --enable-libx264 --enable-cuda --enable-nvenc --enable-cuvid --enable-nonfree --enable-libnpp --extra-cflags=-I/usr/local/cuda/include/ --extra-ldflags=-L/usr/local/cuda/lib64/
     libavutil      56. 14.100 / 56. 14.100
     libavcodec     58. 18.100 / 58. 18.100
     libavformat    58. 12.100 / 58. 12.100
     libavdevice    58.  3.100 / 58.  3.100
     libavfilter     7. 16.100 /  7. 16.100
     libswscale      5.  1.100 /  5.  1.100
     libswresample   3.  1.100 /  3.  1.100
     libpostproc    55.  1.100 / 55.  1.100
    [hevc @ 0x3eb04c0] vps_num_hrd_parameters -1 is invalid
    [hevc @ 0x3eb04c0] VPS 0 does not exist
    [hevc @ 0x3eb04c0] SPS 0 does not exist.
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2mp41
       encoder         : Lavf58.7.100
     Duration: 00:00:59.46, start: 0.000000, bitrate: 5894 kb/s
       Stream #0:0(und): Video: hevc (Main) (hev1 / 0x31766568), yuv420p(tv, progressive), 3840x2160 [SAR 1:1 DAR 16:9], 5891 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 29.97 tbc (default)
       Metadata:
         handler_name    : VideoHandler
    Input #1, png_pipe, from 'logo.png':
     Duration: N/A, bitrate: N/A
       Stream #1:0: Video: png, rgba(pc), 528x128 [SAR 11339:11339 DAR 33:8], 25 tbr, 25 tbn, 25 tbc
    Input #2, mov,mp4,m4a,3gp,3g2,mj2, from 'text.mov':
     Metadata:
       major_brand     : qt  
       minor_version   : 512
       compatible_brands: qt  
       encoder         : Lavf57.56.100
     Duration: 00:00:06.00, start: 0.000000, bitrate: 1276 kb/s
       Stream #2:0(eng): Video: qtrle (rle  / 0x20656C72), bgra, 1920x1080, 1274 kb/s, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 12800 tbn, 12800 tbc (default)
       Metadata:
         handler_name    : DataHandler
    Stream mapping:
     Stream #0:0 (hevc_cuvid) -> scale_npp
     Stream #1:0 (png) -> scale
     Stream #2:0 (qtrle) -> scale
     fade -> Stream #0:0 (h264_nvenc)
    Press [q] to stop, [?] for help
    Output #0, mp4, to 'output.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2mp41
       encoder         : Lavf58.12.100
       Stream #0:0: Video: h264 (h264_nvenc) (Main) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 2000 kb/s, 29.97 fps, 30k tbn, 29.97 tbc (default)
       Metadata:
         encoder         : Lavc58.18.100 h264_nvenc
       Side data:
         cpb: bitrate max/min/avg: 0/0/2000000 buffer size: 4000000 vbv_delay: -1
    frame= 1783 fps=151 q=30.0 Lsize=   15985kB time=00:00:59.45 bitrate=2202.4kbits/s dup=4 drop=0 speed=5.04x    
    video:15977kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.050389%

    Not sure what to make of this; I’m pretty new to ffmpeg.
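
    For reference, a hedged reading of the pipeline above: hevc_cuvid decodes on the GPU, scale_npp resizes in CUDA memory, hwdownload,format=nv12 brings the frames into system memory for the software trim, overlay and fade filters, and h264_nvenc accepts those system-memory frames directly, so hwupload should not be strictly required. A minimal sketch of the same graph with the stages kept in that order and unique link labels (same inputs and parameters as in the question, nothing verified beyond that):

    ffmpeg -hwaccel cuvid -c:v hevc_cuvid -i video.mp4 -i logo.png -i text.mov \
    -filter_complex " \
    [0:v]scale_npp=1920:1080,hwdownload,format=nv12,trim=0:59.46,setpts=PTS-STARTPTS[bg]; \
    [1:v]scale=150:-1[wm]; \
    [2:v]scale=500:-1,setpts=PTS-STARTPTS[txt]; \
    [bg][wm]overlay=(W-w)-10:(H-h)-10[tmp]; \
    [tmp][txt]overlay=-150:-100,fade=in:0:30,fade=out:1750:30[out]" \
    -map "[out]" -c:v h264_nvenc -preset fast -y output.mp4

    hwupload (or hwupload_cuda) would only come into play if frames had to go back to a CUDA-only filter after the software stages; with overlay and fade running on the CPU and h264_nvenc taking software frames, one download is enough.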

  • Lost in converting hardware accelerated ffmpeg decoder to raw loopback device

    5 September 2019, by Mikeynl

    Before reaching out, I googled extensively and tried many different Docker containers, building, compiling, etc.

    What I am looking for is a hardware-accelerated conversion from an RTSP stream to a /dev/video0 device. I have a working v4l2loopback kernel module, and the following command works:

    ffmpeg -loglevel panic -hide_banner -i "rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream" -f v4l2 -pix_fmt yuv420p /dev/video0

    I can test the /dev/video0 device by taking a screenshot:

    ffmpeg -f video4linux2 -i /dev/video0 -ss 0:0:2 -frames 1 /var/www/html/out.png

    The above works, but uses around 30-50% CPU for decoding/encoding.

    I have a fully working test environment with a GeForce GTX 1050. All CUDA / NVIDIA related drivers are in place. ffmpeg is compiled with the following options:

    configuration: --enable-nonfree --disable-shared --enable-nvenc --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/cuda/lib64

    My last attempt was:

    ffmpeg -hwaccel cuvid -c:v h264_cuvid -i "rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream" -c:v rawvideo -pix_fmt yuv420p -f v4l2 /dev/video0

    It gives me an error:

    Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scaler_0'
    Error reinitializing filters!
    Failed to inject frame into filter network: Function not implemented
    Error while processing the decoded data for stream #0:0

    At the moment I have absolutely no idea how to solve this.

    Thanks in advance!

    //added information

    root@localhost:~# ffmpeg -i "rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream" -c copy -f v4l2 /dev/video0
    ffmpeg version N-91067-g1c2e5fc Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.9) 20160609
     configuration: --enable-nonfree --disable-shared --enable-nvenc --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/cuda/lib64
     libavutil      56. 18.102 / 56. 18.102
     libavcodec     58. 19.101 / 58. 19.101
     libavformat    58. 13.102 / 58. 13.102
     libavdevice    58.  4.100 / 58.  4.100
     libavfilter     7. 22.100 /  7. 22.100
     libswscale      5.  2.100 /  5.  2.100
     libswresample   3.  2.100 /  3.  2.100
    Input #0, rtsp, from 'rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream':
     Metadata:
       title           : RTSP Session
     Duration: N/A, start: 2.300000, bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 1920x1080, 10 fps, 10 tbr, 90k tbn, 20 tbc
    [v4l2 @ 0x31a7f00] V4L2 output device supports only a single raw video stream
    Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
       Last message repeated 1 times
    root@localhost:~#

    root@localhost:~# ffmpeg -i "rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream" -f v4l2 -pix_fmt yuv420p /dev/video0
    ffmpeg version N-91067-g1c2e5fc Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.9) 20160609
     configuration: --enable-nonfree --disable-shared --enable-nvenc --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/cuda/lib64
     libavutil      56. 18.102 / 56. 18.102
     libavcodec     58. 19.101 / 58. 19.101
     libavformat    58. 13.102 / 58. 13.102
     libavdevice    58.  4.100 / 58.  4.100
     libavfilter     7. 22.100 /  7. 22.100
     libswscale      5.  2.100 /  5.  2.100
     libswresample   3.  2.100 /  3.  2.100
    Input #0, rtsp, from 'rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream':
     Metadata:
       title           : RTSP Session
     Duration: N/A, start: 2.300000, bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 1920x1080, 10 fps, 10 tbr, 90k tbn, 20 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
    Press [q] to stop, [?] for help
    [swscaler @ 0x2821240] deprecated pixel format used, make sure you did set range correctly
    Output #0, v4l2, to '/dev/video0':
     Metadata:
       title           : RTSP Session
       encoder         : Lavf58.13.102
       Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 1920x1080, q=2-31, 248832 kb/s, 10 fps, 10 tbn, 10 tbc
       Metadata:
         encoder         : Lavc58.19.101 rawvideo
    frame=   85 fps= 13 q=-0.0 Lsize=N/A time=00:00:08.50 bitrate=N/A dup=0 drop=4 speed=1.31x
    video:258188kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
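
    A hedged reading of the "Impossible to convert between the formats" error above: with -hwaccel cuvid the decoder keeps its frames in CUDA memory, and the software rawvideo/v4l2 path cannot take GPU frames directly, so the frames either have to be downloaded explicitly or the decoder has to run without the hwaccel flag so that it emits system-memory NV12 frames. Two minimal sketches along those lines, reusing the camera URL from the question (untested here, so treat them as starting points rather than a confirmed fix):

    ffmpeg -hwaccel cuvid -c:v h264_cuvid -i "rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream" -vf "hwdownload,format=nv12" -pix_fmt yuv420p -f v4l2 /dev/video0

    ffmpeg -c:v h264_cuvid -i "rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream" -pix_fmt yuv420p -f v4l2 /dev/video0

    The first variant keeps the decode on the GPU and adds an explicit hwdownload before the pixel-format conversion; the second relies on h264_cuvid outputting system-memory frames when no hwaccel/device is attached, which still offloads the H.264 decoding to NVDEC.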