Advanced search

Media (91)

Other articles (87)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (7995)

  • avcodec/nvenc: Declare support for P016

    25 February 2018, by Philip Langdale
    avcodec/nvenc: Declare support for P016
    

    nvenc doesn't support P016, but we have two problems today:

    1) We declare support for YUV444P16 which nvenc also doesn't support.
    We do this because it's the only pix_fmt we have that can
    approximate nvenc's internal format that is YUV444P10 with data in
    MSBs instead of LSBs. Because the declared format is a 16bit one,
    it will be preferentially chosen when encoding >10bit content,
    but that content will normally be YUV420P12 or P016 which should
    get mapped to P010 and not YUV444P10.

    2) Transcoding P016 content with nvenc should be possible in a pure
    hardware pipeline, and that can't be done if nvenc doesn't say it
    accepts P016. By mapping it to P010, we can use it, albeit with
    truncation. I have established that swscale doesn't know how to
    dither to 10bits so we'd get truncation anyway, even if we tried
    to do this 'properly'.

    • [DH] libavcodec/nvenc.c
    • [DH] libavcodec/version.h
  • How to record the desktop while on an x2go session via a command-line tool?

    4 December 2017, by haragei

    The goal:
    I am trying to record a specific X display on a remote server with a command-line tool.

    The problem:
    The output file contains a pure black video stream for the whole duration of the recording.

    My approach:
    I am connecting to a remote server via x2go. The server runs Ubuntu 16.04.2 with the Xfce desktop environment. The display I am trying to record is :50 (which gets created when I connect to the x2go server). I can control the remote server just fine through x2go.

    My commands for recording via ffmpeg (or avconv/recordmydesktop, which use ffmpeg underneath) all look more or less the same, like this:
    ffmpeg -f x11grab -r 25 -s 1854x1176 -i :50.0 -c:v libx264 screencast.mkv

    Sample output:

    user@machine:~/$ ffmpeg -f x11grab -r 25 -s 1854x1176 -i :50.0+0,0 -c:v libx264 -vb 4000k -an screencast.mkv
    ffmpeg version N-86766-g264f6c6 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
     configuration: --prefix=/home/user/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/user/ffmpeg_build/include --extra-ldflags=-L/home/user/ffmpeg_build/lib --bindir=/home/user/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree
     libavutil      55. 67.100 / 55. 67.100
     libavcodec     57.100.104 / 57.100.104
     libavformat    57. 75.100 / 57. 75.100
     libavdevice    57.  7.100 / 57.  7.100
     libavfilter     6. 95.100 /  6. 95.100
     libswscale      4.  7.101 /  4.  7.101
     libswresample   2.  8.100 /  2.  8.100
     libpostproc    54.  6.100 / 54.  6.100
    [x11grab @ 0x1fd9b40] XFixes not available, cannot draw the mouse.
    [x11grab @ 0x1fd9b40] Stream #0: not enough frames to estimate rate; consider increasing probesize
    Input #0, x11grab, from ':50.0+0,0':
     Duration: N/A, start: 1500041497.684675, bitrate: N/A
       Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1854x1176, 25 fps, 1000k tbr, 1000k tbn, 1000k tbc
    File 'screencast.mkv' already exists. Overwrite ? [y/N] y
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    [libx264 @ 0x1fe3040] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
    [libx264 @ 0x1fe3040] profile High 4:4:4 Predictive, level 4.2, 4:4:4 8-bit
    [libx264 @ 0x1fe3040] 264 - core 148 r2643 5c65704 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=4000 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, matroska, to 'screencast.mkv':
     Metadata:
       encoder         : Lavf57.75.100
       Stream #0:0: Video: h264 (libx264) (H264 / 0x34363248), yuv444p, 1854x1176, q=-1--1, 4000 kb/s, 25 fps, 1k tbn, 25 tbc
       Metadata:
         encoder         : Lavc57.100.104 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/4000000 buffer size: 0 vbv_delay: -1
    [swscaler @ 0x1fe94e0] Warning: data is not aligned! This can lead to a speedloss
    frame=  179 fps= 36 q=-1.0 Lsize=      16kB time=00:00:07.04 bitrate=  18.8kbits/s speed=1.43x    
    video:14kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 12.869934%
    [libx264 @ 0x1fe3040] frame I:1     Avg QP: 6.00  size:   518
    [libx264 @ 0x1fe3040] frame P:45    Avg QP: 0.44  size:    81
    [libx264 @ 0x1fe3040] frame B:133   Avg QP: 0.94  size:    73
    [libx264 @ 0x1fe3040] consecutive B-frames:  0.6%  1.1%  0.0% 98.3%
    [libx264 @ 0x1fe3040] mb I  I16..4:  0.0% 100.0%  0.0%
    [libx264 @ 0x1fe3040] mb P  I16..4:  0.0%  0.0%  0.0%  P16..4:  0.0%  0.0%  0.0%  0.0%  0.0%    skip:100.0%
    [libx264 @ 0x1fe3040] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8:  0.0%  0.0%  0.0%  direct: 0.0%  skip:100.0%
    [libx264 @ 0x1fe3040] final ratefactor: -23.85
    [libx264 @ 0x1fe3040] 8x8 transform intra:100.0%
    [libx264 @ 0x1fe3040] coded y,u,v intra: 0.0% 0.0% 0.0% inter: 0.0% 0.0% 0.0%
    [libx264 @ 0x1fe3040] i16 v,h,dc,p:  0%  0% 100%  0%
    [libx264 @ 0x1fe3040] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  0%  0% 100%  0%  0%  0%  0%  0%  0%
    [libx264 @ 0x1fe3040] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0x1fe3040] kb/s:15.56

    Using: Ubuntu 16.04.2 LTS

    I have successfully managed to capture display :50 with "simplescreenrecorder", but that tool has no command-line interface. It also uses ffmpeg, so it should somehow be possible to capture the display, but I can't get it to work properly.

  • Is ffmpeg able to read ArrayBuffer input from a stream?

    7 July 2017, by jAndy

    I want to accomplish the following tasks:

    • Record Video+Audio from any HTML5 (MediaStream) capable browser
    • Send that data via WebSocket as Blob / ArrayBuffer chunks to a server
    • Broadcast that input stream-data to multiple clients

    As it turns out, this brought me into a world of pain. The first task is fairly simple using the HTML5 MediaStream objects alongside WebSockets.

    // ... for simplicity...
    navigator.mediaDevices.getUserMedia({ audio: true, video: true }).then(stream => {
       let mediaRecorder = new MediaRecorder( stream );
       // ...
       mediaRecorder.ondataavailable = e => {
           webSocket.send( e.data ); // e.data is one binary Blob chunk; the socket is configured for binary data
       };
       mediaRecorder.start( 1000 ); // a timeslice is needed so ondataavailable fires periodically
    });

    Now, I want to receive those data fragments and stream them via the nginx vod module, because I guess I want the output stream in HLS or DASH.
    I could write a little nodejs script as a backend which just receives the binary chunks and writes them to a file or stream, then reference that file so the nginx vod module could read it and create the m3u8 manifest on the fly?
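
    A minimal sketch of what that nodejs middle layer might look like, assuming the 'ws' package; the port and output path are just placeholders for the example:

    // receiver.js - sketch: append incoming binary chunks to a webm file
    // Assumes "npm install ws", a single client; port and path are arbitrary examples.
    const WebSocket = require('ws');
    const fs = require('fs');

    const out = fs.createWriteStream('/tmp/live-input.webm');
    const wss = new WebSocket.Server({ port: 8081 });

    wss.on('connection', socket => {
        socket.on('message', chunk => {
            // each message is one MediaRecorder Blob, delivered here as a Buffer
            out.write(chunk);
        });
        socket.on('close', () => out.end());
    });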

    I am wondering now,

    • if ffmpeg is able to read that binary data directly (it should be in webm format), without a man-in-the-middle script, "somehow"?
    • If not, do I have to write the data to a file and pass that as input to ffmpeg, or can I (should I) pipe the data to a self-spawned ffmpeg instance? (If so, how? See the sketch at the end of this post.)
    • Do I actually need the nginx server (probably alongside the rtmp module) to deliver the output stream as HLS, or could I just use ffmpeg to create a dynamic manifest as well?
    • Is the nginx vod module capable of creating a dynamic hls/dash manifest, or must the input data be complete beforehand?
    • Ultimately, am I on the totally wrong track here? :P

    Actually, I just want to create a little video-live-chat demo, without any plugins or 3rd-party encoding software, purely in the browser.
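
    To illustrate the idea of piping the data to a self-spawned ffmpeg instance (from the list above), here is a rough sketch, assuming Node's child_process, the 'ws' package and an ffmpeg build with libx264; the flags and paths are placeholders, not a tested pipeline:

    // hls-relay.js - sketch: feed incoming webm chunks to ffmpeg's stdin and let ffmpeg emit HLS
    const { spawn } = require('child_process');
    const WebSocket = require('ws');

    const ffmpeg = spawn('ffmpeg', [
        '-i', 'pipe:0',                          // read the webm stream from stdin
        '-c:v', 'libx264', '-preset', 'veryfast',
        '-c:a', 'aac',
        '-f', 'hls',
        '-hls_time', '2', '-hls_list_size', '5', '-hls_flags', 'delete_segments',
        '/var/www/live/stream.m3u8'              // playlist + segments written next to this path
    ]);
    ffmpeg.stderr.pipe(process.stderr);          // keep ffmpeg's log visible

    const wss = new WebSocket.Server({ port: 8081 });
    wss.on('connection', socket => {
        socket.on('message', chunk => ffmpeg.stdin.write(chunk));
        socket.on('close', () => ffmpeg.stdin.end());
    });

    If something along these lines works, nginx would only need to serve the generated playlist and segments as static files, which is part of what the questions above are asking.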