Media (91)

Other articles (41)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects/individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Other interesting software

    13 April 2011

    We don’t claim to be the only ones doing what we do, and certainly don’t claim to be the best. We simply try to do it well and to keep getting better.
    The following list covers software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to emulate.
    We don’t know these projects well and haven’t tried them, but you can take a look.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

On other sites (8326)

  • How to record desktop while on x2go session via a command line tool ?

    4 December 2017, by haragei

    The goal:
    I am trying to record a specific X display on a remote server with a command line tool.

    The problem:
    The output file contains a pure black video stream for the whole duration of the recording.

    My approach:
    I am connecting to a remote server via x2go. The server runs Ubuntu 16.04.2 with the Xfce desktop environment. The display I am trying to record is :50 (which is created when I connect to the x2go server). I can control the remote server just fine through x2go.

    My commands for recording via ffmpeg (or avconv/recordmydesktop, which use ffmpeg underneath) all look more or less the same, like this:
    ffmpeg -f x11grab -r 25 -s 1854x1176 -i :50.0 -c:v libx264 screencast.mkv

    Sample output:

    user@machine:~/$ ffmpeg -f x11grab -r 25 -s 1854x1176 -i :50.0+0,0 -c:v libx264 -vb 4000k -an screencast.mkv
    ffmpeg version N-86766-g264f6c6 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
     configuration: --prefix=/home/user/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/user/ffmpeg_build/include --extra-ldflags=-L/home/user/ffmpeg_build/lib --bindir=/home/user/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree
     libavutil      55. 67.100 / 55. 67.100
     libavcodec     57.100.104 / 57.100.104
     libavformat    57. 75.100 / 57. 75.100
     libavdevice    57.  7.100 / 57.  7.100
     libavfilter     6. 95.100 /  6. 95.100
     libswscale      4.  7.101 /  4.  7.101
     libswresample   2.  8.100 /  2.  8.100
     libpostproc    54.  6.100 / 54.  6.100
    [x11grab @ 0x1fd9b40] XFixes not available, cannot draw the mouse.
    [x11grab @ 0x1fd9b40] Stream #0: not enough frames to estimate rate; consider increasing probesize
    Input #0, x11grab, from ':50.0+0,0':
     Duration: N/A, start: 1500041497.684675, bitrate: N/A
       Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1854x1176, 25 fps, 1000k tbr, 1000k tbn, 1000k tbc
    File 'screencast.mkv' already exists. Overwrite ? [y/N] y
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    [libx264 @ 0x1fe3040] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
    [libx264 @ 0x1fe3040] profile High 4:4:4 Predictive, level 4.2, 4:4:4 8-bit
    [libx264 @ 0x1fe3040] 264 - core 148 r2643 5c65704 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=4000 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, matroska, to 'screencast.mkv':
     Metadata:
       encoder         : Lavf57.75.100
       Stream #0:0: Video: h264 (libx264) (H264 / 0x34363248), yuv444p, 1854x1176, q=-1--1, 4000 kb/s, 25 fps, 1k tbn, 25 tbc
       Metadata:
         encoder         : Lavc57.100.104 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/4000000 buffer size: 0 vbv_delay: -1
    [swscaler @ 0x1fe94e0] Warning: data is not aligned! This can lead to a speedloss
    frame=  179 fps= 36 q=-1.0 Lsize=      16kB time=00:00:07.04 bitrate=  18.8kbits/s speed=1.43x    
    video:14kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 12.869934%
    [libx264 @ 0x1fe3040] frame I:1     Avg QP: 6.00  size:   518
    [libx264 @ 0x1fe3040] frame P:45    Avg QP: 0.44  size:    81
    [libx264 @ 0x1fe3040] frame B:133   Avg QP: 0.94  size:    73
    [libx264 @ 0x1fe3040] consecutive B-frames:  0.6%  1.1%  0.0% 98.3%
    [libx264 @ 0x1fe3040] mb I  I16..4:  0.0% 100.0%  0.0%
    [libx264 @ 0x1fe3040] mb P  I16..4:  0.0%  0.0%  0.0%  P16..4:  0.0%  0.0%  0.0%  0.0%  0.0%    skip:100.0%
    [libx264 @ 0x1fe3040] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8:  0.0%  0.0%  0.0%  direct: 0.0%  skip:100.0%
    [libx264 @ 0x1fe3040] final ratefactor: -23.85
    [libx264 @ 0x1fe3040] 8x8 transform intra:100.0%
    [libx264 @ 0x1fe3040] coded y,u,v intra: 0.0% 0.0% 0.0% inter: 0.0% 0.0% 0.0%
    [libx264 @ 0x1fe3040] i16 v,h,dc,p:  0%  0% 100%  0%
    [libx264 @ 0x1fe3040] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  0%  0% 100%  0%  0%  0%  0%  0%  0%
    [libx264 @ 0x1fe3040] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0x1fe3040] kb/s:15.56

    Using: Ubuntu 16.04.2 LTS

    I have successfully managed to capture display :50 with "simplescreenrecorder", but that tool has no command-line interface. It also uses ffmpeg, so it should somehow be possible to capture the display, but I can’t get it to work properly.
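    One thing worth ruling out (a sketch of a diagnostic, not a verified fix) is that the recording process may not be attached to the same display and X authority as the x2go session; grabbing the wrong or unauthorized display is one plausible way to end up with an empty capture. The sketch below builds the exact command from the question and runs it with DISPLAY and XAUTHORITY made explicit. The ~/.Xauthority path is an assumption — x2go may keep the session’s authority file elsewhere.

    ```python
    import os
    import subprocess

    def build_capture_cmd(display=":50.0", size="1854x1176", fps=25,
                          out="screencast.mkv"):
        # The x11grab command from the question, as an argv list.
        return [
            "ffmpeg", "-f", "x11grab",
            "-r", str(fps), "-s", size,
            "-i", display, "-c:v", "libx264", out,
        ]

    def record(display=":50.0",
               xauthority=os.path.expanduser("~/.Xauthority")):
        # Run ffmpeg with the session's display and authority file made
        # explicit.  NOTE: the ~/.Xauthority default is an assumption and
        # may differ on an x2go server.
        env = dict(os.environ, DISPLAY=display, XAUTHORITY=xauthority)
        return subprocess.run(build_capture_cmd(display), env=env)

    if __name__ == "__main__":
        print(" ".join(build_capture_cmd()))
    ```

    If a one-frame grab (replace the output with `-frames:v 1 test.png`) is also black, the problem is upstream of the encoder settings.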

  • rtsp to rtmp using ffmpeg or any tool wrapper

    27 August 2017, by Chakri

    I have a requirement where I need to restream an RTSP stream from a camera source to an RTMP server. I know this may sound like a repeated question, but my exact scenario is that I cannot do it manually on the command line with an ffmpeg command. I need a wrapper that receives the RTSP and RTMP URLs from an external source, say through a REST invocation, and then triggers the ffmpeg restream.

    Basically, the flow is like this:

    1. The camera source application sends an RTSP read event (could be a basic HTTP(REST) request with the RTSP URL, metadata about the camera, serial number, etc.) to my streamer app

    2. The streamer app computes the RTMP server URL for the corresponding camera and triggers an ffmpeg command to stream RTSP to RTMP

    Ex: /usr/bin/ffmpeg -i rtsp://10.144.11.22:554/stream1 -f flv rtmp://10.13.11.121:1935/stream1

    3. The streamer app runs step 2 in a separate thread and keeps reading the logs for monitoring purposes. It also identifies the end of the RTSP stream and sends an update event (e.g. RTSP END) to the UI

    Now, for this last step, I need a suggestion. I need a stable wrapper/API that can help here. I tried some Java wrappers, but the process hangs or fails to read the output from ffmpeg. I also need to handle streams from many cameras, where spawning a thread for each one could exhaust resources.

    So I am looking for a similar API/wrapper in C++ or Go that allows closer control over the ffmpeg process.

    Please point me to it if a similar issue has been addressed elsewhere.
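    For reference, the spawn-and-monitor part of the flow above can be sketched with nothing but standard-library subprocess handling; this is a minimal sketch, not a production restreamer, and the callbacks (`on_log`, `on_end`) are hypothetical names standing in for whatever the streamer app uses to update its UI. It mirrors the ffmpeg command from the question verbatim.

    ```python
    import subprocess

    def build_restream_cmd(rtsp_url, rtmp_url):
        # Mirrors the command from the question: read RTSP, output FLV to RTMP.
        return ["ffmpeg", "-i", rtsp_url, "-f", "flv", rtmp_url]

    def restream(rtsp_url, rtmp_url, on_log, on_end):
        """Spawn ffmpeg and consume its stderr (where ffmpeg logs) line by line.

        Draining stderr continuously is what prevents the OS pipe buffer from
        filling up and the child from blocking -- a common reason wrapper
        processes appear to hang.
        """
        proc = subprocess.Popen(
            build_restream_cmd(rtsp_url, rtmp_url),
            stderr=subprocess.PIPE,
            text=True,
        )
        for line in proc.stderr:          # logs arrive as the stream runs
            on_log(line.rstrip())
        proc.wait()
        on_end(proc.returncode)           # e.g. send the "RTSP END" event
        return proc.returncode
    ```

    For many cameras, one monitoring thread per ffmpeg process can be avoided by draining all the stderr pipes from a single event loop (e.g. `asyncio.create_subprocess_exec`), which addresses the thread-exhaustion concern.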

  • Using FFmpeg within a python application : ffmpeg tool or libav* libraries ?

    5 September 2017, by user2457666

    I am working on a python project that uses ffmpeg as part of its core functionality. Essentially the functionality from ffmpeg that I use boils down to these two commands:

    ffmpeg -i udp:// -qscale:v 2 -vf "fps=30" sttest%04d.jpg
    ffmpeg -i udp:// -map data-re -codec copy -f data out.bin

    Pretty simple stuff.

    I am trying to create a self-contained program (which uses the above ffmpeg functionality) that can easily be installed on any system without relying on that system already having the necessary dependencies; ideally I would package those dependencies with the program itself.

    With that in mind, would it be best to use the libav* libraries to perform this functionality from within the program? Or would a wrapper (ffmpy) for the ffmpeg command-line tool be a better option? My current thinking on the drawbacks of each is that using the libraries may be best practice, but it seems overly complex to have to learn how to use them (and potentially learn C, which I’ve never learned, in the process) just to do the two basic things mentioned above. The libraries are overall a bit of a black box to me and don’t have much documentation. The problem with using a wrapper for ffmpeg, on the other hand, is that it essentially relies on calling a subprocess, which seems somewhat sloppy, although I’m not sure why I feel so viscerally opposed to subprocesses.
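    For scale, the subprocess route for the two commands above is very short; here is a sketch using only the standard library (no ffmpy needed). The flags are copied verbatim from the question, and the `udp://` inputs are left as the placeholders they are there — the host and port still need filling in.

    ```python
    import subprocess

    # The two commands from the question, expressed as argv lists.  Building
    # the list separately from running it makes the commands easy to test
    # without actually invoking ffmpeg.
    def frames_cmd(src="udp://", pattern="sttest%04d.jpg"):
        # Grab JPEG frames at 30 fps from the stream (placeholder `src`).
        return ["ffmpeg", "-i", src, "-qscale:v", "2", "-vf", "fps=30",
                pattern]

    def data_cmd(src="udp://", out="out.bin"):
        # Copy the data stream out to a file, flags as in the question.
        return ["ffmpeg", "-i", src, "-map", "data-re", "-codec", "copy",
                "-f", "data", out]

    def run(cmd):
        # check=True raises CalledProcessError on a non-zero ffmpeg exit,
        # so failures are not silently swallowed.
        return subprocess.run(cmd, check=True, capture_output=True, text=True)
    ```

    Packaging-wise, this route only requires shipping (or locating) an ffmpeg binary alongside the program, whereas the libav* route means compiling against and bundling the C libraries themselves.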