
Other articles (32)

  • Monitoring de fermes de MediaSPIP (et de SPIP tant qu’à faire)

    31 May 2013

    When managing several (or even several dozen) MediaSPIP instances on the same installation, it can be very handy to see certain information at a glance.
    This article documents the Munin monitoring scripts developed with the help of Infini.
    These scripts are installed automatically by the automatic installation script whenever a Munin installation is detected.
    Description of the scripts
    Three Munin scripts have been developed:
    1. mediaspip_medias
    A script for (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including:
      • critiques of existing features and functions
      • articles contributed by developers, administrators, content producers and editors
      • screenshots to illustrate the above
      • translations of existing documentation into other languages
    To contribute, register to the project users’ mailing (...)

On other sites (6462)

  • checkasm : Explicitly declare function prototypes

    20 August 2015, by Henrik Gramner


    Now we no longer have to rely on function pointers intentionally
    declared without specified argument types.

    This makes it easier to support functions with floating point parameters
    or return values as well as functions returning 64-bit values on 32-bit
    architectures. It also avoids having to explicitly cast strides to
    ptrdiff_t for example.

    Signed-off-by: Anton Khirnov <anton@khirnov.net>

    • [DBH] tests/checkasm/Makefile
    • [DBH] tests/checkasm/bswapdsp.c
    • [DBH] tests/checkasm/checkasm.c
    • [DBH] tests/checkasm/checkasm.h
    • [DBH] tests/checkasm/h264pred.c
    • [DBH] tests/checkasm/h264qpel.c
    • [DBH] tests/checkasm/x86/checkasm.asm
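    The benefit of the change above can be sketched in a few lines of plain C (the function and typedef names here are illustrative, not the actual checkasm code): with a fully typed function-pointer prototype the compiler converts and checks arguments itself, whereas a pointer declared with empty parentheses accepts anything, which is why strides previously had to be cast to ptrdiff_t by hand.

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* A DSP-style function taking a stride, as checkasm tests do. */
    static int sum_rows(const uint8_t *buf, ptrdiff_t stride, int rows)
    {
        int sum = 0;
        for (int i = 0; i < rows; i++)
            sum += buf[i * stride];
        return sum;
    }

    /* Old style: no argument types in the pointer declaration, so the
     * compiler cannot check or convert what the caller passes. */
    typedef int (*untyped_fn)();

    /* New style: the full prototype is part of the pointer type, so an
     * int stride argument is implicitly converted to ptrdiff_t. */
    typedef int (*typed_fn)(const uint8_t *, ptrdiff_t, int);

    int main(void)
    {
        uint8_t buf[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        typed_fn f = sum_rows;
        /* The literal 2 is converted to ptrdiff_t automatically:
         * buf[0] + buf[2] + buf[4] = 1 + 3 + 5 */
        assert(f(buf, 2, 3) == 9);
        return 0;
    }
    ```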
  • How to encode 24-bit audio with libav/ffmpeg ?

    18 March 2015, by andrewrk

    Here’s a code snippet from libavutil/samplefmt.h:

    /**
    * Audio Sample Formats
    *
    * @par
    * The data described by the sample format is always in native-endian order.
    * Sample values can be expressed by native C types, hence the lack of a signed
    * 24-bit sample format even though it is a common raw audio data format.
    *
    * @par
    * The floating-point formats are based on full volume being in the range
    * [-1.0, 1.0]. Any values outside this range are beyond full volume level.
    *
    * @par
    * The data layout as used in av_samples_fill_arrays() and elsewhere in Libav
    * (such as AVFrame in libavcodec) is as follows:
    *
    * @par
    * For planar sample formats, each audio channel is in a separate data plane,
    * and linesize is the buffer size, in bytes, for a single plane. All data
    * planes must be the same size. For packed sample formats, only the first data
    * plane is used, and samples for each channel are interleaved. In this case,
    * linesize is the buffer size, in bytes, for the 1 plane.
    */
    enum AVSampleFormat {
       AV_SAMPLE_FMT_NONE = -1,
       AV_SAMPLE_FMT_U8,          ///< unsigned 8 bits
       AV_SAMPLE_FMT_S16,         ///< signed 16 bits
       AV_SAMPLE_FMT_S32,         ///< signed 32 bits
       AV_SAMPLE_FMT_FLT,         ///< float
       AV_SAMPLE_FMT_DBL,         ///< double

       AV_SAMPLE_FMT_U8P,         ///< unsigned 8 bits, planar
       AV_SAMPLE_FMT_S16P,        ///< signed 16 bits, planar
       AV_SAMPLE_FMT_S32P,        ///< signed 32 bits, planar
       AV_SAMPLE_FMT_FLTP,        ///< float, planar
       AV_SAMPLE_FMT_DBLP,        ///< double, planar

       AV_SAMPLE_FMT_NB           ///< Number of sample formats. DO NOT USE if linking dynamically
    };

    It specifically mentions that 24-bit is missing even though it is a common raw audio data format. So if I were using libav/ffmpeg to export to an audio file, how would I use 24-bit audio?

    Exporting an audio file looks something like this:

    AVCodec *codec = get_codec();
    AVOutputFormat *oformat = get_output_format();
    AVFormatContext *fmt_ctx = avformat_alloc_context();
    assert(fmt_ctx);
    int err = avio_open(&fmt_ctx->pb, get_output_filename(), AVIO_FLAG_WRITE);
    assert(err >= 0);
    fmt_ctx->oformat = oformat;
    AVStream *stream = avformat_new_stream(fmt_ctx, codec);
    assert(stream);
    AVCodecContext *codec_ctx = stream->codec;
    codec_ctx->bit_rate = get_export_bit_rate();

    // How to set this to 24 bit instead of 32?
    codec_ctx->sample_fmt = AV_SAMPLE_FMT_S32;

    codec_ctx->sample_rate = get_sample_rate();
    codec_ctx->channel_layout = get_channel_layout();
    codec_ctx->channels = get_channel_count();
    codec_ctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
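    As the header comment explains, there is no S24 sample format; as far as I understand it (this goes beyond what the question shows), FFmpeg's 24-bit PCM encoders such as pcm_s24le simply declare AV_SAMPLE_FMT_S32 as their sample format and treat the low 8 bits of each sample as padding, so you keep sample_fmt = AV_SAMPLE_FMT_S32 and feed MSB-justified samples. A minimal, self-contained sketch of that packing convention (the helper names are mine, not part of the libav API):

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* 24-bit samples carried in AV_SAMPLE_FMT_S32 are MSB-justified:
     * the 24 significant bits occupy bits 8..31 and the low 8 bits are
     * zero. These helpers are illustrative, not libav functions. */

    /* Widen a signed 24-bit value into the S32 container.
     * Multiplying by 256 is a well-defined left shift by 8. */
    static int32_t s24_to_s32(int32_t s24)
    {
        return s24 * 256;
    }

    /* Narrow an S32 container back to the 24 significant bits, as a
     * 24-bit encoder would before writing 3 bytes per sample.
     * Right shift of a negative value is arithmetic on mainstream
     * compilers, preserving the sign. */
    static int32_t s32_to_s24(int32_t s32)
    {
        return s32 >> 8;
    }

    int main(void)
    {
        int32_t full_scale = 0x7FFFFF;            /* max positive 24-bit */
        assert(s24_to_s32(full_scale) == 0x7FFFFF00);
        assert(s32_to_s24(0x7FFFFF00) == full_scale);
        assert(s32_to_s24(s24_to_s32(-1)) == -1); /* negatives round-trip */
        return 0;
    }
    ```

    In other words, the code in the question would keep AV_SAMPLE_FMT_S32 as written and choose a 24-bit codec for the output; some codecs also let you record the real precision via codec_ctx->bits_per_raw_sample = 24, though whether a given encoder honors it is codec-specific.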
  • How can I use both vaapi acceleration and video overlays with ffmpeg

    10 November 2019, by nrdxp

    I am fairly new to ffmpeg and I am trying to capture both my webcam and my screen, all using vaapi acceleration (without it, it's too slow). I want to overlay the webcam in the bottom-right corner using ffmpeg. I need to use kmsgrab so I can record a Wayland session on Linux.

    What I am doing to work around this is simply opening the webcam in a window using the SDL backend, and then calling another instance of ffmpeg to record the screen. This isn't ideal, however, since the window with the webcam gets covered up by other windows when fullscreen or when switching workspaces. I would much rather encode the webcam on top of the screencast so it is always visible, no matter what I am doing.

    Here is the workaround script I am using right now:

    #!/usr/bin/env zsh

    # record webcam and open it in sdl window
    ffmpeg -v quiet -hide_banner -re -video_size 640x480 -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -i /dev/video0 -vf 'format=nv12,hwupload' -c:v hevc_vaapi -f hevc -threads $(nproc) - | ffmpeg -v quiet -i - -f sdl2 - &

    # wait for webcam window to open
    until swaymsg -t get_tree | grep 'pipe:' &>/dev/null; do
     sleep 0.5
    done

    # position webcam in the bottom right corner of screen using sway
    swaymsg floating enable
    swaymsg resize set width 320 height 240
    swaymsg move position 1580 795
    swaymsg focus tiling

    #screencast
    ffmpeg -format bgra -framerate 60 -f kmsgrab -thread_queue_size 1024 -i - \
     -f alsa -ac 2 -thread_queue_size 1024 -i hw:0 \
     -vf 'hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12' \
     -c:v h264_vaapi -g 120 -b:v 3M -maxrate 3M -pix_fmt vaapi_vld -c:a aac -ab 96k -threads $(nproc) \
     output.mkv

    kill %1

    So far, I’ve tried adding the webcam as a second input to the screencast and using:

    -filter_complex '[1] scale=w=320:h=240,hwupload,format=nv12 [tmp]; \
    [0][tmp] overlay=x=1580:y=795,hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12' \

    But I get the error:

    Impossible to convert between the formats supported by the filter 'Parsed_hwupload_1' and the filter 'auto_scaler_0'
    Error reinitializing filters!
    Failed to inject frame into filter network: Function not implemented
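    That error generally means a software frame reached a filter position where ffmpeg's auto-inserted scaler cannot convert between a hardware and a software pixel format: in the attempted graph, hwupload runs on the webcam branch before the overlay, so the overlay is asked to mix a hardware frame with a software one. One commonly suggested rearrangement, sketched below as an untested assumption (the filters hwmap, hwdownload, hwupload, scale, scale_vaapi and overlay are real ffmpeg filters, but this exact graph has not been verified on this setup), is to do the overlay entirely in software and upload only once at the end:

    ```shell
    # Untested sketch: bring kmsgrab frames into system memory, overlay the
    # webcam in software, then hwupload a single time before h264_vaapi.
    ffmpeg -vaapi_device /dev/dri/renderD128 \
      -format bgra -framerate 60 -f kmsgrab -thread_queue_size 1024 -i - \
      -i /dev/video0 \
      -filter_complex \
       '[0]hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12,hwdownload,format=nv12[scr]; \
        [1]scale=w=320:h=240[cam]; \
        [scr][cam]overlay=x=1580:y=795,format=nv12,hwupload[out]' \
      -map '[out]' -c:v h264_vaapi -g 120 -b:v 3M -maxrate 3M output.mkv
    ```

    The hwdownload/hwupload round trip costs bandwidth; newer ffmpeg builds also provide an overlay_vaapi filter that keeps both inputs on the GPU, which may be worth trying instead if available.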