
Media (91)

Other articles (73)

  • Customise by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require specific knowledge since the usual SPIP private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

On other sites (9096)

  • canvas to ffmpeg - invalid data error

    12 July 2017, by nameless

    I’m currently trying to pipe some raw data directly into ffmpeg. The data comes from a canvas. What I do is the following:

    var imageData = ctx.getImageData(0,0,600,600);
    var dataArray = imageData.data;
    var rgbArray = [];
    for (var i = 0; i < dataArray.length; i+=4) {
       rgbArray.push([dataArray[i], dataArray[i+1], dataArray[i+2]])
    }

    var rgb24Array = rgbArray.map(function(rgbList){
       return (rgbList[0] << 16) | (rgbList[1] << 8) | (rgbList[2])
    });

    The ffmpegArgs are set like this:

    var ffmpegArgs = [
       '-c:v', 'rawvideo', // input container
       '-f', 'rawvideo',
       '-pix_fmt', 'rgb24', // input pixel format
       '-s', '600x600', // input size
       '-i', 'pipe:0', // input source
       '-format', 'mp4', // output container format
       '-c:v', 'libx264', // output video codec
       '-b:v', '2m', // output bitrate
       'udp://239.255.123.46:1234' // output destination
    ];

    So basically, the imageData I get from the canvas has 4 elements per pixel (RGBA). I first filter out the RGB values and then pack them to send to ffmpeg, piping them in directly with ffmpeg.stdin.write(rgb24Array).

    But I get a TypeError: invalid data and I’m not sure why... I also tried leaving the data as it is and using rgba as pix_fmt, but I get the same error.

    The stack trace of the error is:

    TypeError: invalid data
      at Socket.write (net.js:617:11)
      at null._repeat (....line with ffmpeg.stdin.write() in it)
      at wrapper [as _onTimeout] (timers.js:275:11)
      at Timer.listOnTimeout (timers.js:92:15)

    Does anybody have an idea where the problem could be?

    Edit:
    I changed some things; first, I now use a Uint8Array:

    var imageData = ctx.getImageData(0, 0, 600, 600);
    var srcArray = imageData.data;
    var dstArray = new Uint8Array(imageData.width * imageData.height * 3);
    for (var i = 0, p = 0; i < srcArray.length; i++) { // the loop's i++ skips the alpha byte
       dstArray[p++] = srcArray[i++]; // red component, then increment
       dstArray[p++] = srcArray[i++]; // green component, then increment
       dstArray[p++] = srcArray[i++]; // blue component, then increment
    }

    ffmpeg.stdin.write(dstArray);

    I also added the -video_size parameter to the ffmpegArgs, but it’s still not working and I get the same errors.
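
    For reference, a minimal sketch of how this pipe is usually wired up in Node.js, assuming ffmpeg is spawned with child_process and the sizes and destination match the question (the -f mpegts container choice and the Buffer.from wrapper are assumptions, not part of the original code). Older Node versions throw "TypeError: invalid data" from Socket.write when the argument is neither a string nor a Buffer:

    // Hedged sketch, not the asker's code: names mirror the question where possible.
    var spawn = require('child_process').spawn;

    var ffmpegArgs = [
       '-f', 'rawvideo',            // input format: raw frames
       '-pix_fmt', 'rgb24',         // input pixel format
       '-s', '600x600',             // input size
       '-i', 'pipe:0',              // read frames from stdin
       '-c:v', 'libx264',           // output video codec
       '-b:v', '2M',                // output bitrate
       '-f', 'mpegts',              // assumption: MPEG-TS suits UDP better than mp4
       'udp://239.255.123.46:1234'  // output destination
    ];
    var ffmpeg = spawn('ffmpeg', ffmpegArgs);

    function writeFrame(ctx) {
       var src = ctx.getImageData(0, 0, 600, 600).data;  // RGBA bytes from the canvas
       var dst = new Uint8Array(600 * 600 * 3);          // packed RGB bytes
       for (var i = 0, p = 0; i < src.length; i += 4) {
          dst[p++] = src[i];      // R
          dst[p++] = src[i + 1];  // G
          dst[p++] = src[i + 2];  // B
       }
       ffmpeg.stdin.write(Buffer.from(dst));             // wrap in a Buffer before writing
    }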

  • swscale: add two spatially stable dithering methods

    23 March 2014, by Øyvind Kolås
    swscale: add two spatially stable dithering methods
    

    Both of these dithering methods are from http://pippin.gimp.org/a_dither/. For GIF they can be considered better than bayer (they provide more gray levels) and they are spatially stable: often more than twice as good compression and less visual flicker than error-diffusion methods (they also avoid the error-shadow artifacts of diffusion dithers).

    These methods are similar to blue/green-noise dither masks, but are simple enough to generate their mask on the fly. They are still research work in progress; more expensive-to-generate masks (which can be used in a LUT), such as ’void and cluster’ and similar methods, will yield superior results. A command-line usage sketch follows the file list below.

    • [DH] doc/scaler.texi
    • [DH] libswscale/options.c
    • [DH] libswscale/output.c
    • [DH] libswscale/swscale_internal.h
    • [DH] libswscale/utils.c
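
    As a usage sketch (an assumption, not part of the commit message): the values added here appear as a_dither and x_dither on the sws_dither scaler option documented in doc/scaler.texi, and in builds where the ffmpeg CLI forwards swscale options to auto-inserted scalers, selecting the new mask for a GIF conversion might look roughly like this:

    # Hedged sketch: pick the a_dither mask for the pal8 conversion done by swscale.
    # Assumption: this build exposes sws_dither (auto, bayer, ed, a_dither, x_dither) on the CLI.
    ffmpeg -i input.mp4 -sws_dither a_dither -pix_fmt pal8 output.gif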
  • Lost video stream when streaming using FFmpeg and RTSP camera

    13 February 2019, by Vape

    On the Linux server I have FFmpeg installed, which streams video from a low-cost Chinese IP camera to a Twitch or YouTube server. After a few hours the video is no longer visible, but on the server side FFmpeg is still running, and the IP camera still responds to the "ping" command.

    Here is the script I’m using:

    #
    # Camera IP
    #
    AQUARIUM_CAM_IP="192.168.123.102"


    #
    # Aquarium data file
    #
    AQUARIUM_DATA_FILE="/run/aquarium-cam/data.txt"


    #
    # FFmpeg parameters
    #
    FFMPEG_LOG_LEVEL=fatal

    # Bitrate (1000k = 1Mbit/s)  and  encoding speed (affects CPU)  and  number of CPU cores to use
    FFMPEG_VBR="1000k"
    FFMPEG_QUAL="ultrafast"
    FFMPEG_THREADS="1"

    # Streaming source
    FFMPEG_CAM_RTSP_SRC="rtsp://${AQUARIUM_CAM_IP}:554/user=admin&password=&channel=1&stream=0.sdp" # Camera source

    # Streaming destination
    FFMPEG_TWITCH_STREAM_URL_DST="rtmp://live-ber.twitch.tv/app"  # RTMP stream URL
    FFMPEG_TWITCH_KEY=""

    # Data overlay setup
    FFMPEG_TEXT_OVERLAY_FONT_PATH="OpenSans-Regular.ttf"
    FFMPEG_TEXT_OVERLAY_FONT_SIZE=25
    FFMPEG_TEXT_OVERLAY_OFFSET_X=5
    FFMPEG_TEXT_OVERLAY_OFFSET_Y=60
    FFMPEG_TEXT_OVERLAY_RELOAD=1
    FFMPEG_TEXT_OVERLAY_BOX="1"
    FFMPEG_TEXT_OVERLAY_BOX_BORDER_WIDTH="5"
    FFMPEG_TEXT_OVERLAY_BOX_COLOR="blue@0.5"

    The FFmpeg script:

    ffmpeg \
       -loglevel ${FFMPEG_LOG_LEVEL} -f lavfi -i anullsrc \
       -rtsp_transport tcp \
       -i "${FFMPEG_CAM_RTSP_SRC}" \
       -vcodec libx264 -pix_fmt yuv420p -preset ${FFMPEG_QUAL} -g 75 -b:v ${FFMPEG_VBR} \
       -vf "\
    drawtext=fontfile=${FFMPEG_TEXT_OVERLAY_FONT_PATH}:textfile=${AQUARIUM_DATA_FILE}:\
    x=${FFMPEG_TEXT_OVERLAY_OFFSET_X}:y=${FFMPEG_TEXT_OVERLAY_OFFSET_Y}:\
    reload=${FFMPEG_TEXT_OVERLAY_RELOAD}:\
    fontcolor=white:fontsize=${FFMPEG_TEXT_OVERLAY_FONT_SIZE}:\
    box=${FFMPEG_TEXT_OVERLAY_BOX}:boxborderw=${FFMPEG_TEXT_OVERLAY_BOX_BORDER_WIDTH}:\
    boxcolor=${FFMPEG_TEXT_OVERLAY_BOX_COLOR}"\
       -threads ${FFMPEG_THREADS} -bufsize 512k \
       -f flv "${FFMPEG_TWITCH_STREAM_URL_DST}/${FFMPEG_TWITCH_KEY}"

    The other strange thing is that when FFmpeg starts and the stream is running, CPU utilization is about 30%; when there is no stream but FFmpeg is still alive, CPU utilization drops to 20% or less.

    Any idea how to resolve this kind of problem?
    Does FFmpeg have an option to terminate if there is no stream, or if it has lost the connection to the camera?
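
    Not part of the original post, but a common mitigation is to make ffmpeg fail fast on a dead RTSP source and restart it from a wrapper loop. A rough sketch under those assumptions (the drawtext filter is omitted for brevity; -stimeout is the RTSP socket I/O timeout in microseconds in FFmpeg builds of that era, and newer builds may name it -timeout):

    #!/bin/bash
    # Hedged sketch: restart ffmpeg whenever it exits, e.g. after the camera stalls.
    while true; do
       ffmpeg \
          -loglevel ${FFMPEG_LOG_LEVEL} -f lavfi -i anullsrc \
          -rtsp_transport tcp -stimeout 5000000 \
          -i "${FFMPEG_CAM_RTSP_SRC}" \
          -vcodec libx264 -pix_fmt yuv420p -preset ${FFMPEG_QUAL} -g 75 -b:v ${FFMPEG_VBR} \
          -threads ${FFMPEG_THREADS} -bufsize 512k \
          -f flv "${FFMPEG_TWITCH_STREAM_URL_DST}/${FFMPEG_TWITCH_KEY}"
       echo "ffmpeg exited with status $?, restarting in 5 seconds..." >&2
       sleep 5
    done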