Advanced search

Media (0)

Word: - Tags -/serveur

No media matching your criteria is available on this site.

Other articles (51)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site and around MediaSPIP in general aims to avoid reference to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Automatic MediaSPIP installation script

    25 April 2011, by

    To work around installation difficulties, due mainly to server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it, you need SSH access to your server and a "root" account, which is required to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

On other sites (7082)

  • ffmpeg transcode to live stream

    14 September 2016, by brayancastrop

    I need to display an IP camera stream in an HTML video tag. I have figured out how to transcode the RTSP stream to a file like this:

    ffmpeg -i "rtsp://user:password@ip" -s 640x480 /tmp/output.mp4

    Now I need to be able to live-stream the RTSP input in a video tag like this:

    <video src="http://domain:port/output.mp4" autoplay="autoplay"></video>

    I tried something like the following on my server (an Ubuntu micro instance on Amazon) in order to play the video in the video tag, but it didn't work:

    ffmpeg -i "rtsp://user:password@ip" -s 640x480 http://localhost:8080/stream.mp4

    Instead I got this log:

    [tcp @ 0x747b40] Connection to tcp://localhost:8080 failed: Connection refused
    http://localhost:8080/stream.mp4: Connection refused

    I don't really understand what's happening: I'm not sure whether ffmpeg is sending the output to that URL or serving it from there. I've been checking the ffmpeg man pages but didn't find any example related to this use case, and other questions such as FFmpeg Stream Transcoding, which is similar to my last attempt, didn't help either.

    By the way, this is the camera I'm using: DS-2CD2020F-I(W) - http://www.hikvision.com/en/Products_accessries_157_i5847.html
    They offer an HTTP preview, but it's just an img tag whose source gets updated, and it appears to be unstable.

    This is my first time trying to do something like this, so any insight about how to achieve it will be really useful and appreciated.
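
    A side note on the error (an observation, not part of the original post): with an http: output URL, ffmpeg acts as a client and tries to push the stream to a server that must already be listening on that port, so with nothing listening on 8080 the connection is refused. A minimal sketch of one workaround, assuming an ffmpeg build whose http protocol supports the experimental listen option, is to let ffmpeg itself serve a fragmented MP4:

    ffmpeg -i "rtsp://user:password@ip" -s 640x480 -movflags frag_keyframe+empty_moov -listen 1 -f mp4 http://0.0.0.0:8080/stream.mp4

    The video tag would then point at http://domain:8080/stream.mp4. Browsers vary in how well they handle a growing fragmented MP4, so packaging the stream as HLS or DASH behind a regular web server is usually the more robust route.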

  • AWS Lambda in Node JS with FFMPEG Lambda Layer

    29 March 2023, by mwcwge23

    I'm trying to make a Lambda that takes a video and puts a watermark image on it. I'm using Lambda with Node.js and the FFmpeg Lambda Layer I took from here: https://serverlessrepo.aws.amazon.com/applications/us-east-1/145266761615/ffmpeg-lambda-layer

    I got these two errors and I don't have a clue what I did wrong: [error screenshots]

    Please help me :)

    (By the way, if you have an easier way to put a watermark image on a video, that would also be great.)

    This is my code (trying to put a watermark image on a video file):

    const express = require("express");
    const childProcess = require("child_process");
    const path = require("path");
    const fs = require("fs");
    const util = require("util");
    const os = require("os");
    const { fileURLToPath } = require("url");
    const { v4: uuidv4 } = require("uuid");
    const bodyParser = require("body-parser");
    const awsServerlessExpressMiddleware = require("aws-serverless-express/middleware");
    const AWS = require("aws-sdk");
    const workdir = os.tmpdir();

    const s3 = new AWS.S3();

    // declare a new express app
    const app = express();
    app.use(bodyParser.json());
    app.use(awsServerlessExpressMiddleware.eventContext());

    // Enable CORS for all methods
    app.use(function (req, res, next) {
      res.header("Access-Control-Allow-Origin", "*");
      res.header("Access-Control-Allow-Headers", "*");
      next();
    });

    const downloadFileFromS3 = function (bucket, fileKey, filePath) {
      "use strict";
      console.log("downloading", bucket, fileKey, filePath);
      return new Promise(function (resolve, reject) {
        const file = fs.createWriteStream(filePath),
          stream = s3
            .getObject({
              Bucket: bucket,
              Key: fileKey,
            })
            .createReadStream();
        stream.on("error", reject);
        file.on("error", reject);
        file.on("finish", function () {
          console.log("downloaded", bucket, fileKey);
          resolve(filePath);
        });
        stream.pipe(file);
      });
    };

    const uploadFileToS3 = function (bucket, fileKey, filePath, contentType) {
      "use strict";
      console.log("uploading", bucket, fileKey, filePath);
      return s3
        .upload({
          Bucket: bucket,
          Key: fileKey,
          Body: fs.createReadStream(filePath),
          ACL: "private",
          ContentType: contentType,
        })
        .promise();
    };

    const spawnPromise = function (command, argsarray, envOptions) {
      return new Promise((resolve, reject) => {
        console.log("executing", command, argsarray.join(" "));
        const childProc = childProcess.spawn(
            command,
            argsarray,
            envOptions || { env: process.env, cwd: process.cwd() }
          ),
          resultBuffers = [];
        childProc.stdout.on("data", (buffer) => {
          console.log(buffer.toString());
          resultBuffers.push(buffer);
        });
        childProc.stderr.on("data", (buffer) => console.error(buffer.toString()));
        childProc.on("exit", (code, signal) => {
          console.log(`${command} completed with ${code}:${signal}`);
          if (code || signal) {
            reject(`${command} failed with ${code || signal}`);
          } else {
            resolve(Buffer.concat(resultBuffers).toString().trim());
          }
        });
      });
    };

    app.post("/api/addWatermark", async (req, res) => {
      try {
        const bucketName = "bucketName ";
        const uniqeName = uuidv4() + Date.now();
        const outputPath = path.join(workdir, uniqeName + ".mp4");
        const key = "file_example_MP4_480_1_5MG.mp4";
        const localFilePath = path.join(workdir, key);
        const watermarkPngKey = "watermark.png";
        const watermarkLocalFilePath = path.join(workdir, watermarkPngKey);

        downloadFileFromS3(bucketName, key, localFilePath).then(() => {
          downloadFileFromS3(bucketName, watermarkPngKey, watermarkLocalFilePath).then(() => {
            fs.readFile(localFilePath, (err, data) => {
              if (!err && data) {
                console.log("successsss111");
              }
            });
            fs.readFile(watermarkLocalFilePath, (err, data) => {
              if (!err && data) {
                console.log("successsss222");
              }
            });

            fs.readFile(outputPath, (err, data) => {
              if (!err && data) {
                console.log("successsss3333");
              }
            });

            spawnPromise(
              "/opt/bin/ffmpeg",
              [
                "-i",
                localFilePath,
                "-i",
                watermarkLocalFilePath,
                "-filter_complex",
                `[1]format=rgba,colorchannelmixer=aa=0.5[logo];[0][logo]overlay=5:H-h-5:format=auto,format=yuv420p`,
                "-c:a",
                "copy",
                outputPath,
              ],
              { env: process.env, cwd: workdir }
            ).then(() => {
              uploadFileToS3(bucketName, uniqeName + ".mp4", outputPath, "mp4");
            });
          });
        });
      } catch (err) {
        console.log({ err });
        res.json({ err });
      }
    });

    app.listen(8136, function () {
      console.log("App started");
    });

    module.exports = app;

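    For reference (this is not part of the original post, and input.mp4, watermark.png and output.mp4 are placeholder names), the argument list passed to spawnPromise above corresponds to the following plain ffmpeg command, which can be run locally to confirm that the filter graph itself is fine:

    ffmpeg -i input.mp4 -i watermark.png -filter_complex "[1]format=rgba,colorchannelmixer=aa=0.5[logo];[0][logo]overlay=5:H-h-5:format=auto,format=yuv420p" -c:a copy output.mp4

    If that works locally, the remaining suspects are the Lambda environment rather than the command: the /opt/bin/ffmpeg path exposed by the layer, the /tmp working directory, and the function's memory and timeout limits.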

  • iOS crash when converting yuvj420p to CVPixelBufferRef using ffmpeg

    20 March 2020, by jansma

    I need to get an RTSP stream from an IP camera and convert the AVFrame data to a CVPixelBufferRef in order to send the data to another SDK.

    First I use avcodec_decode_video2 to decode the video data.

    After decoding the video I convert the data to a CVPixelBufferRef; this is my code:

    size_t srcPlaneSize = pVideoFrame_->linesize[1]*pVideoFrame_->height;
    size_t dstPlaneSize = srcPlaneSize *2;
    uint8_t *dstPlane = malloc(dstPlaneSize);
    void *planeBaseAddress[2] = { pVideoFrame_->data[0], dstPlane };

    // This loop is very naive and assumes that the line sizes are the same.
    // It also copies padding bytes.
    assert(pVideoFrame_->linesize[1] == pVideoFrame_->linesize[2]);
    for(size_t i = 0; i < srcPlaneSize; i++){
       // These might be the wrong way round.
       dstPlane[2*i  ]=pVideoFrame_->data[2][i];
       dstPlane[2*i+1]=pVideoFrame_->data[1][i];
    }

    // This assumes the width and height are even (it's 420 after all).
    assert(!pVideoFrame_->width%2 && !pVideoFrame_->height%2);
    size_t planeWidth[2] = {pVideoFrame_->width, pVideoFrame_->width/2};
    size_t planeHeight[2] = {pVideoFrame_->height, pVideoFrame_->height/2};
    // I'm not sure where you'd get this.
    size_t planeBytesPerRow[2] = {pVideoFrame_->linesize[0], pVideoFrame_->linesize[1]*2};
    int ret = CVPixelBufferCreateWithPlanarBytes(
           NULL,
           pVideoFrame_->width,
           pVideoFrame_->height,
           kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
           NULL,
           0,
           2,
           planeBaseAddress,
           planeWidth,
           planeHeight,
           planeBytesPerRow,
           NULL,
           NULL,
           NULL,
           &pixelBuf);

    When I run the app, it crashes on

    dstPlane[2*i  ]=pVideoFrame_->data[2][i];

    How can I resolve this?

    This is the console output in Xcode:

    All info found
    Setting avg frame rate based on r frame rate
    stream 0: start_time: 0.080 duration: -102481911520608.625
    format: start_time: 0.080 duration: -9223372036854.775 bitrate=0 kb/s
    nal_unit_type: 0, nal_ref_idc: 0
    nal_unit_type: 7, nal_ref_idc: 3
    nal_unit_type: 0, nal_ref_idc: 0
    nal_unit_type: 8, nal_ref_idc: 3
    Ignoring NAL type 0 in extradata
    Ignoring NAL type 0 in extradata
    nal_unit_type: 7, nal_ref_idc: 3
    nal_unit_type: 8, nal_ref_idc: 3
    nal_unit_type: 6, nal_ref_idc: 0
    nal_unit_type: 5, nal_ref_idc: 3
    unknown SEI type 229
    Reinit context to 800x608, pix_fmt: yuvj420p
    (lldb)

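    One likely cause of the crash (an observation, not something stated in the post): srcPlaneSize is computed as linesize[1] * height, but the chroma planes of a 4:2:0 frame only have height/2 rows, so the copy loop reads data[1] and data[2] far past the end of their buffers. A sketch that sidesteps the manual interleaving, assuming libswscale is available in the build, converts the decoded frame to NV12 directly into the planes of a freshly created CVPixelBuffer; pixelBufferFromFrame is a hypothetical helper name:

    #include <CoreVideo/CoreVideo.h>
    #include <libavutil/frame.h>
    #include <libswscale/swscale.h>

    static CVPixelBufferRef pixelBufferFromFrame(const AVFrame *frame) {
        CVPixelBufferRef pixelBuf = NULL;
        if (CVPixelBufferCreate(kCFAllocatorDefault, frame->width, frame->height,
                                kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                NULL, &pixelBuf) != kCVReturnSuccess) {
            return NULL;
        }
        CVPixelBufferLockBaseAddress(pixelBuf, 0);

        // Destination planes as CoreVideo laid them out: Y plane, then interleaved CbCr.
        uint8_t *dstData[2] = {
            (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuf, 0),
            (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuf, 1)
        };
        int dstLinesize[2] = {
            (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuf, 0),
            (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuf, 1)
        };

        // Let swscale do the planar-to-semi-planar conversion, honouring both the
        // source and destination line sizes (including any row padding).
        struct SwsContext *sws = sws_getContext(frame->width, frame->height, AV_PIX_FMT_YUVJ420P,
                                                frame->width, frame->height, AV_PIX_FMT_NV12,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        sws_scale(sws, (const uint8_t *const *)frame->data, frame->linesize,
                  0, frame->height, dstData, dstLinesize);
        sws_freeContext(sws);

        CVPixelBufferUnlockBaseAddress(pixelBuf, 0);
        return pixelBuf;  // the caller releases it with CVPixelBufferRelease()
    }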