
Other articles (108)

  • Media-specific libraries and software

    10 December 2010, by

    For correct and optimal operation, several things need to be taken into account.
    It is important, after installing apache2, mysql and php5, to install the other required software, whose installation is described in the related links: a set of multimedia libraries (x264, libtheora, libvpx) used for encoding and decoding video and audio so as to support as many file formats as possible. Cf.: this tutorial; FFMpeg with as many decoders and (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Definable image and logo sizes

    9 February 2011, by

    In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can vary from one theme to another, they can be defined directly in the theme, sparing the user from having to configure them manually after changing the appearance of the site.
    These image sizes are also available in the MediaSPIP Core-specific configuration. The maximum size of the site logo in pixels, which allows (...)

On other sites (9894)

  • How to send encoded video (or audio) data from server to client in a way that's decodable by the WebCodecs API with minimal latency and data overhead

    11 January 2023, by Tiger Yang

    My question (read the entire post for context):

    


    Given the unique circumstance of only ever decoding data from a specifically-configured encoder, what is the best way I can send the encoded bitstream along with the bare minimum extra bytes required to properly configure the decoder on the client's end (including only things that change per stream, and omitting things that don't, such as resolution)? I'm a sucker for zero compromises, and I think I am willing to design my own minimal container format to accomplish this.

    


    Context and problem:

    


    I'm working on a remote desktop implementation that consists of a server that captures and encodes the display and speakers using FFmpeg and forwards it via a pipe to a Go program, which sends it on two unidirectional WebTransport streams to my client, which I plan to decode using the WebCodecs API. According to MDN, the video decoder needs to be fed via .configure() an object containing the following: https://developer.mozilla.org/en-US/docs/Web/API/VideoDecoder/configure before it's able to decode anything.

    


    The same goes for the audio decoder: https://developer.mozilla.org/en-US/docs/Web/API/AudioDecoder/configure
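
    To make the requirement concrete, here is a minimal sketch of what those two configure() calls look like on the client. The codec strings, dimensions and the hvccBytes variable are placeholders made up for illustration, not values from my actual stream; the real values have to match whatever the encoder emits, and the "description" is exactly the per-stream part in question.

    // Sketch only -- placeholder values, not taken from the actual stream.
    const videoDecoder = new VideoDecoder({
      output: (frame) => { /* render the VideoFrame, then frame.close() */ },
      error: (e) => console.error(e),
    });
    videoDecoder.configure({
      codec: "hvc1.1.6.L120.90",   // placeholder HEVC codec string
      codedWidth: 1920,
      codedHeight: 1080,
      description: hvccBytes,      // hvcC / extradata bytes -- the part that changes per stream
    });

    const audioDecoder = new AudioDecoder({
      output: (audioData) => { /* queue for playback, then audioData.close() */ },
      error: (e) => console.error(e),
    });
    audioDecoder.configure({
      codec: "opus",               // placeholder -- whatever the audio encoder actually produces
      sampleRate: 48000,
      numberOfChannels: 2,
    });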

    


    What I've tried so far:

    


    Because this remote desktop will be for my personal use only, it would only ever receive streams from a specific encoder configured in a specific way, encoding video at a specific resolution, framerate, color space, etc. Therefore, I took my video capture FFmpeg command...

    


    videoString := []string{
        "ffmpeg",
        "-init_hw_device", "d3d11va",
        "-filter_complex", "ddagrab=video_size=1920x1080:framerate=60",
        "-vcodec", "hevc_nvenc",
        "-tune", "ll",
        "-preset", "p7",
        "-spatial_aq", "1",
        "-temporal_aq", "1",
        "-forced-idr", "1",
        "-rc", "cbr",
        "-b:v", "500K",
        "-no-scenecut", "1",
        "-g", "216000",
        "-f", "hevc", "-",
    }


    


    ...and instructed it to write to an mp4 file instead of outputting to the pipe, and then I had this WebCodecs demo https://w3c.github.io/webcodecs/samples/video-decode-display/ demux it using mp4box.js. Knowing that the demo outputs a proper .configure() object, I blindly copied it and had my client configure using that every time. Sadly, it didn't work, and I have since noticed that the "description" part of the configure object changes despite the encoder and parameters being the same.

    


    I knew that mp4 files worked via mp4box, but they can't be streamed with low latency over a network; additionally, ffmpeg's -f parameter specifies the muxer to use, but there are so many different types.

    


    At this point, I think I'm completely out of my depth, so:

    


    Given the unique circumstance of only ever decoding data from a specifically-configured encoder, what is the best way I can send the encoded bitstream along with the bare minimum extra bytes required to properly configure the decoder on the client's end (including only things that change per stream, and omitting things that don't, such as resolution)? I'm a sucker for zero compromises, and I think I am willing to design my own minimal container format to accomplish this. (copied above)

    


  • How do I exit or kill a running OS process (FFmpeg) started with Node.js without crashing my app?

    18 November 2022, by Alula Leakemariam

    I am developing an Express application that starts FFmpeg through Node.js's child_process. The process starts, but when I try to kill specific processes by pid, the whole app crashes and has to be restarted.

    


    I start the stream with this:

    


    const { spawn, exec } = require("child_process");
const execFile = require("child_process").execFile;

function startStream(foo, url, bar) {
  // create the working directory for this stream
  const ls = spawn(`mkdir`, [`${foo}`], {
    cwd: `path/to/working/dir`,
    stdio: "inherit",
  });

  // start ffmpeg copying the input stream into HLS segments
  const child = execFile(
    "ffmpeg",
    ["-i", url, "-hls_flags", "delete_segments", "-c", "copy", `path/to/file.m3u8`],
    { maxBuffer: Infinity },
    (error, stdout, stderr) => {
      if (error) {
        console.error("stderr: =============================", stderr);
        throw error;
      }
      console.log("stdout: ==========================", stdout);
    }
  );

  // list running processes to confirm ffmpeg is up
  const checkProcesses = exec(`ps`, (error, stdout, stderr) => {
    if (error) {
      console.error("stderr: =============================", stderr);
      throw error;
    }
    console.log("stdout: ==========================", stdout);
  });

  return child.pid;
}

module.exports = startStream;
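
    For context, here is a reduced sketch of the same start-up written with spawn and event listeners on the child process instead of execFile's completion callback; the helper name and the output path are placeholders, not the real code:

    const { spawn } = require("child_process");

    // Reduced sketch with placeholder paths: start ffmpeg and observe its
    // lifecycle through events rather than a completion callback.
    function startStreamSketch(url, outDir) {
      const child = spawn(
        "ffmpeg",
        ["-i", url, "-hls_flags", "delete_segments", "-c", "copy", `${outDir}/index.m3u8`],
        { stdio: ["ignore", "ignore", "pipe"] }
      );

      child.stderr.on("data", () => {
        // ffmpeg writes its progress/log output to stderr
      });

      child.on("error", (err) => {
        // the process could not be started at all
        console.error("failed to start ffmpeg:", err);
      });

      child.on("close", (code, signal) => {
        // fires when ffmpeg exits, including after being killed
        console.log(`ffmpeg exited with code ${code}, signal ${signal}`);
      });

      return child.pid;
    }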


    


    The output below is the result of running the ps command to list the running processes, which shows ffmpeg as one of them. An additional ffmpeg entry appears each time I run the function above.

    


       6394 pts/3    00:00:00 bash
 110129 pts/3    00:00:28 npm run start
 110140 pts/3    00:00:00 sh
 110141 pts/3    00:00:38 node
 136730 pts/3    00:00:00 node
 137148 pts/3    00:00:00 ffmpeg
 137358 pts/3    00:00:00 sh
 137359 pts/3    00:00:00 ps



    


    This also starts copying the FFmpeg output files to the directory as expected. Afterwards, another endpoint uses the function below to delete the files created and (attempt to) kill the process:

    


    const { spawn, exec } = require("child_process");
const kill = require("tree-kill");

async function endStream(foo, bar, pid) {
  kill(pid, "SIGKILL");

  // Also tried this commented out code below with spawn and exec
  // const killProcessByPid = spawn("kill", ["-9", `${pid}`]);
  
  const ls = exec(`rm -rf  ${foo}`, {
    cwd: `./path/to/working/dir`,
  });
}
module.exports = endStream;
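
    And a matching sketch of the shutdown side, written against the ChildProcess object itself rather than a bare pid (again, the helper name and arguments are made up for illustration):

    const { once } = require("events");
    const fs = require("fs/promises");

    // Sketch: ask ffmpeg to stop, wait for it to actually exit, then remove
    // the stream's working directory.
    async function endStreamSketch(child, dir) {
      child.kill("SIGTERM");        // or "SIGKILL" if it refuses to stop
      await once(child, "close");   // resolves once the process has exited
      await fs.rm(dir, { recursive: true, force: true });
    }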



    


    I've tried a few variations, but the result I get is usually along the lines of this:

    


    Exiting normally, received signal 15.
        at ChildProcess.exithandler (node:child_process:402:12)
        at ChildProcess.emit (node:events:513:28)
        at maybeClose (node:internal/child_process:1100:16)
        at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5) {
      code: 255,
      killed: false,
      signal: null,
      cmd: 'ffmpeg -i <url>.m3u8 -hls_flags delete_segments -c copy path/to/file.m3u8'
    }
    [nodemon] app crashed - waiting for file changes before starting...

    I only started using exec/execFile/spawn after failing with libraries like fluent-ffmpeg and a few others, though it doesn't look like starting the processes causes the same issues that exiting them does.

    If there's anything else I can optimize while my code is up, I'd love to hear it. I'm not even sure whether this code will hold up with many ffmpeg processes running concurrently.

    I am running this on Linux (Ubuntu) right now and eventually plan to deploy the server.

  • avformat_write_header() changes my stream's time_base

    12 April 2023, by greken

    I have a high-framerate camera that can capture >2000 fps. My plan was to assign the actual capture timestamps with µs resolution to the frames and encode with H.264 into a Matroska file.

    So I set the time_base of both the AVStream and the AVCodecContext to {1, 1'000'000}. But after calling avformat_write_header(avFormatContext, nullptr) I notice that the stream's time_base is {1, 1'000}. Now, since my framerate is double this resolution, I get identical consecutive timestamps and half my frames get lost when I extract them from the video file.

    Does anyone have an idea why this is happening and what I can do about it? Preferably in a way that preserves the correct timestamps.