Advanced search

Media (0)

Word: - Tags -/diogene

No media matching your criteria is available on this site.

Other articles (70)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • General document management

    13 May 2011, by

    MediaSPIP never modifies the original document uploaded to it.
    For each uploaded document, it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original available for download in case it cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually.
    The tables below explain what MediaSPIP can do (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (5111)

  • lavc/vvc: Ensure subpictures don't overlap

    22 February, by Frank Plowman
    lavc/vvc: Ensure subpictures don't overlap
    

    This is essentially a re-implementation of
    https://patchwork.ffmpeg.org/project/ffmpeg/patch/20241005223955.54158-1-post@frankplowman.com/

    That patch was not applied last time. Instead we opted to identify
    issues which could be caused by invalid subpicture layouts and remedy
    those issues where they manifest, either through error detection or code
    hardening. This was primarily implemented in the set
    https://patchwork.ffmpeg.org/project/ffmpeg/list/?series=13381.

    This has worked to some degree, however issues with subpicture layouts
    continue to crop up from the fuzzer and I've fixed a number of bugs
    related to subpicture layouts since then. I think it's best to return
    to the initial plan and simply check if the subpicture layout is valid
    initially.

    This implementation is also lighter than the first time — by doing a
    bit more logic in pps_subpic_less_than_one_tile_slice, we are able to
    store a tile_in_subpic map rather than a ctu_in_subpic map. This
    reduces the size of the map to the point it becomes possible to allocate
    it on the stack. Similar to 8bd66a8c9587af61c7b46558be3c4ee317c1af5a,
    the layout is also validated in the slice map construction code, rather
    than in the CBS, which avoids duplicating some logic.

    Signed-off-by: Frank Plowman <post@frankplowman.com>

    • [DH] libavcodec/vvc/ps.c
  • Bash script to automate FFmpeg operations fails when calling the command, but copy-pasting the generated command into the terminal works [duplicate]

    28 February, by GaboScharff99

    I wrote a bash script which automates a number of conversion operations on video files using FFmpeg. Oddly enough, the FFmpeg call itself now fails when running the script, with a very confusing error message I might add, but when I copy the command generated by the script into the terminal and run it, it works flawlessly. I'm sorry to insert such a long code block here, but considering how strange this error is, the cause might be anywhere in the script, so here it is:

    #!/bin/bash

    audioTrack=1
    subSource=1
    subTrack=0
    transcodeVideo=1
    transcodeAudio=1
    volumeMultiplier=1
    degradeToStereo=0
    subLanguage="Japanese"

    while getopts "t:ns:vam:dl:h" opt; do
        case "$opt" in
            t) audioTrack=${OPTARG};;
            n) subSource=0;;
            s) subTrack=${OPTARG};;
            v) transcodeVideo=0;;
            a) transcodeAudio=0;;
            m) volumeMultiplier=${OPTARG};;
            d) degradeToStereo=1;;
            l) subLanguage=${OPTARG};;
            h)
                echo "Options:"
                echo "-t [integer]: Audio track number. Default: 1."
                echo "-n: If included, subtitles will be taken from internal source."
                echo "-s [integer]: Subtitles track number. Default: 0."
                echo "-v: If included, video source will be copied without transcoding."
                echo "-a: If included, audio source will be copied without transcoding."
                echo "-m [number]: Volume multiplier. If 1, volume is unaffected. Default: 1"
                echo "-d: If included, audio will be degraded to stereo."
                echo "-l [language]: Subtitles language. Only used for external subtitles source. Default: Japanese."
                exit 0
            ;;
        esac
    done

    echo "Audio track: $audioTrack."
    echo "Subtitles track: $subTrack."
    params="-map 0:0 -map 0:$audioTrack -map $subSource:$subTrack -c:v"

    if [[ $transcodeVideo -eq 1 ]]; then
        echo "Video will be transcoded."
        params="$params hevc"
    elif [[ $transcodeVideo -eq 0 ]]; then
        echo "Video will be copied without transcoding."
        params="$params copy"
    fi

    params="$params -c:a"

    if [[ $transcodeAudio -eq 1 ]]; then
        echo "Audio will be transcoded."
        params="$params libopus"
    elif [[ $transcodeAudio -eq 0 ]]; then
        echo "Audio will be copied without transcoding."
        params="$params copy"
    fi

    if [[ $volumeMultiplier -ne 1 ]]; then
        echo "Volume will be multiplied by a factor of $volumeMultiplier."
        params="$params -filter:a 'volume=$volumeMultiplier'"
    else
        echo "Volume will be unaffected."
    fi

    if [[ $degradeToStereo -eq 1 ]]; then
        echo "Audio will be degraded to stereo."
        params="$params -ac 2"
    elif [[ $degradeToStereo -eq 0 ]]; then
        echo "Audio will not be degraded to stereo."
    fi

    params="$params -c:s copy"

    if [[ $subSource -eq 1 ]]; then
        echo "Subtitles source is external."
        echo "Subtitles language is $subLanguage."
        params="$params -metadata:s:s:0 title='' -metadata:s:s:0 language='$subLanguage'"
    else
        echo "Subtitles source is internal."
    fi

    if [[ -f titles.txt ]]; then
        echo "A titles.txt file was found. Titles will be changed according to it."
        echo "Please check titles.txt to make sure the titles are correct."
        changeTitles=1
        counter=0
    else
        echo "A titles.txt file was not found. Titles will not be changed."
        changeTitles=0
    fi

    read -p "Are these options correct? (y/n) " choice

    case "$choice" in
        y|Y)
            echo "Initiating conversion sequence. This may take a while..."

            mkdir output
            currentParams=""

            shopt -s nullglob
            for i in *.mp4 *.mkv; do
                currentParams=$params
                fileNameNoExtension=$(echo $i | rev | cut -f 2- -d '.' | rev)

                if [[ $subSource -eq 1 ]]; then
                    currentParams="-f srt -i $fileNameNoExtension.srt $currentParams"
                fi

                if [[ $changeTitles -eq 1 ]]; then
                    ((counter++))
                    currentParams="$currentParams -metadata title='$(awk "NR==$counter" titles.txt)'"
                fi

                ffmpeg -i "$i" $currentParams "output/$fileNameNoExtension.mkv"
            done

            echo "Conversion finished!"
        ;;
        n|N) echo "Operation canceled. Exiting.";;
        *) echo "Invalid input. Try again.";;
    esac

    The directory I'm running this in contains six video files:

    1. E1 - The Pirates of Orion.mkv
    2. E2 - Bem.mkv
    3. E3 - The Practical Joker.mkv
    4. E4 - Albatross.mkv
    5. E5 - How Sharper Than a Serpent's Tooth.mkv
    6. E6 - The Counter-Clock Incident.mkv

    Here's the titles.txt file, for completeness' sake:

    Star Trek: The Animated Series - Season 2, Episode 1 - The Pirates of Orion
    Star Trek: The Animated Series - Season 2, Episode 2 - Bem
    Star Trek: The Animated Series - Season 2, Episode 3 - The Practical Joker
    Star Trek: The Animated Series - Season 2, Episode 4 - Albatross
    Star Trek: The Animated Series - Season 2, Episode 5 - How Sharper Than a Serpent's Tooth
    Star Trek: The Animated Series - Season 2, Episode 6 - The Counter-Clock Incident

    And finally, here's the error message given by FFmpeg on the terminal for every video file when running the command:

    Unable to find a suitable output format for 'Trek:'
    Trek:: Invalid argument

    Maybe there are better ways to handle all of this, but first and foremost, I would like to figure out why the command fails with such a confusing error message. The only place where the string 'Trek:' appears is in the title taken from titles.txt, but I don't understand why it's seemingly being treated as the name of the output file instead of the title, and apparently only when running the script.
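    The behaviour described is consistent with plain bash word splitting: single quotes stored inside a string variable are literal characters, not quoting, so when the unquoted `$currentParams` expands, it splits on whitespace and ffmpeg receives `Trek:` as a standalone argument, which it then tries to treat as an output URL. A minimal sketch of the mechanism (the `count_words` helper is hypothetical, not part of the original script):

    ```shell
    #!/bin/bash
    # Sketch: quotes inside a string variable are literal, and an unquoted
    # expansion splits on whitespace, which is how "Trek:" escapes the title.
    params="-metadata title='Star Trek: TAS'"

    count_words() { echo $#; }   # hypothetical helper: prints its argument count

    # Unquoted expansion, as in `ffmpeg -i "$i" $currentParams ...`:
    n=$(count_words $params)
    echo "$n"    # 4 separate words: -metadata  title='Star  Trek:  TAS'

    # A bash array keeps "title=Star Trek: TAS" as a single argument:
    args=(-metadata "title=Star Trek: TAS")
    n2=$(count_words "${args[@]}")
    echo "$n2"   # 2 words
    ```

    Building the ffmpeg arguments in a bash array and expanding it as `"${args[@]}"` preserves each argument intact, which is the usual fix for this class of bug.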


    Thanks a lot for your answers! I know this is quite a bit of text, so I really appreciate you taking the time to read through it.


  • How to improve web camera streaming latency to v4l2loopback device with ffmpeg?

    11 March, by Made by Moses

    I'm trying to stream my iPhone camera to my PC on LAN.

    What I've done:

      1. HTTP server with an HTML page and a streaming script:

         I use WebSockets here; maybe WebRTC is a better choice, but the network latency seems good enough.

    async function beginCameraStream() {
      const mediaStream = await navigator.mediaDevices.getUserMedia({
        video: { facingMode: "user" },
      });

      websocket = new WebSocket(SERVER_URL);

      websocket.onopen = () => {
        console.log("WS connected");

        const options = { mimeType: "video/mp4", videoBitsPerSecond: 1_000_000 };
        mediaRecorder = new MediaRecorder(mediaStream, options);

        mediaRecorder.ondataavailable = async (event) => {
          // to measure latency I prepend timestamp to the actual video bytes chunk
          const timestamp = Date.now();
          const timestampBuffer = new ArrayBuffer(8);
          const dataView = new DataView(timestampBuffer);
          dataView.setBigUint64(0, BigInt(timestamp), true);
          const data = await event.data.bytes();

          const result = new Uint8Array(data.byteLength + 8);
          result.set(new Uint8Array(timestampBuffer), 0);
          result.set(data, 8);

          websocket.send(result);
        };

        mediaRecorder.start(100); // Collect 100ms chunks
      };
    }

      2. Server to process the video chunks:

    import { serve } from "bun";
    import { Readable } from "stream";
    import { spawn } from "child_process"; // was missing above; needed for setupFFmpeg

    const V4L2LOOPBACK_DEVICE = "/dev/video10";

    export const setupFFmpeg = (v4l2device) => {
      // prettier-ignore
      return spawn("ffmpeg", [
        '-i', 'pipe:0',           // Read from stdin
        '-pix_fmt', 'yuv420p',    // Pixel format
        '-r', '30',               // Target 30 fps
        '-f', 'v4l2',             // Output format
        v4l2device,               // Output to v4l2loopback device
      ]);
    };

    export class FfmpegStream extends Readable {
      _read() {
        // This is called when the stream wants more data
        // We push data when we get chunks
      }
    }

    function main() {
      const ffmpeg = setupFFmpeg(V4L2LOOPBACK_DEVICE);
      serve({
        port: 8000,
        fetch(req, server) {
          if (server.upgrade(req)) {
            return; // Upgraded to WebSocket
          }
        },
        websocket: {
          open(ws) {
            console.log("Client connected");
            const stream = new FfmpegStream();
            stream.pipe(ffmpeg?.stdin);

            ws.data = {
              stream,
              received: 0,
            };
          },
          async message(ws, message) {
            const view = new DataView(message.buffer, 0, 8);
            const ts = Number(view.getBigUint64(0, true));
            ws.data.received += message.byteLength;
            const chunk = new Uint8Array(message.buffer, 8, message.byteLength - 8);

            ws.data.stream.push(chunk);

            console.log(
              [
                `latency: ${Date.now() - ts} ms`,
                `chunk: ${message.byteLength}`,
                `total: ${ws.data.received}`,
              ].join(" | "),
            );
          },
        },
      });
    }

    main();

    When I then open the v4l2loopback device:

    cvlc v4l2:///dev/video10

    the picture is delayed by at least 1.5 seconds, which is unacceptable for my project.

    &#xA;

    Thoughts:

      • The problem doesn't seem to be network latency:

        latency: 140 ms | chunk: 661 Bytes | total: 661 Bytes
        latency: 206 ms | chunk: 16.76 KB | total: 17.41 KB
        latency: 141 ms | chunk: 11.28 KB | total: 28.68 KB
        latency: 141 ms | chunk: 13.05 KB | total: 41.74 KB
        latency: 199 ms | chunk: 11.39 KB | total: 53.13 KB
        latency: 141 ms | chunk: 16.94 KB | total: 70.07 KB
        latency: 139 ms | chunk: 12.67 KB | total: 82.74 KB
        latency: 142 ms | chunk: 13.14 KB | total: 95.88 KB

        (150 ms is actually too much for 15 KB on LAN, but there may be some issue with my router.)

      • As far as I can tell, it isn't tied to ffmpeg throughput either:

        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'pipe:0':
          Metadata:
            major_brand     : iso5
            minor_version   : 1
            compatible_brands: isomiso5hlsf
            creation_time   : 2025-03-09T17:16:49.000000Z
          Duration: 00:00:01.38, start: 0.000000, bitrate: N/A
            Stream #0:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuvj420p(pc), 1280x720, 4012 kb/s, 57.14 fps, 29.83 tbr, 600 tbn, 1200 tbc (default)
            Metadata:
              rotate          : 90
              creation_time   : 2025-03-09T17:16:49.000000Z
              handler_name    : Core Media Video
            Side data:
              displaymatrix: rotation of -90.00 degrees

        Stream mapping:
          Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))

        [swscaler @ 0x55d8d0b83100] deprecated pixel format used, make sure you did set range correctly

        Output #0, video4linux2,v4l2, to '/dev/video10':
          Metadata:
            major_brand     : iso5
            minor_version   : 1
            compatible_brands: isomiso5hlsf
            encoder         : Lavf58.45.100

        Stream #0:0(und): Video: rawvideo (I420 / 0x30323449), yuv420p, 720x1280, q=2-31, 663552 kb/s, 60 fps, 60 tbn, 60 tbc (default)
            Metadata:
              encoder         : Lavc58.91.100 rawvideo
              creation_time   : 2025-03-09T17:16:49.000000Z
              handler_name    : Core Media Video
            Side data:
              displaymatrix: rotation of -0.00 degrees

        frame=   99 fps=0.0 q=-0.0 size=N/A time=00:00:01.65 bitrate=N/A dup=50 drop=0 speed=2.77x
        frame=  137 fps=114 q=-0.0 size=N/A time=00:00:02.28 bitrate=N/A dup=69 drop=0 speed=1.89x
        frame=  173 fps= 98 q=-0.0 size=N/A time=00:00:02.88 bitrate=N/A dup=87 drop=0 speed=1.63x
        frame=  210 fps= 86 q=-0.0 size=N/A time=00:00:03.50 bitrate=N/A dup=105 drop=0 speed=1.44x
        frame=  249 fps= 81 q=-0.0 size=N/A time=00:00:04.15 bitrate=N/A dup=125 drop=0 speed=1.36x
        frame=  279 fps= 78 q=-0.0 size=N/A time=00:00:04.65 bitrate=N/A dup=139 drop=0 speed=1.31x

      • I also tried writing the video stream directly to a video.mp4 file and opening it immediately with VLC, but it can only be opened successfully after 1.5 seconds.

      • I've tried using the OBS v4l2 input source instead of VLC, but the latency is the same.
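    As a rough sanity check on the latency log above (a back-of-the-envelope sketch with assumed round numbers, not new measurements): MediaRecorder is started with 100 ms chunks, so each measured latency already includes up to ~100 ms of capture/encode batching before the chunk is handed to the WebSocket, and the implied throughput stays under the configured 1 Mbit/s.

    ```shell
    # Assumed figures: a ~15 KB chunk arriving ~150 ms after its timestamp.
    chunk_bytes=15360
    elapsed_ms=150

    # bytes * 8 / milliseconds = bits per ms = kbit/s
    kbps=$(( chunk_bytes * 8 / elapsed_ms ))
    echo "$kbps kbit/s"   # ~819 kbit/s, under the 1 Mbit/s videoBitsPerSecond setting

    # Up to 100 ms of the delay is the MediaRecorder chunk interval itself,
    # leaving plausibly only ~50 ms for the network:
    network_ms=$(( elapsed_ms - 100 ))
    echo "$network_ms ms"
    ```

    So the per-chunk numbers alone don't point at the network; they mostly reflect the chunking interval.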

    Update №1

    When I try to stream an actual .mp4 file to ffmpeg, it works almost immediately, with a 0.2 s delay to spin up ffmpeg itself:

    cat video.mp4 | ffmpeg -re -i pipe:0 -pix_fmt yuv420p -f v4l2 /dev/video10 & ; sleep 0.2 && cvlc v4l2:///dev/video10

    So the problem apparently lies in the streaming process.
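    One direction worth trying (a sketch, not verified on this setup): when ffmpeg reads a live MP4 stream from a pipe, much of the startup delay typically comes from its input probing and internal buffering, which its standard low-latency options can shrink. The device path matches the server script above; the option values are illustrative.

    ```shell
    # Hypothetical low-latency variant of the server's ffmpeg invocation.
    # -probesize / -analyzeduration shrink the input probing window;
    # -fflags nobuffer and -flags low_delay reduce internal buffering.
    opts="-probesize 32 -analyzeduration 0 -fflags nobuffer -flags low_delay"
    cmd="ffmpeg $opts -i pipe:0 -pix_fmt yuv420p -f v4l2 /dev/video10"
    echo "$cmd"
    ```

    The input options must appear before `-i pipe:0` to affect probing of the piped stream.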