Advanced search

Media (39)

Keyword: - Tags -/audio

Other articles (86)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, a preconfiguration is automatically put in place by MediaSPIP init so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.

  • Organizing by category

    17 May 2013, by

    In MediaSPIP, a section has two names: category and section (rubrique).
    The various documents stored in MediaSPIP can be filed under different categories. A category can be created by clicking on "publish a category" in the publish menu at the top right (after logging in). A category can also be placed inside another category, which means a whole tree of categories can be built.
    The next time a document is published, the newly created category will be offered (...)

  • Retrieving information from the master site when installing an instance

    26 November 2010, by

    Purpose
    On the main site, an instance of mutualisation (shared hosting) is defined by several things: the data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the mutualised instance;
    It can therefore make perfect sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

On other sites (6344)

  • Adding FFMPEG Layer to HLS streaming causes video playback issues

    25 June 2023, by Moe

    I have been searching a lot about HLS streaming and have managed to create a simple HLS streaming server with Node.js. The problem now is that I need to add an ffmpeg encoding layer to the .ts chunks before they are streamed to the user. Without this layer everything works fine, and on my server only 3 requests are seen:

    manifest.m3u8
    output_000.ts
    output_000.ts
    output_001.ts
    output_002.ts
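
    For context, a manifest plus numbered segments like these are usually produced ahead of time with ffmpeg's hls muxer. The question does not show that step, so the command below is only an assumed sketch (the input name, output paths and segment duration are guesses):

    ffmpeg -i input.mp4 -c copy -f hls -hls_time 6 -hls_list_size 0 \
        -hls_segment_filename public/stream/output_%03d.ts public/stream/manifest.m3u8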


    


    But then, when I add a simple ffmpeg layer that literally just copies everything from the .ts file and outputs the stream (I will of course add dynamic filters to each request later, that's why I need this ffmpeg layer), the player goes insane and requests the whole video in just 5 seconds or so:

    manifest.m3u8
    output_000.ts
    output_000.ts
    output_001.ts
    output_002.ts
    output_001.ts
    output_003.ts
    output_002.ts
    ...
    output_095.ts


    


    I have also noticed that the segment numbers aren't increasing uniformly, and I suspect this is part of the issue. I have tried adding more ffmpeg options so that it does nothing at all to the .ts files being fed to it, since they are part of a bigger video.
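
    One passthrough-oriented option set along these lines (an assumption; the question does not list the exact flags that were tried) remuxes a segment through pipes while leaving codec data and timestamps untouched:

    ffmpeg -f mpegts -i pipe:0 -c copy -copyts -muxdelay 0 -muxpreload 0 -f mpegts pipe:1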

    


    Here's my Node.js server (Next.js API route):

const fs = require(`fs`);
const path = require(`path`);
const { spawn } = require(`child_process`);
const pathToFfmpeg = require(`ffmpeg-static`);

export default function handler(req, res) {
  const { filename } = req.query;
  console.log(filename);
  const filePath = path.join(process.cwd(), 'public', 'stream', `${filename}`);
  const inputStream = fs.createReadStream(filePath);

  // first check whether this is a .ts segment
  if (filename.indexOf(`.ts`) !== -1) {
    const ffmpegProcess = spawn(pathToFfmpeg, [
      '-f', `mpegts`,
      '-i', 'pipe:0',             // read the segment from stdin
      '-c', 'copy',               // copy without re-encoding
      '-avoid_negative_ts', '0',
      `-map_metadata`, `0`,       // keep the input's metadata
      '-f', 'mpegts',             // output format
      'pipe:1'                    // write the result to stdout
    ], {
      stdio: ['pipe', 'pipe', 'pipe'] // pipe stdin, stdout and stderr
    });
    res.status(200);
    res.setHeader('Content-Type', 'application/vnd.apple.mpegurl');
    res.setHeader('Cache-Control', 'no-cache');
    res.setHeader('Access-Control-Allow-Origin', '*');

    // ffmpegProcess.stderr.pipe(process.stdout); // uncomment to log ffmpeg output

    inputStream.pipe(ffmpegProcess.stdin);
    ffmpegProcess.stdout.pipe(res);

    ffmpegProcess.on('exit', (code) => {
      if (code !== 0) {
        console.error(`ffmpeg process exited with code ${code}`);
      }
    });
  } else {
    // otherwise stream the file as it is
    res.status(200);
    res.setHeader('Content-Type', 'application/vnd.apple.mpegurl');
    inputStream.pipe(res);
  }
}


    


    I have tried feeding the requesting player the appropriate headers, but that didn't work. I have also tried adding the '-re' option to the ffmpeg encoder itself, hoping for a minimal performance hit, but that also caused playback issues because it was too slow.
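
    For reference, .ts segments are conventionally served as video/mp2t, while application/vnd.apple.mpegurl is normally reserved for the .m3u8 playlist; the route above uses the playlist MIME type for both. A minimal sketch of what "appropriate headers" could mean here (an assumption on the editor's part, not necessarily a fix for the flood of requests):

// Hypothetical helper: pick a Content-Type for a requested HLS file.
function hlsContentType(filename) {
  if (filename.endsWith('.m3u8')) return 'application/vnd.apple.mpegurl'; // playlist
  if (filename.endsWith('.ts')) return 'video/mp2t';                      // MPEG-TS segment
  return 'application/octet-stream';
}

// e.g. in the handler: res.setHeader('Content-Type', hlsContentType(filename));
console.log(hlsContentType('manifest.m3u8')); // application/vnd.apple.mpegurl
console.log(hlsContentType('output_000.ts')); // video/mp2t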

    


  • avformat/matroskaenc: Don't waste bytes writing level 1 elements

    20 April 2019, by Andreas Rheinhardt
    avformat/matroskaenc: Don't waste bytes writing level 1 elements

    Up until now, the length field of most level 1 elements has been written
    using eight bytes, although it is known in advance how much space the
    content of said elements will take up so that it would be possible to
    determine the minimal amount of bytes for the length field. This
    commit changes this.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
    Signed-off-by: James Almer <jamrial@gmail.com>

    • [DH] libavformat/matroskaenc.c
    • [DH] tests/fate/matroska.mak
    • [DH] tests/fate/wavpack.mak
    • [DH] tests/ref/fate/aac-autobsf-adtstoasc
    • [DH] tests/ref/fate/binsub-mksenc
    • [DH] tests/ref/fate/rgb24-mkv
    • [DH] tests/ref/lavf/mka
    • [DH] tests/ref/lavf/mkv
    • [DH] tests/ref/lavf/mkv_attachment
    • [DH] tests/ref/seek/lavf-mkv
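
    The change above concerns EBML length fields: an n-byte length field carries 7*n usable value bits, and the all-ones pattern is reserved for "unknown size", so once an element's content size is known in advance the minimal field width can be chosen instead of always spending eight bytes. A small illustrative sketch of that sizing rule (not FFmpeg's actual code):

// Minimal number of bytes needed for an EBML/Matroska length field.
// The largest value encodable in n bytes is 2^(7n) - 2, since all-ones
// is reserved for "unknown size".
function minEbmlLengthBytes(size) {
  const s = BigInt(size);
  for (let n = 1n; n <= 8n; n++) {
    if (s <= (1n << (7n * n)) - 2n) return Number(n);
  }
  throw new RangeError('size does not fit in an 8-byte EBML length field');
}

console.log(minEbmlLengthBytes(100));     // 1 byte instead of a fixed 8
console.log(minEbmlLengthBytes(500000));  // 3 bytes
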
  • ffmpeg: use vidstabtransform to overlay it over blurred background

    5 November 2023, by konewka

    I am using ffmpeg to concatenate multiple video clips taken of the same object over multiple timeframes. To make sure the videos are properly aligned (and therefore show the object in roughly the same position), I manually identify two points in the first frame of each clip and use them to calculate the scaling and positioning necessary for proper alignment. I'm using Python for this, and it also generates the ffmpeg command for me. When it has calculated that the appropriate scale of the video is less than 100%, some parts of the frame will become black. To counter that, I overlay the scaled and positioned video over a blurred version of the original video (like this effect).


    Now, additionally, some of the video clips are a bit shaky, so my flow first applies the vidstabdetect and vidstabtransform filters and uses the transformed, stabilized version as input for my final command. However, if the shaking is significant, vidstabtransform will zoom in, so I either lose some of the detail around the edges or a black border is created around the edge. Since I am later including the stabilized version of the video in the concatenation, with the possibility of it shrinking, I would rather perform the vidstabtransform step inside my command and feed its output directly into the overlay over the blurred version. That way, the clip would rotate across the frame as it is stabilized, shown over the blurred background. Is it possible to achieve this using ffmpeg, or am I trying to stretch it too far?


    As a minimal example, these are my commands:


    ffmpeg -i video1.mp4 -vf vidstabdetect=output=transform.trf -f null -

    ffmpeg -i video1.mp4 -vf vidstabtransform=input=transform.trf video1_stabilized.mp4

    # same for video2.mp4

    ffmpeg -i video1_stabilized.mp4 -i video2_stabilized.mp4 -filter_complex "
        [0:v]split=2[v0blur][v0scale];
        [v0blur]gblur=sigma=50[v0blur];  // blur the video
        [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];  // scale the video
        [v0blur][v0scale]overlay=x=100:y=200[v0];  // overlay the scaled video over the blur at a specific location
        [1:v]split=2[v1blur][v1scale];
        [v1blur]gblur=sigma=50[v1blur];
        [v1scale]scale=round(iw*0.9/2)*2:round(ih*0.9/2)*2[v1scale];
        [v1blur][v1scale]overlay=x=150:y=150[v1];
        [v0][v1]concat=n=2  // concatenate the two clips" \
    -c:v libx264 -r 30 out.mp4


    So, I know I can put the vidstabtransform step into the filter_complex graph (I'll still do the detection in a separate step), but can I also use it in such a way that I achieve the stabilization over the blurred background and have the clip move around the frame as it is stabilized?


    EDIT: to include vidstabtransform in the filter graph, it would then look like this:


    ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex "
        [0:v]vidstabtransform=input=transform1.trf[v0stab];
        [v0stab]split=2[v0blur][v0scale];
        [v0blur]gblur=sigma=50[v0blur];
        [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];
        [v0blur][v0scale]overlay=x=100:y=200[v0];
        [1:v]vidstabtransform=input=transform2.trf[v1stab];
        [v1stab]split=2[v1blur][v1scale];
        [v1blur]gblur=sigma=50[v1blur];
        [v1scale]scale=round(iw*0.9/2)*2:round(ih*0.9/2)*2[v1scale];
        [v1blur][v1scale]overlay=x=150:y=150[v1];
        [v0][v1]concat=n=2" \
    -c:v libx264 -r 30 out.mp4
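
    Note that the graph above assumes transform1.trf and transform2.trf already exist from the separate vidstabdetect passes mentioned earlier, e.g.:

    ffmpeg -i video1.mp4 -vf vidstabdetect=output=transform1.trf -f null -
    ffmpeg -i video2.mp4 -vf vidstabdetect=output=transform2.trf -f null -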
