Advanced search

Media (2)

Word: - Tags -/documentation

Other articles (36)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents a few of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFMpeg: the main encoder, used to transcode almost any type of video or audio file into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Optional complementary binaries flvtool2: (...)

On other sites (3788)

  • Processing video frame by frame in AWS Lambda with Node.js and FFmpeg [closed]

    29 December 2023, by Aviato

    I am working on a project where I need to process video frames one at a time in an AWS Lambda function using Node.js. My goal is to avoid storing all frames in memory or on the filesystem due to resource constraints. I plan to use the fluent-ffmpeg library or ffmpeg via child processes for the video processing.

    


    In the past, I used OpenCV to process videos frame by frame without writing the frames to disk or holding all of them in memory at once. But now that I am using Node.js, it is a little harder to set up an equivalent pipeline with ffmpeg.

    


    Here is a small snippet of what I did with OpenCV:

    


import cv2

cap = cv2.VideoCapture(video_file)  # video_file: path to the input video

# Derive the writer parameters from the input video
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*'mp4v')

out = cv2.VideoWriter('output.mp4', fourcc, fps, (width, height))

def generate_frame():
    while cap.isOpened():
        code, frame = cap.read()
        if code:
            yield frame
        else:
            print("completed")
            break

for i, frame in enumerate(generate_frame()):
    # Process each frame in memory and write it straight to the output video
    edited_frame = frame  # per-frame edits go here
    out.write(edited_frame)

out.release()
cap.release()


    


    Additionally, I intend to leverage image processing libraries like Sharp and the Canvas API to edit individual frames before assembling the final video. I am looking for help in handling video frames efficiently within the constraints of AWS Lambda.

    


    Any insights, code snippets, or recommendations would be greatly appreciated. Thank you!
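
    One way such a pipeline could look, as a minimal sketch: it assumes a fixed 1920x1080 rgb24 frame size (in practice the geometry would come from ffprobe) and that an ffmpeg binary is available to the Lambda function, for example through a layer; processVideo and frameHandler are hypothetical names, not part of any library.

const { spawn } = require('child_process');

// Assumed frame geometry; in practice these values would come from ffprobe.
const WIDTH = 1920;
const HEIGHT = 1080;
const FRAME_SIZE = WIDTH * HEIGHT * 3; // rgb24 = 3 bytes per pixel

// Hypothetical helper: decode videoFile into raw rgb24 frames on stdout and
// hand each complete frame to frameHandler, keeping only one frame buffered.
function processVideo(videoFile, frameHandler) {
  return new Promise((resolve, reject) => {
    const ffmpeg = spawn('ffmpeg', [
      '-i', videoFile,
      '-f', 'rawvideo',
      '-pix_fmt', 'rgb24',
      'pipe:1',
    ]);

    let buffered = Buffer.alloc(0);

    ffmpeg.stdout.on('data', (chunk) => {
      buffered = Buffer.concat([buffered, chunk]);
      // stdout chunks do not line up with frame boundaries, so slice out
      // complete frames and keep the remainder for the next chunk.
      while (buffered.length >= FRAME_SIZE) {
        const frame = buffered.subarray(0, FRAME_SIZE);
        buffered = buffered.subarray(FRAME_SIZE);
        frameHandler(frame); // e.g. pass the raw buffer to sharp or a canvas
      }
    });

    ffmpeg.stderr.resume(); // drain stderr so ffmpeg cannot block on a full pipe
    ffmpeg.on('error', reject);
    ffmpeg.on('close', (code) =>
      code === 0 ? resolve() : reject(new Error('ffmpeg exited with code ' + code))
    );
  });
}

    A raw frame produced this way can be handed to sharp via its raw-input option, something like sharp(frame, { raw: { width: WIDTH, height: HEIGHT, channels: 3 } }), which keeps the Lambda memory footprint at roughly one frame at a time.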

    


  • How to read frames of a video and write them to another output video using FFmpeg and Node.js

    29 December 2023, by Aviato

    I am working on a project where I need to process video frames one at a time in Node.js. I aim to avoid storing all frames in memory or on the filesystem due to resource constraints, and I plan to use ffmpeg via child processes for the video processing.
    For testing purposes, I first tried reading a video file and writing its frames to the filesystem:

    


const { spawn } = require('child_process');

const ffmpegProcess = spawn('ffmpeg', [
  '-i', videoFile,
  'testfolder/%04d.png' // write each frame as a numbered PNG in testfolder/
]);


    


    The above code works fine; it saves the video frames as PNG files in the filesystem. Now, instead of saving them to the filesystem, I want to read the frames one at a time, run them through an image manipulation library, and then write the final edited frames to another video as output.

    


    I tried this:

    


    const ffmpegProcess = spawn('ffmpeg', [
  '-i', videoFile,
  'pipe:1' // Output frames to stdout
]);

const ffmpegOutputProcess = spawn('ffmpeg', [
  '-i', '-',
  'outputFileName.mp4'
]);

ffmpegProcess.stdout.on('data', (data) => {
  // Process the frame data as needed
  console.log('Received frame data:');
  ffmpegOutputProcess.stdin.write(data)
});

ffmpegProcess.on('close', (code) => {
  if (code !== 0) {
    console.error(`ffmpeg process exited with code ${code}`);
  } else {
    console.log('ffmpeg process successfully completed');
    
  }
});

// Handle errors
ffmpegProcess.on('error', (err) => {
  console.error('Error while spawning ffmpeg:', err);
});


    


    But when I tried the above code, as well as some other modifications to the input and output options in the command, I ran into the problems below:

    


      

    1. ffmpeg process exited with code 1
    2. The final output video was corrupted when trying to initialize the filters with the following commands:

    


    
const ffmpegProcess = spawn('ffmpeg', [
  '-i', videoFile,
  '-f', 'rawvideo',
  '-pix_fmt', 'rgb24',
  'pipe:1' // Output frames to stdout
]);

const ffmpegOutputCommand = [
  '-f', 'rawvideo',
  '-pix_fmt', 'rgb24',
  '-s', '1920x1080',
  '-r', '30',
  '-i', '-',
  '-c:v', 'libx264',
  '-pix_fmt', 'yuv420p',
  outputFileName
];


    


    Thank you so much in advance :)
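
    For reference, a minimal sketch of how the two processes could be wired together, assuming the 1920x1080 / 30 fps geometry from the last snippet (the real values would normally come from ffprobe). The points that matter are that a raw stream fed to ffmpeg needs explicit -f, -pix_fmt, -s and -r options, because a raw pipe carries no header to probe, and that the encoder's stdin has to be closed when the decoder finishes, otherwise the mp4 trailer is never written and the output comes out corrupted.

const { spawn } = require('child_process');

const videoFile = 'input.mp4';        // placeholder input path
const outputFileName = 'output.mp4';  // placeholder output path

// Decoder: emit raw rgb24 frames on stdout.
const decoder = spawn('ffmpeg', [
  '-i', videoFile,
  '-f', 'rawvideo',
  '-pix_fmt', 'rgb24',
  'pipe:1',
]);

// Encoder: a raw stream on stdin has no header, so the format, pixel
// format, frame size and frame rate must be spelled out explicitly.
const encoder = spawn('ffmpeg', [
  '-f', 'rawvideo',
  '-pix_fmt', 'rgb24',
  '-s', '1920x1080',
  '-r', '30',
  '-i', '-',
  '-c:v', 'libx264',
  '-pix_fmt', 'yuv420p',
  '-y', outputFileName,
]);

// pipe() handles back-pressure and ends the encoder's stdin when the
// decoder's stdout closes, which lets the encoder finalize the mp4.
decoder.stdout.pipe(encoder.stdin);

decoder.stderr.resume(); // drain stderr so neither process stalls
encoder.stderr.resume();

encoder.on('close', (code) => {
  console.log(code === 0 ? 'encoding finished' : 'encoder exited with code ' + code);
});
decoder.on('error', (err) => console.error('failed to spawn decoder ffmpeg:', err));
encoder.on('error', (err) => console.error('failed to spawn encoder ffmpeg:', err));

    Per-frame editing could be slotted in between the two processes by buffering frame-sized chunks from the decoder, editing them, and writing the results to encoder.stdin instead of piping directly.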

    


  • avcodec/libjxl.h: include version.h

    23 January 2024, by Leo Izen
    avcodec/libjxl.h: include version.h
    

    This file has been exported since our minimum required version (0.7.0),
    but it wasn't documented. Instead it was transitively included by
    <jxl/decode.h> (but not jxl/encode.h), which ffmpeg relied on.

    libjxl broke its API in libjxl/libjxl@66b959239355aef5255 by removing
    the transitive include of version.h, and they do not plan on adding
    it back. Instead they are choosing to leave the API backwards-
    incompatible with downstream callers written for some fairly recent
    versions of their API.

    As a result, we include <jxl/version.h> to continue to build against
    more recent versions of libjxl. The version macros removed are also
    present in that file, so we no longer need to redefine them.

    Signed-off-by: Leo Izen <leo.izen@gmail.com>

    • [DH] libavcodec/libjxl.h