Advanced search

Media (0)

Word: - Tags -/protocoles

No media matching your criteria is available on the site.

Other articles (71)

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators but can each be configured independently to change the minimum status required to use them, in particular: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Media quality after processing

    21 June 2013, by

    Correctly configuring the software that processes media matters for striking a balance between the parties involved (the host's bandwidth, the media quality for the editor and the visitor, accessibility for the visitor). How should you set the quality of your media?
    The higher the media quality, the more bandwidth is used. Visitors with a low-bandwidth internet connection will have to wait longer. Conversely, the poorer the media quality, the more degraded the media becomes, or even (...)

On other sites (11135)

  • How to optimize this code that gets video frame as image

    14 August 2024, by TSR

    I am quite new to the MP4 format, but here is my working attempt to extract an image frame given a video URL and a timestamp.

    In reality the input URL is an 8K, 10-hour, 200 GB video, so I can't download it all, I can't load it into memory, and this is an API call so it has to be fast.

    Is there anything else I can optimize?

    My doubts:

    • This line: ffprobe -v error -select_streams v:0 -show_entries packet=pos,size,dts_time -read_intervals ${timestamp}%+5 -of csv=p=0 "${url}". I chose this 5 s window somewhat arbitrarily; in which case would it fail? (See the retry sketch after this list.)

    • Same line: I don't know what is going on under the hood of this ffprobe command, but I tested it with the big 8K video and it seems to complete fast. So is it safe to assume that the entire 200 GB was not downloaded? An explanation of how this ffprobe command works would be appreciated.

    • Based on trial and error, I concluded that the interval returned is parsable by ffmpeg only if everything from the first frame up to the timestamp is included. If I include only the single-frame interval, ffmpeg says it is an invalid file (which makes sense, since I don't think I'll get an image from 4 bytes of data). However, how can I be sure that I am selecting the smallest number of intervals?

    • Worst bottleneck: the function extractFrame takes 6 seconds on the big video. It seems to read the entire video segment fetched by the preceding subrange step. I couldn't find a way to jump to the last frame without computing. Is this how MP4 works? I read somewhere that MP4 computes the current frame based on the previous one. If that is true, does it mean there is no way to compute a specific frame without reading everything since the last keyframe?

    • Finally, this ffmpeg line is fishy (I got it from SO: Extract final frame from a video using ffmpeg): it says that it overwrites the output at every frame. Does it mean it is writing to disk every time? I experienced severe performance degradation when I used .png instead of .jpg, which tells me that the image is computed for every frame. Can we compute only the final image at the very end?


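    Regarding the first doubt, here is a minimal sketch of one way to guard against an empty probe window, assuming the simplest failure mode is that no video packet falls inside the requested interval (for example when the timestamp is at or past the end of the stream). The helper name probePackets and the 5/15/30 second escalation are illustrative assumptions, not part of the original code:

import {promisify} from 'util';
import {exec} from 'child_process';

const execPromise = promisify(exec);

// Probe windows to try, in seconds after the timestamp; the escalation values are arbitrary.
const probeWindows = [5, 15, 30];

// Returns the raw csv lines (dts_time,size,pos per packet, as parsed below)
// from the first window that yields any packets.
const probePackets = async (url, timestamp) => {
    for (const win of probeWindows) {
        const command = `ffprobe -v error -select_streams v:0 -show_entries packet=pos,size,dts_time -read_intervals ${timestamp}%+${win} -of csv=p=0 "${url}"`;
        const {stdout} = await execPromise(command);
        const trimmed = stdout.trim();
        if (trimmed.length > 0) {
            return trimmed.split("\n");
        }
    }
    throw new Error(`No video packets found within ${probeWindows[probeWindows.length - 1]}s of timestamp ${timestamp}`);
};

    Whether the retry is ever needed depends on how ffprobe behaves on intervals past the end of this particular file, so it is worth testing against the real video.
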
    Here is the working code to optimize.

    


    import path from "path";
import axios from "axios";
import ffmpeg from "fluent-ffmpeg";
import fs from "fs";
import {promisify} from 'util';
import {exec} from 'child_process';

const execPromise = promisify(exec);


// URL of the video and desired timestamp (in seconds)

const videoUrl = 'https://raw.githubusercontent.com/tolotrasamuel/assets/main/videoplayback.mp4';

console.log(videoUrl);
const timestamp = 30; // Example: 30 seconds into the video


// Function to get the byte range using ffprobe
const getByteRangeForTimestamp = async (url, timestamp) => {
    // Use ffprobe to get the offset and size of the frame at the given timestamp
    const command = `ffprobe -v error -select_streams v:0 -show_entries packet=pos,size,dts_time -read_intervals ${timestamp}%+5 -of csv=p=0 "${url}"`;
    console.log('Running command:', command);
    const {stdout} = await execPromise(command);


    // Parse the output
    const timeStamps = stdout.trim().split("\n");
    const frames = timeStamps.map(ts => {
        const [dts_time, size, offset] = ts.split(',');
        const timeInt = parseFloat(dts_time);
        const offsetInt = parseInt(offset);
        const sizeInt = parseInt(size);
        return {dts_time: timeInt, size: sizeInt, offset: offsetInt};
    })

    if (frames.length === 0) {
        throw new Error('No frames found in the specified interval');
    }

    // Walk forward until we pass the requested timestamp, keeping the closest packet seen so far.
    // Starting from frames[0] avoids a crash when the very first packet is already past the timestamp.
    let closest = frames[0];

    let i = 0;
    while (i < frames.length) {
        if (frames[i].dts_time >= timestamp) {
            const oldDiff = Math.abs(closest.dts_time - timestamp);
            const newDiff = Math.abs(frames[i].dts_time - timestamp);
            if (newDiff < oldDiff) {
                closest = frames[i];
            }
            break;
        }
        closest = frames[i];
        i++;
    }

    // I experimented with this, but it seems that the first frame is always at the beginning of a valid atom
    // anything after that will make the video unplayable
    const startByte = frames[0].offset;
    const endByte = closest.offset + closest.size - 1;

    const diff = Math.abs(closest.dts_time - timestamp);
    const size = endByte - startByte + 1;
    console.log("Start: ", startByte, "End: ", endByte, "Diff: ", diff, "Timestamp: ", timestamp, "Closest: ", closest.dts_time, "Size to fetch: ", size)


    const startTime = closest.dts_time - frames[0].dts_time;
    return {startByte, endByte, startTime};
};

// Download the specific segment
const downloadSegment = async (url, startByte, endByte, outputPath) => {
    console.log(`Downloading bytes ${startByte}-${endByte}...`);
    const response = await axios.get(url, {
        responseType: 'arraybuffer',
        headers: {
            Range: `bytes=${startByte}-${endByte}`,
        },
    });

    console.log('Segment downloaded!', response.data.length, "Written to: ", outputPath);
    fs.writeFileSync(outputPath, response.data);
};

// Extract frame from the segment
const extractFrameRaw = async (videoPath, timestamp, outputFramePath) => {


    const command = `ffmpeg -sseof -3 -i ${videoPath} -update 1 -q:v 1 ${outputFramePath} -y`;
    console.log('Running command:', command);
    const startTime = new Date().getTime();
    await execPromise(command);
    const endTime = new Date().getTime();
    console.log('Processing time:', endTime - startTime, 'ms');
    console.log('Frame extracted to:', outputFramePath);
};
const extractFrame = (videoPath, timestamp, outputFramePath) => {
    // Wrap fluent-ffmpeg in a Promise so the caller's `await` actually waits for completion
    // and errors propagate to the surrounding try/catch.
    return new Promise((resolve, reject) => {
        ffmpeg(videoPath)
            .inputOptions(['-sseof -5'])  // Seek to 5 seconds before the end of the input
            .outputOptions([
                '-update 1', // Continuously update the output file with new frames
                '-q:v 1'     // Set the highest JPEG quality
            ])
            .output(outputFramePath)  // Set the output file path

            // log
            .on('start', function (commandLine) {
                console.log('Spawned Ffmpeg with command: ' + commandLine);
            })
            .on('progress', function (progress) {
                console.log('Processing: ' + progress.timemark, 'frame: ', progress.frames);
            })
            .on('end', function () {
                console.log('Processing finished !');
                resolve();
            })
            .on('error', function (err, stdout, stderr) {
                console.error('Error:', err);
                console.error('ffmpeg stderr:', stderr);
                reject(err);
            })
            .run();
    });
};


const __dirname = path.resolve();

// Main function to orchestrate the process
(async () => {
    try {
        // ffmpeg.setFfmpegPath('/usr/local/bin/ffmpeg');
        const {startByte, endByte, startTime} = await getByteRangeForTimestamp(videoUrl, timestamp);
        const tmpVideoPath = path.resolve(__dirname, 'temp_video.mp4');
        const outputFramePath = path.resolve(__dirname, `frame_${timestamp}.jpg`);

        await downloadSegment(videoUrl, startByte, endByte, tmpVideoPath);
        await extractFrame(tmpVideoPath, startTime, outputFramePath);
    } catch (err) {
        console.error('Error:', err);
    }
})();
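
    On the last two doubts above, here is a minimal sketch of an alternative to extractFrame that uses the startTime value (which extractFrame currently receives but never uses) to ask ffmpeg for exactly one frame instead of re-encoding and overwriting the output image for every decoded frame. This is a hedged sketch, not a drop-in fix: decoding still has to start from the keyframe at the beginning of the downloaded segment, and if the byte-range slice is parsed as a raw elementary stream its timestamps may not line up exactly with startTime, so the selected frame could be slightly off. The name extractSingleFrame is illustrative.

// Hedged sketch: seek inside the downloaded segment and emit a single image.
// startTime is the value returned by getByteRangeForTimestamp above.
const extractSingleFrame = async (videoPath, startTime, outputFramePath) => {
    // -ss before -i seeks in the input; -frames:v 1 stops after one image,
    // so the output file is written once instead of once per decoded frame.
    const command = `ffmpeg -v error -ss ${startTime} -i "${videoPath}" -frames:v 1 -q:v 1 "${outputFramePath}" -y`;
    console.log('Running command:', command);
    await execPromise(command);
    console.log('Frame extracted to:', outputFramePath);
};

    Keeping a .jpg output also avoids the much more expensive PNG encode noted above.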


    


  • How to parallelize this for loop for rapidly converting YUV422 to RGB888 ?

    16 April 2015, by vineet

    I am using the v4l2 API to grab images from a Microsoft LifeCam and then transfer these images over TCP to a remote computer. I am also encoding the video frames into MPEG2VIDEO using the ffmpeg API. These recorded videos play too fast, probably because not enough frames have been captured and because of incorrect FPS settings.

    The following is the code that converts a YUV422 source to an RGB888 image. This fragment is the bottleneck in my code, as it takes nearly 100-150 ms to execute, which means I can't log more than 6-10 FPS at 1280x720 resolution. The CPU usage is at 100% as well.

    for (int line = 0; line < image_height; line++) {
       for (int column = 0; column < image_width; column++) {
           *dst++ = CLAMP((double)*py + 1.402*((double)*pv - 128.0));                               // R - first byte
           *dst++ = CLAMP((double)*py - 0.344*((double)*pu - 128.0) - 0.714*((double)*pv - 128.0)); // G - next byte
           *dst++ = CLAMP((double)*py + 1.772*((double)*pu - 128.0));                               // B - next byte

           vid_frame->data[0][line * frame->linesize[0] + column] = *py;

           // increment py, pu, pv here
       }
    }

    ’dst’ is then compressed as jpeg and sent over TCP and ’vid_frame’ is saved to the disk.

    How can I make this code fragment faster so that I can get at least 30 FPS at 1280x720 resolution, compared to the present 5-6 FPS?

    I've tried parallelizing the for loop across three threads using pthreads, processing one third of the rows in each thread.

    for (int line = 0; line < image_height/3; line++) // thread 1
    for (int line = image_height/3; line < 2*image_height/3; line++) // thread 2
    for (int line = 2*image_height/3; line < image_height; line++) // thread 3

    This gave me only a minor improvement of 20-30 milliseconds per frame.
    What would be the best way to parallelize such loops? Can I use GPU computing or something like OpenMP, say spawning some 100 threads to do the calculations?
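
    For what it's worth, here is a minimal sketch of a row-parallel version using OpenMP. It is hedged: the packed YUYV byte order, the CLAMP macro, and the function name yuv422_to_rgb888 are assumptions made to keep the example self-contained, and the copy into vid_frame is omitted; adapt it to the real buffer layout used above.

    #include <stddef.h>
    #include <stdint.h>

    /* Hedged sketch: convert packed YUYV (YUV 4:2:2) to RGB888, splitting rows across cores. */
    #define CLAMP(x) ((x) < 0.0 ? 0 : ((x) > 255.0 ? 255 : (uint8_t)(x)))

    static void yuv422_to_rgb888(const uint8_t *src, uint8_t *dst,
                                 int image_width, int image_height)
    {
        /* Each row is independent, so the outer loop can be parallelized directly.
         * schedule(static) gives every thread a contiguous block of rows. */
        #pragma omp parallel for schedule(static)
        for (int line = 0; line < image_height; line++) {
            const uint8_t *s = src + (size_t)line * image_width * 2; /* 2 bytes per pixel in YUYV */
            uint8_t *d = dst + (size_t)line * image_width * 3;       /* 3 bytes per pixel in RGB888 */
            for (int column = 0; column < image_width; column += 2) { /* assumes an even width */
                double y0 = s[0], u = s[1], y1 = s[2], v = s[3];
                d[0] = CLAMP(y0 + 1.402 * (v - 128.0));
                d[1] = CLAMP(y0 - 0.344 * (u - 128.0) - 0.714 * (v - 128.0));
                d[2] = CLAMP(y0 + 1.772 * (u - 128.0));
                d[3] = CLAMP(y1 + 1.402 * (v - 128.0));
                d[4] = CLAMP(y1 - 0.344 * (u - 128.0) - 0.714 * (v - 128.0));
                d[5] = CLAMP(y1 + 1.772 * (u - 128.0));
                s += 4;
                d += 6;
            }
        }
    }

    This needs to be compiled with -fopenmp and optimization enabled. Another option, since the ffmpeg API is already in use, is to let libswscale (sws_getContext / sws_scale) perform the YUV422-to-RGB24 conversion, which is typically much faster than a scalar loop because it uses SIMD-optimized paths.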

    I also noticed higher frame rates with my laptop webcam as compared to the Microsoft USB Lifecam.

    Here are other details:

    • Ubuntu 12.04, ffmpeg 2.6
    • AMD A8 quad-core processor with 6 GB RAM
    • Encoder settings :
      • codec : AV_CODEC_ID_MPEG2VIDEO
      • bitrate : 4000000
      • time_base : (AVRational)1, 20
      • pix_fmt : AV_PIX_FMT_YUV420P
      • gop : 10
      • max_b_frames : 1
  • avconv transcoding drops frames

    7 March 2015, by ziggestardust

    I have a Logitech C920 that I can send perfectly fine to a Wowza server with this, with excellent results:

    ./capture  -o -c0|avconv -f alsa  -b 128k -i hw:1   -re -i - -vcodec copy  -ar 44100 -bufsize 1835k   -map 0:0 -map 1:0 -f flv rtmp://myhost/live/streamname

    (the capture program is from http://derekmolloy.ie/streaming-video-using-rtp-on-the-beaglebone-black/ ; I'm not using a BeagleBone, but his capture software runs excellently on my Debian PC)

    However, I want to store the H.264 stream from the camera to disk and later send it at a lower resolution with avconv, so I'm playing with libx264.
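
    A minimal sketch of that record-now, transcode-later idea, assuming the capture program writes the raw H.264 stream to stdout as in the command above; the tee pipeline, the /tmp paths and the 640x360 target are illustrative only:

    # Keep streaming with -vcodec copy (which already works) while tee saves the raw H.264 to disk
    ./capture -o -c0 | tee /tmp/recording.h264 | avconv -f alsa -b 128k -i hw:1 -re -i - -vcodec copy -ar 44100 -bufsize 1835k -map 0:0 -map 1:0 -f flv rtmp://myhost/live/streamname

    # Later, transcode the stored stream to a lower resolution offline
    avconv -i /tmp/recording.h264 -vcodec libx264 -s 640x360 -f mp4 /tmp/recording_360p.mp4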

    It seems I can't even do the live libx264 encode without getting drops:

    ./capture  -o -c0|avconv -f alsa  -b 128k -i hw:1   -re -i - -vcodec libx264  -ar 44100 -bufsize 1835k   -map 0:0 -map 1:0 -f flv rtmp://myhost/live/streamname

    This is what avconv '-v verbose' shows:

    [alsa @ 0x864b940] Estimating duration from bitrate, this may be inaccurate
    Input #0, alsa, from 'hw:1':
     Duration: N/A, start: 22758.998967, bitrate: N/A
       Stream #0.0: Audio: pcm_s16le, 32000 Hz, 2 channels, s16, 1024 kb/s
    .................................................................................................................................[h264 @ 0x8659400] max_analyze_duration reached
    [h264 @ 0x8659400] Estimating duration from bitrate, this may be inaccurate
    Input #1, h264, from 'pipe:':
     Duration: N/A, bitrate: N/A
       Stream #1.0: Video: h264 (Constrained Baseline), yuvj420p, 1280x720 [PAR 1:1 DAR 16:9], 25 fps, 25 tbr, 1200k tbn, 48 tbc
    Parsing...
    Parsed protocol: 0

    ... and then this:

    Output #0, flv, to 'rtmp://mystream/live/streamname':
     Metadata:
       encoder         : Lavf53.21.1
       Stream #0.0: Audio: libmp3lame, 44100 Hz, 2 channels, s16, 200 kb/s
       Stream #0.1: Video: libx264, yuvj420p, 1280x720 [PAR 1:1 DAR 16:9], q=-1--1, 1k tbn, 25 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (pcm_s16le -> libmp3lame)
     Stream #1:0 -> #0:1 (h264 -> libx264)
    Press ctrl-c to stop encoding
    [alsa @ 0x864b940] ALSA buffer xrun.
    *** drop!
       Last message repeated 39 times
    *** drop!11 fps=  0 q=0.0 size=       0kB time=0.03 bitrate= 116.9kbits/s dup=0 drop=40    
       Last message repeated 46 times
    *** drop!22 fps= 21 q=0.0 size=       0kB time=0.03 bitrate= 116.9kbits/s dup=0 drop=87    
    .    Last message repeated 20 timesss
    .....*** drop!   27 fps= 13 q=0.0 size=       0kB time=0.03 bitrate= 116.9kbits/s dup=0 drop=108    
       Last message repeated 3 timess
    ..*** drop! fps= 10 q=0.0 size=       0kB time=0.03 bitrate= 116.9kbits/s dup=0 drop=112    
       Last message repeated 5 timesss
    .*** drop!0 fps=  9 q=0.0 size=       0kB time=0.03 bitrate= 116.9kbits/s dup=0 drop=118    
    .    Last message repeated 5 timess
    .*** drop!2 fps=  9 q=0.0 size=       0kB time=0.03 bitrate= 116.9kbits/s dup=0 drop=124    
       Last message repeated 6 timesss
    .*** drop!3 fps=  8 q=0.0 size=       0kB time=0.03 bitrate= 116.9kbits/s dup=0 drop=131    
    ^C    Last message repeated 1 times
    *** drop!
       Last message repeated 3 times

    Any help appreciated