Advanced search

Media (33)

Keyword: - Tags -/creative commons

Other articles (57)

  • Customize by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • Making files available

    14 April 2011

    By default, when it is initialized, MediaSPIP does not allow visitors to download the files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
    However, it is easy to allow visitors to access these documents in various forms.
    All of this is handled in the skeleton configuration page. You need to go to the channel's administration area and choose in the navigation (...)

On other sites (10879)

  • FFMPEG: decode h264 with multiple frames

    30 April 2018, by Jasim Khan Afridi

    Please excuse my limited knowledge of video decoding; I am new to this.

    I need to decode video frames from h264 to bitmap images in C#. I am using FFmpeg.AutoGen for this, but unfortunately I have failed to get any results.

    I have the following data at my disposal.

    • nFrameType = IFrame, PFrame or BFrame
    • nSequence = Frame sequence
    • nWidth = width of resolution
    • nHeight = height of resolution
    • nVideoSize = size of video data
    • pVideo = video data

    My current method is the following (similar to the example shown here):

    initialization

    private unsafe AVCodecContext* _codecContext;
    private unsafe AVFormatContext* _formatContext;
    private unsafe AVFrame* _frame;
    AVCodecID codecID;
    private unsafe AVCodec* _avCodec;
    private bool decoderInitialized = false;

    if (!decoderInitialized)
    {
        if (nCodecType == 2)
        {
            codecID = AVCodecID.AV_CODEC_ID_H264;
        }
        unsafe
        {
            // find the H.264 decoder, allocate and open a codec context, and allocate a reusable frame
            _avCodec = ffmpeg.avcodec_find_decoder(codecID);
            _codecContext = ffmpeg.avcodec_alloc_context3(_avCodec);
            ffmpeg.avcodec_open2(_codecContext, _avCodec, null);
            _frame = ffmpeg.av_frame_alloc();
        }
        decoderInitialized = true;
    }

    decoding

    AVPacket packet;
    ffmpeg.av_init_packet(&packet);
    packet.data = (byte*)pVideo;    // raw H.264 payload for this frame
    packet.size = (int)nVideoSize;
    int isFrameFinished = 0;

    int response = ffmpeg.avcodec_decode_video2(_codecContext, _frame, &isFrameFinished, &packet);

    I always get response = -22 and isFrameFinished = 0. I have a hunch that I am doing something wrong here, but I am unable to find a resource to point me in the right direction. For example, I know I need to use nFrameType (IFrame, BFrame and PFrame), but I don't know how. Further, I know that I need to use the width and height to decode the image properly, but again, I don't know how to go about it.
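
    For reference, -22 is AVERROR(EINVAL) ("invalid argument"). The packet-to-frame decode flow that libavcodec expects, and that FFmpeg.AutoGen wraps, can be sketched in Python with PyAV. This is only an illustrative sketch, not the asker's C# code: it assumes the raw H.264 payload is an Annex B byte stream (NAL units prefixed with 00 00 00 01 start codes), and "stream.h264" is a hypothetical file name.

import av  # PyAV wraps the same libavcodec API that FFmpeg.AutoGen mirrors in C#

codec = av.CodecContext.create("h264", "r")  # "r" = decoder

with open("stream.h264", "rb") as f:
    data = f.read()

count = 0
# The parser groups the raw bytes into complete packets (access units);
# libavcodec decoders expect whole packets rather than arbitrary byte chunks.
for packet in codec.parse(data):
    for frame in codec.decode(packet):
        # width/height come from the SPS in the stream; Pillow is needed for to_image()
        frame.to_image().save(f"frame_{count:04d}.png")
        count += 1

# Passing None flushes frames still buffered inside the decoder.
for frame in codec.decode(None):
    frame.to_image().save(f"frame_{count:04d}.png")
    count += 1

    In the C# / FFmpeg.AutoGen setup the corresponding calls on recent FFmpeg builds would be ffmpeg.av_parser_parse2 to assemble packets and ffmpeg.avcodec_send_packet / ffmpeg.avcodec_receive_frame instead of the deprecated avcodec_decode_video2; that mapping is offered as a suggestion rather than tested C# code.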

  • How to extract a small segment of an H264 stream from a complete H264 stream?

    13 August 2024, by 陈炫宇

    I cut a large image into many tiles (150,000), then encoded them into a .h264 file with FFmpeg at one frame per second and put it on the server. Now I want to decode an arbitrary picture from the .h264 file for analysis.
My idea for decoding locally is to record the indexes (i, j) of the tiles when cutting the pictures, and then let the decoder find the corresponding frame in the .h264 file according to the index; this works pretty well.
Now that I have put the .h264 file on the server (MinIO), I want to fetch only a section of the bitstream and decode it. Is this possible?

    


    (For example, my .h264 has 10 frames in total, and I want to decode the 5th frame and save it. Can I cache the 4th to 6th frames and then decode them?)

    


    This is the code I used to capture a small portion of the stream from the server:

    


@lru_cache(1024)
def read_small_cached(self, start, length, retry=5):
    ''' read cached and thread safe, but do not read too large'''
    if length == 0:
        return b''
    resp = None
    for _ in range(retry):
        try:
            _client = minio_get_client()
            resp = _client.get_object(
                self.bucket, self.filename, offset=start, length=length)
            oup = resp.read()
            return oup
        except:
            time.sleep(3)
        finally:
            if resp is not None:
                resp.close()
                resp.release_conn()
    raise Exception(f'can not read {self.name}')


    



    


    I tried parameters such as start = 0, length = 1024*1024*100, and I can read part of the picture.

    


    But when I used the function below to find IDR frames and get their positions, FFmpeg's decoder thought the fetched data was an invalid resource.

    


    Here is my code for finding key frames:

    


def get_all_content(file_path, search_bytes):

    if not file_path or not search_bytes:
        raise ValueError("file_path and search_bytes must not be empty")

    search_len = len(search_bytes)
    positions = []

    with open(file_path, 'rb') as fp:
        fp.seek(0, 2)  # seek to the end of the file (whence=2) to get its total size
        file_size = fp.tell()  # total file size in bytes
        fp.seek(0)  # back to the start of the file

        pos = 0
        while pos <= (file_size - search_len):
            fp.seek(pos)
            buf = fp.read(search_len)
            if len(buf) < search_len:
                break
            if buf == search_bytes:
                positions.append(pos)
            pos += 1

    return positions


    


    I have only recently started learning about H264, so perhaps I have a misunderstanding of bitstreams and H264. I hope to get some guidance. Thanks in advance!
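
    As a point of comparison, scanning for Annex B start codes and checking the NAL unit type (5 = IDR slice, 7 = SPS, 8 = PPS) is usually more reliable than searching for a fixed byte pattern, because the start-code bytes alone do not say which kind of NAL unit follows. A minimal sketch, assuming the file is a raw Annex B .h264 stream with no container; the file name is hypothetical:

def find_idr_offsets(data: bytes):
    """Return byte offsets of IDR NAL units (type 5) in an Annex B H.264 stream."""
    offsets = []
    i = 0
    while True:
        # Find the next 3-byte start code; a 4-byte start code ends with the same 3 bytes.
        i = data.find(b'\x00\x00\x01', i)
        if i == -1 or i + 3 >= len(data):
            break
        nal_type = data[i + 3] & 0x1F  # low 5 bits of the NAL header = nal_unit_type
        if nal_type == 5:              # coded slice of an IDR picture
            # step back one byte if this was a 4-byte (00 00 00 01) start code
            start = i - 1 if i > 0 and data[i - 1] == 0 else i
            offsets.append(start)
        i += 3
    return offsets


with open('tiles.h264', 'rb') as f:   # hypothetical local copy of the stream
    stream = f.read()

idr_offsets = find_idr_offsets(stream)
print(f'found {len(idr_offsets)} IDR frames, first offsets: {idr_offsets[:5]}')

    Each byte range [idr_offsets[k], idr_offsets[k+1]) could then be fetched with the ranged MinIO read above and decoded on its own, provided the SPS and PPS NAL units (usually at the very start of the file) are prepended so the decoder knows the picture parameters. Whether frame 5 can be decoded from a short segment also depends on the encoder's GOP settings: if it is not an IDR frame itself, decoding has to start from the preceding IDR frame.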

    


  • How to read frames of a video and write them to another video output using FFMPEG and nodejs

    29 December 2023, by Aviato

    I am working on a project where I need to process video frames one at a time in Node.js. I aim to avoid storing all frames in memory or on the filesystem due to resource constraints, and I plan to use ffmpeg in child processes for the video processing.
I first tried reading a video file and writing its frames to the filesystem, for testing purposes:

    


const { spawn } = require('child_process');

const ffmpegProcess = spawn('ffmpeg', [
  '-i', videoFile,
  'testfolder/%04d.png' // write frames as numbered PNG files
]);


    


    The above code works fine; it saves the video frames as PNG files in the filesystem. Now, instead of saving them to the filesystem, I want to read the frames one at a time, run them through an image manipulation library, and then write the final edited frames to another video as output.

    


    I tried this:

    


    const ffmpegProcess = spawn('ffmpeg', [
  '-i', videoFile,
  'pipe:1' // Output frames to stdout
]);

const ffmpegOutputProcess = spawn('ffmpeg', [
  '-i', '-',
  'outputFileName.mp4'
]);

ffmpegProcess.stdout.on('data', (data) => {
  // Process the frame data as needed
  console.log('Received frame data:');
  ffmpegOutputProcess.stdin.write(data)
});

ffmpegProcess.on('close', (code) => {
  if (code !== 0) {
    console.error(`ffmpeg process exited with code ${code}`);
  } else {
    console.log('ffmpeg process successfully completed');
    
  }
});

// Handle errors
ffmpegProcess.on('error', (err) => {
  console.error('Error while spawning ffmpeg:', err);
});


    


    But when I tried the above code, and also some other modifications to the input and output options of the command, I ran into the problems below:

    


      

    1. The ffmpeg process exited with code 1.
    2. The final output video was corrupted when trying to initialize the filters with the following command:


    


    
const ffmpegProcess = spawn('ffmpeg', [
  '-i', videoFile,
  '-f', 'rawvideo',
  '-pix_fmt', 'rgb24',
  'pipe:1' // Output frames to stdout
]);

const ffmpegOutputCommand = [
  '-f', 'rawvideo',
  '-pix_fmt', 'rgb24',
  '-s', '1920x1080',
  '-r', '30',
  '-i', '-',
  '-c:v', 'libx264',
  '-pix_fmt', 'yuv420p',
  outputFileName
];
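
    For comparison, here is a minimal Python sketch of the same two-process pipeline: one ffmpeg decodes to raw RGB frames on stdout, each frame is read as an exact-sized chunk (so frame boundaries are known, unlike the arbitrary chunks delivered by 'data' events), and a second ffmpeg re-encodes from stdin. The 1920x1080 / 30 fps values mirror the command above and, like the file names, are assumptions that must match the real input.

import subprocess

WIDTH, HEIGHT, FPS = 1920, 1080, 30
FRAME_SIZE = WIDTH * HEIGHT * 3  # rgb24: 3 bytes per pixel

# decoder: input video -> raw rgb24 frames on stdout
decoder = subprocess.Popen(
    ['ffmpeg', '-i', 'input.mp4',
     '-f', 'rawvideo', '-pix_fmt', 'rgb24', 'pipe:1'],
    stdout=subprocess.PIPE)

# encoder: raw rgb24 frames on stdin -> H.264 mp4
encoder = subprocess.Popen(
    ['ffmpeg', '-y',
     '-f', 'rawvideo', '-pix_fmt', 'rgb24',
     '-s', f'{WIDTH}x{HEIGHT}', '-r', str(FPS),
     '-i', '-',
     '-c:v', 'libx264', '-pix_fmt', 'yuv420p', 'output.mp4'],
    stdin=subprocess.PIPE)

while True:
    frame = decoder.stdout.read(FRAME_SIZE)  # blocks until one full frame or EOF
    if len(frame) < FRAME_SIZE:
        break
    # ...edit the raw frame bytes here with an image library before writing...
    encoder.stdin.write(frame)

encoder.stdin.close()   # signals end of input so the encoder can finalize the file
decoder.wait()
encoder.wait()

    The same chunking logic applies in Node.js: buffer the decoder's stdout until width * height * 3 bytes are available, treat that as one frame, and only then write it to the encoder's stdin.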


    


    Thank you so much in advance :)