
Advanced search
Media (91)
-
Les Miserables
9 December 2019, by
Updated: December 2019
Language: French
Type: Text
-
VideoHandle
8 November 2019, by
Updated: November 2019
Language: French
Type: Video
-
Somos millones 1
21 July 2014, by
Updated: June 2015
Language: French
Type: Video
-
Un test - mauritanie
3 April 2014, by
Updated: April 2014
Language: French
Type: Text
-
Pourquoi Obama lit il mes mails ?
4 February 2014, by
Updated: February 2014
Language: French
-
IMG 0222
6 October 2013, by
Updated: October 2013
Language: French
Type: Image
Other articles (95)
-
Updating from version 0.1 to 0.2
24 June 2013, by
Explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What are the new features?
Software dependencies: use of the latest versions of FFmpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer (...)
-
Personalizing by adding your logo, banner, or background image
5 September 2013, by
Some themes support three personalization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013, by
Present the changes to your MediaSPIP, or news about your projects, using the news section.
In spipeo, MediaSPIP's default theme, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News creation form: for a document of type "news item", the default fields are: publication date (customize the publication date) (...)
On other sites (12824)
-
Problem: FFmpeg and C++ extract and save frame
26 May 2021, by Simba_cl25
I am doing a project where I must do the following: extract frames (along with the associated metadata, KLV) from a video at a given period (for the moment, every 10 seconds).
I have followed code found on the internet, but I get an error that I cannot find a solution to.


extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavdevice/avdevice.h>
#include <libswscale/swscale.h>
#include <libavfilter/avfilter.h>
#include <libswresample/swresample.h>
#include <libavutil/avutil.h>
#include <libavutil/imgutils.h>
}

static void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame);


int main(int argc, const char * argv[])
{
 AVFormatContext *pFormatCtx;
 int i, videoStream;
 AVCodecContext *pCodecCtx = NULL;
 const AVCodec *pCodec = NULL;
 AVFrame *pFrame;
 AVFrame *pFrameRGB;
 AVPacket *packet = av_packet_alloc();
 AVStream *pStream;
 int numBytes;
 int64_t Duration;
 uint8_t *buffer;
 bool frameFinished = false;

 // Open video file - check for errors
 pFormatCtx = 0;
 if (avformat_open_input(&pFormatCtx, "00Video\\VideoR.ts", 0, 0) != 0)
 return -1; 

 if (avformat_find_stream_info(pFormatCtx, 0) < 0)
 return -1;

 if (av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, 0, 0) <0)
 return -1;

 av_dump_format(pFormatCtx, 0, "00Video\\VideoR.ts", false);


 // Find the first video stream
 videoStream = -1;
 for (i = 0; i < pFormatCtx->nb_streams; i++)
 if (pFormatCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
 {
 videoStream = i;
 break;
 }
 if (videoStream == -1)
 return -1; 
 


 // Find the decoder for the video stream
 pCodec = avcodec_find_decoder(pFormatCtx->streams[videoStream]->codecpar->codec_id);
 pCodecCtx = avcodec_alloc_context3(pCodec);

 if (pCodec == NULL)
 {
 fprintf(stderr, "Codec not found\n");
 return -1; 
 }


 if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
 {
 fprintf(stderr, "Could not open codec\n");
 return -1; 
 }

 // Hack to correct wrong frame rates that seem to be generated by some codecs
 if (pCodecCtx->time_base.num > 1000 && pCodecCtx->time_base.den == 1)
 pCodecCtx->time_base.den = 1000;




 // Allocate video frame - original frame 
 pFrame = av_frame_alloc();

 if (!pFrame) {
 fprintf(stderr, "Could not allocate video frame\n");
 return -1;
 }


 // Allocate an AVFrame structure
 pFrameRGB = av_frame_alloc();

 if (pFrameRGB == NULL)
 {
 fprintf(stderr, "Could not allocate video RGB frame\n");
 return -1;
 }

 
 pStream = pFormatCtx->streams[videoStream];
 Duration = av_rescale_q(pStream->duration, pStream->time_base, { 1, 1000 });
 
 numBytes = av_image_get_buffer_size(pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, 0);
 buffer = (uint8_t*)av_malloc(numBytes * sizeof(uint8_t));


 av_image_fill_arrays(pFrameRGB->data, pFrameRGB->linesize, buffer, AV_PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height, 1);
 
 
 
 
 i = 0;
 while (av_read_frame(pFormatCtx, packet) >= 0)
 {
 // Is this a packet from the video stream?
 if (packet->stream_index == videoStream)
 {
 int ret;
 ret = avcodec_send_packet(pCodecCtx, packet);
 if (ret < 0 || ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
 fprintf(stderr, "Error sending a packet for decoding\n");
 //break;
 }
 while (ret >= 0) {
 ret = avcodec_receive_frame(pCodecCtx, pFrame);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 break; // needs more input, or end of stream: not a fatal error
 else if (ret < 0) {
 fprintf(stderr, "Error during decoding\n");
 return -1;
 }
 frameFinished = true;


 // Did we get a video frame?
 if (frameFinished)
 {
 static struct SwsContext *img_convert_ctx;

 
 if (img_convert_ctx == NULL) {
 int w = pCodecCtx->width;
 int h = pCodecCtx->height;
 img_convert_ctx = sws_getContext(w, h,
 pCodecCtx->pix_fmt,
 w, h, AV_PIX_FMT_RGB24, SWS_FAST_BILINEAR,
 NULL, NULL, NULL);

 if (img_convert_ctx == NULL) {
 fprintf(stderr, "Cannot initialize the conversion context!\n");
 exit(1);
 }
 }

 int ret = sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0,
 pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);


 // Save the frame to disk
 if (i <= Duration)
 {
 SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, i);
 i += 10*1000;
 }
 }
 }

 
 }

 // Free the packet that was allocated by av_read_frame
 av_packet_unref(packet);
 }

 

 // Free the RGB image (buffer was allocated with av_malloc)
 av_free(buffer);
 av_free(pFrameRGB);

 // Free the YUV frame
 av_free(pFrame);

 // Close the codec
 avcodec_close(pCodecCtx);

 // Close the video file
 avformat_close_input(&pFormatCtx);
 return 0;
 
}




static void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame)
{
 FILE *pFile;
 char szFilename[32];
 int y;


 // Open file
 sprintf(szFilename, "Im\\frame%d.ppm", iFrame); // the P6 header below is PPM, not PNG
 pFile = fopen(szFilename, "wb");
 if (pFile == NULL)
 return;

 // Write header
 fprintf(pFile, "P6\n%d %d\n255\n", width, height);
 // Write pixel data
 for (y = 0; y < height; y++)
 fwrite(pFrame->data[0] + y * pFrame->linesize[0], 1, width*3, pFile);

 // Close file
 fclose(pFile);
}



The error I get is :


[swscaler @ 03055A80] bad dst image pointers



I think it is because


numBytes = av_image_get_buffer_size(pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, 0);



returns a negative value (-22), but I don't know why.


Thanks,


-
How to save the last 30 seconds of video in Python
5 June 2024, by Mateus Coelho
I want the last 30 seconds to be recorded every time I press Enter and sent to the cloud. For example, if I press it at 00:10:30, I want a video recorded from 00:10:00 to 00:10:30, and if I then press it at 00:10:32, I need another, different video covering 00:10:02 to 00:10:32.


I think I have a problem where I will always end up recovering the same buffer for the last 30 seconds. Is there an approach so that whenever I press Enter I retrieve a unique video? Is my current approach the best for this problem, or should I use something else?


import subprocess
import os
import keyboard
from datetime import datetime
from google.cloud import storage

# Configuration
STATE = "mg"
CITY = "belohorizonte"
COURT = "duna"
RTSP_URL = "rtsp://Apertai:130355va@192.168.0.2/stream1"
BUCKET_NAME = "apertai-cloud"
CREDENTIALS_PATH = "C:/Users/Abidu/ApertAI/key.json"

def start_buffer_stream():
 # Command for continuous buffer that overwrites itself every 30 seconds
 buffer_command = [
 'ffmpeg',
 '-i', RTSP_URL,
 '-map', '0',
 '-c', 'copy',
 '-f', 'segment',
 '-segment_time', '30', # Duration of each segment
 '-segment_wrap', '2', # Number of segments to wrap around
 '-reset_timestamps', '1', # Reset timestamps at the start of each segment
 'buffer-%03d.ts' # Save segments with a numbering pattern
 ]
 return subprocess.Popen(buffer_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

def save_last_30_seconds_from_buffer(buffer_file):
 datetime_now = datetime.now()
 datetime_now_formatted = f"{datetime_now.day:02}{datetime_now.month:02}{datetime_now.year}-{datetime_now.hour:02}{datetime_now.minute:02}"
 output_file_name = os.path.abspath(f"{STATE}-{CITY}-{COURT}-{datetime_now_formatted}.mp4")

 # Copy the most recent buffer segment to the output file
 save_command = [
 'ffmpeg',
 '-i', buffer_file,
 '-c', 'copy',
 output_file_name
 ]
 subprocess.run(save_command, check=True)
 print(f"Saved last 30 seconds: {output_file_name}")
 return output_file_name

def upload_to_google_cloud(file_name):
 client = storage.Client.from_service_account_json(CREDENTIALS_PATH)
 bucket = client.bucket(BUCKET_NAME)
 blob = bucket.blob(os.path.basename(file_name).replace("-", "/"))
 blob.upload_from_filename(file_name, content_type='application/octet-stream')
 print(f"Uploaded {file_name} to {BUCKET_NAME}")
 os.remove(file_name) # Clean up the local file

def main():
 print("Starting continuous buffer for RTSP stream...")
 start_time = datetime.now()
 buffer_process = start_buffer_stream()
 print("Press 'Enter' to save the last 30 seconds of video...")

 while True:
 # Verify if 30 seconds has passed since start
 if keyboard.is_pressed('enter'):
 print("Saving last 30 seconds of video...")
 elapsed_time = (datetime.now() - start_time).total_seconds()
 # Determine which buffer segment to save
 if elapsed_time % 60 < 30:
 buffer_file = 'buffer-000.ts'
 else:
 buffer_file = 'buffer-001.ts'
 final_video = save_last_30_seconds_from_buffer(buffer_file)
 upload_to_google_cloud(final_video)

if __name__ == "__main__":
 main()





-
Revision d205335060: [svc] Finalize spatial svc first pass rate control 1. Save stats for each
19 March 2014, by Minghai Shang
Changed Paths:
Modify /examples/vp9_spatial_scalable_encoder.c
Modify /test/svc_test.cc
Modify /vp9/encoder/vp9_firstpass.c
Modify /vp9/encoder/vp9_firstpass.h
Modify /vp9/encoder/vp9_onyx_if.c
Modify /vp9/encoder/vp9_onyx_int.h
Modify /vp9/encoder/vp9_svc_layercontext.h
Modify /vpx/src/svc_encodeframe.c
Modify /vpx/vpx_encoder.h
[svc] Finalize spatial svc first pass rate control
1. Save stats for each spatial layer
2. Add frame buffer management for svc first pass rc
3. Set default spatial layer to 1
4. Flush encoder at the end of stream in test app
This only supports spatial svc.
Change-Id: Ia89cfa87bb6394e6c0405b921d86c426d0a0c9ae