
Media (2)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
Other articles (38)
-
Customizing categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be thought of as a rubrique (section).
For a document of type category, the fields offered by default are: Texte
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
It is also in this configuration section that you can indicate the (...) -
Other interesting software
12 April 2011
We do not claim to be the only ones doing what we do... and we certainly do not claim to be the best either... We simply try to do what we do well, and better and better...
The following list covers software that more or less aims to do what MediaSPIP does, or that MediaSPIP more or less tries to do likewise, in no particular order...
We do not know them and have not tried them, but you may want to take a look.
Videopress
Website: (...) -
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded to MP4, Ogv and WebM (supported by HTML5), with MP4 also used for Flash playback.
Audio files are encoded to MP3 and Ogg (supported by HTML5), with MP3 also used for Flash playback.
Where possible, text is analyzed to retrieve the data needed for indexing by search engines, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
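To make the conversion step above concrete, here is a minimal sketch (illustrative only, not MediaSPIP's actual pipeline; the file names and encoder settings are assumptions) of what transcoding an upload into the formats listed above can look like with ffmpeg:

import subprocess

# Illustrative only: MediaSPIP configures its real pipeline inside SPIP.
# This simply shows the kind of ffmpeg calls such a conversion implies.
def encode_for_web(source):
    targets = {
        "out.webm": ["-c:v", "libvpx", "-c:a", "libvorbis"],    # WebM (HTML5)
        "out.ogv": ["-c:v", "libtheora", "-c:a", "libvorbis"],  # Ogv (HTML5)
        "out.mp4": ["-c:v", "libx264", "-c:a", "aac"],          # MP4 (Flash/HTML5)
    }
    for output, codec_args in targets.items():
        subprocess.run(["ffmpeg", "-y", "-i", source, *codec_args, output], check=True)

encode_for_web("upload.mov")  # hypothetical uploaded file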
On other sites (8104)
-
How to save last 30 seconds of video in py
5 June 2024, by Mateus Coelho
I want the last 30 seconds to be recorded every time I press Enter and sent to the cloud. For example, if I press it at 00:10:30, I want a video that covers 00:10:00 to 00:10:30, and if I press it again at 00:10:32, I need another, different video that covers 00:10:02 to 00:10:32.


I think I have a problem where I will always end up reading the same buffer segment for the last 30 seconds. Is there any approach so that whenever I press Enter I retrieve a unique video? Is my current approach the best one for this problem, or should I use something else?


import subprocess
import os
import keyboard
from datetime import datetime
from google.cloud import storage

# Configuration
STATE = "mg"
CITY = "belohorizonte"
COURT = "duna"
RTSP_URL = "rtsp://Apertai:130355va@192.168.0.2/stream1"
BUCKET_NAME = "apertai-cloud"
CREDENTIALS_PATH = "C:/Users/Abidu/ApertAI/key.json"

def start_buffer_stream():
 # Command for continuous buffer that overwrites itself every 30 seconds
 buffer_command = [
 'ffmpeg',
 '-i', RTSP_URL,
 '-map', '0',
 '-c', 'copy',
 '-f', 'segment',
 '-segment_time', '30', # Duration of each segment
 '-segment_wrap', '2', # Number of segments to wrap around
 '-reset_timestamps', '1', # Reset timestamps at the start of each segment
 'buffer-%03d.ts' # Save segments with a numbering pattern
 ]
 return subprocess.Popen(buffer_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

def save_last_30_seconds_from_buffer(buffer_file):
 datetime_now = datetime.now()
 datetime_now_formatted = f"{datetime_now.day:02}{datetime_now.month:02}{datetime_now.year}-{datetime_now.hour:02}{datetime_now.minute:02}"
 output_file_name = os.path.abspath(f"{STATE}-{CITY}-{COURT}-{datetime_now_formatted}.mp4")

 # Copy the most recent buffer segment to the output file
 save_command = [
 'ffmpeg',
 '-i', buffer_file,
 '-c', 'copy',
 output_file_name
 ]
 subprocess.run(save_command, check=True)
 print(f"Saved last 30 seconds: {output_file_name}")
 return output_file_name

def upload_to_google_cloud(file_name):
 client = storage.Client.from_service_account_json(CREDENTIALS_PATH)
 bucket = client.bucket(BUCKET_NAME)
 blob = bucket.blob(os.path.basename(file_name).replace("-", "/"))
 blob.upload_from_filename(file_name, content_type='application/octet-stream')
 print(f"Uploaded {file_name} to {BUCKET_NAME}")
 os.remove(file_name) # Clean up the local file

def main():
 print("Starting continuous buffer for RTSP stream...")
 start_time = datetime.now()
 buffer_process = start_buffer_stream()
 print("Press 'Enter' to save the last 30 seconds of video...")

 while True:
 # Verify if 30 seconds has passed since start
 if keyboard.is_pressed('enter'):
 print("Saving last 30 seconds of video...")
 elapsed_time = (datetime.now() - start_time).total_seconds()
 # Determine which buffer segment to save
 if elapsed_time % 60 < 30:
 buffer_file = 'buffer-000.ts'
 else:
 buffer_file = 'buffer-001.ts'
 final_video = save_last_30_seconds_from_buffer(buffer_file)
 upload_to_google_cloud(final_video)

if __name__ == "__main__":
 main()
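One way to avoid always reading from the same segment would be to join the two wrapping segment files in chronological order and then keep only the final 30 seconds, so every key press yields a distinct clip. The following is only a sketch under the assumptions of the code above (the two buffer-*.ts files are in the working directory and ffmpeg is on the PATH); with -c copy the cuts land on keyframes, so the clip length is only approximately 30 seconds:

import os
import subprocess

def assemble_last_30_seconds(output_file, segments=("buffer-000.ts", "buffer-001.ts")):
    # Oldest segment first, so the concatenation is in chronological order.
    ordered = sorted((s for s in segments if os.path.exists(s)), key=os.path.getmtime)
    with open("concat_list.txt", "w") as f:
        for segment in ordered:
            f.write(f"file '{os.path.abspath(segment)}'\n")

    # 1) Join the rotating segments without re-encoding.
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", "concat_list.txt", "-c", "copy", "joined.ts"], check=True)

    # 2) Keep only the last 30 seconds (-sseof seeks relative to the end of the input).
    subprocess.run(["ffmpeg", "-y", "-sseof", "-30", "-i", "joined.ts",
                    "-c", "copy", output_file], check=True)
    os.remove("joined.ts")
    os.remove("concat_list.txt")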





-
Problem : FFmpeg and C++ extract and save frame
26 May 2021, by Simba_cl25
I am doing a project where I must do the following: extract frames (along with the associated KLV metadata) from a video at a given interval (for the moment, every 10 seconds).
I have followed code examples found on the internet, but I get an error that I cannot find a solution to.


extern "C" {
#include <libavcodec></libavcodec>avcodec.h>
#include <libavformat></libavformat>avformat.h>
#include <libavdevice></libavdevice>avdevice.h>
#include <libswscale></libswscale>swscale.h>
#include <libavfilter></libavfilter>avfilter.h>
#include <libswresample></libswresample>swresample.h>
#include <libavutil></libavutil>avutil.h>
#include <libavutil></libavutil>imgutils.h> 
}

static void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame);


int main(int argc, const char * argv[])
{
 AVFormatContext *pFormatCtx;
 int i, videoStream;
 AVCodecContext *pCodecCtx = NULL;
 const AVCodec *pCodec = NULL;
 AVFrame *pFrame;
 AVFrame *pFrameRGB;
 AVPacket *packet = av_packet_alloc();
 AVStream *pStream;
 int numBytes;
 int64_t Duration;
 uint8_t *buffer;
 bool frameFinished = false;

 // Open video file - check for errors
 pFormatCtx = 0;
 if (avformat_open_input(&pFormatCtx, "00Video\\VideoR.ts", 0, 0) != 0)
 return -1; 

 if (avformat_find_stream_info(pFormatCtx, 0) < 0)
 return -1;

 if (av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, 0, 0) <0)
 return -1;

 av_dump_format(pFormatCtx, 0, "00Video\\VideoR.ts", false);


 // Find the first video stream
 videoStream = -1;
 for (i = 0; i < pFormatCtx->nb_streams; i++)
 if (pFormatCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
 {
 videoStream = i;
 break;
 }
 if (videoStream == -1)
 return -1; 
 


 // Find the decoder for the video stream
 pCodec = avcodec_find_decoder(pFormatCtx->streams[videoStream]->codecpar->codec_id);
 pCodecCtx = avcodec_alloc_context3(pCodec);

 if (pCodec == NULL)
 {
 fprintf(stderr, "Codec not found\n");
 return -1; 
 }


 if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
 {
 fprintf(stderr, "Could not open codec\n");
 return -1; 
 }

 // Hack to correct wrong frame rates that seem to be generated by some codecs
 if (pCodecCtx->time_base.num > 1000 && pCodecCtx->time_base.den == 1)
 pCodecCtx->time_base.den = 1000;




 // Allocate video frame - original frame 
 pFrame = av_frame_alloc();

 if (!pFrame) {
 fprintf(stderr, "Could not allocate video frame\n");
 return -1;
 }


 // Allocate an AVFrame structure
 pFrameRGB = av_frame_alloc();

 if (pFrameRGB == NULL)
 {
 fprintf(stderr, "Could not allocate video RGB frame\n");
 return -1;
 }

 
 Duration = av_rescale_q(pFormatCtx->streams[videoStream]->duration, pFormatCtx->streams[videoStream]->time_base, { 1,1000 }); // stream duration in milliseconds
 
 numBytes = av_image_get_buffer_size(pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, 0);
 buffer = (uint8_t*)av_malloc(numBytes * sizeof(uint8_t));


 av_image_fill_arrays(pFrameRGB->data, pFrameRGB->linesize, buffer, AV_PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height, 1);
 
 
 
 
 i = 0;
 while (av_read_frame(pFormatCtx, packet) >= 0)
 {
 // Is this a packet from the video stream?
 if (packet->stream_index == videoStream)
 {
 int ret;
 ret = avcodec_send_packet(pCodecCtx, packet);
 if (ret < 0 || ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
 fprintf(stderr, "Error sending a packet for decoding\n");
 //break;
 }
 while (ret >= 0) {
 ret = avcodec_receive_frame(pCodecCtx, pFrame);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 return -1;
 else if (ret < 0) {
 fprintf(stderr, "Error during decoding\n");
 return -1;
 frameFinished = true;
 }


 // Did we get a video frame?
 if (frameFinished)
 {
 static struct SwsContext *img_convert_ctx;

 
 if (img_convert_ctx == NULL) {
 int w = pCodecCtx->width;
 int h = pCodecCtx->height;
 img_convert_ctx = sws_getContext(w, h,
 pCodecCtx->pix_fmt,
 w, h, AV_PIX_FMT_RGB24, SWS_FAST_BILINEAR,
 NULL, NULL, NULL);

 if (img_convert_ctx == NULL) {
 fprintf(stderr, "Cannot initialize the conversion context!\n");
 exit(1);
 }
 }

 int ret = sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0,
 pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);


 // Save the frame to disk
 if (i <= Duration)
 {
 SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, i);
 i += 10*1000;
 }
 }
 }

 
 }

 // Free the packet that was allocated by av_read_frame
 av_packet_unref(packet);
 }

 

 // Free the RGB image
 free(buffer);
 av_free(pFrameRGB);

 // Free the YUV frame
 av_free(pFrame);

 // Close the codec
 avcodec_close(pCodecCtx);

 // Close the video file
 avformat_close_input(&pFormatCtx);
 return 0;
 
}




static void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame)
{
 FILE *pFile;
 char szFilename[32];
 int y;


 // Open file
 sprintf(szFilename, "Im\\frame%d.png", iFrame);
 pFile = fopen(szFilename, "wb");
 if (pFile == NULL)
 return;

 // Write header
 fprintf(pFile, "P6\n%d %d\n255\n", width, height);
 // Write pixel data
 for (y = 0; y < height; y++)
 fwrite(pFrame->data[0] + y * pFrame->linesize[0], 1, width*3, pFile);

 // Close file
 fclose(pFile);
}



The error I get is:


[swscaler @ 03055A80] bad dst image pointers



I think this is because


numBytes = av_image_get_buffer_size(pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, 0);


returns a negative value (-22), but I don't know why.


Thanks,


-
CMD Batch Variable Won't Save FFprobe Output
15 July 2017, by Matt McManis
I have a CMD batch script that converts a folder of mp4 videos to webm.
You will need:
- FFmpeg/FFprobe installed and set in the Environment Variables so they can be run from CMD.
- A folder with an mp4 for FFprobe to parse.
To keep it simple, this is only the first part of the script, showing the Video Bitrate variable. Here is the full script, just replace the paths: https://pastebin.com/raw/3ng77Exz
How the script works:
- Loops through all videos in the folder.
- Has FFprobe parse the video's bitrate and save it to %V and %vBitrate%.
- Has FFmpeg use %V, so that -b:v %V becomes the parsed value, e.g. -b:v 9401k.
- Converts each video from mp4 to webm using the parsed bitrate.
Problem
I can't get FFprobe's output to save to the variable. I've come up with a workaround: first save the bitrate value to a temp file, then import that into the %vBitrate% variable.
Example: (%V > tmp_vBitrate) & SET /p vBitrate= < tmp_vBitrate
Works
Temp File Variable
cd "C:\Users\Matt\Videos\" && for %f in (*.mp4) do ffprobe -i "C:\Users\Matt\Desktop\Test\%~f" -select_streams v:0 -show_entries stream=bit_rate -v quiet -of csv="p=0" & for /f "tokens=*" %V in ("ffprobe -i "%~f" -select_streams v:0 -show_entries stream=bit_rate -v quiet -of csv=p=0") do (echo ) & (%V > tmp_vBitrate) & SET /p vBitrate= < tmp_vBitrate & del tmp_vBitrate & for /F %V in ('echo %vBitrate%') do (echo %V)
Does Not Work
Memory Variable
cd "C:\Users\Matt\Videos\" && for %f in (*.mp4) do ffprobe -i "C:\Users\Matt\Desktop\Test\%~f" -select_streams v:0 -show_entries stream=bit_rate -v quiet -of csv="p=0" & for /f "tokens=*" %V in ("ffprobe -i "%~f" -select_streams v:0 -show_entries stream=bit_rate -v quiet -of csv=p=0") do (echo ) & SET vBitrate=%V & for /F %V in ('echo %vBitrate%') do (echo %V)
Testing It
Run the first command. When it is finished, type echo %vBitrate% in CMD and press Enter. You'll see the bitrate of the last mp4 file parsed.
Do the same for the second command and you'll see it doesn't work.
Solution
I would like to get rid of the temp file variable and get the second command to work: change (%V > tmp_vBitrate) & SET /p vBitrate= < tmp_vBitrate to just SET vBitrate=%V.
Maybe this whole thing can be simplified? Am I using the variables wrong?
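As a point of comparison only (not a cmd.exe fix), the underlying step of reading ffprobe's output straight into a variable with no temp file looks like this in a short Python sketch; the folder path is taken from the question and the conversion settings are assumptions:

import subprocess
from pathlib import Path

# Illustrative sketch, not a cmd.exe fix: capture ffprobe's bitrate output
# directly into a variable, then feed it to ffmpeg for the webm conversion.
for video in Path(r"C:\Users\Matt\Videos").glob("*.mp4"):
    result = subprocess.run(
        ["ffprobe", "-i", str(video), "-select_streams", "v:0",
         "-show_entries", "stream=bit_rate", "-v", "quiet", "-of", "csv=p=0"],
        capture_output=True, text=True, check=True)
    v_bitrate = f"{int(result.stdout.strip()) // 1000}k"  # e.g. "9401k"
    subprocess.run(["ffmpeg", "-i", str(video), "-b:v", v_bitrate,
                    str(video.with_suffix(".webm"))], check=True)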