
Other articles (7)
-
Emballe médias: what is it for?
4 February 2011 — This plugin is designed to manage sites that publish documents of all types.
It creates "media" items, namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; at most a single document can be linked to a so-called "media" article;
-
Possible deployments
31 January 2010 — Two types of deployment are possible, depending on two aspects: the intended installation method (standalone or as a farm); the expected number of daily encodings and the expected traffic.
Encoding video is a heavy process that consumes a great deal of system resources (CPU and RAM), and all of this must be taken into account. This system is therefore only feasible on one or more dedicated servers.
Single-server version
The single-server version consists of using only one (...)
-
Custom menus
14 November 2010 — MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
This lets channel administrators fine-tune the configuration of these menus.
Menus created at site initialization
By default, three menus are created automatically when the site is initialized: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with skeletons based on Zpip; (...)
On other sites (4194)
-
ffmpeg - split video into multiple parts with different duration
16 April 2023, by Pierrou — In order to split very old episodes from my VHS rips, I would like to split video files into multiple parts according to timestamps in a CSV file:


file1;00:01:13.280;00:14:22.800;Part 1
file1;00:14:41.120;00:26:05.400;Part 2
file1;00:26:23.680;00:39:41.720;Part 3
file1;00:40:00.000;00:51:43.280;Part 4
file1;00:53:50.200;01:06:15.680;Part 5
file1;01:06:33.960;01:20:58.400;Part 6
file1;01:21:16.680;01:34:57.320;Part 7
file1;01:35:15.600;01:48:21.640;Part 8
file1;01:49:15.160;01:51:54.720;Part 9
file2;00:01:13.280;00:13:30.960;Part 1
file2;00:13:49.240;00:29:04.240;Part 2
file2;00:29:22.520;00:43:24.080;Part 3
file2;00:43:42.360;00:58:12.560;Part 4
file2;01:00:03.880;01:12:52.840;Part 5
file2;01:13:11.120;01:24:13.280;Part 6
file2;01:24:31.560;01:51:12.720;Part 7
file2;01:52:06.840;01:54:55.640;Part 8



So how can I generate a command line like the one below for each part?


ffmpeg -i file1.avi -c copy -ss 00:01:13.280 -to 00:14:22.800 file1/part1.avi



So I would like to keep each part in its own file and discard everything else.
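If it helps, here is a minimal sketch (my own, not a known tool) that reads the semicolon-separated file and generates one stream-copy command per row. It assumes the sources are named `<file>.avi` and that the per-file output directories (file1/, file2/) already exist:

```python
import csv

def build_commands(csv_path):
    """Turn 'file;start;end;label' rows into ffmpeg stream-copy commands."""
    cmds = []
    with open(csv_path, newline="") as f:
        for name, start, end, label in csv.reader(f, delimiter=";"):
            part = label.lower().replace(" ", "")   # "Part 1" -> "part1"
            cmds.append(f"ffmpeg -i {name}.avi -c copy "
                        f"-ss {start} -to {end} {name}/{part}.avi")
    return cmds

# Example: for cmd in build_commands("cuts.csv"): print(cmd)
```

Printing the commands first lets you review them before piping them into a shell. Note that with -c copy the cuts can only land on keyframes, so the boundaries may shift slightly from the timestamps in the file.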


-
Difference between 'display_aspect_ratio' and 'sample_aspect_ratio' in ffprobe [duplicate]
18 June 2018, by John Allard — This question already has an answer here:
ffmpeg scaling not working for video (1 answer)
I have an issue where a video plays in the correct 16:9 aspect ratio in VLC or QuickTime Player, but when I attempt to extract individual frames with ffmpeg, the frames come out with a 4:3 aspect ratio.
The ffprobe output for the video in question is as follows:
$ ffprobe -v error -select_streams v:0 -show_entries stream -of default=noprint_wrappers=1 -print_format json movie.mp4
{
    "programs": [

    ],
    "streams": [
        {
            "index": 0,
            "codec_name": "h264",
            "codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
            "profile": "Main",
            "codec_type": "video",
            "codec_time_base": "126669/6400000",
            "codec_tag_string": "avc1",
            "codec_tag": "0x31637661",
            "width": 2592,
            "height": 1944,
            "coded_width": 2592,
            "coded_height": 1944,
            "has_b_frames": 0,
            "sample_aspect_ratio": "4:3",
            "display_aspect_ratio": "16:9",
            "pix_fmt": "yuvj420p",
            "level": 50,
            "color_range": "pc",
            "color_space": "bt709",
            "color_transfer": "bt709",
            "color_primaries": "bt709",
            "chroma_location": "left",
            "refs": 1,
            "is_avc": "true",
            "nal_length_size": "4",
            "r_frame_rate": "25/1",
            "avg_frame_rate": "3200000/126669",
            "time_base": "1/12800",
            "start_pts": 0,
            "start_time": "0.000000",
            "duration_ts": 126682,
            "duration": "9.897031",
            "bit_rate": "4638928",
            "bits_per_raw_sample": "8",
            "nb_frames": "250",
            "disposition": {
                "default": 1,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0,
                "timed_thumbnails": 0
            },
            "tags": {
                "language": "und",
                "handler_name": "VideoHandler"
            }
        }
    ]
}
So it says
    "width": 2592,
    "height": 1944,
    "coded_width": 2592,
    "coded_height": 1944,
    "has_b_frames": 0,
    "sample_aspect_ratio": "4:3",
    "display_aspect_ratio": "16:9",
which seems odd to me. The width/height ratio is 4:3, the sample aspect ratio is 4:3, yet the display aspect ratio is 16:9?
Now, when I play this through VLC/QuickTime the video looks fine (screenshot below),
but if I run an ffmpeg command to extract individual frames from this video, they come out in 4:3:
ffmpeg -y -hide_banner -nostats -loglevel error -i movie.mp4 -vf select='eq(n\,10)+eq(n\,20)+eq(n\,30)+eq(n\,40)',scale=-1:640 -vsync 0 /tmp/ffmpeg_image_%04d.jpg
So I guess my questions are as follows:
- what is the relation between the display aspect ratio, the sample aspect ratio, and the width/height ratio?
- how do I get ffmpeg to output frames in the correct aspect ratio?
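On the first question: the display aspect ratio is the storage ratio (width/height) multiplied by the sample (pixel) aspect ratio, so non-square pixels make 4:3 storage display as 16:9. A quick check with the values from the ffprobe output above:

```python
from fractions import Fraction

# Values reported by ffprobe above.
width, height = 2592, 1944
sar = Fraction(4, 3)                  # sample (pixel) aspect ratio

storage_ar = Fraction(width, height)  # shape of the stored pixel grid: 4/3
dar = storage_ar * sar                # shape on screen: 16/9

print(storage_ar, dar)  # -> 4/3 16/9
```

For the second question, scaling by the SAR before the existing scale (for example `-vf scale=iw*sar:ih,setsar=1,...`) is the usual way to get square-pixel frames out of ffmpeg, though I have not tested it against this exact file.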
-
C# get dominant color in an image
22 January 2017, by CK13 — I'm building a program that makes screenshots from a video.
It extracts frames from the video (with ffmpeg) and then combines them into one file. All works fine, except sometimes I get (almost) black images, mostly at the beginning and end of the video.
A possible solution I can think of is to detect whether the extracted frame is dark; if it is, extract another frame from a slightly different time.
How can I detect whether the extracted frame is dark/black? Or is there another way to solve this?
private void getScreenshots_Click(object sender, EventArgs e)
{
    int index = 0;
    foreach (string value in this.filesList.Items)
    {
        string file = selectedFiles[index] + "\\" + value;

        // ------------------------------------------------
        // MediaInfo
        // ------------------------------------------------
        // https://github.com/Nicholi/MediaInfoDotNet
        //
        // get file width, height, and frame count
        //
        // get aspect ratio of the video
        // and calculate height of thumbnail
        // using width and aspect ratio
        //
        MediaInfo MI = new MediaInfo();
        MI.Open(file);
        var width = MI.Get(StreamKind.Video, 0, "Width");
        var height = MI.Get(StreamKind.Video, 0, "Height");
        decimal d = Decimal.Parse(MI.Get(StreamKind.Video, 0, "Duration"));
        decimal frameCount = Decimal.Parse(MI.Get(StreamKind.Video, 0, "FrameCount"));
        MI.Close();

        decimal ratio = Decimal.Divide(Decimal.Parse(width), Decimal.Parse(height));
        int newHeight = Decimal.ToInt32(Decimal.Divide(newWidth, ratio));
        decimal startTime = Decimal.Divide(d, totalImages);

        // totalImages - number of thumbnails the final image will have
        for (int x = 0; x < totalImages; x++)
        {
            // increase the time where the thumbnail is taken on each iteration
            decimal newTime = Decimal.Multiply(startTime, x);
            string time = TimeSpan.FromMilliseconds(double.Parse(newTime.ToString())).ToString(@"hh\:mm\:ss");
            string outputFile = this.tmpPath + "img-" + index + x + ".jpg";

            // create individual thumbnails with ffmpeg
            proc = new Process();
            proc.StartInfo.FileName = "ffmpeg.exe";
            proc.StartInfo.Arguments = "-y -seek_timestamp 1 -ss " + time + " -i \"" + file + "\" -frames:v 1 -qscale:v 3 \"" + outputFile + "\"";
            proc.Start();
            proc.WaitForExit();
        }

        // set width and height of final image
        int w = (this.cols * newWidth) + (this.spacing * this.cols + this.spacing);
        int h = (this.rows * newHeight) + (this.spacing * this.rows + this.spacing);
        int left, top, i = 0;

        // combine individual thumbnails into one image
        using (Bitmap bmp = new Bitmap(w, h))
        {
            using (Graphics g = Graphics.FromImage(bmp))
            {
                g.Clear(this.backgroundColor);
                // this.rows - number of rows
                for (int y = 0; y < this.rows; y++)
                {
                    // put images on a column
                    // this.cols - number of columns
                    // when x = number of columns go to next row
                    for (int x = 0; x < this.cols; x++)
                    {
                        Image imgFromFile = Image.FromFile(this.tmpPath + "img-" + index + i + ".jpg");
                        MemoryStream imgFromStream = new MemoryStream();
                        imgFromFile.Save(imgFromStream, imgFromFile.RawFormat);
                        imgFromFile.Dispose();
                        left = (x * newWidth) + ((x + 1) * this.spacing);
                        top = (this.spacing * (y + 1)) + (newHeight * y);
                        g.DrawImage(Image.FromStream(imgFromStream), left, top, newWidth, newHeight);
                        i++;
                    }
                }
            }
            // save the final image
            bmp.Save(selectedFiles[index] + "\\" + value + ".jpg");
        }
        index++;
    }
}
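One way to sketch the dark-frame check suggested in the question (shown in Python for brevity; the helper names and the threshold of 16 are my own guesses, not an established API): have ffmpeg decode the candidate frame as raw 8-bit grayscale and average the bytes. A mean luma near 0 means an (almost) black frame.

```python
import subprocess

def mean_luma(gray_bytes):
    """Mean of raw 8-bit grayscale samples (0 = black, 255 = white)."""
    return sum(gray_bytes) / len(gray_bytes)

def is_dark(gray_bytes, threshold=16):
    """True if the frame is (almost) black; the threshold is a guess to tune."""
    return mean_luma(gray_bytes) < threshold

def grab_gray_frame(video, time):
    """Decode one frame at `time` as raw grayscale bytes via ffmpeg."""
    return subprocess.run(
        ["ffmpeg", "-v", "error", "-ss", time, "-i", video,
         "-frames:v", "1", "-vf", "format=gray",
         "-f", "rawvideo", "pipe:1"],
        capture_output=True, check=True).stdout
```

If is_dark(grab_gray_frame(file, time)) is true, retry with a slightly shifted timestamp, as the question proposes. The equivalent check in the C# program above would be to average the pixel values of the extracted JPEG before compositing it.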