
Other articles (87)
-
Embellishing it visually
10 April 2011. MediaSPIP is based on a system of themes and templates ("squelettes"). The templates define the placement of information on the page for a specific use of the platform, and the themes define the overall graphic design.
Anyone can propose a new graphic theme or template and make it available to the community.
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
- implementation costs to be shared between several different projects / individuals
- rapid deployment of multiple unique sites
- creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (11252)
-
C# / FFMPEG - Is this the best way to programmatically combine multiple video files in different formats and encodings into one?
28 April 2021, by buggybud. I've been trying to concatenate multiple videos into one. These videos may have different file types and extensions. As it stands I am only working with MP4 files, but they have different resolutions, framerates, you name it.


After first following the Stack Overflow answers that suggested using an intermediate file (convert all the files into one format, then concatenate those), I also came across a solution that uses what I think is called the "concat" video filter.


This would let me skip the intermediate steps and combine all the files by specifying their individual settings within a single FFmpeg command.


-i 1.mp4 -i 2.mp4 -i 3.mp4 -filter_complex \
 "[0:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1,fps=30,format=yuv420p[v0];
 [1:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1,fps=30,format=yuv420p[v1];
 [2:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1,fps=30,format=yuv420p[v2];
 [v0][0:a][v1][1:a][v2][2:a]concat=n=3:v=1:a=1[v][a]" \
 -map "[v]" -map "[a]" -c:v libx264 -c:a aac -movflags +faststart output.mp4



Provided above is the snippet that shows how to combine three videos into one. I've used this snippet within my code, but rather than manually specifying the input files (and not finding a way to use the list file), I ended up with the very hacky solution of programmatically building the command parameters above.


var command = "";

// One "-i <path>" per input file.
for (var i = 0; i < video_paths.Count; i++)
{
    command += $"-i \"{video_paths[i]}\" ";
}

command += "-filter_complex ";
command += "\"";

// Normalise every input to 1280x720 @ 30 fps, yuv420p, labelled [v0], [v1], ...
for (var i = 0; i < video_paths.Count; i++)
{
    command += $"[{i}:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1,fps=30,format=yuv420p[v{i}];";
}

// Interleave the scaled video and original audio labels for the concat filter.
for (var i = 0; i < video_paths.Count; i++)
{
    command += $"[v{i}][{i}:a]";
}

command += $"concat=n={video_paths.Count}:v=1:a=1[v][a]\" ";
command += $"-map \"[v]\" -map \"[a]\" -c:v libx264 -c:a aac -movflags +faststart \"{path}\"";

ffmpeg(command);



The above code is the solution to my problem. It works.
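(As an aside, the "list file" mentioned above presumably refers to ffmpeg's concat demuxer, which reads input paths from a text file. It only applies when every input already shares the same codec, resolution and timebase, which is likely why it was a dead end for these mixed files. A rough sketch, with hypothetical paths:

# files.txt
file '/videos/1.mp4'
file '/videos/2.mp4'
file '/videos/3.mp4'

ffmpeg -f concat -safe 0 -i files.txt -c copy output.mp4

With -c copy nothing is re-encoded, which is exactly why the inputs must already match.)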


The reason I made this Stack Overflow question is the following: Is this the best way to programmatically build the arguments? What is the maximum string length of these arguments, given that my video paths are all absolute paths? How can I make this look nicer and less chaotic in code?


Bud
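For what it's worth, the same arguments can be assembled with far less += noise using LINQ. Below is a minimal sketch of the identical logic; video_paths, path and the ffmpeg() wrapper are the asker's own and assumed to exist:

using System.Linq;   // for Select

// One "-i <path>" per input file.
var inputs = string.Join(" ", video_paths.Select(p => $"-i \"{p}\""));

// Normalise each video stream to 1280x720 @ 30 fps yuv420p, labelled [v0], [v1], ...
var scaled = string.Join(";", video_paths.Select((_, i) =>
    $"[{i}:v]scale=1280:720:force_original_aspect_ratio=decrease," +
    $"pad=1280:720:-1:-1,setsar=1,fps=30,format=yuv420p[v{i}]"));

// Interleave the scaled video and original audio labels for the concat filter.
var pairs = string.Concat(video_paths.Select((_, i) => $"[v{i}][{i}:a]"));

var command =
    $"{inputs} -filter_complex \"{scaled};{pairs}" +
    $"concat=n={video_paths.Count}:v=1:a=1[v][a]\" " +
    $"-map \"[v]\" -map \"[a]\" -c:v libx264 -c:a aac -movflags +faststart \"{path}\"";

ffmpeg(command);

As for the length question: if the wrapper ultimately launches ffmpeg through CreateProcess on Windows, the entire command line is capped at 32,767 characters, so a very long list of absolute paths will eventually hit that ceiling; the concat demuxer's list file (see the aside above) is the usual escape hatch at that point.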


-
FFmpeg: applying multiple filters (logo overlay, brightness change and text overlay)
4 October 2018, by A.Butakidis. I am trying to apply three filters to a PNG file using ffmpeg on Android (I am using the WritingMinds lib).
So far I have managed to pull together this command:
-i /storage/emulated/0/videoApp/temp/firstFrameOfMergedVideo.png
-i /storage/emulated/0/videoApp/temp/logo.png
-filter_complex
(first filter: logo overlay)
[1:v]scale=h=-1:w=100[overlay_scaled],[0:v][overlay_scaled]overlay=eval=init:x=W-100-W*0.1:y=W*0.1,
(second filter: large text)
drawtext=fontfile=/system/fonts/Roboto-Regular.ttf:text='xbsg':fontcolor=white:fontsize=60:box=1:boxcolor=0x7FFFD4@0.5:boxborderw=20:x=20:y=h-(text_h*2)-(h*0.1):enable='between(t,0,2)',
(third filter: small text)
drawtext=fontfile=/system/fonts/Roboto-Regular.ttf:text='cbeh':fontcolor=white:fontsize=30:box=1:boxcolor=0x7FFFD4@0.5:boxborderw=20:x=20:y=h-text_h-(h*0.1)+25:enable='between(t,0,2)',
(fourth filter: brightness)
eq=contrast=1:brightness=0.26180276:saturation=1:gamma=1:gamma_r=1:gamma_g=1:gamma_b=1:gamma_weight=1
-c:a
copy
/storage/emulated/0/videoApp/temp/frameWithFilters.png

Right now I am trying to separate the filters using "," but I have also tried ";". It throws back:
Input #0, png_pipe, from '/storage/emulated/0/videoApp/temp/firstFrameOfMergedVideo.png':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: png, rgb24(pc), 1080x1920, 25 tbr, 25 tbn, 25 tbc
Input #1, png_pipe, from '/storage/emulated/0/videoApp/temp/logo.png':
Duration: N/A, bitrate: N/A
Stream #1:0: Video: png, rgba(pc), 528x582, 25 tbr, 25 tbn, 25 tbc
[NULL @ 0xf265d800] Unable to find a suitable output format for ','
,: Invalid argument
If I apply them individually, they work.
I am new to ffmpeg so any help would be appreciated.
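A guess at the cause, going only by the log above: ffmpeg parsed a bare "," as an output filename, which suggests the wrapper split the filter graph into separate arguments at whitespace. Two things would likely need to change: the whole graph must reach ffmpeg as one single -filter_complex argument (one element of the command array, with no stray spaces or newlines), and the chain that produces the named pad [overlay_scaled] must end with ";" rather than ",". A hedged sketch, with the enable= timeline options and the audio copy dropped since the input is a single image, and the eq options left at their defaults omitted:

-i /storage/emulated/0/videoApp/temp/firstFrameOfMergedVideo.png
-i /storage/emulated/0/videoApp/temp/logo.png
-filter_complex [1:v]scale=h=-1:w=100[overlay_scaled];[0:v][overlay_scaled]overlay=eval=init:x=W-100-W*0.1:y=W*0.1,drawtext=fontfile=/system/fonts/Roboto-Regular.ttf:text='xbsg':fontcolor=white:fontsize=60:box=1:boxcolor=0x7FFFD4@0.5:boxborderw=20:x=20:y=h-(text_h*2)-(h*0.1),drawtext=fontfile=/system/fonts/Roboto-Regular.ttf:text='cbeh':fontcolor=white:fontsize=30:box=1:boxcolor=0x7FFFD4@0.5:boxborderw=20:x=20:y=h-text_h-(h*0.1)+25,eq=brightness=0.26180276
/storage/emulated/0/videoApp/temp/frameWithFilters.png

The overlay, the two drawtext filters and eq all act on the same single stream, so commas are correct between them; a semicolon is only needed where the scaled-logo chain ends in the named label.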
-
Why are motion vectors extracted from B-frames unchanged?
19 August 2019, by 霍宇琦. I am writing C code to extract motion vectors from B-frames in the MPEG-4 (Part 2) compressed video format, but some motion vectors seem wrong.
I use a raw video clip with the extract_mvs.c example from ffmpeg 4.2. For example, if the frame sequence is IPBPPBPP..., I can get the side data for every frame. But when inspecting mv->src_x, mv->src_y, mv->dst_x and mv->dst_y, I find that all the src values equal the dst values for some individual frames. There must be something wrong, yet I changed very little from the official code.
// Modified from ffmpeg/doc/examples/extract_mvs.c:
while (getting_frames_one_by_one) {   // pseudocode: the usual demux/decode loop
    AVFrameSideData *sd;
    video_frame_count++;
    //printf("%d", video_frame_count);
    if (video_frame_count < 19) {
        // Print the picture type (I/B/P) before any vector row for this frame.
        if (frame->pict_type == AV_PICTURE_TYPE_I) printf("\nI");
        if (frame->pict_type == AV_PICTURE_TYPE_B) printf("B");
        if (frame->pict_type == AV_PICTURE_TYPE_P) printf("P");
        sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
        if (sd) {
            const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
            for (i = 0; i < sd->size / sizeof(*mvs); i++) {
                const AVMotionVector *mv = &mvs[i];
                // Print only the first vector whose integer displacement is non-zero.
                if (mv->dst_x - mv->src_x != 0 || mv->dst_y - mv->src_y != 0) {
                    printf("%d,%2d,%2d,%2d,%4d,%4d,%4d,%4d,0x%"PRIx64"\n",
                           video_frame_count, mv->source,
                           mv->w, mv->h, mv->src_x, mv->src_y,
                           mv->dst_x, mv->dst_y, mv->flags);
                    break;
                }
            }
        }
        av_frame_unref(frame);
    }
}

Outputs are:
Input #0, avi, from 'origin.avi':
Duration: 00:00:13.63, start: 0.000000, bitrate: 492 kb/s
Stream #0:0: Video: mpeg4 (DX50 / 0x30355844), yuv420p, 320x240 [SAR 1:1 DAR 4:3], 486 kb/s, 30 fps, 30 tbr, 30 tbn, 30k tbc
Metadata:
title : H:\HumanActionDB\MotionClips\hmdb51_30fps_wBrd_10off_divx\brush_??
framenum,source,blockw,blockh,srcx,srcy,dstx,dsty,flags
[mpeg4 @ 0x55739dc29580] Video uses a non-standard and wasteful way to store B-frames ('packed B-frames'). Consider using the mpeg4_unpack_bframes bitstream filter without encoding but stream copy to fix it.
IP2,-1,16,16, 137, 24, 136, 24,0x0
BP4,-1,16,16, 152, 57, 152, 56,0x0
P5,-1,16,16, 56, 155, 56, 152,0x0
BP7,-1,16,16, 151, 40, 152, 40,0x0
PB9,-1,16,16, 151, 40, 152, 40,0x0
P10,-1,16,16, 152, 39, 152, 40,0x0
P11,-1,16,16, 26, 55, 24, 56,0x0
B12,-1,16,16, 152, 39, 152, 40,0x0
P13,-1,16,16, 152, 39, 152, 40,0x0
P14,-1,16,16, 41, 168, 40, 168,0x0
B15,-1,16,16, 152, 39, 152, 40,0x0
P16,-1,16,16, 153, 39, 152, 40,0x0
PB18,-1,16,16, 168, 55, 168, 56,0x0
You can see that the third frame (B), the sixth and eighth frames (B, P) and the 17th frame (P) can be read, and the side data can be extracted from them, but for every vector in those frames:
mv->dst_x == mv->src_x && mv->dst_y == mv->src_y
Can someone help me? Thanks.
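One possible explanation, offered as a guess rather than a verified diagnosis: src_x/src_y and dst_x/dst_y in AVMotionVector are integer pixel positions, so any motion smaller than one pixel rounds away and leaves src equal to dst even though the block moved; since the code above skips every vector whose integer displacement is zero, frames whose motion is entirely sub-pel (3, 6, 8 and 17 here) print nothing at all. FFmpeg also exposes the raw sub-pel motion through the motion_x, motion_y and motion_scale fields of AVMotionVector (present in the 4.2 headers the asker is using), which can be printed instead of the rounded positions. A sketch of an extra line inside the existing loop:

/* Print the sub-pel displacement in pixels: motion_x is expressed in
 * units of 1/motion_scale pixel, so divide to recover the true value. */
printf("frame %d: motion=(%.3f, %.3f)\n",
       video_frame_count,
       (double)mv->motion_x / mv->motion_scale,
       (double)mv->motion_y / mv->motion_scale);

Separately, the "packed B-frames" warning in the log is worth taking seriously: remuxing the clip first with the suggested bitstream filter (ffmpeg -i origin.avi -c copy -bsf:v mpeg4_unpack_bframes fixed.avi) gives each B-frame its own packet and may change which frames carry motion-vector side data.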