
Other articles (54)
-
Customising by adding your logo, banner or background image
5 September 2013. Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013. Present changes to your MediaSPIP, or news about your projects, in the news section.
In MediaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
You can customise the news-item creation form.
News-item creation form: for a document of type news, the default fields are: publication date (customise the publication date) (...)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or later. If in doubt, contact your MediaSPIP administrator to find out.
Sur d’autres sites (8437)
-
`ffmpeg -f concat` doesn't work even though all input streams appear to have the same spec
9 March 2023, by Roy
My `ffmpeg` command:

ffmpeg -safe 0 -f concat -i list.txt -c copy out.mp4
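The list.txt itself isn't shown in the question; presumably it is the usual concat-demuxer manifest, something along these lines (file names guessed from the two inputs below):

file 'a.mp4'
file 'b.mp4'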



My 1st input file:


Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'D:\Applications\ffmpeg_6.0_full\a.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf60.3.100
 Duration: 00:00:04.97, start: 0.000000, bitrate: 40 kb/s
 Stream #0:0[0x1](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 2 kb/s (default)
 Metadata:
 handler_name : SoundHandler
 vendor_id : [0][0][0][0]
 Stream #0:1[0x2](und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 27 kb/s, 30 fps, 30 tbr, 30k tbn (default)
 Metadata:
 handler_name : VideoHandler
 vendor_id : [0][0][0][0]
 encoder : Lavc60.3.100 libx264



My 2nd input file:


Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'D:\Applications\ffmpeg_6.0_full\b.mp4':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: mp41isom
 creation_time : 2023-03-08T06:47:13.000000Z
 artist : Microsoft Game DVR
 title : PUBG: BATTLEGROUNDS
 Duration: 00:10:00.16, start: 0.000000, bitrate: 20885 kb/s
 Stream #0:0[0x1](und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 20739 kb/s, 30 fps, 30 tbr, 30k tbn (default)
 Metadata:
 creation_time : 2023-03-08T06:47:13.000000Z
 handler_name : VideoHandler
 vendor_id : [0][0][0][0]
 encoder : AVC Coding
 Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 131 kb/s (default)
 Metadata:
 creation_time : 2023-03-08T06:47:13.000000Z
 handler_name : SoundHandler
 vendor_id : [0][0][0][0]



The above command outputs a number of warnings:


[mov,mp4,m4a,3gp,3g2,mj2 @ 0000025239902d40] Auto-inserting h264_mp4toannexb bitstream filter
[mp4 @ 00000252396fe5c0] Non-monotonous DTS in output stream 0:1; previous: 218112, current: 150024; changing to 218113. This may result in incorrect timestamps in the output file.
...
a lot of them
...
frame=25992 fps=21754 q=-1.0 Lsize= 1519621kB time=00:14:49.39 bitrate=13996.8kbits/s speed= 744x
video:9649kB audio:1519216kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown



The resulting video plays the first part correctly; after that, video players either skip directly to the end of the video (MPC-HC) or don't render anything at all while the timer advances normally (VLC).


My understanding of concat is that it requires all videos to have the same spec, which I think my inputs satisfy (all the "Stream #0:0", etc. lines match). I only see the following differences, which I assumed should be okay:


- Metadata differ, both for the whole input (e.g. "major_brand") and for each stream (e.g. "encoder"). I assumed that metadata wouldn't affect the processing.
- The order of the video/audio streams differs between the two inputs: the 1st input file has audio then video; the 2nd has video then audio. I assumed ffmpeg knows the difference and won't concatenate a video stream onto an audio stream (but see the remux sketch after the pastebin link below).





The full output of the command can be found in this pastebin: https://pastebin.com/Z5q97Uyg
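If the stream-order difference does turn out to matter (the concat demuxer appears to match streams by index rather than by type, which would also explain why the muxing summary above reports roughly 1.5 GB of "audio" and only about 9 MB of video), one way to test that theory is to remux the second file so its streams come in the same order as the first before concatenating. b_reordered.mp4 is only a hypothetical output name:

ffmpeg -i b.mp4 -map 0:a -map 0:v -c copy b_reordered.mp4

Then point the second entry of list.txt at b_reordered.mp4 and re-run the concat command.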


-
How to find the offset by which each video must be delayed to sync them perfectly?
19 January 2023, by PirateApp

Let me explain my use case a bit here


- There are 4 of us playing the same game
- 3 of us record mkv using OBS Studio at 60 fps; the 4th records with some other tool at 30 fps
- Each mission starts with a cutscene and ends with a cutscene
- I would like to create a video like the image you see above, starting and ending at the same points, with the intermediate footage showing what each player is doing in the game
- Currently I follow a slightly complicated process to achieve this and was wondering whether there is an easier way
- My current process:
- Take a screenshot from one of the videos of the cutscene


Run a search for this screenshot inside the other videos using the command below (blend=difference makes matching frames come out nearly black, and blackframe=90:32 reports frames where at least 90% of the pixels fall below a luma of 32):


ffmpeg \
 -i "video1.mkv" \
 -r 1 -loop 1 -i 1.png \
 -an -filter_complex "blend=difference:shortest=1,blackframe=90:32" \
 -f null -



It gives me a result like this in each video:


[Parsed_blackframe_1 @ 0x600000c9c000] frame:263438 pblack:91 pts:4390633 t:4390.633000 type:P last_keyframe:263400






Use the start time from each of the results to create a split screen video using the command below


ffmpeg 
 -i first.mkv
 -i second.mkv
 -i third.mkv
 -i fourth.mp4
 -filter_complex " 
 nullsrc=size=640x360 [base];
 [0:v] trim=start=35.567,setpts=PTS-STARTPTS, scale=320x180 [upperleft]; 
 [1:v] trim=start=21.567,setpts=PTS-STARTPTS, scale=320x180 [upperright];
 [2:v] trim=start=41.233,setpts=PTS-STARTPTS, scale=320x180 [lowerleft]; 
 [3:v] trim=start=142.933333,setpts=PTS-STARTPTS, scale=320x180 [lowerright];
 [0:a] atrim=start=35.567,asetpts=PTS-STARTPTS [outa]; [base][upperleft] overlay=shortest=1 [tmp1];



- As you can see, it is a complex process and depends a lot on which image I capture. Sometimes I find that things are still slightly off at the beginning or end because the images don't match 100%. My guess is that the frame rate is different for each video, not to mention that 3 of them are mkv inputs and one is an mp4 input
- Is there a better way to get the offset by which each video should be shifted to sync them perfectly?
- The only way that I can think of is to take 1 video
- Take a starting timestamp and an ending timestamp, say with a total duration of 30 s
- Take the second video
- Start from 0 to 30 s and compare the frames in both videos, set a score
- Start from 0.001 to 30.001 and compare the frames, set a score
- Start from 0.002 to 30.002 and compare the frames, set a score
- Basically increment the second video by 0.001 s each time and find the offset with the highest score (a rough sketch of this idea follows this list)
- Any better way of doing this? I need to run this on 100s if not 1000s of videos
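A rough sketch of that brute-force idea (untested; the filenames, the 0.1 s step and the 30 s window are placeholders): slide the second video against the first and score each candidate offset with ffmpeg's ssim filter, keeping the offset with the highest "All:" score. Normalising both inputs to 30 fps and a small common size sidesteps the 60 fps vs 30 fps mismatch.

for off in $(seq 0 0.1 5); do
  # compare a 30 s window of video1 against the same window of video2 shifted by $off seconds
  score=$(ffmpeg -ss 0 -t 30 -i video1.mkv -ss "$off" -t 30 -i video2.mkv \
    -filter_complex "[0:v]fps=30,scale=320:180[a];[1:v]fps=30,scale=320:180[b];[a][b]ssim" \
    -f null - 2>&1 | grep -o 'All:[0-9.]*' | tail -1)
  echo "offset=$off $score"
done | sort -t: -k2 -n | tail -1   # highest SSIM = best-aligned offset

For hundreds or thousands of videos, cross-correlating the audio tracks would probably be much faster than comparing frames, but the search idea is the same.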

-
How do I sync 4 videos in a grid to play the same frame at the same time?
28 December 2022, by PirateApp

- 4 of us have recorded ourselves playing a game and want to create a 4 x 4 video grid
- The game has cutscenes at the beginning followed by each person having their unique part for the rest of the video
- I am looking to synchronize the grid such that it starts at the same place in the cutscene for everyone
- Kindly take a look at what is happening currently. The cutscene is off by a few seconds for everyone
- Imagine a time offset a, b, c, d such that when I add this offset to each video, the entire video grid will be in sync
- How do I find these a, b, c, d and, more importantly, how do I add them in filter_complex?


I used the ffmpeg command below to generate a 4 x 4 video grid and it seems to work


ffmpeg \
 -i nano_prologue.mkv -i macko_nimble_guardian.mkv -i nano_nimble_guardian.mkv -i ghost_nimble_guardian_subtle_arrow_1.mp4 \
 -filter_complex "
 nullsrc=size=1920x1080 [base];
 [0:v] setpts=PTS-STARTPTS, scale=960x540 [upperleft];
 [1:v] setpts=PTS-STARTPTS, scale=960x540 [upperright];
 [2:v] setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
 [3:v] setpts=PTS-STARTPTS, scale=960x540 [lowerright];
 [base][upperleft] overlay=shortest=1 [tmp1];
 [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
 [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
 [tmp3][lowerright] overlay=shortest=1:x=960:y=540
 " \
 -c:v libx264 output.mkv



My problem though is that since each of us starts recording at slightly different times, the cutscenes are out of sync


As per the screenshot below, you can see that each video has the same scene starting at a slightly different time.


Is there a way to find where the same frame starts in all videos and then sync each video to start from that frame, or 20 seconds before that frame?




UPDATE 1


I have figured out the offset for each video with millisecond precision using the following technique.


Take a screenshot of the first video at a particular point in the cutscene, save the image as a PNG, and run the command below against the remaining 3 videos to find out where this screenshot appears in each of them:


ffmpeg -i "video2.mp4" -r 1 -loop 1 -i screenshot.png -an -filter_complex "blend=difference:shortest=1,blackframe=90:32" -f null -



Use the command above to search for the offset in every video for that cutscene
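(That is just the same command repeated for each file; a throwaway loop along these lines would do, with video2.mp4, video3.mkv and video4.mkv standing in for the real recordings.)

for f in video2.mp4 video3.mkv video4.mkv; do
  echo "== $f =="
  ffmpeg -i "$f" -r 1 -loop 1 -i screenshot.png -an \
    -filter_complex "blend=difference:shortest=1,blackframe=90:32" \
    -f null - 2>&1 | grep blackframe
done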


It gave me this


VIDEO 3 OFFSET


[Parsed_blackframe_1 @ 0x600003af00b0] frame:3144 pblack:92 pts:804861 t:52.399805 type:P last_keyframe:3120

[Parsed_blackframe_1 @ 0x600003af00b0] frame:3145 pblack:96 pts:805117 t:52.416471 type:P last_keyframe:3120



VIDEO 2 OFFSET


[Parsed_blackframe_1 @ 0x6000014dc0b0] frame:3629 pblack:91 pts:60483 t:60.483000 type:P last_keyframe:3500



VIDEO 4 OFFSET


[Parsed_blackframe_1 @ 0x600002f84160] frame:2885 pblack:93 pts:48083 t:48.083000 type:P last_keyframe:2880

[Parsed_blackframe_1 @ 0x600002f84160] frame:2886 pblack:96 pts:48100 t:48.100000 type:P last_keyframe:2880



Now how do I use filter_complex to start each video at either the frame above or the timestamp above? I would like to include, say, 10 seconds before the above frame in each video so that it starts from the beginning.


UPDATE 2


This command currently gives me a 100% synced video. How do I make it start 15 seconds before the specified frame numbers, and how do I make it use the audio track from video 2 instead? (One possible shape for both tweaks is sketched after the command.)


ffmpeg \
 -i v_nimble_guardian.mkv -i macko_nimble_guardian.mkv -i ghost_nimble_guardian_subtle_arrow_1.mp4 -i nano_nimble_guardian.mkv \
 -filter_complex "
 nullsrc=size=1920x1080 [base];
 [0:v] trim=start_pts=49117,setpts=PTS-STARTPTS, scale=960x540 [upperleft];
 [1:v] trim=start_pts=50483,setpts=PTS-STARTPTS, scale=960x540 [upperright];
 [2:v] trim=start_pts=795117,setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
 [3:v] trim=start_pts=38100,setpts=PTS-STARTPTS, scale=960x540 [lowerright];
 [base][upperleft] overlay=shortest=1 [tmp1];
 [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
 [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
 [tmp3][lowerright] overlay=shortest=1:x=960:y=540
 " \
 -c:v libx264 output.mkv
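One possible shape for both tweaks (a sketch only: the trim values below are made-up stand-ins for "blackframe t: minus 15 seconds", and [1:a] assumes video 2 is the second input): switch from trim=start_pts to trim=start so the offsets are plain seconds and the per-input time base stops mattering, give the last overlay an output label, add a matching atrim chain for the audio you want to keep, and map both.

ffmpeg \
 -i v_nimble_guardian.mkv -i macko_nimble_guardian.mkv -i ghost_nimble_guardian_subtle_arrow_1.mp4 -i nano_nimble_guardian.mkv \
 -filter_complex "
 nullsrc=size=1920x1080 [base];
 [0:v] trim=start=34.117,setpts=PTS-STARTPTS, scale=960x540 [upperleft];
 [1:v] trim=start=35.483,setpts=PTS-STARTPTS, scale=960x540 [upperright];
 [2:v] trim=start=37.400,setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
 [3:v] trim=start=33.083,setpts=PTS-STARTPTS, scale=960x540 [lowerright];
 [1:a] atrim=start=35.483,asetpts=PTS-STARTPTS [outa];
 [base][upperleft] overlay=shortest=1 [tmp1];
 [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
 [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
 [tmp3][lowerright] overlay=shortest=1:x=960:y=540 [vout]
 " \
 -map "[vout]" -map "[outa]" -c:v libx264 -c:a aac output.mkv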