
Media (1)
-
SPIP - plugins - embed code - Example
2 September 2013
Updated: September 2013
Language: French
Type: Image
Other articles (85)
-
Personalize by adding your logo, banner or background image
5 September 2013. Some themes take three customization elements into account: adding a logo; adding a banner; adding a background image.
-
Improving the base version
13 September 2013. Nicer multiple selection
The Chosen plugin improves the ergonomics of multiple-selection fields; see the two images that follow for a comparison.
To use it, simply activate the Chosen plugin (General site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to improve, for example select[multiple] for multiple-selection lists (...)
-
Writing a news item
21 June 2013. Present the changes in your MediaSPIP or the news of your projects on your MediaSPIP via the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: for a document of type news item, the default fields are: Publication date (customize the publication date) (...)
On other sites (11076)
-
How to find the offset by which each video must be delayed to sync them perfectly?
19 January 2023, by PirateApp

Let me explain my use case a bit here:

- There are 4 of us playing the same game.
- 3 of us record mkv with OBS Studio at 60 fps; the 4th records with some other tool at 30 fps.
- Each mission starts at a cutscene and ends with a cutscene.
- I would like to create a video like the image you see above, starting and ending at the same points, where the intermediate footage is basically what each player is doing in the game.
- Currently I follow a slightly complicated process to achieve this and was wondering if there is an easier way to do it.

My current process:
- Take a screenshot of the cutscene from one of the videos.
- Run a search for this screenshot inside the other videos using the command below:


ffmpeg \
  -i "video1.mkv" \
  -r 1 \
  -loop 1 \
  -i 1.png \
  -an -filter_complex "blend=difference:shortest=1,blackframe=90:32" \
  -f null -



- It gives me a result like this in each video:


[Parsed_blackframe_1 @ 0x600000c9c000] frame:263438 pblack:91 pts:4390633 t:4390.633000 type:P last_keyframe:263400
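
To avoid copying that timestamp by hand, the t: value can be scraped from ffmpeg's stderr. A minimal sketch in Python (find_cutscene_time is a made-up helper name and the file names are illustrative; the regex simply matches the blackframe log format shown above):

import re
import subprocess

def find_cutscene_time(video, reference_png):
    # Run the same blend + blackframe search as above and capture stderr
    cmd = ["ffmpeg", "-i", video, "-r", "1", "-loop", "1", "-i", reference_png,
           "-an", "-filter_complex", "blend=difference:shortest=1,blackframe=90:32",
           "-f", "null", "-"]
    result = subprocess.run(cmd, stderr=subprocess.PIPE, text=True)
    # Return the first timestamp where blackframe reports a match (pblack >= 90)
    for line in result.stderr.splitlines():
        m = re.search(r"Parsed_blackframe.*\bt:(\d+(?:\.\d+)?)", line)
        if m:
            return float(m.group(1))
    return None

print(find_cutscene_time("video1.mkv", "1.png"))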






- Use the start time from each of the results to create a split-screen video using the command below:


ffmpeg \
  -i first.mkv \
  -i second.mkv \
  -i third.mkv \
  -i fourth.mp4 \
  -filter_complex "
    nullsrc=size=640x360 [base];
    [0:v] trim=start=35.567,setpts=PTS-STARTPTS, scale=320x180 [upperleft];
    [1:v] trim=start=21.567,setpts=PTS-STARTPTS, scale=320x180 [upperright];
    [2:v] trim=start=41.233,setpts=PTS-STARTPTS, scale=320x180 [lowerleft];
    [3:v] trim=start=142.933333,setpts=PTS-STARTPTS, scale=320x180 [lowerright];
    [0:a] atrim=start=35.567,asetpts=PTS-STARTPTS [outa]; [base][upperleft] overlay=shortest=1 [tmp1];
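
The quoted filter graph is cut off above, and since only the four trim offsets change between runs, this step can also be generated by a script. A rough Python sketch of how such a command could be assembled from the measured offsets (file names, canvas size and the plain 2x2 overlay chain are illustrative, not necessarily the exact graph used above):

import subprocess

# (file, offset in seconds) pairs taken from the blackframe search
inputs = [("first.mkv", 35.567), ("second.mkv", 21.567),
          ("third.mkv", 41.233), ("fourth.mp4", 142.933333)]
# 2x2 layout: upper-left, upper-right, lower-left, lower-right
positions = [(0, 0), (320, 0), (0, 180), (320, 180)]

filters = ["nullsrc=size=640x360 [base]"]
for i, (_, offset) in enumerate(inputs):
    filters.append(f"[{i}:v] trim=start={offset},setpts=PTS-STARTPTS,scale=320x180 [v{i}]")

# Chain the overlays onto the base canvas
prev = "[base]"
for i, (x, y) in enumerate(positions):
    out = "[out]" if i == len(positions) - 1 else f"[tmp{i}]"
    filters.append(f"{prev}[v{i}] overlay=shortest=1:x={x}:y={y} {out}")
    prev = out

# Keep the audio of the first player, trimmed by the same offset
filters.append(f"[0:a] atrim=start={inputs[0][1]},asetpts=PTS-STARTPTS [outa]")

cmd = ["ffmpeg"]
for path, _ in inputs:
    cmd += ["-i", path]
cmd += ["-filter_complex", ";".join(filters),
        "-map", "[out]", "-map", "[outa]", "output.mp4"]
subprocess.run(cmd)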



- As you can see, it is a complex process and it depends heavily on which image I capture. Sometimes I find that things are still slightly off at the beginning or end because the images don't match 100%. My guess is that the frame rate is different for each video, not to mention that 3 of them are mkv inputs and one is an mp4 input.


- Is there a better way to get the offset by which each video should be shifted to sync them perfectly?


- The only way that I can think of is to take 1 video.
- Take a starting timestamp and an ending timestamp, say with a total duration of 30 s.
- Take the second video.
- Start from 0 to 30 s and compare the frames in both videos; compute a score.
- Start from 0.001 to 30.001 and compare the frames; compute a score.
- Start from 0.002 to 30.002 and compare the frames; compute a score.
- Basically, increment the second video by 0.001 s each time and find the offset with the best score (a rough sketch of this idea follows below).
- Any better way of doing this? I need to run this on hundreds if not thousands of videos.
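
A rough sketch of that sliding-window comparison using Python and OpenCV (file names, search range and scoring are illustrative; stepping at roughly the frame interval rather than every 1 ms keeps the search tractable, since the true offset cannot be finer than one frame):

import cv2
import numpy as np

def mean_frame_difference(ref_video, other_video, offset_s, duration_s=30.0, step_s=0.5):
    # Average absolute pixel difference between ref_video (from t=0) and
    # other_video (from t=offset_s), sampled every step_s seconds
    ref, other = cv2.VideoCapture(ref_video), cv2.VideoCapture(other_video)
    diffs = []
    t = 0.0
    while t < duration_s:
        ref.set(cv2.CAP_PROP_POS_MSEC, t * 1000)
        other.set(cv2.CAP_PROP_POS_MSEC, (offset_s + t) * 1000)
        ok1, frame1 = ref.read()
        ok2, frame2 = other.read()
        if not (ok1 and ok2):
            break
        # Compare small grayscale thumbnails to keep the scoring cheap
        small1 = cv2.resize(cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY), (160, 90))
        small2 = cv2.resize(cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY), (160, 90))
        diffs.append(np.mean(cv2.absdiff(small1, small2)))
        t += step_s
    ref.release()
    other.release()
    return np.mean(diffs) if diffs else float("inf")

# Coarse pass over the first two minutes, then a fine pass around the best hit
coarse = min(np.arange(0.0, 120.0, 0.5),
             key=lambda o: mean_frame_difference("first.mkv", "second.mkv", o))
fine = min(np.arange(max(coarse - 0.5, 0.0), coarse + 0.5, 1 / 60),
           key=lambda o: mean_frame_difference("first.mkv", "second.mkv", o))
print("estimated offset:", fine)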
























-
ffmpeg black screen issue for video generation from a list of frames
11 May 2023, by arlaine

I used a video to generate a list of frames from it, then I wanted to create multiple videos from this list of frames. I've set starting and ending frame indexes for each "sub video", for example

indexes = [[0, 64], [64, 110], [110, 234], [234, 449]]

and those indexes help my code generate 4 videos of various durations. The idea is to decompose the original video into multiple sub-videos. My code is working just fine and the videos are generated.

But every sub-video starts with multiple seconds of black screen; only the first generated video (the one using indexes[0] for its starting and ending frames) comes out without this black part. I've tried changing the frame rate for each sub_video according to the number of frames, and things like that, but it didn't work. You can find my code below.

for i, (start_idx, end_idx) in enumerate(self.video_frames_indexes):
    if end_idx - start_idx > 10:
        shape = cv2.imread(f'output/video_reconstitution/{video_name}/final/frame_{start_idx}.jpg').shape
        os.system(f'ffmpeg -r 30 -s {shape[0]}x{shape[1]} -i output/video_reconstitution/{video_name}/final/frame_%d.JPG'
                  f' -vf "select=between(n\,{start_idx}\,{end_idx})" -vcodec libx264 -crf 25'
                  f' output/video_reconstitution/IMG_7303/sub_videos/serrage_{i}.mp4')



Just the ffmpeg command


ffmpeg -r 30 -s {shape[0]}x{shape[1]} -i output/video_reconstitution/{video_name}/final/frame_%d.JPG -vf "select=between(n\,{start_idx}\,{end_idx})" -vcodec libx264 -crf 25 output/video_reconstitution/IMG_7303/sub_videos/serrage_{i}.mp4
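
For what it's worth, select drops the unwanted frames but keeps the original timestamps of the frames it passes through, which is a common cause of a long black lead-in on every segment except the first. A sketch of the same call with the timestamps reset after the select (it reuses the variables from the loop above, and -s is left out on the assumption that the extracted frames already have their native size):

os.system(f'ffmpeg -r 30 -i output/video_reconstitution/{video_name}/final/frame_%d.JPG'
          f' -vf "select=between(n\,{start_idx}\,{end_idx}),setpts=PTS-STARTPTS"'
          f' -vcodec libx264 -crf 25'
          f' output/video_reconstitution/IMG_7303/sub_videos/serrage_{i}.mp4')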



-
How to make an automated video of a changing webpage
21 March 2023, by jonas wouters

I'm currently working on a project where I need to make a recording of a webpage without opening a visible browser (headless browser).
The file I'm working on is stored locally on my machine and is generated with a Python script. It's generated because it will be different for every user and will be deleted after the recording is made.

I'm currently stuck trying to make a recording of a webpage.
Does somebody know how I can record a webpage?


Currently I'm doing this:


# Make a video
def create_video(duration, name):
    # Path of the HTML file you want to record
    html_file_path = os.path.join(py_dir_path, 'templates/video/', f'{name}.html')
    width = 1080
    height = 1920
    framerate = 30

    options = Options()
    options.headless = True
    options.add_experimental_option('mobileEmulation', {'deviceName': 'iPhone SE'})
    driver = webdriver.Chrome(options=options)
    driver.get(f'file://{html_file_path}')
    print(driver)
    outputName = f'{name}.mp4'

    cmd = f'ffmpeg -y -f xcbgrab -s {width}x{height} -framerate {framerate} -i :0.0+0,0 -f alsa -i default -vcodec libx264 -pix_fmt yuv420p -preset ultrafast {outputName}'
    p = subprocess.Popen(cmd, shell=True)

    import time
    time.sleep(duration)

    p.kill()



This code launches a headless browser and plays the website, but the recording part is not working for me.

I already had working code, but it was based on taking screenshots of the webpage and then pasting these screenshots after each other; that code was difficult to read and, worst of all, very slow.


Working bad code


# Make a video
def create_video(duration, name, timesFaster):
    # Path of the HTML file you want to record
    html_file_path = os.path.join(py_dir_path, 'templates/video/', f'{name}.html')
    # Use function create_driver to create a driver and use this driver
    try:
        # Make a chrome window with size --> width: 390px, height: 850px
        options = webdriver.ChromeOptions()
        options.add_argument("--headless")
        options.add_experimental_option('mobileEmulation', {'deviceName': 'iPhone SE'})
        driver = webdriver.Chrome(options=options)
        driver.get(f'file://{html_file_path}')

        # Use function capture_screenshots to take screenshots for
        capture_screenshots(driver, int(duration), name, timesFaster)
    finally:
        driver.quit

# Make as many screens as possible in ... amount of time (... = animation_duration)
def capture_screenshots(driver, animation_duration, name, timesFaster):
    screenshots = []
    # Calculate the ending time
    end_time = time.time() + animation_duration
    # Keeps track of amount of screenshots
    index = 1

    try:
        # Take as many screenshots as possible until the current time exceeds the end_time
        while time.time() < end_time:
            # Each time a new filename (so it does not overwrite)
            screenshot_file_name = f'capture{index}.png'
            # Save the screenshot on device
            driver.save_screenshot(screenshot_file_name)
            # Append the screenshot in screenshots ([])
            screenshots.append(screenshot_file_name)
            index += 1

        # Calculate the FPS
        fps = (len(screenshots) / animation_duration) * timesFaster
        print("sec: ", animation_duration / timesFaster)
        print("fps: ", fps)
        # Make the video with the FPS calculated above
        clip = ImageSequenceClip(screenshots, fps=fps)
        # File name of the result (video)
        output_file_path = os.path.join(mp4_dir_path, f"part/{name}.mp4")
        # Write the videoFile to the system
        clip.write_videofile(output_file_path, codec='libx264', bitrate="2M")
    finally:
        # Delete all screenshots
        for screenshot in screenshots:
            try:
                os.remove(screenshot)
            except:
                pass



At the moment it's not that important to me that it's a local file; if I could record a live webpage (for example https://jonaswouters.sinners.be/3d-animatie/), that would be equally helpful.

Thanks in advance,
Jonas