
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (57)
-
What is an editorial?
21 June 2013, by
Write your point of view in an article. It will be filed in a section set aside for this purpose.
An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. A single editorial is featured on the home page. To read previous ones, see the dedicated section.
You can customise the editorial creation form.
Editorial creation form: In the case of a document of the editorial type, the (...) -
Managing rights to create and edit objects
8 February 2011, by
By default, many features are restricted to administrators but can be configured independently to change the minimum status required to use them, in particular: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;
-
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (8614)
-
ffmpeg error "Buffer queue overflow, dropping" when merging two videos with delay
20 September 2016, by Stefan Urbansky
I want to merge two videos (for example, the iPhone video from https://peach.blender.org/trailer-page/). The videos are placed on a background image with the overlay filter, and the second video starts 3 seconds later.
I also need the audio to be mixed.
Here is my code:
ffmpeg \
-loop 1 -i background.png \
-itsoffset 0 -i trailer_iphone.m4v \
-itsoffset 3 -i trailer_iphone.m4v \
\
-y \
-t 36 \
-filter_complex "
[2:a] adelay=3000 [2delayed];
[1:a][2delayed] amerge=inputs=2 [audio];
[0][1:v] overlay=10:10:enable='between(t,0,33)' [lv1];
[lv1][2:v] overlay=10:300:enable='between(t,0,36)' [video]
" \
\
-threads 0 \
-map "[video]" -map "[audio]" \
-vcodec libx264 -acodec aac \
merged-video.mp4

I get the error message:
[Parsed_overlay_3 @ 0x7fe892502ac0] [framesync @ 0x7fe892502b88] Buffer queue overflow, dropping.
And the merged video has many dropped frames.
I know there are some other posts with this error message, but the suggested solutions don't work for me.
How can I fix the problem?
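One direction that often helps with this kind of overflow, sketched from the command above rather than verified against these files: give the looped still image an explicit frame rate so it cannot flood the overlay's input queue, reset its timestamps, and delay both audio channels in adelay. The 30 fps rate and the 3000 ms delay are assumptions carried over from the question.
ffmpeg \
-framerate 30 -loop 1 -i background.png \
-itsoffset 0 -i trailer_iphone.m4v \
-itsoffset 3 -i trailer_iphone.m4v \
\
-y \
-t 36 \
-filter_complex "
[0:v] fps=30,setpts=PTS-STARTPTS [bg];
[2:a] adelay=3000|3000 [2delayed];
[1:a][2delayed] amerge=inputs=2 [audio];
[bg][1:v] overlay=10:10:enable='between(t,0,33)' [lv1];
[lv1][2:v] overlay=10:300:enable='between(t,0,36)' [video]
" \
\
-threads 0 \
-map "[video]" -map "[audio]" \
-vcodec libx264 -acodec aac \
merged-video.mp4
If frames still pile up, forcing a common output rate with -r 30 is another knob to try.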
-
CVOpenGLESTextureCacheCreateTextureFromImage from uint8_t buffer
6 November 2015, by resident_
I'm developing a video player for iPhone. I'm using the ffmpeg libraries to decode video frames, and I'm using OpenGL ES 2.0 to render the frames to the screen.
But my render method is very slow.
A user told me:
iOS 5 includes a new way to do this fast. The trick is to use AVFoundation and link a Core Video pixel buffer directly to an OpenGL texture.
My problem now is that my video player passes a uint8_t* buffer to the render method, which I then use with glTexSubImage2D.
But if I want to use CVOpenGLESTextureCacheCreateTextureFromImage, I need a CVImageBufferRef containing the frame.
The question is: how can I create a CVImageBufferRef from a uint8_t buffer?
This is my render method:
- (void) render: (uint8_t*) buffer
{
    NSLog(@"render");

    [EAGLContext setCurrentContext:context];

    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
    glViewport(0, 0, backingWidth, backingHeight);

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // OpenGL loads textures lazily so accessing the buffer is deferred until draw; notify
    // the movie player that we're done with the texture after glDrawArrays.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, mFrameW, mFrameH, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, buffer);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    [moviePlayerDelegate bufferDone];

    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}

Thanks,
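One way to approach this, sketched under assumptions rather than tested against this project: Core Video's CVPixelBufferCreateWithBytes can wrap an existing byte buffer in a CVPixelBufferRef, and a CVPixelBufferRef can be passed wherever a CVImageBufferRef is expected. The pixel format, dimensions and bytes-per-row below are guesses derived from the glTexSubImage2D call above (16-bit RGB 565, tightly packed rows); the function name is illustrative.
#import <CoreVideo/CoreVideo.h>

// Sketch: wrap an existing uint8_t* frame in a CVPixelBufferRef (no copy).
// Assumes 16-bit RGB 565 pixels, mFrameW x mFrameH, rows packed without padding.
static CVPixelBufferRef PixelBufferFromBytes(uint8_t *buffer, size_t width, size_t height)
{
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreateWithBytes(
        kCFAllocatorDefault,
        width,
        height,
        kCVPixelFormatType_16LE565,  // assumption: matches GL_UNSIGNED_SHORT_5_6_5
        buffer,
        width * 2,                   // bytes per row for 16-bit pixels
        NULL,                        // release callback: caller keeps ownership of buffer
        NULL,                        // release context
        NULL,                        // pixel buffer attributes
        &pixelBuffer);
    return (result == kCVReturnSuccess) ? pixelBuffer : NULL;
}
Note that buffers created this way are not IOSurface-backed, so the OpenGL ES texture cache may still end up copying; a common alternative is to allocate BGRA pixel buffers (kCVPixelFormatType_32BGRA) from a CVPixelBufferPool and copy each decoded frame into one of them.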
-
How to make an automated video of a changing webpage
21 March 2023, by jonas wouters
I'm currently working on a project where I need to make a recording of a webpage without opening the browser (a headless browser).
The file I'm working on is stored locally on my machine and is generated with a Python script. It's generated because it will be different for every user and will be deleted after the recording is made.


I'm currently stuck trying to make a recording of a webpage.
Does somebody know how I can record a webpage?


Currently I'm doing this:


# Make a video
def create_video(duration, name):
    # Path of the HTML file you want to record
    html_file_path = os.path.join(py_dir_path, 'templates/video/', f'{name}.html')
    width = 1080
    height = 1920
    framerate = 30

    options = Options()
    options.headless = True
    options.add_experimental_option('mobileEmulation', {'deviceName': 'iPhone SE'})
    driver = webdriver.Chrome(options=options)
    driver.get(f'file://{html_file_path}')
    print(driver)
    outputName = f'{name}.mp4'

    cmd = f'ffmpeg -y -f xcbgrab -s {width}x{height} -framerate {framerate} -i :0.0+0,0 -f alsa -i default -vcodec libx264 -pix_fmt yuv420p -preset ultrafast {outputName}'
    p = subprocess.Popen(cmd, shell=True)

    import time
    time.sleep(duration)

    p.kill()

This code starts a headless browser and loads the website, but the recording process is not working for me.


I already had working code, but it was based on taking screenshots of the webpage and then stitching the screenshots together. That code was difficult to read and, worst of all, very slow.


Working (but bad) code:


# Make a video
def create_video(duration, name, timesFaster):
    # Path of the HTML file you want to record
    html_file_path = os.path.join(py_dir_path, 'templates/video/', f'{name}.html')
    # Use function create_driver to create a driver and use this driver
    try:
        # Make a chrome window with size --> width: 390px, height: 850px
        options = webdriver.ChromeOptions()
        options.add_argument("--headless")
        options.add_experimental_option('mobileEmulation', {'deviceName': 'iPhone SE'})
        driver = webdriver.Chrome(options=options)
        driver.get(f'file://{html_file_path}')

        # Use function capture_screenshots to take screenshots for the duration
        capture_screenshots(driver, int(duration), name, timesFaster)
    finally:
        driver.quit()

# Make as many screens as possible in ... amount of time (... = animation_duration)
def capture_screenshots(driver, animation_duration, name, timesFaster):
    screenshots = []
    # Calculate the ending time
    end_time = time.time() + animation_duration
    # Keeps track of amount of screenshots
    index = 1

    try:
        # Take as many screenshots as possible until the current time exceeds the end_time
        while time.time() < end_time:
            # Each time a new filename (so it does not overwrite)
            screenshot_file_name = f'capture{index}.png'
            # Save the screenshot on device
            driver.save_screenshot(screenshot_file_name)
            # Append the screenshot in screenshots ([])
            screenshots.append(screenshot_file_name)
            index += 1

        # Calculate the FPS
        fps = (len(screenshots) / animation_duration) * timesFaster
        print("sec: ", animation_duration / timesFaster)
        print("fps: ", fps)
        # Make the video with the FPS calculated above
        clip = ImageSequenceClip(screenshots, fps=fps)
        # File name of the result (video)
        output_file_path = os.path.join(mp4_dir_path, f"part/{name}.mp4")
        # Write the video file to the system
        clip.write_videofile(output_file_path, codec='libx264', bitrate="2M")
    finally:
        # Delete all screenshots
        for screenshot in screenshots:
            try:
                os.remove(screenshot)
            except:
                pass

At the moment it's not that important that the file is local; if I were able to record any webpage (for example https://jonaswouters.sinners.be/3d-animatie/), that would be equally helpful.


Thanks in advance
Jonas
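One possible direction, sketched under explicit assumptions (Linux, Xvfb installed, the pyvirtualdisplay and selenium packages, and an ffmpeg build with x11grab support): instead of running Chrome headless, run a normal Chrome window inside a virtual X display and point ffmpeg's x11grab at that display. The 'iPhone SE' emulation, page size and ultrafast preset are carried over from the question; the function and variable names are illustrative only.
# Sketch: record a page by drawing it on a virtual X display (Xvfb) and
# capturing that display with ffmpeg's x11grab input device.
# Assumes: Linux, Xvfb, pyvirtualdisplay, selenium, ffmpeg with x11grab.
import os
import subprocess
import time

from pyvirtualdisplay import Display
from selenium import webdriver


def record_page(html_file_path, output_name, duration, width=1080, height=1920, framerate=30):
    display = Display(visible=0, size=(width, height))
    display.start()  # exports DISPLAY so Chrome and ffmpeg use the virtual screen
    try:
        options = webdriver.ChromeOptions()
        # No --headless here: the page has to be actually drawn on the virtual display.
        options.add_experimental_option('mobileEmulation', {'deviceName': 'iPhone SE'})
        driver = webdriver.Chrome(options=options)
        driver.get(f'file://{html_file_path}')

        cmd = [
            'ffmpeg', '-y',
            '-f', 'x11grab',
            '-video_size', f'{width}x{height}',
            '-framerate', str(framerate),
            '-i', os.environ['DISPLAY'],
            '-vcodec', 'libx264', '-pix_fmt', 'yuv420p',
            '-preset', 'ultrafast',
            output_name,
        ]
        recorder = subprocess.Popen(cmd)
        time.sleep(duration)
        recorder.terminate()  # SIGTERM lets ffmpeg finish writing the file
        recorder.wait()
        driver.quit()
    finally:
        display.stop()
An alternative that avoids X entirely is Chrome's DevTools screencast (Page.startScreencast over the DevTools protocol), but that needs more plumbing than the Xvfb route.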