Other articles (111)

  • Accepted formats

    28 January 2010, by

    The following commands display information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step, we (...)
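    As an illustration of the commands above, the list of supported codecs can also be checked programmatically. This is a sketch only, not part of MediaSPIP: the parser assumes the current layout of `ffmpeg -codecs` output, where each codec line starts with a six-character flag field.

```python
import re
import subprocess

def parse_codecs(text):
    """Map codec names to descriptions from `ffmpeg -codecs` output."""
    codecs = {}
    for line in text.splitlines():
        # Codec lines look like " DEV.LS h264    H.264 / AVC / MPEG-4 AVC ..."
        m = re.match(r'^ ([DEVASIL.]{6}) +(\S+) +(.+)$', line)
        if m and m.group(2) != '=':  # skip the legend lines at the top
            codecs[m.group(2)] = m.group(3)
    return codecs

def supported_codecs(ffmpeg='ffmpeg'):
    """Ask the local ffmpeg installation which codecs it supports."""
    out = subprocess.run([ffmpeg, '-codecs'], capture_output=True,
                        text=True, check=True).stdout
    return parse_codecs(out)
```

    With this helper, checking whether the local installation can decode h264 is a simple dictionary lookup on the result of `supported_codecs()`.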

  • HTML5 audio and video support

    13 avril 2011, par

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
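    For illustration, the two extra actions map naturally onto ffmpeg's command-line tools. This is a sketch only; the helper names are made up and this is not SPIPMotion's actual code.

```python
def probe_cmd(source):
    # Retrieve the technical information of the file's audio and video streams
    return ['ffprobe', '-v', 'quiet', '-print_format', 'json',
            '-show_format', '-show_streams', source]

def thumbnail_cmd(source, out_png, at_seconds=1):
    # Generate a thumbnail by extracting a single frame from the video
    return ['ffmpeg', '-y', '-ss', str(at_seconds), '-i', source,
            '-frames:v', '1', out_png]
```

    Both commands would typically be run with subprocess.run() when the document is attached to the article; ffprobe's JSON output carries the stream information, and the single extracted frame serves as the thumbnail.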

On other sites (7293)

  • How to make an automated video of changing webpage

    21 March 2023, by jonas wouters

    I'm currently working on a project where I need to record a webpage without opening a visible browser (a headless browser). The file I'm working on is stored locally on my machine and is generated by a Python script; it's generated because it will be different for every user and will be deleted after the recording is made.

    I'm currently stuck trying to record the webpage. Does anybody know how I can record a webpage?

    This is what I'm currently doing:

# Make a video
import os
import shlex
import subprocess
import time

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def create_video(duration, name):
    # Path of the HTML file you want to record
    html_file_path = os.path.join(py_dir_path, 'templates/video/', f'{name}.html')
    width = 1080
    height = 1920
    framerate = 30

    options = Options()
    options.add_argument('--headless')
    options.add_experimental_option('mobileEmulation', {'deviceName': 'iPhone SE'})
    driver = webdriver.Chrome(options=options)
    driver.get(f'file://{html_file_path}')
    output_name = f'{name}.mp4'

    # ffmpeg's screen-capture device is called x11grab, not xcbgrab
    cmd = (f'ffmpeg -y -f x11grab -s {width}x{height} -framerate {framerate} '
           f'-i :0.0+0,0 -f alsa -i default -vcodec libx264 -pix_fmt yuv420p '
           f'-preset ultrafast {output_name}')
    # Run ffmpeg directly (no shell) so the signal below reaches ffmpeg itself
    p = subprocess.Popen(shlex.split(cmd))

    time.sleep(duration)

    # SIGTERM (rather than SIGKILL) lets ffmpeg finalize the output file
    p.terminate()
    driver.quit()


    This code starts a headless browser and loads the website, but the recording step is not working for me.

    I already had working code, but it was based on taking screenshots of the webpage and then stitching them together one after another; that code was difficult to read and, worst of all, very slow.

    The working (but bad) code:

# Make a video
import os
import time

from selenium import webdriver
from moviepy.editor import ImageSequenceClip

def create_video(duration, name, timesFaster):
    # Path of the HTML file you want to record
    html_file_path = os.path.join(py_dir_path, 'templates/video/', f'{name}.html')
    try:
        # Make a headless Chrome window emulating an iPhone SE
        options = webdriver.ChromeOptions()
        options.add_argument("--headless")
        options.add_experimental_option('mobileEmulation', {'deviceName': 'iPhone SE'})
        driver = webdriver.Chrome(options=options)
        driver.get(f'file://{html_file_path}')

        # Use capture_screenshots to take screenshots for the animation's duration
        capture_screenshots(driver, int(duration), name, timesFaster)
    finally:
        driver.quit()  # was driver.quit, which never actually called the method

# Take as many screenshots as possible in ... amount of time (... = animation_duration)
def capture_screenshots(driver, animation_duration, name, timesFaster):
    screenshots = []
    # Calculate the ending time
    end_time = time.time() + animation_duration
    # Keeps track of the number of screenshots
    index = 1

    try:
        # Take as many screenshots as possible until the current time exceeds end_time
        while time.time() < end_time:
            # A new filename each time (so screenshots are not overwritten)
            screenshot_file_name = f'capture{index}.png'
            # Save the screenshot to disk
            driver.save_screenshot(screenshot_file_name)
            # Append the screenshot to the screenshots list
            screenshots.append(screenshot_file_name)
            index += 1

        # Calculate the FPS from the number of screenshots actually taken
        fps = (len(screenshots) / animation_duration) * timesFaster
        print("sec: ", animation_duration / timesFaster)
        print("fps: ", fps)
        # Make the video with the FPS calculated above
        clip = ImageSequenceClip(screenshots, fps=fps)
        # File name of the result (video)
        output_file_path = os.path.join(mp4_dir_path, f"part/{name}.mp4")
        # Write the video file to disk
        clip.write_videofile(output_file_path, codec='libx264', bitrate="2M")
    finally:
        # Delete all screenshots
        for screenshot in screenshots:
            try:
                os.remove(screenshot)
            except OSError:
                pass


    At the moment it's not that important to me that it's a local file; being able to record a webpage (for example https://jonaswouters.sinners.be/3d-animatie/) would be equally helpful.

    Thanks in advance,
Jonas
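    For what it's worth, the usual explanation for this symptom is that a headless browser never paints to the X display, so x11grab records the (empty) desktop rather than the page. One common workaround, sketched below under stated assumptions (a Linux host with Xvfb, ffmpeg and Chrome installed; the display number :99 and the geometry are arbitrary choices), is to run a normal, non-headless Chrome inside a virtual framebuffer and point x11grab at that display:

```python
def xvfb_cmd(display=':99', width=1080, height=1920):
    # A virtual framebuffer for the (non-headless) browser to draw into
    return ['Xvfb', display, '-screen', '0', f'{width}x{height}x24']

def grab_cmd(display=':99', width=1080, height=1920, framerate=30, out='out.mp4'):
    # Capture the virtual display with ffmpeg's x11grab device
    return ['ffmpeg', '-y', '-f', 'x11grab',
            '-video_size', f'{width}x{height}',
            '-framerate', str(framerate),
            '-i', f'{display}+0,0',
            '-vcodec', 'libx264', '-pix_fmt', 'yuv420p',
            '-preset', 'ultrafast', out]
```

    The flow would then be: start Xvfb, set DISPLAY=:99 in the environment, launch Chrome without --headless so it draws into the virtual display, start the grab command with subprocess.Popen, sleep for the duration, and terminate ffmpeg.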


  • CVOpenGLESTextureCacheCreateTextureFromImage from uint8_t buffer

    6 November 2015, by resident_

    I'm developing a video player for iPhone. I'm using the ffmpeg libraries to decode video frames, and I'm using OpenGL ES 2.0 to render the frames to the screen.

    But my render method is very slow.

    A user told me:
    iOS 5 includes a new way to do this fast. The trick is to use AVFoundation and link a Core Video pixel buffer directly to an OpenGL texture.

    My problem now is that my video player sends the render method a uint8_t* buffer that I then upload with glTexSubImage2D.

    But if I want to use CVOpenGLESTextureCacheCreateTextureFromImage, I need a CVImageBufferRef containing the frame.

    The question is: how can I create a CVImageBufferRef from a uint8_t buffer?

    This is my render method:

    - (void)render:(uint8_t *)buffer
    {
        NSLog(@"render");

        [EAGLContext setCurrentContext:context];

        glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
        glViewport(0, 0, backingWidth, backingHeight);

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // OpenGL loads textures lazily, so accessing the buffer is deferred until draw;
        // notify the movie player that we're done with the texture after glDrawArrays.
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, mFrameW, mFrameH, GL_RGB,
                        GL_UNSIGNED_SHORT_5_6_5, buffer);

        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        [moviePlayerDelegate bufferDone];

        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER];
    }

    Thanks,

  • ffmpeg-Error "Buffer queue overflow, dropping." when merging two videos with delay

    20 September 2016, by Stefan Urbansky

    I want to merge two videos (for example, the iPhone trailer from https://peach.blender.org/trailer-page/). The videos are placed on a background image with the overlay filter, and the second video starts 3 seconds later.

    I also need the audio of the two videos to be mixed.

    Here is my code:

    ffmpeg \
       -loop 1 -i background.png  \
       -itsoffset 0  -i trailer_iphone.m4v \
       -itsoffset 3  -i trailer_iphone.m4v \
       \
       -y \
       -t 36 \
       -filter_complex "
           [2:a] adelay=3000 [2delayed];
           [1:a][2delayed] amerge=inputs=2 [audio];
           [0][1:v] overlay=10:10:enable='between(t,0,33)' [lv1];
           [lv1][2:v] overlay=10:300:enable='between(t,0,36)' [video]
       " \
       \
       -threads 0 \
       -map "[video]" -map "[audio]" \
       -vcodec libx264 -acodec aac \
       merged-video.mp4

    I get the error message :

    [Parsed_overlay_3 @ 0x7fe892502ac0] [framesync @ 0x7fe892502b88] Buffer queue overflow, dropping.

    And the merged video has many dropped frames.

    I know there are some other postings with this error message, but the suggested solutions don't work for me.

    How can I fix the problem ?