
Other articles (53)

  • (De)Activating features (plugins)

    18 February 2011

    To manage adding and removing extra features (plugins), MediaSPIP uses SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To get there, simply go to the configuration area and open the "Gestion des plugins" page.
    MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...)

  • Submitting bugs and patches

    10 April 2011

    Unfortunately, no piece of software is ever perfect...
    If you think you have found a bug, report it in our ticket system, taking care to include the relevant information: the type and exact version of the browser in which you see the anomaly; as precise an explanation as possible of the problem encountered; if possible, the steps to reproduce the problem; a link to the site / page in question.
    If you think you have fixed the bug yourself (...)

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but they can each be configured independently to change the minimum status required to use them, notably: writing content on the site, adjustable via the form-template management; adding notes to articles; adding captions and annotations to images;

On other sites (10074)

  • Input seeking for frame at specified timestamp with Py-AV

    9 December 2019, by neonScarecrow

    I have a project already using Py-AV and am trying to replicate a specific ffmpeg command. The goal is to get a frame roughly around the specified timestamp.

    Here’s the ffmpeg command (from https://trac.ffmpeg.org/wiki/Seeking):

    ffmpeg -ss 14 -i https://some_url.mp4 -frames:v 1 frame_at_14_seconds.jpg

    Here’s my code:

    import av

    # Return one frame from roughly 14 seconds into the movie
    target_sec = 14
    container = av.open('https://some_url.mp4', 'r')
    container.streams.video[0].thread_type = 'AUTO'
    video_stream = next(s for s in container.streams if s.type == 'video')
    time_base = float(video_stream.time_base)
    # Convert seconds into the stream's time base and add the stream start time
    target_timestamp = int(target_sec / time_base) + video_stream.start_time
    video_stream.seek(target_timestamp)
    for frame in container.decode(video_stream):
        frame.to_image().save('frame_at_14_seconds.jpg')
        break

    Additionally, I haven't found any documentation about this: does anyone know whether either approach (the ffmpeg command or av.open) downloads the entire file to a temporary file behind the scenes? I’m looking for a less memory-intensive way to read a frame for every second of an up-to-60-second video.
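
    For the per-second case, one option is to seek before each decode instead of decoding the whole file. The sketch below is an untested outline, not the asker's code: it assumes container.seek() with stream= interprets the offset in that stream's time base, and that the HTTP server supports range requests (to my knowledge ffmpeg's HTTP reader seeks via Range requests rather than spooling the whole file to disk). The URL is the placeholder from the question.

    import av

    url = 'https://some_url.mp4'  # placeholder URL from the question
    container = av.open(url, 'r')
    stream = container.streams.video[0]
    stream.thread_type = 'AUTO'

    duration_sec = 60  # "up to 60 second video", per the question
    for sec in range(duration_sec):
        # The seek lands on the nearest keyframe at or before `sec`, so each
        # saved frame is only approximately at that second (the same "roughly
        # around" behaviour as -ss before -i).
        offset = int(sec / stream.time_base) + (stream.start_time or 0)
        container.seek(offset, stream=stream)
        for frame in container.decode(stream):
            frame.to_image().save(f'frame_at_{sec:02d}_seconds.jpg')
            break

    Memory use stays bounded because each iteration decodes only from a keyframe to the first complete frame; nothing requires holding the whole file.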

  • moviepy black border around png when compositing into an MP4

    27 August 2022, by OneWorld

    Compositing a PNG into an MP4 video creates a black border around the edge.

    This is using moviepy 1.0.0.

    The code below reproduces the MP4 with the attached red-text PNG:

    [image: the red-text PNG]
    import numpy as np
    import moviepy.editor as mped

    def composite_txtpng_on_colour():
        # Solid green background clip, 400x300, 2 seconds long
        bg_color = mped.ColorClip(size=[400, 300],
                                  color=np.array([0, 255, 0]).astype(np.uint8),
                                  duration=2).set_position((0, 0))
        # Overlay the red-text PNG near the top-left corner
        text_png_position = [5, 5]
        text_png = mped.ImageClip("./txtpng.png", duration=3).set_position(text_png_position)

        canvas_size = bg_color.size
        stacked_clips = mped.CompositeVideoClip([bg_color, text_png],
                                                size=canvas_size).set_duration(2)
        stacked_clips.write_videofile('text_with_black_border_video.mp4', fps=24)

    composite_txtpng_on_colour()

    The result is an MP4 that can be played in VLC. A screenshot of the black edge can be seen below:

    [image: screenshot of the black border around the text]
    Any suggestions to remove the black borders would be much appreciated.

    Update: it looks like moviepy does a blit instead of alpha compositing. Here is the relevant moviepy source for blit:

    def blit(im1, im2, pos=None, mask=None, ismask=False):
        """Blit an image over another. Blits ``im1`` on ``im2`` at position
        ``pos=(x,y)``, using the ``mask`` if provided. If ``im1`` and ``im2``
        are mask pictures (2D float arrays) then ``ismask`` must be ``True``.
        """
        if pos is None:
            pos = [0, 0]

        # xp1,yp1,xp2,yp2 = blit area on im2
        # x1,y1,x2,y2 = area of im1 to blit on im2
        xp, yp = pos
        x1 = max(0, -xp)
        y1 = max(0, -yp)
        h1, w1 = im1.shape[:2]
        h2, w2 = im2.shape[:2]
        xp2 = min(w2, xp + w1)
        yp2 = min(h2, yp + h1)
        x2 = min(w1, w2 - xp)
        y2 = min(h1, h2 - yp)
        xp1 = max(0, xp)
        yp1 = max(0, yp)

        if (xp1 >= xp2) or (yp1 >= yp2):
            return im2

        blitted = im1[y1:y2, x1:x2]

        new_im2 = +im2  # unary + copies the array

        if mask is None:
            new_im2[yp1:yp2, xp1:xp2] = blitted
        else:
            mask = mask[y1:y2, x1:x2]
            if len(im1.shape) == 3:
                mask = np.dstack(3 * [mask])
            blit_region = new_im2[yp1:yp2, xp1:xp2]
            new_im2[yp1:yp2, xp1:xp2] = (1.0 * mask * blitted + (1.0 - mask) * blit_region)

        return new_im2.astype('uint8') if (not ismask) else new_im2

    And so, Rotem is right:

    new_im2[yp1:yp2, xp1:xp2] = (1.0 * mask * blitted + (1.0 - mask) * blit_region)

    is

    (alpha * img_rgb + (1.0 - alpha) * bg)

    and this is how moviepy composites, and this is why we see black at the edges.
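
    To make the arithmetic concrete, here is a tiny numpy illustration of that straight-alpha blend (my own sketch, not from the question), assuming an antialiased edge pixel whose stored RGB is black under partial alpha, over the green background from the reproduction code:

    import numpy as np

    bg = np.array([0.0, 255.0, 0.0])    # green background pixel
    fg_rgb = np.array([0.0, 0.0, 0.0])  # edge pixel RGB as stored in the PNG (black)
    alpha = 0.5                         # partial coverage on the antialiased edge

    blended = alpha * fg_rgb + (1.0 - alpha) * bg
    print(blended)  # [0. 127.5 0.] -- darker than the pure green background

    With this formula, any semi-transparent pixel pulls the result toward the PNG's stored RGB, which is black along the text's antialiased edge, hence the dark border.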

    


  • Writing Live-Multimedia-Application using OpenGL & Co. saving output to disc [closed]

    21 January 2013, by user1997286

    I want to write an application that does the following things:

    • Getting commands via ArtNET (DMX over Ethernet, a control protocol) for each object (called a Layer)
    • each Layer can be one of the following: live camera stream, movie, image
    • each Layer can be translated, rotated or stretched
    • on each Layer I can set filters (like a kaleidoscope effect, blur, colour correction, etc.)
    • the resulting video stream lives in 3D space
    • I want to display each part of the image on one projector (up to 3 in total) using a TripleHead2GO (the 3 projectors display different regions of my DVI output). Each projector image should have its own soft-edge and keystone parameters.
    • the resulting image will also be shown on a preview screen with some information overlaid.

    I think all of that should be possible with OpenGL and OpenAL (for the movie audio).

    I think I'll use C++, OpenGL for graphics, OpenAL for audio, ffmpeg for video conversion if needed, and Ubuntu/Debian as the OS.

    The software is used to do multimedia shows at concerts, including cameras & co.

    All of that should happen live (on a FullHD output) on an i7 3770, a GTX 670 and 16 GB of RAM, with at least 8 Layers (4 live images at once, plus some overlays like the actor's name and some logos).

    But now comes the question.

    Is it also possible to do the following with that setup:

    • Writing the output image, with all the 3D translations, to a movie file (to master a DVD later) with audio; the sketch after this list shows the usual pattern for this
    • Mixing audio from different inputs & files (ambience mics, signal from the sound mixer, playbacks from my own application) into more than one mix (e.g. one mix for the recording, one mix for live)
    • Streaming that output, complete or in parts (e.g. the left part of the image), over the network (for example, projector 1 is near the server, so I connect it via DVI; projectors 2+3 are connected to a computer that receives the streams for those two projectors, with soft edge on each stream; and screen 4 is outside the concert hall and shows the complete live stream)
    • What GUI framework should I use for that?
    • Is it perhaps even performant enough to use Java for that?
    • Is it possible to use that mechanism for pure offline rendering (e.g. I have stored the cut points on disc and saved every single camera stream, so I can fix some errors later or cut out some parts)?
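
    On the first point, the usual pattern is to read rendered frames back from the framebuffer (e.g. with glReadPixels) and pipe them as raw video into an ffmpeg process that encodes to disk. The sketch below is a minimal stand-in, not the poster's code: it uses Python with dummy frames where the GL readback would go, and the output filename is made up.

    import subprocess
    import numpy as np

    width, height, fps = 1920, 1080, 30  # FullHD output, as in the post

    # ffmpeg consumes raw RGB frames on stdin and encodes them to H.264
    encoder = subprocess.Popen([
        'ffmpeg', '-y',
        '-f', 'rawvideo', '-pix_fmt', 'rgb24',
        '-s', f'{width}x{height}', '-r', str(fps),
        '-i', '-',
        '-c:v', 'libx264', '-pix_fmt', 'yuv420p',
        'show_recording.mp4',
    ], stdin=subprocess.PIPE)

    for _ in range(fps * 5):  # 5 seconds of dummy frames standing in for glReadPixels output
        frame = np.zeros((height, width, 3), dtype=np.uint8)
        encoder.stdin.write(frame.tobytes())

    encoder.stdin.close()
    encoder.wait()

    The same pipe idea extends to the streaming bullet: point ffmpeg at a network sink (or crop each projector's region with a filter) instead of a file.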