Advanced search

Media (1)

Word: - Tags -/censure

Other articles (45)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • Making files available

    14 April 2011, by

    By default, when it is initialized, MediaSPIP does not allow visitors to download files, whether they are the originals or the result of their transformation or encoding. It only allows them to be viewed.
    However, it is possible and easy to give visitors access to these documents in various forms.
    All of this happens in the template configuration page. You need to go to the channel's administration area and choose in the navigation (...)

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP sites while installing its functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since the usual SPIP private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

On other sites (6511)

  • matplotlib 3D linecollection animation gets slower over time

    15 June 2021, by Vignesh Desmond

    I'm trying to animate a 3D line plot of attractors using Line3DCollection. The animation is initially fast, but it gets progressively slower over time. A minimal example of my code:

    import subprocess

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d.art3d import Line3DCollection


    def generate_video(nframes):
        fig = plt.figure(figsize=(16, 9), dpi=120)
        canvas_width, canvas_height = fig.canvas.get_width_height()
        ax = fig.add_axes([0, 0, 1, 1], projection='3d')

        # Random data stands in for the attractor trajectory
        X = np.random.random(nframes)
        Y = np.random.random(nframes)
        Z = np.random.random(nframes)

        cmap = plt.cm.get_cmap("hsv")
        line = Line3DCollection([], cmap=cmap)
        ax.add_collection3d(line)
        line.set_segments([])

        def update(frame):
            # Rebuild the full segment list up to the current frame
            i = frame % len(X)
            points = np.array([X[:i], Y[:i], Z[:i]]).transpose().reshape(-1, 1, 3)
            segs = np.concatenate([points[:-1], points[1:]], axis=1)
            line.set_segments(segs)
            line.set_array(np.array(Y))  # Color gradient
            ax.elev += 0.0001
            ax.azim += 0.1

        outf = 'test.mp4'
        cmdstring = ('ffmpeg',
                     '-y', '-r', '60',  # overwrite output, 60 fps
                     '-s', '%dx%d' % (canvas_width, canvas_height),
                     '-pix_fmt', 'argb',
                     '-f', 'rawvideo', '-i', '-',
                     '-b:v', '5000k', '-vcodec', 'mpeg4', outf)
        p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

        # Render each frame and pipe the raw ARGB bytes to ffmpeg's stdin
        for frame in range(nframes):
            update(frame)
            fig.canvas.draw()
            string = fig.canvas.tostring_argb()
            p.stdin.write(string)

        p.communicate()


    generate_video(nframes=10000)

    I used the code from this answer to save the animation to mp4 with ffmpeg instead of anim.FuncAnimation (sketched below for comparison), as it's much faster for me. But both methods get slower over time, and I'm not sure how to keep the animation from slowing down. Any advice is welcome.

    Versions:
    Matplotlib: 3.4.2
    FFMpeg: 4.2.4-1ubuntu0.1
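
    For comparison, here is a minimal sketch of the anim.FuncAnimation route mentioned above. It is not taken from the question: it assumes the fig, update and nframes names from the code above are in scope (used in place of the subprocess pipe), and the output file name and writer settings are illustrative only.

    from matplotlib.animation import FuncAnimation, FFMpegWriter

    # Hypothetical sketch: drive the same update() callback through FuncAnimation
    # and let matplotlib pipe the rendered frames to ffmpeg via FFMpegWriter.
    anim = FuncAnimation(fig, update, frames=nframes)
    anim.save('test_funcanimation.mp4', writer=FFMpegWriter(fps=60, bitrate=5000))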

    


  • h264 lossless coding

    29 September 2014, by cloudraven

    Is it possible to do completely lossless encoding in h264? By lossless, I mean that if I feed it a series of frames and encode them, and then if I extract all the frames from the encoded video, I will get the exact same frames as in the input, pixel by pixel, frame by frame. Is that actually possible?
    Take this example:

    I generate a bunch of frames, then I encode the image sequence to an uncompressed AVI (with something like virtualdub), and I then apply lossless h264 (the help files claim that setting --qp 0 gives lossless compression, but I am not sure whether that means there is no loss at any point of the process or just that the quantization is lossless). I can then extract the frames from the resulting h264 video with something like mplayer.

    I tried Handbrake first, but it turns out it doesn't support lossless encoding. I tried x264, but it crashes. It may be because my source AVI file is in the RGB colorspace instead of YV12. I don't know how to feed a series of YV12 bitmaps to x264, or in what format, anyway, so I cannot even try.

    In summary, what I want to know is whether there is a way to go from

    Series of lossless bitmaps (in any colorspace) -> some transformation -> h264 encode -> h264 decode -> some transformation -> the original series of lossless bitmaps

    Is there a way to achieve this?

    EDIT: There is a VERY valid point about lossless H264 not making much sense. I am well aware that there is no way I could tell (with just my eyes) the difference between an uncompressed clip and one compressed at a high rate in H264, but that doesn't mean it is without uses. For example, it may be useful for storing video for editing without taking huge amounts of space, without losing quality, and without spending too much encoding time every time the file is saved.

    UPDATE 2: Now x264 doesn't crash. I can use as sources either avisynth or lossless YV12 Lagarith (to avoid the colorspace compression warning). However, even with --qp 0 and an RGB or YV12 source, I still get some differences, minimal but present. This is troubling, because all the information I have found on lossless predictive coding (--qp 0) claims that the whole encoding should be lossless, but I am unable to verify this.
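
    As a minimal sketch (not from the original post) of the command-line route being described, driven from Python with subprocess: the file names are placeholders, and it assumes an ffmpeg build with libx264/libx264rgb available. For an RGB source, libx264rgb with -qp 0 avoids the RGB-to-YV12 conversion, which is itself lossy (4:2:0 chroma subsampling) even when the encoder runs losslessly.

    import subprocess

    # Placeholder file names; assumes ffmpeg with libx264rgb support is on PATH.
    # -qp 0 requests lossless coding; libx264rgb keeps the frames in RGB, so no
    # chroma-subsampling step is introduced before encoding.
    subprocess.run([
        "ffmpeg", "-i", "input.avi",
        "-c:v", "libx264rgb", "-qp", "0",
        "lossless.mkv",
    ], check=True)

    Decoding the result back to raw frames and comparing them with the originals (for example via ffmpeg's framemd5 output) is one way to check whether the round trip is bit-exact.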
