
Other articles (101)

  • Sounds

    15 May 2013
  • Automated installation script of MediaSPIP

    25 April 2011

    To overcome the difficulties, mainly due to the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server running a compatible Linux distribution.
    You must have SSH access to your server and a root account to use it, since the script installs the dependencies. Contact your provider if you do not have such access.
    The documentation of the use of this installation script is available here.
    The code of this (...)
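    Purely as a sketch of the prerequisite described above (not the actual installer; the script name below is a placeholder), such a script typically begins by checking that it is running as root:

    ```shell
    # Hypothetical first step of an installer like mediaspip_install.sh:
    # refuse to continue without root, since distro packages must be installed.
    if [ "$(id -u)" -ne 0 ]; then
      echo "This installer must run as root; SSH in as root or use sudo."
    else
      echo "Root access confirmed: dependency installation can proceed."
    fi
    ```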

  • The farm's recurring Cron tasks

    1 December 2010

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the shared-hosting farm on a regular basis. Coupled with a system Cron on the farm's central site, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)

On other sites (12128)

  • avformat/hlsenc: optimize help message default value.

    5 July 2017, by Steven Liu

    Always show the hls_segment_type default as 0, and show the flag name, which is clearer.

    Signed-off-by: Steven Liu <lq@chinaffmpeg.org>

    • [DH] libavformat/hlsenc.c
  • How to convert a Video to a Slideshow with synced audio? [closed]

    25 May 2013, by Henry Mazza

    I want a simple way to convert a TED Talk presentation to a slideshow + synced-audio format so I can listen to it in my car. I don't want to lose most of the visuals, as happens with the audio-only format, and I also want to reduce the storage/cell-data needs of the video format.

    So far I have already extracted the key scenes with their timing, plus the audio; now I must glue these together in a synced fashion.

    Possible ways I found but couldn't make work:

    • MP4Box to make a .m4b (audiobook/enhanced podcast), with mp4chap to set each image as a chapter image (but I couldn't find proper documentation on how to do this)
    • FFMPEG to make a flat movie from the images (but I couldn't make each image stay for a varying period of time)

    Ultimately I will automate this process on my VPS to build my private podcast server, so no fancy tools that don't work on Linux, please.

    EDIT: actually the podcast is a lousy idea; the artwork must be in a square aspect ratio and has serious compatibility problems with various players. Still studying the Book format.
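    For the per-image duration problem raised above, one commonly used approach (not from the original post; file names are placeholders) is ffmpeg's concat demuxer, whose list format accepts a duration directive per entry:

    ```shell
    # Sketch: build a concat list where each slide carries its own
    # display duration (slide*.png and audio.mp3 are placeholder inputs).
    cat > slides.txt <<'EOF'
    file 'slide001.png'
    duration 12.5
    file 'slide002.png'
    duration 4.0
    file 'slide002.png'
    EOF
    # The final file is listed twice because the concat demuxer ignores a
    # 'duration' directive that has no following entry.
    if command -v ffmpeg >/dev/null 2>&1; then
      ffmpeg -y -f concat -safe 0 -i slides.txt -i audio.mp3 \
             -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest slideshow.mp4 \
      || echo "inputs are placeholders; supply real slides and audio"
    fi
    ```

    Because everything here is ffmpeg plus a text file, the whole pipeline scripts cleanly on a Linux VPS.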

  • Matplotlib pipe canvas.draw() to ffmpeg - unexpected result [duplicate]

    31 July 2022, by Narusan

    I'm using this code from here to try to pipe multiple matplotlib plots into ffmpeg and write a video file:

    import numpy as np
    import matplotlib.pyplot as plt
    import subprocess

    xlist = np.random.randint(100,size=100)
    ylist = np.random.randint(100, size=100)
    color = np.random.randint(2, size=100)

    f = plt.figure(figsize=(5,5), dpi = 300)
    canvas_width, canvas_height = f.canvas.get_width_height()
    ax = f.add_axes([0,0,1,1])
    ax.axis('off')


    # Open an ffmpeg process
    outf = 'ffmpeg.mp4'
    cmdstring = ('ffmpeg',
        '-y', '-r', '30', # overwrite, 30fps
        '-s', '%dx%d' % (canvas_width, canvas_height), # size of image string
        '-pix_fmt', 'argb', # format
        '-f', 'rawvideo',  '-i', '-', # tell ffmpeg to expect raw video from the pipe
        '-vcodec', 'mpeg4', outf) # output encoding
    p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

    # Draw 1000 frames and write to the pipe
    for frame in range(10):
        print("Working on frame")
        # draw the frame
        f = plt.figure(figsize=(5,5), dpi=300)
        ax = f.add_axes([0,0,1,1])
        ax.scatter(xlist, ylist,
                   c=color, cmap = 'viridis')
        f.canvas.draw()
        plt.show()

        # extract the image as an ARGB string
        string = f.canvas.tostring_argb()
        # write to pipe
        p.stdin.write(string)

    # Finish up
    p.communicate()

    While plt.show() does show the correct plot (see image below), the video that ffmpeg creates differs from what plt.show() shows. I presume the issue is with f.canvas.draw(), but I'm not sure how to get a look at what canvas.draw() actually renders.

    plt.show() output: [image omitted]

    ffmpeg video (imgur link)

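    One way to answer the question's last point, inspecting what canvas.draw() actually rendered, is to dump the same pixel buffer that would go down the pipe. This is a sketch, not the poster's code; note that recent matplotlib versions deprecate tostring_argb() in favour of buffer_rgba(), which pairs with -pix_fmt rgba (not argb) on the ffmpeg side:

    ```python
    import matplotlib
    matplotlib.use("Agg")  # off-screen rendering; plt.show() is not needed
    import numpy as np
    import matplotlib.pyplot as plt

    fig = plt.figure(figsize=(5, 5), dpi=100)
    ax = fig.add_axes([0, 0, 1, 1])
    ax.scatter(np.random.rand(50), np.random.rand(50))
    fig.canvas.draw()  # force the render

    w, h = fig.canvas.get_width_height()
    frame = np.asarray(fig.canvas.buffer_rgba())  # (h, w, 4) uint8 array

    # Save the exact pixels ffmpeg would receive, for visual inspection:
    plt.imsave("frame_debug.png", frame)

    raw = frame.tobytes()  # what you would p.stdin.write(); pair with -pix_fmt rgba
    print(len(raw) == w * h * 4)  # -> True: buffer matches the -s WxH geometry
    ```

    Comparing frame_debug.png with the ffmpeg output also reveals size mismatches: the original code sizes the pipe from the first figure but then creates a new figure per frame, so keeping a single figure (or matching -s to the per-frame canvas) is part of the fix.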