Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (46)

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real specific knowledge since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8488)

  • Most efficient way to render bitmap to screen on Linux [on hold]

    22 July 2019, by Maximus

    My goal is to receive a video feed over wifi and display it on my screen. For this, I've created a couple of small programs and a bash script to automate running them. It works like this:

    UDPBitmap/Plotter & ffplay -i - < UDPBitmap/pipe & python requester.py;

    Translation: there is a C++ program called Plotter whose job is to receive packets on an assigned UDP port, process them and write them to a named pipe (UDPBitmap/pipe). The pipe is read by ffplay, which renders the video on screen. The Python file is solely responsible for accessing and controlling the camera with various HTTP requests.

    The above command works fine and everything runs as expected. However, the resulting latency and framerate are a bit worse than I wanted. The bottleneck is not the pipe, which is fast enough, and the wifi transmission is fast enough too. The only thing left is ffplay.
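
    As a point of reference for the pipeline described above, a minimal stand-in for the Plotter -> pipe -> ffplay chain can be sketched in Python: it pushes raw RGB frames into ffplay through a pipe. The frame size, pixel format and synthetic frames are assumptions; the real Plotter writes whatever the camera produces.

    import subprocess

    import numpy as np

    # Hypothetical stand-in for Plotter: generate raw RGB24 frames and pipe
    # them to ffplay. ffplay needs the size and pixel format up front because
    # raw video carries no header.
    W, H = 640, 480
    ffplay = subprocess.Popen(
        ['ffplay', '-f', 'rawvideo', '-pixel_format', 'rgb24',
         '-video_size', '%dx%d' % (W, H), '-i', '-'],
        stdin=subprocess.PIPE)
    for _ in range(300):
        # random noise standing in for a decoded UDP frame
        frame = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)
        ffplay.stdin.write(frame.tobytes())
    ffplay.stdin.close()
    ffplay.wait()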

    Question:

    What is the most efficient way to render a bitmap to the screen on Linux? Is there a de facto library for this that I can use?

    Note:

    • Language/framework/library does not matter (C, C++, Java, Python, native Linux tools and so on...)
    • I do not need a window handle, but is SDL+OpenGL the way to go? (see the sketch after these notes)
    • Writing directly to the framebuffer would be super cool...
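
    Not a definitive answer, but a minimal sketch of the SDL route raised in the second note, using pygame (an SDL binding); get_frame() is a placeholder for the decoded UDP frames, and the window size is an assumption.

    import numpy as np
    import pygame

    W, H = 640, 480

    def get_frame():
        # placeholder: random noise standing in for a decoded UDP frame;
        # pygame.surfarray expects a (width, height, 3) array
        return np.random.randint(0, 256, (W, H, 3), dtype=np.uint8)

    pygame.init()
    screen = pygame.display.set_mode((W, H))
    clock = pygame.time.Clock()
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        surface = pygame.surfarray.make_surface(get_frame())
        screen.blit(surface, (0, 0))   # copy the bitmap into the window
        pygame.display.flip()          # present the frame
        clock.tick(60)                 # cap at 60 fps
    pygame.quit()
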
  • issue starting ffmpeg screen capture on OSX (AVFoundation) from Java

    24 June 2022, by stewori

    I would like to launch from Java on OSX a screen capture command as explained here: https://trac.ffmpeg.org/wiki/Capture/Desktop

    It works fine from the terminal. But when I launch exactly the same command using Java's Runtime.exec I get the following output:

        [AVFoundation input device @ 0x7f892f500400] Video device not found
        '1:': Input/output error

    Assume the command I run is stored as String cmd = "ffmpeg -f avfoundation -i '1:' output.mkv". Things I tried:

    • Using ffmpeg -f avfoundation -list_devices true -i "" I verified that 1 is the correct index for the screen. I also ran that command via Runtime.exec, and it reports the same indexes as when I run it from the terminal.
    • It does not make a difference whether I use '1:' or "\"1:\"". In the latter case it says "1:": Input/output error. Both variants work in the terminal.
    • Neither does it make a difference whether I call Runtime.getRuntime().exec(cmd), Runtime.getRuntime().exec(cmd.split(" ")) or (new ProcessBuilder(cmd.split(" "))).start(). In each case ffmpeg starts and then terminates with the output given above.
    • It does not seem to make a difference whether I read out ffmpeg's output or not (via process.getErrorStream()).
    • The only thing that works is to store the command in a file, e.g. in run.sh, and then call e.g. Runtime.getRuntime().exec("run.sh"). It should be possible to execute this properly from Java without this kind of workaround, right? What am I doing wrong?
    • On Linux, using e.g. ffmpeg -video_size 1024x768 -framerate 25 -f x11grab -i :0.0+100,200 output.mp4 works fine, from the command line or from Java, with Runtime.exec and via ProcessBuilder.

    I did not try it on Windows. On OSX (Mojave 10.14.5) I used Java 12; on Linux (Mint 18, 64-bit) Java 8. It would be some hassle to try Java 12 on Linux, and I suspect the Java version is not the cause, given that avfoundation vs x11grab is the far more dominant difference.
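
    The output is consistent with a quoting problem rather than a Java-version issue: the single quotes around 1: are shell syntax, and Runtime.exec does not invoke a shell, so ffmpeg receives the quote characters verbatim as part of the device name (the error message even echoes them back). The usual fix is to pass 1: unquoted as its own argument, e.g. new ProcessBuilder("ffmpeg", "-f", "avfoundation", "-i", "1:", "output.mkv"). Below is a minimal sketch of the same distinction in Python; it assumes ffmpeg on the PATH and a screen at AVFoundation index 1.

    import subprocess

    # With no shell in between, each list element reaches ffmpeg as one
    # argument, so quote characters must not be part of the string.
    # bad: ffmpeg looks for a device literally named '1:' (quotes included)
    bad = ['ffmpeg', '-f', 'avfoundation', '-i', "'1:'", 'output.mkv']
    # good: the device string itself, no quoting needed
    good = ['ffmpeg', '-f', 'avfoundation', '-i', '1:', 'output.mkv']

    subprocess.run(good, check=True)

    This also explains why the run.sh workaround succeeds: the script is executed by a shell, which strips the quotes before ffmpeg ever sees them.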

    


  • How to use Python (For example, ffmpeg or moviePy) to split large video to display on multiple screen [on hold]

    27 July 2019, by Chen Clarence

    I am trying to find a way to split a very long (1080*15360) video into 8 1080p videos of the same duration. Is there anything that can achieve this, or better yet, allow control over each pixel (e.g. a function to cover part of my video with a black circle)? Right now I have to brute-force it, as in the following, but I'm sure there are much more efficient methods. Thanks in advance!

    import os

    import cv2

    cap = cv2.VideoCapture('sample.avi')

    # inputs & sanity check
    numberOfScreens = 8
    screenArrangement = (1, 8)  # (rows, columns)
    if numberOfScreens != screenArrangement[0] * screenArrangement[1]:
        raise ValueError('The screen arrangement does not match the number of screens')

    # create one output directory per screen, once, before the frame loop
    for n in range(1, numberOfScreens + 1):
        print('creating directory Screen' + str(n))
        os.makedirs('Screen' + str(n), exist_ok=True)

    currentFrame = 0
    unitHeight = unitWidth = None
    while True:
        # read frames
        ret, frame = cap.read()
        if not ret:
            break
        if unitHeight is None:
            # compute the tile size once, from the first frame
            height, width, layers = frame.shape
            unitHeight = height // screenArrangement[0]
            unitWidth = width // screenArrangement[1]
        # cut the frame into the desired tiles
        for i in range(screenArrangement[0]):
            for j in range(screenArrangement[1]):
                # row-major tile index: the multiplier is the number of
                # columns, screenArrangement[1] (the original used [0])
                screen = i * screenArrangement[1] + j + 1
                name = './Screen' + str(screen) + '/frame' + str(currentFrame) + '.png'
                cropImg = frame[i * unitHeight:(i + 1) * unitHeight,
                                j * unitWidth:(j + 1) * unitWidth]
                print('creating ' + name)
                # save the cropped frame losslessly
                cv2.imwrite(name, cropImg, [cv2.IMWRITE_PNG_COMPRESSION, 0])
        currentFrame += 1
    Frames = currentFrame

    # set up a writer per screen and assemble the output videos
    fps = cap.get(cv2.CAP_PROP_FPS) or 30
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    for n in range(1, numberOfScreens + 1):
        writer = cv2.VideoWriter('Screen' + str(n) + '.avi', fourcc, fps,
                                 (unitWidth, unitHeight), True)
        for i in range(Frames):
            img = cv2.imread('./Screen' + str(n) + '/frame' + str(i) + '.png')
            writer.write(img)
        writer.release()
    cap.release()
    cv2.destroyAllWindows()
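
    Alternatively, the splitting itself need not go through per-frame PNGs at all: ffmpeg's crop filter can cut each 1920x1080 tile directly, one pass per tile. A minimal sketch, assuming the source really is 8 tiles of 1920x1080 side by side and that ffmpeg is on the PATH; filenames are illustrative:

    import subprocess

    # crop=w:h:x:y cuts a 1920x1080 window starting at x = k * 1920
    for k in range(8):
        subprocess.run([
            'ffmpeg', '-i', 'sample.avi',
            '-filter:v', 'crop=1920:1080:' + str(k * 1920) + ':0',
            '-an',  # drop audio; remove this flag to keep it
            'Screen' + str(k + 1) + '.avi',
        ], check=True)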