Advanced search

Media (91)

Other articles (54)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier(), so that visitors are able to edit their own information on the authors page

On other sites (8277)

  • Debian Linux How-To: use eth0 and wlan0 on different routers?

    27 March 2014, by Chris

    I have eth0 and wlan0, each connected to a different router; one is good for download, the other is good for upload.
    But when the eth0 cable is plugged in, wlan0 cannot ping its own router or, for example, google.com.

    ping -I eth0 google.com

    I get an answer.

    ping -I wlan0 google.com

    I get no answer.

    Both routers have an Internet connection, and at the moment both interfaces are using DHCP.

    I need to run ffmpeg to get a live stream via wlan0 -> feed it into ffserver -> a second ffmpeg gets it from ffserver and sends it via eth0 to my dedicated server...

    Any ideas?
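
    One common way to make this work, sketched below with made-up example addresses and a made-up table name (replace them with your real subnets and gateways), is iproute2 source-based policy routing: wlan0 gets its own routing table and default gateway, while eth0 keeps the main table.

    # Example values only: eth0 = 192.168.1.10 via 192.168.1.1,
    #                      wlan0 = 192.168.2.10 via 192.168.2.1
    echo "100 wlan0table" >> /etc/iproute2/rt_tables     # register a second routing table (once)

    ip route add 192.168.2.0/24 dev wlan0 src 192.168.2.10 table wlan0table
    ip route add default via 192.168.2.1 dev wlan0 table wlan0table

    # Any traffic sourced from wlan0's address now uses that table
    ip rule add from 192.168.2.10 table wlan0table

    With rules like these in place, the ffmpeg leg that reads the live stream on wlan0 goes out through the wireless router, while the upload leg on eth0 keeps using the main table's default route.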

  • Saving the openGL context as a video output

    16 September 2016, by activatedgeek

    I am currently trying to save an animation made in OpenGL to a video file. I have tried using OpenCV's VideoWriter, but to no avail. I have successfully been able to generate a snapshot and save it as a BMP using the SDL library. If I save all the snapshots and then generate the video with ffmpeg, that amounts to collecting something like 4 GB worth of images. Not practical.
    How can I write video frames directly during rendering?
    Here is the code I use to take a snapshot when I need one:

    void snapshot() {
        SDL_Surface* snap = SDL_CreateRGBSurface(SDL_SWSURFACE, WIDTH, HEIGHT, 24,
                                                 0x000000FF, 0x0000FF00, 0x00FF0000, 0);
        char* pixels = new char[3 * WIDTH * HEIGHT];
        glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);

        // glReadPixels returns rows bottom-up, so copy each row flipped into the surface
        for (int i = 0; i < HEIGHT; i++)
            memcpy(((char*)snap->pixels) + snap->pitch * i,
                   pixels + 3 * WIDTH * (HEIGHT - i - 1), WIDTH * 3);

        delete[] pixels;
        SDL_SaveBMP(snap, "snapshot.bmp");
        SDL_FreeSurface(snap);
    }

    I need the video output. I have discovered that ffmpeg can be used to create videos from C++ code, but I have not been able to figure out the process. Please help! (One possible approach is sketched at the end of this post.)

    EDIT: I have tried using the OpenCV CvVideoWriter class, but the program crashes ("segmentation fault") the moment it is declared. Compilation shows no errors, of course. Any suggestions?

    SOLUTION FOR PYTHON USERS (requires Python 2.7, python-imaging, python-opengl, python-opencv, and the codecs of the format you want to write to; I am on Ubuntu 14.04 64-bit):

    def snap():
        screenshot = glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE)
        snapshot = Image.frombuffer("RGBA", (W, H), screenshot, "raw", "RGBA", 0, 0)
        snapshot = snapshot.convert("RGB")  # JPEG cannot store the alpha channel
        snapshot.save(os.path.dirname(videoPath) + "/temp.jpg")
        load = cv2.cv.LoadImage(os.path.dirname(videoPath) + "/temp.jpg")
        cv2.cv.WriteFrame(videoWriter, load)

    Here W and H are the window dimensions (width, height). What is happening is that I use PIL to convert the raw pixels read by the glReadPixels call into a JPEG image. I load that JPEG into an OpenCV image and write it to the video writer. I was having certain issues feeding the PIL image directly into the video writer (which would save millions of clock cycles of I/O), but right now I am not working on that. Image is the PIL module; cv2 is the python-opencv module.
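
    For the original C++ question, here is a minimal sketch of one possible approach (not the code of any answer here): pipe each raw frame from glReadPixels() straight into an ffmpeg child process, so no intermediate BMP/JPEG files are written. It assumes a POSIX system with ffmpeg on the PATH; openEncoder() and writeFrame() are illustrative names, and the encoder settings are only an example.

    #include <cstdio>      // popen, pclose, fwrite, snprintf
    #include <vector>
    #include <GL/gl.h>

    // Start an ffmpeg process that reads raw RGB24 frames from stdin and encodes out.mp4.
    FILE* openEncoder(int width, int height) {
        char cmd[512];
        std::snprintf(cmd, sizeof(cmd),
            "ffmpeg -y -f rawvideo -pixel_format rgb24 -video_size %dx%d -framerate 25 "
            "-i - -vf vflip -c:v libx264 -pix_fmt yuv420p out.mp4",  // vflip: GL rows are bottom-up
            width, height);
        return popen(cmd, "w");
    }

    // Call once per rendered frame instead of saving a snapshot file.
    void writeFrame(FILE* encoder, int width, int height) {
        std::vector<unsigned char> pixels(3u * width * height);
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
        std::fwrite(pixels.data(), 1, pixels.size(), encoder);
    }

    // When rendering is finished: pclose(encoder); lets ffmpeg finalize the file.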

  • Loss of frames results in a memory leak in the FFmpeg H.264 decoder

    9 October 2013, by user2863509

    Hi guys!

    I am using the FFmpeg 'avcodec' library in my VoIP application to decode streaming video in H.264 format. The video stream is transmitted over the network via RTP, and I use avcodec_decode_video2() to decode the H.264 frames received from the network.

    I've used valgrind with tool 'massif' and found the memory leaks in the function avcodec_decode_video2().

    I've discovered that the leaks occur when there is a loss of RTP packets before receiving the first INTRA frame (keyframe). The leak is directly proportional to the time which passes from the moment the first call avcodec_decode_video2() until arrival of the next INTRA frame (keyframe).

    Question 1: Has anyone seen this behavior of avcodec_decode_video2()?

    Question 2: I suspect that the decoder allocates memory for the data and does not release it after a good keyframe arrives. How can I get that memory back?

    Thanks, all!
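
    For Question 2, one thing worth checking before blaming the decoder itself is whether every packet and frame is released on the no-picture path as well. Below is a minimal sketch of those cleanup points, assuming the same legacy avcodec_decode_video2() API the post uses; receive_h264_packet() and render() are placeholder names, and ctx is the already opened AVCodecContext.

    AVFrame *frame = av_frame_alloc();            /* one frame object, reused for every call */
    AVPacket pkt;
    av_init_packet(&pkt);

    while (receive_h264_packet(&pkt)) {           /* placeholder RTP depacketizer */
        int got_picture = 0;
        avcodec_decode_video2(ctx, frame, &got_picture, &pkt);
        if (got_picture) {
            render(frame);                        /* consume the decoded picture */
            /* if ctx->refcounted_frames is enabled, also call av_frame_unref(frame) here */
        }
        av_free_packet(&pkt);                     /* release the packet payload either way */
    }

    /* If you decide to wait for a fresh keyframe after heavy loss, drop the
       decoder's buffered references explicitly instead of just feeding it more data: */
    avcodec_flush_buffers(ctx);

    /* Teardown: whatever the decoder still holds is returned here. */
    av_frame_free(&frame);
    avcodec_close(ctx);
    av_free(ctx);

    If memory still grows only while waiting for the first keyframe, calling avcodec_flush_buffers() at that point is a reasonable thing to try; a genuine leak inside libavcodec would need to be reported against the specific FFmpeg version in use.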