Advanced search

Media (91)

Other articles (72)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Managing rights to create and edit objects

    8 February 2011

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    For a working installation, all software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)

On other sites (7705)

  • Invalid key_frame(I) and P- frame sequence in h264 Binary stream

    9 April 2019, by Syed

    I am capturing video from a USB webcam and encoding it into a raw H.264 stream as follows:

    ffmpeg.exe -f dshow -rtbufsize 200M -i video="Logitech HD Webcam C270" -vcodec libx264 -preset ultrafast -tune zerolatency -g 30 -s 480x640 -bufsize:v 50M output.h264

    I am expecting one key frame and 29 P-frames (with SPS/PPS) in the output.h264 stream, but I am not getting the expected result. However, the video plays fine.

    I tried to get the metadata of the same file using ffprobe:

    ffprobe -show_frames videofilename.h264 > outputlogfile.txt

    Here I can see the proper sequence: one key frame and 29 P-frames. But if I open the h264 file in a binary reader (I am using HDX), the key/P frames are not in the proper sequence.

    You can download the h264 sample and the ffprobe logs from the links below.

    https://www.dropbox.com/s/3ghpkqdc36wdgxr/TimerSample.h264?dl=0
    https://www.dropbox.com/s/gdn64004o0udrfk/TimerSample.txt?dl=0

    You can find the binary sequence of the same file here (filtered by start code).

    Please let me know whether I am missing some filter. Thank you.
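    Since ffprobe already reports the frame types correctly, a quick way to verify the GOP structure is to tally the `pict_type` values it emits. A minimal sketch in Java (matching the other snippets on this page; it assumes `ffprobe` is on the PATH and the file name is passed as an argument, both assumptions not taken from the question):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Map;
import java.util.TreeMap;

public class FrameTypeCount {

    /** Counts each picture type (I, P, B) in a stream of lines as produced by
     *  ffprobe with "-show_entries frame=pict_type -of csv=p=0",
     *  i.e. one pict_type per line. */
    static Map<String, Integer> count(BufferedReader lines) throws IOException {
        Map<String, Integer> counts = new TreeMap<>();
        String line;
        while ((line = lines.readLine()) != null) {
            String t = line.trim();
            if (!t.isEmpty()) {
                counts.merge(t, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        if (args.length == 0) {
            System.out.println("usage: FrameTypeCount <file.h264>");
            return;
        }
        // Assumes ffprobe is on the PATH.
        Process p = new ProcessBuilder("ffprobe", "-v", "error",
                "-select_streams", "v:0",
                "-show_entries", "frame=pict_type",
                "-of", "csv=p=0", args[0]).start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            System.out.println(count(r));
        }
        p.waitFor();
    }
}
```

    With `-g 30` the counts should come out at roughly one I for every 29 P. If they do, the stream itself is most likely fine, and the apparent mis-ordering in the hex viewer is more plausibly a misreading of the NAL start codes than an encoder problem.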

  • Java execute ffmpeg commands with (pipe) "... -f nut - | ffmpeg -i - ..." just hangs

    18 March 2019, by user3776738

    I can’t get this to run, because Java just waits for ffmpeg. But ffmpeg provides neither an input stream nor an error stream. It just runs, doing nothing.

    The output of System.out.println("command: ..."), pasted into bash, runs fine as expected, so there is nothing wrong with the ffmpeg syntax.

    Here’s the code.

    package mypackage;

    import java.awt.image.BufferedImage;
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import javax.imageio.ImageIO;

    /**
     * @author test
     */
    public class ffmpeg_hang {

        /**
         * @param args the command line arguments
         */
        public static void main(String[] args) throws IOException, InterruptedException {
            String INPUT_FILE = "/path/to/media";
            String FFMPEG_PATH = "/path/to/ffmpegFolder/";

            for (int i = 0; (i + 4) < 40; i += 4) {
                String[] ffmpeg_pipe = new String[]{
                    FFMPEG_PATH + "ffmpeg_4.1.1",
                    "-ss", (i + ""), "-t", "4",
                    "-i", INPUT_FILE,
                    "-ac", "1", "-acodec", "pcm_s16le", "-ar", "16000",
                    "-f", "nut", "-", "|",
                    FFMPEG_PATH + "ffmpeg_4.1.1",
                    "-i", "-",
                    "-lavfi", "showspectrumpic=s=128x75:legend=disabled:saturation=0:stop=8000",
                    "-f", "image2pipe", "pipe:1"};

                System.out.println("command: " + String.join(" ", ffmpeg_pipe));

                Process p;
                // ffmpeg wav -> pipe -> spectrogram -> pipe -> java
                p = Runtime.getRuntime().exec(ffmpeg_pipe);

                StringBuilder Boxbuffer = new StringBuilder();
                BufferedReader reader = new BufferedReader(new InputStreamReader(p.getErrorStream()));
                String line = "";

                while ((line = reader.readLine()) != null) {
                    Boxbuffer.append(line);
                }

                System.out.println("ffmpeg errors ->> " + Boxbuffer.toString());
                p.waitFor();

                BufferedImage image = ImageIO.read(p.getInputStream());
                // do stuff with image
            }
        }
    }

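    A likely cause of the hang: `Runtime.exec` does not start a shell, so the `|` is handed to the first ffmpeg as a literal argument and no pipeline is ever formed. A minimal sketch of one way around it (assuming a POSIX shell is available; file names are placeholders, not from the question) is to pass the whole pipeline string to `bash -c` and drain stderr on a separate thread, so neither pipe buffer can fill up and block the child:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class PipeSketch {

    /** Runs a shell pipeline via bash -c and returns its stdout bytes.
     *  stderr is drained on a background thread so the child process
     *  can never block on a full stderr pipe buffer. */
    static byte[] runPipeline(String pipeline) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("bash", "-c", pipeline).start();

        Thread drainer = new Thread(() -> {
            try (InputStream err = p.getErrorStream()) {
                err.transferTo(OutputStream.nullOutputStream());
            } catch (IOException ignored) {
            }
        });
        drainer.start();

        // Read all of stdout while the process runs, then reap it.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (InputStream in = p.getInputStream()) {
            in.transferTo(out);
        }
        p.waitFor();
        drainer.join();
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical pipeline; substitute the real ffmpeg binary and input file.
        String pipeline = args.length > 0 ? args[0]
                : "ffmpeg -ss 0 -t 4 -i input.wav -ac 1 -acodec pcm_s16le -ar 16000 -f nut - | "
                + "ffmpeg -i - -lavfi showspectrumpic=s=128x75 -f image2pipe pipe:1";
        byte[] out = runPipeline(pipeline);
        System.out.println("got " + out.length + " bytes on stdout");
    }
}
```

    Alternatively, the two ffmpeg processes can be started separately with `ProcessBuilder` and their streams connected manually in Java, which avoids the shell entirely.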
  • Integrating CUDA-based video decoder into libavcodec/ffmpeg

    1 February 2019, by tmlen

    I have a CUDA-based decoder for a video format running on the GPU. I am trying to add a "codec" into libavcodec that uses it as an external decoder.

    Currently, I have it working such that I can play a sequence of pictures using ffplay, which get decoded on the GPU by the external decoder.

    But with the current implementation, the codec module copies its output (in an RGB24 pixel format) from GPU memory to host memory after each frame and hands it to libavcodec in its AVFrame. So when using ffplay, the output images are copied back and forth between GPU and host twice (as ffplay has to copy the data back to the GPU for display).

    My goal is to leave the uncompressed data on the GPU, in a CUDA device buffer, and have ffmpeg use it from there.

    ffmpeg seems to have support for this using AVHWAccel.

    • Is there any example implementation that uses this with a CUDA-based decoder (not one using the dedicated hardware decoders through NVDEC, CUVID, etc.)?

    • Does ffmpeg need the output in a pixel format in a CUDA buffer, or can it also be in texture memory, i.e. in a CUDA array?

    • Is it possible to have the hardware decoder as the primary decoder of the AVCodec? It seems that hardware acceleration is intended as an add-on, with the software decoder implemented by the AVCodec available as a fallback.

    • It seems that ffmpeg will allocate a pool of CUDA buffers to receive the output. Is it also possible to allocate the output buffers oneself in the module’s implementation, and to control how many buffers there will be?

    • Is it possible to control how many CPU threads the decoder will be called with? With the external decoder’s interface, the ideal would be one writer thread that pushes compressed codestreams and one reader thread that pulls the uncompressed output into a CUDA buffer.