Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (45)

  • XMP PHP

    13 May 2011, by

    As Wikipedia puts it, XMP means:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it handles a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to record, in the form of an XML document, information related to a file: title, author, history (...)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP sites while installing their functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since the usual SPIP private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

On other sites (6826)

  • FFMPEG: Working of the parser of a video decoder

    4 December 2013, by Zax

    I'm going through the workings of the H.263 video decoder's parser in the FFmpeg multimedia framework.

    What I know:

    Every video decoder needs a parser to fetch frames from a given input stream; once the data for a frame has been obtained, it is sent to the decoder for the decoding process.

    Every codec's parser needs to define a structure of type AVCodecParser. This structure has function pointers:

    .parser_parse -> points to the function that handles the parsing functionality

    .parser_close -> points to a function that performs buffer deallocation.

    Taking the H.263 video decoder as an example, its parser function is shown below:

    static int h263_parse(AVCodecParserContext *s,
                          AVCodecContext *avctx,
                          const uint8_t **poutbuf, int *poutbuf_size,
                          const uint8_t *buf, int buf_size)
    {
        ParseContext *pc = s->priv_data;
        int next;

        if (s->flags & PARSER_FLAG_COMPLETE_FRAMES) {
            /* The caller guarantees that each input buffer already contains a
               complete frame, so the whole buffer can be passed through as-is. */
            next = buf_size;
        } else {
            /* Scan the input for the end of the current frame, i.e. the offset
               of the next picture start code. */
            next = ff_h263_find_frame_end(pc, buf, buf_size);

            if (ff_combine_frame(pc, next, &buf, &buf_size) < 0) {
                /* The frame is not complete yet: output nothing and report
                   that the whole input buffer has been consumed. */
                *poutbuf      = NULL;
                *poutbuf_size = 0;
                return buf_size;
            }
        }

        /* A complete frame has been assembled: hand it back to the caller. */
        *poutbuf      = buf;
        *poutbuf_size = buf_size;
        return next;
    }

    Could anyone please explain the parameters of the above function?

    My understanding is:

    poutbuf -> a pointer to the parsed frame data.

    poutbuf_size -> contains the size of that data.

    Are my assumptions right? Which parameter holds the input buffer data? And what does the above parse function return? A brief explanation of the above code would also help anyone referring to this post. Any information regarding this will be really helpful.

    Thanks in advance.

    -Regards

  • Damaged h264 stream not working with ffmpeg but working with vlc or mplayer

    15 April 2013, by gregoiregentil

    I have an h264 file, coming from an RTSP stream, that is slightly damaged. Some frames are altered.

    ffmpeg reports:

    ffmpeg -i stream.mpg
    ffmpeg version 0.8.6-4:0.8.6-0ubuntu0.12.04.1, Copyright (c) 2000-2013 the Libav developers
     built on Apr  2 2013 17:00:59 with gcc 4.6.3
    *** THIS PROGRAM IS DEPRECATED ***
    This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.

    Seems stream 0 codec frame rate differs from container frame rate: 180000.00 (180000/1) -> 90000.00 (180000/2)
    Input #0, mpegts, from 'a.mpg':
     Duration: 00:03:18.84, start: 93370.745522, bitrate: 2121 kb/s
     Program 1
       Stream #0.0[0x44](): Video: h264 (Baseline), yuv420p, 640x480, 90k tbr, 90k tbn, 180k tbc
    At least one output file must be specified

    I can play the file with VLC or mplayer. Obviously, the damaged frames are "kind of blurred", but it works. mplayer reports:

    mplayer stream.mpg
    MPlayer2 UNKNOWN (C) 2000-2011 MPlayer Team
    mplayer: could not connect to socket
    mplayer: No such file or directory
    Failed to open LIRC support. You will not be able to use your remote control.

    Playing stream.mpg.
    Detected file format: MPEG-2 transport stream format (libavformat)
    [lavf] stream 0: video (h264), -vid 0
    LAVF: Program 1
    VIDEO:  [H264]  640x480  0bpp  90000.000 fps    0.0 kbps ( 0.0 kbyte/s)
    Load subtitles in .
    [ass] auto-open
    ==========================================================================
    Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
    Asking decoder to use 2 threads if supported.
    Selected video codec: [ffh264] vfm: ffmpeg (FFmpeg H.264)
    ==========================================================================
    Audio: no sound
    Starting playback...
    V:   0.0   0/  0 ??% ??% ??,?% 0 0
    Movie-Aspect is undefined - no prescaling applied.
    VO: [xv] 640x480 => 640x480 Planar YV12
    V:93370.7   0/  0 ??% ??% ??,?% 0 0
    No pts value from demuxer to use for frame!
    Video pts after filters MISSING
    V:93370.7   0/  0 ??% ??% ??,?% 0 0
    No pts value from demuxer to use for frame!
    Video pts after filters MISSING
    V:93370.7   0/  0 ??% ??% ??,?% 0 0
    No pts value from demuxer to use for frame!
    Video pts after filters MISSING
    V:93370.7   0/  0 ??% ??% ??,?% 0 0
    No pts value from demuxer to use for frame!
    Video pts after filters MISSING
    V:93370.7   0/  0 ??% ??% ??,?% 0 0
    No pts value from demuxer to use for frame!
    Video pts after filters MISSING
    V:93370.7   0/  0 ??% ??% ??,?% 0 0
    No pts value from demuxer to use for frame!
    Video pts after filters MISSING
    V:93370.7   0/  0 ??% ??% ??,?% 0 0
    No pts value from demuxer to use for frame!

    When I try to re-encode the file with:

    ffmpeg -i stream.mpg -fflags +genpts -an -vcodec mpeg4 -r 65535/2733 stream.mp4

    ffmpeg seems to jump over the altered frames: the length of stream.mp4 is much shorter than the length of stream.mpg.

    How could I fix this problem, i.e. get ffmpeg to output something similar to what mplayer and VLC output?

  • Working on images asynchronously

    15 December 2013, by Mikko Koppanen — Imagick, PHP stuff

    To get my quota of buzzwords for the day, we are going to look at using ZeroMQ and Imagick to create a simple asynchronous image processing system. Why asynchronous? First of all, separating the image handling from interactive PHP scripts allows us to scale the image processing separately from the web heads. For example, we could do the image processing on separate servers which have SSDs attached and more memory. In this example, making the images available to all worker nodes is left to the reader.

    Secondly, separating the image processing from a web script can provide a more responsive experience to the user. This doesn't necessarily mean faster, but in a multiple-image upload scenario, for example, this method allows the user to do something else on the site while we process the images in the background. This can be beneficial especially in cases where users upload hundreds of images at a time. To achieve a simple distributed image processing infrastructure, we are going to use ZeroMQ to communicate between the different components and Imagick to work on the images.

    The first part we are going to create is a simple "Worker" process skeleton. Naturally, for a live environment you would want more error handling and possibly pcntl for process control, but for the sake of brevity the example is bare-bones:

    <?php

    define('THUMBNAIL_ADDR', 'tcp://127.0.0.1:5000');
    define('COLLECTOR_ADDR', 'tcp://127.0.0.1:5001');

    class Worker {

        private $in;
        private $out;
        private $commands = array();

        public function __construct($in_addr, $out_addr)
        {
            $context = new ZMQContext();

            $this->in = new ZMQSocket($context, ZMQ::SOCKET_PULL);
            $this->in->bind($in_addr);

            $this->out = new ZMQSocket($context, ZMQ::SOCKET_PUSH);
            $this->out->connect($out_addr);
        }

        public function work() {
            while ($command = $this->in->recvMulti()) {
                if (isset($this->commands[$command[0]])) {
                    echo "Received work" . PHP_EOL;

                    $callback = $this->commands[$command[0]];

                    array_shift($command);
                    $response = call_user_func_array($callback, $command);

                    if (is_array($response))
                        $this->out->sendMulti($response);
                    else
                        $this->out->send($response);
                }
                else {
                    error_log("There is no registered worker for {$command[0]}");
                }
            }
        }

        public function register($command, $callback)
        {
            $this->commands[$command] = $callback;
        }
    }
    ?>

    The Worker class allows us to register commands with callbacks associated with them. In our case the Worker class doesn't actually care or know about the parameters being passed to the actual callback; it just blindly passes them on. We are using two separate sockets in this example: one for incoming work requests and one for passing the results onwards. This allows us to create a simple pipeline by adding more workers into the mix. For example, we could first have a watermark worker, which takes the original image and composites a watermark on it, then passes the file onwards to the thumbnail worker, which creates different sizes of thumbnails and passes the final results to the event collector.
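
    To make that pipeline idea concrete, here is a minimal sketch of what such a watermark worker could look like, reusing the Worker class above. It is not part of the original post: WATERMARK_ADDR and watermark.png are assumptions, and a real worker would need proper error handling.

    <?php
    include __DIR__ . '/common.php';

    // Hypothetical extra stage: WATERMARK_ADDR is an assumed constant and
    // watermark.png an assumed overlay image; neither comes from the original post.
    $worker = new Worker(WATERMARK_ADDR, THUMBNAIL_ADDR);

    $worker->register('watermark', function ($filename, $width, $height) {
        $im    = new Imagick($filename);
        $stamp = new Imagick('watermark.png');

        // Composite the watermark into the lower-right corner and overwrite the file
        $im->compositeImage($stamp, Imagick::COMPOSITE_OVER,
                            $im->getImageWidth()  - $stamp->getImageWidth(),
                            $im->getImageHeight() - $stamp->getImageHeight());
        $im->writeImage($filename);

        // Returning an array makes Worker::work() use sendMulti(), so the next
        // stage receives an ordinary 'thumbnail' command with these arguments
        return array('thumbnail', $filename, $width, $height);
    });

    echo "Running watermark worker.." . PHP_EOL;
    $worker->work();

    In such a setup the client would push a 'watermark' command to this worker instead of sending 'thumbnail' directly to the thumbnailer.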

    The next thing we are going to create is a simple worker script that does the actual thumbnailing of the images:

    <?php
    include __DIR__ . '/common.php';

    // Create the worker, bind the inbound socket to THUMBNAIL_ADDR
    // and connect the outbound socket to COLLECTOR_ADDR
    $worker = new Worker(THUMBNAIL_ADDR, COLLECTOR_ADDR);

    // Register our thumbnail callback, nothing special here
    $worker->register('thumbnail', function ($filename, $width, $height) {
        $info = pathinfo($filename);

        $out = sprintf("%s/%s_%dx%d.%s",
                       $info['dirname'],
                       $info['filename'],
                       $width,
                       $height,
                       $info['extension']);

        $status  = 1;
        $message = '';

        try {
            $im = new Imagick($filename);
            $im->thumbnailImage($width, $height);
            $im->writeImage($out);
        }
        catch (Exception $e) {
            $status  = 0;
            $message = $e->getMessage();
        }

        return array(
            'status'    => $status,
            'filename'  => $filename,
            'thumbnail' => $out,
            'message'   => $message,
        );
    });

    // Run the worker, this call blocks
    echo "Running thumbnail worker.." . PHP_EOL;
    $worker->work();

    As you can see from the code, the thumbnail worker registers a callback for the 'thumbnail' command. The callback does the thumbnailing based on its input and returns the status, the original filename and the thumbnail filename. We have connected our Worker's "outbound" socket to the event collector, which will receive the results from the thumbnail worker and do something with them. What that "something" is depends on you. For example, you could push the response into a websocket to show immediate feedback to the user, or store the results into a database.

    Our example event collector will just do a var_dump on every event it receives from the thumbnailer :

    <?php
    include __DIR__ . '/common.php';

    $socket = new ZMQSocket(new ZMQContext(), ZMQ::SOCKET_PULL);
    $socket->bind(COLLECTOR_ADDR);

    echo "Waiting for events.." . PHP_EOL;
    while (($message = $socket->recvMulti())) {
        var_dump($message);
    }
    ?>
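
    If you would rather persist the results than just dump them, the collector loop could, for instance, write each event into a database. The following is a rough sketch of that idea, not part of the original post; it assumes SQLite via PDO and a hypothetical thumbnails table whose columns mirror the fields returned by the thumbnail worker.

    <?php
    include __DIR__ . '/common.php';

    // Hypothetical variation: store incoming events in SQLite instead of dumping them.
    // The database file and table schema are assumptions, not from the original post.
    $db = new PDO('sqlite:' . __DIR__ . '/thumbnails.sqlite');
    $db->exec('CREATE TABLE IF NOT EXISTS thumbnails
               (status INTEGER, filename TEXT, thumbnail TEXT, message TEXT)');

    $insert = $db->prepare('INSERT INTO thumbnails (status, filename, thumbnail, message)
                            VALUES (?, ?, ?, ?)');

    $socket = new ZMQSocket(new ZMQContext(), ZMQ::SOCKET_PULL);
    $socket->bind(COLLECTOR_ADDR);

    echo "Waiting for events.." . PHP_EOL;
    while (($message = $socket->recvMulti())) {
        // The thumbnail worker sends status, filename, thumbnail and message in this order
        $insert->execute($message);
    }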

    The final piece of the puzzle is the client that pumps messages into the pipeline. The client connects to the thumbnail worker and passes on the filename and the desired dimensions:

    <?php
    include __DIR__ . '/common.php';

    $socket = new ZMQSocket(new ZMQContext(), ZMQ::SOCKET_PUSH);
    $socket->connect(THUMBNAIL_ADDR);

    $socket->sendMulti(
        array(
            'thumbnail',
            realpath('./test.jpg'),
            50,
            50,
        )
    );
    echo "Sent request" . PHP_EOL;
    ?>

    After this our processing pipeline will look like this:

    [Figure: simple pipeline]

    Now, if we notice that the thumbnail workers or the event collectors can't keep up with the rate of images we are pushing through, we can start scaling the pipeline by adding more processes on each layer. A ZeroMQ PUSH socket will automatically round-robin between all connected nodes, which makes adding more workers and event collectors simple. After adding more workers our pipeline will look like this:

    [Figure: scaled pipeline]

    Using ZeroMQ also allows us to create more flexible architectures, for example by adding forwarding devices in the middle or adding request-reply workers. So, the last thing to do is to run our pipeline and see the results:

    Let's create our test image first:

    $ convert magick:rose test.jpg
    

    From the command line, run the thumbnail script:

    $ php thumbnail.php 
    Running thumbnail worker..
    

    In a separate terminal window, run the event collector:

    $ php collector.php 
    Waiting for events..
    

    And finally, run the client to send the thumbnail request:

    $ php client.php 
    Sent request
    $
    

    If everything went according to plan, you should now see the following output in the event collector window:

    array(4) {
      [0]=>
      string(1) "1"
      [1]=>
      string(56) "/test.jpg"
      [2]=>
      string(62) "/test_50x50.jpg"
      [3]=>
      string(0) ""
    }

    Happy hacking!