Advanced search

Media (0)

Keyword: - Tags -/gis

No media matching your criteria is available on this site.

Other articles (68)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. Compare the two images below.
    To use it, simply activate the Chosen plugin (Site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • The plugin: Mutualisation management

    2 March 2010, by

    The Mutualisation management plugin makes it possible to manage the various MediaSPIP channels from a master site. Its purpose is to provide a pure SPIP solution to replace the old one.
    Basic installation
    Install the SPIP files on the server.
    Then add the "mutualisation" plugin at the root of the site, as described here.
    Customise the central mes_options.php file however you like. As an example, here is the one from the mediaspip.net platform:
    <?php (...)

  • Managing the farm

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to balance the needs of the various channels.
    To begin with, it relies on the "Mutualisation management" plugin.

On other sites (6866)

  • Working on images asynchronously

    15 December 2013, by Mikko Koppanen — Imagick, PHP stuff

    To get my quota on buzzwords for the day we are going to look at using ZeroMQ and Imagick to create a simple asynchronous image processing system. Why asynchronous? First of all, separating the image handling from interactive PHP scripts allows us to scale the image processing separately from the web heads. For example we could do the image processing on separate servers, which have SSDs attached and more memory. In this example making the images available to all worker nodes is left to the reader.

    Secondly, separating the image processing from a web script can provide a more responsive experience to the user. This doesn't necessarily mean faster, but let's say in a multiple image upload scenario this method allows the user to do something else on the site while we process the images in the background. This can be beneficial especially in cases where users upload hundreds of images at a time. To achieve a simple distributed image processing infrastructure we are going to use ZeroMQ for communicating between different components and Imagick to work on the images.

    The first part we are going to create is a simple “Worker” process skeleton. Naturally for a live environment you would like to have more error handling and possibly use pcntl for process control, but for the sake of brevity the example is barebones:

    <?php

    define('THUMBNAIL_ADDR', 'tcp://127.0.0.1:5000');
    define('COLLECTOR_ADDR', 'tcp://127.0.0.1:5001');

    class Worker {

        private $in;
        private $out;
        private $commands = array();

        public function __construct($in_addr, $out_addr)
        {
            $context = new ZMQContext();

            $this->in = new ZMQSocket($context, ZMQ::SOCKET_PULL);
            $this->in->bind($in_addr);

            $this->out = new ZMQSocket($context, ZMQ::SOCKET_PUSH);
            $this->out->connect($out_addr);
        }

        public function work() {
            while ($command = $this->in->recvMulti()) {
                if (isset($this->commands[$command[0]])) {
                    echo "Received work" . PHP_EOL;

                    $callback = $this->commands[$command[0]];

                    array_shift($command);
                    $response = call_user_func_array($callback, $command);

                    if (is_array($response))
                        $this->out->sendMulti($response);
                    else
                        $this->out->send($response);
                }
                else {
                    error_log("There is no registered worker for {$command[0]}");
                }
            }
        }

        public function register($command, $callback)
        {
            $this->commands[$command] = $callback;
        }
    }
    ?>

    The Worker class allows us to register commands with callbacks associated with them. In our case the Worker class doesn't actually care or know about the parameters being passed to the actual callback; it just blindly passes them on. We are using two separate sockets in this example, one for incoming work requests and one for passing the results onwards. This allows us to create a simple pipeline by adding more workers into the mix. For example we could first have a watermark worker, which takes the original image and composites a watermark on it, then passes the file onwards to the thumbnail worker, which in turn creates different sizes of thumbnails and passes the final results to the event collector, as sketched below.
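
    To make that chaining idea concrete, here is a minimal sketch of a watermark stage sitting in front of the thumbnail worker. This is not from the original post: the 'watermark' command name, the WATERMARK_ADDR constant and the watermark.png file are assumptions for illustration.

    <?php
    include __DIR__ . '/common.php';

    // Assumed address for this extra stage; THUMBNAIL_ADDR comes from common.php.
    define('WATERMARK_ADDR', 'tcp://127.0.0.1:4999');

    // Inbound: clients push 'watermark' jobs here. Outbound: the thumbnail worker.
    $worker = new Worker(WATERMARK_ADDR, THUMBNAIL_ADDR);

    $worker->register('watermark', function ($filename, $width, $height) {
        // Composite an assumed watermark.png over the original image in place.
        $im = new Imagick($filename);
        $mark = new Imagick(__DIR__ . '/watermark.png');
        $im->compositeImage($mark, Imagick::COMPOSITE_OVER, 10, 10);
        $im->writeImage($filename);

        // Returning an array sends a multipart message to the next stage,
        // shaped as a 'thumbnail' command that the next Worker understands.
        return array('thumbnail', $filename, $width, $height);
    });

    echo "Running watermark worker.." . PHP_EOL;
    $worker->work();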

    The next part we are going to create is a simple worker script that does the actual thumbnailing of the images:

    <?php
    include __DIR__ . '/common.php';

    // Create worker class and bind the inbound address to 'THUMBNAIL_ADDR' and connect outbound to 'COLLECTOR_ADDR'
    $worker = new Worker(THUMBNAIL_ADDR, COLLECTOR_ADDR);

    // Register our thumbnail callback, nothing special here
    $worker->register('thumbnail', function ($filename, $width, $height) {
        $info = pathinfo($filename);

        $out = sprintf("%s/%s_%dx%d.%s",
                       $info['dirname'],
                       $info['filename'],
                       $width,
                       $height,
                       $info['extension']);

        $status = 1;
        $message = '';

        try {
            $im = new Imagick($filename);
            $im->thumbnailImage($width, $height);
            $im->writeImage($out);
        }
        catch (Exception $e) {
            $status = 0;
            $message = $e->getMessage();
        }

        return array(
            'status'    => $status,
            'filename'  => $filename,
            'thumbnail' => $out,
            'message'   => $message,
        );
    });

    // Run the worker, will block
    echo "Running thumbnail worker.." . PHP_EOL;
    $worker->work();

    As you can see from the code, the thumbnail worker registers a callback for the ‘thumbnail’ command. The callback does the thumbnailing based on input and returns the status, original filename and the thumbnail filename. We have connected our Worker's “outbound” socket to the event collector, which will receive the results from the thumbnail worker and do something with them. What the “something” is depends on you. For example you could push the response into a websocket to show immediate feedback to the user or store the results into a database.
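
    For instance, a collector variant that records the results in SQLite could look like the following sketch (the database file and table schema are invented for illustration):

    <?php
    include __DIR__ . '/common.php';

    // Assumed storage: a local SQLite database accessed through PDO.
    $db = new PDO('sqlite:' . __DIR__ . '/events.db');
    $db->exec('CREATE TABLE IF NOT EXISTS thumbnails
               (status INTEGER, filename TEXT, thumbnail TEXT, message TEXT)');
    $insert = $db->prepare('INSERT INTO thumbnails VALUES (?, ?, ?, ?)');

    $socket = new ZMQSocket(new ZMQContext(), ZMQ::SOCKET_PULL);
    $socket->bind(COLLECTOR_ADDR);

    while (($message = $socket->recvMulti())) {
        // The parts arrive in the order the worker returned them:
        // status, filename, thumbnail, message.
        $insert->execute($message);
    }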

    Our example event collector will just do a var_dump on every event it receives from the thumbnailer:

    <?php
    include __DIR__ . '/common.php';

    $socket = new ZMQSocket(new ZMQContext(), ZMQ::SOCKET_PULL);
    $socket->bind(COLLECTOR_ADDR);

    echo "Waiting for events.." . PHP_EOL;
    while (($message = $socket->recvMulti())) {
        var_dump($message);
    }
    ?>

    The final piece of the puzzle is the client that pumps messages into the pipeline. The client connects to the thumbnail worker and passes on the filename and desired dimensions:

    <?php
    include __DIR__ . '/common.php';

    $socket = new ZMQSocket(new ZMQContext(), ZMQ::SOCKET_PUSH);
    $socket->connect(THUMBNAIL_ADDR);

    $socket->sendMulti(
        array(
            'thumbnail',
            realpath('./test.jpg'),
            50,
            50,
        )
    );
    echo "Sent request" . PHP_EOL;
    ?>

    After this our processing pipeline will look like this:

    [Figure: simple-pipeline]

    Now, if we notice that the thumbnail workers or the event collectors can't keep up with the rate of images we are pushing through, we can start scaling the pipeline by adding more processes on each layer. The ZeroMQ PUSH socket will automatically round-robin between all connected nodes, which makes adding more workers and event collectors simple. After adding more workers our pipeline will look like this:

    [Figure: scaling-pipeline]

    Using ZeroMQ also allows us to create more flexible architectures by adding forwarding devices in the middle, adding request-reply workers, etc. So, the last thing to do is to run our pipeline and see the results:

    Let's create our test image first:

    $ convert magick:rose test.jpg
    

    From the command line, run the thumbnail script:

    $ php thumbnail.php 
    Running thumbnail worker..
    

    In a separate terminal window run the event collector:

    $ php collector.php 
    Waiting for events..
    

    And finally run the client to send the thumbnail request:

    $ php client.php 
    Sent request
    $
    

    If everything went according to plan, you should now see the following output in the event collector window:

    array(4) {
      [0]=>
      string(1) "1"
      [1]=>
      string(56) "/test.jpg"
      [2]=>
      string(62) "/test_50x50.jpg"
      [3]=>
      string(0) ""
    }
    
    

    Happy hacking!

  • Translating Return To Ringworld

    17 August 2016, by Multimedia Mike — Game Hacking

    As indicated in my previous post, the Translator has expressed interest in applying his hobby towards another DOS adventure game from the mid-1990s: Return to Ringworld (henceforth R2RW) by Tsunami Media. This represents significantly more work than the previous outing, Phantasmagoria.


    Return to Ringworld Title Screen

    I have been largely successful thus far in crafting translation tools. I have pushed the fruits of these labors to a GitHub repository named improved-spoon (named using GitHub's random name generator because I wanted something more interesting than ‘game-hacking-tools’).

    Further, I have recorded everything I have learned about the game’s resource format (named RLB) at the XentaxWiki.

    New Challenges
    The previous project mostly involved scribbling subtitle text on an endless series of video files by leveraging a separate software library which took care of rendering fonts. In contrast, R2RW has at least 30k words of English text contained in various blocks which require translation. Further, the game encodes its own fonts (9 of them) which stubbornly refuse to be useful for rendering text in nearly any other language.

    Thus, the two immediate challenges are:

    1. Translating volumes of text to Spanish
    2. Expanding the fonts to represent Spanish characters

    Normally, “figuring out the file format data structures involved” is on the list as well. Thankfully, understanding the formats is not a huge challenge since the folks at the ScummVM project already did all the heavy lifting of reverse engineering the file formats.

    The Pitch
    Here was the plan:

    • Create a tool that can dump out the interesting data from the game’s master resource file.
    • Create a tool that can perform the elaborate file copy described in the previous post. The new file should be bit for bit compatible with the original file.
    • Modify the rewriting tool to repack some modified strings into the new resource file.
    • Unpack the fonts and figure out a way to add new characters.
    • Repack the new fonts into the resource file.
    • Repack message strings with Spanish characters.

    Showing The Work: Modifying Strings
    First, I created the tool to unpack blocks of message string resources. I elected to dump the strings to disk as JSON data since it’s easy to write and read JSON using Python, and it’s quick to check if any mistakes have crept in.
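
    As a rough sketch of that round trip (the file name and entry layout here are my assumptions, not necessarily what the improved-spoon tools write):

    import json

    # Hypothetical layout: one object per string resource, keyed by language.
    strings = [
        {"English": "Hello", "Spanish": "Hello"},
    ]

    # Dump for editing by the Translator...
    with open("strings.json", "w", encoding="utf-8") as f:
        json.dump(strings, f, ensure_ascii=False, indent=2)

    # ...and read back for repacking; mistakes show up as JSON parse errors.
    with open("strings.json", encoding="utf-8") as f:
        strings = json.load(f)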

    The next step is to find a string to focus on. So I started the game and looked for the first string I could trigger:


    Return to Ringworld: Original text

    This shows up in the JSON string dump as:

      
        "Spanish" : " !0205Your quarters on the Lance of Truth are spartan, in accord with your mercenary lifestyle.",
        "English" : " !0205Your quarters on the Lance of Truth are spartan, in accord with your mercenary lifestyle."
      ,
    

    As you can see, many of the strings are encoded with an ID key as part of the string, which should probably be left unmodified. I changed the Spanish string:

      
        "Spanish" : " !0205Hey, is this thing on ?",
        "English" : " !0205Your quarters on the Lance of Truth are spartan, in accord with your mercenary lifestyle."
      ,
    

    And then I wrote the repacking tool to substitute this message block for the original one. Look! The engine liked it!


    Return to Ringworld: Modified text

    Little steps, little steps.

    Showing The Work: Modifying Fonts
    The next little step is to find a place to put the new characters. First, a problem definition: the immediate goal is to translate the game into Spanish. The current fonts encoded in the game resource only support 128 characters, corresponding to 7-bit ASCII. In order to properly express Spanish, 16 new characters are required: á, é, í, ó, ú, ü, ñ (each in upper and lower case for a total of 14 characters) as well as the inverted punctuation symbols: ¿, ¡.

    Again, ScummVM already documents (via code) the font coding format. So I quickly determined that each of the 9 fonts comprises 128 individual bitmaps with either 1 or 2 bits per pixel. I wrote a tool to unpack each character into an individual portable grey map (PGM) image. These can be edited with graphics editors or with text editors, since they are just text files.
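
    For illustration, here is a minimal sketch of the PGM side of that. This is just the standard text ("P2") PGM format, not the actual tool; the unpacking of the 1- and 2-bit font data is assumed to have happened already, and the glyph below is made up.

    def write_pgm(path, pixels, maxval=3):
        """Write a 2-D list of grey values as a text (P2) PGM.

        With 2 bits per pixel the grey values are 0-3, hence maxval=3.
        """
        height = len(pixels)
        width = len(pixels[0])
        with open(path, "w") as f:
            f.write("P2\n%d %d\n%d\n" % (width, height, maxval))
            for row in pixels:
                f.write(" ".join(str(v) for v in row) + "\n")

    # A made-up 4x4 glyph; the resulting file is editable in any text editor.
    write_pgm("char_065.pgm", [[0, 3, 3, 0],
                               [3, 0, 0, 3],
                               [3, 3, 3, 3],
                               [3, 0, 0, 3]])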

    Where to put the 16 new Spanish characters? ASCII characters 1-31 are non-printable, so my first theory was that these characters would be empty and could be repurposed. However, after dumping and inspecting, I learned that they represent the same set of characters as seen in DOS Code Page 437. So that's a no-go (or so I assumed; I didn't check whether any existing strings leveraged those characters).

    My next plan was to hope that I could extend the font beyond index 127 and use positions 128-143. This worked superbly. This is the new example string:

      
        "Spanish" : " !0205¿Ves esto ? ¡La puntuacion se hace girar !",
        "English" : " !0205Your quarters on the Lance of Truth are spartan, in accord with your mercenary lifestyle."
      ,
    

    Fortunately, JSON understands UTF-8, and after mapping the 16 necessary characters down to the numeric range of 128-143, I repacked the new fonts and the new string:


    Return to Ringworld: Espanol
    Translation: “See this? The punctuation is rotated!”

    Another victory. Notice that there are no diacritics in this string. None are required for this translation (according to Google Translate). But adding the diacritics to the 14 characters isn’t my department. My tool does help by prepopulating [aeiounAEIOUN] into the right positions to make editing easier for the Translator. But the tool does make the effort to rotate the punctuation since that is easy to automate.
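
    The character-to-slot mapping amounts to something like the following sketch (the exact order of the 16 characters within positions 128-143 is my assumption, not necessarily the one the tool uses):

    # 14 accented letters plus the two inverted punctuation marks, mapped
    # onto the 16 font slots added after 7-bit ASCII.
    SPANISH_MAP = {ch: 128 + i for i, ch in enumerate("áéíóúüñÁÉÍÓÚÜÑ¿¡")}

    def encode_for_game(text):
        """Re-encode a UTF-8 string into the game's extended 8-bit font."""
        return bytes(SPANISH_MAP.get(ch, ord(ch)) for ch in text)

    print(encode_for_game("¿Ves esto?"))  # b'\x8eVes esto?' (0x8e == 142)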

    Next Steps and Residual Weirdness
    There is another method for storing ASCII text inside the R2RW resources, called strip resources. These store conversation scripts. There are plenty of fields in the data structures that I don't fully understand. So, following the lessons I learned from my previous translation outing, I was determined to modify as little as possible. This means copying over most of the original data structures intact, but changing the field representing the relative offset that points to the corresponding string. This works well since the strings are invariably stored NULL-terminated and concatenated together; a toy sketch follows.
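
    A toy illustration of that layout (only the string pool is shown; the copied-over strip structures themselves are not):

    def repack_strings(strings):
        """Concatenate NUL-terminated strings; return (blob, relative offsets).

        Each offset is relative to the start of the pool, which is how the
        untouched structures appear to reference their corresponding string.
        """
        blob = bytearray()
        offsets = []
        for s in strings:
            offsets.append(len(blob))
            blob += s.encode("utf-8") + b"\x00"
        return bytes(blob), offsets

    blob, offsets = repack_strings(["Hola.", "Adios."])
    print(offsets)  # [0, 6] -- only these offset fields need rewriting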

    I wanted to document for the record that the format R2RW uses has some weirdness in the way it handles residual bytes in a resource. The variant of the resource format that R2RW uses requires every block to be aligned on a 16-byte boundary. If there is space between the logical end of a resource and the start of the next resource, there are random bytes in that space. This leads me to believe that these bytes were originally recorded from stale/uninitialized memory. This frustrates me because when I wrote the initial file copy tool which unpacks and repacks each block, I wanted the new file to be identical to the original. However, these apparent nonsense bytes at the end thwart that effort.

    But leaving those bytes as 0 produces an acceptable resource file.
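
    In code terms, the repacking rule is simply this (a sketch):

    def pad_to_16(block: bytes) -> bytes:
        """Pad a resource block with zero bytes out to the next 16-byte boundary."""
        remainder = len(block) % 16
        if remainder:
            block += b"\x00" * (16 - remainder)
        return block

    assert len(pad_to_16(b"x" * 21)) == 32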

    Text On Static Images
    There is one last resource type we are working on translating. There are various bits of text that are rendered as images. For example, from the intro:


    Return to Ringworld: Static text

    It's possible to locate and extract the exact image that is overlaid on this scene, though without the colors:


    Original static text

    The palettes are stored in a separate resource type. So it seems the challenge is to figure out the palette in use for these frames and render a transparent image that uses the same palette, then repack the new text-image into the new resource file.

    The post Translating Return To Ringworld first appeared on Breaking Eggs And Making Omelettes.

  • Libav (ffmpeg) copying decoded video timestamps to encoder

    31 October 2016, by Jason C

    I am writing an application that decodes a single video stream from an input file (any codec, any container), does a bunch of image processing, and encodes the results to an output file (single video stream, Quicktime RLE, MOV). I am using ffmpeg’s libav 3.1.5 (Windows build for now, but the application will be cross-platform).

    There is a 1:1 correspondence between input and output frames and I want the frame timing in the output to be identical to the input. I am having a really, really hard time accomplishing this. So my general question is: how do I reliably (as in, in all cases of inputs) set the output frame timing identical to the input?

    It took me a very long time to slog through the API and get to the point I am at now. I put together a minimal test program to work with:

    #include <cstdio>

    extern "C" {
    #include <libavcodec></libavcodec>avcodec.h>
    #include <libavformat></libavformat>avformat.h>
    #include <libavutil></libavutil>avutil.h>
    #include <libavutil></libavutil>imgutils.h>
    #include <libswscale></libswscale>swscale.h>
    }

    using namespace std;


    struct DecoderStuff {
       AVFormatContext *formatx;
       int nstream;
       AVCodec *codec;
       AVStream *stream;
       AVCodecContext *codecx;
       AVFrame *rawframe;
       AVFrame *rgbframe;
       SwsContext *swsx;
    };


    struct EncoderStuff {
       AVFormatContext *formatx;
       AVCodec *codec;
       AVStream *stream;
       AVCodecContext *codecx;
    };


    template <typename T>
    static void dump_timebase (const char *what, const T *o) {
       if (o)
           printf("%s timebase: %d/%d\n", what, o->time_base.num, o->time_base.den);
       else
           printf("%s timebase: null object\n", what);
    }


    // reads next frame into d.rawframe and d.rgbframe. returns false on error/eof.
    static bool read_frame (DecoderStuff &d) {

       AVPacket packet;
       int err = 0, haveframe = 0;

       // read
       while (!haveframe && err >= 0 && ((err = av_read_frame(d.formatx, &packet)) >= 0)) {
          if (packet.stream_index == d.nstream) {
              err = avcodec_decode_video2(d.codecx, d.rawframe, &haveframe, &packet);
          }
          av_packet_unref(&packet);
       }

       // error output
       if (!haveframe && err != AVERROR_EOF) {
           char buf[500];
           av_strerror(err, buf, sizeof(buf) - 1);
           buf[499] = 0;
           printf("read_frame: %s\n", buf);
       }

       // convert to rgb
       if (haveframe) {
           sws_scale(d.swsx, d.rawframe->data, d.rawframe->linesize, 0, d.rawframe->height,
                     d.rgbframe->data, d.rgbframe->linesize);
       }

       return haveframe;

    }


    // writes an output frame, returns false on error.
    static bool write_frame (EncoderStuff &e, AVFrame *inframe) {

       // see note in SO post about outframe here
       AVFrame *outframe = av_frame_alloc();
       outframe->format = inframe->format;
       outframe->width = inframe->width;
       outframe->height = inframe->height;
       av_image_alloc(outframe->data, outframe->linesize, outframe->width, outframe->height,
                      AV_PIX_FMT_RGB24, 1);
       //av_frame_copy(outframe, inframe);
       static int count = 0;
       for (int n = 0; n < outframe->width * outframe->height; ++ n) {
           outframe->data[0][n*3+0] = ((n+count) % 100) ? 0 : 255;
           outframe->data[0][n*3+1] = ((n+count) % 100) ? 0 : 255;
           outframe->data[0][n*3+2] = ((n+count) % 100) ? 0 : 255;
       }
       ++ count;

       AVPacket packet;
       av_init_packet(&packet);
       packet.size = 0;
       packet.data = NULL;

       int err, havepacket = 0;
       if ((err = avcodec_encode_video2(e.codecx, &packet, outframe, &havepacket)) >= 0 && havepacket) {
           packet.stream_index = e.stream->index;
           err = av_interleaved_write_frame(e.formatx, &amp;packet);
       }

       if (err < 0) {
           char buf[500];
           av_strerror(err, buf, sizeof(buf) - 1);
           buf[499] = 0;
           printf("write_frame: %s\n", buf);
       }

       av_packet_unref(&packet);
       av_freep(&outframe->data[0]);
       av_frame_free(&amp;outframe);

       return err >= 0;

    }


    int main (int argc, char *argv[]) {

       const char *infile = "wildlife.wmv";
       const char *outfile = "test.mov";
       DecoderStuff d = {};
       EncoderStuff e = {};

       av_register_all();

       // decoder
       avformat_open_input(&d.formatx, infile, NULL, NULL);
       avformat_find_stream_info(d.formatx, NULL);
       d.nstream = av_find_best_stream(d.formatx, AVMEDIA_TYPE_VIDEO, -1, -1, &d.codec, 0);
       d.stream = d.formatx->streams[d.nstream];
       d.codecx = avcodec_alloc_context3(d.codec);
       avcodec_parameters_to_context(d.codecx, d.stream->codecpar);
       avcodec_open2(d.codecx, NULL, NULL);
       d.rawframe = av_frame_alloc();
       d.rgbframe = av_frame_alloc();
       d.rgbframe->format = AV_PIX_FMT_RGB24;
       d.rgbframe->width = d.codecx->width;
       d.rgbframe->height = d.codecx->height;
       av_frame_get_buffer(d.rgbframe, 1);
       d.swsx = sws_getContext(d.codecx->width, d.codecx->height, d.codecx->pix_fmt,
                               d.codecx->width, d.codecx->height, AV_PIX_FMT_RGB24,
                               SWS_POINT, NULL, NULL, NULL);
       //av_dump_format(d.formatx, 0, infile, 0);
       dump_timebase("in stream", d.stream);
       dump_timebase("in stream:codec", d.stream->codec); // note: deprecated
       dump_timebase("in codec", d.codecx);

       // encoder
       avformat_alloc_output_context2(&e.formatx, NULL, NULL, outfile);
       e.codec = avcodec_find_encoder(AV_CODEC_ID_QTRLE);
       e.stream = avformat_new_stream(e.formatx, e.codec);
       e.codecx = avcodec_alloc_context3(e.codec);
       e.codecx->bit_rate = 4000000; // arbitrary for qtrle
       e.codecx->width = d.codecx->width;
       e.codecx->height = d.codecx->height;
       e.codecx->gop_size = 30; // 99% sure this is arbitrary for qtrle
       e.codecx->pix_fmt = AV_PIX_FMT_RGB24;
       e.codecx->time_base = d.stream->time_base; // ???
       e.codecx->flags |= (e.formatx->oformat->flags & AVFMT_GLOBALHEADER) ? AV_CODEC_FLAG_GLOBAL_HEADER : 0;
       avcodec_open2(e.codecx, NULL, NULL);
       avcodec_parameters_from_context(e.stream->codecpar, e.codecx);
       //av_dump_format(e.formatx, 0, outfile, 1);
       dump_timebase("out stream", e.stream);
       dump_timebase("out stream:codec", e.stream->codec); // note: deprecated
       dump_timebase("out codec", e.codecx);

       // open file and write header
       avio_open(&e.formatx->pb, outfile, AVIO_FLAG_WRITE);
       avformat_write_header(e.formatx, NULL);

       // frames
       while (read_frame(d) && write_frame(e, d.rgbframe))
           ;

       // write trailer and close file
       av_write_trailer(e.formatx);
       avio_closep(&amp;e.formatx->pb);

    }

    A few notes about that:

    • Since all of my attempts at frame timing so far have failed, I've removed almost all timing-related stuff from this code to start with a clean slate.
    • Almost all error checking and cleanup is omitted for brevity.
    • The reason I allocate a new output frame with a new buffer in write_frame, rather than using inframe directly, is because this is more representative of what my real application is doing. My real app also uses RGB24 internally, hence the conversions here.
    • The reason I generate a weird pattern in outframe, rather than using e.g. av_frame_copy, is because I just wanted a test pattern that compressed well with Quicktime RLE (my test input ends up generating a 1.7GB output file otherwise).
    • The input video I am using, "wildlife.wmv", can be found here. I've hard-coded the filenames.
    • I am aware that avcodec_decode_video2 and avcodec_encode_video2 are deprecated, but I don't care. They work fine, I've already struggled too much getting my head around the latest version of the API, ffmpeg changes their API with nearly every release, and I really don't feel like dealing with avcodec_send_* and avcodec_receive_* right now.
    • I think I'm supposed to finish off by passing a NULL frame to avcodec_encode_video2 to flush some buffers or something, but I'm a bit confused about that. Unless somebody feels like explaining it, let's ignore it for now; it's a separate question. The docs are as vague about this point as they are about everything else.
    • My test input file's frame rate is 29.97.

    Now, as for my current attempts. The following timing-related fields are present in the above code, with details/confusion in bold. There are a lot of them, because the API is mind-bogglingly convoluted:

    • main: d.stream->time_base: Input video stream time base. For my test input file this is 1/1000.
    • main: d.stream->codec->time_base: Not sure what this is (I never could make sense of why AVStream has an AVCodecContext field when you always use your own new context anyway), and the codec field is also deprecated. For my test input file this is 1/1000.
    • main: d.codecx->time_base: Input codec context time base. For my test input file this is 0/1. Am I supposed to set it?
    • main: e.stream->time_base: Time base of the output stream I create. What do I set this to?
    • main: e.stream->codec->time_base: Time base of the deprecated and mysterious codec field of the output stream I create. Do I set this to anything?
    • main: e.codecx->time_base: Time base of the encoder context I create. What do I set this to?
    • read_frame: packet.dts: Decoding timestamp of the packet read.
    • read_frame: packet.pts: Presentation timestamp of the packet read.
    • read_frame: packet.duration: Duration of the packet read.
    • read_frame: d.rawframe->pts: Presentation timestamp of the raw frame decoded. This is always 0. Why isn't it read by the decoder...?
    • read_frame: d.rgbframe->pts / write_frame: inframe->pts: Presentation timestamp of the decoded frame converted to RGB. Not set to anything currently.
    • read_frame: d.rawframe->pkt_*: Fields copied from the packet, discovered after reading this post. They are set correctly but I don't know if they are useful.
    • write_frame: outframe->pts: Presentation timestamp of the frame being encoded. Should I set this to something?
    • write_frame: outframe->pkt_*: Timing fields from a packet. Should I set these? They seem to be ignored by the encoder.
    • write_frame: packet.dts: Decoding timestamp of the packet being encoded. What do I set it to?
    • write_frame: packet.pts: Presentation timestamp of the packet being encoded. What do I set it to?
    • write_frame: packet.duration: Duration of the packet being encoded. What do I set it to?

    I have tried the following, with the described results. Note that inframe is d.rgbframe:

    1.
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.codecx->time_base
      • Set d.rgbframe->pts = packet.dts in read_frame
      • Set outframe->pts = inframe->pts in write_frame
      • Result: Warning that the encoder time base is not set (since d.codecx->time_base was 0/1), then a seg fault.
    2.
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.stream->time_base
      • Set d.rgbframe->pts = packet.dts in read_frame
      • Set outframe->pts = inframe->pts in write_frame
      • Result: No warnings, but VLC reports the frame rate as 480.048 (no idea where this number came from) and the file plays too fast. Also the encoder sets all the timing fields in packet to 0, which was not what I expected. (Edit: Turns out this is because av_interleaved_write_frame, unlike av_write_frame, takes ownership of the packet and swaps it with a blank one, and I was printing the values after that call. So they are not ignored.)
    3.
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.stream->time_base
      • Set d.rgbframe->pts = packet.dts in read_frame
      • Set any of pts/dts/duration in packet in write_frame to anything.
      • Result: Warnings about packet timestamps not being set. The encoder seems to reset all packet timing fields to 0, so none of this has any effect.
    4.
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.stream->time_base
      • I found the fields pkt_pts, pkt_dts, and pkt_duration in AVFrame after reading this post, so I tried copying those all the way through to outframe.
      • Result: Really had my hopes up, but it ended with the same results as attempt 3 (packet timestamp not set warning, incorrect results).
    I tried various other hand-wavy permutations of the above and nothing worked. What I want to do is create an output file that plays back with the same timing and frame rate as the input (29.97 constant frame rate in this case).

    So how do I do this? Of the zillions of timing-related fields here, what do I do to make the output the same as the input? And how do I do it in such a way that it handles arbitrary video input formats, which may store their timestamps and time bases in different places? I need this to always work.
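
    For completeness, here is the clearest distilled statement of the approach I keep circling around, written out in isolation against my variable names from above (a sketch only; I am not claiming it is correct):

    // Setup: give the encoder a time base derived from the input stream.
    e.codecx->time_base = d.stream->time_base;   // e.g. 1/1000 for my file
    e.stream->time_base = d.stream->time_base;   // the muxer may overwrite this

    // Decode side: carry a usable timestamp onto the decoded frame
    // (avcodec_decode_video2 fills the frame's pkt_pts/pkt_dts from the packet).
    d.rgbframe->pts = av_frame_get_best_effort_timestamp(d.rawframe);

    // Encode side: stamp the outgoing frame in the encoder's time base.
    outframe->pts = inframe->pts;

    // Mux side: after avcodec_encode_video2 produces a packet, rescale its
    // timestamps from the encoder time base to whatever time base the muxer
    // actually gave the stream, then write.
    av_packet_rescale_ts(&packet, e.codecx->time_base, e.stream->time_base);
    packet.stream_index = e.stream->index;
    av_interleaved_write_frame(e.formatx, &packet);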


    For reference, here is a table of all the packet and frame timestamps read from the video stream of my test input file, to give a sense of what my test file looks like. None of the input packets' pts values are set, same with frame pts, and for some reason the duration of the first 108 frames is 0. VLC plays the file fine and reports the frame rate as 29.9700089: