
Media (0)


No media matching your criteria is available on the site.

Other articles (112)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (14628)

  • Reverse Engineering Italian Literature

    1 July 2014, by Multimedia Mike — Reverse Engineering

    Some time ago, Diego “Flameeyes” Pettenò tried his hand at reverse engineering a set of really old CD-ROMs containing even older Italian literature. The goal of this RE endeavor would be to extract the useful literature along with any structural metadata (chapters, etc.) and convert it to a more open format suitable for publication at, e.g., Project Gutenberg or Archive.org.

    Unfortunately, the structure of the data thwarted the more simplistic analysis attempts (like inspecting for blocks of textual data). This will require deeper RE techniques. Further frustrating the effort, however, is the fact that the binaries that implement the reading program are written for the now-archaic Windows 3.1 operating system.

    In pursuit of this RE goal, I recently thought of a way to glean more intelligence using DOSBox.

    Prior Work
    There are 6 discs in the full set (distributed along with 6 sequential issues of a print magazine named L’Espresso). Analysis of the contents of the various discs reveals that many of the files are the same on each disc. It was straightforward to identify the set of files which are unique on each disc. This set of files all end with the extension “LZn”, where n = 1..6 depending on the disc number. Further, the root directory of each disc has a file indicating the sequence number (1..6) of the CD. Obviously, these are the interesting targets.
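
    As a rough illustration of that first step, here is a minimal sketch of how one might identify the files unique to each disc. It assumes the disc contents have been copied into local directories; the names disc1/ and disc2/ are hypothetical, and the original comparison may well have been done differently.

    // Sketch: list the files present on one disc but not another.
    #include <filesystem>
    #include <iostream>
    #include <set>
    #include <string>

    static std::set<std::string> listFiles(const std::filesystem::path& root)
    {
        std::set<std::string> names;
        for (const auto& entry : std::filesystem::recursive_directory_iterator(root))
            if (entry.is_regular_file())
                names.insert(std::filesystem::relative(entry.path(), root).string());
        return names;
    }

    int main()
    {
        const auto disc1 = listFiles("disc1");   // hypothetical local copy of CD 1
        const auto disc2 = listFiles("disc2");   // hypothetical local copy of CD 2

        std::cout << "Unique to disc1:" << std::endl;
        for (const auto& name : disc1)
            if (disc2.count(name) == 0)
                std::cout << "  " << name << std::endl;   // expect the *.LZ1 files here
        return 0;
    }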

    The LZ file extensions stand out to an individual skilled in the art of compression – could it be a variation of the venerable LZ compression? That’s actually unlikely because LZ — also seen as LIZ — stands for Letteratura Italiana Zanichelli (Zanichelli’s Italian Literature).

    The Unix ‘file’ command was of limited utility, unable to plausibly identify any of the files.

    Progress was stalled.

    Saying Hello To An Old Frenemy
    I have been showing this screenshot to younger coworkers to see if any of them recognize it:


    DOSBox running Windows 3.1

    Not a single one has seen it before. Senior computer citizen status: Confirmed.

    I recently watched an Ancient DOS Games video about Windows 3.1 games. This episode showed Windows 3.1 running under DOSBox. I had heard this was possible but that it took a little work to get running. I had a hunch that someone else had probably already done the hard stuff, so I took to the BitTorrent networks and quickly found a download that had the goods ready to go – a directory of Windows 3.1 files that just had to be dropped into a DOSBox directory and they would be ready to run.

    Aside: Running OS software procured from a BitTorrent network? Isn’t that an insane security nightmare? I’m not too worried since it effectively runs under a sandboxed virtual machine, courtesy of DOSBox. I suppose there’s the risk of trojan’d OS software infecting binaries that eventually leave the sandbox.

    Using DOSBox Like ‘strace’
    strace is a tool available on some Unix systems, including Linux, which is able to monitor the system calls that a program makes. In reverse engineering contexts, it can be useful to monitor an opaque binary program to see the names of the files it opens, how many bytes it reads, and from which locations. I have written examples of this before (wow, almost 10 years ago to the day; now I feel old for the second time in this post).

    Here’s the pitch: make DOSBox perform as strace in order to serve as a platform for reverse engineering Windows 3.1 applications. I formed a mental model of how DOSBox operates — abstracted file system classes with methods for opening and reading files — and then jumped into the source code. Sure enough, the code was exactly as I suspected, and a few strategic print statements gave me the data I was looking for.
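
    For illustration only, here is a standalone sketch of the kind of logging involved. The class and method names below are invented for this example and do not match DOSBox’s actual source; the idea is simply to wrap the read path and print the file name, requested size, bytes read, and position.

    // Standalone illustration of the instrumentation idea: wrap file reads and
    // log them in an strace-like format. Not DOSBox code; names are made up.
    #include <cstdint>
    #include <cstdio>
    #include <string>

    class TracedFile {
    public:
        explicit TracedFile(const std::string& name)
            : name_(name), fp_(std::fopen(name.c_str(), "rb")) {}
        ~TracedFile() { if (fp_) std::fclose(fp_); }

        size_t Read(uint8_t* buf, size_t requested) {
            if (!fp_) return 0;
            long pos = std::ftell(fp_);
            size_t got = std::fread(buf, 1, requested, fp_);
            // One log line per read call, in the spirit of strace.
            std::printf("=== Read('%s') : req %zu bytes ; read %zu bytes from pos 0x%lX\n",
                        name_.c_str(), requested, got, pos);
            return got;
        }

    private:
        std::string name_;
        std::FILE* fp_;
    };

    int main() {
        uint8_t buf[4096];
        TracedFile f("EXAMPLE.BIN");   // hypothetical file, just to exercise the logging
        while (f.Read(buf, sizeof(buf)) > 0) { }
        return 0;
    }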

    Eventually, I even took to running DOSBox under the GNU Debugger (GDB). This hasn’t proven especially useful yet, but it has led to an absurd level of nesting:


    GDB runs DOSBox runs Windows 3.1

    The target application runs under Windows 3.1, which is running under DOSBox, which is running under GDB. This led to a crazy situation in which DOSBox had the mouse focus when a GDB breakpoint was triggered. At this point, DOSBox had all desktop input focus and couldn’t surrender it because it wasn’t running. I had no way to interact with the Linux desktop and had to reboot the computer. The next time, I took care to only use the keyboard to navigate the application and trigger the breakpoint and not allow DOSBox to consume the mouse focus.

    New Intelligence

    By instrumenting the local file class (virtual HD files) and the ISO file class (CD-ROM files), I was able to watch which programs and dynamic libraries are loaded and which data files the code cares about. I was able to narrow down the fact that the most interesting programs are called LEGGENDO.EXE (‘reading’) and LEGGENDA.EXE (‘legend’; this has been a great Italian lesson as well as an RE puzzle). The former calls the latter, which displays this view of the data we are trying to get at:


    LIZ: Authors index

    When first run, the program takes an interest in a file called DBBIBLIO (‘database library’, I suspect):

    === Read(’LIZ98\DBBIBLIO.LZ1’) : req 337 bytes ; read 337 bytes from pos 0x0
    === Read(’LIZ98\DBBIBLIO.LZ1’) : req 337 bytes ; read 337 bytes from pos 0x151
    === Read(’LIZ98\DBBIBLIO.LZ1’) : req 337 bytes ; read 337 bytes from pos 0x2A2
    [...]
    

    While we were unable to sort out all of the data files in our cursory investigation, a few things were obvious. The structure of this file looked to contain 336-byte records. Turns out I was off by 1 – the records are actually 337 bytes each. The count of records read from disc is equal to the number of items shown in the UI.
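
    As a quick sanity check, counting the 337-byte records is trivial. Here is a minimal sketch; the internal field layout of each record is still unknown, so this only counts records and dumps a few leading bytes of each, and it assumes the file has been copied off the disc locally.

    // Sketch: count the 337-byte records in DBBIBLIO.LZ1 and peek at each one.
    // The 337-byte size comes from the observed reads above.
    #include <cstdint>
    #include <cstdio>

    int main() {
        const size_t kRecordSize = 337;
        std::FILE* fp = std::fopen("DBBIBLIO.LZ1", "rb");   // assumes a local copy of the file
        if (!fp) { std::perror("fopen"); return 1; }

        uint8_t record[kRecordSize];
        size_t count = 0;
        while (std::fread(record, 1, kRecordSize, fp) == kRecordSize) {
            ++count;
            std::printf("record %4zu:", count);
            for (int i = 0; i < 8; ++i)
                std::printf(" %02X", record[i]);   // first few bytes, for eyeballing
            std::printf("\n");
        }
        std::fclose(fp);
        std::printf("%zu records of %zu bytes\n", count, kRecordSize);
        return 0;
    }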

    Next, the program is interested in a few more files:

    *** isoFile() : ’DEPOSITO\BLOKCTC.LZ1’, offset 0x27D6000, 2911488 bytes large
    === Read(’DEPOSITO\BLOKCTC.LZ1’) : req 96 bytes ; read 96 bytes from pos 0x0
    *** isoFile() : ’DEPOSITO\BLOKCTX0.LZ1’, offset 0x2A9D000, 17152 bytes large
    === Read(’DEPOSITO\BLOKCTX0.LZ1’) : req 128 bytes ; read 128 bytes from pos 0x0
    === Seek(’DEPOSITO\BLOKCTX0.LZ1’) : seek 384 (0x180) bytes, type 0
    === Read(’DEPOSITO\BLOKCTX0.LZ1’) : req 256 bytes ; read 256 bytes from pos 0x180
    === Seek(’DEPOSITO\BLOKCTC.LZ1’) : seek 1152 (0x480) bytes, type 0
    === Read(’DEPOSITO\BLOKCTC.LZ1’) : req 32 bytes ; read 32 bytes from pos 0x480
    === Read(’DEPOSITO\BLOKCTC.LZ1’) : req 1504 bytes ; read 1504 bytes from pos 0x4A0
    [...]

    Eventually, it becomes obvious that BLOKCTC has the juicy meat: there are 32-byte records followed by variable-length encoded text sections. Since there is no plain text to be found in these files, the text is either compressed, encrypted, or both. Some rough counting (the program seems to disable copy/paste, which thwarts more precise counting) indicates that the text size is larger than the data chunks being read from disc, so compression seems likely. Encryption isn’t out of the question (especially since the program deems it necessary to disable copying and pasting of this public domain literary data), and if it’s in use, that means the key is being read from one of these files.
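
    One cheap way to back up the compression guess (not something described above, just a suggestion) is to measure the byte entropy of the file: readable Latin-1 text sits well below 8 bits/byte, while compressed or encrypted data comes out close to 8. A minimal sketch, again assuming a local copy of the file:

    // Sketch: Shannon entropy of a file, in bits per byte. Values near 8.0
    // suggest compressed or encrypted data; readable text scores much lower.
    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    int main(int argc, char** argv) {
        const char* path = (argc > 1) ? argv[1] : "BLOKCTC.LZ1";
        std::FILE* fp = std::fopen(path, "rb");
        if (!fp) { std::perror("fopen"); return 1; }

        uint64_t counts[256] = {0};
        uint64_t total = 0;
        uint8_t buf[4096];
        size_t n;
        while ((n = std::fread(buf, 1, sizeof(buf), fp)) > 0) {
            for (size_t i = 0; i < n; ++i) counts[buf[i]]++;
            total += n;
        }
        std::fclose(fp);
        if (total == 0) { std::printf("%s is empty\n", path); return 1; }

        double entropy = 0.0;
        for (int i = 0; i < 256; ++i) {
            if (counts[i] == 0) continue;
            double p = static_cast<double>(counts[i]) / static_cast<double>(total);
            entropy -= p * std::log2(p);
        }
        std::printf("%s: %.3f bits/byte over %llu bytes\n",
                    path, entropy, (unsigned long long)total);
        return 0;
    }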

    Blocked On Disassembly
    So I’m a bit blocked right now. I know exactly where the data lives, but it’s clear that I need to reverse engineer some binary code. The big problem is that I have no idea how to disassemble Windows 3.1 binaries. These are NE-type executable files. Disassemblers abound for MZ files (MS-DOS executables) and PE files (executables for Windows 95 and beyond), but NE files get no respect. It’s difficult (but not impossible) to even find data about the format anymore, and the details are incomplete. It should be noted, however, that the DOSBox-as-strace method described here lends insight into how Windows 3.1 processes NE-type EXEs. You can’t get any more authoritative than that.
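
    For what it’s worth, NE files are at least easy to recognise programmatically: like later PE files, they begin with an MZ stub whose 32-bit value at offset 0x3C points to the new-style header, and for Windows 3.x executables that header starts with the two bytes ‘NE’. A minimal check (a sketch only, nowhere near a disassembler):

    // Sketch: recognise a Windows 3.x NE executable by following the offset
    // stored at 0x3C in the MZ stub and checking for the 'NE' signature.
    #include <cstdint>
    #include <cstdio>

    int main(int argc, char** argv) {
        const char* path = (argc > 1) ? argv[1] : "LEGGENDO.EXE";
        std::FILE* fp = std::fopen(path, "rb");
        if (!fp) { std::perror("fopen"); return 1; }

        uint8_t mz[2];
        if (std::fread(mz, 1, 2, fp) != 2 || mz[0] != 'M' || mz[1] != 'Z') {
            std::printf("%s: not an MZ executable\n", path);
            std::fclose(fp);
            return 1;
        }

        // The offset of the "new" (segmented) header lives at 0x3C in the MZ stub.
        uint8_t off[4] = {0, 0, 0, 0};
        std::fseek(fp, 0x3C, SEEK_SET);
        if (std::fread(off, 1, 4, fp) != 4) { std::fclose(fp); return 1; }
        uint32_t newHeader = off[0] | (off[1] << 8) | (off[2] << 16)
                           | (static_cast<uint32_t>(off[3]) << 24);

        uint8_t sig[2] = {0, 0};
        std::fseek(fp, static_cast<long>(newHeader), SEEK_SET);
        std::fread(sig, 1, 2, fp);
        std::fclose(fp);

        if (sig[0] == 'N' && sig[1] == 'E')
            std::printf("%s: NE (Windows 3.x New Executable)\n", path);
        else
            std::printf("%s: new header signature is 0x%02X 0x%02X\n", path, sig[0], sig[1]);
        return 0;
    }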

    So far, I have tried the freeware version of IDA Pro. Unfortunately, I haven’t been able to get the program to work on my Windows machine for a long time. Even if I could, I can’t find any evidence that it actually supports NE files (the free version specifically mentions MZ and PE, but does not mention NE or LE).

    I found an old copy of Borland’s beloved Turbo Assembler and Debugger package. It has Turbo Debugger for Windows, both regular and 32-bit versions. Unfortunately, the normal version just hangs Windows 3.1 in DOSBox. The 32-bit Turbo Debugger loads just fine but can’t load the NE file.

    I’ve also wondered if DOSBox contains any advanced features for trapping program execution and disassembling. I haven’t looked too deeply into this yet.

    Future Work
    NE files seem to be the executable format that time forgot. I have a crazy brainstorm about repacking NE files as MZ executables so that they could be taken apart with an MZ disassembler. But this will take some experimenting.

    If anyone else has any ideas about ripping open these binaries, I would appreciate hearing them.

    And I guess I shouldn’t be too surprised to learn that all the literature in this corpus is already freely available and easily downloadable anyway. But you shouldn’t be too surprised if that doesn’t discourage me from trying to crack the format that’s keeping this particular copy of the data locked up.

  • FFmpeg C++ api decode h264 error

    29 May 2015, by arms

    I’m trying to use the C++ API of FFMpeg (version 20150526) under Windows using the prebuilt binaries to decode an h264 video file (*.ts).

    I’ve written a very simple code that automatically detects the required codec from the file itself (and it is AV_CODEC_ID_H264, as expected).

    Then I re-open the video file in read-binary mode, read a fixed-size buffer of bytes from it, and provide the bytes to the decoder within a while-loop until the end of file. However, when I call the function avcodec_decode_video2, a large number of errors occur, like the following ones:

    [h264 @ 008df020] top block unavailable for requested intra mode at 34 0

    [h264 @ 008df020] error while decoding MB 34 0, bytestream 3152

    [h264 @ 008df020] decode_slice_header error

    Sometimes the function avcodec_decode_video2 sets the value of got_picture_ptr to 1, and hence I expect to find a good frame. Instead, although all the computations are successful, when I view the decoded frame (using OpenCV only for visualization purposes) I see a gray image with some artifacts.

    If I employ the same code to decode an *.avi file it works fine.

    Reading the examples of FFMpeg I did not find a solution to my problem. I’ve also implemented the solution proposed in the similar question FFmpeg c++ H264 decoding error, but it did not work.

    Does anyone know where the error is?

    Thank you in advance for any reply!

    The code is the following [EDIT: code updated to include the parser management]:

    #include <iostream>
    #include <iomanip>
    #include <string>
    #include <sstream>

    #include <opencv2/opencv.hpp>

    #ifdef __cplusplus
    extern "C"
    {
    #endif // __cplusplus
    #include <libavcodec/avcodec.h>
    #include <libavdevice/avdevice.h>
    #include <libavfilter/avfilter.h>
    #include <libavformat/avformat.h>
    #include <libavformat/avio.h>
    #include <libavutil/avutil.h>
    #include <libpostproc/postprocess.h>
    #include <libswresample/swresample.h>
    #include <libswscale/swscale.h>
    #ifdef __cplusplus
    } // end extern "C".
    #endif // __cplusplus

    #define INBUF_SIZE  4096

    void main()
    {
       AVCodec*            l_pCodec;
       AVCodecContext*     l_pAVCodecContext;
       SwsContext*         l_pSWSContext;
       AVFormatContext*    l_pAVFormatContext;
       AVFrame*            l_pAVFrame;
       AVFrame*            l_pAVFrameBGR;
       AVPacket            l_AVPacket;
       AVPacket            l_AVPacket_out;
       AVStream*           l_pStream;
       AVCodecParserContext*   l_pParser;
       FILE*               l_pFile_in;
       FILE*               l_pFile_out;
       std::string         l_sFile;
       int                 l_iResult;
       int                 l_iFrameCount;
       int                 l_iGotFrame;
       int                 l_iBufLength;
       int                 l_iParsedBytes;
       int                 l_iPts;
       int                 l_iDts;
       int                 l_iPos;
       int                 l_iSize;
       int                 l_iDecodedBytes;
       uint8_t             l_auiInBuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
       uint8_t*            l_pData;
       cv::Mat             l_cvmImage;

       l_pCodec = NULL;
       l_pAVCodecContext = NULL;
       l_pSWSContext = NULL;
       l_pAVFormatContext = NULL;
       l_pAVFrame = NULL;
       l_pAVFrameBGR = NULL;
       l_pParser = NULL;
       l_pStream = NULL;
       l_pFile_in = NULL;
       l_pFile_out = NULL;
       l_iPts = 0;
       l_iDts = 0;
       l_iPos = 0;
       l_pData = NULL;

       l_sFile = "myvideo.ts";

       avdevice_register_all();
       avfilter_register_all();
       avcodec_register_all();
       av_register_all();
       avformat_network_init();

       l_pAVFormatContext = avformat_alloc_context();

       l_iResult = avformat_open_input(&l_pAVFormatContext,
                                       l_sFile.c_str(),
                                       NULL,
                                       NULL);

       if (l_iResult >= 0)
       {
           l_iResult = avformat_find_stream_info(l_pAVFormatContext, NULL);

           if (l_iResult >= 0)
           {
               for (int i = 0; i < l_pAVFormatContext->nb_streams; i++)
               {
                   if (l_pAVFormatContext->streams[i]->codec->codec_type ==
                           AVMEDIA_TYPE_VIDEO)
                   {
                       l_pCodec = avcodec_find_decoder(
                                   l_pAVFormatContext->streams[i]->codec->codec_id);

                       l_pStream = l_pAVFormatContext->streams[i];
                   }
               }
           }
       }

       av_init_packet(&l_AVPacket);
       av_init_packet(&l_AVPacket_out);

       memset(l_auiInBuf + INBUF_SIZE, 0, FF_INPUT_BUFFER_PADDING_SIZE);

       if (l_pCodec)
       {
           l_pAVCodecContext = avcodec_alloc_context3(l_pCodec);

           l_pParser = av_parser_init(l_pAVCodecContext->codec_id);

           if (l_pParser)
           {
               av_register_codec_parser(l_pParser->parser);
           }

           if (l_pAVCodecContext)
           {
               if (l_pCodec->capabilities & CODEC_CAP_TRUNCATED)
               {
                   l_pAVCodecContext->flags |= CODEC_FLAG_TRUNCATED;
               }

               l_iResult = avcodec_open2(l_pAVCodecContext, l_pCodec, NULL);

               if (l_iResult >= 0)
               {
                   l_pFile_in = fopen(l_sFile.c_str(), "rb");

                   if (l_pFile_in)
                   {
                       l_pAVFrame = av_frame_alloc();
                       l_pAVFrameBGR = av_frame_alloc();

                       if (l_pAVFrame)
                       {
                           l_iFrameCount = 0;

                           avcodec_get_frame_defaults(l_pAVFrame);

                           while (1)
                           {
                               l_iBufLength = fread(l_auiInBuf,
                                                    1,
                                                    INBUF_SIZE,
                                                    l_pFile_in);

                               if (l_iBufLength == 0)
                               {
                                   break;
                               }
                               else
                               {
                                   l_pData = l_auiInBuf;
                                   l_iSize = l_iBufLength;

                                   while (l_iSize > 0)
                                   {
                                       if (l_pParser)
                                       {
                                            l_iParsedBytes = av_parser_parse2(
                                                        l_pParser,
                                                        l_pAVCodecContext,
                                                        &l_AVPacket_out.data,
                                                        &l_AVPacket_out.size,
                                                        l_pData,
                                                        l_iSize,
                                                        l_AVPacket.pts,
                                                        l_AVPacket.dts,
                                                        AV_NOPTS_VALUE);

                                            if (l_iParsedBytes <= 0)
                                           {
                                               break;
                                           }

                                           l_AVPacket.pts = l_AVPacket.dts = AV_NOPTS_VALUE;
                                           l_AVPacket.pos = -1;
                                       }
                                       else
                                       {
                                           l_AVPacket_out.data = l_pData;
                                           l_AVPacket_out.size = l_iSize;
                                       }

                                        l_iDecodedBytes =
                                                avcodec_decode_video2(
                                                    l_pAVCodecContext,
                                                    l_pAVFrame,
                                                    &l_iGotFrame,
                                                    &l_AVPacket_out);

                                       if (l_iDecodedBytes >= 0)
                                       {
                                           if (l_iGotFrame)
                                           {
                                               l_pSWSContext = sws_getContext(
                                                           l_pAVCodecContext->width,
                                                           l_pAVCodecContext->height,
                                                           l_pAVCodecContext->pix_fmt,
                                                           l_pAVCodecContext->width,
                                                           l_pAVCodecContext->height,
                                                           AV_PIX_FMT_BGR24,
                                                           SWS_BICUBIC,
                                                           NULL,
                                                           NULL,
                                                           NULL);

                                               if (l_pSWSContext)
                                               {
                                                    l_iResult = avpicture_alloc(
                                                                reinterpret_cast<AVPicture*>(l_pAVFrameBGR),
                                                                AV_PIX_FMT_BGR24,
                                                                l_pAVFrame->width,
                                                                l_pAVFrame->height);

                                                   l_iResult = sws_scale(
                                                               l_pSWSContext,
                                                               l_pAVFrame->data,
                                                               l_pAVFrame->linesize,
                                                               0,
                                                               l_pAVCodecContext->height,
                                                               l_pAVFrameBGR->data,
                                                               l_pAVFrameBGR->linesize);

                                                   if (l_iResult > 0)
                                                   {
                                                       l_cvmImage = cv::Mat(
                                                                   l_pAVFrame->height,
                                                                   l_pAVFrame->width,
                                                                   CV_8UC3,
                                                                   l_pAVFrameBGR->data[0],
                                                               l_pAVFrameBGR->linesize[0]);

                                                       if (l_cvmImage.empty() == false)
                                                       {
                                                           cv::imshow("image", l_cvmImage);
                                                           cv::waitKey(10);
                                                       }
                                                   }
                                               }

                                               l_iFrameCount++;
                                           }
                                       }
                                       else
                                       {
                                           break;
                                       }

                                       l_pData += l_iParsedBytes;
                                       l_iSize -= l_iParsedBytes;
                                   }
                               }

                           } // end while(1).
                       }

                       fclose(l_pFile_in);
                   }
               }
           }
       }
    }

    EDIT: The following is the final code that solves my problem, thanks to the suggestions of Ronald.

    #include <iostream>
    #include <iomanip>
    #include <string>
    #include <sstream>

    #include <opencv2/opencv.hpp>

    #ifdef __cplusplus
    extern "C"
    {
    #endif // __cplusplus
    #include <libavcodec/avcodec.h>
    #include <libavdevice/avdevice.h>
    #include <libavfilter/avfilter.h>
    #include <libavformat/avformat.h>
    #include <libavformat/avio.h>
    #include <libavutil/avutil.h>
    #include <libpostproc/postprocess.h>
    #include <libswresample/swresample.h>
    #include <libswscale/swscale.h>
    #ifdef __cplusplus
    } // end extern "C".
    #endif // __cplusplus

    void main()
    {
       AVCodec*            l_pCodec;
       AVCodecContext*     l_pAVCodecContext;
       SwsContext*         l_pSWSContext;
       AVFormatContext*    l_pAVFormatContext;
       AVFrame*            l_pAVFrame;
       AVFrame*            l_pAVFrameBGR;
       AVPacket            l_AVPacket;
       std::string         l_sFile;
       uint8_t*            l_puiBuffer;
       int                 l_iResult;
       int                 l_iFrameCount;
       int                 l_iGotFrame;
       int                 l_iDecodedBytes;
       int                 l_iVideoStreamIdx;
       int                 l_iNumBytes;
       cv::Mat             l_cvmImage;

       l_pCodec = NULL;
       l_pAVCodecContext = NULL;
       l_pSWSContext = NULL;
       l_pAVFormatContext = NULL;
       l_pAVFrame = NULL;
       l_pAVFrameBGR = NULL;
       l_puiBuffer = NULL;

       l_sFile = "myvideo.ts";

       av_register_all();

       l_iResult = avformat_open_input(&l_pAVFormatContext,
                                       l_sFile.c_str(),
                                       NULL,
                                       NULL);

       if (l_iResult >= 0)
       {
           l_iResult = avformat_find_stream_info(l_pAVFormatContext, NULL);

           if (l_iResult >= 0)
           {
               for (int i = 0; i < l_pAVFormatContext->nb_streams; i++)
               {
                   if (l_pAVFormatContext->streams[i]->codec->codec_type ==
                           AVMEDIA_TYPE_VIDEO)
                   {
                       l_iVideoStreamIdx = i;

                       l_pAVCodecContext =
                               l_pAVFormatContext->streams[l_iVideoStreamIdx]->codec;

                       if (l_pAVCodecContext)
                       {
                           l_pCodec = avcodec_find_decoder(l_pAVCodecContext->codec_id);
                       }

                       break;
                   }
               }
           }
       }

       if (l_pCodec && l_pAVCodecContext)
       {
           l_iResult = avcodec_open2(l_pAVCodecContext, l_pCodec, NULL);

           if (l_iResult >= 0)
           {
               l_pAVFrame = av_frame_alloc();
               l_pAVFrameBGR = av_frame_alloc();

               l_iNumBytes = avpicture_get_size(PIX_FMT_BGR24,
                                                l_pAVCodecContext->width,
                                                l_pAVCodecContext->height);

               l_puiBuffer = (uint8_t *)av_malloc(l_iNumBytes*sizeof(uint8_t));

               avpicture_fill((AVPicture *)l_pAVFrameBGR,
                              l_puiBuffer,
                              PIX_FMT_RGB24,
                              l_pAVCodecContext->width,
                              l_pAVCodecContext->height);

               l_pSWSContext = sws_getContext(
                           l_pAVCodecContext->width,
                           l_pAVCodecContext->height,
                           l_pAVCodecContext->pix_fmt,
                           l_pAVCodecContext->width,
                           l_pAVCodecContext->height,
                           AV_PIX_FMT_BGR24,
                           SWS_BICUBIC,
                           NULL,
                           NULL,
                           NULL);

               while (av_read_frame(l_pAVFormatContext, &l_AVPacket) >= 0)
               {
                   if (l_AVPacket.stream_index == l_iVideoStreamIdx)
                   {
                        l_iDecodedBytes = avcodec_decode_video2(
                                    l_pAVCodecContext,
                                    l_pAVFrame,
                                    &l_iGotFrame,
                                    &l_AVPacket);

                       if (l_iGotFrame)
                       {
                           if (l_pSWSContext)
                           {
                               l_iResult = sws_scale(
                                           l_pSWSContext,
                                           l_pAVFrame->data,
                                           l_pAVFrame->linesize,
                                           0,
                                           l_pAVCodecContext->height,
                                           l_pAVFrameBGR->data,
                                           l_pAVFrameBGR->linesize);

                               if (l_iResult > 0)
                               {
                                   l_cvmImage = cv::Mat(
                                               l_pAVFrame->height,
                                               l_pAVFrame->width,
                                               CV_8UC3,
                                               l_pAVFrameBGR->data[0],
                                           l_pAVFrameBGR->linesize[0]);

                                   if (l_cvmImage.empty() == false)
                                   {
                                       cv::imshow("image", l_cvmImage);
                                       cv::waitKey(1);
                                   }
                               }
                           }

                           l_iFrameCount++;
                       }
                   }
               }
           }
       }
    }
  • My journey to Coviu

    27 October 2015, by silvia

    My new startup just released our MVP – this is the story of what got me here.

    I love creating new applications that let people do their work better or in a manner that wasn’t possible before.

    German building and loan society

    My first such passion was as a student intern, when I built a system for a building and loan association’s monthly customer magazine. The group I worked with was managing their advertiser contacts through a set of paper cards, and I wrote a dBase-based system (yes, that long ago) that would manage their customer relationships. They loved it – until it got replaced by an SAP system that cost 100 times what I cost them, had really poor UX, and only gave them half the functionality. It was a corporate system with ongoing support, which made all the difference to them.

    Dr Scholz und Partner GmbH

    The story repeated itself with a CRM for my uncle’s construction company, and with a resume and quotation management system for Accenture right after Uni, both of which I left behind when I decided to go into research.

    Even as a PhD student, I never lost sight of the challenges people were facing and wanted to develop technology to overcome them. The aim of my PhD thesis was to prepare for the oncoming onslaught of audio and video on the Internet (yes, this was 1994!) by developing algorithms to automatically extract and locate information in such files, which would enable users to structure, index and search such content.

    Many of the use cases that we explored are now part of products or continue to be challenges : finding music that matches your preferences, identifying music or video pieces e.g. to count ads on the radio or to mark copyright infringement, or the automated creation of video summaries such as trailers.

    CSIRO

    This continued when I joined the CSIRO in Australia – I was working on segmenting speech into words or talk spurts since that would simplify captioning & subtitling, and on MPEG-7 which was a (slightly over-engineered) standard to structure metadata about audio and video.

    In 2001 I had the idea of replicating the Web for videos: i.e. creating hyperlinked and searchable video-only experiences. We called it “Annodex” for annotated and indexed video, and it needed full-screen hyperlinked video in browsers – man, were we ahead of our time! It was my first step into standards, got several IETF RFCs to my name, and started my involvement with open codecs through Xiph.

    Vquence logo

    Around the time that YouTube was founded in 2006, I founded Vquence – originally a video search company for the Web, which later pivoted to a video metadata mining company. Vquence still exists and continues to sell its data to channel partners, but it lacks the user impact that has always driven my work.

    As the video element started being developed for HTML5, I had to get involved. I contributed many use cases to the W3C, became a co-editor of the HTML5 spec and focused on video captioning with WebVTT while contracting to Mozilla and later to Google. We made huge progress and today the technology exists to publish video on the Web with captions, making the Web more inclusive for everybody. I contributed code to YouTube and Google Chrome, but was keen to make a bigger impact again.

    NICTA logo

    The opportunity came when a couple of former CSIRO colleagues who now worked for NICTA approached me to get me interested in addressing new use cases for video conferencing in the context of WebRTC. We worked on a kiosk-style solution to service delivery for large service organisations, particularly targeting government. The emerging WebRTC standard posed many technical challenges that we addressed by building rtc.io, by contributing to the standards, and by registering bugs on the browsers.

    Fast-forward through the development of a few further custom solutions for customers in health and education and we are starting to see patterns of need emerge. The core learning that we’ve come away with is that to get things done, you have to go beyond “talking heads” in a video call. It’s not just about seeing the other person, but much more about having a shared view of the things that need to be worked on and a shared way of interacting with them. Also, we learnt that the things that are being worked on are quite varied and may include multiple input cameras, digital documents, Web pages, applications, device data, controls, forms.

    Coviu logo

    So we set out to build a solution that would enable productive remote collaboration to take place. It would need to provide an excellent user experience, be simple to work with, provide for the standard use cases out of the box, yet be architected to be extensible for the specialised data-sharing needs that we knew some of our customers had. It would need to be usable directly on Coviu.com, but also able to integrate with specialised applications that some of our customers were already using, such as the applications that they spend most of their time in (CRMs, practice management systems, learning management systems, team chat systems). It would need to require our customers to sign up, yet allow their clients to join a call without signing up.

    Collaboration is a big problem. People are continuing to get more comfortable with technology and are less and less inclined to travel distances just to get a service done. In a country as large as Australia, where 12% of the population lives in rural and remote areas, people may not even be able to travel distances, particularly to receive or provide recurring or specialised services, or to achieve work/life balance. To make the world a global village, we need to be able to work together better remotely.

    The need for collaboration is being recognised by specialised Web applications already, such as the LiveShare feature of Invision for Designers, Codassium for pair programming, or the recently announced Dropbox Paper. Few go all the way to video – WebRTC is still regarded as a complicated feature to support.

    Coviu in action

    With Coviu, we’d like to offer a collaboration feature to every Web app. We now have a Web app that provides a modern and beautifully designed collaboration interface. To enable other Web apps to integrate it, we are now developing an API. Integration may entail customisation of the data sharing part of Coviu – something Coviu has been designed for. How to replicate the data and keep it consistent when people collaborate remotely – that is where Coviu makes a difference.

    We have started our journey and have just launched free signup to the Coviu base product, which allows individuals to own their own “room” (i.e. a fixed URL) in which to collaborate with others. A huge shout out goes to everyone in the Coviu team – a pretty amazing group of people – who have turned the app from an idea into reality. You are all awesome!

    With Coviu you can share and annotate:

    • images (show your mum photos of your last holidays, or get feedback on an architecture diagram from a customer),
    • pdf files (give a presentation remotely, or walk a customer through a contract),
    • whiteboards (brainstorm with a colleague), and
    • share an application window (watch a YouTube video together, or work through your task list with your colleagues).

    All of these are regarded as “shared documents” in Coviu and thus have zooming and annotation features and are listed in a document tray for ease of navigation.

    This is just the beginning of how we want to make working together online more productive. Give it a go and let us know what you think.

    http://coviu.com/