
Media (91)

Other articles (75)

  • Improving the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the usability of multiple-select form fields. Compare the two images that follow.
    To use it, enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed for sites that publish documents of all types.
    It creates "media" items: a "media" item is a SPIP article created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable release of MediaSPIP.
    Its official release date is 21 June 2013, as announced here.
    The zip file provided here contains only the MediaSPIP sources, as a standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications will also be needed (...)

On other sites (7683)

  • H.264 muxed to MP4 using libavformat not playing back

    14 May 2015, by Brad Mitchell

    I am trying to mux H.264 data into an MP4 file. There appear to be no errors when saving this H.264 Annex B data out to an MP4 file, but the file fails to play back.

    I’ve done a binary comparison on the files and the issue seems to be somewhere in what is being written to the footer (trailer) of the MP4 file.

    I suspect the problem lies in how the stream is being set up.

    Init:

    AVOutputFormat* fmt = av_guess_format( 0, "out.mp4", 0 );
    oc = avformat_alloc_context();
    oc->oformat = fmt;
    strcpy(oc->filename, filename);

    Part of this prototype app creates a PNG file for each I-frame. So when the first I-frame is encountered, I create the video stream and write the header, etc.:

    void addVideoStream(AVCodecContext* decoder)
    {
       videoStream = av_new_stream(oc, 0);
       if (!videoStream)
       {
            cout << "ERROR creating video stream" << endl;
            return;        
       }
       vi = videoStream->index;    
       videoContext = videoStream->codec;      
       videoContext->codec_type = AVMEDIA_TYPE_VIDEO;
       videoContext->codec_id = decoder->codec_id;
       videoContext->bit_rate = 512000;
       videoContext->width = decoder->width;
       videoContext->height = decoder->height;
       videoContext->time_base.den = 25;
       videoContext->time_base.num = 1;    
       videoContext->gop_size = decoder->gop_size;
       videoContext->pix_fmt = decoder->pix_fmt;      

       if (oc->oformat->flags & AVFMT_GLOBALHEADER)
           videoContext->flags |= CODEC_FLAG_GLOBAL_HEADER;

       av_dump_format(oc, 0, filename, 1);

       if (!(oc->oformat->flags & AVFMT_NOFILE))
       {
           if (avio_open(&oc->pb, filename, AVIO_FLAG_WRITE) < 0)
           {
               cout << "Error opening file" << endl;
           }
       }
       avformat_write_header(oc, NULL);
    }

    I write packets out:

    unsigned char* data = block->getData();
    unsigned char videoFrameType = data[4];   // NAL header byte, immediately after the 4-byte start code
    int dataLen = block->getDataLen();

    // store PPS (0x68 = NAL header byte for PPS with nal_ref_idc = 3)
    if (videoFrameType == 0x68)
    {
       if (ppsFrame != NULL)
       {
           delete[] ppsFrame; ppsFrameLength = 0; ppsFrame = NULL;
       }
       ppsFrameLength = block->getDataLen();
       ppsFrame = new unsigned char[ppsFrameLength];
       memcpy(ppsFrame, block->getData(), ppsFrameLength);
    }
    else if (videoFrameType == 0x67)
    {
       // store SPS
       if (spsFrame != NULL)
       {
           delete[] spsFrame; spsFrameLength = 0; spsFrame = NULL;
       }
       spsFrameLength = block->getDataLen();
       spsFrame = new unsigned char[spsFrameLength];
       memcpy(spsFrame, block->getData(), spsFrameLength);
    }

    if (videoFrameType == 0x65 || videoFrameType == 0x41)
    {
       videoFrameNumber++;
    }
    if (videoFrameType == 0x65)
    {
       decodeIFrame(videoFrameNumber, spsFrame, spsFrameLength, ppsFrame, ppsFrameLength, data, dataLen);
    }

    if (videoStream != NULL)
    {
       AVPacket pkt = { 0 };
       av_init_packet(&pkt);
       pkt.stream_index = vi;
       pkt.flags = 0;                      
       pkt.pts = pkt.dts = 0;                                  

       if (videoFrameType == 0x65)
       {
           // combine the SPS PPS & I frames together
           pkt.flags |= AV_PKT_FLAG_KEY;                                                  
           unsigned char* videoFrame = new unsigned char[spsFrameLength+ppsFrameLength+dataLen];
           memcpy(videoFrame, spsFrame, spsFrameLength);
           memcpy(&videoFrame[spsFrameLength], ppsFrame, ppsFrameLength);
           memcpy(&videoFrame[spsFrameLength+ppsFrameLength], data, dataLen);

           // overwrite the start code (00 00 00 01) with a 32-bit big-endian length
           setLength(videoFrame, spsFrameLength-4);
           setLength(&videoFrame[spsFrameLength], ppsFrameLength-4);
           setLength(&videoFrame[spsFrameLength+ppsFrameLength], dataLen-4);
           pkt.size = dataLen + spsFrameLength + ppsFrameLength;
           pkt.data = videoFrame;
           av_interleaved_write_frame(oc, &pkt);
           delete[] videoFrame; videoFrame = NULL;
       }
       else if (videoFrameType != 0x67 && videoFrameType != 0x68)
       {  
           // Send other frames except pps & sps which are caught and stored                  
           pkt.size = dataLen;
           pkt.data = data;
           setLength(data, dataLen-4);                    
           av_interleaved_write_frame(oc, &pkt);
       }
    }
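
    The setLength helper isn’t shown in the question. Judging by how it is called, it presumably (an assumption on my part) overwrites the 4-byte Annex B start code at the front of each NAL unit with the unit’s length as a 32-bit big-endian integer, which is the framing MP4 expects:

    // Hypothetical reconstruction of setLength: replace the leading
    // 00 00 00 01 start code with the NAL unit length, big-endian.
    void setLength(unsigned char* buf, int length)
    {
       buf[0] = (length >> 24) & 0xff;
       buf[1] = (length >> 16) & 0xff;
       buf[2] = (length >> 8) & 0xff;
       buf[3] = length & 0xff;
    }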

    Finally, to close the file off:

    av_write_trailer(oc);
    int i = 0;
    for (i = 0; i < oc->nb_streams; i++)
    {
       av_freep(&oc->streams[i]->codec);
       av_freep(&oc->streams[i]);      
    }

    if (!(oc->oformat->flags & AVFMT_NOFILE))
    {
       avio_close(oc->pb);
    }
    av_free(oc);

    If I take the H.264 data alone and convert it:

    ffmpeg -i recording.h264 -vcodec copy recording.mp4

    All but the "footer" of the files are the same.

    Output from my program:
    readrec recording.tcp out.mp4
    ** START * 01-03-2013 14:26:01 180000
    Output #0, mp4, to 'out.mp4':
    Stream #0:0: Video: h264, yuv420p, 352x288, q=2-31, 512 kb/s, 90k tbn, 25 tbc
    * END ** 01-03-2013 14:27:01 102000
    Wrote 1499 video frames.

    If I try to use ffmpeg to convert the MP4 file created by the code above:

    ffmpeg -i out.mp4 -vcodec copy out2.mp4
    ffmpeg version 0.11.1 Copyright (c) 2000-2012 the FFmpeg developers
         built on Mar  7 2013 12:49:22 with suncc 0x5110
         configuration: --extra-cflags=-KPIC -g --disable-mmx
         --disable-protocol=udp --disable-encoder=nellymoser --cc=cc --cxx=CC
    libavutil      51. 54.100 / 51. 54.100
    libavcodec     54. 23.100 / 54. 23.100
    libavformat    54.  6.100 / 54.  6.100
    libavdevice    54.  0.100 / 54.  0.100
    libavfilter     2. 77.100 /  2. 77.100
    libswscale      2.  1.100 /  2.  1.100
    libswresample   0. 15.100 /  0. 15.100
    [h264 @ 12eaac0] no frame!
       Last message repeated 1 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 23 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 74 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 64 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 34 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 49 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 24 times
    [h264 @ 12eaac0] Partitioned H.264 support is incomplete
    [h264 @ 12eaac0] no frame!
       Last message repeated 23 times
    [h264 @ 12eaac0] sps_id out of range
    [h264 @ 12eaac0] no frame!
       Last message repeated 148 times
    [h264 @ 12eaac0] sps_id (32) out of range
       Last message repeated 1 times
    [h264 @ 12eaac0] no frame!
       Last message repeated 33 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 128 times
    [h264 @ 12eaac0] sps_id (32) out of range
       Last message repeated 1 times
    [h264 @ 12eaac0] no frame!
       Last message repeated 3 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 3 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 309 times
    [h264 @ 12eaac0] sps_id (32) out of range
       Last message repeated 1 times
    [h264 @ 12eaac0] no frame!
       Last message repeated 192 times
    [h264 @ 12eaac0] Partitioned H.264 support is incomplete
    [h264 @ 12eaac0] no frame!
       Last message repeated 73 times
    [h264 @ 12eaac0] sps_id (32) out of range
       Last message repeated 1 times
    [h264 @ 12eaac0] no frame!
       Last message repeated 99 times
    [h264 @ 12eaac0] sps_id (32) out of range
       Last message repeated 1 times
    [h264 @ 12eaac0] no frame!
       Last message repeated 197 times
    [mov,mp4,m4a,3gp,3g2,mj2 @ 12e3100] decoding for stream 0 failed
    [mov,mp4,m4a,3gp,3g2,mj2 @ 12e3100] Could not find codec parameters
    (Video: h264 (avc1 / 0x31637661), 393539 kb/s)
    out.mp4: could not find codec parameters

    I really do not know where the issue is, except that it must be something to do with the way the streams are being set up. I’ve looked at bits of code from other people doing similar things and tried to apply that advice when setting up the streams, but to no avail!


    The final code which gave me an H.264/AAC muxed (synced) file is as follows. First, a bit of background information. The data is coming from an IP camera. The data is presented via a third-party API as video/audio packets. The video packets are presented as the RTP payload data (no header) and consist of NALUs that are reconstructed and converted to H.264 video in Annex B format. AAC audio is presented as raw AAC and is converted to ADTS format to enable playback. These packets have been put into a bitstream format that allows the transmission of the timestamp (64-bit milliseconds since Jan 1 1970) along with a few other things.

    This is more or less a prototype and is not clean in any respect. It probably leaks badly. I do, however, hope this helps anyone else trying to achieve something similar to what I am.

    Globals:

    AVFormatContext* oc = NULL;
    AVCodecContext* videoContext = NULL;
    AVStream* videoStream = NULL;
    AVCodecContext* audioContext = NULL;
    AVStream* audioStream = NULL;
    AVCodec* videoCodec = NULL;
    AVCodec* audioCodec = NULL;
    int vi = 0;  // Video stream
    int ai = 1;  // Audio stream

    uint64_t firstVideoTimeStamp = 0;
    uint64_t firstAudioTimeStamp = 0;
    int audioStartOffset = 0;

    char* filename = NULL;

    Boolean first = TRUE;

    int videoFrameNumber = 0;
    int audioFrameNumber = 0;

    Main:

    int main(int argc, char* argv[])
    {
       if (argc != 3)
       {  
           cout << argv[0] << " <stream playback file> <output mp4 file>" << endl;
           return 0;
       }
       char* input_stream_file = argv[1];
       filename = argv[2];

       av_register_all();    

       fstream inFile;
       inFile.open(input_stream_file, ios::in);

       // Used to store the latest pps & sps frames
       unsigned char* ppsFrame = NULL;
       int ppsFrameLength = 0;
       unsigned char* spsFrame = NULL;
       int spsFrameLength = 0;

       // Setup MP4 output file
       AVOutputFormat* fmt = av_guess_format( 0, filename, 0 );
       oc = avformat_alloc_context();
       oc->oformat = fmt;
       strcpy(oc->filename, filename);

       // Setup the bitstream filter for AAC in adts format.  Could probably also achieve
       // this by stripping the first 7 bytes!
       AVBitStreamFilterContext* bsfc = av_bitstream_filter_init("aac_adtstoasc");
       if (!bsfc)
       {      
           cout &lt;&lt; "Error creating adtstoasc filter" &lt;&lt; endl;
           return -1;
       }

       while (inFile.good())
       {
           TcpAVDataBlock* block = new TcpAVDataBlock();
           block->readStruct(inFile);
           DateTime dt = block->getTimestampAsDateTime();
           switch (block->getPacketType())
           {
               case TCP_PACKET_H264:
               {      
                   if (firstVideoTimeStamp == 0)
                       firstVideoTimeStamp = block->getTimeStamp();
                   unsigned char* data = block->getData();
                   unsigned char videoFrameType = data[4];
                   int dataLen = block->getDataLen();

                    // store PPS
                    if (videoFrameType == 0x68)
                    {
                        if (ppsFrame != NULL)
                        {
                            delete[] ppsFrame; ppsFrameLength = 0;
                            ppsFrame = NULL;
                        }
                        ppsFrameLength = block->getDataLen();
                        ppsFrame = new unsigned char[ppsFrameLength];
                        memcpy(ppsFrame, block->getData(), ppsFrameLength);
                    }
                    else if (videoFrameType == 0x67)
                    {
                        // store SPS
                        if (spsFrame != NULL)
                        {
                            delete[] spsFrame; spsFrameLength = 0;
                            spsFrame = NULL;
                        }
                        spsFrameLength = block->getDataLen();
                        spsFrame = new unsigned char[spsFrameLength];
                        memcpy(spsFrame, block->getData(), spsFrameLength);
                    }

                   if (videoFrameType == 0x65 || videoFrameType == 0x41)
                   {
                       videoFrameNumber++;
                   }
                   // Extract a thumbnail for each I-Frame
                   if (videoFrameType == 0x65)
                   {
                        decodeIFrame(videoFrameNumber, spsFrame, spsFrameLength, ppsFrame, ppsFrameLength, data, dataLen);
                   }
                   if (videoStream != NULL)
                   {
                       AVPacket pkt = { 0 };
                        av_init_packet(&pkt);
                       pkt.stream_index = vi;
                       pkt.flags = 0;          
                       pkt.pts = videoFrameNumber;
                       pkt.dts = videoFrameNumber;          
                        if (videoFrameType == 0x65)
                        {
                            pkt.flags |= AV_PKT_FLAG_KEY;

                            // combine the SPS, PPS and I-frame into one packet
                            unsigned char* videoFrame = new unsigned char[spsFrameLength+ppsFrameLength+dataLen];
                            memcpy(videoFrame, spsFrame, spsFrameLength);
                            memcpy(&videoFrame[spsFrameLength], ppsFrame, ppsFrameLength);
                            memcpy(&videoFrame[spsFrameLength+ppsFrameLength], data, dataLen);
                            pkt.size = spsFrameLength + ppsFrameLength + dataLen;
                            pkt.data = videoFrame;
                            av_interleaved_write_frame(oc, &pkt);
                            delete[] videoFrame; videoFrame = NULL;
                        }
                        else if (videoFrameType != 0x67 && videoFrameType != 0x68)
                        {
                            pkt.size = dataLen;
                            pkt.data = data;
                            av_interleaved_write_frame(oc, &pkt);
                        }
                   }
                   break;
               }

           case TCP_PACKET_AAC:

               if (firstAudioTimeStamp == 0)
               {
                   firstAudioTimeStamp = block->getTimeStamp();
                    uint64_t milliseconds_difference = firstAudioTimeStamp - firstVideoTimeStamp;
                    audioStartOffset = milliseconds_difference * 16000 / 1000;   // ms -> samples at 16 kHz
                    cout << "audio offset: " << audioStartOffset << endl;
               }

               if (audioStream != NULL)
               {
                   AVPacket pkt = { 0 };
                    av_init_packet(&pkt);
                   pkt.stream_index = ai;
                   pkt.flags = 1;          
                   pkt.pts = audioFrameNumber*1024;
                   pkt.dts = audioFrameNumber*1024;
                   pkt.data = block->getData();
                   pkt.size = block->getDataLen();
                   pkt.duration = 1024;

                   AVPacket newpacket = pkt;                      
                    int rc = av_bitstream_filter_filter(bsfc, audioContext,
                        NULL,
                        &newpacket.data, &newpacket.size,
                        pkt.data, pkt.size,
                        pkt.flags & AV_PKT_FLAG_KEY);

                   if (rc >= 0)
                   {
                        //cout << "Write audio frame" << endl;
                        newpacket.pts = audioFrameNumber*1024;
                        newpacket.dts = audioFrameNumber*1024;
                        audioFrameNumber++;
                        newpacket.duration = 1024;

                        av_interleaved_write_frame(oc, &newpacket);
                        av_free_packet(&newpacket);
                   }  
                   else
                   {
                       cout &lt;&lt; "Error filtering aac packet" &lt;&lt; endl;

                   }
               }
               break;

           case TCP_PACKET_START:
               break;

           case TCP_PACKET_END:
               break;
           }
           delete block;
       }
       inFile.close();

       av_write_trailer(oc);
       int i = 0;
       for (i = 0; i < oc->nb_streams; i++)
       {
           av_freep(&oc->streams[i]->codec);
           av_freep(&oc->streams[i]);
       }

       if (!(oc->oformat->flags & AVFMT_NOFILE))
       {
           avio_close(oc->pb);
       }

       av_free(oc);

       delete[] spsFrame; spsFrame = NULL;
       delete[] ppsFrame; ppsFrame = NULL;

       cout &lt;&lt; "Wrote " &lt;&lt; videoFrameNumber &lt;&lt; " video frames." &lt;&lt; endl;

       return 0;
    }
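
    As an aside, the comment in main about "stripping the first 7 bytes" refers to the ADTS header that wraps each raw AAC frame. A minimal sketch of that alternative, assuming each block holds exactly one complete ADTS frame (note that the aac_adtstoasc filter also fills in the codec extradata from the header, which plain stripping does not, so the filter is the safer route):

    // The ADTS header is 7 bytes, or 9 when a CRC is present;
    // protection_absent is the low bit of the second header byte.
    static int adtsHeaderLength(const unsigned char* aac)
    {
       return (aac[1] & 0x01) ? 7 : 9;   // 1 = no CRC -> 7-byte header
    }

    // Usage sketch: point the packet past the header instead of filtering.
    int skip = adtsHeaderLength(block->getData());
    pkt.data = block->getData() + skip;
    pkt.size = block->getDataLen() - skip;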

    The streams/codecs are added and the header is created in a function called addVideoAndAudioStream(). This function is called from decodeIFrame(), so there are a few assumptions (which aren’t necessarily good):
    1. A video packet comes first
    2. AAC is present

    decodeIFrame was kind of a separate prototype, where I was creating a thumbnail for each I-frame. The code to generate the thumbnails came from: https://gnunet.org/svn/Extractor/src/plugins/thumbnailffmpeg_extractor.c

    The decodeIFrame function passes an AVCodecContext into addVideoAndAudioStream:

    void addVideoAndAudioStream(AVCodecContext* decoder = NULL)
    {
       videoStream = av_new_stream(oc, 0);
       if (!videoStream)
       {
           cout &lt;&lt; "ERROR creating video stream" &lt;&lt; endl;
           return;      
       }
       vi = videoStream->index;  
       videoContext = videoStream->codec;      
       videoContext->codec_type = AVMEDIA_TYPE_VIDEO;
       videoContext->codec_id = decoder->codec_id;
       videoContext->bit_rate = 512000;
       videoContext->width = decoder->width;
       videoContext->height = decoder->height;
       videoContext->time_base.den = 25;
       videoContext->time_base.num = 1;
       videoContext->gop_size = decoder->gop_size;
       videoContext->pix_fmt = decoder->pix_fmt;      

       audioStream = av_new_stream(oc, 1);
       if (!audioStream)
       {
           cout &lt;&lt; "ERROR creating audio stream" &lt;&lt; endl;
           return;
       }
       ai = audioStream->index;
       audioContext = audioStream->codec;
       audioContext->codec_type = AVMEDIA_TYPE_AUDIO;
       audioContext->codec_id = CODEC_ID_AAC;
       audioContext->bit_rate = 64000;
       audioContext->sample_rate = 16000;
       audioContext->channels = 1;

       if (oc->oformat->flags & AVFMT_GLOBALHEADER)
       {
           videoContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
           audioContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }

       av_dump_format(oc, 0, filename, 1);

       if (!(oc->oformat->flags & AVFMT_NOFILE))
       {
           if (avio_open(&oc->pb, filename, AVIO_FLAG_WRITE) < 0) {
               cout << "Error opening file" << endl;
           }
       }

       avformat_write_header(oc, NULL);
    }

    As far as I can tell, a number of assumptions didn’t seem to matter, for example:
    1. Bit rate. The actual video bit rate was 262k, whereas I specified 512k.
    2. AAC channels. I specified mono, although the actual output was stereo, from memory.

    You would still need to know what the frame rate (time base) is for the video & audio.

    Contrary to a lot of other examples, when setting pts & dts on the video packets it was not playable. I needed to know the time base (25 fps) and then set the pts & dts according to that time base, i.e. first frame = 0 (PPS, SPS, I), second frame = 1 (intermediate frame, whatever it’s called ;)).
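
    A sketch of that arithmetic, assuming the 1/25 time base set on the video stream above (blockMs is a hypothetical name for a wall-clock timestamp in milliseconds since the first video frame):

    // With videoStream->time_base = {1, 25}, one tick is one frame,
    // so a simple frame counter is a valid pts/dts:
    pkt.pts = videoFrameNumber;   // frame N plays at N/25 seconds
    pkt.dts = videoFrameNumber;

    // Alternative: rescale a millisecond timestamp into time-base ticks.
    AVRational msBase = { 1, 1000 };
    int64_t blockMs = block->getTimeStamp() - firstVideoTimeStamp;
    pkt.pts = av_rescale_q(blockMs, msBase, videoStream->time_base);
    pkt.dts = pkt.pts;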

    For AAC I also had to assume it was 16000 Hz, with 1024 samples per AAC packet (AAC can also use 960 samples per frame, I think), to determine the audio "offset". I added this offset to the pts & dts, so the pts/dts are the sample number at which the packet is to be played back. You also need to make sure the duration of 1024 is set in the packet before writing.
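
    In sample terms, the offset arithmetic described above works out as follows (a sketch assuming 16 kHz AAC, 1024 samples per packet, and an audio stream time base of 1/16000):

    // Offset of the first audio packet relative to video start, in samples:
    int64_t msDiff = firstAudioTimeStamp - firstVideoTimeStamp;
    int64_t offsetSamples = msDiff * 16000 / 1000;   // ms -> samples at 16 kHz

    // Packet N then starts at that offset plus N * 1024 samples:
    pkt.pts = pkt.dts = offsetSamples + (int64_t)audioFrameNumber * 1024;
    pkt.duration = 1024;   // must be set before writing, as noted above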

    I additionally found that Annex B isn’t really compatible with other players, so the AVCC format should be used instead.

    These URLs helped:
    Problem to Decode H264 video over RTP with ffmpeg (libavcodec)
    http://aviadr1.blogspot.com.au/2010/05/h264-extradata-partially-explained-for.html

    When constructing the video stream, I filled out the extradata & extradata_size:

    // Extradata contains PPS & SPS for AVCC format
    int extradata_len = 8 + spsFrameLen-4 + 1 + 2 + ppsFrameLen-4;
    videoContext->extradata = (uint8_t*)av_mallocz(extradata_len);
    videoContext->extradata_size = extradata_len;
    videoContext->extradata[0] = 0x01;                 // avcC version
    videoContext->extradata[1] = spsFrame[4+1];        // profile (from SPS)
    videoContext->extradata[2] = spsFrame[4+2];        // profile compatibility
    videoContext->extradata[3] = spsFrame[4+3];        // level
    videoContext->extradata[4] = 0xFC | 3;             // NAL length size minus one (4-byte lengths)
    videoContext->extradata[5] = 0xE0 | 1;             // number of SPS
    int tmp = spsFrameLen - 4;
    videoContext->extradata[6] = (tmp >> 8) & 0x00ff;  // SPS length, big-endian
    videoContext->extradata[7] = tmp & 0x00ff;
    int i = 0;
    for (i = 0; i < tmp; i++)
        videoContext->extradata[8+i] = spsFrame[4+i];  // SPS payload (start code skipped)
    videoContext->extradata[8+tmp] = 0x01;             // number of PPS
    int tmp2 = ppsFrameLen-4;
    videoContext->extradata[8+tmp+1] = (tmp2 >> 8) & 0x00ff;  // PPS length, big-endian
    videoContext->extradata[8+tmp+2] = tmp2 & 0x00ff;
    for (i = 0; i < tmp2; i++)
        videoContext->extradata[8+tmp+3+i] = ppsFrame[4+i];   // PPS payload

    When writing out the frames, don’t prepend the SPS & PPS frames; just write out the I-frames & P-frames. In addition, replace the Annex B start code contained in the first 4 bytes (0x00 0x00 0x00 0x01) with the size of the I/P frame.

  • My journey to Coviu

    27 October 2015, by silvia

    My new startup just released our MVP – this is the story of what got me here.

    I love creating new applications that let people do their work better or in a manner that wasn’t possible before.

    My first such passion was as a student intern when I built a system for a building and loan association’s monthly customer magazine. The group I worked with was managing their advertiser contacts through a set of paper cards and I wrote a dBase based system (yes, that long ago) that would manage their customer relationships. They loved it – until it got replaced by an SAP system that cost 100 times what I cost them, had really poor UX, and only gave them half the functionality. It was a corporate system with ongoing support, which made all the difference to them.

    The story repeated itself with a CRM for my Uncle’s construction company, and with a resume and quotation management system for Accenture right after Uni, both of which I left behind when I decided to go into research.

    Even as a PhD student, I never lost sight of challenges that people were facing and wanted to develop technology to overcome problems. The aim of my PhD thesis was to prepare for the oncoming onslaught of audio and video on the Internet (yes, this was 1994!) by developing algorithms to automatically extract and locate information in such files, which would enable users to structure, index and search such content.

    Many of the use cases that we explored are now part of products or continue to be challenges: finding music that matches your preferences, identifying music or video pieces e.g. to count ads on the radio or to mark copyright infringement, or the automated creation of video summaries such as trailers.

    CSIRO

    This continued when I joined the CSIRO in Australia – I was working on segmenting speech into words or talk spurts since that would simplify captioning & subtitling, and on MPEG-7 which was a (slightly over-engineered) standard to structure metadata about audio and video.

    In 2001 I had the idea of replicating the Web for videos: i.e. creating hyperlinked and searchable video-only experiences. We called it “Annodex” for annotated and indexed video and it needed full-screen hyperlinked video in browsers – man were we ahead of our time! It was my first step into standards, got several IETF RFCs to my name, and started my involvement with open codecs through Xiph.

    Around the time that YouTube was founded in 2006, I founded Vquence – originally a video search company for the Web, but pivoted to a video metadata mining company. Vquence still exists and continues to sell its data to channel partners, but it lacks the user impact that has always driven my work.

    As the video element started being developed for HTML5, I had to get involved. I contributed many use cases to the W3C, became a co-editor of the HTML5 spec and focused on video captioning with WebVTT while contracting to Mozilla and later to Google. We made huge progress and today the technology exists to publish video on the Web with captions, making the Web more inclusive for everybody. I contributed code to YouTube and Google Chrome, but was keen to make a bigger impact again.

    The opportunity came when a couple of former CSIRO colleagues who now worked for NICTA approached me to get me interested in addressing new use cases for video conferencing in the context of WebRTC. We worked on a kiosk-style solution to service delivery for large service organisations, particularly targeting government. The emerging WebRTC standard posed many technical challenges that we addressed by building rtc.io , by contributing to the standards, and registering bugs on the browsers.

    Fast-forward through the development of a few further custom solutions for customers in health and education and we are starting to see patterns of need emerge. The core learning that we’ve come away with is that to get things done, you have to go beyond “talking heads” in a video call. It’s not just about seeing the other person, but much more about having a shared view of the things that need to be worked on and a shared way of interacting with them. Also, we learnt that the things that are being worked on are quite varied and may include multiple input cameras, digital documents, Web pages, applications, device data, controls, forms.

    So we set out to build a solution that would enable productive remote collaboration to take place. It would need to provide an excellent user experience, it would need to be simple to work with, provide for the standard use cases out of the box, yet be architected to be extensible for specialised data sharing needs that we knew some of our customers had. It would need to be usable directly on Coviu.com, but also able to integrate with specialised applications that some of our customers were already using, such as the applications that they spend most of their time in (CRMs, practice management systems, learning management systems, team chat systems). It would need to require our customers to sign up, yet their clients to join a call without sign-up.

    Collaboration is a big problem. People are continuing to get more comfortable with technology and are less and less inclined to travel distances just to get a service done. In a country as large as Australia, where 12% of the population lives in rural and remote areas, people may not even be able to travel distances, particularly to receive or provide recurring or specialised services, or to achieve work/life balance. To make the world a global village, we need to be able to work together better remotely.

    The need for collaboration is being recognised by specialised Web applications already, such as the LiveShare feature of Invision for Designers, Codassium for pair programming, or the recently announced Dropbox Paper. Few go all the way to video – WebRTC is still regarded as a complicated feature to support.

    Coviu in action

    With Coviu, we’d like to offer a collaboration feature to every Web app. We now have a Web app that provides a modern and beautifully designed collaboration interface. To enable other Web apps to integrate it, we are now developing an API. Integration may entail customisation of the data sharing part of Coviu – something Coviu has been designed for. How to replicate the data and keep it consistent when people collaborate remotely – that is where Coviu makes a difference.

    We have started our journey and have just launched free signup to the Coviu base product, which allows individuals to own their own “room” (i.e. a fixed URL) in which to collaborate with others. A huge shout out goes to everyone in the Coviu team – a pretty amazing group of people – who have turned the app from an idea to reality. You are all awesome!

    With Coviu you can share and annotate:

    • images (show your mum photos of your last holidays, or get feedback on an architecture diagram from a customer),
    • pdf files (give a presentation remotely, or walk a customer through a contract),
    • whiteboards (brainstorm with a colleague), and
    • share an application window (watch a YouTube video together, or work through your task list with your colleagues).

    All of these are regarded as “shared documents” in Coviu and thus have zooming and annotations features and are listed in a document tray for ease of navigation.

    This is just the beginning of how we want to make working together online more productive. Give it a go and let us know what you think.

    http://coviu.com/

  • Attribution Tracking (What It Is and How It Works)

    23 February 2024, by Erin

    Facebook, TikTok, Google, email, display ads — which one is best to grow your business? There’s one proven way to figure it out: attribution tracking.

    Marketing attribution allows you to see which channels are producing the best results for your marketing campaigns.

    In this guide, we’ll show you what attribution tracking is, why it’s important and how you can leverage it to accelerate your marketing success.

    What is attribution tracking?

    By 2026, the global digital marketing industry is projected to reach $786.2 billion.

    With roughly three-quarters of a trillion U.S. dollars projected to be poured into digital marketing every year, there’s no doubt it dominates traditional marketing.

    The question is, though, how do you know which digital channels to use?

    By measuring your marketing efforts with attribution tracking.


    So, what is attribution tracking?

    Attribution tracking is where you use software to keep track of different channels and campaign efforts, to determine which channel a conversion should be attributed to.

    In other words, you can (and should) use attribution tracking to analyse which channels are pushing the needle and which ones aren’t.

    By tracking your marketing efforts, you’ll be able to accurately measure the impact each of your channels, campaigns and touchpoints has on a customer’s purchasing decision.

    If you don’t track your attribution, you’ll end up blindly pouring time, money, and effort into activities that may or may not be helpful.

    Attribution tracking simply gives you insight into what you’re doing right as a marketer — and what you’re doing wrong.

    By understanding which efforts and channels are driving conversions and revenue, you’ll be able to properly allocate resources toward winning channels to double down on growth.

    Matomo lets you track attribution across various channels. Whether you’re looking to track your conversions through organic, referral websites, campaigns, direct traffic, or social media, you can see all your conversions in one place.


    Why attribution tracking is important

    Attribution tracking is crucial to succeed with your marketing since it shows you your most valuable channels.

    It takes the guesswork out of your efforts.

    You don’t need to scratch your head wondering what made your campaigns a success (or a failure).

    While most tools show last-click attribution by default, with attribution tracking (marketing attribution) you can track revenue and conversions for each touchpoint.

    For example, a Facebook ad might not have led to a conversion immediately. But maybe the visitor returned to your website two weeks later through your email campaign. Attribution tracking assigns credit over longer periods of time, giving you the bigger picture of how your marketing channels impact your overall performance.

    Here are four reasons you need to be using attribution tracking in your business today:


    1. Measure channel performance

    The most obvious way attribution tracking helps is to show you how well each channel performs.

    When you’re using a variety of marketing channels to reach your audience, you have to know what’s actually doing well (and what’s not).

    This means having clarity on the performance of your:

    • Emails
    • Google Ads
    • Facebook Ads
    • Social media marketing
    • Search engine optimisation (SEO)
    • And more

    Attribution tracking allows you to measure each channel’s ROI and identify how much each channel impacted your campaigns.

    It gives you a more accurate picture of the performance of each channel and each campaign.

    With it, you can easily break down your channels by how much they drove sales, conversions, signups, or other actions.

    With this information, you can then understand where to further allocate your resources to fuel growth.

    2. See campaign performance over longer periods of time

    When you start tracking your channel performance with attribution tracking, you’ll gain new insights into how well your channels and campaigns are performing.

    The best part — you don’t just get to see recent performance.

    You get to track your campaign results over weeks or months.

    For example, if someone found you through Google by searching a question that your blog had an answer to, but they didn’t convert, your traditional tracking strategy would discount SEO.

    But, if that same person clicked a TikTok ad you placed three weeks later, came back, and converted — SEO would receive some attribution on the conversion.

    Using an attribution tracking tool like Matomo can help paint a holistic view of how your marketing is really doing from channel to channel over the long run.


    3. Increase revenue

    Attribution tracking has one incredible benefit for marketers: optimised marketing spend.

    When you begin looking at how well your campaigns and your channels are performing, you’ll start to see what’s working.

    Attribution tracking gives you clarity into the performance of campaigns since it’s not just looking at the first time someone clicks through to your site. It’s looking at every touchpoint a customer made along the way to a conversion.

    By understanding what channels are most effective, you can pour more resources like time, money and labour into those effective channels.

    By doubling down on the winning channels, you’ll be able to grow like never before.

    Rather than trying to “diversify” your marketing efforts, lean into what’s working.

    This is one of the key strategies of an effective marketer to maximise your campaign returns and experience long-term success in terms of revenue.

    4. Improve profit margins

    The final benefit of attribution tracking is simple: you’ll earn more profit.

    Think about it this way: let’s say you’re putting 50% of your marketing spend into Facebook ads and 50% into email marketing.

    You do this for one year, allocating $500,000 to Facebook and $500,000 to email.

    Then, you start tracking attribution.

    You find that your Facebook ads are generating $900,000 in revenue. 

    That’s a 1.8× return on your spend.

    Not bad, right?

    Well, after tracking your attribution, you see what your email revenue is.

    In the past year, you generated $1.7 million in email revenue.

    That’s a 3.4× return on your spend.

    In this scenario, you can see that you’re getting nearly twice as much of a return on your marketing spend with email.

    So, the following year, you decide to go for a 75/25 split.

    Instead of putting $500,000 into each, you put $750,000 into email and $250,000 into Facebook ads.

    You’re still diversifying, but you’re doubling down on what’s working best.

    The result is that you’ll be able to get more revenue by investing the same amount of money, leaving you with higher profit margins.

    Different types of marketing attribution tracking

    There are several types of attribution tracking models in marketing.

    Depending on your goals, your business and your preferred method, there are a variety of types of attribution tracking you can use.

    Here are the six main types of attribution tracking:

    Pros and cons of different marketing attribution models.

    1. Last interaction

    The last interaction attribution model is also called “last touch.”

    It’s one of the most common types of attribution. The way it works is to give 100% of the credit to the final channel a customer interacted with before they converted into a customer.

    This could be through a paid ad, direct traffic, or organic search.

    One potential drawback of last interaction is that it doesn’t factor in other channels that may have assisted in the conversion. However, this model can work really well depending on the business.

    2. First interaction

    This is the opposite of the previous model.

    First interaction, or “first touch,” is all about the first interaction a customer has with your brand.

    It gives 100% of the credit to the channel (i.e. a link clicked from a social media post). And it doesn’t report or attribute anything else to another channel that someone may have interacted with in your marketing mix.

    For example, it won’t attribute the conversion or revenue if the visitor then clicked on an Instagram ad and converted. All credit would be given to the first touch which in this case would be the social media post. 

    The first interaction is a good model to use at the top of your funnel to help establish which channels are bringing leads in from outside your audience.

    3. Last non-direct

    Another model is called the last non-direct attribution model. 

    This model assigns 100% of the credit for a conversion to the final channel a customer interacted with before becoming a customer, excluding clicks from direct traffic.

    For instance, if someone first comes to your website from an email campaign and then, a week later, visits directly and buys a product, the email campaign gets all the credit for the sale.

    This attribution model tells a bit more about the whole sales process, shedding some more light on what other channels may have influenced the purchase decision.

    4. Linear

    Another common attribution model is linear.

    This model distributes credit equally across every tracked touchpoint.

    Imagine someone comes to your website in different ways : first, they find it through a Google search, then they click a link in an email from your campaign the next day, followed by visiting from a Facebook post a few days later, and finally, a week later, they come from a TikTok ad. 

    Here’s how the attribution is divided among these sources:

    • 25% Organic
    • 25% Email
    • 25% Facebook
    • 25% TikTok ad

    This attribution model provides a balanced perspective on the contribution of various sources to a user’s journey on your website.

    5. Position-based

    Position-based attribution is when you give 40% credit to both the first and last touchpoints and 20% credit is spread between the touchpoints in between.

    This model is preferred if you want to identify the initial touchpoint that kickstarted a conversion journey and the final touchpoint that sealed the deal.

    The downside is that you don’t gain much insight into the middle of the customer journey, which can make it hard to make effective decisions.

    For example, someone may have been interacting with your email newsletter for seven weeks, which allowed them to be nurtured and build a relationship with you.

    But that relationship and trust-building effort will be overlooked by the blog post that brought them in and the social media ad that eventually converted them.

    6. Time decay

    The final attribution model is called time decay attribution.

    This is all about giving credit based on the timing of the interactions someone had with your brand.

    For example, the touchpoints that just preceded the sale get the highest score, while the first touchpoints get the lowest score.

    For example, let’s use that scenario from above with the linear model:

    • 25% SEO
    • 25% Email
    • 25% Facebook ad
    • 25% Organic TikTok

    But, instead of splitting credit by 25% to each channel, you weigh the ones closer to the sale with more credit.

    Time decay might instead credit these same channels like this:

    • 5% SEO (6 weeks ago)
    • 20% Email (3 weeks ago)
    • 30% Facebook ad (1 week ago)
    • 45% Organic TikTok (2 days ago)

    One downside is that it underestimates brand awareness campaigns. And if you have longer sales cycles, it also isn’t the most accurate, as mid-stage nurturing and relationship building are overlooked.
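
    To make the arithmetic behind these models concrete, here is an illustrative sketch of how credit could be split across n touchpoints. This is not Matomo’s implementation, and the 7-day half-life in the time-decay function is an arbitrary choice:

    #include <cmath>
    #include <vector>

    // Linear: equal share per touchpoint.
    std::vector<double> linearCredit(int n)
    {
        return std::vector<double>(n, 1.0 / n);
    }

    // Position-based: 40% to first, 40% to last, 20% spread across
    // the middle. Assumes n >= 3.
    std::vector<double> positionBasedCredit(int n)
    {
        std::vector<double> w(n, 0.2 / (n - 2));
        w.front() = 0.4;
        w.back() = 0.4;
        return w;
    }

    // Time decay: exponential decay by touchpoint age, normalised to sum to 1.
    std::vector<double> timeDecayCredit(const std::vector<double>& daysBeforeSale)
    {
        std::vector<double> w;
        double total = 0.0;
        for (double d : daysBeforeSale) {
            w.push_back(std::pow(0.5, d / 7.0));   // 7-day half-life
            total += w.back();
        }
        for (double& x : w) x /= total;
        return w;
    }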

    Leverage Matomo : A marketing attribution tool

    Attribution tracking is a crucial part of leading an effective marketing strategy.

    But it’s impossible to do this without the right tools.

    A marketing attribution tool can give you insights into your best-performing channels automatically. 


    One of the best marketing attribution tools available is Matomo, a web analytics tool that helps you understand what’s going on with your website and different channels in one easy-to-use dashboard.

    With Matomo, you get marketing attribution as a plugin for Matomo On-Premise, or included for free in Matomo Cloud.

    The best part is it’s all done with crystal-clear data. Matomo gives you 100% accurate data, since unlike Google Analytics it doesn’t apply data sampling on any plan.

    To start tracking attribution today, try Matomo’s 21-day free trial. No credit card required.