
Advanced search
Media (91)
-
Head down (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Echoplex (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Discipline (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Letting you (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
1 000 000 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
999 999 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (75)
-
Improving the base version
13 September 2013
A nicer multiple select
The Chosen plugin improves the usability of multiple-select form fields. See the two images below for a comparison.
All it takes is to enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)
-
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites for publishing documents of all types.
It creates "media": a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a "media" article;
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)
On other sites (7683)
-
H.264 muxed to MP4 using libavformat not playing back
14 May 2015, by Brad Mitchell
I am trying to mux H.264 data into an MP4 file. There appear to be no errors when saving this H.264 Annex B data out to an MP4 file, but the file fails to play back.
I’ve done a binary comparison on the files and the issue seems to be somewhere in what is being written to the footer (trailer) of the MP4 file.
I suspect it is something to do with the way the stream is being created.
Init:
AVOutputFormat* fmt = av_guess_format( 0, "out.mp4", 0 );
oc = avformat_alloc_context();
oc->oformat = fmt;
strcpy(oc->filename, filename);
Part of this prototype app creates a PNG file for each I-frame, so when the first I-frame is encountered I create the video stream and write the AV header etc:
void addVideoStream(AVCodecContext* decoder)
{
videoStream = av_new_stream(oc, 0);
if (!videoStream)
{
cout << "ERROR creating video stream" << endl;
return;
}
vi = videoStream->index;
videoContext = videoStream->codec;
videoContext->codec_type = AVMEDIA_TYPE_VIDEO;
videoContext->codec_id = decoder->codec_id;
videoContext->bit_rate = 512000;
videoContext->width = decoder->width;
videoContext->height = decoder->height;
videoContext->time_base.den = 25;
videoContext->time_base.num = 1;
videoContext->gop_size = decoder->gop_size;
videoContext->pix_fmt = decoder->pix_fmt;
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
videoContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
av_dump_format(oc, 0, filename, 1);
if (!(oc->oformat->flags & AVFMT_NOFILE))
{
if (avio_open(&oc->pb, filename, AVIO_FLAG_WRITE) < 0) {
cout << "Error opening file" << endl;
}
avformat_write_header(oc, NULL);
}
I write packets out:
unsigned char* data = block->getData();
unsigned char videoFrameType = data[4];
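// (data[4] is the first byte after the 4-byte Annex B start code, i.e. the NAL
// header; 0x67, 0x68, 0x65 and 0x41 below are the full header bytes for SPS,
// PPS, IDR-slice and non-IDR-slice NAL units respectively.)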
int dataLen = block->getDataLen();
// store pps
if (videoFrameType == 0x68)
{
if (ppsFrame != NULL)
{
delete[] ppsFrame; ppsFrameLength = 0; ppsFrame = NULL;
}
ppsFrameLength = block->getDataLen();
ppsFrame = new unsigned char[ppsFrameLength];
memcpy(ppsFrame, block->getData(), ppsFrameLength);
}
else if (videoFrameType == 0x67)
{
// sps
if (spsFrame != NULL)
{
delete[] spsFrame; spsFrameLength = 0; spsFrame = NULL;
}
spsFrameLength = block->getDataLen();
spsFrame = new unsigned char[spsFrameLength];
memcpy(spsFrame, block->getData(), spsFrameLength);
}
if (videoFrameType == 0x65 || videoFrameType == 0x41)
{
videoFrameNumber++;
}
if (videoFrameType == 0x65)
{
decodeIFrame(videoFrameNumber, spsFrame, spsFrameLength, ppsFrame, ppsFrameLength, data, dataLen);
}
if (videoStream != NULL)
{
AVPacket pkt = { 0 };
av_init_packet(&pkt);
pkt.stream_index = vi;
pkt.flags = 0;
pkt.pts = pkt.dts = 0;
if (videoFrameType == 0x65)
{
// combine the SPS PPS & I frames together
pkt.flags |= AV_PKT_FLAG_KEY;
unsigned char* videoFrame = new unsigned char[spsFrameLength+ppsFrameLength+dataLen];
memcpy(videoFrame, spsFrame, spsFrameLength);
memcpy(&videoFrame[spsFrameLength], ppsFrame, ppsFrameLength);
memcpy(&videoFrame[spsFrameLength+ppsFrameLength], data, dataLen);
// overwrite the start code (00 00 00 01) with a 32-bit length
setLength(videoFrame, spsFrameLength-4);
setLength(&videoFrame[spsFrameLength], ppsFrameLength-4);
setLength(&videoFrame[spsFrameLength+ppsFrameLength], dataLen-4);
pkt.size = dataLen + spsFrameLength + ppsFrameLength;
pkt.data = videoFrame;
av_interleaved_write_frame(oc, &pkt);
delete[] videoFrame; videoFrame = NULL;
}
else if (videoFrameType != 0x67 && videoFrameType != 0x68)
{
// Send other frames except pps & sps which are caught and stored
pkt.size = dataLen;
pkt.data = data;
setLength(data, dataLen-4);
av_interleaved_write_frame(oc, &pkt);
}
}
Finally, to close the file off:
av_write_trailer(oc);
int i = 0;
for (i = 0; i < oc->nb_streams; i++)
{
av_freep(&oc->streams[i]->codec);
av_freep(&oc->streams[i]);
}
if (!(oc->oformat->flags & AVFMT_NOFILE))
{
avio_close(oc->pb);
}
av_free(oc);
If I take the H.264 data alone and convert it:
ffmpeg -i recording.h264 -vcodec copy recording.mp4
All but the "footer" of the files are the same.
Output from my program:
readrec recording.tcp out.mp4
** START * 01-03-2013 14:26:01 180000
Output #0, mp4, to 'out.mp4':
Stream #0:0: Video: h264, yuv420p, 352x288, q=2-31, 512 kb/s, 90k tbn, 25 tbc
* END ** 01-03-2013 14:27:01 102000
Wrote 1499 video frames.
If I try to convert the MP4 file created by my code using ffmpeg:
ffmpeg -i out.mp4 -vcodec copy out2.mp4
ffmpeg version 0.11.1 Copyright (c) 2000-2012 the FFmpeg developers
built on Mar 7 2013 12:49:22 with suncc 0x5110
configuration: --extra-cflags=-KPIC -g --disable-mmx
--disable-protocol=udp --disable-encoder=nellymoser --cc=cc --cxx=CC
libavutil 51. 54.100 / 51. 54.100
libavcodec 54. 23.100 / 54. 23.100
libavformat 54. 6.100 / 54. 6.100
libavdevice 54. 0.100 / 54. 0.100
libavfilter 2. 77.100 / 2. 77.100
libswscale 2. 1.100 / 2. 1.100
libswresample 0. 15.100 / 0. 15.100
[h264 @ 12eaac0] no frame!
Last message repeated 1 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
Last message repeated 23 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
Last message repeated 74 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
Last message repeated 64 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
Last message repeated 34 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
Last message repeated 49 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
Last message repeated 24 times
[h264 @ 12eaac0] Partitioned H.264 support is incomplete
[h264 @ 12eaac0] no frame!
Last message repeated 23 times
[h264 @ 12eaac0] sps_id out of range
[h264 @ 12eaac0] no frame!
Last message repeated 148 times
[h264 @ 12eaac0] sps_id (32) out of range
Last message repeated 1 times
[h264 @ 12eaac0] no frame!
Last message repeated 33 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
Last message repeated 128 times
[h264 @ 12eaac0] sps_id (32) out of range
Last message repeated 1 times
[h264 @ 12eaac0] no frame!
Last message repeated 3 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
Last message repeated 3 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
Last message repeated 309 times
[h264 @ 12eaac0] sps_id (32) out of range
Last message repeated 1 times
[h264 @ 12eaac0] no frame!
Last message repeated 192 times
[h264 @ 12eaac0] Partitioned H.264 support is incomplete
[h264 @ 12eaac0] no frame!
Last message repeated 73 times
[h264 @ 12eaac0] sps_id (32) out of range
Last message repeated 1 times
[h264 @ 12eaac0] no frame!
Last message repeated 99 times
[h264 @ 12eaac0] sps_id (32) out of range
Last message repeated 1 times
[h264 @ 12eaac0] no frame!
Last message repeated 197 times
[mov,mp4,m4a,3gp,3g2,mj2 @ 12e3100] decoding for stream 0 failed
[mov,mp4,m4a,3gp,3g2,mj2 @ 12e3100] Could not find codec parameters
(Video: h264 (avc1 / 0x31637661), 393539 kb/s)
out.mp4: could not find codec parameters
I really do not know where the issue is, except that it has to be something to do with the way the streams are being set up. I've looked at bits of code from other people doing a similar thing and tried to use their advice in setting up the streams, but to no avail!
The final code which gave me an H.264/AAC muxed (synced) file is as follows. First, a bit of background information. The data is coming from an IP camera. The data is presented via a third-party API as video/audio packets. The video packets are presented as the RTP payload data (no header) and consist of NALUs that are reconstructed and converted to H.264 video in Annex B format. AAC audio is presented as raw AAC and is converted to ADTS format to enable playback. These packets have been put into a bitstream format that allows the transmission of the timestamp (64-bit milliseconds since Jan 1 1970) along with a few other things.
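For reference, converting raw AAC to ADTS just means prepending a 7-byte header to each AAC frame. Here is a minimal sketch of that conversion, assuming the AAC-LC, 16 kHz, mono parameters used later in this post (the function name is mine, not from the original code):
#include <cstdint>

// Write a 7-byte ADTS header (no CRC) for one raw AAC frame of rawLen bytes.
// Assumes AAC-LC (profile index 1), 16000 Hz (sampling index 8), mono.
void writeAdtsHeader(uint8_t* out, int rawLen)
{
    const int profile = 1;               // audio object type 2 (AAC-LC), stored as 2-1
    const int sampleIdx = 8;             // 16000 Hz
    const int channels = 1;              // mono
    const int frameLen = rawLen + 7;     // ADTS frame length includes the header

    out[0] = 0xFF;                                            // syncword high byte
    out[1] = 0xF1;                                            // syncword low bits, MPEG-4, no CRC
    out[2] = (profile << 6) | (sampleIdx << 2) | (channels >> 2);
    out[3] = ((channels & 0x3) << 6) | ((frameLen >> 11) & 0x3);
    out[4] = (frameLen >> 3) & 0xFF;
    out[5] = ((frameLen & 0x7) << 5) | 0x1F;                  // + buffer fullness (0x7FF) high bits
    out[6] = 0xFC;                                            // buffer fullness low bits, 1 AAC frame
}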
This is more or less a prototype and is not clean in any respect. It probably leaks badly. I do, however, hope this helps anyone else trying to achieve something similar.
Globals:
AVFormatContext* oc = NULL;
AVCodecContext* videoContext = NULL;
AVStream* videoStream = NULL;
AVCodecContext* audioContext = NULL;
AVStream* audioStream = NULL;
AVCodec* videoCodec = NULL;
AVCodec* audioCodec = NULL;
int vi = 0; // Video stream
int ai = 1; // Audio stream
uint64_t firstVideoTimeStamp = 0;
uint64_t firstAudioTimeStamp = 0;
int audioStartOffset = 0;
char* filename = NULL;
Boolean first = TRUE;
int videoFrameNumber = 0;
int audioFrameNumber = 0;
Main:
int main(int argc, char* argv[])
{
if (argc != 3)
{
cout << argv[0] << " <stream playback file> <output mp4 file>" << endl;
return 0;
}
char* input_stream_file = argv[1];
filename = argv[2];
av_register_all();
fstream inFile;
inFile.open(input_stream_file, ios::in);
// Used to store the latest pps & sps frames
unsigned char* ppsFrame = NULL;
int ppsFrameLength = 0;
unsigned char* spsFrame = NULL;
int spsFrameLength = 0;
// Setup MP4 output file
AVOutputFormat* fmt = av_guess_format( 0, filename, 0 );
oc = avformat_alloc_context();
oc->oformat = fmt;
strcpy(oc->filename, filename);
// Setup the bitstream filter for AAC in adts format. Could probably also achieve
// this by stripping the first 7 bytes!
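// (aac_adtstoasc strips the ADTS header from each packet and moves the decoder
// config into the stream's extradata, which is the form the MP4 muxer expects.)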
AVBitStreamFilterContext* bsfc = av_bitstream_filter_init("aac_adtstoasc");
if (!bsfc)
{
cout << "Error creating adtstoasc filter" << endl;
return -1;
}
while (inFile.good())
{
TcpAVDataBlock* block = new TcpAVDataBlock();
block->readStruct(inFile);
DateTime dt = block->getTimestampAsDateTime();
switch (block->getPacketType())
{
case TCP_PACKET_H264:
{
if (firstVideoTimeStamp == 0)
firstVideoTimeStamp = block->getTimeStamp();
unsigned char* data = block->getData();
unsigned char videoFrameType = data[4];
int dataLen = block->getDataLen();
// pps
if (videoFrameType == 0x68)
{
if (ppsFrame != NULL)
{
delete[] ppsFrame; ppsFrameLength = 0;
ppsFrame = NULL;
}
ppsFrameLength = block->getDataLen();
ppsFrame = new unsigned char[ppsFrameLength];
memcpy(ppsFrame, block->getData(), ppsFrameLength);
}
else if (videoFrameType == 0x67)
{
// sps
if (spsFrame != NULL)
{
delete[] spsFrame; spsFrameLength = 0;
spsFrame = NULL;
}
spsFrameLength = block->getDataLen();
spsFrame = new unsigned char[spsFrameLength];
memcpy(spsFrame, block->getData(), spsFrameLength);
}
if (videoFrameType == 0x65 || videoFrameType == 0x41)
{
videoFrameNumber++;
}
// Extract a thumbnail for each I-Frame
if (videoFrameType == 0x65)
{
decodeIFrame(h264, spsFrame, spsFrameLength, ppsFrame, ppsFrameLength, data, dataLen);
}
if (videoStream != NULL)
{
AVPacket pkt = { 0 };
av_init_packet(&pkt);
pkt.stream_index = vi;
pkt.flags = 0;
pkt.pts = videoFrameNumber;
pkt.dts = videoFrameNumber;
if (videoFrameType == 0x65)
{
pkt.flags = 1;
unsigned char* videoFrame = new unsigned char[spsFrameLength+ppsFrameLength+dataLen];
memcpy(videoFrame, spsFrame, spsFrameLength);
memcpy(&videoFrame[spsFrameLength], ppsFrame, ppsFrameLength);
memcpy(&videoFrame[spsFrameLength+ppsFrameLength], data, dataLen);
pkt.size = spsFrameLength + ppsFrameLength + dataLen; // size must be set, or an empty packet is written
pkt.data = videoFrame;
av_interleaved_write_frame(oc, &pkt);
delete[] videoFrame; videoFrame = NULL;
}
else if (videoFrameType != 0x67 && videoFrameType != 0x68)
{
pkt.size = dataLen;
pkt.data = data;
av_interleaved_write_frame(oc, &pkt);
}
}
break;
}
case TCP_PACKET_AAC:
if (firstAudioTimeStamp == 0)
{
firstAudioTimeStamp = block->getTimeStamp();
uint64_t milliseconds_difference = firstAudioTimeStamp - firstVideoTimeStamp;
audioStartOffset = milliseconds_difference * 16000 / 1000;
cout << "audio offset: " << audioStartOffset << endl;
}
if (audioStream != NULL)
{
AVPacket pkt = { 0 };
av_init_packet(&pkt);
pkt.stream_index = ai;
pkt.flags = 1;
pkt.pts = audioFrameNumber*1024;
pkt.dts = audioFrameNumber*1024;
pkt.data = block->getData();
pkt.size = block->getDataLen();
pkt.duration = 1024;
AVPacket newpacket = pkt;
int rc = av_bitstream_filter_filter(bsfc, audioContext,
NULL,
&newpacket.data, &newpacket.size,
pkt.data, pkt.size,
pkt.flags & AV_PKT_FLAG_KEY);
if (rc >= 0)
{
//cout << "Write audio frame" << endl;
newpacket.pts = audioFrameNumber*1024;
newpacket.dts = audioFrameNumber*1024;
audioFrameNumber++;
newpacket.duration = 1024;
av_interleaved_write_frame(oc, &newpacket);
av_free_packet(&newpacket);
}
else
{
cout << "Error filtering aac packet" << endl;
}
}
break;
case TCP_PACKET_START:
break;
case TCP_PACKET_END:
break;
}
delete block;
}
inFile.close();
av_write_trailer(oc);
int i = 0;
for (i = 0; i < oc->nb_streams; i++)
{
av_freep(&oc->streams[i]->codec);
av_freep(&oc->streams[i]);
}
if (!(oc->oformat->flags & AVFMT_NOFILE))
{
avio_close(oc->pb);
}
av_free(oc);
delete[] spsFrame; spsFrame = NULL;
delete[] ppsFrame; ppsFrame = NULL;
cout << "Wrote " << videoFrameNumber << " video frames." << endl;
return 0;
}
The streams/codecs are added and the header is created in a function called addVideoAndAudioStream(). This function is called from decodeIFrame(), so there are a few assumptions (which aren't necessarily good):
1. A video packet comes first
2. AAC is present
decodeIFrame was kind of a separate prototype in which I was creating a thumbnail for each I-frame. The code to generate thumbnails came from: https://gnunet.org/svn/Extractor/src/plugins/thumbnailffmpeg_extractor.c
The decodeIFrame function passes an AVCodecContext into addVideoAndAudioStream:
void addVideoAndAudioStream(AVCodecContext* decoder = NULL)
{
videoStream = av_new_stream(oc, 0);
if (!videoStream)
{
cout << "ERROR creating video stream" << endl;
return;
}
vi = videoStream->index;
videoContext = videoStream->codec;
videoContext->codec_type = AVMEDIA_TYPE_VIDEO;
videoContext->codec_id = decoder->codec_id;
videoContext->bit_rate = 512000;
videoContext->width = decoder->width;
videoContext->height = decoder->height;
videoContext->time_base.den = 25;
videoContext->time_base.num = 1;
videoContext->gop_size = decoder->gop_size;
videoContext->pix_fmt = decoder->pix_fmt;
audioStream = av_new_stream(oc, 1);
if (!audioStream)
{
cout << "ERROR creating audio stream" << endl;
return;
}
ai = audioStream->index;
audioContext = audioStream->codec;
audioContext->codec_type = AVMEDIA_TYPE_AUDIO;
audioContext->codec_id = CODEC_ID_AAC;
audioContext->bit_rate = 64000;
audioContext->sample_rate = 16000;
audioContext->channels = 1;
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
{
videoContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
audioContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
av_dump_format(oc, 0, filename, 1);
if (!(oc->oformat->flags & AVFMT_NOFILE))
{
if (avio_open(&oc->pb, filename, AVIO_FLAG_WRITE) < 0) {
cout << "Error opening file" << endl;
}
}
avformat_write_header(oc, NULL);
}
As far as I can tell, a number of the assumptions didn't seem to matter, for example:
1. Bit Rate. The actual video bit rate was 262k whereas I specified 512kbit
2. AAC channels. I specified mono, although the actual output was stereo, from memory.
You would still need to know the frame rate (time base) for the video and audio.
Contrary to a lot of other examples, setting pts & dts to 0 on the video packets gave an unplayable file. I needed to know the time base (25 fps) and then set the pts & dts according to that time base, i.e. first frame = 0 (PPS, SPS, I), second frame = 1 (intermediate frame, whatever it's called ;)).
For AAC I also had to assume 16000 Hz, with 1024 samples per AAC packet (you can also have AAC @ 960 samples, I think), to determine the audio "offset", which I added to the pts & dts. So the pts/dts are the sample number at which the packet is to be played back. You also need to make sure that a duration of 1024 is set in the packet before writing.
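As a worked example of that arithmetic (hypothetical numbers; the variable names are mine):
#include <cstdint>
#include <iostream>

int main()
{
    // Suppose the first audio packet arrives 500 ms after the first video frame.
    const uint64_t msDifference = 500;
    const int sampleRate = 16000;        // assumed AAC sample rate
    const int samplesPerPacket = 1024;   // samples per AAC packet

    const int64_t audioStartOffset = msDifference * sampleRate / 1000;  // = 8000 samples

    // Each AAC packet is stamped with the sample index at which it plays back:
    for (int n = 0; n < 3; n++)
        std::cout << "packet " << n << ": pts/dts = "
                  << audioStartOffset + (int64_t)n * samplesPerPacket
                  << ", duration = " << samplesPerPacket << std::endl;
    // Prints 8000, 9024, 10048.
    return 0;
}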
—
Additionally, I have found today that Annex B isn't really compatible with any other player, so the AVCC format should really be used.
These URLs helped:
Problem to Decode H264 video over RTP with ffmpeg (libavcodec)
http://aviadr1.blogspot.com.au/2010/05/h264-extradata-partially-explained-for.html
When constructing the video stream, I filled out the extradata & extradata_size:
// Extradata contains PPS & SPS for AVCC format
int extradata_len = 8 + spsFrameLen-4 + 1 + 2 + ppsFrameLen-4;
videoContext->extradata = (uint8_t*)av_mallocz(extradata_len);
videoContext->extradata_size = extradata_len;
videoContext->extradata[0] = 0x01;
videoContext->extradata[1] = spsFrame[4+1];
videoContext->extradata[2] = spsFrame[4+2];
videoContext->extradata[3] = spsFrame[4+3];
videoContext->extradata[4] = 0xFC | 3;
videoContext->extradata[5] = 0xE0 | 1;
int tmp = spsFrameLen - 4;
videoContext->extradata[6] = (tmp >> 8) & 0x00ff;
videoContext->extradata[7] = tmp & 0x00ff;
int i = 0;
for (i = 0; i < tmp; i++)
    videoContext->extradata[8+i] = spsFrame[4+i];
videoContext->extradata[8+tmp] = 0x01;
int tmp2 = ppsFrameLen-4;
videoContext->extradata[8+tmp+1] = (tmp2 >> 8) & 0x00ff;
videoContext->extradata[8+tmp+2] = tmp2 & 0x00ff;
for (i = 0; i < tmp2; i++)
    videoContext->extradata[8+tmp+3+i] = ppsFrame[4+i];
When writing out the frames, don't prepend the SPS & PPS frames; just write out the I-frames and P-frames. In addition, replace the Annex B start code contained in the first 4 bytes (0x00 0x00 0x00 0x01) with the size of the I/P frame.
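The post never shows setLength, but given how it is called above, a minimal sketch would simply overwrite the 4-byte start code with the big-endian length of the NAL payload that follows it (my reconstruction, not the author's actual helper):
#include <cstdint>

// Overwrite the 4-byte Annex B start code (00 00 00 01) at the front of a NAL
// unit with the 32-bit big-endian length of the payload that follows it.
void setLength(unsigned char* nal, uint32_t payloadLen)
{
    nal[0] = (payloadLen >> 24) & 0xFF;
    nal[1] = (payloadLen >> 16) & 0xFF;
    nal[2] = (payloadLen >> 8) & 0xFF;
    nal[3] = payloadLen & 0xFF;
}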
-
My journey to Coviu
27 October 2015, by silvia
My new startup just released our MVP – this is the story of what got me here.
I love creating new applications that let people do their work better or in a manner that wasn’t possible before.
My first such passion was as a student intern when I built a system for a building and loan association’s monthly customer magazine. The group I worked with was managing their advertiser contacts through a set of paper cards and I wrote a dBase based system (yes, that long ago) that would manage their customer relationships. They loved it – until it got replaced by an SAP system that cost 100 times what I cost them, had really poor UX, and only gave them half the functionality. It was a corporate system with ongoing support, which made all the difference to them.
The story repeated itself with a CRM for my Uncle’s construction company, and with a resume and quotation management system for Accenture right after Uni, both of which I left behind when I decided to go into research.
Even as a PhD student, I never lost sight of challenges that people were facing and wanted to develop technology to overcome problems. The aim of my PhD thesis was to prepare for the oncoming onslaught of audio and video on the Internet (yes, this was 1994!) by developing algorithms to automatically extract and locate information in such files, which would enable users to structure, index and search such content.
Many of the use cases that we explored are now part of products or continue to be challenges: finding music that matches your preferences, identifying music or video pieces e.g. to count ads on the radio or to mark copyright infringement, or the automated creation of video summaries such as trailers.
This continued when I joined the CSIRO in Australia – I was working on segmenting speech into words or talk spurts since that would simplify captioning & subtitling, and on MPEG-7 which was a (slightly over-engineered) standard to structure metadata about audio and video.
In 2001 I had the idea of replicating the Web for videos: i.e. creating hyperlinked and searchable video-only experiences. We called it “Annodex” for annotated and indexed video and it needed full-screen hyperlinked video in browsers – man, were we ahead of our time! It was my first step into standards, got several IETF RFCs to my name, and started my involvement with open codecs through Xiph.
Around the time that YouTube was founded in 2006, I founded Vquence – originally a video search company for the Web, but pivoted to a video metadata mining company. Vquence still exists and continues to sell its data to channel partners, but it lacks the user impact that has always driven my work.
As the video element started being developed for HTML5, I had to get involved. I contributed many use cases to the W3C, became a co-editor of the HTML5 spec and focused on video captioning with WebVTT while contracting to Mozilla and later to Google. We made huge progress and today the technology exists to publish video on the Web with captions, making the Web more inclusive for everybody. I contributed code to YouTube and Google Chrome, but was keen to make a bigger impact again.
The opportunity came when a couple of former CSIRO colleagues who now worked for NICTA approached me to get me interested in addressing new use cases for video conferencing in the context of WebRTC. We worked on a kiosk-style solution to service delivery for large service organisations, particularly targeting government. The emerging WebRTC standard posed many technical challenges that we addressed by building rtc.io , by contributing to the standards, and registering bugs on the browsers.
Fast-forward through the development of a few further custom solutions for customers in health and education and we are starting to see patterns of need emerge. The core learning that we’ve come away with is that to get things done, you have to go beyond “talking heads” in a video call. It’s not just about seeing the other person, but much more about having a shared view of the things that need to be worked on and a shared way of interacting with them. Also, we learnt that the things that are being worked on are quite varied and may include multiple input cameras, digital documents, Web pages, applications, device data, controls, forms.
So we set out to build a solution that would enable productive remote collaboration to take place. It would need to provide an excellent user experience, it would need to be simple to work with, provide for the standard use cases out of the box, yet be architected to be extensible for specialised data sharing needs that we knew some of our customers had. It would need to be usable directly on Coviu.com, but also able to integrate with specialised applications that some of our customers were already using, such as the applications that they spend most of their time in (CRMs, practice management systems, learning management systems, team chat systems). It would need to require our customers to sign up, yet allow their clients to join a call without signing up.
Collaboration is a big problem. People are continuing to get more comfortable with technology and are less and less inclined to travel distances just to get a service done. In a country as large as Australia, where 12% of the population lives in rural and remote areas, people may not even be able to travel distances, particularly to receive or provide recurring or specialised services, or to achieve work/life balance. To make the world a global village, we need to be able to work together better remotely.
The need for collaboration is being recognised by specialised Web applications already, such as the LiveShare feature of Invision for Designers, Codassium for pair programming, or the recently announced Dropbox Paper. Few go all the way to video – WebRTC is still regarded as a complicated feature to support.
With Coviu, we’d like to offer a collaboration feature to every Web app. We now have a Web app that provides a modern and beautifully designed collaboration interface. To enable other Web apps to integrate it, we are now developing an API. Integration may entail customisation of the data sharing part of Coviu – something Coviu has been designed for. How to replicate the data and keep it consistent when people collaborate remotely – that is where Coviu makes a difference.
We have started our journey and have just launched free signup to the Coviu base product, which allows individuals to own their own “room” (i.e. a fixed URL) in which to collaborate with others. A huge shout out goes to everyone in the Coviu team – a pretty amazing group of people – who have turned the app from an idea to reality. You are all awesome!
With Coviu you can share and annotate:
- images (show your mum photos of your last holidays, or get feedback on an architecture diagram from a customer),
- pdf files (give a presentation remotely, or walk a customer through a contract),
- whiteboards (brainstorm with a colleague), and
- application windows (watch a YouTube video together, or work through your task list with your colleagues).
All of these are regarded as “shared documents” in Coviu and thus have zooming and annotations features and are listed in a document tray for ease of navigation.
This is just the beginning of how we want to make working together online more productive. Give it a go and let us know what you think.
-
Attribution Tracking (What It Is and How It Works)
23 February 2024, by Erin