Much effort is being put into building the market for technology that supports rich-media online meetings. This segment covers anything from telepresence and high-quality video conferences to private meetings, conference calls and webinars. In due course, driven by mobile and the cloud, it will extend to applications such as multiple-site remote surgery or online customer focus groups.
The common denominator of all these digital meet-ups is that, because they are all about humans communicating with one another, they inevitably produce large amounts of content: recordable language data that can deliver substantial added value to all kinds of stakeholders if properly captured and processed.
To give an idea of the market value, revenue from the global video-conferencing infrastructure market grew by 26.9% in 2011 (to $746M), and in 2012 a company such as Cisco did significantly better. More important for knowledge-management services is the software layer built on top of the unified communications infrastructure, in which telecoms equipment companies such as Avaya have invested heavily.
Just recently, Oracle said it was making a substantial bet on video conferencing as a major business line, and Microsoft, among others, will surely be eyeing the same market with its high-potential Skype asset.
The story of smart meeting technology goes back to the early days of exploring how computing could augment – rather than replace – human intellectual work. It was radically enabled by the research into computer interfaces, networks and graphics in the Augment programme led by Doug Engelbart in the 1960s and 1970s. He tried to adapt the technology of his time to help teams of intellectual workers increase their grasp of complex decision-making and data handling during very large-scale industrial projects.
This singular seam of computing history is often contrasted with the development of the famous Artificial Intelligence agenda, whose focus was on automating essentially human practices such as using language to create meaning and reasoning over semantic entities. This led to software applications for powering processes ranging from medical diagnosis to driving a car.
Augmenting the value of meetings poses a real challenge. Meetings can involve many participants in free-flowing conversations around documents or presentations, generating a huge flow of information, both trivial and critical. Sorting through the inevitable noise to identify the key takeaways is a demanding task, as is the parallel need to check the relevance of all those half-forgotten suggestions, criticisms and expressions of support.
Note that a perfectly intelligent aural record of a meeting would change the psychological dynamics of how people process what happened, producing a cool, detailed photograph rather than human memory’s warm, impressionistic picture. Everyone keeps their own notes as a partial record of a meeting, and there is also a scribe whose job it is to take down the official “minutes”. But what if there were an independent and searchable record of the whole event?
Enter Gridspace, which has come up with an NLP-driven solution for recording and indexing the content of meetings and for collecting and integrating all the associated documentation, producing a searchable knowledge base.
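To make the idea concrete, here is a minimal sketch of the kind of searchable transcript store such a service could build, using SQLite’s FTS5 full-text index (assuming your SQLite build includes FTS5). The table layout, meeting IDs and utterances are invented for illustration; this is not Gridspace’s actual implementation.

```python
# Illustrative sketch only: a minimal searchable meeting archive built on
# SQLite's FTS5 full-text index. Not Gridspace's implementation; the schema
# and sample utterances are invented for this example.
import sqlite3

conn = sqlite3.connect("meetings.db")
conn.execute("""
    CREATE VIRTUAL TABLE IF NOT EXISTS segments
    USING fts5(meeting_id UNINDEXED, speaker, text)
""")

# Each row is one transcribed utterance from a recorded meeting.
segments = [
    ("2014-03-12-budget", "Alice", "We should freeze the Q3 travel budget."),
    ("2014-03-12-budget", "Bob",   "Agreed, but keep the conference allocation in the budget."),
    ("2014-03-19-budget", "Alice", "Revisit the travel freeze decision next month."),
]
conn.executemany("INSERT INTO segments VALUES (?, ?, ?)", segments)
conn.commit()

# Full-text search across every meeting ever indexed.
for row in conn.execute(
        "SELECT meeting_id, speaker, text FROM segments WHERE segments MATCH ?",
        ("travel budget",)):
    print(row)
```

The same index can of course be queried across an entire history of meetings, which is exactly what turns a pile of recordings into a knowledge base.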
The Gridspace application also claims to provide meeting attendees with a dashboard of what the system considers to be the “most important” content of the meeting – a sort of automated minutes. The aim is of course to save time and offer a rapid solution to the post-meeting problem of collating scribbled notes into an “objective” record. Another company operating in this space is VoiceBase.
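The “automated minutes” idea can likewise be illustrated with a toy extractive approach: score each utterance by how often its content words recur across the transcript and keep the highest-scoring ones. Commercial systems use far richer NLP; the stopword list, scoring rule and sample transcript below are simplifications invented for this sketch.

```python
# Toy sketch of "automated minutes": rank utterances by the average corpus
# frequency of their content words and keep the top few. Real products use
# far more sophisticated summarisation than this.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "but", "we", "should", "to", "of", "in", "at", "it"}

def automated_minutes(utterances, top_n=2):
    def content_words(s):
        return [w for w in re.findall(r"[a-z0-9']+", s.lower()) if w not in STOPWORDS]
    freq = Counter(w for u in utterances for w in content_words(u))
    def score(u):
        ws = content_words(u)
        return sum(freq[w] for w in ws) / len(ws) if ws else 0.0
    return sorted(utterances, key=score, reverse=True)[:top_n]

transcript = [
    "We should freeze the Q3 travel budget.",
    "Agreed, but keep the conference allocation in the budget.",
    "My dog was sick this morning, sorry I'm late.",
    "Let's revisit the travel budget freeze at the next meeting.",
]
for line in automated_minutes(transcript):
    print("-", line)
```

Even this crude scoring pushes the budget discussion to the top and the small talk to the bottom, which is the essence of what an automated-minutes dashboard is trying to do.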
Four leads for adding functionality to future meeting apps:
1. Although small-group online meetings may use a single language – possibly lingua-franca English – any meeting knowledge support system will eventually need to be multilingual in scope. In some cases, interpreters could be integrated into the workflow (with the attendant data capture issues); in others, subtitles could be used to simplify communication (a minimal subtitling sketch follows this list). Subtitle companies using speech recognition to aid multilingual access include the people behind Jibbigo (before it was bought by Facebook – no news since), as well as Translate Your World, which claims subtitle translation in 78 languages and automated voice translation in 35. A hard linguistic nut to crack, of course, but essential in the long run.
2. “Intelligent Meeting” applications will also need to be able to consolidate a whole historical series of meetings on the same topics and summarise their contents. They should provide people with references to previous meetings, what people said before, what updates have been shared by email, and so on – in other words, a fully-fledged ideas monitor that takes on the burden of searching and consolidating information and turns it into usable input for everyone involved in the meeting.
3. Other add-ons will almost certainly be dreamt up to improve meeting productivity in due course. In a BYOD world, wearable computing devices such as smart glasses could well turn into a meeting interface for some, requiring a number of adaptations to the meeting scenario. Fine-tuned emotion recognition software may also find its way into meeting software, helping participants gauge the mood of their fellow attendees from their facial expressions or manner of speaking. This would enable people who have never met before to rapidly evaluate each other’s personalities – sinister, but possibly manageable.
4. Knowledge technology advances in online meetings/webinars will almost certainly extend to the world of education and training (e.g. MOOCs) and open up an interesting multilingual marketplace for smart apps that help learners engage more easily, understand more intelligently, and access richer knowledge repositories – again, across the language spectrum.
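As promised under point 1, here is a minimal sketch of multilingual subtitle generation from timed speech-recognition output, producing standard SRT subtitle blocks. The `translate()` function is a placeholder for whatever machine-translation backend you plug in; none of the vendors mentioned above expose this exact interface, and the segment timings are invented.

```python
# Sketch: turn timed ASR segments into translated SRT subtitles.
# translate() is a hypothetical stand-in for a real MT service.
from datetime import timedelta

def translate(text, target_lang):
    # Placeholder: call your machine-translation backend here.
    return f"[{target_lang}] {text}"

def srt_timestamp(seconds):
    # SRT timestamps use the form HH:MM:SS,mmm
    total_ms = int(timedelta(seconds=seconds).total_seconds() * 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(segments, target_lang):
    """segments: list of (start_sec, end_sec, text) tuples from speech recognition."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n"
                      f"{translate(text, target_lang)}\n")
    return "\n".join(blocks)

segments = [(0.0, 2.5, "Welcome everyone."), (2.5, 6.0, "Let's review the budget.")]
print(to_srt(segments, "fr"))
```

Swapping in a real MT engine per target language is where the hard linguistic work lies; the subtitle plumbing itself is the easy part.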