How to hold the fort on the AI bubble without a ‘killer app’


Does AI research need its own 'RealPlayer' to bridge the PR gap between research and applicability?

Photo by William Iven on Unsplash

In the AI research and application sphere there’s evident tension between the need to maintain excitement and funding, and the embarrassing truth that state-of-the-art neural networks are still extraordinarily experimental — or even merely conceptual.

If the current groundswell of interest in artificial intelligence should peak and then abate, the field risks suffering the same kind of ‘false dawn’ that virtual reality experienced in the early 1990s, as business gradually realizes that the enabling technologies and milestones might be a decade or more away.

It’s a grim prospect, since the bursting of the AI bubble would recall the AI winter of the 1970s.

The confidence game

Some of the main commercial engines of the massive resurgence of interest in AI over the last five years are already threatened by legislative, political and economic factors.

Self-driving technologies face years — if not decades — of regulatory hurdles, and any globalized race-to-the-bottom in this respect seems likely to be undermined by the scandal of headline-grabbing setbacks, if history is any indication.

A withdrawal of commercial interest in autonomous vehicles would directly affect some of the most active AI research sectors, including image recognition and video analysis, as well as having an indirect effect on related sectors such as security and facial recognition.

If business becomes chary of continuing commitment to the huge body of pre-commercialized research currently under way, it may fall to more reliable or self-motivated sectors such as military research (drones and automated weapons systems) and data analysis (Google and other major players in online and mobile advertising) to pick up the slack.

However, these sectors are not only more secretive than the general assembly of current academic research, but are also benefiting directly from this cross-pollinated frenzy of activity.

This applies particularly to the budget-stricken US military, whose AI tenders have slanted towards economical ‘off-the-shelf’ solutions over the last 6-7 years. Whether these sectors would welcome a shift in role from passenger to prime mover is debatable.

In search of the ‘live’ AI killer app

All the most exciting talk around the future and potential of AI centers on the capabilities of deep learning; yet most of the anticipated ‘killer apps’ of Machine Learning demand latencies, throughput and accuracy levels that the current generation of neural networks cannot yet deliver; networks for which even the central defining frameworks and methodologies are still in flux.

For instance, the most promising of the research papers which I read daily offer systems with accuracy or efficacy in the high nineties. Impressive milestones in many cases; but considering some of the potential critical AI applications which most excite the business sector, that’s tantamount to an ‘acceptable kill ratio’ — the kind of collateral PR damage that can take a company down amidst accusations of dangerously early monetization policies.

Additionally, neural networks represent prototype technology; though open source datasets, methodologies and platforms are becoming established, the field is open enough and volatile enough to be susceptible to disruption on any given day.

It’s a situation destined to excite researchers but stay the enthusiasm and commitment of the funders who enable them.

Holding the fort

It could be argued that big league, ‘full fat’ AI needs the rise of some critically optimized but nonetheless dazzling application which can directly leverage neural network capabilities in a live and very public network environment rather than providing machine-learnt guidance on what kind of ‘baked’ algorithm to institute.

Something a lot, lot more impressive than the current state of the art in personal assistants and chatbots. Something genuinely driven by a very lean Deep Learning System, but which can respond to input at faster-than-postal timeframes.

VR never got that ‘placeholder’ technology. Neither Nintendo’s Virtual Boy nor Apple’s stunning QuickTime VR format was able to hold back the long VR winter that followed The Lawnmower Man.

QuickTime and the much-criticized RealPlayer did function quite effectively as commercial life-support for video streaming in the late 1990s during the wait for widely accessible and affordable broadband, as well as more efficient video codecs and the eventual open source revolution. Though it may shock those who remember it, I suggest that AI needs its own RealPlayer: a functional, usable application which provides a hint of what will come later.

Slimmed-down neural networks

One group of UC Berkeley researchers is currently addressing the concern that small problems are routinely being thrown at over-specced Machine Learning networks. In the paper IDK Cascades: Fast Deep Learning by Learning Not to Overthink, the research team paints a picture of the median deep learning system as a kind of Spruce Goose that often carries just one undemanding passenger. It suggests a lighter vehicle, at least for the time being.

The paper defines ‘real time’ as a response delivered in under 200ms under heavy query load, potentially handling millions of live streams while delivering results with a realistically limited array of GPUs and affordable power consumption.

There is a good body of recent research examining the potential for ‘stripped down’ deep learning systems, and the Berkeley paper offers a new iteration of a previously suggested tool, dubbed IDK (‘I don’t know’) Cascades.

Much as was the case in early attempts at text and image compression (particularly LZW), researchers have offered compressed neural network models which trade accuracy against supervision, against time, and against many other possible factors.

IDK Cascades chain models of increasing cost: a cheap model answers a query outright when it is confident in its prediction, and defers (‘I don’t know’) to a heavier model only when it is not, avoiding the need to take every input through every stage of the entire exhausting process.
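As a rough sketch of that control flow (this is not the paper’s implementation; the function names and the confidence measure are hypothetical stand-ins), a cascade might look like this:

```python
import numpy as np

def idk_cascade(x, models, confidence, thresholds):
    """Run models from cheapest to most expensive, returning the first
    prediction whose confidence clears that stage's threshold; the final
    (heaviest) model always answers."""
    for model, threshold in zip(models[:-1], thresholds):
        probs = model(x)
        if confidence(probs) >= threshold:
            return int(np.argmax(probs))   # cheap model is confident: answer now
        # otherwise this stage effectively says "I don't know" and defers
    return int(np.argmax(models[-1](x)))   # fall through to the full network

# Toy usage: stubs standing in for a small and a full-size network
cheap = lambda x: np.array([0.96, 0.03, 0.01])
full  = lambda x: np.array([0.50, 0.30, 0.20])
print(idk_cascade(None, [cheap, full], lambda p: p.max(), [0.9]))  # 0
```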

The researchers studied the effectiveness of two IDK techniques, Entropy and Class. Class relies on a simple model that can obtain high accuracy by throwing away potential fields of investigation on the probability that they will not be necessary to satisfactorily complete the task.

For instance, this approach has worked well in facial recognition tasks by discarding in advance large sections of image area which are clearly not going to contain facial information — an analogue of the way lossy image/video compression programs will discard or downgrade large areas of flat color (such as a blue sky) in order to concentrate resolution on areas of higher detail. By contrast, the Cascade by Entropy method uses the entropy of a model’s output distribution as an uncertainty signal: a confident, low-entropy prediction is returned immediately, while a high-entropy one is escalated to the next stage.
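For the entropy variant, here is a minimal sketch of that uncertainty signal, assuming the model emits a softmax distribution over classes (the probabilities and the threshold below are purely illustrative):

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of a predicted class distribution (higher = less sure)."""
    probs = np.clip(probs, 1e-12, 1.0)    # guard against log(0)
    return -np.sum(probs * np.log(probs))

confident = np.array([0.97, 0.02, 0.01])  # peaked: the model is sure
uncertain = np.array([0.40, 0.35, 0.25])  # flat: the model is guessing
THRESHOLD = 0.5                           # illustrative cutoff, tuned per task
print(entropy(confident) < THRESHOLD)     # True  -> answer immediately
print(entropy(uncertain) < THRESHOLD)     # False -> 'I don't know', escalate
```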

Results from test passes around image classification, object detection, autonomous driving and checkerboard synthetic experiments proved extremely encouraging even when compared against a non-optimized equivalent neural network.

[Graph: IDK Cascade results. Source: Arxiv / Pixabay]

The Berkeley team conclude that the IDK Cascade approach can substantially reduce the number of layer invocations in a traditional model with 'minimal loss' in accuracy (see [b] in image above).

It’s encouraging to see research that realistically addresses the shortfall between the rigor of the development timeline and the pragmatic need for potential commercial applicability; the ongoing energy of the AI research movement may depend on this kind of 'interstitial' innovation.

Whatever AI’s RealPlayer turns out to be, it’s probably not going to be critical or abstract in nature. But it will need to be quick, functional, able to fail gracefully, and, it seems, to arrive soon. We can mock it later, but we need it now.
