The startup has just pulled in $12 million to continue its pursuit of bringing AI to the edge.

I wrote about the company when it spun off of Seattle-based, Paul Allen-backed AI2; its product is essentially a proprietary method of rendering machine learning models in terms of operations that can be performed quickly by nearly any processor.
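The company doesn't spell out the method here, but one well-known way to reduce a network to operations that nearly any chip handles quickly is to binarize weights and activations, so the multiply-accumulate at the heart of each layer becomes a bitwise XNOR plus a population count. The sketch below is a minimal illustration of that general idea, not XNOR's actual implementation; the packing scheme and vector size are assumptions for the example.

```python
# Illustrative sketch (not XNOR.ai's code): a binarized dot product.
# If weights and activations are constrained to {-1, +1} and packed into
# bits, the core multiply-accumulate reduces to bitwise XNOR followed by
# a popcount -- cheap operations available on virtually any processor.
import numpy as np

def pack_signs(v):
    """Pack a {-1, +1} vector into a bit array (1 bit per element)."""
    return np.packbits(v > 0)

def binary_dot(a_bits, b_bits, n):
    """Dot product of two packed {-1, +1} vectors of length n.

    XNOR marks positions where the signs agree; each agreement contributes
    +1 and each disagreement -1, so the dot product is 2 * matches - n.
    """
    xnor = np.unpackbits(~(a_bits ^ b_bits))[:n]  # 1 where signs agree
    matches = int(xnor.sum())                     # popcount
    return 2 * matches - n

rng = np.random.default_rng(0)
n = 1024
a = rng.choice([-1, 1], size=n)
b = rng.choice([-1, 1], size=n)

# The bit-twiddled result matches the ordinary floating-point dot product.
assert binary_dot(pack_signs(a), pack_signs(b), n) == int(a @ b)
```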
The speed, memory and power savings are huge, enabling devices with bargain-bin CPUs to perform serious tasks like real-time object recognition and tracking that normally take serious processing chops to achieve. Since its debut it has taken in $2.6 million in seed funding and has now followed up with this $12 million round.

Most AI applications today do their magic by means of huge banks of computers in the cloud. You send your image or voice snippet or whatever, it does the processing with a machine learning model hosted in some data center, then sends the results back. XNOR's approach runs the model on the device itself, even one with extremely limited processing power and RAM.
Imagine a device sitting in the living room of your home, monitoring for voices, responding to commands, sending its video feed in to watch for unauthorized visitors or emergencies. All of that data would ordinarily have to make a round trip to the cloud and back. Better not to send it at all. This has the pleasant byproduct of not requiring what might be personal data to be shipped off to some cloud server, where you have no say in how it is stored or used.
Although AI developers are multiplying, comparatively few are trying to run on resource-limited devices like old phones or cheap security cameras. That is exactly the kind of constraint XNOR targets: say you have such-and-such an ARM core and camera and you need to render at five frames per second but only have 128 MB of RAM to work with.
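To make the constraint concrete, here is a quick back-of-the-envelope sketch; the parameter count and precisions are illustrative assumptions, not XNOR's figures. At five frames per second the whole pipeline has to fit in a 200 ms window, and the weights of a modestly sized float32 model would by themselves eat most of a 128 MB budget, which is why recasting the model in cheaper, lower-precision operations matters.

```python
# Back-of-the-envelope sketch with made-up numbers (not XNOR's figures):
# what a 5 fps / 128 MB budget actually means for a vision model.

FPS = 5
RAM_BUDGET_MB = 128
PARAMS = 25_000_000  # hypothetical detector, roughly ResNet-50-sized

# Every stage -- capture, preprocessing, inference, postprocessing --
# must fit inside the per-frame time budget.
frame_budget_ms = 1000 / FPS
print(f"Per-frame time budget: {frame_budget_ms:.0f} ms")

# Weight storage alone, before activations and the rest of the system:
for name, bits in [("float32", 32), ("int8", 8), ("binary", 1)]:
    mb = PARAMS * bits / 8 / 1e6
    print(f"{name:>7} weights: {mb:6.1f} MB "
          f"({mb / RAM_BUDGET_MB:5.1%} of the 128 MB budget)")
```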
The plan rolls out in phases: a later one is offering the service to companies whose proprietary hardware is not on the supported list. New models will be able to be trained up on demand. And Phase 3 is taking models that normally run on cloud infrastructure and adapting and optimizing them to run at the edge instead.
The ambition, in other words, is to change how AI is done fundamentally, not just to optimize the current approach and deliver it at lower cost.

Edge-based AI models will surely be increasingly important as the efficiency of algorithms improves, the power of devices rises and the demand for quick-turnaround applications grows. XNOR seems to be among the vanguard in this emerging area of the field, but you can almost certainly expect competition to expand along with the market.