Image caption: Kursat Ceylan using the smart cane that he co-developed (WeWALK)
When Kursat Ceylan, who is blind, was trying to find his way to a hotel, he used an app on his phone for directions, but also had to hold his cane and pull his luggage. He ended up walking into a pole, cutting his forehead.

This inspired him to develop, along with a partner, WeWalk - a cane equipped with artificial intelligence (AI) that detects objects above chest level and pairs with apps including Google Maps and Amazon's Alexa, so the user can ask questions.
Jean Marc Feghali, who helped to develop the product, also has an eye condition; in his case, his vision is severely impaired when the light is poor.

While the smart cane itself only integrates with basic AI functions right now, the aim is for WeWalk to use information gathered from the gyroscope, accelerometer and compass installed inside the cane. It will use that data to understand more about how visually impaired people use the product and behave in general, to create a far more sophisticated product using machine learning (a branch of AI).
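As a rough illustration of that idea - not WeWalk's actual pipeline, and with invented sensor fields - raw gyroscope, accelerometer and compass readings could be condensed into simple per-walk features that a machine-learning model might later consume:

```python
# Illustrative only: hypothetical cane sensor readings reduced to
# features describing how the user sweeps and moves the cane.
from statistics import mean, stdev

def extract_features(samples):
    """samples: list of dicts with invented keys 'gyro' (rad/s),
    'accel' (g) and 'heading' (degrees) - one reading per tick."""
    gyro = [s["gyro"] for s in samples]
    accel = [s["accel"] for s in samples]
    headings = [s["heading"] for s in samples]
    return {
        "swing_rate": mean(gyro),          # how briskly the cane sweeps
        "swing_variability": stdev(gyro),  # steady vs erratic sweeping
        "avg_impact": mean(accel),         # taps against the ground
        "heading_drift": max(headings) - min(headings),  # course changes
    }

readings = [
    {"gyro": 1.2, "accel": 0.4, "heading": 90},
    {"gyro": 1.5, "accel": 0.6, "heading": 92},
    {"gyro": 0.9, "accel": 0.5, "heading": 95},
]
features = extract_features(readings)
```

Features like these, rather than raw sensor streams, are the usual input to the kind of behavioural models the company describes.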
This would include the creation of an AI voice service with Microsoft, specifically designed for visually impaired people, and eventually allowing the device to integrate with other internet-connected devices.
"It isn't just meant to be a smart cane; it's meant to be connected with transport networks and autonomous vehicles," Mr Feghali says.
The idea is that WeWalk could interact with traffic lights to help people cross roads without needing to push a button, and could alert a bus to wait at a specific stop ahead of time.

Media caption: Robots using AI to tackle a task they find very difficult - tidying a bedroom

Such innovations would be welcome, but perhaps fall short of the dreams originally inspired by AI.
When the field emerged in the middle of the 20th century, it was hoped that computers would be able to operate on their own, with human-like abilities - a capability known as generalised AI.

"Back in the 1970s, there were predictions that by 2020 we should have generalised AI by now; we should have been having some Moon and Mars bases, and we're nowhere near that," says Aditya Kaul, research director at Omdia.

Progress
has been picking up in recent years as artificial neural networks have become more sophisticated. Inspired by the way the brain forms connections and learns, artificial neural networks are layers of interconnected equations, known as algorithms, which are fed data until they learn to recognise patterns and draw their own conclusions - a process known as deep learning.
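That process can be sketched in a few lines. This toy example, using only NumPy and nothing resembling a production framework, feeds a tiny two-layer network the XOR pattern over and over until its error shrinks:

```python
import numpy as np

# Toy two-layer network learning the XOR pattern by repetition -
# an illustration of "fed data until they learn", not production code.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: inputs differ -> 1

W1 = rng.normal(size=(2, 4))   # weights: input -> hidden layer
W2 = rng.normal(size=(4, 1))   # weights: hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_squared_error():
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

initial_error = mean_squared_error()
for _ in range(5000):                 # feed the same data repeatedly
    hidden = sigmoid(X @ W1)          # first layer's conclusions
    output = sigmoid(hidden @ W2)     # second layer's conclusions
    err = output - y
    # backpropagation: nudge each layer's weights against the error
    grad_out = err * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ grad_out
    W1 -= X.T @ grad_hidden
final_error = mean_squared_error()    # smaller than initial_error
```

Real networks such as AlexNet stack many more layers and learn from millions of examples, but the loop - predict, measure the error, adjust the weights - is the same.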
In 2012, Mr Kaul explains, a neural-network framework known as AlexNet emerged, which started a deep learning revolution.

"That has led to a number of different innovations, from facial recognition to voice and speech recognition, as well as, to some extent, what you see on Netflix or Amazon in personalising and predicting what you want to watch or buy," he says.
Image caption: Paul Newman says deep learning allows tech to attack new problems (Tom Smith Photography)
Paul Newman, founder and chief technology officer of autonomous vehicle software company Oxbotica, likened the development of deep learning to the step change between a hand drill and a power drill. "We can now attack problems that before we would have no idea of how to start," he says.

But if consumers haven't noticed this progress, that may be because it mostly happens behind the scenes.

"If there was a robotic device that was integrated in your office that you see every day, then perhaps people wouldn't be disappointed, but many of the advances in AI are so ingrained in how we work that we just forget about them," says Dennis Mortensen, chief executive and co-founder of x.ai, an AI scheduling tool.
Currently, AI used in everyday life consists of either automating or optimising things that humans can do - whether that is detecting fraud by analysing millions of transactions, sifting through CVs to select the right candidates for a job, or using facial recognition to enable people to get through some form of security.

Image caption: Dennis Mortensen sees networks of artificial personal assistants
Mr Mortensen used his scheduling app to set up a telephone call with me - he just had to tell his virtual assistant, Amy, to find some time for a call next week. Amy then emailed me automatically to select a time and date which worked for both of us.

The next stage of AI, Mr Mortensen says, is to allow Amy to interact with other Amys to co-ordinate schedules.
That means that if there is a network of 100 people who all use x.ai, Amy could effectively schedule meetings for all of these people to meet each other - and others - at convenient times and locations, factoring in their own preferences.
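A toy sketch of that coordination idea - not x.ai's actual algorithm, and with invented calendars - is that if each assistant exposes its owner's free hours, a slot that works for the whole network is simply the intersection of everyone's availability:

```python
# Illustrative only: each "Amy" publishes its owner's free hour slots;
# a meeting time for the group is the intersection of all of them.
def free_slot(calendars):
    """calendars: list of sets of free hours (e.g. 9 = 09:00-10:00).
    Returns the earliest hour everyone has free, or None."""
    common = set.intersection(*calendars)
    return min(common) if common else None

amy_a = {9, 10, 14, 16}   # person A's free hours
amy_b = {10, 11, 14}      # person B's free hours
amy_c = {14, 15, 16}      # person C's free hours
slot = free_slot([amy_a, amy_b, amy_c])  # hour 14 works for all three
```

The hard part in practice is everything this sketch omits - preferences, locations, travel time and negotiation between assistants - which is where the machine learning comes in.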
This would be something even the ablest paid human assistant would not be able to do - and this is where AI is heading.

It's hard to predict when breakthroughs will occur.
But in the last few months there have been world firsts: scientists have used AI to discover the antibiotic properties of an existing drug, while an entirely new drug molecule 'invented' by AI will soon be used in human trials to treat patients who have obsessive-compulsive disorder (OCD).
Prof Andrew Hopkins, chief executive of Exscientia, the company behind the OCD drug, says that drug development usually takes five years to get to trial, as there are potentially billions of design decisions that need to be made - but the AI-designed drug took just 12 months.
"The reason it's accelerated is because we're making and testing fewer compounds, and this is because the algorithms that undertake the design work are able to learn faster and reach the optimised molecule quicker," he says, adding that early-stage drug discovery can result in as much as a 30% cost saving in bringing the drug to market.

Although his team didn't know when the breakthrough would happen, they were confident AI would be the best way to find it.

Image caption: One of the big challenges for autonomous cars is predicting the future (Oxbotica)
But according to Oxbotica's Mr Newman, the "monster problem in AI" is predicting the future. Autonomous cars are reasonably good at identifying stop signs or pedestrians. When it comes to path planning - making decisions about where to go to avoid the pedestrians - there is a long way to go.

But Mr Kaul says that even identifying pedestrians and signs were almost intractable problems for decades, and in the last five years many of these have been solved. He suggests that there may need to be another revolution - like that of AlexNet - to help the industry overcome these other challenges.

Perhaps then, we will see a world of autonomous vehicles, smart canes and transport networks that are all interlinked.
This article is part of a mini-series on disruptive industries. You can find the first, on blockchain, here; the second, on robotics, here; and the third, on 3D printing, here.