
AI at the edge: the next goldmine?
The term ‘edge’ rose to popularity as the Internet of Things (IoT) became a reality. What constitutes the edge, however, depends on your point of view. To some, particularly those working with big data, the edge is the last point of high-speed connectivity and high-bandwidth compute. To many Industrial IoT (IIoT) providers, it more typically means a remote location where a battery-powered or energy-harvesting sensor collects environmental and operational data and sends it back to the cloud over long-range, ultra-low-power wireless links.
Enter artificial intelligence. Machine-learning and deep-learning techniques are becoming omnipresent, forming part of everyday life at home and at work: easing our commute and powering our interactions with smart devices. In fact, the best AI experience is probably the one we don’t even notice; it just works. As our journey with machine learning continues, however, we will expect it to be with us all the time – and, given that machine learning demands substantial compute resources, how can it happen at the edge?
In this white paper we investigate the constraints on running inference at the edge, along with examples of technological advances that will reach the edge very soon. We also look at the different facets of edge computing and how its meaning shifts with the application.
AI at the edge has been described as ‘the next goldmine’. Find out why in this easy-to-follow white paper.