Artificial Intelligence (AI) Solutions, particularly those based on Deep Learning in areas such as Computer Vision, are typically built and run in a cloud-based environment that provides heavy computing capacity.

Inference is less compute-intensive than training, but latency matters far more, since a deployed model must deliver real-time results. Most inference is still performed in the cloud or on a server, but as the diversity of AI applications grows, the centralized training and inference paradigm is coming into question.

Artificial Intelligence (AI) & Computer Vision Solutions on Edge Devices

It is possible, and becoming easier, to run AI and Machine Learning with analytics at the Edge today, depending on the size and scale of the Edge site and the particular system being used. While Edge site computing systems are much smaller than those found in central data centers, they have matured, and now successfully run many workloads due to an immense growth in the processing power of today’s x86 commodity servers. It’s quite amazing how many workloads can now run successfully at the Edge.

StrataHive’s Edge-based AI Solutions for Computer Vision

At StrataHive, we have Ready-to-Deploy Computer Vision Deep Learning Models, with world-class accuracy levels, in the areas of: Object Detection, Face Detection and Recognition, Brand Logo Detection, Display Execution, Shelf Execution, Shelf Inspection, Shelf Insights, and Reading Text through Character Recognition.

As a strategic thrust, we are extending the above AI & Computer Vision offerings to Edge devices. The primary drivers of these offerings are:

  1. Cost-Effectiveness
  2. Compact and Standalone Processing
  3. Ready-to-deploy Models
  4. Integration capabilities with NVRs, PLCs, IoT and IIoT devices

The technology underlying these solutions is primarily built and tested around the promising Raspberry Pi 3 B+ Model with the Intel Movidius NCS 2 co-processor.

What’s in it for Our Clients?

• Faster real-time results at scale: By running image recognition on-premise and analyzing images/video streams in real time, we deliver significantly faster insights and compliance levels, at high availability and at scale.

• Enhanced operational reliability: Our solution allows businesses to process images locally and receive actionable insights from connected devices without worrying about connectivity issues on the Manufacturing / Retail premises.

• Increased security for devices and data: By analyzing images locally on edge devices, the solution eliminates the need to send large volumes of potentially sensitive data to the cloud.

What is AI on the Edge?


AI Edge processing today is focused on moving the inference part of the AI workflow to the device, keeping data constrained to the device. The factors that primarily drive the choice of Cloud Processing or Edge Processing are — Privacy, Security, Cost, Latency and Bandwidth.

Applications like autonomous driving have sub-millisecond (ms) latency requirements, while others like voice/speech recognition on smart speakers need to take privacy concerns into account. Keeping AI processing on the edge device circumvents privacy concerns, while avoiding the bandwidth, latency, and cost concerns of cloud computing.
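As a back-of-the-envelope illustration (all numbers below are assumptions, not measurements of any particular system), keeping inference on-device removes the network round trip from the latency budget:

```python
# Illustrative latency budget: on-device inference vs. a cloud round trip.
# All numbers are assumptions for the sake of the example.

def total_latency_ms(inference_ms, network_rtt_ms=0.0):
    """Total time to a prediction: network round trip plus inference."""
    return network_rtt_ms + inference_ms

# Cloud: a fast accelerator, but every request crosses the network.
cloud = total_latency_ms(inference_ms=5.0, network_rtt_ms=80.0)

# Edge: a slower chip, but no network hop at all.
edge = total_latency_ms(inference_ms=30.0)

print(f"cloud: {cloud} ms, edge: {edge} ms")
# Against a 200 ms interactive budget both fit; against a sub-millisecond
# budget, as in autonomous driving, neither would without faster hardware.
assert edge < cloud
```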

The impact of model compression techniques like Google’s Learn2Compress, which enable squeezing large AI models into small hardware form factors, is also contributing to the rise of AI edge processing.

Federated learning and blockchain-based decentralized AI architectures are also part of the shift of AI processing to the edge with part of the training also likely to move to the edge.

Depending on the AI application and device category, there are several hardware options for performing AI edge processing. The options include central processing units (CPUs), GPUs, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGA) and system-on-a-chip (SoC) accelerators.

The edge, for the most part, refers to the device and does not include network hubs or micro data centers, except in the case of security cameras where network video recorders (NVRs) are included.

Drivers for Moving AI to the Edge

There are several drivers for moving AI processing to the edge:

  1. Privacy is one of the main drivers for moving AI to the edge, especially for consumer devices like mobile phones, smart speakers, home security cameras, and consumer robots. There has been a general acceptance among consumers of sharing their data in exchange for benefits; however, there has been a resurgence in consumer concern about data collection and privacy. Companies like Apple are using privacy as a competitive differentiator, developing their own hardware to enable AI edge processing.
  2. Network latency impacts autonomous mobility in drones, robots, and autonomous cars, with all device categories likely to have sub-ms latency requirements. However, network latency is also important from a consumer experience perspective, with Google’s auto suggest application on mobile phones having a 200-ms latency requirement.
  3. Bandwidth will impact vision-based applications like augmented reality (AR), virtual reality (VR), and mixed reality (MR) on head-mounted displays (HMDs), where bandwidth requirements will grow from 2 megabits per second (Mbps) to 20 Mbps today, and on to 50 Mbps as HMDs support 360° 4K video.
  4. Security is an important consideration for AI use cases such as security cameras, autonomous cars, and drones. Having the edge device store and process the data locally increases redundancy and reduces the number of security vulnerabilities in general, although hardened silicon and secure hardware packaging is critical to prevent tampering or physical attacks.
  5. The cost of performing AI processing in the cloud versus at the edge needs to consider the cost of AI device hardware, cost of bandwidth, and cost of AI cloud/server processing. While these costs will vary with application and device, in most cases, the cost of doing edge-based processing is likely to be orders of magnitude lower than cloud-based processing, with bandwidth costs being the key determining factor.
  6. The ability to run large-scale DNN models on the edge device is not just a function of improvements in hardware, but also in the improvements in software and the techniques that are able to compress models to fit into small hardware form factors with limited power and performance capabilities. Several frameworks and techniques support model compression, including Google’s TensorFlow Lite, Facebook’s Caffe2Go, Apple’s CoreML, Nervana’s Neural Network Distiller, and SqueezeNet.
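The scale of the saving from such techniques can be illustrated with a minimal, framework-free sketch. The scheme below, symmetric post-training int8 weight quantization, is just one of the many techniques those frameworks implement far more thoroughly, and the matrix size and values are illustrative:

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 plus a single scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# 4x smaller (1 byte instead of 4 per weight) for a small rounding error.
print("size ratio:", w.nbytes / q.nbytes)         # 4.0
print("max abs error:", float(np.abs(w - w_hat).max()))
```

The rounding error is bounded by half the scale factor, which is why accuracy typically degrades only slightly after quantization.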

Prominent Players offering AI on the Edge

The AI edge hardware market ecosystem is a mix of established semiconductor companies such as NVIDIA, Intel, Qualcomm, and ARM. In-house hardware development is another trend to watch, with Google leading the market with its tensor processing unit (TPU) chipset, including the Edge TPU.

The Top 3 Products are essentially:

  1. Nvidia Jetson Nano
  2. Intel Neural Compute Stick 2
  3. Google Edge TPU Dev Board

Some Applications of AI deployed on the Edge

AI is powering a lot of visual and audio intelligence and enables new interesting and valuable use cases. Some examples include:

  • Security and home cameras: Smart detection of when important activities are happening, without requiring 24/7 video streaming (for example, detecting a person rather than a robot vacuum cleaner).
  • Virtual assistant (smart speaker, phone, etc.): Personalization for natural and intuitive conversations and visual interfaces.
  • Phones: Naturally, the smartphone is the pervasive platform for AI. Your phone will detect your context, such as whether you are in the car. You can also apply machine learning on smartphones for a better user experience, such as improved power management for better battery life, enhanced photography, and on-device malware detection, among many other examples.
  • Smart transportation: On-device AI is beneficial, for example, for sending less data to the cloud in order to know how many seats are available on a bus.
  • Industrial IoT: Automating the factory of the future will require lots of AI, from visual inspection for defects to intricate robotic control for assembly.
  • Drones/robots: Self-navigation in unknown environments, as well as coordination with other drones/robots.
  • Auto: Machine learning for passenger safety, scene understanding, sensor fusion, path planning, etc. The huge, real benefit of autonomous driving is saving lives and time.

How to Deliver Edge-based AI Solutions?

Getting to Edge-based solutions is not an overnight task; it typically involves creating the analytics model, deploying the model, and executing the model at the edge. Decisions need to be made in each of these areas with respect to collecting data, preparing data, selecting the algorithms, training the algorithms on a continuous basis, deploying/redeploying the models, etc. The processing/storage capacity at the Edge also plays a key role. Some of the emerging deployment models include decentralized and peer-to-peer models, with pros and cons for each.


So how do you go about implementing this? I will talk about one approach of building this out:

  1. Build your own CNN or start with a pre-trained CNN network (like an Inception model).
  2. Get the training and test data (images and labels).
  3. Train or retrain (i.e., transfer learning) the network.
  4. Optimize with learning rates, batch size, etc., for the desired accuracy. Get the trained model.
  5. Use the trained model for classification.
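To make these steps concrete, here is a minimal NumPy-only sketch of the transfer-learning idea in step 3: the pretrained backbone is frozen and only a new classification head is trained. The random-projection "backbone" and the toy labels are stand-in assumptions; in practice the features would come from a real network such as Inception with its top layer removed.

```python
import numpy as np

rng = np.random.default_rng(1)
PROJ = rng.normal(size=(64, 16)) / 8.0  # frozen backbone weights, never updated

def frozen_backbone(inputs):
    """Stand-in for a pretrained CNN body: input -> feature vector."""
    return np.tanh(inputs @ PROJ)

# Toy data: labels are linearly separable in the extracted feature space.
X = rng.normal(size=(200, 64))
feats = frozen_backbone(X)
true_w = rng.normal(size=16)
y = (feats @ true_w > 0).astype(float)

# Train only the head: logistic regression by gradient descent (step 4
# would tune the learning rate, batch size, etc.).
w, b = np.zeros(16), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid
    grad = p - y                                # dLoss/dlogits
    w -= 0.5 * feats.T @ grad / len(y)
    b -= 0.5 * grad.mean()

acc = ((feats @ w + b > 0) == (y > 0.5)).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

Because only the small head is trained, this is exactly the kind of workload that fits within the compute budget of an edge device.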

Advantages of Edge-based AI Solutions

The advantages of AI-enhanced decision-making at the edge include the following:

  1. Edge-based AI is highly responsive and closer to real-time than the typical centralized IoT model deployed to date. Insights are immediately delivered and processed, most likely within the same hardware or devices.
  2. Edge-based AI ensures greater security. Sending data back and forth with Internet-connected devices subjects data to tampering and exposure even without anyone being aware. Processing at the edge minimizes this risk, with an additional plus: Edge-based AI-powered devices can include enhanced security features.
  3. Edge-based AI is highly flexible. Smart devices support the development of industry-specific or location-specific requirements, from building energy management to medical monitoring.
  4. Edge-based AI doesn’t require a PhD to operate. Since they can be self-contained, AI-based edge devices don’t require data scientists or AI developers to maintain. Required insights are either automatically delivered where they are needed, or visible on the spot through highly graphical interfaces or dashboards.
  5. Edge-based AI provides for superior customer experiences. By enabling responsiveness through location-aware services, or rerouting travel plans in the event of delays, AI helps companies build trust and rapport with their customers.

As we move forward into the highly connected digital economy, inevitably, intelligence will move to the edge. The powerful combination of AI and the IoT opens up new vistas for organizations to truly sense and respond to events and opportunities around them.

Cloud-based AI also poses a limitation in environments with limited or no connectivity, whether because of a lack of communications infrastructure or because of the sensitivity of the operations and information involved. The only alternative to cloud servers is proprietary data centers, which are costly to set up and maintain.

Remote locations such as countryside farms, which can benefit immensely from artificial intelligence, will have limited access to AI applications because of their poor connectivity. As IoT moves into more remote and disconnected environments, the necessity of edge or fog computing will become more prevalent.

Trends in Edge-based AI Solutions

There are several ways that AI can be pushed to the edge and help expand the domains of its applications.

  • Distributed computing: A lot of the computing power across the world goes to waste as devices remain idle. While the processing power of those devices might not be enough to perform data-intensive AI algorithms, their combined resources will be able to tackle most tasks. Blockchain, the technology that underlies cryptocurrencies, provides an interesting solution to create decentralized computers from numerous devices. Blockchain is especially suited for IoT environments.
  • AI co-processors: As GPUs helped drive new innovations in digital imagery such as gaming and rendering, AI co-processors can drive similar advances in the AI industry. Until now, GPUs have been used for the same purpose because of their immense power in performing parallel operations. The trend has pushed companies like Nvidia, which were exclusively geared toward graphics processing, to make inroads into the field of AI. We’re now seeing the emergence of external AI processors such as the Movidius Neural Compute Stick, which provides deep learning computing power at the edge.
  • Advanced algorithms: Scientists and researchers are working on algorithms that more closely mimic human brain function, requiring less data to understand concepts and make decisions. This can help lower the barriers to bringing AI closer to the edge.

The development and combination of these trends can hopefully make it possible to execute AI algorithms closer to where operations are taking place. Edge computing will not be a replacement for the power of the cloud. It can, however, make AI’s operation model resemble that of humans: perform routine and time-critical decisions at the edge, and refer to the cloud where more intensive computation and historical analysis are needed.
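A toy dispatcher can illustrate this hybrid model: routine and time-critical decisions stay on the edge device, while jobs needing heavy computation over historical data go to the cloud. The task fields and the 100 ms threshold below are illustrative assumptions:

```python
def route(task):
    """Decide where a task should run: 'edge' or 'cloud'."""
    # Time-critical or routine, history-free work stays on the device.
    if task["deadline_ms"] < 100 or not task["needs_history"]:
        return "edge"
    # Heavy historical analysis is deferred to the cloud.
    return "cloud"

tasks = [
    {"name": "detect person in frame", "deadline_ms": 30, "needs_history": False},
    {"name": "weekly shelf-trend report", "deadline_ms": 3_600_000, "needs_history": True},
]

for t in tasks:
    print(t["name"], "->", route(t))
# detect person in frame -> edge
# weekly shelf-trend report -> cloud
```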

“The shift to the edge for AI processing will be driven by cheaper edge hardware, mission-critical applications, a lack of reliable and cost-effective connectivity options, and a desire to avoid expensive cloud implementation.”

StrataHive’s Training Offering in Edge-based AI Solutions for Computer Vision

Encouraged by the growing demand for Edge-based AI Solutions for Computer Vision, we have a specialized training offering encompassing the following topics:

Module 1: Traditional Computer Vision and Deep Learning Fundamentals

Module 2: Creating an Image Classifier Application using the Intel Movidius NCSDK

Module 3: Deploying a Custom Convolutional Neural Network using the Intel Movidius NCSDK

Module 4: Creating a Smart Camera with Raspberry Pi 3 B+ and Intel Movidius NCS 2

Module 5: Deep neural network image processing systems on low-power devices


Organizations will continue to address AI data management challenges by architecting powerful and highly available edge computing systems, which will lower customer costs. New technologies that were previously cost-prohibitive will become more viable over time, and find uses in new markets.

Previously, powerful AI apps required large, expensive data center-class systems to operate. But edge computing devices can reside anywhere. AI at the edge offers endless opportunities that can help society in ways never before imagined.

Edge-based inferencing will become a foundation of all AI-infused applications in the Internet of Things and People, and the majority of new IoT application-development projects will involve building AI-driven smarts for deployment to edge devices for various levels of local, sensor-driven inferencing.

With the growing popularity of connected devices and the evolution of the Internet of Things (IoT), many industries such as retail, manufacturing, transportation, and energy are generating vast amounts of data at the edge of the network. Edge analytics is data analytics performed in real time and in situ, i.e., on site where the data collection is happening.

Even though edge analytics is an exciting area, it should not be viewed as a potential replacement for central data analytics. Both can and will supplement each other in delivering data insights, and both models have their place in organizations. One compromise of edge analytics is that only a subset of data can be processed and analyzed at the edge, and only the results may be transmitted over the network back to central offices. This results in a ‘loss’ of raw data that might never be stored or processed, so edge analytics is appropriate only if this ‘data loss’ is acceptable. On the other hand, where the latency of shipping data to a central system for decisions (and analytics) is not acceptable, as in flight operations or critical remote manufacturing/energy, edge analytics should be preferred.
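A rough sketch of that compromise, with assumed frame and summary sizes, shows why the bandwidth saving can justify the raw-data loss:

```python
# Analyze frames on the device and send only a per-minute summary, versus
# streaming every frame to the cloud. Frame size, frame rate, and summary
# size below are illustrative assumptions for a single camera.

FRAME_BYTES = 200_000    # one compressed camera frame
SUMMARY_BYTES = 64       # e.g. a small record with counts per interval

def bytes_sent_per_hour(fps, mode):
    """Network bytes per hour for one camera under each model."""
    if mode == "stream-to-cloud":
        return fps * 3600 * FRAME_BYTES       # every frame uploaded
    if mode == "edge-analytics":
        return (3600 // 60) * SUMMARY_BYTES   # one summary per minute
    raise ValueError(mode)

cloud = bytes_sent_per_hour(10, "stream-to-cloud")
edge = bytes_sent_per_hour(10, "edge-analytics")
print(f"cloud: {cloud / 1e9:.1f} GB/h, edge: {edge} B/h, "
      f"reduction: {cloud // edge:,}x")
```

Under these assumptions the raw stream never crosses the network, which is the 'data loss' described above, but per-camera bandwidth drops by several orders of magnitude.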

Thanks for reading through this blog post. Suggestions for further improving it are cheerfully solicited.

Know more about StrataHive and its Solution Offerings at this link
