Beta Solutions Blog

Key Trends in Deployment of Visual Artificial Intelligence

Date: Nov 09, 2020

One of our team members recently attended the Edge AI and Vision Alliance Summit to learn about the recent developments and trends in Visual Artificial Intelligence (AI). We thought we would share some of these learnings with you.


Computer vision has come a long way in the last decade. Traditionally, hand-designed algorithms were used for vision applications, but advances in Neural Network (NN) algorithms and the availability of large data sets to train them have shifted the balance towards AI-based vision solutions.


Here are some of the key trends in embedded vision:


Algorithms

  • Neural networks are becoming dominant. In 2016, roughly 38% of vision applications used neural networks, whereas in 2019 over 80% of vision applications used them.
  • The consolidation of Neural Network models is a key trend. Training a high-accuracy model from scratch takes a massive amount of data and effort, so in recent years the trend has been towards using pre-trained 'off-the-shelf' models. These off-the-shelf models can be used to speed up development by:
      • Using them as-is, out of the box.
      • Refining them for a specific application using additional training data (transfer learning; a brief sketch follows this list).
      • Using them to detect 'features' (e.g. edges and shapes) in images, then applying other classification techniques for the full classification.
  • TensorFlow is still the most popular framework for developing NNs [1]; however, there is beginning to be a lot more variation in the frameworks used.
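
As a rough illustration of the transfer-learning point above, the Python sketch below refines a pre-trained, off-the-shelf network for a new task using TensorFlow/Keras. The MobileNetV2 base, image size, class count, and dataset are placeholder assumptions for illustration, not a recommendation for any particular product.

    # Minimal transfer-learning sketch: reuse a pre-trained feature extractor
    # and train only a small classification head on application-specific data.
    import tensorflow as tf

    NUM_CLASSES = 3             # hypothetical number of classes for the target application
    IMG_SHAPE = (224, 224, 3)   # assumed input image size

    # Load an off-the-shelf model pre-trained on ImageNet, without its classifier head.
    base_model = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SHAPE, include_top=False, weights="imagenet")
    base_model.trainable = False  # freeze the pre-trained feature extractor

    # Attach a small classification head for the specific application.
    model = tf.keras.Sequential([
        base_model,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # model.fit(train_dataset, epochs=5)  # train on the application-specific data set

Because the base model is frozen, only the small head is trained, which is why far less data and compute are needed than training from scratch.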

Industry Trends

  • Typically, neural network models have been very large, computationally expensive, and power-hungry. However, with the rise of Industry 4.0, the key applications for AI-based vision technologies are now best deployed 'at the edge', near the source of the information (a deployment sketch follows this list).
  • Computer vision at the edge is important from a performance, latency, bandwidth, privacy, reliability, and economic perspective.
  • The use of 3D perception in vision-based systems is also increasing rapidly. 3D perception involves using time-of-flight cameras or lidar-based systems to add depth information.
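
To illustrate the edge-deployment point, here is a minimal Python sketch of converting a trained Keras model to TensorFlow Lite with post-training quantization, one common way to shrink a network so it fits an embedded target. The placeholder model, file name, and quantization settings are assumptions for illustration only.

    # Minimal sketch: shrink a trained model for edge deployment with TensorFlow Lite.
    import tensorflow as tf

    # Placeholder Keras model standing in for a trained vision network.
    model = tf.keras.applications.MobileNetV2(
        weights=None, input_shape=(224, 224, 3), classes=3)

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable post-training quantization
    tflite_model = converter.convert()

    # Write the compact model out for an edge device or embedded runtime.
    with open("vision_model.tflite", "wb") as f:
        f.write(tflite_model)

Quantizing and compiling the model in this way reduces its size and compute cost, which is what makes running it near the data source practical from a latency, bandwidth, and power perspective.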

Hardware

  • Dozens of specialised, price-optimised chipsets are now available which are capable of running optimised versions of state-of-the-art NNs at several frames per second while consuming only a few milliwatts of power. This innovation in specialised chipsets is a trend which will continue for some time.
  • While CPUs (central processing units) and GPUs (graphics processing units) currently remain the dominant means of deploying neural networks, the trend is towards dedicated deep learning processors and digital signal processors.


Conclusion


The use of AI-based vision sensing will continue to be a trend well into the future. There is a lot of innovation happening in this space at the moment, driven by Industry 4.0 trends. Running complex artificial intelligence algorithms on embedded devices is now entirely practical.


At Beta Solutions, our experienced embedded electronics engineers have the skills and a proven track record to apply the best-suited firmware design methodology to our clients' products. To discuss any idea you have in mind, get in touch via our contact page or give us a call.


References:

  1. bit.ly/DeveloperSurveyWhitePaper2020

Supporting Information:

  1. Edge AI and Vision Alliance
  2. bit.ly/VisionIndustryMap
  3. Banner Image retrieved from https://online.stanford.edu/courses/cs231n-convolutional-neural-networks-visual-recognition
  4. Image in blog from Pixabay (https://pixabay.com/)
