We hear about AI all the time, but many of us don’t realize how often we already use it in everyday life as consumers. Most of us have smartphones and are used to saying “Hey Siri…” or “Alexa, play this.” Apple, Google, and Amazon have changed our world forever, and everything these days seems to have an interface controlled by Google, Alexa, or Siri. So how does this work? The simple answer is speech recognition. Why is this new? We have had speech recognition for several decades, so what has changed? Why can Siri and Alexa understand you and respond now? This is where machine learning and artificial intelligence come into play.

Before we go deeper, let’s be clear: speech recognition and voice recognition are quite different, even though people tend to use the terms interchangeably. Voice recognition is really a biometric that identifies who you are by your voice. Speech recognition identifies what you are saying. With AI, the speech you utter can now be interpreted and a response provided. Before that, it was a fairly one-sided conversation whose biggest use case was replacing the Dictaphone.
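To make the distinction concrete, here is a minimal speech-to-text sketch. It uses the open-source SpeechRecognition Python package purely as an illustration; Siri, Alexa, and Google Assistant run their own proprietary pipelines, and the sample phrase is invented.

```python
# Minimal speech-to-text sketch using the open-source SpeechRecognition
# package (pip install SpeechRecognition). Illustrative only: the consumer
# assistants mentioned above use their own proprietary services.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:              # microphone access requires PyAudio
    recognizer.adjust_for_ambient_noise(source)
    print("Say something, e.g. 'play some jazz'...")
    audio = recognizer.listen(source)

try:
    # Speech recognition: convert the audio into text (what was said),
    # as opposed to voice recognition, which would identify who said it.
    text = recognizer.recognize_google(audio)
    print("You said:", text)
except sr.UnknownValueError:
    print("Could not understand the audio")
```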

According to a recent Forbes Insights survey of more than 300 executives, 95% believe that AI will play an important role in their responsibilities in the near future. The most-cited business benefits corporate leaders see from AI include:

  • 40%: Increased productivity
  • 28%: Reduced operating costs
  • 21%: Improved speed to market
  • 20%: Transformed business and operating models

And the Award for Composer of the Year Goes to Watson!

If we focus on the consumer world for another minute and take the Alexa and Siri example a little further: a very popular use of speech recognition is asking for some music to be played. What if the same artificial intelligence technology could compose music for you based on your past requests? The following companies are doing just that: IBM Watson Beat, Google Magenta’s NSynth Super, Jukedeck, and Amper.

AI bots in these systems have started to compose music. Most of these systems work by using deep learning networks, a type of AI that’s reliant on analyzing large amounts of data. Basically, you feed the software tons of source material, from dance hits to disco classics, which it then analyzes to find patterns. It picks up on things like chords, tempo, length, and how notes relate to one another, learning from all the input so it can write its own melodies. There are differences between platforms: some deliver MIDI while others deliver audio. Some learn purely by examining data, while others rely on hard-coded rules based on musical theory to guide their output.
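As a deliberately tiny stand-in for what those deep-learning systems do, the sketch below “learns” which note tends to follow which from a couple of example melodies and then writes its own. It uses a first-order Markov chain rather than a neural network, and the note sequences are made up for illustration; the real platforms train on far larger corpora.

```python
# Toy melody generator: learn note-to-note transitions from example
# melodies, then sample a new melody from them. A much-simplified
# stand-in for deep-learning composers such as Watson Beat or Magenta.
import random
from collections import defaultdict

training_melodies = [                      # invented source material
    ["C4", "E4", "G4", "E4", "C4", "D4", "E4", "C4"],
    ["G4", "E4", "C4", "D4", "E4", "G4", "E4", "C4"],
]

# "Learning": count which note follows which across the source material.
transitions = defaultdict(list)
for melody in training_melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def compose(start="C4", length=16):
    """Generate a new melody by sampling from the learned transitions."""
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:                    # dead end: pick any known note
            choices = list(transitions.keys())
        melody.append(random.choice(choices))
    return melody

print(" ".join(compose()))
```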

However, they all have one thing in common: on a micro scale, the music is convincing, but the longer you listen, the less sense it makes. None of them are good enough to craft a Grammy Award-winning song on their own (yet).

Artificial Intelligence is enabling robots to clone themselves

Manufacturing may be a different story from AI composers. In Oshino, Japan, there is a company called FANUC. In its plants, thousands of yellow manufacturing robots are produced annually. What’s interesting is that the plant is fully automated and no humans are present; it’s a completely lights-out operation.

Automation has been rising over the past decade in markets such as China, partly because, as wages and living standards have risen, workers have proved less willing to perform dangerous, monotonous tasks, and partly because Chinese manufacturers are seeking the same efficiencies as their overseas counterparts. More and more, it’s FANUC’s industrial robots that assemble and paint automobiles in China, construct complex motors, and make injection-molded parts and electrical components. At pharmaceutical companies, FANUC’s sorting robots categorize and package pills. At food-packaging facilities, they slice, squirt, and wrap edibles. The main draw is that at FANUC, the robots build themselves, test themselves, and inspect themselves.

This is one of the rare plants in the world where 24/7 operation is a reality and intelligent robots create computerized offspring capable, just like them, of machine learning and computer vision.  So how does Artificial Intelligence drive FANUC?

Computer Vision

Can you spot flaws half the width of a human hair? With a very high-resolution camera, a machine can detect that flaw; it is more sensitive than the naked eye, and instead of sending the part to a human, it hands it over to another machine to fix (or scrap).
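A hedged sketch of this kind of camera-based inspection is below: compare an image of the part against a “golden” reference and flag any region that differs. Real systems (FANUC’s included) are far more sophisticated; the file names, thresholds, and defect-size cutoff here are illustrative assumptions.

```python
# Flaw detection sketch: difference against a reference image, threshold,
# and count defect blobs. Thresholds and file names are illustrative.
import cv2

reference = cv2.imread("golden_part.png", cv2.IMREAD_GRAYSCALE)     # hypothetical file
candidate = cv2.imread("inspected_part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Pixel-wise difference, then keep only significant deviations.
diff = cv2.absdiff(reference, candidate)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# Each connected blob in the mask is a candidate defect.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
defects = [c for c in contours if cv2.contourArea(c) > 5]  # ignore sensor noise

if defects:
    print(f"{len(defects)} possible flaw(s) found -> route part to rework or scrap")
else:
    print("Part passes visual inspection")
```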

Generative Design

Generative design is an ingenious way for a company to explore all the options for making manufacturing more efficient. Engineers input design goals into the software, then add materials, manufacturing methods, and cost constraints. The software works through different combinations of all possible solutions and quickly “generates” design alternatives.

The system also uses machine learning to test and learn from each iteration: it learns from prior results and determines what will work and what won’t.
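The toy loop below illustrates the idea in miniature: goals and constraints go in, many candidate designs are generated and scored, and only those that satisfy the constraints survive. The bracket geometry, cost model, and limits are invented purely for illustration, and random sampling stands in for the far smarter search a real generative-design tool performs.

```python
# Toy generative-design loop: sample candidate designs, keep those meeting
# the constraints, pick the lightest. All numbers below are invented.
import random

MAX_COST = 12.0       # constraint: dollars per part (assumed)
MIN_STIFFNESS = 50.0  # constraint: arbitrary stiffness units (assumed)

def evaluate(thickness_mm, rib_count):
    """Very crude surrogate model for weight, stiffness and cost."""
    weight = thickness_mm * 2.0 + rib_count * 0.5
    stiffness = thickness_mm * 15.0 + rib_count * 8.0
    cost = weight * 1.1 + rib_count * 0.8
    return weight, stiffness, cost

candidates = []
for _ in range(10_000):                    # "generate" design alternatives
    t = random.uniform(1.0, 6.0)           # wall thickness in mm
    ribs = random.randint(0, 8)            # number of stiffening ribs
    weight, stiffness, cost = evaluate(t, ribs)
    if stiffness >= MIN_STIFFNESS and cost <= MAX_COST:   # meets constraints
        candidates.append((weight, t, ribs))

# Goal: the lightest design that still satisfies every constraint.
if candidates:
    weight, t, ribs = min(candidates)
    print(f"Best design: thickness={t:.2f} mm, ribs={ribs}, weight={weight:.2f}")
```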

Digital twins

The digital twin leverages the Internet of Things (IoT) but also requires machine learning and artificial intelligence. In essence, it’s a clone of the real thing in a virtual model. These twins are handy for troubleshooting, prototyping, and remote monitoring.

Sensors embedded within a physical object gather real-time data about its operation and send it up to the cloud, where it is presented in the digital twin.
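A minimal sketch of that idea is shown below: sensor readings from the physical asset update a virtual model that can then be queried or monitored remotely. The sensor names, thresholds, and simulated readings are assumptions, and a real deployment would stream the data to a cloud IoT platform rather than keep it in memory.

```python
# Digital-twin sketch: the twin mirrors the latest sensor readings of a
# physical asset so monitoring rules can run against the virtual model.
import time
import random
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    asset_id: str
    state: dict = field(default_factory=dict)   # latest mirrored sensor values

    def ingest(self, reading: dict) -> None:
        """Update the virtual model with a real-time sensor reading."""
        self.state.update(reading)
        self.state["last_seen"] = time.time()

    def health_check(self) -> str:
        """Remote-monitoring rule evaluated on the twin, not the machine."""
        if self.state.get("spindle_temp_c", 0) > 80:     # assumed threshold
            return "WARNING: spindle running hot"
        return "OK"

twin = DigitalTwin(asset_id="robot-cell-07")

# Simulate sensors embedded in the physical robot sending readings.
for _ in range(3):
    twin.ingest({"spindle_temp_c": random.uniform(60, 90),
                 "vibration_mm_s": random.uniform(0.5, 3.0)})
    print(round(twin.state["spindle_temp_c"], 1), twin.health_check())
```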

Predictive maintenance

Predictive (not preventive) maintenance leverages IoT: machines actively report their status many times an hour, and AI algorithms use machine learning to predict issues before they occur. That way, downtime can be scheduled in advance, instead of emergency repairs after a breakdown or unnecessary preventive maintenance.
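The sketch below shows the basic pattern: train a model on historical sensor readings labelled as “failed soon” or “ran fine,” then score each new reading so a maintenance window can be booked before the breakdown. The synthetic data and the two features used are purely illustrative.

```python
# Predictive-maintenance sketch: classify sensor readings by failure risk.
# The fabricated history and the 50% decision threshold are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Fabricated history: [vibration (mm/s), bearing temperature (deg C)].
healthy = rng.normal([1.0, 60.0], [0.3, 5.0], size=(500, 2))
failing = rng.normal([3.0, 85.0], [0.5, 6.0], size=(500, 2))
X = np.vstack([healthy, failing])
y = np.array([0] * 500 + [1] * 500)        # 1 = failure followed soon after

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A machine reporting its status many times an hour: score the latest reading.
latest_reading = np.array([[2.6, 78.0]])
risk = model.predict_proba(latest_reading)[0, 1]
print(f"Failure risk: {risk:.0%} -> " +
      ("schedule maintenance window" if risk > 0.5 else "no action needed"))
```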

FANUC employs all these AI applications, among others, to automate its operations and has become the undisputed leader in robotics. Others in manufacturing will need to follow suit. Stephen Ezell, a global innovation expert, recently said: “If you’re stuck to the old way and don’t have the capacity to digitalize manufacturing processes, your costs are probably going to rise, your products are going to be late to market, and your ability to provide distinctive value-add to customers will decline.”

In closing, I think it’s important to realize that AI and robotics are not going to make humans obsolete, but will allow them to step away from mundane and dangerous tasks, freeing them up for more creative work. In addition, if corporate profits rise from automation, employers can afford to pay employees more. As a caution, Gartner also states: “Enterprise architecture and technology innovation leaders must walk a fine line between embracing and overplaying AI technologies’ role in delivering business value for digital business. Success is almost impossible without promotion, but hype is dangerous because it sets the wrong expectations.”

About the Author:

Team

Motherson Technology Services USA Limited