
The Layers of Artificial Intelligence

Artificial intelligence has developed immensely over the past few decades, and you might commonly hear the terms algorithm, machine learning, and deep learning in the emerging technology vernacular without fully understanding the differences between them.  With so many overlapping terms, it can be hard to tell what underlying processes are actually powering the technology around you.  Below, we explore these terms in-depth, in their computational context, and connect them to their various applications.

An ALGORITHM is a written set of instructions telling a computer how to complete a particular task, or, more simply, a method of automating instructions.

These instructions are logic sequences built from AND, OR, and NOT statements, much like the conditionals in computer programming.  While a single sequence can define a simple task, algorithms can also accomplish more complex tasks by building additional statements in layers.  A frequently used algorithm in your daily life is the one behind every Google search: in a fraction of a second, Google’s algorithm sorts through hundreds of billions of webpages to determine which are most relevant to your exact search term and returns them as a ranked list.
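The layering of AND, OR, and NOT statements described above can be sketched in a few lines of Python.  The rules and example pages here are invented purely for illustration and have nothing to do with Google’s actual algorithm:

```python
# A toy "relevance" check built from layered AND, OR, and NOT statements.
# The rules and example pages are hypothetical, purely for illustration.

def is_relevant(page: dict, query: str) -> bool:
    terms = query.lower().split()
    text = page["text"].lower()
    has_all_terms = all(t in text for t in terms)              # AND over terms
    in_title = any(t in page["title"].lower() for t in terms)  # OR over terms
    not_spam = not page.get("spam", False)                     # NOT
    # Layer the simple statements into one more complex rule.
    return not_spam and (has_all_terms or in_title)

pages = [
    {"title": "Intro to Go", "text": "learn the game of go", "spam": False},
    {"title": "Cheap pills", "text": "go buy now", "spam": True},
]

results = [p["title"] for p in pages if is_relevant(p, "game of go")]
print(results)  # only the non-spam page matching the query survives
```

Each individual statement is trivial, but stacking them is exactly the layering that lets simple algorithms grow into more capable ones.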

ARTIFICIAL INTELLIGENCE is the overarching term for any technology that layers algorithms together into systems mimicking aspects of human intelligence.

This adopted terminology alludes to the way the human brain makes connections as it perceives the world through our senses.  We can give computers a set of rules to accomplish tasks that would take a human an impractical amount of time to complete on their own.  The concept depends on algorithms that solve a particular challenge much as a human would, but at far greater speed.  Because this is a broad term, someone can “build an AI,” effectively meaning that they have created one of these systems for their particular purpose.  You might see this when you open a help chat window on a website: often, it uses simple artificial intelligence to respond with prepared statements based on what you write in the chat box.
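A help-chat responder of the kind described above can be sketched as a small set of prepared replies keyed on words in the user’s message.  All of the rules and replies here are hypothetical examples:

```python
# A minimal rule-based "help chat" responder: prepared replies triggered by
# keywords in the user's message.  Rules and replies are invented examples.

RULES = [
    ({"refund", "return"}, "You can request a refund from your order page."),
    ({"shipping", "delivery"}, "Orders usually ship within 2 business days."),
    ({"hello", "hi"}, "Hello! How can I help you today?"),
]

FALLBACK = "Let me connect you with a human agent."

def reply(message: str) -> str:
    words = {w.strip(".,!?") for w in message.lower().split()}
    for keywords, response in RULES:
        if words & keywords:  # any keyword match triggers the prepared reply
            return response
    return FALLBACK

print(reply("Hi, where is my delivery?"))
```

Rules are checked in order, so the first matching keyword set wins; anything the rules cannot handle falls through to a human agent.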

MACHINE LEARNING is a subset of artificial intelligence in which a system takes in data, analyzes it according to the instructions it is given, and then recognizes patterns in that data.

You might be most familiar with machine learning in the form of fraud alerts from your bank.  Every transaction on your card serves as a data point fed into the network, where the machine learning system can start to recognize patterns.  By establishing your normal behavior, it can identify when a purchase falls outside that pattern and alert the bank to ask for confirmation.  Another common example of machine learning in consumer products lies in the technology powering smart thermostats in homes across the country from companies like Nest.  The thermostat learns your habits and starts to predict when you will be home so it can adjust its settings accordingly, keeping you comfortable and saving you money on your energy bill.
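The fraud-alert idea above can be sketched as a simple statistical filter: learn a customer’s “normal” spending from past transactions, then flag purchases far outside it.  The figures and threshold are invented for illustration, not any real bank’s method:

```python
# A sketch of pattern-based fraud flagging: model "normal" spending from
# transaction history, then flag purchases far outside the learned pattern.
# All amounts and the threshold are invented for illustration.
import statistics

def fit(history):
    """Summarize past transactions as a mean and spread."""
    return statistics.mean(history), statistics.stdev(history)

def is_suspicious(amount, mean, stdev, threshold=3.0):
    # Flag anything more than `threshold` standard deviations from normal.
    return abs(amount - mean) > threshold * stdev

history = [12.50, 8.99, 23.40, 15.00, 9.75, 18.20]  # typical card activity
mean, stdev = fit(history)

print(is_suspicious(14.00, mean, stdev))   # an ordinary purchase
print(is_suspicious(950.00, mean, stdev))  # far outside the learned pattern
```

Real systems learn from far richer signals (merchant, location, timing), but the principle is the same: the pattern comes from the data, not from hand-written rules.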

DEEP LEARNING differs from machine learning by going a step further: the system teaches itself and provides additional insight or predictions.

A recent accomplishment in the field of deep learning was Google’s AlphaGo model, which became the first computer program to defeat a top professional player at the Chinese game of Go.  This took significantly more sophistication than beating a person at chess, where a computer can search far ahead through possible moves and play accordingly.  Because Go rewards intuition, a computer cannot win by sheer computational power alone; the number of possible positions in the game exceeds the number of atoms in the entire universe, making brute force impossible.  Instead, the model had to play the game against itself in rapid succession, teaching itself and optimizing with every subsequent attempt.  Its moves in the games that beat world-class player Ke Jie are studied by professional Go players and have even been described as “artistic.”
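The self-teaching loop described above can be illustrated, at a vastly smaller scale, with a toy two-layer neural network that learns the XOR pattern by repeatedly correcting its own errors.  The architecture, learning rate, and iteration count are invented for illustration and bear no relation to AlphaGo’s actual design:

```python
# A toy two-layer neural network learning XOR, illustrating the stacked
# "layers" that give deep learning its name and the loop of self-correction.
# Architecture and hyperparameters are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):                        # repeated self-correction
    h = sigmoid(X @ W1 + b1)                 # layer 1
    out = sigmoid(h @ W2 + b2)               # layer 2
    losses.append(float(((out - y) ** 2).mean()))
    grad_out = (out - y) * out * (1 - out)   # backpropagate the error
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out; b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h;   b1 -= 0.5 * grad_h.sum(axis=0)

print("loss fell from %.3f to %.3f" % (losses[0], losses[-1]))
```

No rule in this code says what XOR is; the network discovers the pattern itself by adjusting its layers after every pass, which is the essence of the self-optimization described above.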

Additionally, deep learning is helping in the development of autonomous vehicle technology.  By feeding the model massive amounts of data about the real world and letting it practice on the road with a driver behind the wheel, the model can teach itself and continually improve as it accumulates more experience.  This is why testing started in predictable climates like Arizona’s, allowing the model to learn substantially before being introduced to different weather conditions and road configurations.  As the models have learned, companies have started test drives in cities with less predictable climates, like Pittsburgh.  While there are many exciting new applications for deep learning, it will not be the solution to every challenge; there will still be many instances where other forms of artificial intelligence are more appropriate and reliable.

DATA is the underlying input that builds all of these systems.

We need more, and more robust, data to build better-equipped networks that produce more usable outputs.  However, collecting and using that data creates a privacy tradeoff.  In the past few weeks, Amazon employees disclosed that the company allows tech workers to listen to voice recordings of conversations that customers have with the Alexa voice assistant on any of Amazon’s many voice-activated devices.  This caused a backlash, because it felt to many people like a major violation of privacy and trust to have it occur without their knowledge.  This is where the technology sector could stand to be more transparent, helping people understand how their interactions with technology (i.e., the “data” feeding these intelligence networks) may be used.  Instead of ignoring the problem, technology companies will need to educate customers so that they understand how giving up some privacy helps the company create a better product that, in turn, provides them with a better experience.  As a society, we need to have this discussion and decide what level of security and privacy we demand and feel comfortable with while also allowing technology to progress.

Evaluating real use cases and addressing ethical concerns are ongoing efforts.

Additionally, tech workers have started to raise concerns about the underlying ethics of these networks, because those relying on new technology to solve problems may not fully understand the flaws in doing so.  Programmers at tech companies recognize the need to correct potential biases built into algorithms and artificial intelligence networks.  As customers, we tend to assume that the conclusions these systems reach are unbiased, coming from objective entities capable of making difficult decisions in place of humans so that we can avoid blame for the resulting consequences.  In truth, they are open to bias in the same way that humans are fallible, because humans write the underlying logic statements.  If biased statements are written into an algorithm, it can produce unethical results.  Implementing methods of testing and retesting these systems may help them produce better, more ethical outcomes.

Almost two years ago, iStep reported on the ethical dilemma of programming autonomous cars.  Since then, research has indicated that consumers say they want cars coded to maximize the number of lives saved when an accident is inevitable, yet the overwhelming majority would rather buy a car programmed to maximize saving their own life, even at the expense of others.  With a split between stated theory and action like this, the ethics and morals involved in programming will need continued discussion.

https://www.ted.com/talks/patrick_lin_the_ethical_dilemma_of_self_driving_cars?language=en

While artificial intelligence has existed and evolved for years, its continued progress is undeniable and will likely bring even more use cases into our daily lives.  The more we understand the underlying technology, the better equipped we will be to integrate it intentionally into our lives and work environments in a sustainable and ethical way.