Article by Lindsay Witmer Collins, originally posted on the WLCM blog, The Journal.
At WLCM, we’re full steam ahead on building AI apps for OpenAI’s GPT marketplace. This piece is Part 4 in a series helping entrepreneurs understand this new frontier. See why we’re going all in on it, how the major components of AI fit together, and what you can build with AI right now.
Earlier in this series, I talked about how AI is more of a conceptual goal than it is a technology — the concept being a machine capable of human-like intelligence. And what’s more human than the ability to learn?
This ability to learn is what makes AI so powerful. If you believe movies like Terminator and The Matrix are onto something, learning is also what makes AI so dangerous. It’s kind of like turning a curious teenager loose on the internet for the first time. How do you ensure that they’re learning from good content and instruction? How do you put guardrails on learning? How do you correct course if something goes wrong?
If you’re building with AI, or simply curious about it, understanding the learning process is key to creating AI that moves the needle forward.
Machine learning
Machine learning is a large umbrella that covers many technologies that enable AI.
Machine learning goes beyond rigid, decision-tree style logic. It allows AI to continually learn as it gains more data and experience, without the need for definitive programming.
This learning can take place with varying degrees of human intervention, depending on the goal.
Supervised learning
Supervised learning takes a clean, normalized data set in which each example is labeled with the right answer and runs it through an algorithm to make predictions about the future. The more data it has, the better predictions it can make. This is how AI gets better — or learns — over time.
For example, if you wanted to create an AI that predicts housing prices, you’d gather pricing data from, say, Zillow, and put it in a database. Within this database, you’d organize prices by region, house size, date of construction, and other factors.
Then you’d feed that database into the algorithm that best corresponds to your goal, in this case, a linear regression algorithm. This would produce a price prediction.
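To make that concrete, here’s a minimal sketch of supervised learning using Python’s scikit-learn library. The features, prices, and house data are invented purely for illustration; a real system would use far more data and many more factors.

```python
# A minimal supervised-learning sketch with scikit-learn.
# All numbers here are made up for illustration.
from sklearn.linear_model import LinearRegression

# Each row: [square feet, year built]; each label: sale price
X_train = [
    [1400, 1995],
    [2100, 2008],
    [1800, 1975],
    [2500, 2015],
]
y_train = [310_000, 480_000, 355_000, 560_000]

model = LinearRegression()
model.fit(X_train, y_train)  # "learn" the relationship between features and price

# Predict the price of an unseen house: 2,000 sq ft, built in 2001
predicted = model.predict([[2000, 2001]])
print(f"Predicted price: ${predicted[0]:,.0f}")
```

The key detail is that every training example carries a known answer (the actual sale price), which is exactly what makes the learning “supervised.”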
Unsupervised learning
Unsupervised learning does not require clean, organized data. It works best when you don’t have a defined desired outcome. Essentially, you are asking the AI to look at a huge, disparate data set and find patterns and relationships that a human could never discern.
A key point here is that the AI can learn from different formats of unstructured data. For example, it could consume a web page, a .pdf document, a text file, and an email without a human duplicating all that information into a normalized form, e.g., a spreadsheet.
So, you’d feed raw data into the appropriate algorithm — supervised and unsupervised learning favor different families of algorithms — and then validate whether the output is valuable or relevant.
For example, if a bank found out a customer was laundering money, it could use an algorithm to examine their history and find behavioral patterns distinct from regular customers. Because the learning doesn’t require structured, normalized data, it could theoretically go beyond banking information to consider social media activity, travel history, socioeconomic traits, and more.
Imagine you could do this analysis on several convicted money launderers. Maybe the AI notices that money launderers tend to deposit money under a certain amount, or at peculiar hours, or that they tend to own parakeets.
It’s up to you to tell the AI which correlations are worthwhile and which ones are not. That’s how this type of AI learns.
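Here’s a rough sketch of what that kind of unsupervised pattern-hunting can look like in code, using scikit-learn’s IsolationForest to flag customers whose behavior doesn’t fit the norm. The customer features and numbers are entirely made up.

```python
# A minimal unsupervised-learning sketch: flagging unusual customers
# with scikit-learn's IsolationForest. All numbers are invented.
from sklearn.ensemble import IsolationForest

# Each row: [avg deposit amount, deposits per month, share of deposits after midnight]
customers = [
    [2500, 4, 0.00],
    [3100, 3, 0.05],
    [2800, 5, 0.02],
    [9900, 40, 0.85],  # many just-under-$10k deposits at odd hours
    [2600, 4, 0.01],
]

# No labels are provided; the model looks for rows that don't fit the pattern
detector = IsolationForest(contamination=0.2, random_state=0)
flags = detector.fit_predict(customers)  # -1 = anomaly, 1 = normal

for row, flag in zip(customers, flags):
    print(row, "-> suspicious" if flag == -1 else "-> normal")
```

Notice that no one told the model what “suspicious” means; it simply surfaces the outlier, and a human decides whether that pattern matters.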
Reinforcement learning
In reinforcement learning, the AI interacts with its environment in pursuit of a reward. Basically, you’ve told the AI what is desirable, and through trial and error, it modifies its behavior to achieve as many desirable outcomes as possible. Like the other types of learning, reinforcement learning has its own set of algorithms.
For example, back in 2018, a group of German machine-learning researchers taught an AI agent to score as many points as possible in an old Atari game called Q*bert. The agent found a glitch in the game and accumulated an astounding number of points.
However, the AI never moved on from the glitch to reach the next level. While it scored a lot of points, it didn’t “beat” the game.
AI’s potential is truly magical. But this illustrates the kind of unintended consequences that can result from a lack of intentionality and thorough consideration in training AI.
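To make the reward loop concrete, here’s a bare-bones sketch of one classic reinforcement learning technique, tabular Q-learning, on a tiny invented world where the agent earns a reward for reaching the rightmost position. Everything about this environment is made up for illustration.

```python
# A bare-bones reinforcement-learning sketch: tabular Q-learning on a
# tiny one-dimensional world. The environment and rewards are invented.
import random

N_STATES = 5        # positions 0..4; reaching position 4 earns the reward
ACTIONS = [-1, +1]  # step left or step right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore sometimes (or when both actions look equal); otherwise exploit
        if random.random() < epsilon or q_table[(state, -1)] == q_table[(state, 1)]:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Trial and error: update the estimate of how good this action was
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (
            reward + gamma * best_next - q_table[(state, action)]
        )
        state = next_state

# After training, the learned policy is to always step right toward the reward
print({s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)})
```

Through nothing but trial, error, and reward, the agent settles on whatever behavior maximizes its score — the very same dynamic that led the Q*bert agent straight to its glitch.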
Deep learning
Deep learning is a subset within machine learning that basically uses more complex algorithms. These algorithms analyze data through layers of neural networks designed to mimic the way humans think and make judgements. The more layers of neural network the algorithm has, the “deeper” it can learn.
For example, in identifying road signs, one layer may recognize the sign’s colors, another its shape, another its location, another its letters.
If trained properly, deep learning algorithms can learn from their own mistakes, adjusting the connections between their layers to correct errors without human intervention.
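As a rough illustration of what those layers look like in code, here’s a minimal sketch of a layered network in PyTorch. The layer sizes are arbitrary, and the comments about what each layer “recognizes” are a simplification of how real networks divide up the work.

```python
# A minimal deep-learning sketch in PyTorch: a small stack of layers.
# Layer sizes are arbitrary and chosen only for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32 * 32 * 3, 128),  # layer 1: raw pixels -> simple features
    nn.ReLU(),
    nn.Linear(128, 64),           # layer 2: combine features (colors, shapes)
    nn.ReLU(),
    nn.Linear(64, 10),            # layer 3: features -> one score per sign type
)

fake_image = torch.rand(1, 32 * 32 * 3)  # a stand-in for one flattened sign photo
scores = model(fake_image)

# "Learning from mistakes": compare the output to the right answer and
# nudge every layer's weights to reduce the error (backpropagation)
target = torch.tensor([3])  # pretend the photo shows sign type 3
loss = nn.CrossEntropyLoss()(scores, target)
loss.backward()  # the error flows backward through every layer
print(loss.item())
```

The deeper the stack, the more abstract the features each successive layer can pick up.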
The challenge of data and labeling
The hard part of creating AI isn’t the “AI.” It’s getting enough data. Google led the field in AI for so many years because it had oceans of data on which to train its algorithms.
Categorizing, contextualizing, and labeling data is a huge endeavor. The AI literally lives and dies by it. “Garbage in, garbage out,” as we say in the development world.
Even with unsupervised learning, before algorithms can learn on their own, humans must validate the results, pointing out which qualities of the training data actually indicate a particular characteristic.
You’ve been helping without even knowing it.
Every time you’ve used a reCAPTCHA puzzle to prove you’re not a robot, you may have been training AI by pointing to a piece of a picture and saying “These are stairs” or “This is a bus.”
Imagine how much harder and more subjective this task gets with language.
For example, someone has to feed a sentence into the AI and say “The tone of this sentence is sarcastic.” They have to feed it tens of thousands of examples of sarcastic sentences before the AI can recognize or produce sarcasm on its own.
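Here’s a toy sketch of what those labeled examples might look like in code, using scikit-learn’s text tools. The sentences and labels are invented, and four examples are nowhere near the tens of thousands a real system needs.

```python
# A toy sketch of labeled training data for tone detection.
# Sentences and labels are invented; real systems need far more examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "Oh great, another Monday. Just what I needed.",        # sarcastic
    "Wow, what a surprise, the printer is broken again.",   # sarcastic
    "I had a wonderful time at the party.",                 # sincere
    "Thanks for helping me move this weekend.",             # sincere
]
labels = ["sarcastic", "sarcastic", "sincere", "sincere"]

# A human supplied every label above -- that's the expensive part
classifier = make_pipeline(CountVectorizer(), LogisticRegression())
classifier.fit(sentences, labels)

print(classifier.predict(["Oh wonderful, the trains are delayed again."]))
```

Every one of those labels is a human judgment call, and tone is exactly the kind of thing humans disagree about.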
The challenge of training is that it must account for a near-infinite range of variability, combinations, and edge cases. This is mind-boggling to even think about.
In 2018, a Tempe, Arizona woman was killed by a self-driving vehicle as she walked her bike across a road. Experts speculated that while the car’s image recognition capabilities could adequately recognize a person walking and a person riding on a bike, it couldn’t recognize a person walking a bike.
Just think about how many ways this could go. Could the AI recognize a person on a bike with a baby seat on the back? Or with a dog in a basket in the front? Or a tandem bicycle?
Good learning comes from good training
Of course, if you’re not involved in building tech, all this complicated stuff may seem to live outside of your purview, or beneath it. But it shouldn’t.
I would argue that even the average person has a stake in the quality and dynamics of training AI. After all, it’s foolish to believe that data is an objective, unimpeachable resource. It reflects our own biases.
If we don’t account for those biases, AI will amplify them and the harm they perpetuate.