Thoughts about AI
A thought that comes to mind whenever I hear about AI replacing humans is the claim that machines are great at computation but not creativity. I believe this statement is entirely false, and it's eroding quickly as we see more innovation in this field.
The Evidence
We assumed these models lacked creativity and were very limited. The evidence, however, tells a different story.
AlphaFold predicted protein structures in ways that surprised experts, which is direct evidence of creativity. AI writes code in unexpected ways that surprise us developers. Another example is AlphaGo: it doesn't just compute moves, it produces play that experts describe as creative.
The Argument
When faced with this evidence, the creativity argument shifts to a few fallback positions:
AI doesn't understand
I think this is very debatable. Do they really need to understand? Maybe understanding isn't required at all.
AI just bases off existing creations
I wonder: what do humans do? Humans need an initial idea and existing creations to start on something new. Is it possible to create something from scratch? My answer is no.
It's a controversial topic, and we don't yet know the full extent of these capabilities. There are many complex questions we don't have answers to, and won't anytime soon.
Remember, just 20 years ago we claimed machines couldn't play chess creatively and were bound to an algorithm. Is that still true? Not at all; they are remarkably creative these days. Now we claim that they create but don't understand, so we just keep moving the goalposts. Maybe there's something fundamentally different about human creativity and machine creativity. Maybe not. Either way, banking our future on that assumption seems unwise.
Challenges
Beyond the creativity debate, there are fundamental issues we need to grapple with.
Biases
Humans have biases, no matter how neutral they try to be; it's simply human nature. The same holds for models: they depend on their training data, and no data is truly neutral, so AI will most likely remain biased. "Neutrality" is a myth.
You can't eliminate this bias, but you can mitigate it. The honest reality is that perfect neutrality is impossible, for humans or for AI.
Black Box
We don't fully understand how these systems work internally. There has been significant research on interpretability, but the internal workings remain largely opaque. We can observe what they do, but the behavior emerging from billions of interacting parameters isn't fully predictable or explainable. It's an active area of research and concern.
Magic Neuron
To understand what's actually happening inside these systems, it helps to look at the basic building block:
Single Neuron
x₁ ─────[w₁]───┐
               │
x₂ ─────[w₂]───┤
               ├─ Σ ─ f(z) ─ output
x₃ ─────[w₃]───┤
               │
bias ──────────┘

Inputs   Weights   Sum   Activation
What happens inside?
z = (x₁×w₁) + (x₂×w₂) + (x₃×w₃) + bias
output = activation_function(z)
where the activation function is, e.g., ReLU, sigmoid, or tanh.
The magic happens when you stack millions of these together in layers, each learning tiny features that combine into complex representations.
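The single-neuron computation above can be sketched in a few lines of Python. This is a minimal illustration of the weighted-sum-plus-activation formula, not code from any particular library; the function names here are made up for the example.

```python
import math

def relu(z):
    # ReLU: pass positive values through, clamp negatives to zero
    return max(0.0, z)

def sigmoid(z):
    # Sigmoid: squash any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias, activation):
    # z = (x1*w1) + (x2*w2) + (x3*w3) + bias, as in the formula above
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(z)

# Three inputs, matching the diagram above:
# z = (1.0*0.5) + (2.0*-0.25) + (3.0*0.1) + 0.2 = 0.5
out = neuron([1.0, 2.0, 3.0], [0.5, -0.25, 0.1], bias=0.2, activation=relu)
```

A "layer" is just many of these neurons sharing the same inputs, and a network is layers feeding into each other; frameworks compute this with matrix multiplications, but the arithmetic per neuron is exactly what's shown here.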
This article will be improved over time.