Where will Artificial Intelligence take us?
Everyone said that the last jobs to be taken over by AI would be the creative ones. It turned out to be the complete opposite, with AI writing articles, creating pictures, and making videos. What's coming next?
While we avidly discuss the latest in the tech industry and new AI models like the recently introduced Gemini on social media, we often neglect to delve into what the future holds. It's understandable: digesting quick news is far more thrilling than the mental effort needed to answer the real questions - what will actually happen? How will it affect me?
In practice, focusing on the present is a defense mechanism of our brain, shielding us from the discomfort of working out the future implications of today's news. A related phenomenon is negativity bias, which makes us concentrate on things with immediate negative effects rather than on their long-term implications.
AI is a prime example of a topic that generates extreme emotions: from euphoria over new achievements to despair about potential job losses, replacement by machines, or world destruction by AGI. These topics sell well online and in the news. The field's rapid progress means few pause to ponder its long-term effects.
Outside the tech world, when I've shown friends how AI works, I often see the same reaction: "cool, but it doesn't affect me." They treat it as a novelty, too distant to be relevant. This stems from a reluctance to engage in the mental exercise that could lead to interesting conclusions.
For instance, I recently showed a teacher friend a part of the Gemini presentation:
She watched in awe. I thought, "we've got it!" But she simply said "amazing" and went back to her daily tasks. She couldn't see the obvious connection between what she had just watched and how it might fundamentally change her life. What did I expect? Maybe a "where can I get this?" But surprisingly, that never comes.
The question we all should be asking now is: Assuming technology continues to evolve, how might it impact what I do today?
I must confess, despite being a huge tech fan and using AI to make my work more efficient, I'm concerned about the long term.
This concern is amplified by the overwhelmingly positive portrayal of AI in the media. And don't get me wrong - I'm an incurable optimist and focus on the positives. But when all communication is skewed this way, my intuition tells me to look for the catch.
For example, Google's presentation shows how Gemini can help understand a homework error. But no one considers that doing homework in the AI era might become pointless. Faced with the choice between having AI correct their homework and having AI simply do it, what do you expect a child to choose?
The same effect applies to our jobs, and it's happening right now in fields like copywriting, translation, and even creative work - I often choose machine output over human work. Watching Gemini come up with ideas for a child's birthday party, we're amazed: the AI thinks the problem through and then builds the entire user interface, delivering the answer in an appropriate form. Everyone loves it, and the unicorn-themed cupcakes elicit positive emotions.
This isn't coincidental. Few realize that I could just as well ask Gemini to design a website or an app, eliminating the need for programmers and UI and UX designers. Would such a presentation have been well received at the conference? I doubt it.
At this point, it's not about if, but when, and what happens when it does. I believe most professions as we know them will be partially, then completely replaced by AI. My teacher friend might say, "nothing replaces student-teacher interaction," but I have my doubts. What I see in the presentation is already better than most teacher interactions I know.
Another common argument is that computers won't do physical labor for us. This is nonsense. Take bridge engineering - machines already do most of the work, with humans merely operating them. And the humans are the weakest link, needing breaks, rest, and more. Look at this:
Similarly, in other fields and physical jobs, AI-driven humanoids - like those that already exist - or robots such as Atlas from Boston Dynamics will take over.
Google knows this, but instead shows a harmless, cute rubber duck in its presentations. This carefully projected harmlessness makes me suspect that underneath, things aren't so rosy. Dig deeper into Google's messaging and you find statements like this one from Demis Hassabis, CEO of Google DeepMind:
"To become truly multimodal, you’d want to include touch and tactile feedback ... There’s a lot of promise with applying these foundation-type models to robotics, and we’re exploring that heavily."
Google understands the next step in AI:
"I think we will see some new capabilities. This is part of what the Ultra testing is for. We’re kind of in beta — to safety-check it, responsibility-check it, but also to see how else it can be fine-tuned."
If someone already realizes that professions like programming or web design might cease to exist, they can also imagine the implications of applying these mechanisms to the real world through robotics. That answers the question of whether engineers, excavator operators, or even beauticians will still be needed. The only comfort is that we probably have more time to think in these fields, and progress won't be as rapid as in creative and intellectual jobs.
Say hello to Multimodality
What is multimodality, and why is it important? Google's model seems to be the strongest in this respect. It means the model can accept various data formats - text, images, audio, and video - and transform and reason over them in parallel. This is a milestone toward operating in the real world: receiving and understanding what people perceive with their senses. From Google:
Multimodality is about understanding the world as we do. Until now, as Demis Hassabis of Google DeepMind explains, the approach involved training separate components for different modalities and stitching them together. Such models can describe images but struggle with more complex reasoning.
Gemini is different:
"We designed Gemini to be natively multimodal, pre-trained from the start on different modalities. Then we fine-tuned it with additional multimodal data to further refine its effectiveness. This helps Gemini seamlessly understand and reason about all kinds of inputs from the ground up, far better than existing models."
By now, you might see what we can expect. But the sugary AI narrative runs deeper. Take OpenAI's charter, for example:
“Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.”
Sounds like a non-profit working for our benefit. But considering OpenAI's complex structure and founders, and recent events, a different picture emerges. OpenAI was created as a counterweight to Google, not solely out of altruism. So neither the open-source promise (the "Open" in the name Elon Musk chose) nor the non-profit character (OpenAI makes billions from ChatGPT) is entirely true.
Should we believe in the sincere intentions of AI creators? I see no rational basis for that. So we must also consider another possibility.
Let’s just stop working, shall we?
Many are excited by the prospect of not having to do anything, or AI doing all the tedious tasks while we focus on higher goals. I have two doubts here.
Firstly, I wonder what these higher goals will be in a world where homework does itself and there's no need to learn or work. Will we all turn to art, seeking the meaning of existence or providing entertainment? I doubt it. Human aspirations are always motivated by self-interest - for many, mostly financial. In this context, we might face a rude awakening.
Secondly - we might not have to work, but will we benefit from it? Or will the biggest companies, now even more powerful, monopolize all benefits and resources? Don't expect Google to pay you unemployment benefits. At most, they might sell you a cheap VR device to immerse you in the metaverse, where you'll consume content interwoven with ads (from which Google profits).
This also highlights the role of state intervention in AI. In the end, it is the state that will likely have to pay those benefits. That's why I believe regulation in this area is necessary. Unfortunately, governments are usually slow in such matters, and technology is changing at an unprecedented pace, so the outlook isn't great.
What now?
You might ask - how do we live with this? I'm not entirely sure. I see two stages we should prepare for. The first, which I already put into practice, is to stop wondering "how" and start thinking about "what" to do. AI development is leading us there: everyone will be able to do almost anything, without much skill, so those who know "what" to do will fare better. That's not easy, but I'm lucky to have it in my genes. As an entrepreneur, I love not just inventing but also testing my projects. I've acquired the skills to understand people's problems, and I ensure my survival by providing solutions to them. From this perspective, the ability to ask good questions, to question reality, and to have general rather than narrowly specific knowledge is crucial.
However, I don't know what will happen later. I can imagine a scenario where AI decides "what" to do better than I can. It's hard to find an answer for that scenario now, and I hope that by then I can simply observe the changes, having secured myself and my family.
Moreover, we might lose touch with AI altogether, as it will have no motivation to meet our human needs. This is the post-AGI, or ASI, stage, where AI might gain something akin to consciousness - though those are our words and our frames of perception. As Stephen Wolfram shows in one of the better TED talks on this topic, AI can create its own constructs and its own language, beyond our cognition:
Now is the time to act.
We have the chance to build security for ourselves and our loved ones while exploiting the incredible potential of AI. Part of the work done by AI isn't taken away from anyone - it's a new opportunity to create things we couldn't create without it. Creators, for example, can expand their audience and reach through translations or new publication formats. These are the examples we should focus on, knowing that AI and automation are tools to optimize our work and free up time for other, equally useful pursuits.
Greg