
A Battle for AI Supremacy Among Heavyweights

May 24, 2024

There is a battle for AI supremacy raging, and guess what? It's not with the "scary" AI entities supposedly roaming the earth. This is not Skynet and the Terminator. Arnold Schwarzenegger and friends are not roaming the planet doing battle and sucking the life out of humanity. It's companies like Meta, Alphabet, and X that want to seize control.

While the doomsayers are screaming about AI taking over, these companies, and many others, are in a war to grab exclusive data deals that will ensure their models are best equipped to meet the ever-growing demand for AI.

[Image: Robot hand with a chess piece knocking over another chess piece with a person sitting on it]
On one hand, there's a bunch of organizations and governments scrambling to develop safety pledges that corporations can sign on to, both for PR and for collaborative development. Currently, there are numerous agreements on the table, including the Frontier Model Forum; Safety by Design, started by the anti-human-trafficking organization Thorn; the U.S. government's AI Safety Institute Consortium; and the European Union's Artificial Intelligence Act, which imposes a ton of AI development rules in the region.

While all of that is going on, Meta established an AI product advisory council with experts who will advise the company on "evolving AI opportunities."

With so many big-league players (with loads of money and other resources) in the game aiming for AI domination, it's going to be important that safety remains at the forefront, and the various agreements and accords can provide some of those protections.

The big fear, of course, is that AI is getting smarter than humans and is on a mission to make us its slaves. We'll be used up and then tossed away as robots take over. Skynet, here we come. Bye-bye, humanity.

But we're not even close. The most recent flood of generative AI tools is pretty darn cool with all the things it can do, but remember, these tools aren't actually thinking on their own. All they do is match data based on commonalities in their models. Basically, right now they are just super smart math machines. AI has no consciousness; it is not sentient in any way.
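To make that "math machine" idea concrete, here's a deliberately tiny sketch of my own (not how Meta, OpenAI, or anyone else actually builds their models): a toy word-pair counter that "generates" text purely by sampling whatever word tends to follow the last one in its training data. Real LLMs do something vastly more sophisticated with billions of parameters, but the underlying principle is the same kind of statistical pattern-matching, not thought.

```python
from collections import Counter, defaultdict
import random

# Toy "training data" -- a few words to count patterns over.
corpus = "the robot moved the piece and the robot won the game".split()

# Count which word tends to follow which (a simple bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 5) -> str:
    """Continue a phrase by sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        next_word = random.choices(list(options), weights=options.values())[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the robot moved the piece and" -- math, not a mind
```

The output can look eerily fluent for such a dumb little script, which is exactly the point: fluency comes from patterns in the data, not from understanding.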

According to Meta's chief AI scientist Yann LeCun, a highly respected voice in AI development, "LLMs have a very limited understanding of logic, and don't understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan hierarchically."

Basically, they can't replicate a human brain, or even an animal brain, despite the appearance of human-like responses. It's all smoke and mirrors right now.

However, there are several groups (Meta among them) working on artificial general intelligence (AGI), which is perhaps the next step toward simulating human-like thought processes. Those big corporations are the ones pushing for smarter, more humanized AI.

So, go ahead and ask ChatGPT or your AI buddy of choice if they are getting ready to destroy mankind. The answer may give you the creeps, it may comfort you, or it might open your eyes to the real possibilities of AI.

You're safe . . . for now.

