Big players, including Microsoft with Copilot, Google with Bard, and OpenAI with ChatGPT, are making AI chatbot technology previously restricted to test labs more accessible to the general public.
How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.”
Or, in the words of James Vincent, a human person: “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
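The "vast autocomplete" idea in the quotes above can be illustrated with a toy bigram model: count which word tends to follow which, then predict the most frequent successor. This is a deliberate oversimplification for illustration only; real LLMs use neural networks trained over billions of tokens, not raw bigram counts, and the tiny corpus here is invented.

```python
from collections import Counter, defaultdict

# A toy corpus; real models train on vastly larger text collections.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" most often in this corpus
```

Note that the model has no notion of truth, only of frequency, which is exactly why plausible-sounding output can still be factually wrong.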
But many more pieces of the AI landscape are coming into play, and there are going to be problems along the way. You can be sure to see it all unfold here on The Verge.
Baidu’s ChatGPT competitor has reached over 100 million users.
Ernie Bot, the AI-powered chatbot developed by Chinese tech giant Baidu, hit the milestone months after its public release in August. Just like ChatGPT, Ernie Bot can do things like summarize documents, answer questions, generate outlines, and more.
Microsoft’s AI chatbot Copilot will now be able to churn out AI songs on demand — thanks to a new plug-in with Suno. The Cambridge-based AI music startup offers a tool on Discord that can compose an original song — complete with lyrics — based on a text prompt. Now, Copilot users will be able to access Suno using the Microsoft chatbot.
To start making music, Copilot users only need to sign in to their accounts and enable the Suno plug-in, or click on the logo that says, "Make music with Suno." Users then need to think of a simple one- or two-line text prompt describing their desired song, such as "create a folk song about Alaska summers" or "write a song about cats in the style of Cat Power" (both prompts I tried personally on Suno via Discord), and type it into Copilot.
Google just announced Gemini, its most powerful suite of AI models yet, and the company has already been accused of lying about its performance.
An op-ed from Bloomberg claims Google misrepresented the power of Gemini in a recent video. Google aired an impressive “what the quack” hands-on video during its announcement earlier this week, and columnist Parmy Olson says it seemed remarkably capable in the video — perhaps too capable.
It’s the beginning of a new era of AI at Google, says CEO Sundar Pichai: the Gemini era. Gemini is Google’s latest large language model, which Pichai first teased at the I/O developer conference in June and is now launching to the public. To hear Pichai and Google DeepMind CEO Demis Hassabis describe it, it’s a huge leap forward in an AI model that will ultimately affect practically all of Google’s products. “One of the powerful things about this moment,” Pichai says, “is you can work on one underlying technology and make it better and it immediately flows across our products.”
Gemini is more than a single AI model. There’s a lighter version called Gemini Nano that is meant to be run natively and offline on Android devices. There’s a beefier version called Gemini Pro that will soon power lots of Google AI services and is the backbone of Bard starting today. And there’s an even more capable model called Gemini Ultra that is the most powerful LLM Google has yet created and seems to be mostly designed for data centers and enterprise applications.
While OpenAI’s ChatGPT has become a worldwide phenomenon and one of the fastest-growing consumer products ever, Google’s Bard has been something of an afterthought. The chatbot has steadily gained new features, including access to your data across other Google products, but its answers and information have rarely seemed to rival what you get from ChatGPT and other bots using GPT-3 and GPT-4.
The case for Bard may have just gotten more compelling, though: as of today, for English-speaking users in 170 countries, Bard is now powered by Google’s new Gemini model, which Google says matches and even exceeds OpenAI’s tech in a number of ways. (Google says Gemini is coming to more languages and countries “in the near future.”)
OpenAI execs dubbed ChatGPT a “low-key research preview.”
The phrase became an internal joke after ChatGPT’s popularity exploded right out of the gate, according to the NYT’s recap of its launch a year ago and the reaction among Big Tech companies.
Google and Meta scrambled AI teams to launch competing products like Bard and LLaMA, even if that meant removing some guardrails. And Microsoft’s rush to beat Google had Satya Nadella telling Nvidia’s Jensen Huang, “We have a big order coming to you, a really big order coming to you,” as he ordered $2 billion in chips.

Microsoft Copilot is now generally available.
Copilot, the AI chatbot formerly known as Bing Chat, is out of preview. That means Copilot is now available in 105 languages and 169 countries “on all modern browsers for mobile and web,” according to Caitlin Roulston, the director of communications at Microsoft.
Even though the preview label is going away today, Roulston says Microsoft will continue to “launch new features in preview while we iterate, listen to feedback, and improve the experience for our users.”

A troll worthy of Clippy themself.
This Wall Street Journal article about the recent drama at OpenAI contains an amazing anecdote. Apparently an employee at AI rival Anthropic thought it’d be funny to send “thousands of paper clips in the shape of OpenAI’s logo” as a prank, in reference to the infamous paperclip maximizer thought experiment.
Weirdly, I think OpenAI’s logo makes for a great paperclip design. Should we be worried?

Sony finished its second round of tests of its in-camera authenticity tech.
The company tested baking a cryptographic “digital signature” into photos taken by its cameras to set them apart from AI-generated or otherwise faked images. Sony says the feature will come to cameras like the Alpha 9 III via a firmware update in Spring 2024.
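Sony hasn’t published the details of its scheme, but the general idea of a signature that breaks if the image is altered can be sketched as follows. This is a minimal stand-in using an HMAC with a hypothetical camera-held key; a real in-camera system would use an asymmetric signature (such as ECDSA) so anyone can verify a photo without knowing the signing key.

```python
import hashlib
import hmac

# Hypothetical secret held by the camera; Sony's actual scheme is not public,
# and a production design would use an asymmetric key pair instead.
CAMERA_KEY = b"secret-key-held-in-camera-hardware"

def sign_photo(image_bytes: bytes) -> str:
    """Produce a signature over the raw image bytes at capture time."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_photo(image_bytes: bytes, signature: str) -> bool:
    """Any later edit (or an AI-generated fake) fails verification."""
    expected = sign_photo(image_bytes)
    return hmac.compare_digest(expected, signature)

photo = b"...raw sensor data..."
sig = sign_photo(photo)
print(verify_photo(photo, sig))            # True: image is untouched
print(verify_photo(photo + b"edit", sig))  # False: image was altered
```

The point is simply that the signature is bound to the exact bytes the sensor produced, so a doctored or synthetic image can’t carry a valid one.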