ChatGPT, Copilot, and Bard: How AI is rewriting the internet

Big players, including Microsoft with Copilot, Google with Bard, and OpenAI with ChatGPT, are making AI chatbot technology that was previously restricted to test labs accessible to the general public.

How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.” 

Or, in the words of James Vincent, a human person: “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
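As a toy illustration of that "vast autocomplete" idea, here is a minimal bigram model in Python: it counts which word follows which in a tiny made-up corpus, then predicts the statistically likeliest next word. (Models like GPT work on subword tokens with neural networks rather than lookup tables; the corpus and code here are purely illustrative.)

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then "autocomplete"
# by picking the statistically likeliest next word. This is only a
# sketch of next-word prediction, not how GPT works internally.
corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" — the most common word after "the"
```

A real LLM does the same thing at vastly larger scale: instead of counting exact word pairs, it learns a neural network that assigns a probability to every possible next token given all the preceding ones.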

But there are many more pieces of the AI landscape coming into play, and there will be problems along the way. You can be sure to see it all unfold here on The Verge.

Highlights

  • Ex-Trump Lawyer Michael Cohen Attends Court

    Michael Cohen, the former lawyer for Donald Trump, admitted to citing fake, AI-generated court cases in a legal document that wound up in front of a federal judge, as reported earlier by The New York Times. A filing unsealed on Friday says Cohen used Google’s Bard to perform research after mistaking it for “a super-charged search engine” rather than an AI chatbot.

    The document in question was a motion that asked a federal judge to shorten the length of Cohen’s three-year probation, which he’s now facing following prison time and a guilty plea to tax evasion and other charges. But after reviewing the letter brief, US District Judge Jesse Furman wrote in a filing that “none of these cases exist” and asked Cohen’s lawyer, David Schwartz, to explain why the three cases are included in the motion as well as whether his now-disbarred client helped draft it.

    Read Article >

  • Baidu’s ChatGPT competitor has reached over 100 million users.

    Ernie Bot, the AI-powered chatbot developed by Chinese tech giant Baidu, hit the milestone months after its public release in August. Just like ChatGPT, Ernie Bot can do things like summarize documents, answer questions, generate outlines, and more.

  • The New York Times Will Report Q2 2023 Results On Tuesday

    The New York Times is suing OpenAI and Microsoft for copyright infringement, claiming the two companies built their AI models by “copying and using millions” of the publication’s articles and now “directly compete” with its content as a result.

    As outlined in the lawsuit, the Times alleges OpenAI and Microsoft’s large language models (LLMs), which power ChatGPT and Copilot, “can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style.” This “undermine[s] and damage[s]” the Times’ relationship with readers, the outlet alleges, while also depriving it of “subscription, licensing, advertising, and affiliate revenue.”

    Read Article >

  • An illustration of a glitchy pencil writing on paper.

    Earlier this month, Google announced the release of Gemini, which it calls its most powerful AI model yet. It immediately integrated Gemini into its flagship generative AI chatbot, Bard, in hopes of steering more users away from its biggest competitor, OpenAI’s ChatGPT.

    ChatGPT and the new Gemini-powered Bard are similar products. Gemini Pro is most comparable to GPT-4, available in the subscription-based ChatGPT Plus. So we decided to test the two chatbots to see just how they stack up — in accuracy, speed, and overall helpfulness.

    Read Article >

  • Rows of blue headphones floating diagonally against a pink background.

    Microsoft’s AI chatbot Copilot will now be able to churn out AI songs on demand — thanks to a new plug-in with Suno. The Cambridge-based AI music startup offers a tool on Discord that can compose an original song — complete with lyrics — based on a text prompt. Now, Copilot users will be able to access Suno using the Microsoft chatbot. 

    To start making music, Copilot users need only sign in to their accounts and enable the Suno plug-in, or click the logo that says, “Make music with Suno.” From there, they type in a simple one- or two-line prompt describing their desired song, such as “create a folk song about Alaska summers” or “write a song about cats in the style of Cat Power” (both prompts I tried personally on Suno via Discord).

    Read Article >

  • An image showing a graphic of a brain on a black background

    Susan Zhuang, a Democrat who will soon represent the 43rd Council District in Brooklyn, New York, admitted to using AI when answering questions from a local news publication, according to a report by the New York Post. In a text message sent to the Post, Zhuang wrote that she uses “AI as a tool to help foster deeper understanding” because English is not her first language.

    The responses in question were included in an article from City & State, which asked local council member-elects to fill out a questionnaire about their personal interests and policies. However, the Post noticed something was off with Zhuang’s answer to the question, “What makes someone a New Yorker?”:

    Read Article >

  • The Microsoft Edge web browser logo against a swirling blue background.

    One feature added to Microsoft’s AI Copilot in the Edge browser this week is the ability to generate text summaries of videos. But Edge Copilot’s time-saving feature is still fairly limited and only works on pre-processed videos or those with subtitles, as Mikhail Parakhin, Microsoft’s CEO of advertising and web services, explained.

    As spotted by MSPowerUser, Parakhin writes, “In order for it to work, we need to pre-process the video. If the video has subtitles – we can always fallback on that, if it does not and we didn’t preprocess it yet – then it won’t work,” in response to a question. 

    Read Article >

  • Google just announced Gemini, its most powerful suite of AI models yet, and the company has already been accused of lying about its performance. 

    An op-ed from Bloomberg claims Google misrepresented the power of Gemini in a recent video. Google aired an impressive “what the quack” hands-on video during its announcement earlier this week, and columnist Parmy Olson says it seemed remarkably capable in the video — perhaps too capable.

    Read Article >

  • An illustration showing Gemini’s ability to take photos, audio, and more.

    An illustration showing Gemini’s ability to take photos, audio, and more.

    Image: Google

    It’s the beginning of a new era of AI at Google, says CEO Sundar Pichai: the Gemini era. Gemini is Google’s latest large language model, which Pichai first teased at the I/O developer conference in June and is now launching to the public. To hear Pichai and Google DeepMind CEO Demis Hassabis describe it, it’s a huge leap forward in an AI model that will ultimately affect practically all of Google’s products. “One of the powerful things about this moment,” Pichai says, “is you can work on one underlying technology and make it better and it immediately flows across our products.” 

    Gemini is more than a single AI model. There’s a lighter version called Gemini Nano that is meant to be run natively and offline on Android devices. There’s a beefier version called Gemini Pro that will soon power lots of Google AI services and is the backbone of Bard starting today. And there’s an even more capable model called Gemini Ultra that is the most powerful LLM Google has yet created and seems to be mostly designed for data centers and enterprise applications. 

    Read Article >

  • A graphic showing Bard’s logo with Gmail, Drive, Docs, and other apps

    Image: Google

    While OpenAI’s ChatGPT has become a worldwide phenomenon and one of the fastest-growing consumer products ever, Google’s Bard has been something of an afterthought. The chatbot has steadily gained new features, including access to your data across other Google products, but its answers and information have rarely seemed to rival what you get from ChatGPT and other bots using GPT-3 and GPT-4. 

    The case for Bard may have just gotten more compelling, though: as of today, for English-speaking users in 170 countries, Bard is now powered by Google’s new Gemini model, which it says matches and even exceeds OpenAI’s tech in a number of ways. (Google says Gemini is coming to more languages and countries “in the near future.”)

    Read Article >

  • OpenAI execs dubbed ChatGPT a “Low key research preview.”

    The phrase became an internal joke after ChatGPT’s popularity exploded right out of the gate, according to the NYT’s recap of its launch a year ago and the reaction among Big Tech companies.

    Google and Meta scrambled AI teams to launch competing products like Bard and LLaMA, even if that meant removing some guardrails. And Microsoft’s rush to beat Google had Satya Nadella telling Nvidia’s Jensen Huang, “We have a big order coming to you, a really big order coming to you,” as he ordered $2 billion in chips.

  • A Bing logo.

    Microsoft is readying a new Bing feature that should take the hassle out of coming up with your own AI prompt. The GPT-4-powered capability, called Deep Search, takes your Bing query and expands on it, allowing the search engine to find answers about several topics related to your question on the web.

    As an example, Microsoft shows how Bing turns a vague search for “how do points systems work in Japan” into a detailed prompt that asks Bing to:

    Read Article >

  • Microsoft Copilot is now generally available.

    Copilot, the AI chatbot formerly known as Bing Chat, is out of preview. That means Copilot is now available in 105 languages and 169 countries “on all modern browsers for mobile and web,” according to Caitlin Roulston, the director of communications at Microsoft.

    Even though the preview label is going away today, Roulston says Microsoft will continue to “launch new features in preview while we iterate, listen to feedback, and improve the experience for our users.”

  • A trippy graphic displaying a collection of items like paintbrushes, books, phone messages, and a notepad to represent generative AI. A large pair of eyes and hands can be seen at the center of the image.

    There have been a handful of before-and-after moments in the modern technology era. Everything was one way, and then just like that, it was suddenly obvious it would never be like that again. Netscape showed the world the internet; Facebook made that internet personal; the iPhone made plain how the mobile era would take over. There are others — there’s a dating-app moment in there somewhere, and Netflix starting to stream movies might qualify, too — but not many.

    ChatGPT, which OpenAI launched a year ago today, might have been the lowest-key game-changer ever. Nobody took a stage and announced that they’d invented the future, and nobody thought they were launching the thing that would make them rich. If we’ve learned one thing in the last 12 months, it’s that no one — not OpenAI’s competitors, not the tech-using public, not even the platform’s creators — thought ChatGPT would become the fastest-growing consumer technology in history. And in retrospect, the fact that nobody saw ChatGPT coming might be exactly why it has seemingly changed everything.

    Read Article >

  • Sam Altman at OpenAI’s developer conference.

    Sam Altman is officially OpenAI’s CEO again.

    Just before Thanksgiving, the company said it had reached a deal in principle for him to return, and now it’s done. Microsoft is getting a non-voting observer seat on the nonprofit board that controls OpenAI as well, the company announced on Wednesday.

    Read Article >

  • A troll worthy of Clippy themself.

    This Wall Street Journal article about the recent drama at OpenAI contains an amazing anecdote. Apparently an employee at AI rival Anthropic thought it’d be funny to send “thousands of paper clips in the shape of OpenAI’s logo” as a prank, in reference to the infamous paperclip maximizer thought experiment.

    Weirdly, I think OpenAI’s logo makes for a great paperclip design. Should we be worried?

  • Sony finished its second round of tests of its in-camera authenticity tech.

    The company tested baking a cryptographic “digital signature” into photos taken by its cameras to set them apart from AI-generated or otherwise faked images. Sony says the feature will come to cameras like the Alpha 9 III via a firmware update in Spring 2024.
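    Sony hasn’t published the details of its scheme, and a production system would presumably use an asymmetric signature tied to a key inside the camera’s hardware. Purely to illustrate the sign-at-capture, verify-later flow, here is a sketch using Python’s standard-library HMAC as a stand-in for a real digital signature; the key and image bytes are made up for the example.

```python
import hashlib
import hmac

# Hypothetical secret; a real camera would hold a private key in
# hardware and produce an asymmetric (e.g. ECDSA) signature instead.
CAMERA_KEY = b"secret-key-burned-into-camera"

def sign_image(image_bytes: bytes) -> str:
    """Produce a signature over the raw image bytes at capture time."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """Later, check that the image still matches its capture-time signature."""
    return hmac.compare_digest(sign_image(image_bytes), signature)

photo = b"\x89PNG...raw sensor data..."  # stand-in for real image data
sig = sign_image(photo)

print(verify_image(photo, sig))            # True: image is untouched
print(verify_image(photo + b"edit", sig))  # False: any edit breaks the signature
```

    The point of the design is that changing even a single byte of the image invalidates the signature, so an edited or AI-generated image can’t masquerade as a camera original.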

  • A rendition of OpenAI’s logo, which looks like a stylized whirlpool.

    ChatGPT’s voice feature is now available to all users for free. In a post on X (formerly Twitter), OpenAI announced users can now tap the headphones icon to use their voice to talk with ChatGPT in the mobile app, as well as get an audible response.

    OpenAI first rolled out the ability to prompt ChatGPT with your voice and images in September, but it only made the feature available to paying users.

    Read Article >

  • Anthropic’s logo — the outline of a head surrounded by three more larger outlines of heads, and a star-shaped set of lines emerging from a center point of the smallest head and bursting out in seven different lengths of line, terminating in filled-in circles. The background is orange.

    Image: Anthropic

    While OpenAI is in the middle of an existential crisis, there’s a new chatbot update from Anthropic, the Google-backed AI startup founded by former OpenAI engineers who left over disagreements about the company’s increasingly commercial direction as its Microsoft partnership went on.

    Anthropic has announced that the latest update of its chatbot, Claude 2.1, can digest up to 200,000 tokens at once for Pro tier users, which it says equals over 500 pages of material.
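    That page-count claim lines up with common rules of thumb. Assuming roughly 0.75 English words per token and about 300 words per page (both back-of-the-envelope assumptions, not Anthropic’s figures):

```python
# Sanity-checking "200,000 tokens equals over 500 pages" with common
# rules of thumb: ~0.75 English words per token, ~300 words per page.
tokens = 200_000
words = tokens * 0.75  # ~150,000 words
pages = words / 300    # ~500 pages

print(f"{words:.0f} words, about {pages:.0f} pages")
```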

    Read Article >

  • Image of Meta’s logo with a red and blue background.

    Meta has reportedly broken up its Responsible AI (RAI) team as it puts more of its resources into generative artificial intelligence. The Information broke the news today, citing an internal post it had seen.

    According to the report, most RAI members will move to the company’s generative AI product team, while others will work on Meta’s AI infrastructure. The company regularly says it wants to develop AI responsibly and even has a page devoted to the promise, where the company lists its “pillars of responsible AI,” including accountability, transparency, safety, privacy, and more.

    Read Article >

  • Some of Bing’s search results now have AI-generated descriptions.

    Microsoft says it’s using GPT-4 to garner the “most pertinent insights” from webpages and write summaries beneath Bing search results. You won’t see these summaries beneath every search result, but you can check which ones are AI-generated by clicking the little arrow next to the result’s URL. If the description is written by AI, it’ll say “AI-Generated Caption.”

  • Google’s next-generation ‘Gemini’ AI model is reportedly delayed.

    Earlier this year, Google combined two AI teams into one group, which is working on a new model to compete with OpenAI’s GPT-4. Its leader, Demis Hassabis, discussed the combination on Decoder:

    And we’re already feeling, even a couple of months in, the benefits and the strengths of that with projects like Gemini that you may have heard of, which is our next-generation multimodal large models — very, very exciting work going on there, combining all the best ideas from across both world-class research groups. It’s pretty impressive to see.

    Now, The Information cites two sources saying Gemini’s launch is expected in the first quarter of 2024, not this month as they were previously told. It also reports that Google co-founder Sergey Brin has been spending “four to five days a week” with the developers.
