
New study finds AI copies humans in the winter and gets lazier

WHY THIS MATTERS IN BRIEF

You’d never imagine that “digital entities” could take on human traits, but it seems they do …

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

It seems that GPT-4 Turbo – the most recent incarnation of the Large Language Model (LLM) from OpenAI – winds down for the winter and gives worse answers to questions, in much the same way that people winding down for the December holidays do.

 


 

We all get those end-of-year holiday season chill vibes and, according to a new study, that appears to be why GPT-4 Turbo seems to be giving worse answers: having been trained on datasets that include December, it seems to have “learned” that humans do worse work over the holidays and then “adopted” those same behaviours. Weird, I know, right!?

 

The Future of AI and Generative AI, a keynote by Matthew Griffin

 

As Wccftech highlighted, this interesting observation about the AI’s behaviour was made by LLM enthusiast Rob Lynch on X (formerly Twitter).

The claim is that GPT-4 Turbo produces shorter responses – to a statistically significant extent – when the Artificial Intelligence (AI) believes that it’s December rather than May, with the testing done by changing the date in the system prompt.
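
For anyone who wants to poke at this themselves, a test like Lynch’s is straightforward to sketch: ask the model the same question many times with only the date in the system prompt changed, then compare response lengths. Here’s a minimal illustration using the OpenAI Python client – the model name, prompt wording, and sample size are my own assumptions, not Lynch’s exact setup.

```python
# Hypothetical sketch of the date A/B test described above: same question,
# two system prompts differing only in the stated date, compare reply lengths.
from statistics import mean

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Write a step-by-step guide to setting up a Python virtual environment."

def sample_lengths(date_str: str, n: int = 20) -> list[int]:
    """Ask the same question n times with date_str injected into the system prompt."""
    lengths = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4-1106-preview",  # the GPT-4 Turbo preview model
            messages=[
                {"role": "system",
                 "content": f"You are a helpful assistant. The current date is {date_str}."},
                {"role": "user", "content": QUESTION},
            ],
        )
        lengths.append(len(resp.choices[0].message.content))
    return lengths

may = sample_lengths("2023-05-15")
december = sample_lengths("2023-12-15")
print(f"Mean reply length, May prompt:      {mean(may):.0f} characters")
print(f"Mean reply length, December prompt: {mean(december):.0f} characters")
```

Because each response is sampled, a handful of runs proves nothing on its own – you’d want a large sample and a proper statistical test before claiming significance, which is exactly why independent reproductions matter here.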

 


 

So the tentative conclusion is that GPT-4 Turbo appears to learn this behaviour from us, an idea advanced by Ethan Mollick, an Associate Professor at the Wharton School of the University of Pennsylvania who specializes in AI.

Apparently GPT-4 Turbo is about 5% less productive when it thinks it’s the holiday season. This has been dubbed the ‘AI winter break hypothesis’ and it’s an area worth exploring further, because not only is it odd in its own right but, yet again, it shows how unexpectedly AIs can behave …

What it goes to show is how an AI can pick up unintended influences that we wouldn’t dream of considering – although some researchers obviously did notice this one, consider it, and then test it. But still, you get what I mean, and there’s a whole lot of worry around these kinds of unexpected developments.

 


 

As AI progresses, its influences, and the direction the tech takes itself in, need careful watching over – hence all the talk of safeguards for AI being vital. We’re rushing ahead with developing AI – or rather, the likes of Google, Microsoft, and OpenAI certainly are – caught up in a tech arms race, with most of the focus on driving progress as hard as possible and safeguards treated as more of an afterthought.

At any rate, this specific experiment is just one piece of evidence for the winter break hypothesis in GPT-4 Turbo, and Lynch has urged others to get in touch if they can reproduce the results – there is one report of a successful reproduction so far. Still, that’s not enough for a concrete conclusion yet, so watch this space.
