
DAI#46 – Skeleton key, exam cheats, and famous AI voices

by admin

Welcome to this week’s roundup of bio-generated AI news.

This week, AI brought famous voices back from beyond the grave.

Your AI memes are driving up Google’s utility bill.

And countries say ‘Let’s share our tech’ as they deploy more AI weapons.

Let’s dig in.

This one simple trick

After all the alignment work AI companies have done over the last 18 months, you’d expect that getting an AI model to misbehave would be tough by now, right?

Microsoft revealed a “Skeleton Key” jailbreak that works across different AI models and is laughably simple. But was this just a marketing exercise?

Making one model safe is tricky. What will it take to get multiple models to work together safely?

AI is becoming a big part of our smartphones, with solutions from multiple AI vendors being jammed together. This integration is creating some unusual collaborations between competitors and raising potential risks.

The way the biggest AI companies demo, release, and recall their products doesn’t exactly inspire confidence.

Search controversy

Google’s commitment to becoming carbon-neutral is being derailed by our hunger for AI.

The search giant’s sustainability report demonstrates the urgent need for greener AI and makes for uncomfortable reading if you’re concerned about the environment.

Google’s greenhouse gas emissions for 2023 are dramatically higher than they were in 2019.

Search upstart Perplexity AI is embroiled in controversy. The news articles it serves up are allegedly scraped from reputable news outlets and reproduced without attribution.

In a time when AI companies play fast and loose with data, Perplexity’s CEO says the outrage over its copy-paste approach is just a “misunderstanding.”

AI examination fails

The Detroit Police Department has blamed AI for a ‘misunderstanding’ that saw an innocent man arrested for shoplifting.

The department settled with the victim and changed its facial recognition policy. It turns out that trusting the computer when the person it flags clearly doesn’t look like the suspect is a bad idea.

‘So officer, since you and I both agree that I don’t look like the guy in the photo, am I free to go?’


It’s not just the cops that are being fooled by AI.

Researchers at the University of Reading created fake student profiles and submitted answers generated entirely by ChatGPT to online psychology exams.

Guess how many of these cheating ‘students’ were flagged by the exam evaluators.

AI finds its voice

It’s getting increasingly difficult to tell human voices apart from AI-generated ones. A new study found that the emotion a voice conveys affects the odds of correctly spotting the AI.

Even when we can’t tell them apart, our brain reacts differently to human and AI voices. Guess which part of your brain responds when it processes an AI voice.

Out-of-work voice actors may disagree, but AI-generated voices are an inevitable part of how we will consume content. ElevenLabs signed deals to use the iconic voices of some famous dead celebrities in its Reader App.

Which of these voices will you choose to read your ebook to you? How would Laurence Olivier feel about being reduced to reading your emails?

Fair share?

The UN adopted a Chinese-sponsored resolution that calls for wealthier countries to share their AI technology and know-how with developing countries. The US supported the resolution, but China accused it of ignoring key commitments by maintaining its ongoing sanctions.

Will this resolution see AI benefits flow to poorer countries, or will economic and political interests scupper it?

One of the key concerns about sharing AI tech is related to defense applications. AI weapons are moving from defense contractors’ dreams to grim reality.

Ongoing conflicts are turning modern battlefields into a breeding ground for experimental AI weaponry.

Is there an ethical way to allow AI to decide who lives and who dies? What happens when it inevitably goes wrong?

By design

University of Toronto researchers built a peptide structure prediction model called PepFlow that beats Google DeepMind’s AlphaFold 2.

Peptide drugs have key advantages over small-molecule and protein-based medicines. PepFlow could make it much easier for scientists to design the next groundbreaking medicine.

How much variation is there among butterflies? Are males more diverse than females? If these questions have kept you up at night, AI could help.

A new study used AI to unravel birdwing butterfly evolution, shedding light on evolutionary debates.

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

And that’s a wrap.

Have you tried the ElevenLabs Reader App? I don’t know if I’d want Judy Garland’s voice to read my emails to me. Which voices should they add next?

I recycle and try to reduce my use of plastic but Google’s sustainability report now has me rethinking how I use AI. Are environmentally friendly prompts something we should be working on?

Let’s hope AI gives us clean energy and quantum computing so we can get back to guilt-free ChatGPT.

Let us know what you think, chat with us on X, and keep sending us cool AI stories we may have missed.


