Ice Lounge Media

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The AI Hype Index: AI agent cyberattacks, racing robots, and musical models

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here.

Is AI “normal”?

Despite its ubiquity, AI is seen as anything but a normal technology. There is talk of AI systems that will soon merit the term “superintelligence,” and the former CEO of Google recently suggested we control AI models the way we control uranium and other nuclear weapons materials.

A recent essay by two AI researchers at Princeton argues that AI is a general-purpose technology whose application might be better compared to the drawn-out adoption of electricity or the internet than to nuclear weapons. Read on to learn more about the policies the authors propose.

—James O’Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 US Congress has passed the Take It Down Act
The legislation is designed to crack down on revenge porn and deepfake nudes. (WP $)
+ But critics fear it’ll be weaponized to suppress online speech and encryption. (The Verge)
+ Donald Trump has said he wants to use the bill to protect himself. (The Hill)

2 The Trump administration is embracing shady crypto firms
Including Tether, whose stablecoin is often used by criminals. (NYT $)
+ Crypto lender Nexo, which ran into regulatory trouble, is now returning to the US. (CoinDesk)
+ The UAE is planning a stablecoin regulated by the country’s central bank. (Bloomberg $)

3 Elon Musk’s DOGE conflicts of interest are worth $2.37 billion
Although experts estimate the true worth could be higher. (The Guardian)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

4 Researchers secretly deployed bots into a debate subreddit
In a highly unethical bid to try to change users’ minds. (404 Media)
+ AI is no replacement for human mediators. (MIT Technology Review)

5 Amazon’s first internet satellites have been launched successfully
27 down, 3,209 to go. (Reuters)
+ It’s Bezos’s answer to Musk’s Starlink. (FT $)

6 Amazon is pressuring its suppliers to slash their prices
It’s trying to protect its margins as Trump’s tariffs start to bite. (FT $)
+ Temu’s approach? Pass on the new taxes to its customers. (Bloomberg $)
+ Here’s how the tariffs are going to worsen the digital divide. (Wired $)
+ Sweeping tariffs could threaten the US manufacturing rebound. (MIT Technology Review)

7 Sam Altman and Satya Nadella are drifting apart
The pair disagree on OpenAI’s approach to AGI, among other things. (WSJ $)

8 Duolingo is replacing human workers with AI
It’s all part of the plan to make the language learning app “AI-first.” (The Verge)

9 Earthquakes may be a rich source of hydrogen 
Which is good news for the scientists trying to track down the gas. (New Scientist $)
+ Why the next energy race is for underground hydrogen. (MIT Technology Review)

10 The Hubble Space Telescope is turning 35 years old 🔭
And it’s still capturing jaw-dropping images. (The Atlantic $)
+ Scientists have made some interesting discoveries about Jupiter’s volcanic moon. (Quanta Magazine)

Quote of the day

“When the person championing your anti-abuse legislation is promising to use it for abuse, you might have a problem.”

—Entrepreneur Mike Masnick argues in a post on Techdirt that Donald Trump’s endorsement of the Take It Down Act is self-serving.

One more thing

The terrible complexity of technological problems

The philosopher Karl Popper once argued that there are two kinds of problems in the world: clock problems and cloud problems. As the metaphor suggests, clock problems obey a certain logic. The fix may not be easy, but it’s achievable.

Cloud problems offer no such assurances. They are inherently complex and unpredictable, and they usually have social, psychological, or political dimensions. Because of their dynamic, shape-shifting nature, trying to “fix” a cloud problem often ends up creating several new problems.

But there are ways to reckon with this kind of technological complexity—and the wicked problems it creates. Read the full story.

—Bryan Gardiner

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The annual Corgi Derby is a sight to behold—congratulations to the winner Juno!
+ Caroline Polachek is the sound of spring.
+ Why women are overtaking men in the most extreme sporting events 🏃‍♀️
+ Maybe there’s something to these obscenely priced celebrity smoothies.

Read more

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

AI agents are the AI industry’s hypiest new product—intelligent assistants capable of completing tasks without human supervision. But while they can be theoretically useful—Simular AI’s S2 agent, for example, intelligently switches between models depending on what it’s been told to do—they could also be weaponized to execute cyberattacks. Elsewhere, OpenAI is reported to be throwing its hat into the social media arena, and AI models are getting more adept at making music. Oh, and if the results of the first half-marathon pitting humans against humanoid robots are anything to go by, we won’t have to worry about the robot uprising any time soon.

Read more

Right now, despite its ubiquity, AI is seen as anything but a normal technology. There is talk of AI systems that will soon merit the term “superintelligence,” and the former CEO of Google recently suggested we control AI models the way we control uranium and other nuclear weapons materials. Anthropic is dedicating time and money to study AI “welfare,” including what rights AI models may be entitled to. Meanwhile, such models are moving into disciplines that feel distinctly human, from making music to providing therapy.

No wonder that anyone pondering AI’s future tends to fall into either a utopian or a dystopian camp. While OpenAI’s Sam Altman muses that AI’s impact will feel more like the Renaissance than the Industrial Revolution, over half of Americans are more concerned than excited about AI’s future. (That half includes a few friends of mine, who at a party recently speculated whether AI-resistant communities might emerge—modern-day Mennonites, carving out spaces where AI is limited by choice, not necessity.) 

So against this backdrop, a recent essay by two AI researchers at Princeton felt quite provocative. Arvind Narayanan, who directs the university’s Center for Information Technology Policy, and doctoral candidate Sayash Kapoor wrote a 40-page plea for everyone to calm down and think of AI as a normal technology. This runs counter to the “common tendency to treat it akin to a separate species, a highly autonomous, potentially superintelligent entity.”

Instead, according to the researchers, AI is a general-purpose technology whose application might be better compared to the drawn-out adoption of electricity or the internet than to nuclear weapons—though they concede this is in some ways a flawed analogy.

The core point, Kapoor says, is that we need to start differentiating between the rapid development of AI methods—the flashy and impressive displays of what AI can do in the lab—and what comes from the actual applications of AI, which in historical examples of other technologies lag behind by decades. 

“Much of the discussion of AI’s societal impacts ignores this process of adoption,” Kapoor told me, “and expects societal impacts to occur at the speed of technological development.” In other words, the adoption of useful artificial intelligence, in his view, will be less of a tsunami and more of a trickle.

In the essay, the pair make some other bracing arguments: terms like “superintelligence” are so incoherent and speculative that we shouldn’t use them; AI won’t automate everything but will birth a category of human labor that monitors, verifies, and supervises AI; and we should focus more on AI’s likelihood to worsen current problems in society than the possibility of it creating new ones.

“AI supercharges capitalism,” Narayanan says. It has the capacity to either help or hurt inequality, labor markets, the free press, and democratic backsliding, depending on how it’s deployed, he says. 

There’s one alarming deployment of AI that the authors leave out, though: the use of AI by militaries. That, of course, is picking up rapidly, raising alarms that life and death decisions are increasingly being aided by AI. The authors exclude that use from their essay because it’s hard to analyze without access to classified information, but they say their research on the subject is forthcoming. 

One of the biggest implications of treating AI as “normal” is that it would upend the position that both the Biden administration and now the Trump White House have taken: Building the best AI is a national security priority, and the federal government should take a range of actions—limiting what chips can be exported to China, dedicating more energy to data centers—to make that happen. In their paper, the two authors refer to US-China “AI arms race” rhetoric as “shrill.”

“The arms race framing verges on absurd,” Narayanan says. The knowledge it takes to build powerful AI models spreads quickly and is already being undertaken by researchers around the world, he says, and “it is not feasible to keep secrets at that scale.” 

So what policies do the authors propose? Rather than planning around sci-fi fears, Kapoor talks about “strengthening democratic institutions, increasing technical expertise in government, improving AI literacy, and incentivizing defenders to adopt AI.” 

By contrast to policies aimed at controlling AI superintelligence or winning the arms race, these recommendations sound totally boring. And that’s kind of the point.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Read more

Crypto group asks Trump to end prosecution of crypto devs, Roman Storm

The crypto lobby group, the DeFi Education Fund, has petitioned the Trump administration to end what it claimed was the “lawless prosecution” of open-source software developers, including Roman Storm, a creator of the crypto mixing service Tornado Cash.

In an April 28 letter to White House crypto czar David Sacks, the group urged President Donald Trump “to take immediate action to discontinue the Biden-era Department of Justice’s lawless campaign to criminalize open-source software development.” 

The letter specifically mentioned the prosecution of Storm, who was charged in August 2023 with helping launder over $1 billion in crypto through Tornado Cash. His trial is still set for July, while his co-founder and fellow defendant, Roman Semenov, remains at large and is believed to be in Russia.

The DeFi Education Fund said that in Storm’s case, the Department of Justice is attempting to hold software developers criminally liable for how others use their code, which is “not only absurd in principle, but it sets a precedent that potentially chills all crypto development in the United States.”

The group also argued that the prosecution contradicts guidance issued during Trump’s first term by the Treasury Department’s Financial Crimes Enforcement Network (FinCEN), which established that developers of self-custodial, peer-to-peer protocols are not money transmitters.

Source: DeFi Education Fund

“This kind of legal environment does not just chill innovation — it freezes it,” they argued. The letter added that it also “empowers politically-motivated enforcement and puts every open-source developer at risk, regardless of industry.”

In January, a federal court in Texas ruled that the Treasury overstepped its authority by sanctioning Tornado Cash. 

Stakes could not be higher

The group thanked Trump for his support of the industry and his stated goal to make America the “crypto capital of the planet.” 

They added, however, that his goal can’t be realized if developers are prosecuted for building tools that enable the technology.

“We ask President Trump to protect American software developers, restore legal clarity, and end this unlawful DOJ overreach. The job’s not finished, and the stakes could not be higher.”

Related: Tornado Cash dev wants charges dropped after court said OFAC ‘overstepped’

Variant Fund chief legal officer Jake Chervinsky said the Justice Department’s case against Storm is “an outdated remnant of the Biden administration’s war on crypto.” 

“There is no justification in law or policy for prosecuting software developers for launching non-custodial smart contract protocols,” he added. 

At the time of writing, the petition had attracted 232 signatures from industry executives and developers, including Coinbase co-founder Fred Ehrsam, Paradigm co-founder Matt Huang, and Ethereum core developer Tim Beiko.

Magazine: Bitcoin $100K hopes on ice, SBF’s mysterious prison move: Hodler’s Digest

Read more

Mastercard links with Circle, Paxos for merchant stablecoin payments

Mastercard says it will allow merchants across its network to be paid with stablecoins in a partnership with payment processor Nuvei and stablecoin issuers Circle and Paxos. 

Through the venture, 150 million merchants across the Mastercard network will now have the option to receive payments in stablecoins, regardless of how a customer pays, Mastercard said on April 28.

The payments giant also partnered with crypto exchange OKX for a crypto-enabled bank card, which Mastercard product chief Jorn Lambert said creates a “360-degree approach” where consumers can spend stablecoins and merchants can receive them.

He added that the “mainstream use cases are clear” for blockchain tech, and the company wanted “to make it as easy for merchants to receive stablecoin payments and for consumers to use them.”

Source: Mastercard News

The stablecoin market has continued to grow, crossing a market value of $230 billion, up 54% from a year earlier, with Tether (USDT) and USD Coin (USDC) together accounting for 90% of the market.

Active stablecoin wallets have also increased over 50% in one year, according to a report last month from onchain analysis platforms Artemis and Dune.

Investment banking giant Citigroup predicted in an April 23 report that a combination of growing regulatory support and adoption by financial institutions has set the stage for the stablecoin market to reach as high as $3.7 trillion by 2030.

Mastercard launches another crypto card

Mastercard said its partnership with OKX for the so-called OKX Card aims to give crypto users “easy access to their funds” and integrate stablecoins into daily transactions.

OKX marketing chief Haider Rafique said the exchange’s venture with Mastercard is “a significant step toward integrating stablecoins into daily transactions and creating richer experiences.”

Related: Mastercard tokenized 30% of its transactions in 2024

Crypto wallet maker MetaMask also partnered with Mastercard on April 28 to launch a crypto payments card that lets users spend self-custodied funds, using smart contracts to execute real-world transactions with processing speeds under five seconds.

Mastercard has also worked with crypto exchanges like Kraken, Binance, and Crypto.com to allow crypto-enabled debit cards.

Magazine: Bitcoin payments are being undermined by centralized stablecoins
