Ice Lounge Media

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

IBM aims to build the world’s first large-scale, error-corrected quantum computer by 2028

The news: IBM announced detailed plans today to build an error-corrected quantum computer with significantly more computational capability than existing machines by 2028. It hopes to make the computer available to users via the cloud by 2029.

What is it? The proposed machine, named Starling, will consist of a network of modules, each of which contains a set of chips, housed within a new data center in Poughkeepsie, New York.

Why it matters: IBM claims Starling will be a leap forward in quantum computing. In particular, the company aims for it to be the first large-scale machine to implement error correction. If Starling achieves this, IBM will have solved arguably the biggest technical hurdle facing the industry today. Read the full story.

—Sophia Chen

The Pentagon is gutting the team that tests AI and weapons systems

The Trump administration’s chainsaw approach to federal spending lives on, even as Elon Musk turns on the president. 

As part of a string of moves, Secretary of Defense Pete Hegseth has cut the size of the Office of the Director of Operational Test and Evaluation in half. The group was established in the 1980s after criticisms that the Pentagon was fielding weapons and systems that didn’t perform as safely or effectively as advertised. Hegseth is reducing the agency’s staff to about 45, down from 94, and firing and replacing its director. 

It is a significant overhaul of a department that in 40 years has never before been placed so squarely on the chopping block. Here’s how defense tech companies stand to gain (and the rest of us may stand to lose).

—James O’Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Conspiracy theories are spreading about the LA protests
Misleading photos and videos are circulating on social media. (NYT $)
+ Donald Trump has vowed to send 700 Marines to the city. (The Guardian)
+ Waymo has paused its service in downtown LA after its vehicles were set alight. (LA Times $)

2 RFK Jr has fired an entire CDC panel of vaccine experts
The anti-vaccine advocate accused them of conflicts of interest. (Ars Technica)
+ He claims that their replacements will “exercise independent judgment.” (WSJ $)
+ RFK Jr is interested in using a toxic bleach solution to treat ailments. (Wired $)
+ How measuring vaccine hesitancy could help health professionals tackle it. (MIT Technology Review)

3 A new covid variant is spreading across Europe and the US
While it’s considered low risk, ‘Nimbus’ appears to be more infectious. (Wired $)

4 White House security cautioned against installing Starlink internet
But Elon Musk’s team ignored them and fitted the service in the complex anyway. (WP $)
+ Trump isn’t planning on getting rid of it, though. (Bloomberg $)

5 Developers are underwhelmed by Apple’s AI efforts
Its WWDC announcements haven’t been met with much enthusiasm. (WSJ $)
+ The company is opening up its AI models to developers for the first time. (FT $)
+ Where’s the overhauled, AI-powered Siri we were promised? (TechCrunch)

6 Meta is assembling a new AI research lab
Researchers will be tasked with beating its rivals to achieve superintelligence. (Bloomberg $)
+ There’s no doubt that Meta is feeling the heat right now. (The Information $)

7 Vulnerable minors are increasingly becoming radicalized online
The sad case of Rhianan Rudd illustrates the ease of access to extremist material. (FT $)

8 Our nerves may play a central role in how cancer spreads
Researchers believe they may help tumors to grow. (New Scientist $)
+ Why it’s so hard to use AI to diagnose cancer. (MIT Technology Review)

9 An end is in sight for the video game actors’ strike
Major video game companies have reached a tentative deal with SAG-AFTRA. (Variety $)
+ How Meta and AI companies recruited striking actors to train AI. (MIT Technology Review)

10 The UK is planning a robotaxi trial next year
It’s many years behind other countries. (FT $)

Quote of the day

“At the end of the day, what they need to do is deliver on what they presented a year ago.”

—Bob O’Donnell, chief analyst at Technalysis Research, tells Reuters where Apple went wrong with its lacklustre WWDC announcements.

One more thing

The great AI consciousness conundrum

AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences that philosophers, cognitive scientists, and engineers alike are currently grappling with.

Fail to identify a conscious AI, and you might unintentionally subjugate a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code.

Over the past few decades, a small research community has doggedly attacked the question of what consciousness is and how it works. The effort has yielded real progress. And now, with the rapid advance of AI technology, these insights could offer our only guide to the untested, morally fraught waters of artificial consciousness. Read the full story.

—Grace Huckins

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Rest in power Sly Stone, truly one of the funky greats.
+ Did you know there’s an Olympics for scaffolding? Well, you do now.
+ Just one man is responsible for some of the greatest film artwork of all time—Drew Struzan.
+ That’s one dramatic pizza maker.

Read more

IBM announced detailed plans today to build an error-corrected quantum computer with significantly more computational capability than existing machines by 2028. It hopes to make the computer available to users via the cloud by 2029. 

The proposed machine, named Starling, will consist of a network of modules, each of which contains a set of chips, housed within a new data center in Poughkeepsie, New York. “We’ve already started building the space,” says Jay Gambetta, vice president of IBM’s quantum initiative.

IBM claims Starling will be a leap forward in quantum computing. In particular, the company aims for it to be the first large-scale machine to implement error correction. If Starling achieves this, IBM will have solved arguably the biggest technical hurdle facing the industry today, ahead of competitors including Google, Amazon Web Services, and smaller startups such as Boston-based QuEra and PsiQuantum of Palo Alto, California. 

IBM, along with the rest of the industry, has years of work ahead. But Gambetta thinks it has an edge because it already has all the building blocks needed to bring error correction to a large-scale machine, the result of improvements in everything from algorithm development to chip packaging. “We’ve cracked the code for quantum error correction, and now we’ve moved from science to engineering,” he says. 

Correcting errors in a quantum computer has been an engineering challenge, owing to the unique way the machines crunch numbers. Whereas classical computers encode information in the form of bits, or binary 1 and 0, quantum computers instead use qubits, which can represent “superpositions” of both values at once. IBM builds qubits made of tiny superconducting circuits, kept near absolute zero, in an interconnected layout on chips. Other companies have built qubits out of other materials, including neutral atoms, ions, and photons.
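For a concrete sense of what a qubit in superposition looks like in practice, here is a minimal sketch using Qiskit, IBM’s open-source quantum SDK (the toolkit isn’t mentioned in this article, so treat the example as illustrative only): a single qubit is put into an equal superposition and then measured, and over many runs the outcomes split roughly 50/50 between 0 and 1.

```python
# Minimal Qiskit sketch (assumes `pip install qiskit`), for illustration only.
# A Hadamard gate puts one qubit into an equal superposition of 0 and 1;
# measurement then collapses it to a single classical bit.
from qiskit import QuantumCircuit

circuit = QuantumCircuit(1, 1)   # one qubit, one classical bit for the result
circuit.h(0)                     # Hadamard gate: creates the superposition
circuit.measure(0, 0)            # measurement collapses it to 0 or 1
print(circuit.draw())            # show the circuit diagram as text
```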

Quantum computers sometimes commit errors, such as when the hardware operates on one qubit but accidentally also alters a neighboring qubit that should not be involved in the computation. These errors add up over time. Without error correction, quantum computers cannot accurately perform the complex algorithms that are expected to be the source of their scientific or commercial value, such as extremely precise chemistry simulations for discovering new materials and pharmaceutical drugs. 

But error correction requires significant hardware overhead. Instead of encoding a single unit of information in a single “physical” qubit, error correction algorithms encode a unit of information in a constellation of physical qubits, referred to collectively as a “logical qubit.”

Currently, quantum computing researchers are competing to develop the best error correction scheme. Google’s surface code algorithm, while effective at correcting errors, requires on the order of 100 qubits to store a single logical qubit in memory. AWS’s Ocelot quantum computer uses a more efficient error correction scheme that requires nine physical qubits per logical qubit in memory. (The overhead is higher for qubits performing computations than for those storing data.) IBM’s error correction algorithm, known as a low-density parity check code, will make it possible to use 12 physical qubits per logical qubit in memory, a ratio comparable to AWS’s. 
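Those ratios translate into very different hardware bills. Here is a rough back-of-envelope sketch (mine, not IBM’s) that multiplies each quoted physical-per-logical ratio by the 200 logical qubits IBM is targeting for Starling; it counts memory overhead only, since, as noted above, qubits doing computation need more.

```python
# Back-of-envelope estimate of error-correction memory overhead.
# The physical-per-logical ratios are the ones quoted above; 200 is IBM's
# stated logical-qubit target for Starling. Real machines need extra physical
# qubits for computation and routing, so these figures are rough lower bounds.

ratios = {
    "Surface code (Google)": 100,
    "Ocelot (AWS)": 9,
    "Low-density parity check code (IBM)": 12,
}

logical_qubits = 200  # Starling's target

for name, physical_per_logical in ratios.items():
    total = logical_qubits * physical_per_logical
    print(f"{name}: ~{total:,} physical qubits just to hold {logical_qubits} logical qubits")
```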

One distinguishing characteristic of Starling’s design will be its anticipated ability to diagnose errors, known as decoding, in real time. Decoding involves determining whether a measured signal from the quantum computer corresponds to an error. IBM has developed a decoding algorithm that can be quickly executed by a type of conventional chip known as an FPGA. This work bolsters the “credibility” of IBM’s error correction method, says Neil Gillespie of the UK-based quantum computing startup Riverlane. 

However, other error correction schemes and hardware designs aren’t out of the running yet. “It’s still not clear what the winning architecture is going to be,” says Gillespie. 
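To make the idea of decoding concrete, here is a toy example built on the classic three-bit repetition code, a textbook scheme far simpler than IBM’s low-density parity check code but illustrating the same task: turning measured parity checks (the “syndrome”) into a guess about which bit flipped, fast enough to correct it.

```python
# Toy decoder for a 3-bit repetition code (a textbook example, not IBM's code).
# Two parity checks compare neighboring bits; the resulting "syndrome" points
# to the single bit most likely to have flipped.
SYNDROME_TABLE = {
    (0, 0): None,  # both checks pass: no error detected
    (1, 0): 0,     # check between bits 0 and 1 fails: bit 0 likely flipped
    (1, 1): 1,     # both checks fail: the middle bit likely flipped
    (0, 1): 2,     # check between bits 1 and 2 fails: bit 2 likely flipped
}

def decode(bits):
    """Compute the syndrome, look up the likely error, and correct it in place."""
    syndrome = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    flipped = SYNDROME_TABLE[syndrome]
    if flipped is not None:
        bits[flipped] ^= 1
    return bits

print(decode([1, 0, 0]))  # a single flipped bit is restored to [0, 0, 0]
```

A real-time decoder of the kind IBM describes has to make this sort of lookup-and-correct decision continuously as the machine runs, which is why executing it quickly on a conventional chip such as an FPGA matters.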

IBM intends Starling to be able to perform computational tasks beyond the capability of classical computers. Starling will have 200 logical qubits, which will be constructed using the company’s chips. It should be able to perform 100 million logical operations consecutively with accuracy; existing quantum computers can do so for only a few thousand. 

The system will demonstrate error correction at a much larger scale than anything done before, claims Gambetta. Previous error correction demonstrations, such as those done by Google and Amazon, involved a single logical qubit built from a single chip. Gambetta calls them “gadget experiments,” saying, “They’re small-scale.” 

Still, it’s unclear whether Starling will be able to solve practical problems. Some experts think that you need a billion error-corrected logical operations to execute any useful algorithm. Starling represents “an interesting stepping-stone regime,” says Wolfgang Pfaff, a physicist at the University of Illinois Urbana-Champaign. “But it’s unlikely that this will generate economic value.” (Pfaff, who studies quantum computing hardware, has received research funding from IBM but is not involved with Starling.) 

The timeline for Starling looks feasible, according to Pfaff. The design is “based in experimental and engineering reality,” he says. “They’ve come up with something that looks pretty compelling.” But building a quantum computer is hard, and it’s possible that IBM will encounter delays due to unforeseen technical complications. “This is the first time someone’s doing this,” he says of making a large-scale error-corrected quantum computer.

IBM’s road map involves first building smaller machines before Starling. This year, it plans to demonstrate that error-corrected information can be stored robustly in a chip called Loon. Next year the company will build Kookaburra, a module that can both store information and perform computations. By the end of 2027, it plans to connect two Kookaburra-type modules together into a larger quantum computer, Cockatoo. After demonstrating that successfully, the next step is to scale up and connect around 100 modules to create Starling.

This strategy, says Pfaff, reflects the industry’s recent embrace of “modularity” when scaling up quantum computers—networking multiple modules together to create a larger quantum computer rather than laying out qubits on a single chip, as researchers did in earlier designs. 

IBM is also looking beyond 2029. After Starling, it plans to build another machine, Blue Jay. (“I like birds,” says Gambetta.) Blue Jay will contain 2,000 logical qubits and is expected to be capable of a billion logical operations.

Read more

The Trump administration’s chainsaw approach to federal spending lives on, even as Elon Musk turns on the president. On May 28, Secretary of Defense Pete Hegseth announced he’d be gutting a key office at the Department of Defense responsible for testing and evaluating the safety of weapons and AI systems.

As part of a string of moves aimed at “reducing bloated bureaucracy and wasteful spending in favor of increased lethality,” Hegseth cut the size of the Office of the Director of Operational Test and Evaluation in half. The group was established in the 1980s—following orders from Congress—after criticisms that the Pentagon was fielding weapons and systems that didn’t perform as safely or effectively as advertised. Hegseth is reducing the agency’s staff to about 45, down from 94, and firing and replacing its director. He gave the office just seven days to implement the changes.

It is a significant overhaul of a department that in 40 years has never before been placed so squarely on the chopping block. Here’s how today’s defense tech companies, which have fostered close connections to the Trump administration, stand to gain, and why safety testing might suffer as a result. 

The Operational Test and Evaluation office is “the last gate before a technology gets to the field,” says Missy Cummings, a former fighter pilot for the US Navy who is now a professor of engineering and computer science at George Mason University. Though the military can do small experiments with new systems without running them by the office, it has to test anything that gets fielded at scale.

“In a bipartisan way—up until now—everybody has seen it’s working to help reduce waste, fraud, and abuse,” she says. That’s because it provides an independent check on companies’ and contractors’ claims about how well their technology works. It also aims to expose the systems to more rigorous safety testing.

The gutting comes at a particularly pivotal time for AI and military adoption: The Pentagon is experimenting with putting AI into everything, mainstream companies like OpenAI are now more comfortable working with the military, and defense giants like Anduril are winning big contracts to launch AI systems (last Thursday, Anduril announced a whopping $2.5 billion funding round, doubling its valuation to over $30 billion). 

Hegseth claims his cuts will “make testing and fielding weapons more efficient,” saving $300 million. But Cummings is concerned that they are paving a way to faster adoption while increasing the chances that new systems won’t be as safe or effective as promised. “The firings in DOTE send a clear message that all perceived obstacles for companies favored by Trump are going to be removed,” she says.

Anduril and Anthropic, which have launched AI applications for military use, did not respond to my questions about whether they pushed for or approve of the cuts. A representative for OpenAI said that the company was not involved in lobbying for the restructuring. 

“The cuts make me nervous,” says Mark Cancian, a senior advisor at the Center for Strategic and International Studies who previously worked at the Pentagon in collaboration with the testing office. “It’s not that we’ll go from effective to ineffective, but you might not catch some of the problems that would surface in combat without this testing step.”

It’s hard to say precisely how the cuts will affect the office’s ability to test systems, and Cancian admits that those responsible for getting new technologies out onto the battlefield sometimes complain that it can really slow down adoption. But still, he says, the office frequently uncovers errors that weren’t previously caught.

It’s an especially important step, Cancian says, whenever the military is adopting a new type of technology like generative AI. Systems that might perform well in a lab setting almost always encounter new challenges in more realistic scenarios, and the Operational Test and Evaluation group is where that rubber meets the road.

So what to make of all this? It’s true that the military was experimenting with artificial intelligence long before the current AI boom, particularly with computer vision for drone feeds, and defense tech companies have been winning big contracts for this push across multiple presidential administrations. But this era is different. The Pentagon is announcing ambitious pilots specifically for large language models, a relatively nascent technology that by its very nature produces hallucinations and errors, and it appears eager to put much-hyped AI into everything. The key independent group dedicated to evaluating the accuracy of these new and complex systems now only has half the staff to do it. I’m not sure that’s a win for anyone.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
