Crypto VC giant targets $1B for new funds, expects oversubscription — Report

Venture capital firm Haun Ventures is reportedly looking to raise $1 billion for two new crypto-related investment funds within the next three months.

If successful, $500 million will be allocated to early-stage crypto investments, while the remaining $500 million will go toward late-stage crypto investments, people familiar with the matter told Fortune Crypto on March 21.

Different market conditions from 2022 led to lowered expectations

The VC firm, founded in 2022 by former Coinbase board member and federal prosecutor Katie Haun, reportedly did not aim to match the $1.5 billion it raised in its highly praised debut funding round that year, citing different market conditions as the reason for the lower target.

However, Haun reportedly expects the two new funds to be “oversubscribed.” In March 2022, the firm secured $1.5 billion in its first funding round, shortly after its launch, and recruited former executives from Airbnb, Coinbase and Google tech incubator Jigsaw.

The firm’s latest fundraising round is set to close in June and is expected to be one of the largest crypto raises of the past two years. Venture capital firm Paradigm and digital asset investment manager Pantera Capital both sought similar amounts in 2024.


137 crypto companies raised a combined $1.11 billion in funding in February 2025. Source: The TIE

In June 2024, Paradigm closed an $850 million investment fund, while in April of that year, Pantera Capital sought to raise over $1 billion for a new blockchain-focused fund.

VCs predict that stablecoins will continue to be a focus in 2025

More recently, Haun Ventures participated in crypto asset management firm Bitwise’s $70 million funding round alongside investors such as Electric Capital, MassMutual, MIT Investment Management Company, and Highland Capital.

While the specific focus of Haun’s upcoming crypto funds is not publicly known yet, other venture capitalists have recently predicted that stablecoin interest will continue into 2025.

Related: Venture capital firms invest $400M in TON blockchain

Deng Chao, CEO of institutional asset manager HashKey Capital, recently told Cointelegraph that stablecoins were the strongest proven use case for crypto in 2024.

Meanwhile, market analyst Infinity Hedge predicted that crypto VC investment in 2025 would surpass last year’s levels but wouldn’t approach the peak recorded during the 2021 bull market. VC crypto funding in 2021 reached $33.8 billion, while in 2024 it reached $13.6 billion.

Cointelegraph reached out to Haun Ventures but did not receive a response by the time of publication.

Magazine: Dummies guide to native rollups: L2s as secure as Ethereum itself

OpenAI says over 400 million people use ChatGPT every week. But how does interacting with it affect us? Does it make us more or less lonely? These are some of the questions OpenAI set out to investigate, in partnership with the MIT Media Lab, in a pair of new studies.

They found that only a small subset of users engage emotionally with ChatGPT. This isn’t surprising given that ChatGPT isn’t marketed as an AI companion app like Replika or Character.AI, says Kate Devlin, a professor of AI and society at King’s College London, who did not work on the project. “ChatGPT has been set up as a productivity tool,” she says. “But we know that people are using it like a companion app anyway.” In fact, the people who do use it that way are likely to interact with it for extended periods of time, some of them averaging about half an hour a day. 

“The authors are very clear about what the limitations of these studies are, but it’s exciting to see they’ve done this,” Devlin says. “To have access to this level of data is incredible.” 

The researchers found some intriguing differences between how men and women respond to using ChatGPT. After using the chatbot for four weeks, female study participants were slightly less likely to socialize with people than their male counterparts who did the same. Meanwhile, participants who interacted with ChatGPT’s voice mode in a gender that was not their own reported significantly higher levels of loneliness and more emotional dependency on the chatbot at the end of the experiment. OpenAI plans to submit both studies to peer-reviewed journals.

Chatbots powered by large language models are still a nascent technology, and it’s difficult to study how they affect us emotionally. A lot of existing research in the area—including some of the new work by OpenAI and MIT—relies upon self-reported data, which may not always be accurate or reliable. That said, this latest research does chime with what scientists so far have discovered about how emotionally compelling chatbot conversations can be. For example, in 2023 MIT Media Lab researchers found that chatbots tend to mirror the emotional sentiment of a user’s messages, suggesting a kind of feedback loop where the happier you act, the happier the AI seems, or on the flipside, if you act sadder, so does the AI.  
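To make that idea concrete, here is a minimal, purely hypothetical sketch of how such mirroring could be quantified; the toy sentiment scorer and example messages below are invented for illustration and are not the researchers’ actual tooling:

```python
# Hypothetical sketch: does reply sentiment track user sentiment across message pairs?
# The scorer and messages are invented; this is not the MIT Media Lab pipeline.
from statistics import correlation  # Python 3.10+

def sentiment(text: str) -> float:
    """Toy scorer: count of positive words minus count of negative words."""
    positive = {"great", "happy", "love", "thanks"}
    negative = {"sad", "bad", "hate", "lonely"}
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in positive for w in words) - sum(w in negative for w in words)

# (user message, chatbot reply) pairs.
pairs = [
    ("I feel great today, thanks!", "That's wonderful, happy to hear it!"),
    ("I'm so sad and lonely.", "I'm sorry you feel bad, that sounds hard."),
    ("Just a normal day.", "Got it, let me know how I can help."),
]

user_scores = [sentiment(user) for user, _ in pairs]
reply_scores = [sentiment(reply) for _, reply in pairs]

# A positive correlation is consistent with the "mirroring" pattern described above.
print(correlation(user_scores, reply_scores))
```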

OpenAI and the MIT Media Lab used a two-pronged method. First, they collected and analyzed real-world data from close to 40 million interactions with ChatGPT. Then they asked the 4,076 users who’d had those interactions how those exchanges made them feel. Next, the Media Lab recruited almost 1,000 people to take part in a four-week trial. This was more in-depth, examining how participants interacted with ChatGPT for a minimum of five minutes each day. At the end of the experiment, participants completed a questionnaire to measure their perceptions of the chatbot, their subjective feelings of loneliness, their levels of social engagement, their emotional dependence on the bot, and their sense of whether their use of the bot was problematic. They found that participants who trusted and “bonded” with ChatGPT more were likelier than others to be lonely, and to rely on it more.
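As a rough, purely illustrative sketch of that last kind of analysis, one could correlate self-reported “bonding” scores with loneliness and dependence scores; the measures and numbers below are invented and are not data from the studies:

```python
# Hypothetical sketch: correlating end-of-trial questionnaire scores.
# The scores below are invented, not data from the OpenAI/MIT Media Lab studies.
from statistics import correlation  # Python 3.10+

# One entry per participant (e.g., 1-7 Likert ratings).
bonding    = [6, 2, 5, 1, 4, 7, 3]
loneliness = [5, 2, 4, 1, 3, 6, 2]
dependence = [6, 1, 5, 2, 3, 7, 2]

# Positive correlations would echo the reported pattern: participants who
# "bonded" with the chatbot more also reported more loneliness and reliance on it.
print(correlation(bonding, loneliness))
print(correlation(bonding, dependence))
```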

This work is an important first step toward greater insight into ChatGPT’s impact on us, which could help AI platforms enable safer and healthier interactions, says Jason Phang, an OpenAI safety researcher who worked on the project.

“A lot of what we’re doing here is preliminary, but we’re trying to start the conversation with the field about the kinds of things that we can start to measure, and to start thinking about what the long-term impact on users is,” he says.

Although the research is welcome, it’s still difficult to identify when a human is—and isn’t—engaging with technology on an emotional level, says Devlin. She says the study participants may have been experiencing emotions that weren’t recorded by the researchers.

“In terms of what the teams set out to measure, people might not necessarily have been using ChatGPT in an emotional way, but you can’t divorce being a human from your interactions [with technology],” she says. “We use these emotion classifiers that we have created to look for certain things—but what that actually means to someone’s life is really hard to extrapolate.”

Correction: An earlier version of this article misstated that study participants set the gender of ChatGPT’s voice, and that OpenAI did not plan to publish either study. Study participants were assigned the voice mode gender, and OpenAI plans to submit both studies to peer-reviewed journals. The article has since been updated.
