The U.S. Food and Drug Administration (FDA) has granted an Emergency Use Authorization (EUA) for the COVID-19 vaccine developed by Pfizer and its partner BioNTech, as first reported by the New York Times on Friday night and later corroborated by The Wall Street Journal. The EUA follows the recommendation of an independent panel of experts convened by the FDA to review Pfizer’s application, which voted earlier this week in favor of authorization.

Following this authorization, shipments of the vaccine are expected to begin immediately, with 2.9 million doses in the initial order. The most vulnerable groups, including healthcare workers and senior citizens in long-term care facilities, are expected to begin receiving doses within just a few days now that the EUA has been granted.

This authorization isn’t a full approval by the U.S. therapeutics regulator, but it is an emergency measure that still requires a comprehensive review of the available information supplied by Pfizer from its Phase 3 clinical trial, which covered a group of 44,000 volunteer participants. Pfizer found that its vaccine, an mRNA-based treatment, was 95% effective in the final analysis of trial data to date – and also found that safety data indicated no significant safety issues in patients who received the vaccine.

On top of the initial 2.9 million dose order, the U.S. intends to distribute around 25 million doses by the end of 2020, which could translate to far fewer people actually vaccinated, since the Pfizer course requires two inoculations for maximum efficacy. Most Americans shouldn’t expect the vaccine to be available until at least late Q1 or Q2 2021, given the pace of Pfizer’s production and the U.S. order volume.

Still, this is a promising first step, and a monumental achievement in terms of vaccine development turnaround time: it’s been roughly eight months since work began on the Pfizer vaccine candidate. Moderna has also submitted an EUA application for its vaccine candidate, which is likewise an mRNA treatment (one that provides instructions to a person’s cells to produce effective countermeasures to the virus). That authorization could follow shortly, meaning two vaccines might be available under EUA within the U.S. before the end of the year.

Read more

Hyundai takes a controlling stake in an iconic robotics company, Twitter acquires a screen-sharing startup and we round up some security-themed gift ideas. This is your Daily Crunch for December 11, 2020.

The big story: Hyundai acquires 80% stake in Boston Dynamics

Boston Dynamics is behind a number of impressive robots, including the dog-like quadruped Spot. Over the past decade, it’s changed ownership several times, with Google acquiring it in 2013, then selling it to Japanese investment giant SoftBank in 2017.

After today’s deal, which values Boston Dynamics at $1.1 billion and is subject to regulatory approval, SoftBank will retain a 20% stake.

“Boston Dynamics will benefit substantially from new capital, technology, affiliated customers, and Hyundai Motor Group’s global market reach enhancing commercialization opportunity for its robot products,” Hyundai said in a press release.

The tech giants

Twitter acquires screen-sharing social app Squad — The entire Squad team is joining Twitter, while the Squad app will be shut down tomorrow.

Europe urged to block Google-Fitbit ahead of major digital policy overhaul — Shoshana Zuboff, the Harvard professor who wrote the defining book on surveillance capitalism, has become the latest voice raised against the $2.1 billion deal.

Twitter app code indicates that live video broadcasting app Periscope may get shut down — If Periscope does get shut down, it would be the end of a five-year run.

Startups, funding and venture capital

Gorillas, the on-demand grocery delivery startup taking Berlin by storm, has raised $44M Series A — Gorillas delivers groceries within an average of 10 minutes.

Sweden’s Tink raises $103M as its open banking platform grows to 3,400 banks and 250M customers — Tink aggregates a number of banks and financial services by way of an API.

Benchmark fills out its, yes, bench, with Miles Grimshaw — From his post as a general partner with New York-based Thrive, Grimshaw sourced deals in Lattice, Mapbox, Benchling and Airtable.

Advice and analysis from Extra Crunch

Cloud-gaming platforms were 2020’s most overhyped trend — The future of the technology is bright, but much less sexy.

General Catalyst’s Katherine Boyle and Peter Boyce are looking for ‘obsessive’ founders — We sat down with Boyle and Boyce to discuss what they look for in founders, which sectors they’re most excited about and how business has changed in the wake of the COVID-19 pandemic.

What to expect while fundraising in 2021 — DocSend CEO Russ Heddleston peers into a post-pandemic future.

(Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

Gift Guide: 9 security and privacy gifts to keep your friends and family safe — It’s a good time to evaluate how you’re keeping your data safe, and to help others in your life do the same.

Disney+ has plans for 10 Marvel shows and 10 Star Wars shows in the next few years — The company announced an ambitious slate of streaming originals.

Give the gift of Extra Crunch for 25% off — Speaking of Extra Crunch, TechCrunch readers can send an annual membership as a gift to a friend, family member or co-worker for 25% off.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Read more

Austinites, watch out; another tech company is headed into town.

Just days after Tesla CEO Elon Musk revealed during an interview that he has moved to Texas, and less than two weeks after HP Enterprise, a spin-out of the iconic Silicon Valley company Hewlett-Packard, announced that it is separately moving to Texas, yet another of the Bay Area’s best-known brands — Oracle — is pulling up stakes and heading east to Texas, too.

The news was first reported by Bloomberg. Oracle confirmed the move in a statement sent to TechCrunch, saying that along with a “more flexible employee work location policy,” the company has changed its corporate headquarters from Redwood Shores, California, to Austin. “We believe these moves best position Oracle for growth and provide our personnel with more flexibility about where and how they work.”

A spokeswoman declined to answer more questions related to the move, but Oracle says that “many” of its employees can choose their office location, as well as continue to work from home part time or full time.

HPE and Oracle aren’t the first major tech companies to plot such moves in recent times. Late last year, the brokerage giant Charles Schwab said it was leaving the Bay Area for Texas as it was announcing its $26 billion merger with TD Ameritrade, though it chose Dallas, about 200 miles away from Austin.

Tech giants Apple and Google have also been expanding their presence in the state. Apple announced in 2018 that it was building a $1 billion campus in Austin. Meanwhile, Google, which opened its first Austin office 13 years ago, said last year that it was beginning to lease far more space in the city.

Taxes, a more affordable cost of living for employees, a lower cost of doing business and less competition for talent are among the top drivers for the companies’ moves, though there is also a growing sense that culture is a factor, as well.

While California is led by Democrats, Texas is led by Republicans, and as the divide between the two parties grows, so does the divide between their respective supporters, with even self-described centrists saying they feel alienated.

Oracle co-founder and Chairman Larry Ellison has notably been one of few top tech execs to openly support President Donald Trump.

Meanwhile, Joe Lonsdale, a co-founder of the venture firm 8VC and Palantir Technologies (which itself recently headed to Denver from Palo Alto), recently explained his own move this year to Texas from California in the WSJ, writing: “Politics in the state is in many ways closed off to different ideas. We grew weary of California’s intolerant far left, which would rather demonize opponents than discuss honest differences of opinion.”

This fall, in conversation with reporter Kara Swisher, Musk suggested he was also outside of Democratic circles, describing his political views as “socially very liberal and then economically right of center, maybe, or center? I don’t know. Obviously I’m not a communist.”

While Austin is becoming a go-to spot for many of California’s wealthiest contrarians, others are headed to Florida. Coincidentally or not, Florida is another Republican-controlled state that, like Texas, does not collect state income tax.

Keith Rabois, a Founders Fund investor who recently left the Bay Area for Miami, contributed to the NeverTrump PAC in 2016 and said his first choice for U.S. president this year was Democratic contender Pete Buttigieg. But he has also worried openly about democratic socialism, which the GOP has long accused Democrats of promoting.

Venture capitalist David Blumberg, a Trump supporter, is also headed to Miami, he announced recently. Blumberg said he had had it with “poor governance at the local level in San Francisco and statewide in California.” Yet he seemed to have grown frustrated with the Bay Area some time ago.

As Blumberg told Vox last year, he believes that tech platforms are biased against conservatives. He also told the outlet that the Valley was home to many more Trump supporters than might be imagined, and that “we generally keep our heads down” because “people who go out publicly for Republicans and for Trump can get business banned or get blackballed.”

Impetus notwithstanding, a longer-term question is whether these moves — particularly for the individuals and smaller outfits that are relocating — will prove permanent.

At least one tech exec, Twitter and Medium co-founder Ev Williams, has returned to the Bay Area after moving away — in his case, to New York. Williams, who was largely “looking for a change,” made the move with his family late last year after spending 20 years in the Bay Area, he recently told TechCrunch. Then COVID struck.

“I had never lived in New York and thought, ‘Why not go? Now seems like a good time.’ Turns out I was wrong. [Laughs.] It was a very bad time to move to New York.”

Read more

You don’t have to buy into 5G conspiracy theories to think that you could do with a little less radiation in your life. One way of blocking radiation is a Faraday cage, but this is usually a metal mesh of some kind, making everyday use difficult. Researchers at Drexel University have managed to create a Faraday fabric by infusing ordinary cotton with a compound called MXene — meaning your tinfoil hat is about to get a lot comfier.

Faraday cages work because radiation in radio frequencies is blocked by certain metals, but because of its wavelength, the metal doesn’t even have to be solid — it can be a rigid cage or a flexible mesh. Many facilities are lined with materials like this to prevent outside radiation from interfering with sensitive measurements, but recently companies like Silent Pocket have integrated meshes into bags and cases that totally isolate devices from incoming signals.

Let’s be frank here and say that this is definitely paranoia-adjacent. RF radiation is not harmful in the doses and frequencies we receive it at, and the FCC makes sure no device exceeds certain thresholds. But there’s also the possibility that your phone or laptop is naively connecting to public Wi-Fi, getting its MAC address skimmed by other devices, and otherwise interacting with the environment in ways you might not like. And honestly… with the number of devices emitting radiation right now, who wouldn’t mind lowering their dose a little, just to be extra sure?

That may be much easier to do in the near future, as Yury Gogotsi and his team at the Drexel Nanomaterials Institute, of which he is director, have come up with a way to coat ordinary textile fibers in a metallic compound that makes them effective Faraday cages — but also flexible, durable and washable.

The material, which they call MXene (really a category of compounds rather than a single one), is useful in lots of ways, and has been the subject of dozens of papers by the team — this is just the most recent application.

“We have known for some time that MXene has the ability to block electromagnetic interference better than other materials, but this discovery shows that it can effectively adhere to fabrics and maintain its unique shielding capabilities,” said Gogotsi in a news release. You can see the fabric in action on video here.

Image Credits: Drexel University

MXenes are conductive metal-carbon compounds that can be fabricated into all sorts of forms: solid, liquid, even sprays. In this case it’s a liquid — a solution of tiny MXene flakes that adhere to the fabric quite easily and produce a Faraday effect, blocking 99.9% of RF radiation in tests. Even after sitting around for a couple of years (perhaps forgotten in a lab cupboard), samples kept 90% of their effectiveness, and the treated fabric can also be washed and worn safely.
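For context, shielding performance is usually quoted in decibels rather than percentages. Here is a minimal sketch of the conversion — only the 99.9% figure comes from the tests above; the function name and the dB framing are standard RF arithmetic, not details from the Drexel paper:

```python
import math

def shielding_effectiveness_db(blocked_fraction: float) -> float:
    """Shielding effectiveness in dB: SE = -10 * log10(fraction of
    RF power that still gets through the material)."""
    transmitted = 1.0 - blocked_fraction
    return -10.0 * math.log10(transmitted)

# 99.9% of RF power blocked, as reported in the tests above
print(round(shielding_effectiveness_db(0.999)))  # -> 30 dB
```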

You wouldn’t necessarily want to wear a whole suit of the stuff, but this would make it easier for clothing to include an RF-blocking pocket in a jacket, jeans or laptop bag that doesn’t feel out of place with the other materials. A hat (or underwear) with a layer of this fabric would be a popular item among conspiracy theorists, of course.

It’s still a ways from showing up on the rack, but Gogotsi was optimistic about its prospects for commercialization, noting that Drexel has multiple patents on the material and its uses. Other ways of infusing fabric with MXenes could lead to clothes that generate and store energy as well.

You can read more about this particular application of MXenes in the journal Carbon.

Read more

General Catalyst has made early bets on some of the biggest companies in tech today, including Airbnb, Lemonade and Warby Parker.

We sat down with Katherine Boyle and Peter Boyce, who co-lead the firm’s seed-stage investments, to discuss what they look for in founders, which sectors they’re most excited about and how business has changed in the wake of the COVID-19 pandemic.

This conversation is part of our broader Extra Crunch Live series, where we sit down with VCs and founders to discuss startup core competencies and get advice. We’ve spoken to folks like Aileen Lee, Mark Cuban, Roelof Botha, Charles Hudson and many others. You can browse the full library of episodes here.

Check out our full conversation with Boyce and Boyle in the YouTube video below, or skim the text for the highlights.

Which personality traits are most important in founders

Katherine Boyle: I look for what I would call this obsessive trait, where they are learning more about the regulatory complications, where they are constantly trying to figure out how to solve a problem.

I’d say that the common theme among the founders that I support are that they have this sort of obsessive gene or personality, where they will go deeper and deeper and deeper. When we invest in these companies, it becomes very clear that they often have sort of a contrarian view of the industry. Maybe they are not industry-native. They come at it from a different perspective of problem solving. They’ve had to defend that thesis for a very, very long time in front of a variety of different customers and different people. In some ways, that makes them much stronger in terms of the way they approach problems.

Peter Boyce: I think the first would be being magnetic for talent. It ends up influencing the speed of learning and development. Really incredible founding teams that can be magnetic for talent and learning just kind of spirals out of control in really good ways over time. I really look for the speed and the sources of learning. And can folks be really intentional? Can they get the right set of advisors and teammates around them?

The second would be the personal connection to the problem space. It’s like there’s this kind of deep-seated source of energy and fuel that actually isn’t going to run out. Katherine and I’ve been lucky to work across a number of different particular thematic areas, but the thing they have in common is just this personal connection to how and why their business needs to exist. Because I just think that that fuel doesn’t run out, you know what I mean? Like, that’s renewable.

On fundraising and building trust remotely

Boyle: If you’re someone who’s comfortable presenting on Zoom, making connections on Zoom, or using Signal and using Twitter and being very online, then I 100% think that you can make investments, build community and build connections through digital worlds and digital platforms. If you really like that in-person connectivity, then you might consider staying in a tech hub, or you might consider sort of these distanced walks until things go back to normal.

Read more

Deep learning is an inefficient energy hog. It requires massive amounts of data and abundant computational resources, which explodes its electricity consumption. In the last few years, the overall research trend has made the problem worse. Models of gargantuan proportions—trained on billions of data points for several days—are in vogue, and likely won’t be going away any time soon.

Some researchers have rushed to find new directions, like algorithms that can train on less data, or hardware that can run those algorithms faster. Now IBM researchers are proposing a different one. Their idea would reduce the number of bits, or 1s and 0s, needed to represent the data—from 16 bits, the current industry standard, to only four.

The work, which is being presented this week at NeurIPS, the largest annual AI research conference, could increase the speed and cut the energy costs needed to train deep learning by more than sevenfold. It could also make training powerful AI models possible on smartphones and other small devices, which would improve privacy by helping to keep personal data on a local device. And it would make the process more accessible to researchers outside big, resource-rich tech companies.

How bits work

You’ve probably heard before that computers store things in 1s and 0s. These fundamental units of information are known as bits. When a bit is “on,” it corresponds with a 1; when it’s “off,” it turns into a 0. Each bit, in other words, can store only two pieces of information.

But once you string them together, the amount of information you can encode grows exponentially. Two bits can represent four pieces of information because there are 2^2 combinations: 00, 01, 10, and 11. Four bits can represent 2^4, or 16 pieces of information. Eight bits can represent 2^8, or 256. And so on.

The right combination of bits can represent types of data like numbers, letters, and colors, or types of operations like addition, subtraction, and comparison. Most laptops these days are 32- or 64-bit computers. That doesn’t mean the computer can only encode 2^32 or 2^64 pieces of information total. (That would be a very wimpy computer.) It means that it can use that many bits of complexity to encode each piece of data or individual operation.
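A quick sketch of that exponential growth — nothing here is specific to the IBM work, just the arithmetic from the paragraphs above:

```python
# Each additional bit doubles the number of distinct values you can encode.
for n_bits in (1, 2, 4, 8, 16, 32):
    print(f"{n_bits:>2} bits -> {2 ** n_bits:,} distinct values")
# 2 bits -> 4 (00, 01, 10, 11); 4 bits -> 16; 8 bits -> 256; and so on.
```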

4-bit deep learning

So what does 4-bit training mean? Well, to start, we have a 4-bit computer, and thus 4 bits of complexity. One way to think about this: every single number we use during the training process has to be one of 16 whole numbers between -8 and 7, because these are the only numbers our computer can represent. That goes for the data points we feed into the neural network, the numbers we use to represent the neural network, and the intermediate numbers we need to store during training.

So how do we do this? Let’s first think about the training data. Imagine it’s a whole bunch of black-and-white images. Step one: we need to convert those images into numbers, so the computer can understand them. We do this by representing each pixel in terms of its grayscale value—0 for black, 1 for white, and the decimals between for the shades of gray. Our image is now a list of numbers ranging from 0 to 1. But in 4-bit land, we need it to range from -8 to 7. The trick here is to linearly scale our list of numbers, so 0 becomes -8 and 1 becomes 7, and the decimals map to the integers in the middle. So:

You can scale your list of numbers from 0 to 1 to stretch between -8 and 7, and then round any decimals to a whole number.

This process isn’t perfect. If you started with the number 0.3, say, you would end up with the scaled number -3.5. But our four bits can only represent whole numbers, so you have to round -3.5 to -4. You end up losing some of the gray shades, or so-called precision, in your image. You can see what that looks like in the image below.

The lower the number of bits, the less detail the photo has. This is what is called a loss of precision.
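Here’s a minimal sketch of that scale-and-round step in Python. The [-8, 7] range and the 0.3 example come straight from the explanation above; the function name is ours:

```python
def quantize_4bit(x: float) -> int:
    """Linearly map a grayscale value in [0, 1] onto the 16 integers
    in [-8, 7], then round -- the rounding is where precision is lost."""
    scaled = x * 15.0 - 8.0   # 0 -> -8, 1 -> 7
    return round(scaled)

print(quantize_4bit(0.3))  # scaled value is -3.5, which rounds to -4
```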

This trick isn’t too shabby for the training data. But when we apply it again to the neural network itself, things get a bit more complicated.

A neural network.

We often see neural networks drawn as something with nodes and connections, like the image above. But to a computer, these also turn into a series of numbers. Each node has a so-called activation value, which usually ranges from 0 to 1, and each connection has a weight, which usually ranges from -1 to 1.

We could scale these in the same way we did with our pixels, but activations and weights also change with every round of training. For example, sometimes the activations range from 0.2 to 0.9 in one round and 0.1 to 0.7 in another. So the IBM group figured out a new trick back in 2018: to rescale those ranges to stretch between -8 and 7 in every round (as shown below), which effectively avoids losing too much precision.

The IBM researchers rescale the activations and weights in the neural network for every round of training, to avoid losing too much precision.
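Here’s a sketch of that adaptive rescaling, assuming a simple per-round min/max scheme — the IBM researchers’ actual method is more involved, but the idea is the same:

```python
def rescale_round_to_4bit(values):
    """Stretch whatever range this round's activations happen to occupy
    onto [-8, 7] before rounding, so a narrow range like [0.2, 0.9]
    still uses all 16 levels rather than just a few of them."""
    lo, hi = min(values), max(values)
    return [round((v - lo) / (hi - lo) * 15.0 - 8.0) for v in values]

print(rescale_round_to_4bit([0.2, 0.55, 0.9]))  # -> [-8, 0, 7]
print(rescale_round_to_4bit([0.1, 0.4, 0.7]))   # same levels, new range
```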

But then we’re left with one final piece: how to represent in four bits the intermediate values that crop up during training. What’s challenging is that these values can span across several orders of magnitude, unlike the numbers we were handling for our images, weights, and activations. They can be tiny, like 0.001, or huge, like 1,000. Trying to linearly scale this to between -8 and 7 loses all the granularity at the tiny end of the scale.

Linearly scaling numbers that span several orders of magnitude loses all the granularity at the tiny end of the scale. As you can see here, any numbers smaller than 100 would be scaled to -8 or -7. The lack of precision would hurt the final performance of the AI model.

After two years of research, the researchers finally cracked the puzzle: borrowing an existing idea from others, they scale these intermediate numbers logarithmically. To see what I mean, below is a logarithmic scale you might recognize, with a so-called “base” of 10, using only four bits of complexity. (The researchers instead use a base of 4, because trial and error showed that this worked best.) You can see how it lets you encode both tiny and large numbers within the bit constraints.

A logarithmic scale with base 10.
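And here’s a sketch of the logarithmic version, using the base of 4 the researchers settled on. This is simplified — the real 4-bit format must also spend bits on sign and handle zero — but it shows why a log scale preserves the tiny end of the range:

```python
import math

def log4_quantize(x: float) -> float:
    """Snap a positive value to its nearest power of 4, so tiny values
    like 0.001 and huge ones like 1,000 both survive quantization."""
    exponent = round(math.log(x, 4))
    return 4.0 ** exponent

for v in (0.001, 0.05, 1.0, 30.0, 1000.0):
    print(v, "->", log4_quantize(v))
# 0.001 -> ~0.00098 (4**-5); 1000.0 -> 1024.0 (4**5)
```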

With all these pieces in place, this latest paper shows how they come together. The IBM researchers run several experiments where they simulate 4-bit training for a variety of deep-learning models in computer vision, speech, and natural-language processing. The results show a limited loss of accuracy in the models’ overall performance compared with 16-bit deep learning. The process is also more than seven times faster and seven times more energy efficient.

Future work

There are still several more steps before 4-bit deep learning becomes an actual practice. The paper only simulates the results of this kind of training. Doing it in the real world would require new 4-bit hardware. In 2019, IBM Research launched an AI Hardware Center to accelerate the process of developing and producing such equipment. Kailash Gopalakrishnan, an IBM fellow and senior manager who oversaw this work, says he expects to have 4-bit hardware ready for deep-learning training in three to four years.

Boris Murmann, a professor at Stanford who was not involved in the research, calls the results exciting. “This advancement opens the door for training in resource-constrained environments,” he says. It wouldn’t necessarily make new applications possible, but it would make existing ones faster and less battery-draining “by a good margin.” Apple and Google, for example, have increasingly sought to move the process of training their AI models, like speech-to-text and autocorrect systems, away from the cloud and onto user phones. This preserves users’ privacy by keeping their data on their own phone while still improving the device’s AI capabilities.

But Murmann also notes that more needs to be done to verify the soundness of the research. In 2016, his group published a paper that demonstrated 5-bit training. But the approach didn’t hold up over the years. “Our simple approach fell apart because neural networks became a lot more sensitive,” he says. “So it’s not clear if a technique like this would also survive the test of time.”

Nonetheless, the paper “will motivate other people to look at this very carefully and stimulate new ideas,” he says. “This is a very welcome advancement.”

Read more

Many of the most successful and widely used machine-learning models are trained with the help of thousands of low-paid gig workers. Millions of people around the world earn money on platforms like Amazon Mechanical Turk, which allow companies and researchers to outsource small tasks to online crowdworkers. According to one estimate, more than a million people in the US alone earn money each month by doing work on these platforms. Around 250,000 of them earn at least three-quarters of their income this way. But even though many work for some of the richest AI labs in the world, they are paid below minimum wage and given no opportunities to develop their skills. 

Saiph Savage is the director of the human-computer interaction lab at West Virginia University, where she works on civic technology, focusing on issues such as fighting disinformation and helping gig workers improve their working conditions. This week she gave an invited talk at NeurIPS, one of the world’s biggest AI conferences, titled “A future of work for the invisible workers in AI.” I talked to Savage on Zoom the day before she gave her talk. 

Our conversation has been edited for clarity and length.  

You talk about the invisible workers in AI. What sorts of jobs are these people doing?  

A lot of tasks involve labeling data—especially image data—that gets fed into supervised machine-learning models so that they understand the world better. Other tasks involve transcribing audio. For instance, when you talk to Amazon’s Alexa you might have workers transcribing what you say so that the voice recognition algorithm learns to understand speech better. And I just had a meeting with crowdworkers in rural West Virginia. They get hired by Amazon to read out a lot of dialogue to help Alexa understand how people in that region talk. You can also have workers labeling websites that might be filled with hate speech or pedophilia. This is why, when you search for images on Google or Bing, you’re not exposed to those things.

People are hired to do these tasks on platforms like Amazon Mechanical Turk. Large tech companies might use in-house versions—Facebook and Microsoft have their own, for instance. The difference with Amazon Mechanical Turk is that anyone can use it. Researchers and startups can plug into the platform and power themselves with invisible workers.

What problems do these invisible workers have?

I don’t actually see crowdwork as a bad thing; it’s a really good idea. It has made it very easy for companies to add an external workforce.

But there are a number of problems. One is that workers on these platforms earn very low wages. We did a study where we followed hundreds of Amazon Mechanical Turk workers for several years, and we found that they were earning around $2 per hour. This is much less than the US minimum wage. There are people who dedicate their lives to these platforms; it’s their main source of income.

And that brings other problems. These platforms cut off future job opportunities as well, because full-time crowdworkers are not given a way to develop their skills—at least not ones that are recognized. We found that a lot of people don’t put their work on these platforms on their résumé. If they say they worked on Amazon Mechanical Turk, most employers won’t even know what that is. Most employers are not aware that these are the workers behind our AI.

It’s clear you have a real passion for what you do. How did you end up working on this?

I worked on a research project at Stanford where I was basically a crowdworker, and it exposed me to the problems. I helped design a new platform, which was like Amazon Mechanical Turk but controlled by the workers. But I was also a tech worker at Microsoft. And that also opened my eyes to what it’s like working within a large tech company. You become faceless, which is very similar to what crowdworkers experience. And that really sparked me into wanting to change the workplace.   

You mentioned doing a study. How do you find out what these workers are doing and what conditions they face?

I do three things. I interview workers, I conduct surveys, and I build tools that give me a more quantitative perspective on what is happening on these platforms. I have been able to measure how much time workers invest in completing tasks. I’m also measuring the amount of unpaid labor that workers do, such as searching for tasks or communicating with an employer—things you’d be paid for if you had a salary.

You’ve been invited to give a talk at NeurIPS this week. Why is this something that the AI community needs to hear?

Well, they’re powering their research with the labor of these workers. I think it’s very important to realize that a self-driving car or whatever exists because of people that aren’t paid minimum wage. While we’re thinking about the future of AI, we should think about the future of work. It’s helpful to be reminded that these workers are humans.

Are you saying companies or researchers are deliberately underpaying?

No, that’s not it. I think they might underestimate what they’re asking workers to do and how long it will take. But a lot of the time they simply haven’t thought about the other side of the transaction at all.

Because they just see a platform on the internet. And it’s cheap.

Yes, exactly.

What do we do about it?  

Lots of things. I’m helping workers get an idea of how long a task might take them to do. This way they can evaluate whether a task is going to be worth it. So I’ve been developing an AI plug-in for these platforms that helps workers share information and coach each other about which tasks are worth their time and which let them develop certain skills. The AI learns what type of advice is most effective. It takes in the text comments that workers write to each other, learns what advice leads to better results, and promotes it on the platform.

Let’s say workers want to increase their wages. The AI identifies what type of advice or strategy is best suited to help workers do that. For instance, it might suggest that you do these types of task from these employers but not these other types of task over there. Or it will tell you not to spend more than five minutes searching for work. The machine-learning model is based on the subjective opinion of workers on Amazon Mechanical Turk, but I found that it could still increase workers’ wages and develop their skills.
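Savage doesn’t spell out how the plug-in works internally, but the behavior she describes — surface pieces of advice, observe outcomes, promote what works — maps onto a simple bandit-style loop. Here is a purely hypothetical sketch; the advice strings, the wage-lift reward signal and the function name are our assumptions, not details from the interview:

```python
import random

def promote_advice(advice_rewards: dict, epsilon: float = 0.1) -> str:
    """Epsilon-greedy selection: usually promote the advice with the
    best average observed wage lift, occasionally explore another tip."""
    if random.random() < epsilon:
        return random.choice(list(advice_rewards))
    return max(advice_rewards,
               key=lambda tip: sum(advice_rewards[tip]) / len(advice_rewards[tip]))

# Hypothetical hourly-wage changes observed after workers followed each tip.
rewards = {
    "spend no more than 5 minutes searching for work": [0.40, 0.55, 0.30],
    "prefer employers with high approval ratings": [0.80, 1.10],
    "batch similar tasks together": [0.20, 0.10, 0.25],
}
print(promote_advice(rewards))
```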

So it’s about helping workers get the most out of these platforms?

That’s a start. But it would be interesting to think about career ladders. For instance, we could guide workers to do a number of different tasks that let them develop their skills. We can also think about providing other opportunities. Companies putting jobs on these platforms could offer online micro-internships for the workers.

And we should support entrepreneurs. I’ve been developing tools that help people create their own gig marketplaces. Think about these workers: they are very familiar with gig work and they might have new ideas about how to run a platform. The problem is that they don’t have the technical skills to set one up, so I’m building a tool that makes setting up a platform a little like configuring a website template.  

A lot of this is about using technology to shift the balance of power.

It’s about changing the narrative, too. I recently met with two crowdworkers that I’ve been talking to and they actually call themselves tech workers, which—I mean, they are tech workers in a certain way because they are powering our tech. When we talk about crowdworkers they are typically presented as having these horrible jobs. But it can be helpful to change the way we think about who these people are. It’s just another tech job.  

Read more

A genetically modified salmon will become the first GM food animal to go on sale in the US, according to its maker, AquaBounty, possibly launching an era of steaks and chops from creatures with modified DNA.

In the US, a number of genetically modified animals have been approved or cleared for sale. There’s the neon GloFish with added fluorescence, which you can find at a pet store. And there are a handful of goats, rabbits, and chickens engineered to manufacture drugs in their milk or eggs.

But so far, only one genetically engineered animal has been approved in the US as food. That animal, an Atlantic salmon engineered to grow faster on fish farms, took 20 years to win a nod from regulators, and then got held up for four more years over a labeling dispute. AquaBounty predicted that it would be ready to sell salmon to distributors in the US by this month. 

AquaBounty’s long (and expensive) trip to the marketplace has been discouraging. Who wants their product to be denounced as a frankenfish by environmental campaigners or be prominently labeled as “bioengineered”? Yet now that the fish has won approval, it may be a “wildly important” signal to others working on genetically engineered animals, says Jack Bobo, a former board member at the company. “All GMO research on animals basically stopped for 20 years,” he says. “There was no reason to do it until something got approved.”

The AquaBounty salmon is transgenic: it has a gene from a different species (a Chinook salmon) pasted in. Now, though, with new gene-editing tools, researchers have better ways to introduce gene changes and a wider menu of possible enhancements. Already, gene editing has led to experimental pigs that resist viral infections and dairy cattle whose spots have been changed from black to gray, to thrive in hot climates.

Today, MIT Technology Review is also reporting on a British company called Genus, which is pursuing the largest project yet to genetically modify large farm animals. It’s using newer gene-editing tools to create thousands of pigs immune to some common, and deadly, viruses that affect barnyards.

Animal behavior is on the table too. In 2019, Japanese researchers tried changing a gene in tuna fish to slow them down. Tuna can swim at 40 miles per hour (about seven times as fast as Michael Phelps) and often die in sushi fish farms after collisions with walls.

The path to your dinner table remains a difficult one for these innovations. Activists will criticize them as enabling intensive livestock farming, and it’s true that many genetic innovations were devised to solve problems created by crowding animals together, like disease. 

And the US agency that oversees genetically modified food animals, the Food and Drug Administration, is no pushover. The FDA considers alterations to an animal’s genome to be just like a veterinary drug. That means it wants evidence that the modifications do what their makers say and that they’re safe, for the animals and for us.

Ultimately, though, it will be consumers and food marketers who decide how gene editing fares in the fish and meat aisles. Will people buy salmon or pork chops slapped with labels saying they are genetically engineered? The arrival of the AquaBounty salmon on the market could help answer the question. The company is angry about being required to use such labels and says its fish are just as good as anyone’s. Still, as Bobo says, “it’s best to be transparent and hope that people don’t really care.”

Read more

When covid-19 began to race around the world, countries closed businesses and told people to stay home. Many thought that would be enough to stop the coronavirus. If we had paid more attention to pigs, we might have known better. When it comes to controlling airborne viruses, says Bill Christianson, “I think we fool ourselves on how effective we can be.”

Christianson is an epidemiologist and veterinarian who heads the Pig Improvement Company, in Hendersonville, Tennessee. The company sells elite breeding swine to the pork industry, which for the last 34 years has been fighting a viral disease called porcine reproductive and respiratory syndrome (PRRS).

The pathogen causes an illness known as blue ear, for one of its more visible symptoms; when it first emerged, in the 1980s, it was simply called “mystery swine disease.” Once infected with PRRS (pronounced “purrs”), a sow is liable to miscarry or give birth to dead, shriveled piglets. 

“And I’m going to say yes, it’s worse for pigs than covid is for us,” says Christianson. 

To stop PRRS, as well as other diseases, pig farmers employ measures familiar to anyone who has been avoiding covid-19. Before you enter a secure pig barn, you get your temperature taken, shower, and change clothes. Lunch boxes get bathed in UV light, and supplies are fogged with disinfectant. Then there’s the questionnaire about your “last pig contact”—seen any swine on your day off? Been to a country fair? (Answering yes means a two-week quarantine away from work.) 

Despite the precautions, the virus can slip in. Once inside, it quickly spreads in the close quarters. Swift “depopulation”—i.e., culling—of the animals is the most effective way to get rid of it. In bad years, American pig farmers lose $600 million to PRRS. 

Now Christianson’s company, which is a division of the British animal genetics firm Genus, is trying something different. Instead of trying to seal animals off from the environment, it’s changing the pigs themselves. At an experimental facility in the central US (the location kept secret for security reasons), the company has a swine IVF center and a lab where pig eggs are being genetically edited using CRISPR, the revolutionary gene scissors. 

During a virtual tour, a worker carried a smartphone through the editing lab into the gestation area, where sows spend their roughly four-month pregnancies until giving birth—“farrowing” is the farmer’s term. Then he led the way to a concrete room where gene-edited piglets grunted and peered at the camera. According to the company, these young pigs are immune to PRRS because their bodies no longer contain the molecular receptor the virus docks with.

Every virus attacks cells by fusing with them and injecting its genetic cargo. With covid-19, the virus attaches to a receptor called ACE-2, which is common on airway and lung cells—the reason the disease causes problems with breathing. With PRRS, it’s CD163, a receptor on white blood cells. These experimental pigs don’t have a complete CD163 gene because part of it was snipped away with gene editing. No receptor, no infection. 

According to the company’s unpublished research, attempts to infect the gene-edited pigs with PRRS have not succeeded. “I never thought it would be a light switch,” says Christianson. “But it seems to work on all types of pigs and against all the strains of the virus.” 

Notoriously, a similar method has been tried in humans. In a disastrously reckless 2018 outing, Chinese scientists edited human embryos in hopes of conferring resistance to HIV, the cause of AIDS. Those researchers likewise dreamed of halting a disease by removing a receptor. The problem was the technology wasn’t ready to do such an ambitious job safely. Although the CRISPR tool is immensely versatile, it lacks precision, and the DNA surgery created something akin to genetic scars in the twins born from the experiment. 

In September a high-level international panel said no one should try modifying babies again “until it has been clearly established that it is possible to efficiently and reliably make precise genomic changes without undesired changes in human embryos.”

But with pigs, the era of genetic modification is now, and its benefits might be visible soon. Genus hopes to win approval to sell its pigs in the US and China as early as 2025. Already, its experimental stations are home to hundreds of gene-edited pigs and thousands of their descendants—likely the largest number anywhere. (Read the sidebar on the regulatory approval of GM food animals.)

To Raymond Rowland, a researcher at the University of Illinois who was involved in creating the first PRRS-proof animals, gene editing is “in its largest sense, a way to create a more perfect life” for pigs and their keepers. “The pig never gets the virus. You don’t need vaccines; you don’t need a diagnostic test. It takes everything off the table,” he says. 

Elite pigs

Aldous Huxley’s novel Brave New World begins with a tour of the “Central London Hatchery,” where children in a future society are being produced through a test-tube process under a sign that reads “COMMUNITY, IDENTITY, STABILITY.” The signs at Genus’s facilities are mostly about temperature checks and hand-washing, but the concept is not so different. Every pig is numbered, monitored, and DNA-tested for its genetic qualities. 

The firm manages animals selected to be the healthiest and fastest growing, and to have the largest litters. These animals—what Genus calls “elite germplasm”—are then propagated via breeding on “multiplier farms” and purchased by producers everywhere from Iowa to Beijing, who breed them still further. 

The company has been using DNA sequencing for several years to identify pigs with preferred traits and to steer its breeding programs. In 2015, it signed an exclusive license to gene-edit pigs and cattle using technology from Caribou Biosciences, a company started by Jennifer Doudna of the University of California, Berkeley, who last October shared a Nobel Prize for the development of CRISPR. 

Image Credits: Selman Design

Because the pig company had no experience in genetic engineering, it began to hire plant biologists. One of them is its chief scientific officer, Elena Rice, a Russian-born geneticist who spent 18 years at Monsanto, mostly developing genetically modified corn plants to grow bigger and resist drought. “The plants were never emotional to me,” says Rice. “The little pig or little cow—it’s very emotional. You want to hug them; you want them to be healthy. It’s like having a kid. You don’t want them to be sick.” 

The Genus research station is set up to carry out the editing process quickly, on many pigs. Sows are anesthetized and then rolled into a surgical suite, where veterinarians remove eggs from their ovaries. The eggs are moved to the lab, where they are fertilized and the CRISPR molecules are introduced. Two days after editing, the embryos—by then a few cells big—are implanted into surrogate sows. 

CRISPR is renowned for its ability to cut DNA at predetermined locations, but in practice, the technology has a random element. Aim it at one spot in a genome and you’ll change it in one of several possible ways. Unplanned changes, or “off targets,” can appear far away in the genome, too. 

In plants, this randomness isn’t such a problem. A successful genetic change to a single seed (an “event,” as plant engineers call it) can be multiplied into a million more fairly quickly. In pigs, it’s necessary to create identical edits in many animals in order to establish a population of founder pigs for breeding. 

In experiments on pig cells, the Genus researchers have tried many possible edits to the CD163 gene, looking for those that occur most predictably. Even with such efforts, the pigs being born have the right edit only about 20 to 30% of the time. Those piglets whose genomes have errors end up in a compost heap. “I want to convey that this technology is not simple. You can be good at this technology or bad at it,” says Mark Cigan, a molecular biologist with a senior role in the program. “We need to be rigorous, because we want a predictable change in all the pigs. It has to be the same change every time.”

Eradicating influenza

While PRRS is the big problem in the US, Genus and other companies think they can make pigs immune to other viruses too. They are exploring whether gene editing could create pigs that don’t catch African swine fever, a disease that’s rampant in China and since 2018 has led to the loss of half that country’s pigs. Researchers like Rowland say edited pigs could also have the indirect benefit of lowering the chance that certain viruses will spill over from pigs to humans.

The origins of covid-19 are still undetermined, but the prevailing theory is that the disease is zoonotic, meaning it jumped from animals to people. Since pigs don’t catch the new coronavirus, they probably played no part in covid-19’s emergence. But pig farms are notorious for starting flu pandemics. Pigs can catch both bird and human influenza, in addition to swine flu. That makes them a dangerous mixing vessel in which flu viruses can swap stretches of genetic material with each other.

Such a reassortment of genetic parts can suddenly produce a new flu virus that spreads among people, who will not have immunity. The 2009 H1N1 swine flu carried viral elements from birds, pigs, and humans. In the US there were about 61 million cases: almost 300,000 people ended up in the hospital, and around 12,500 died. The deadly 1918 flu pandemic was accompanied in the US by a “hog flu,” though the connection between them remains unproven. 

Starting last year, Genus has been paying a Kansas State University scientist, Jürgen Richt, to help design pigs resistant to influenza. Richt isn’t sure he can render pigs entirely immune to the fast-evolving flu viruses, but he’s hopeful he can slow the pathogens down, maybe even enough to lower the odds of another pandemic. “If you get less replication, you get less mutation, less reassortment,” he says. The end result is less evolution of the virus.

Because the receptors influenza attaches to are so common in the body, no animal could survive their removal, Richt says. So the project aims instead to remove other genes, for proteins called proteases that the flu—and covid-19—require as helper molecules to effectively enter cells. Because there are many types of flu, it will be necessary to remove more than one protease, leading to the question of whether pigs with too many deleted genes can thrive. If a pig is a Jenga tower, just how many blocks can be removed before the animal falls apart?

“I don’t know the limit to taking out genes. That is why we do trial and error,” says Richt. “But what we want is to make them resistant to all influenzas, from all walks of life.”

It’s not clear yet whether the PRRS-resistant pigs, with only one receptor removed, are healthy and otherwise normal. Cigan says the company thinks they are; researchers can’t see other differences in their tests, which measure things like how much the pigs eat and gain weight. But unplanned changes could be subtle. 

Richt says a decade ago he was involved in making cattle resistant to mad cow disease. After removing one gene, he sensed they were changed. “The way they stood up was funny—it was hard to get them back up,” he says. “The caretaker told me they are stupid, so maybe intelligence was affected.” With only a dozen cows, he never was sure, but he suspects the cattle lost a “luxury function”—one that wasn’t vital to survival but whose removal led to a degradation of the sensory system. 

Black Plague

If gene editing is perfected in pigs—a species anatomically so similar to humans that doctors hope to transplant pig kidneys to humans someday—what will be the implications for people? The debate about human genetic modification has often been reduced to asking whether it would be moral to change a child’s eye color or intelligence, for instance. But the pig hatchery shows that CRISPR might be able to give people inborn “genetic vaccines” against the worst infectious diseases they might encounter. 

The scientists in China who edited human embryos to resist HIV were pursuing just such a revolutionary development. And the problems they ran into were similar to those Genus faces: they couldn’t control the exact edits they made and couldn’t be sure that disrupting one gene (called CCR5) wouldn’t have unanticipated consequences. In that experiment, though, there were no second tries. In addition, many questioned whether the risky attempt was medically necessary, since drugs can keep HIV under control for decades.

Since the Chinese fiasco, the American and British science academies have said that gene editing, when it’s safe enough to use in human reproduction, should avoid “enhancement” of any kind and instead take on narrower goals, such as preventing people from passing inherited conditions like sickle-cell disease to their children.

Yet others think it’s important to master the technology as a possible guard against future pandemics. Removing a receptor from the next generations of humans could be civilization’s fallback if society is hit with a super-disease that can’t be controlled by vaccines or drugs, and for which we don’t develop immunity. 

 “We as a species need to maintain the flexibility, in the face of future threats, to take control over our own heredity,” George Daley, the dean of Harvard Medical School, told an audience in Hong Kong in 2018. He listed “resistance to global pandemics” as one reason to develop techniques to modify human beings.

Covid-19 shows how a novel germ can explode out of nowhere and spread globally. The overall death rate from an infection with the new coronavirus, perhaps 0.5%, doesn’t threaten humanity’s existence. But what if the next pandemic is more like the Black Plague, which killed one-third or more of the population of Europe in the Middle Ages? It’s a remote possibility, like an asteroid strike. But being able to engineer humans to resist specific germs might be a back-pocket technology worth having. 

From what they know of animals, scientists at Genus think editing humans is futuristic but not impossible. Twenty years ago, Rice would have said it was pure fiction. “But now we can actually do it for animals,” she says. “We have the tools.”

Read more