Call it a holiday miracle. Apple today announced that animated holiday classics “A Charlie Brown Thanksgiving” and “A Charlie Brown Christmas” will, indeed, be appearing on television this year. The news comes after pushback against an Apple TV+ exclusivity deal that saw the Peanuts cartoons pulled from broadcast TV.

As we noted last month, the deal would have marked the first time in 55 years that the beloved Christmas special wasn’t broadcast on network television. Both holiday specials appeared to be resigned to the same fate as the 1966 Halloween special, “It’s the Great Pumpkin, Charlie Brown.”

While Apple’s rights deal included a clause providing a window for free viewing, it was hard to shake the feeling that relegating a holiday tradition to a premium subscription service flew in the face of the original special’s staunchly anti-consumerist message.

Thankfully, in addition to appearing on Apple TV+, “A Charlie Brown Thanksgiving” will air on PBS and PBS Kids on November 22, 2020 at 7:30 pm local time/6:30 pm CT, while “A Charlie Brown Christmas” will air on December 13, 2020 at 7:30 pm local time/6:30 pm CT.

It’s a small victory, perhaps, but these days we’ll take them where we can get them. And this time without ads.


A solar-powered autonomous drone scans for forest fires. A surgeon first operates on a digital heart before she picks up a scalpel. A global community bands together to print personal protective equipment to fight a pandemic.

“The future is now,” says Frédéric Vacher, head of innovation at Dassault Systèmes. And all of this is possible with cloud computing, artificial intelligence (AI), and a virtual 3D design shop, or as Dassault calls it, the 3DEXPERIENCE innovation lab. This open innovation laboratory embraces the concept of the social enterprise and merges collective intelligence with a cross-collaborative approach by building what Vacher calls “communities of people—passionate and willing to work together to accomplish a common objective.”

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review’s editorial staff. 

“It’s not only software, it’s not only cloud, but it’s also a community of people’s skills and services available for the marketplace,” Vacher says. 

“Now, because technologies are more accessible, newcomers can also disrupt, and this is where we want to focus with the lab.” 

And for Dassault Systèmes, there are unlimited real-world opportunities in the power of collective intelligence, especially when you are bringing together industry experts, health-care professionals, makers, and scientists to tackle covid-19. Vacher explains, “We created an open community, ‘Open Covid-19,’ to welcome any volunteer makers, engineers, and designers to help, because we saw at that time that many people were trying to do things but on their own, in their lab, in their country.” This wasted time and resources during a global crisis. And, Vacher continues, the urgency of working together to share information became obvious: “They were all facing the same issues, and by working together, we thought it could be an interesting way to accelerate, to transfer the know-how, and to avoid any mistakes.”

Business Lab is hosted by Laurel Ruma, director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next. 

This episode of Business Lab is produced in association with Dassault Systèmes. 

Show notes and links 

“How Effective is a Facemask? Here’s a Simulation of Your Unfettered Sneeze,” by Josh Mings, SolidSmack, April 2, 2020 

“Open COVID-19 Community Lets Makers Contribute to Pandemic Relief,” by Clare Scott, The SIMULIA Blog, Dassault, July 15, 2020

Dassault 3DEXPERIENCE platform

“Collective intelligence and collaboration around 3D printing: rising to the challenge of Covid-19,” by Frédéric Vacher, STAT, August 10, 2020

Full Transcript 

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma. And this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. 

Our topic today is accelerating disruptive innovations to benefit society by building and running massive simulations. The world has big problems, and it’s going to take all of us to help solve them. Two words for you: collective intelligence.  

My guest is Frédéric Vacher, who is the head of innovation at Dassault Systèmes. He is a mechanical engineer who has had a long career at Dassault. First, leading the partnership program, and then launching the 3DEXPERIENCE Lab. This episode of Business Lab is produced in association with Dassault Systèmes. Frédéric, welcome to Business Lab. 

Frédéric Vacher: Good morning, Laurel. Good morning, everyone. 

Laurel: Could you start by first telling us a bit about Dassault Systèmes? I don’t want listeners to confuse it with the aviation company, because we’re talking about a 3D modeling and simulation enterprise that was founded almost 40 years ago and has more than 20,000 employees around the globe. 

Frédéric: Yeah, that is true. We are Dassault Systèmes, the 3DEXPERIENCE company. We have been digital since day one. Dassault Aviation is one of our clients, like all the aerospace companies, but our customers are also car, shipbuilding, consumer goods, and consumer packaged goods companies, and so on. We are a worldwide leader in providing digital solutions, from design and simulation to production, and we cover 11 industries. Our purpose is to harmonize product, nature, and life. 

For the past two years, we have helped our clients across industries to innovate by digitalizing and engineering their products, from very complex products to simpler ones. For the past 10 years, we have invested very strongly in two directions: nature and life, going from things to life. 

Laurel: That is a complicated kind of process to imagine. But with the 3DEXPERIENCE Lab, scientists and engineers can go in and build these cloud-based simulations for 3D modeling, digital twins, and products in a way that is really collaborative, taking advantage of that human element. Could you talk to us a bit more about why Dassault felt it was important to create this 3DEXPERIENCE Lab in a way that was so collaborative? 

Frédéric: We started the 3DEXPERIENCE Lab initiative five years ago to accelerate newcomers, very small actors, startups, makers, as we believe that innovation is everywhere. For 40 years, we innovated with the aerospace and defense industry. For example, we established a partnership with Boeing on the 777, the first airplane that was fully digitized [made into a digital twin]. And not only the product, but all the processes and the factories. Now, because technologies are more accessible, newcomers can also disrupt, and this is where we want to focus with the lab. This lab is targeting open innovation, with startup accelerators empowering communities online: communities of people passionate and willing to work together to accomplish a common objective. 

Laurel: And because it is an open lab, anyone can participate. But you have created a specific program for startups. Could you tell us more about that program? 

Frédéric: Since the beginning, we have been identifying and sourcing startups with disruptive products or projects that want to make a real, strong impact on society. This program provides those startups access to our software and the professional solutions that industry is using in its day-to-day activities, but also access to this cloud platform for communities, which creates access to mentors. Mentors will help them accelerate their development, providing know-how and knowledge. 

Laurel: That kind of access for startups is rather difficult to get, right? Because this kind of software is professional grade, it is expensive. They may not be able to afford it or understand that they even have access to it. But interestingly, it’s not just the software companies and startups that have access to it, it’s also the people who work at Dassault, correct? 

Frédéric: Yeah, that is correct. Thanks to this 3DEXPERIENCE platform in the cloud, as you mentioned, we have 20,000 people worldwide in 140 countries. Those people are knowledgeable, as they support businesses in many industries in terms of technology and science. And those people, on a volunteer basis, can join a lab project as a coach or mentor to a startup. Thanks to this cloud platform, they are not only discussing and providing some insight or information or guidance, but they can really co-design with those guys. Like a Google document, where many people can work on the same document while being in different locations, this program enables us to perform the same way but on a digital mock-up. 

Laurel: People can kind of really visualize what you have in mind. The 3DEXPERIENCE Lab does two things. One, it creates a way for an enterprise to build an entire product as a 3D vision, incorporating feedback from the research lab, the factory floor, and the customer. So, all of the stakeholders can work in a single environment. Could you give us an example of that and how that works? 

Frédéric: In a single environment in the cloud, they can start by using some apps, maybe from CATIA or SolidWorks. They can do the engineering part of the job on the same data model they would use to perform their own simulations: any type of digital simulation that will help them enhance the engineering and the design of their product. Through that, they will optimize the design and then go to the manufacturing aspect, delivering all the processes needed, up to programming the machines. But that is, I would say, the standard way to operate.  

Now, with this platform, you also have access to marketplace services. This is particularly interesting for early-stage startups, as they struggle to find the right partner or the right supplier to manufacture something. Here, at one click of a button, they can source from millions of components that are available through qualified suppliers online, and just drag and drop the component into the project. 

They can access thousands of factories worldwide, where they will be able to have their parts produced, managing all the business online between the two parties. And then, they may also have access to engineering services: if you want to do something but you don’t have the skills to do it, you contract the job to a service bureau or qualified partners that can deliver it. So, it’s not only software, it’s not only cloud, but it’s also a community of people’s skills and services available for the marketplace. 

Laurel: And it really is a platform, right? To directly offer services and innovation from one company to another, in a way that’s very visual and hands-on, so you could actually almost demo the product before you buy it because you are in this 3DEXPERIENCE environment. How does that work? With an example from a company? Am I thinking of that in the correct way? 

Frédéric: You’re correct. The complete digital project is done on the platform before the real product is produced. If you want to develop a new car or a new table or a new chair or a new lamp, you design everything in 3D. You simulate to make it robust, and then you do the engineering to make sure that the manufacturing would be fine based on your manufacturing capacities or partners. And then you go one step further: you can produce the marketing operations, the advertising, the high-quality pictures you need for your flyers, even the video experiences to record the commercials. So, the digital assets that are created at the beginning of the project, to engineer a new product, are now used not only for production, but also for communication, marketing, training, and so on. That means that the people in your marketing department can do the job in parallel and complete all their deliverables, even if the physical product is not there yet. 

Laurel: How do companies feel about sharing some of this intellectual property ahead of time, before the product is even developed? You must have very special philosophies and outlooks to want to do this, right? 

Frédéric: Yeah. The IP is very important for us and obviously for our clients. We deliver to each client a dedicated platform so that they are in a 100% secure network environment. This is true for the big guys like Boeing and Airbus in the aerospace industry or BMW and Tesla in the auto industry. But it’s also true for smaller startups like the ones we are talking about with this innovation lab. 

Laurel: The system really does bring together an enormous amount of complicated issues, including cybersecurity, as well as processing power, data science, artificial intelligence, but also that human intelligence. How does Dassault define collective intelligence? Why is that so important as a philosophy? 

Frédéric: It’s key. Behind any project, behind any company, you have people, right? This is why on this platform, as baseline services, you have all those services to enable people to collaborate, not only to manage their project with sequences, milestones, task management, and so on, like any corporation would do, but now in a very agile way for communities: to connect people, to help people work better together, to match skills and needs. This approach is obviously new for professionals, but these services were brought to the general public by social networks many years ago; we have applied them to innovation and engineering processes within a company. 

Laurel: You mentioned skills, and I think an interesting place to kind of look at it for a bit, is how do people transfer knowledge? And is this environment conducive to training and helping perhaps one group teach the other group how to perform basic tasks or understand a product better? Are you seeing that when companies work with the platform, they actually bring in everyone, including marketing? So, everyone can have a much better understanding of the entire product? 

Frédéric: Definitely. First, we share a common referential, so there is no loss in email exchanges, data exchanges, and so on. Everyone works around a digital twin of the project that is accurate and up to date. Second, this platform enables you to capitalize on knowledge and know-how, which is very important, especially when seniors are retiring, for transferring knowledge to new generations. We have seen in the past, especially in the aerospace industry, many fellows who have left their company have to come back because their knowledge is seen as critical to the process. Such a platform now allows companies to keep the knowledge inside and to transfer it from one generation to another. 

Laurel: So that idea of collective intelligence really does spread throughout an entire enterprise. The lab does take on a number of themes, including healthcare. Could you talk about a few of those ideas? 

Frédéric: Yeah. With the lab, as I said, we have a few main criteria to select a project: a strong, positive impact on society and a disruptive project that calls for collective intelligence. We are very selective, as we really want to think big. We want to accelerate about 10 projects a year from a global standpoint. We heavily use data intelligence and our tools to scan and scroll through all the news on the web, new VCs, the founding of new startups, [all of this is done] in order to detect the weak signals, the new trends, and to identify those new innovators. We use the same platform to orchestrate this ideation process: taking a small idea, nurturing and qualifying the idea, up to validating this idea coming from the startups with the community, the lab community, which is able to challenge the project, give their insights and suggestions, and then vote. 

Every quarter, a new batch of startups presents their projects. They pitch using the platform. All of those discussions on the project are kept as a record, with several mentors giving their opinions based on experience. The committee vote includes our CEO himself, with a few members of the board validating the project based on all these discussions. So it’s a very flexible process, and a very rapid one, considering we are a big company. In less than a few months we can orchestrate a completely new project.  

It’s the complete reverse of building a PowerPoint document to validate a project. It’s a very cool innovation with an inclusive methodology, where every volunteer, every person who wants to contribute, is welcome to. And obviously, when validated, the startups get free access to our software and to the mentors that are recruited. Like on dating apps, we are matching mentors that have expertise and skills with the needs of those startups and projects. 

Laurel: That’s quite a benefit for a startup, to be matched with mentors and other innovators in their particular field. But to have Dassault’s CEO so intimately involved in these processes? That is really quite astounding. 

Frédéric: It’s huge. Even if a startup is not selected, we work on the project, we challenge it with experts. Our CEO himself is challenging the project. That is already important information for those guys and a huge value. To answer your question about the themes, we have three main themes that drive our sourcing: life, city infrastructure, and lifestyle well-being. As I said, what we want is to positively impact society. We believe that the only progress is human. So those themes, as you understood, are all directed toward a better world. 

Laurel: What’s an example of one of these startups that has come to you? What are they working on? 

Frédéric: We have a huge variety of projects. We have amazing projects, for instance, one that is performing 3D printing of organs with patient-specific geometry reconstruction in order to create a virtual twin of a patient. The goal is to have a simulator that surgeons can use to train before the real surgery in the operating room. It was one of the first startups we accelerated at the beginning of the lab: BioModex, a French startup. They started in Paris with two people and are now at 50. They have now also settled in Boston to connect with the life science community. And it is huge if you look at it, especially for neurosurgery: in some complex cases, the surgeon can train on the patient’s own digital twin before the real surgery. So, it reduces risk and increases efficiency. 

Another example is about mobility and drones. We are helping a young startup that is working on a solar autonomous drone. You remember the story about Solar Impulse with Bertrand Piccard, a pioneer who did a world tour with a plane powered by the sun. The limit of that project was the pilot, because you cannot stay up too long without drinking or eating. A drone completely disrupts the concept. This solar autonomous drone is meant to perform and operate missions like forest fire detection: if a drone can stay aloft and detect fires early in the process, it helps, and it could also be used for monitoring borders, coasts, or pipelines. We have been working on it for the past three years. Last summer, they did their first test flight: 12 hours powered by the sun, covering 600 kilometers. So, the first flight was a success and there is a lot of potential in this project. It’s called a drone, but it’s more like a plane with two wings.  

The third one is a US-based company, SparkCharge. They are creating portable, ultra-fast charging units for electric vehicles. Two weeks ago, they were on Shark Tank on ABC and they won. They got funded by Mark Cuban. It’s a huge success. 

Laurel: We should take a minute to define digital twin. A digital twin is a copy of a system that can be manipulated to experiment with different outcomes. Sort of like making a photocopy to preserve the original, but to be able to write on or make changes to the copy. In this case, having a digital twin for a medical procedure helps the surgeon walk through what she is going to do before she does it on a live patient.  
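
To make that definition concrete, here is a minimal, purely illustrative Python sketch of the idea (not Dassault’s platform or API; the pump model and its numbers are hypothetical): the real system and its twin share one model, experiments are run on the twin, and only outcomes that look safe are applied back to the real system.

```python
import copy
from dataclasses import dataclass

@dataclass
class PumpModel:
    """Hypothetical pump description shared by the real unit and its digital twin."""
    rpm: float
    valve_open: float  # 0.0 (closed) to 1.0 (fully open)

    def flow_rate(self) -> float:
        # Toy physics: flow grows with speed and valve opening.
        return 0.8 * self.rpm * self.valve_open

# The "real" system and its digital twin start from the same state.
real_pump = PumpModel(rpm=1200, valve_open=0.5)
twin = copy.deepcopy(real_pump)  # the "photocopy" we are free to write on

# Experiment on the twin only: what happens if we open the valve further?
twin.valve_open = 0.9
print(f"Current flow:   {real_pump.flow_rate():.0f}")
print(f"Simulated flow: {twin.flow_rate():.0f}")

# Only if the simulated outcome stays within limits do we touch the real system.
if twin.flow_rate() < 1000:
    real_pump.valve_open = twin.valve_open
```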

And the second idea of a solar autonomous drone/plane, really, because it’s not a small drone that we think of, it’s a very large one with solar panels on it. Being able to autonomously fly for hours on end to survey forest fires or even oil pipelines, any kind of long flight ability – that really does sound like the future to me. Do you ever just pinch yourself and say, “I can’t believe these are some of the amazing projects people are coming to us with?”

Frédéric: Yeah, the future is now. This 3D-printed organ is in production and it is already being used. The solar autonomous drone made its first flight, and we expect several flights next year. Things are accelerating for the good. 

Laurel: And speaking of one of the most important things we are dealing with here in 2020: the covid-19 pandemic. Dassault Systèmes had a direct response, as many companies are working very closely on solutions to the virus. So, what is the Open Covid-19 project and how is Dassault helping? 

Frédéric: As I said earlier, the 3DEXPERIENCE Lab has had two kinds of projects: a very collective and collaborative project around a startup, or a complete community project around a specific need. We did that, for instance, to reconstruct Leonardo da Vinci’s machines in 3D. We created an online community and shared the collection, all those manuscripts that were Leonardo’s drawings at that time, with engineers, who used our software, or any 3D software, to design and engineer those machines, and it worked pretty well. It started eight years ago, and it is still going. Many machines have been reconstructed, and now they form a playground of many machines. Some of them worked and some of them did not: at that time, he invented so many things, but obviously not everything was going to work. We did the same for the covid-19 situation. 

When the pandemic started, it was in China, and our colleagues there were reporting the issues to us. We saw the pandemic coming into Europe, first through Italy and then France. So, we decided to first work with our data intelligence to understand the needs, developing dashboards to scan what people were saying. Very quickly we identified two main needs: ventilators and protection. Those became the focus. 

So we created an open community, Open Covid-19, to welcome any volunteer makers, engineers, and designers to help, because we saw at that time that many people were trying to do things, but on their own, in their lab, in their country. They were all facing the same issues, and by working together, we thought it could be an interesting way to accelerate, to transfer the know-how, and to avoid repeating mistakes already made. 

Through this community, we accelerated more than 150 projects globally, including around 25 ventilator projects. In India, a startup called Inali did the complete engineering, simulation, and prototyping of a new ventilator in eight days, once again thanks to the cloud and the mentoring.

There were also collaborative projects with industry, as was the case in Brazil and Mexico. For these projects you have makers in fab labs trying to do some frugal innovation with what they have. Some of those projects have been certified, for instance when we worked with the Fab Foundation, from MIT’s Center for Bits and Atoms (CBA), which gathers fab labs around the world to connect local production. That was mainly the case for protection, for PPE and face shields, so that they could 3D print those face shields. We were also able to do some data and GPS localization of those fab labs and hospitals. Urgency dictates connecting them locally, so that a fab lab could design and fabricate PPE for the health-care workers close by. 

Laurel: And one of those projects that obviously got a lot of interest is the simulation of how sneeze particles spread. With covid-19, everyone is very interested in understanding how aerosol particles move through the air. 

Frédéric: Yeah, that is true. We developed a sneeze simulation model, modeling virtual particles in front of a person, a scientific simulation of the human sneeze, to evaluate how pathogens, such as covid, would spread. We built this simulation model with MIT’s CBA and Neil Gershenfeld, first to enhance the design of the PPE, the personal protective equipment, the face shield design. We placed two virtual persons facing each other at a one- or two-meter distance, with one of them sneezing, to see the impact on how those particles would spread from one person to the other, and to optimize the design. We very quickly understood, for instance, that those face shields need a top cover, since the particles drop down and infect the other person. 
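
The production studies Vacher describes run on Dassault’s fluid-dynamics tools; the toy Python sketch below only illustrates the general shape of such a study: launch many virtual droplets, apply a crude travel model, and compare how many reach a listener with and without a top cover on the shield. Every number and the blocking rule are illustrative assumptions, not the actual model.

```python
import random

def simulate_sneeze(n_particles=10_000, listener_distance_m=1.0, shield_top_cover=False):
    """Toy particle model: each droplet gets a random speed and launch angle,
    and a crude formula converts that into a travel distance before it falls."""
    reached = 0
    for _ in range(n_particles):
        speed = random.uniform(2.0, 10.0)        # m/s, illustrative range
        angle_deg = random.uniform(-30.0, 30.0)  # spread of the sneeze cone
        # A top cover on the listener's shield blocks droplets arcing over the rim.
        if shield_top_cover and angle_deg > 10.0:
            continue
        # Faster, flatter droplets travel farther before dropping below face height.
        travel = speed * 0.35 * (1.0 - abs(angle_deg) / 45.0)
        if travel >= listener_distance_m:
            reached += 1
    return reached / n_particles

print("Fraction reaching listener, no top cover:  ", simulate_sneeze())
print("Fraction reaching listener, with top cover:", simulate_sneeze(shield_top_cover=True))
```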

Laurel: So how do you see artificial intelligence augmenting human intelligence? 

Frédéric: For many people, AI is deep learning. It is machine learning, computer vision, or data science; everybody is doing it. For us, artificial intelligence also leads to generative design, for instance. The algorithm creates a shape that meets your design intent, your constraints. So, the designer is no longer sketching the shape he wants; he is providing the constraints and requirements, and the algorithm proposes a design shape that meets those intents. Thanks to artificial intelligence, it completely reverses the way designers perform that function. 
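
As a rough illustration of the inversion Vacher describes (the designer states constraints and an objective; an algorithm proposes the shape), here is a minimal random-search sketch in Python. It is not Dassault’s generative design engine, and the beam parameters, constants, and constraints are invented for the example.

```python
import random

# Design intent: the lightest rectangular beam cross-section that is still stiff enough.
# The designer states the constraints; the algorithm searches for a shape that satisfies them.
MIN_STIFFNESS = 5.0e4   # arbitrary units
MAX_WIDTH_MM = 80.0
MAX_HEIGHT_MM = 120.0

def stiffness(width_mm, height_mm):
    # Bending stiffness of a rectangular section scales with width * height**3 (toy constant).
    return 0.01 * width_mm * height_mm ** 3

def mass(width_mm, height_mm):
    # Proportional to cross-sectional area for a fixed length and material.
    return width_mm * height_mm

best = None
for _ in range(20_000):  # naive generative search: propose, discard violators, keep the lightest
    w = random.uniform(5.0, MAX_WIDTH_MM)
    h = random.uniform(5.0, MAX_HEIGHT_MM)
    if stiffness(w, h) < MIN_STIFFNESS:
        continue  # violates the design intent, discard
    if best is None or mass(w, h) < mass(*best):
        best = (w, h)

print("Proposed cross-section (w x h, mm):", best)
```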

We also spoke about the augmented human, leveraging the virtual twin: a virtual me, in a way, of your body, of your organs. We have this collaborative project called Living Heart, driven by our American colleagues, to revolutionize cardiovascular science through realistic simulation. This research project delivered a heart model for exploring novel digital therapies. And from this model, we accelerated a new startup, a Belgian company called FEops, that now offers the first and only patient-specific simulation model for structural heart interventions with AI, which predicts the best TAVI [transcatheter aortic valve implant] for correctly matching the patient’s anatomy. 

Laurel: So, the simulation really does come out of the cloud, and out of the computer to real life. And, in a rapid way that helps people on a day-to-day basis, which is really fantastic. It’s not something that just lingers around for approval. You can make changes, see the effect, and then move on to see what else you can do to improve situations.  

The face shield project is also one of those that is so critical. Bringing in the makers, as you said, so many folks wanted to get involved, and still are, from around the world, helping out in their own way. So, this idea of bringing in amateur makers, as well as startups, professionals, and enterprises, all working together to combat a global pandemic, is really quite something else. It shows me that Dassault really does have an innovator’s mindset when it comes to science and to helping humanity. How else are you seeing the successes of the 3DEXPERIENCE Lab ripple throughout Dassault? 

Frédéric: At Dassault Systèmes, yes, we are all innovators in a way. That’s why, when I established this 3DEXPERIENCE Lab initiative five years ago, I decided not to create a new organization with a boss who would perform innovation. I was willing to have an inclusive management system. We decided to allow any of our 20,000 employees to take up to 10% of their time to volunteer on innovation accelerated by the lab, and to bring their hard skills and their know-how.  

And again, this is possible thanks to this platform. So we invented, in a way, a new management organization built on communities, completely across silos, across divisions, so that anyone could join a project for a few hours, a few days, or a few weeks in order to work on it. It was really a new governance for open innovation, with new management methodologies that impacted not only the people, the employees, but also our own platform and solutions. We work closely with our R&D to enhance a few applications or to develop new ones, to sustain new methodologies and processes. 

Laurel: And do other companies come to Dassault to ask, “How did you do this?” You’re a large corporation with global offices, and you’ve been around for a long time. You probably have very specific ways of thinking. How did you manage in five years to become this innovative company? They must want to learn from you. 

Frédéric: That’s true. I don’t know if they want to learn from us, but at least they get inspiration from us. What we do is always try to be ahead of our time, thinking of new ways of working at the lab. We experimented with new usages, thanks to the cloud. We succeeded, because now it really works, with 20,000 people in operation, with deliverables and KPIs. Our point is really to inspire them and to show them what is possible and what we can do to transform ourselves. It’s also a digital transformation for Dassault Systèmes and its employees, and it helps other companies think about how it could impact them, how they can also transform their management systems and their companies. 

Laurel: That’s excellent. What a perfect way to end today’s interview. Thank you so much for joining us. 

Frédéric: Thank you, Laurel. 

Laurel: That was Frédéric Vacher, the Head of Innovation at Dassault Systèmes, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review overlooking the Charles River. 

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at dozens of events each year around the world and online. 

For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening. 


In the second of two exclusive interviews, Technology Review’s Editor-in-Chief Gideon Lichfield sat down with Parag Agrawal, Twitter’s Chief Technology Officer, to discuss the rise of misinformation on the social media platform. Agrawal discusses some of the measures the company has taken to fight back, while admitting that Twitter is trying to thread a needle: mitigating the harm caused by false content without becoming an arbiter of truth. This conversation is from the EmTech MIT virtual conference and has been edited for clarity.

For more coverage on this topic, check out this week’s episode of Deep Tech and our tech policy coverage.

Credits:

This episode from EmTech MIT was produced by Jennifer Strong and Emma Cillekens, with special thanks to Brian Bryson and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield.

Transcript:

Strong: Hey everybody, it’s Jennifer Strong, back with part two of our conversation about misinformation and social media. If Facebook is a meeting place where you go to find your community, and YouTube a concert hall or backstage for something you’re a fan of, then Twitter is a bit like the public square where you go to find out what’s being said about something. But what responsibility do these platforms have as these conversations unfold? Twitter has said one of its responsibilities is to “ensure the public conversation is healthy.” What does that mean, and how do you measure it? 

It’s a question we put to Twitter’s Chief Technology Officer Parag Agrawal. Here he is, in conversation with Tech Review’s editor-in-chief Gideon Lichfield. It was taped at our EmTech conference and has been edited for length and clarity.

Lichfield: A couple of years ago, there was a project you started talking about metrics that would measure what a healthy public conversation is. I haven’t seen very much about it since then. So what’s going on with that? How do you measure this?

Agrawal: Two years ago, working with some folks at the MIT Media Lab and inspired by their thinking, we set out on a project to work with academics outside of the company, to see if we could define a few simple metrics or measurements to indicate the health of the public conversation. What we realized in working with experts from many places is that it’s very, very challenging to boil down the nuances and intricacies of what we consider a healthy public conversation into a few simple-to-understand, easy-to-measure metrics that you can put your faith in. And this conversation has informed a change in our approach.

What’s changed is whether or not we are prescriptive in trying to boil things down to a few numbers. But what’s remained is us realizing that we need to work with academic researchers outside of Twitter, share more of our data in an open-ended setting, where they’re able to use it to do research, to advance various fields. And there are a bunch of API-related products that we’ll be shipping in the coming months. One of the things that directly led to that conversation was in April, as we saw COVID, we created an endpoint for COVID-related conversation that academic researchers could have access to. We’ve seen researchers across 20 countries access it.

So in some sense, I’m glad that we set out on that journey. And I still hold out hope that with this open-ended approach, there’ll be academics, and our collaboration with them, which will ultimately lead us to understand public conversation and healthy public conversation well enough to be able to boil down the measurement to a few metrics. But I’m also excited about all the other avenues of research this approach opens up for us. 

Lichfield: Do you have a sense of what an example of such a metric would look like?

Agrawal: So when we set out to talk about this, we hypothesized a few metrics: do people share a sense of reality? Do people have diverse perspectives, and can they be exposed to diverse perspectives? We thought about, is the conversation civil? So, conceptually these are all properties we desire in a healthy public conversation. The challenge lies in being able to measure them in a way that is able to evolve as the conversation evolves, in a way that is reliable and can stand the test of time, as the conversation two years ago was very different from the conversation today. The challenges two years ago, as we understood them, are very different today. And that’s where some of the challenge lies: our understanding of what a healthy public conversation means is still too emergent for us to be able to boil it down into these simple metrics.

Lichfield: Let’s talk a little bit about some of the things you’ve done over the last couple of years. I mean, there’s been a lot of attention, obviously, on the decisions to flag some of Donald Trump’s tweets. But on the more systematic work that you’ve been doing over the last couple of years against misinformation, can you summarize the main points of what you’ve been doing? 

Agrawal: Our approach isn’t to try to identify or flag all potential misinformation. Our approach is rooted in trying to avoid the specific harm that misleading information can cause. We’ve been focused on harm that can be done with misinformation around COVID-19, which has to do with public health, where a few people being misinformed can have implications for everyone. Similarly, we’ve focused on misinformation around what we call civic integrity, which is about people knowing how to participate in elections.

So an example, just to make this clear, around civic integrity: we care about and we take action on content which might misinform people, say content that tells you to vote on November 5th, when election day is November 3rd. And we do not try to determine what’s true or false when someone takes a policy position, or when someone says the sky is purple or blue, or red for that matter. Our approach to misinformation is also not one that’s focused on taking content down as the only measure, which is the regime we have all operated in for many years. It’s an increasingly nuanced approach with a range of interventions, where we think about whether or not certain content should be amplified without context, or whether it’s our responsibility to provide some context so that people can see a bunch of information, but also have the ability and ease to discover all the conversation and context around it, to inform themselves about what they choose to believe in.

Lichfield: How do you evaluate whether something is harmful without also trying to figure out whether it’s true, in other words, COVID specifically for example?

Agrawal: That’s a great question and I think in some cases you rely on credible sources to provide that context. So you don’t always have to determine if something is true or false, but if there’s potential for harm, we choose not to flag something as true or false, but we choose to add a link to credible sources, or to additional conversation around that topic, to provide people context around the piece of content so that they can be better informed, even as this data for understanding and knowledge is evolving. And public conversation is critical to that evolution. We saw people learn through Twitter, because of the way they got informed. And experts have conversations through Twitter to advance the state of our understanding around this disease as well. 

Lichfield: People have been warning about QAnon for years. You started taking down QAnon accounts in July. What took you so long? Why did you… what changed in your thinking?

Agrawal: The way we think about QAnon, or thought about QAnon, is that we have a coordinated manipulation policy that we’ve had for a while, and the way it works is we work with civil society and human rights groups across the globe to understand which groups, which organizations, or what kind of activity rises to a level of harm that requires action from us. In hindsight, I wish we’d acted sooner, but once we understood the threat well, by working with these groups, we took action. Our actions involved decreasing amplification of this content and flagging it in a way that led to a very rapid decrease, of over 50%, in the reach QAnon and related content got on the platform. And since then, we’ve seen sustained decreases as a result of this move.

Lichfield: I’m getting quite a few questions from the audience, which are kind of all asking the same thing. I’ll read them. Who gets to decide what is misinformation? Can you give a clear, clinical definition of misinformation? Does something have to have malicious intent to be misinformation? How do you know if your credible sources are truthful, and what’s measuring the credibility of those sources? Someone is even saying, I’ve seen misinformation in the so-called credible sources. So how do you define that phrase?

Agrawal: I think that’s the existential question of our times. Defining misinformation is really, really hard. As we learn through time, our understanding of truth also evolves. We attempt not to adjudicate truth; we focus on potential for harm. And when we say we lean on credible sources, we also lean on all the conversation on the platform, which gets to talk about these credible sources and point out potential gaps, as a result of which the credible sources also evolve their thinking or what they talk about.

So, we focused way less on what’s true and what’s false. We focus way more on potential for harm as a result of certain content being amplified on the platform without appropriate context. And context is oftentimes just additional conversation that provides a different point of view on a topic so that people can see the breadth of the conversation on our platform and outside and make their own determinations in a world where we’re all learning together.

Lichfield: Do you apply a different standard to things that come from world leaders? 

Agrawal: We do have a policy around public content in the public interest; it’s in our policy framework. So, yes, we do apply different standards. And this is based on the understanding and the knowledge that there’s certain content from elected officials that is important for the public to see and hear, and that content on Twitter is not only on Twitter: it is in newsrooms, it is in press conferences, but oftentimes the source content is on Twitter. The public interest policy exists to make sure that the source content is accessible. We do, however, flag very clearly for everyone when such content violates any of our policies. We take the bold move to flag it and label it, so that people have the appropriate context that this is indeed an example of a violation, and can look at that content in light of that understanding.

Lichfield: If you take President Trump, there was a Cornell study showing that, they measured, 38% of COVID misinformation mentions him. They called him the single largest driver of misinformation around COVID. You flagged some of his tweets, but there’s a lot that he puts out that doesn’t quite rise to the strict definition of misinformation, and yet misleads people about the nature of the pandemic. So doesn’t this exception for public officials undermine the whole strategy?

Agrawal: Every public official has access to multiple ways of reaching people. Twitter is one of them. We exist in a large ecosystem. Our approach of labeling content actually allows us to flag, at the source, content that might potentially harm people, and also to provide people additional context and additional conversation around it. So a lot of these studies, and I’m not familiar with the one you cited, are actually broader than Twitter. And if they are about Twitter, they talk about reach and impressions without talking about people also being exposed to other bits of information around the topic. Now, we don’t get to decide what people choose to believe, but we do get to showcase content and a diversity of points of view on any topic, so that people can make their own determinations.

Lichfield: That sounds a little bit like you’re trying to say, well, it’s not just our fault. It’s everybody’s fault. And therefore there’s not much we can do about it.

Agrawal: I don’t believe I’m saying that. What I’m saying is that the topics of misinformation have always existed in society. We are now a critical part of the fabric of public conversation, and that’s our role in the world. These are not topics we get to extricate ourselves from. These are topics that will remain relevant today and will remain relevant in five years. I don’t live under the illusion that we can do something that magically makes the misleading-information problem go away. We don’t have that kind of power or control, and I honestly would not want that power or control. But we do have the privilege of listening to people, of having a diverse set of people on our platform expressing a diverse set of points of view on the things that really matter to everyone, and of being able to showcase them with the right context so that society can learn from each other and move forward.

Lichfield: When you talk about letting people see content and draw their own conclusions or come to their own opinions, that’s the kind of language that is associated with, I think the way that social media platforms traditionally presented themselves. ‘We’re just a neutral space, people come and use us, we don’t try to adjudicate’. And it seems a little bit at odds with what you were saying earlier about the wanting to promote a healthy public conversation, which clearly involves a lot of value judgments about what is healthy. So how are you reconciling those two?

Agrawal: Oh, I’m not saying that we are a neutral party to this whole conversation. As I said, we’re a critical part of the fabric of public conversation. And you wouldn’t want us to be adjudicating what’s true or what is false in the world. Honestly, we cannot do that globally, in all the countries we work in, across all the cultures and all the nuances that exist. We do, however, have the privilege of having everyone on the platform, and of being able to change things, to give people more control, and to steer the conversation in a way that is more receptive and allows more voices to be heard and all of us to be better informed. 

Lichfield: One of the things that some observers say you could do that would make a big difference would be to abolish the trending topics feature, because that is where a lot of misinformation ends up getting surfaced: things like the QAnon “save the children” hashtag, or the conspiracy theory about Hillary Clinton staffers rigging the Iowa caucus. Sometimes things like that make their way into trending topics, and then they have a big influence. What do you think about that?

Agrawal: I don’t know if you saw it, but just this week we made a change to how trends and trending topics work on the platform. And one of the things we did was, we’re going to show context on everything that trends, so that people are better informed as they see what people are talking about.

Strong: We’re going to take a short break – but first… I want to suggest another show I think you’ll like. Brave New Planet weighs the pros and cons of a wide range of powerful innovations in science and tech. Dr. Eric Lander, who directs the Broad Institute of MIT and Harvard, explores hard questions like

Lander: Should we alter the Earth’s atmosphere to prevent climate change? And, can truth and democracy survive the impact of deepfakes? 

Strong: Brave New Planet is from Pushkin Industries. You can find it wherever you get your podcasts. We’ll be back right after this.

[Advertisement]

Strong: Welcome back to a special episode of In Machines We Trust. This is a conversation between Twitter’s Chief Technology Officer Parag Agrawal and Tech Review’s editor-in-chief Gideon Lichfield. If you want more on this topic, including our analysis, please check out the show notes or visit us at Technology Review dot com.

Lichfield: The election obviously is very close. And I think a lot of people are asking what is going to happen, particularly on election day, as reports start to come in from the polls. There’s worry that some politicians are going to be spreading rumors of violence or vote rigging or other problems, which in turn could spark demonstrations and violence. And so that’s something that all of the social platforms are going to need to react to very quickly, in real time. What will you be doing?

Agrawal: We’ve worked through elections in many countries over the last four years. India, Brazil, large democracies. We learned through each of them, and we’ve been doing work over the years to be better prepared for what’s to come. Last year we made a policy change to ban all political advertising on Twitter, which was in anticipation of its potential to do harm. And we wanted our attention to be focused not on advertising but on the public conversation that’s happening organically, to be able to protect it and improve it, especially as it relates to conversations around the elections.

We did a bunch of work on technology to get better at detecting and understanding state bad actors and their attempts to manipulate elections, and we’ve been very transparent about this. We’ve made public releases of hundreds of such operations from over 10 nations, with tens of thousands of accounts each and terabytes of data, which allow people outside the company to analyze them and understand the patterns of manipulation at play. And we’ve gone ahead with product changes to build more consideration and thoughtfulness into how people share content and how people amplify content.

So, we’ve done a bunch of this work in preparation and through learnings along the way. To get to an answer about election night: we’ve also strengthened our civic integrity policies to not allow anyone, any candidate across all races, to be able to claim an election when a winner has not been declared. We also have strict measures in place to avoid incitements of violence. And we have a team ready, which will work 24/7, to put us in an agile state. 

That being said, we’ve done a bunch of work to anticipate what could happen, but one thing we know for sure is that what’s likely to happen is not something we’ve exactly anticipated. So what’s going to be important for us on that night and beyond, and even leading up to that time, is to be prepared, to be agile, to respond to the feedback we are getting on the platform, to respond to the conversation we’re seeing on and off the platform, and to try to do our best to serve the public conversation in this important time in this country.

Lichfield: Someone in the audience asked something that I don’t think you would agree to, which was: should Facebook and Twitter be shut down for three days before the election? But maybe a more modest version of that would be, is there some kind of content that you think should be shut down right before an election?

Agrawal: Just this week, one of the prominent changes that’s worth talking about in some detail is that we made people have more consideration, more thought, when they retweet. So instead of being able to easily retweet content without additional commentary, we now default people into adding a comment when they retweet. And this is for two reasons: one, to add additional consideration when you retweet and amplify certain content; and two, to have content be shared with more context about what you think about it, so that people understand why you’re sharing it and what the conversation around it is. We also made the trends change which I described earlier. These are changes which are meant to make the conversation on Twitter more thoughtful.

That being said, Twitter is going to be a very, very powerful tool during the time of elections for people to understand what’s happening, for people to get really important information. We have labels on all candidates. We have information on the platform about how they can vote. We have real-time feedback coming from people all over the country, telling people what’s happening on the ground. And all of this is important information for everyone in this country to be aware of in that time. It’s a moment where each of us is looking for information and our platform serves a particularly important role on that day.

Lichfield: You’re caught in a bit of a hard place, as somebody in the audience is also pointing out: you’re trying to combat misinformation, but you also want to protect free speech as a core value, and also, in the U.S., the First Amendment. How do you balance those two?

Agrawal: Our role is not to be bound by the First Amendment, but our role is to serve a healthy public conversation, and our moves are reflective of things that we believe lead to a healthier public conversation. The way we approach this is to focus less on thinking about free speech and more on how the times have changed. One of the changes today is that speech is easy on the internet. Most people can speak. Where our role is particularly emphasized is who can be heard. The scarce commodity today is attention. There’s a lot of content out there, a lot of tweets out there; not all of it gets attention, some subset of it gets attention. And so increasingly our role is moving towards how we recommend content, and that is a struggle that we’re working through: how we make sure the recommendation systems we’re building, and the way we direct people’s attention, lead to a healthy public conversation that is most participatory. 

Lichfield: Well, we are out of time, but thank you for really interesting insight into how you think about these very complicated issues.

Agrawal: Thank you Gideon for having me.

[Music]

Strong: If you’d like to hear our newsroom’s analysis of this topic and the election… I’ve dropped a link in our show notes. I hope you’ll check it out. This episode from EmTech was produced by me and by Emma Cillekens, with special thanks to Brian Bryson and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield. As always, thanks for listening. I’m Jennifer Strong.

[TR ID]


Misinformation and social media have become inseparable from one another; as platforms like Twitter and Facebook have grown to globe-spanning size, so too has the threat posed by the spread of false content. In the midst of a volatile election season in the US and a raging global pandemic, the power of information to alter opinions and save lives (or endanger them) is on full display. In the first of two exclusive interviews with two of the tech world’s most powerful people, Technology Review’s Editor-in-Chief Gideon Lichfield sits down with Facebook CTO Mike Schroepfer to talk about the challenges of combating false and harmful content on an online platform used by billions around the world. This conversation is from the EmTech MIT virtual conference and has been edited for length and clarity.

For more coverage on this topic, check out this week’s episode of Deep Tech and our tech policy coverage.

Credits:

This episode from EmTech was produced by Jennifer Strong and Emma Cillekens, with special thanks to Brian Bryson and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield.

Transcript:

Strong: Hey everybody, it’s Jennifer Strong. Last week I promised to pick out something to play for you from EmTech, our newsroom’s big annual conference. So here it is. With the U-S election just days away, we’re going to dive straight into one of the most contentious topics in the world of tech and beyond – misinformation. 

Now a lot of this starts on conspiracy websites, but it’s on social media that it gets amplified and spread. These companies are taking increasingly bold measures to ban certain kinds of fake news and extremist groups, and they’re using technology to filter out misinformation before humans can see it. They claim to be getting better and better at that, and one day they say they’ll be able to make the internet safe again for everyone. But, can they really do that? 

In the next two episodes we’re going to meet the chief technology officers of Facebook and Twitter. They’ve both taken VERY different approaches when it comes to misinformation, in part because a lot of what happens on Facebook is in private groups, which makes it a harder problem to tackle. Whereas on Twitter, most everything happens in public. So, first up – Facebook. Here’s Gideon Lichfield, the editor in chief of Tech Review. He’s on the virtual mainstage of EmTech for a session that asks, ‘Can AI clean up the internet’? This conversation’s been edited for length and clarity.

Lichfield: I am going to turn to our first speaker, who is Mike Schroepfer, known generally to all his colleagues as Schrep. He is the CTO of Facebook. He’s worked at Facebook since 2008, when it was a lot smaller, and he became CTO in 2013. Last year The New York Times wrote a big profile of him, which is a very interesting read. It was titled “Facebook’s AI whiz now faces the task of cleaning it up. Sometimes that brings him to tears.” Schrep, welcome. Thank you for joining us at EmTech.

Schroepfer: Hey Gideon, thanks. Happy to be here.

Lichfield: Facebook has made some pretty aggressive moves, particularly in just the last few months. You’ve taken action against QAnon, and you’ve banned Holocaust denial and anti-vaccination ads. But people have been warning about QAnon for years; people have been warning about anti-vaccination misinformation for years. So, why did it take you so long? What changed in your thinking to make you take this action?

Schroepfer: Yeah, I mean, the world is changing all the time. There’s a lot of recent data, you know, on the rise of antisemitic beliefs or a lack of understanding about the Holocaust. QAnon, you know, has moved into more of a threat of violence in recent years. And the idea that there would be threats of violence around a US election is a new thing. And so, particularly around critical events for society, like an election, we’re doing everything we can to make sure that people feel safe and secure and informed to make the decision they get to make to elect who is in government. And so we’re taking more aggressive measures.

Lichfield: You said something just now, you said there was a lot of data. And that resonates with something Alex Stamos, the former chief security officer of Facebook, said in a podcast recently: that at Facebook, decisions are really taken on the basis of data. So is it that you needed to have overwhelming data evidence that, you know, Holocaust denial is causing harm, or that QAnon is causing harm, before you take action against it?

Schroepfer: What I’d say is this. We operate a service that’s used by billions of people around the world, and so a mistake I don’t want to make is to assume that I understand what other people need, what other people want, or what’s happening. And so a way to avoid that is to rely on expertise where we have it. So, you know, for example, for dangerous organizations, we have many people with backgrounds in counterterrorism, people who went to West Point, and many people with law enforcement backgrounds. Where you talk about voting interference, we have experts with backgrounds in voting and voting rights.

And so you listen to experts, and you look at data, and you try to understand that topic, because, you know, you don’t want me making these decisions. You want the experts and you want the data to do it. And because it’s not just this issue here, it’s issues of privacy, it’s issues in different locales, I would say that we try to be rigorous in using expertise and data where we can, so we’re not making assumptions about what’s happening in the world or what we think people need.

Lichfield: Well, let’s talk a bit more about QAnon specifically, because the approach that you take to handling misinformation, obviously, is to try to train your AIs to recognize stuff that is harmful. And the difficulty with this approach is that the nature of misinformation keeps changing; it’s context specific, right? With misinformation about Muslims in Myanmar, which sparked riots there, you don’t know that it is misinformation until it starts appearing. The issue, it seems to me, with QAnon is that it’s not like ISIS or something: its beliefs keep changing, the accounts keep changing. So how do you tackle something that is so ill-defined as a threat like that?

Schroepfer: Well, you know, I will talk about this, and I think, from a technical perspective, one of the hardest challenges that I’ve been very focused on in the last few years, because of similar problems in terms of subtlety, coded language, and adversarial behavior, is hate speech. There’s overt hate speech, which is very obvious, and you can use phrases you’ve banked, or keywords. But people adapt and they use coded language, and they do it, you know, on a daily, weekly basis. And you can even do this with memes, where you have a picture and then you overlay some words on top of it, and it completely changes the meaning. ‘You smell great today’ over a picture of a skunk is a very different thing than over a flower, and you have to put it all together.

And similarly, as you say, with QAnon there can be subtlety and things like that. This is why I’ve been so focused on a couple of key AI technologies. One is that we’ve dramatically increased the power of these classifiers to understand and deal with nuanced information. You know, five or ten years ago, keywords were probably the best we could do. Now we’re at the point where our classifiers are catching errors in the labeling data, or catching errors that human reviewers sometimes make, because they are powerful enough to catch subtlety in topics like: is this a post that’s inciting violence against a voter, or are they just expressing displeasure with voting or this population? Those are two very… unfortunately, it’s a fine line when you look at how careful people try to be about coding the language to get around it.

And so you see similar things with QAnon and others. And so we’ve got classifiers now that work at the state of the art in multiple languages and are really impressive in what they’ve done, through techniques that we can go into, like self-supervision, looking at billions of pieces of data to train. And then the other thing we’ve got is a similar technique that allows us to do what is best described as fuzzy matching. A human reviewer spends the time and says, you know what, I think these pieces are misinformation, or this is a QAnon group, even though it’s coded in different language. What we can then do is fan out and find things that are semantically similar: not the exact words, not keywords, not regexes, but things that are very close in an embedding space, things that are semantically similar. And then we can take action on them.
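
To make that idea concrete, here is a minimal, hypothetical sketch of embedding-based fuzzy matching; the embeddings, names, and threshold below are illustrative stand-ins, not Facebook’s actual models or code:

```python
# A minimal, illustrative sketch of embedding-based fuzzy matching: flag any
# candidate post whose embedding sits close to a human-labeled example.
# The toy vectors and the 0.85 threshold are assumptions, not Facebook's setup.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_similar(candidates, labeled, threshold=0.85):
    """Return candidate posts semantically close to any labeled example."""
    flagged = []
    for text, vec in candidates.items():
        if any(cosine_similarity(vec, ref) >= threshold for ref in labeled.values()):
            flagged.append(text)
    return flagged

# Toy demo: one labeled example and two candidates with made-up embeddings.
rng = np.random.default_rng(0)
labeled = {"known coded slogan": rng.normal(size=16)}
candidates = {
    "reworded variant of the slogan": labeled["known coded slogan"] + 0.05 * rng.normal(size=16),
    "photo of my dog": rng.normal(size=16),
}
print(flag_similar(candidates, labeled))  # likely flags only the reworded variant
```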

And this allows what I call quick reaction. So even if I had no idea what this thing was yesterday, if a bunch of human reviewers find it today, we can then amplify their work across the network and apply it proactively any time new pieces of content show up. Just to put this in context, you know, in Q2 we took down 7 million pieces of COVID misinformation. Obviously, in Q4 of last year there was no such thing as COVID misinformation, so we had to build new classifier techniques to do this. And the thing I’ve challenged the team on is getting our classifier build time down from what used to be many, many months to, you know, sometimes weeks, to days, to minutes. The first time I see an example, or the first time I read a new policy, I want to be able to build a classifier that’s functional at, you know, billion-user scale. And, you know, we’re not there yet, but we’re making rapid progress.

Lichfield: Well, so I think that is the question: how rapid is the progress, right? That 7-million-pieces-of-misinformation statistic, I saw that quoted by a Facebook spokesperson in response to a study that came out from Avaaz in August. It had looked at COVID misinformation and found that the top 10 websites spreading misinformation had four times as many estimated views on Facebook as equivalent content from the websites of 10 leading health institutions, like the WHO. It found that only 16% of all the health misinformation it analyzed had a warning label from Facebook. So in other words, you’re obviously doing a lot, you’re doing a lot more than you were, and you’re still, by that count, way behind the curve. And this is a crisis that is killing people. So how long is it going to take you to get there, do you think?

Schroepfer: Yeah, I mean, I think that, you know, this is where I’d like us to be publishing more data on this. Because really, what you need to compare apples to apples is the overall reach of this information, and sort of what the information exposure diet of the average Facebook user is. And I think there’s a couple of pieces that people don’t get. The first is that most people’s news feed is filled with content from their friends. News links are a minority of the views, all in, in people’s news feeds on Facebook. I mean, the point of Facebook is to connect with your friends, and you’ve probably experienced this yourself. It’s, you know, posts and pictures and things like that.

Secondly, on things like COVID misinformation, what you really have to compare that with is, for example, views of our COVID information center, which we literally shoved to the very top of the news feed so that everyone could get information on that. We’re doing similar things for voting. We’ve helped to register almost two and a half million voters in the U.S. Similar information, you know, for issues of racial justice, given all the horrible events that have happened this year. So what I don’t have is the comprehensive study of, you know, how many times someone viewed the COVID information hub versus these other things. But my guess is that they’re getting a lot more of that good information from us.

But look, you know, anytime any of this stuff escapes, I’m not done yet. This is why I’m still here doing my job: we want to get this better. And yes, I wish it was 0%. I wish our classifiers were 99.999% accurate. They’re not. You know, my job is to get them there as fast as humanly possible, and when we get off this call, that’s what I’m going to go work on. What I can do is just look at recent history and project progress forward, because I can’t fix the past, but I can fix today and tomorrow. When I look at things like hate speech: in 2017, only about a quarter of the pieces of hate speech were found by our systems first. Almost three quarters of it was found by someone on Facebook first. Which is awful, which means they were exposed to it and had to report it to us. And now the number’s up to 94.5%. Even between Q2 of this year and the same time last year, we 5Xed the amount of content we’re taking down for hate speech. And I can trace all of that. Now, that number should be 99.99, and we shouldn’t even be having this conversation, because you should say, I’ve never seen any of this stuff, and I never hear about it, ‘cause it’s gone.

That is my goal, but I can’t get there yet. But anytime something 5Xs in a year, or goes from 24% to 94% in two years, and I’m not out of ideas, and we’re still deploying state-of-the-art stuff this week, next week, like we did last week, that’s why I’m optimistic overall that we’re going to move this problem into a place where it’s not the first thing you want to talk to me about. But I’m not there yet.

Lichfield: It’s a tech problem, but it’s also obviously a workforce problem. You’re obviously going to be familiar with the memo that Sophie Zhang, a former Facebook data scientist, wrote when she departed. She wrote about how she was working on one of the teams, you have multiple teams, that work on trying to identify harmful information around the world. And her main complaint, it seems, was that she felt those teams were understaffed, and she was having to prioritize decisions about whether to treat, say, misinformation around an election in a particular country as dangerous. And when those decisions weren’t prioritized, sometimes it could take months for a problem to be dealt with, and that could have real consequences. You have, I think, about 15,000 human moderators right now. Do you think you have enough people?

Schroepfer: I never think we have enough people on anything. I’ve yet to be on a project where we were looking for things to work on, and I mean that really seriously. And we have 35,000 people working on this across the review, content, safety, and security side. The other thing that I think we don’t talk a lot about is, if you go talk to the heads of my AI team and ask them what Schrep has been asking them to do for the last three years, it’s integrity, it’s content moderation. It’s not cool, whizzy new things. It’s: how do we fight this problem? And we’ve been working on it for years.

So I’ve taken sort of the best and the brightest we have in the company and said, we’ve got this huge problem, we can help, let’s go get this done. And it’s not like I have to order them to do it, because they want to work on it. Are we done yet? No. Am I impatient? Absolutely. Do I wish we had more people working on it? All the time. You know, we have to make our trade-offs on these things, but my job, and what we can do with technology, is to remove some of those trade-offs. Every time we deploy a new, more powerful classifier, that removes a ton of work from our human moderators, who can then go work on higher-level problems. Instead of really easy decisions, they move on to misinformation and really vague things and evaluating dangerous groups, and that sort of moving people up the difficulty curve is also improving things. And that’s what we’re trying to do.

Strong: We’re going to take a short break – but first, I want to suggest another show I think you’ll like. Brave New Planet weighs the pros and cons of a wide range of powerful innovations in science and tech. Dr. Eric Lander, who directs the Broad Institute of MIT and Harvard, explores hard questions like:

Lander: Should we alter the Earth’s atmosphere to prevent climate change? And can truth and democracy survive the impact of deepfakes?

Strong: Brave New Planet is from Pushkin Industries. You can find it wherever you get your podcasts. We’ll be back right after this.

[Advertisement]

Strong: Welcome back to a special episode of In Machines We Trust. This is a conversation between Facebook’s Mike Schroepfer and Tech Review’s Editor-In-Chief Gideon Lichfield. It happened live on the virtual stage of our EmTech Conference, and it’s been edited for length and clarity. If you want more on this topic, including our analysis, please check out the show notes or visit us at Technology Review dot com.

Lichfield: A couple of questions that I’m going to throw in from the audience: how does misinformation affect Facebook’s revenue stream? And another is about how it affects trust in Facebook; there seems to be an underlying lack of trust in Facebook, and how do you measure trust? And the gloss that I want to put on these questions is: clearly you care about misinformation, and clearly a lot of the people who work at Facebook care about it or are worried by it. But I think the underlying question that people have is, does Facebook as a company care about it? Is it impacted by it negatively enough for it to really tackle the problem seriously?

Schroepfer: Yeah. I mean, look, I’m a person in society too. I care a lot about democracy and the future and advancing people’s lives in a positive way, and I challenge you to find someone who feels differently inside our offices. So yes, we work at Facebook, but we’re people in the world, and I care a lot about the future for my children. Well, you’re asking, do we care? And the answer is yes. Do we have the incentives? Like, what did we spend a lot of our time talking about today? We talked about misinformation and other things. Honestly, what would I rather talk about? I’d rather talk about VR and positive uses of AR and all the awesome new technology we’re building, because that’s normally what a CTO would be talking about.

So it is obviously something that is challenging trust in the company and trust in our products, and that is a huge problem for us from a self-interest standpoint. So even if you think I’m full of it, just from a practical, self-interested standpoint: as a brand, as a consumer product that people voluntarily use every single day, when I try to sell a new product like Portal, which is a camera for your home, people need to trust the company that’s behind the product and think we have their best intentions at heart. If they don’t, it’s going to be a huge challenge for absolutely everything I do. So I think the interests here are pretty aligned. I don’t think there are a lot of good examples of consumer products that are free that survive if people don’t like them, don’t like the companies, or think they’re bad. So this is, from a self-interested standpoint, a critical issue for us.

[Credits]

Strong: This conversation with Facebook’s CTO is the first of two episodes on misinformation and social media. In the next part we chat with the CTO of Twitter. If you’d like to hear our newsroom’s analysis of this topic and the election, I’ve dropped a link in our show notes. I hope you’ll check it out. This episode from EmTech was produced by me and by Emma Cillekens, with special thanks to Brian Bryson and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield. As always, thanks for listening.  I’m Jennifer Strong. 

[TR ID]

Read more

Defining what is, or isn’t, artificial intelligence can be tricky. So much so that even the experts get it wrong sometimes. That’s why MIT Technology Review’s Senior AI Reporter Karen Hao created a flowchart to explain it all. In this bonus content, our host Jennifer Strong and her team reimagine Hao’s reporting, gamifying it into an audio postcard of sorts.

Credits:

This episode was reported by Karen Hao. It was adapted for audio and produced by Jennifer Strong and Emma Cillekens. The voices you heard were Emma Cillekens, as well as Eric Mongeon and Kyle Thomas Hemingway from our art team. We’re edited by Michael Reilly and Niall Firth.

Transcript:

Strong: Hi there… I’m Jennifer Strong, senior editor for live journalism and podcasts, and host of the series In Machines We Trust.

Hao: And I’m Karen Hao, Tech Review’s senior reporter covering artificial intelligence. 

Strong: We’re at the biggest conference of the year for our newsroom diving into trends in emerging technology – it’s called EmTech.

Hao: But don’t worry, we aren’t going to leave you empty-handed.

Strong: No, not at all. Thank you so much for being here and as a way to show our appreciation, we made you a little gift. 

Hao: A while back I drew something to help make sense of a really basic question, so basic, and yet… so important. We return to it constantly in our work to try to make sure we’re all on the same page.

Strong: Karen’s drawing helps us tease out whether something actually involves artificial intelligence.

Hao: That’s because it’s confusing! Even to experts. Companies claiming to use AI also fail this test more often than you might think.

Strong: Problem is, you can’t see me holding it up right now. And so, instead, we’ve gamified it into what I think of as an audio postcard.

Hao: Yes, it’s a wonderful audio postcard. Have fun! 

Cillekens: Ladies and gentlemen… welcome to ‘This is AI’!

[Music]

Players will ask questions that get to the bottom of what is or isn’t AI. And I’ve brought along an “assistant” to help out with the answers.

Voice Assistant: Hello. 

Cillekens: Hello, Alexa. And just so we’re all on the same page… Artificial Intelligence… in its broadest sense refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, much like humans and animals do. This bell…

[Bell ding to indicate correct answer]

…means correctly identified AI… and this buzzer…

[Buzzer to indicate false answer]

Well, not so much. Ok. So, let’s test your knowledge. Ready… set… player one, go!

Mongeon: Can ‘it’ see?

Voice Assistant: Yes

Mongeon: Can it identify what it sees?

Voice Assistant: No

[Buzzer to indicate false answer]

Cillekens: Ok, so that’s just a camera.

Mongeon:…ok ok… but what if it can identify what it sees? 

[Bell ding to indicate correct answer]

Cillekens: Yep – that’s computer vision and image processing. Player two!

Hemingway: Can it hear?

Voice Assistant: Yes

Hemingway: Does it respond in a useful, sensible way to what it hears?

Voice Assistant: Yes 

[Bell ding to indicate correct answer]

Cillekens: So, that’s N-L-P – natural language processing. The goal of this kind of A-I is to help computers make sense of human languages in a way that’s useful. But… what if it doesn’t respond in a useful, sensible way to what it hears? Could that also be A-I?

Hemingway: If it’s transcribing what you say? 

[Bell ding to indicate correct answer]

Cillekens: That’s also A-I: it’s speech recognition, which is similar but works from the spoken word instead of text. New round of questions! Player 1.

Mongeon: Can it read?

Voice Assistant: Yes

Mongeon: Is it reading what you type?

Voice Assistant: No

Mongeon: Is it reading passages of text? 

Voice Assistant: Yes

Mongeon: Is it analyzing the text for patterns?

Voice Assistant: Yes

[Bell ding to indicate correct answer]

Cillekens: Once again that’s N-L-P – natural language processing. Well done! 

Hemingway: I’ll take that same question again. Can it read?

Voice Assistant: Yes

Hemingway: Is it reading what you type?

Voice Assistant: Yes

Hemingway: Does it respond in a sensible, useful way? 

Voice Assistant: Yes

[Bell ding to indicate correct answer]

Cillekens: Ok – that’s also N-L-P – natural language processing. New question please player 1.

Mongeon: Can it reason?

Voice Assistant: Yes

Mongeon: Is it looking for patterns in massive amounts of data? 

Voice Assistant: Yes

Mongeon: Is it using these patterns to make decisions?

Cillekens: Well, if not, that sounds like math…. 

Mongeon: But if it is using patterns to make decisions?

Voice Assistant: Yes

[Bell ding to indicate correct answer]

Cillekens: Then that’s machine learning – which is when a machine learns through experience. Final round!

Hemingway: Can it move?

Voice Assistant: Yes

Hemingway: By itself, without help?

Voice Assistant: Yes

Hemingway: Does it move based on what it sees and hears? 

Voice Assistant: Yes

Hemingway: Are you sure it’s not just moving along a pre-programmed path?

Voice Assistant: Hmmm. I’m not sure.

Cillekens: Very funny… but if so, that’s just a bot.

[Buzzer to indicate false answer]

Hemingway: Ok, let’s try that again. Is it moving along a pre-programmed path?

Voice Assistant: No.

[Bell ding to indicate correct answer]

Cillekens: Ok, so that’s a smart robot, meaning one that’s using A-I to make some of its own decisions. Great… And that’s the game. Thanks for playing!

[Music]

Strong: We’re going to take a short break – but first, I want to suggest another show I think you’ll like. Brave New Planet weighs the pros and cons of a wide range of powerful innovations in science and tech. Dr. Eric Lander, who directs the Broad Institute of MIT and Harvard, explores hard questions like…

Lander: Should we alter the Earth’s atmosphere to prevent climate change? And can truth and democracy survive the impact of deepfakes? 

Strong: Brave New Planet is from Pushkin industries. You can find it wherever you get your podcasts. We’ll be back right after this.

[Advertisement]

Strong: Welcome back to this bonus episode of In Machines We Trust. We’re at our EmTech Conference this week… and so instead of coming to you with a regular episode… we made something fun that hopefully helps clear up some confusion around what is or isn’t A-I…

Hao: And if you’d like to see what it looks like on paper, you can check out my drawing. It’s called “What is A-I” at Technology Review dot com.

Strong: In the meantime, Karen and I are going to put our heads together and pick out something to play for you from the conference… so keep an eye out for that.

Hao: We’re also working on another short series about the many ways facial recognition is used in things like retail and sports… we hope you’ll join us.

[Music]

Strong: Many thanks to the talented voices in this episode, including our producer, Emma Cillekens, and our art team, Eric Mongeon and Kyle Thomas Hemingway. By the way, if you like our cover art, you just heard from the folks who made it. Karen Hao did the reporting, and it was adapted by me, Jennifer Strong. Our editors are Michael Reilly and Niall Firth.

Hao: Tell us what you think. Love it? Hate it? Have suggestions for what you’d like to hear next? Please send us feedback to podcasts at technology review dot com. 

[TR ID]

Read more

The news: A new AI model for summarizing scientific literature can now assist researchers in wading through and identifying the latest cutting-edge papers they want to read. On November 16, the Allen Institute for Artificial Intelligence (AI2) rolled out the model onto its flagship product, Semantic Scholar, an AI-powered scientific paper search engine. It provides a one-sentence tl;dr (too long; didn’t read) summary under every computer science paper (for now) when users search or go to an author’s page. The work was also accepted to the Empirical Methods in Natural Language Processing (EMNLP) conference this week.

[Image: A screenshot of the tl;dr feature in Semantic Scholar. Credit: AI2]

The context: In an era of information overload, using AI to summarize text has been a popular natural-language processing (NLP) problem. There are two general approaches to this task. One is called “extractive,” which seeks to find a sentence or set of sentences from the text verbatim that captures its essence. The other is called “abstractive,” which involves generating new sentences. While extractive techniques used to be more popular due to the limitations of NLP systems, advances in natural language generation in recent years have made the abstractive one a whole lot better.
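
To make the distinction concrete, here is a toy sketch of the extractive approach, which scores each sentence by the frequency of its words in the document and returns the top-scoring sentence verbatim; an abstractive sketch follows the next paragraph. The scoring rule and example text are purely illustrative, not how production systems work:

```python
# A toy illustration of extractive summarization: score sentences by how often
# their words appear in the whole document and keep the best one verbatim.
# Real extractive systems are far more sophisticated; this just shows the idea.
import re
from collections import Counter

def extractive_tldr(text):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    word_freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        return sum(word_freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    return max(sentences, key=score)

doc = ("Transformers have greatly improved text summarization. "
       "Summarization systems compress long documents into short summaries. "
       "Extractive methods copy existing sentences, while abstractive methods "
       "write new ones.")
print(extractive_tldr(doc))
```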

How they did it: AI2’s abstractive model uses what’s known as a transformer—a type of neural network architecture first invented in 2017 that has since powered all of the major leaps in NLP, including OpenAI’s GPT-3. The researchers first trained the transformer on a generic corpus of text to establish its baseline familiarity with the English language. This process is known as “pre-training” and is part of what makes transformers so powerful. They then fine-tuned the model—in other words, trained it further—on the specific task of summarization.
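
As a rough illustration of that pre-train-then-fine-tune pattern (not AI2’s actual code), the sketch below uses a generic pre-trained seq2seq transformer from the Hugging Face transformers library; the checkpoint name and the toy paper/summary pair are placeholders:

```python
# Minimal sketch: start from a transformer pre-trained on generic English text,
# then fine-tune it on a (paper, one-sentence summary) pair and generate a tl;dr.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "facebook/bart-base"  # pre-trained checkpoint; placeholder choice
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# One toy paper/summary pair stands in for the real fine-tuning data here.
paper = "We propose a transformer-based method for extreme summarization ..."
tldr = "A transformer model compresses papers into single-sentence summaries."

inputs = tokenizer(paper, return_tensors="pt", truncation=True)
labels = tokenizer(tldr, return_tensors="pt", truncation=True).input_ids

# One fine-tuning step: train the pre-trained model further on summarization.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()

# Inference: generate a short summary for the input text.
summary_ids = model.generate(**inputs, max_length=30, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```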

The fine-tuning data: The researchers first created a dataset called SciTldr, which contains roughly 5,400 pairs of scientific papers and corresponding single-sentence summaries. To find these high-quality summaries, they first went hunting for them on OpenReview, a public conference paper submission platform where researchers will often post their own one-sentence synopsis of their paper. This provided a couple thousand pairs. The researchers then hired annotators to summarize more papers by reading and further condensing the synopses that had already been written by peer reviewers.

To supplement these 5,400 pairs even further, the researchers compiled a second dataset of 20,000 pairs of scientific papers and their titles. The researchers intuited that because titles themselves are a form of summary, they would further help the model improve its results. This was confirmed through experimentation.
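
As a purely illustrative picture of the resulting training corpus (the field names below are assumptions, not AI2’s schema), the two kinds of pairs can be thought of as one mixed list of source/target examples:

```python
# Illustrative layout of the fine-tuning data described above. Field names and
# contents are placeholders, not AI2's actual schema or data.
scitldr_pairs = [
    {"source": "Full text of a scientific paper ...",
     "target": "A one-sentence tl;dr written by an author or hired annotator."},
]  # roughly 5,400 such pairs in SciTldr

title_pairs = [
    {"source": "Full text of another scientific paper ...",
     "target": "The paper's title, treated as a very short summary."},
]  # roughly 20,000 such pairs in the supplementary dataset

# Both sets are mixed into one corpus for fine-tuning the summarization model.
training_data = scitldr_pairs + title_pairs
```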

[Image: The tl;dr feature is particularly useful for skimming papers on mobile. Credit: AI2]

Extreme summarization: While many other research efforts have tackled the task of summarization, this one stands out for the level of compression it can achieve. The scientific papers included in the SciTldr dataset average 5,000 words. Their one-sentence summaries average 21. This means each paper is compressed, on average, to roughly 1/238th of its original length (5,000 ÷ 21 ≈ 238). The next best abstractive method is trained to compress scientific papers by an average factor of only 36.5. During testing, human reviewers also judged the model’s summaries to be more informative and accurate than those of previous methods.

Next steps: There are already a number of ways that AI2 is now working to improve their model in the short term, says Daniel Weld, a professor at the University of Washington and manager of the Semantic Scholar research group. For one, they plan to train the model to handle more than just computer science papers. For another, perhaps in part due to the training process, they’ve found that the tl;dr summaries sometimes overlap too much with the paper title, diminishing their overall utility. They plan to update the model’s training process to penalize such overlap so it learns to avoid repetition over time.

In the long term, the team will also work on summarizing multiple documents at a time, which could be useful for researchers entering a new field or perhaps even for policymakers wanting to get up to speed quickly. “What we’re really excited to do is create personalized research briefings,” Weld says, “where we can summarize not just one paper, but a set of six recent advances in a particular sub-area.”

Read more

Ready to run LinkedIn ads but don’t know where to start? Worried the costs are too high? In this article, you’ll learn how to properly set up LinkedIn’s most cost-effective ad type: LinkedIn text ads. You’ll find tips for targeting, bidding on cost per click (CPC), and more. To learn how to set up LinkedIn […]

The post How to Use LinkedIn Text Ads: The Budget-Friendly Option appeared first on Social Media Examiner | Social Media Marketing.

Read more