Ice Lounge Media

Over the past year, we’ve seen a rise in robotaxis and autonomous vehicle use. Companies such as Waymo, Cruise, and Baidu have all made strong headway as industry pioneers. In China specifically, 2020 headlines regularly featured major autonomous vehicle announcements, such as the public launch of Baidu Apollo robotaxi services in the cities of Beijing, Changsha, and Cangzhou.

But despite the increasing visibility of robotaxis and the public’s broader exposure to autonomous vehicle technology, many people remain hesitant about the safety of self-driving cars. Nearly three in four Americans say autonomous vehicle technology “is not ready for primetime,” according to a poll from Partners for Automated Vehicle Education (PAVE). Nearly half (48%) say they would never get in a taxi or ride-sharing vehicle that was self-driving.

Building trust between human passengers and self-driving cars is fundamental to the public’s use of autonomous vehicles. The current lack of trust may stem from high-profile controversies and accidents involving passengers and autonomous vehicles. Tesla’s FSD (Full Self-Driving) vehicles raised concerns because the technology was released without proper safety-focused communication while still requiring drivers to constantly monitor the road. Uber’s self-driving program drew controversy after a test drive ended in a fatality when the vehicle failed to recognize a pedestrian. The risks and mistrust of autonomous vehicles make safety the top priority for companies and their passengers.

As the leading autonomous driving player in China, Baidu has made significant progress building public trust in its autonomous vehicle technology. Part of its success stems from the continual development of industry-leading safety features, such as a state-of-the-art artificial intelligence (AI) system and 5G-enabled teleoperation. But just as important has been its deliberate focus on communicating the ways these technologies and best practices, such as information security and safety assessments, ensure safe travel by minimizing hazards and accidents.

So what role do these technologies play in improving the safety of self-driving cars? How did Baidu communicate around these features to quell consumer concerns? And what can other companies learn from Baidu’s journey to better develop widespread public trust in self-driving vehicles?

The screen shown to Apollo Go passengers visualizes how the vehicle sees its environment.

A reliable AI system

The Baidu Apollo fleet of nearly 500 autonomous driving vehicles is known for its reliability and track record. The self-driving cars have driven more than 7 million kilometers (4.35 million miles) with zero accidents and have safely carried more than 210,000 passengers. In 2019, Baidu’s autonomous vehicles drove 108,300 miles in California, with only six disengagements and no accidents or injuries. Similarly, Baidu’s 52 autonomous vehicles traveled 468,513 miles in Beijing in 2019 without incident.

Simulation tests and a well-devised verification mechanism are key to building a safe autonomous vehicle. A 2018 report calculates that demonstrating autonomous vehicle reliability through road testing alone would require 8.8 billion miles of driving. With a fleet of 100 vehicles test-driven 24 hours a day, 365 days a year, at an average speed of 25 miles per hour, that would take roughly 400 years.
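The scale of the 8.8-billion-mile figure is easy to verify with back-of-the-envelope arithmetic; the fleet size, duty cycle, and speed below are the report's stated assumptions:

```python
# Rough check of the 8.8-billion-mile reliability-testing estimate.
TARGET_MILES = 8.8e9       # miles of driving the report says are needed
FLEET_SIZE = 100           # vehicles driving around the clock
AVG_SPEED_MPH = 25         # assumed average speed
HOURS_PER_YEAR = 24 * 365

miles_per_year = FLEET_SIZE * AVG_SPEED_MPH * HOURS_PER_YEAR
years_needed = TARGET_MILES / miles_per_year

print(f"Fleet covers {miles_per_year:,} miles per year")
print(f"Reaching the target would take about {years_needed:.0f} years")
```

In other words, even a nonstop 100-car test fleet would need roughly four centuries, which is why large-scale simulation is the only practical path.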

Simulation makes large-scale testing of autonomous vehicles practically possible. Each iteration of the Baidu Apollo system is tested for millions of miles every day in a virtual environment that now accommodates over 10 million simulation scenarios. Baidu researchers also introduced a method to augment real-world pictures with a simulated traffic flow to create photorealistic simulation scenarios.

Before testing on public roads, the system undergoes a series of hierarchical verifications, progressing from a fully virtual environment to mixed reality to closed test areas. Each component of the self-driving system, including models, software, sensors, and the vehicles themselves, is fully examined.

Driverless backups: 5G-enabled teleoperation

The Baidu Apollo integrated AI system allows its vehicles to drive independently without a safety driver inside the vehicle. To ensure public safety in extreme road conditions, Baidu integrated 5G-enabled teleoperation into its vehicles.

The 5G Remote Driving Service, powered by 5G networks, smart transportation systems, and vehicle-to-everything technologies, provides immediate assistance from remote human operators during emergencies. The operators ensure the safety of passengers and pedestrians while the vehicle’s non-autonomous driving mode is in use.

All remote human operators have completed more than 1,000 hours of cloud-based driving training without any accidents. Since the autonomous driving system can handle most road conditions, human intervention is rare. Currently, Apollo’s self-driving cars in Changsha and Beijing are equipped with 5G-enabled teleoperation, which enables them to be tested on public roads without a safety driver.

Regardless of the efficacy of integrated AI systems, services such as Baidu’s 5G Remote Driving Service ensure that a person is always available to intervene.

Baidu co-founder and CEO Robin Li (center) introduces 5G Remote Driving Service at Baidu World 2020.

Information security

To advance safe mobility as part of its long-term vision for an autonomous driving ecosystem, Baidu launched an automotive cybersecurity lab, becoming the only Chinese company to do so.

The lab researches automotive cybersecurity technologies, trends, and solutions for in-vehicle systems, car-to-car communications, the controller area network, and sensors. It explores best security practices in data protection, in-vehicle infotainment, reference software, and hardware designs, and countermeasures to fake signals that misguide autonomous driving systems.

Vulnerabilities and potential cyberattacks are industrywide concerns. In 2019, Baidu joined automotive industry leaders to release a 157-page white paper outlining how to build, test, and operate a safe automated vehicle. Through its CTF (capture-the-flag) Challenge contest, Baidu encouraged developers and white hat hackers to design protection mechanisms for autonomous driving systems.

The interface view of Baidu’s first CTF Challenge for autonomous driving.

The lab, widespread information sharing, and strong industry collaboration are key components to maintaining public trust in autonomous vehicles. No system is completely impervious to cyberattacks, as was proven when researchers demonstrated the ability to remotely take control of a Jeep Cherokee (and technically most modern vehicles), controlling things like steering, brakes, and windshield wipers.

Baidu’s emphasis on industrywide security research and collaboration not only improves the security of these autonomous systems but also demonstrates that public safety and security are its top priorities when vulnerabilities are inevitably exposed.

Operation safety assessments

The final way Baidu constantly reassures the public that its autonomous vehicle systems are working effectively is through regular, in-depth safety assessments. Baidu Apollo has a full-time “safety assurance team” that aims to prevent robotaxi accidents. The team is responsible for operation-area safety assessments, operation platform management, safety driver training, and safety assurance mechanisms. Team members evaluate the external environment of the autonomous vehicles, including how to set up the road network and how to establish and expand operation sites.

Like any piece of equipment that comes with inherent safety risks, failures will happen, and safety systems can always be improved. By constantly reviewing and revising the safety measures it implements across its autonomous vehicle deployments, Baidu can catch issues before they result in an accident while providing ongoing reassurance to the public that it’s taking their safety seriously.

Building trust moving forward

Building trust between the public and autonomous vehicles is fundamental to their mass adoption. As an industry pioneer, Baidu has focused much of its innovation on developing and demonstrating technologies and best practices that can minimize risk. Through these initiatives, Baidu has developed a strong foundation of public trust in China that other companies can build on moving forward.

The key is that technological innovation is only half the battle. Reliable safety features like AI systems, teleoperation options, and strong information security are critical. But communicating how those innovations not only improve safety but also compare to the everyday risks we accept with manual driving is perhaps even more important. This is the model for shifting public sentiment and trust in self-driving vehicles.

This content was produced by Baidu. It was not written by MIT Technology Review’s editorial staff.

On November 3, Tina Barton ran into a problem. It was Election Day in the US and Barton, a Republican, was city clerk for Rochester Hills, Michigan, a conservative-leaning community near Detroit. As her team was uploading voting results, a technical issue resulted in the double counting of some votes. The error wasn’t initially realized, but within 24 hours, it was noticed and reported to Oakland County officials. The voting data was quickly fixed, but by that time the entire country was looking at the state’s election results. 

The change was very public, and it generated a huge swell of misinformation. This was supercharged on November 6, when Ronna McDaniel, the chair of the Republican National Committee, flew to Oakland County and held a press conference. She claimed that 2,000 ballots had been counted as Republican before being “given” to Democrats, an accusation of election fraud.

“If we are going to come out of this and say this was a fair and free election, what we are hearing from the city of Detroit is deeply troubling,” McDaniel said.

Upset at how the situation was being misrepresented, Barton posted a video on Twitter refuting the claims. She’s been the Rochester Hills clerk for eight years, and when she spoke out against McDaniel, she knew she was putting her career on the line. In the video, which has since been deleted, Barton said, “I am disturbed that this is intentionally being mischaracterized to undermine the election process.” 

Her remarks went viral, and they were met with threats and anger. In an email to MIT Technology Review, Barton said that “since Ms. McDaniel’s press conference, I have received threatening voice mails and messages.” One caller claimed to be on the way to Michigan. Barton upgraded the security system of her home.

Targeting our natural fears

Data shows that during the election, disinformation was highly targeted locally, with voters in swing states exposed to significantly more online messages about voter intimidation, fraud, ballot glitches, and unrest than voters in other states. 

In a data set provided by Zignal Labs, we looked at mentions across social media of over 30 terms related to voter suppression or intimidation, fraud, technical errors, and unrest that focused on a particular polling location. Our sample of 16 states found that between October 1 and November 13, swing states had more than four times as many such mentions: a median of 115,200, compared with a median of 28,000 in non-swing states.

Here’s a chart showing how the volume of messages changed over the days leading up to and after the election.

Mentions relating to voter intimidation, fraud, technical glitches, and voter suppression at specific polling places

Bhaskar Chakravorti, dean of global business at Tufts University’s Fletcher School, conducts research on the conditions that leave a community particularly vulnerable to disinformation. He says that this local focus is typical of effective disinformation campaigns, which are usually pinned to a specific place and slice the target audience into its smallest, stereotyped parts. “Clever misinformation” is organized, he says, in the same way that political campaigning is.

Disinformation is “targeted at our natural or native hopes and fears, and hopes and fears vary depending on who I am,” he says. “It varies depending on how rich or poor I am. It varies depending on what my ethnicity or race is.” 

In some places, this localization was more visible than in others. In Florida, Latino voters were subjected to intense campaigns based on their age, heritage, or neighborhood profiles as both parties fought to win the state. As a result of being flooded with this material, says Chakravorti, voters grew distrustful of political information at large and turned to more private spaces for discourse—which were, in fact, ripe environments for localized disinformation that became particularly hard to confront. 

Two-pronged approach

These problems all came despite the fact that election officials were significantly better prepared for the challenges in 2020 than in the previous presidential election. Many secretaries of state conducted media blitzes intended to direct people to trusted sources of information for voting, while also battling specific rumors. 

Elizabeth Howard, senior counsel at the Brennan Center for Justice, describes it as a two-pronged approach. It involved “proactively educating voters about what’s going on,” she says, “and then, to varying degrees, election officials working to identify and combat mis- and disinformation at the local and hyper-local level.” 

Despite all their efforts, however, disinformation about polling still wreaked havoc—particularly for election officials like Tina Barton, who, says Howard, “are just doing their job in compliance with state law across the country.” 

Chakravorti says fighting this disinformation in the future may require the use of small-scale media campaigns, local influencers, and community-level ads that spread trusted content. But these tactics won’t fix the deeper structural issues that make a community vulnerable to disinformation. Chakravorti found, unsurprisingly, that some key indicators of vulnerability for US states include political competitiveness, education levels, polarization, and degree of trust in news sources. And none of those issues are new.

Lights out

In September 1993, the FBI sent a safety alert to the Chicago police department warning of a rumored “new and murderous initiation ritual” for the city’s most notorious street gang. The supposed ceremony required prospective members to drive at night with their headlights off to lure and kill unsuspecting drivers. The claim turned out to be false—but the rumor spread like wildfire. 

According to researchers who have studied the “lights out” urban legend, it flourished partly because the summer of 1993 was one of the worst stretches of crime Chicago had ever seen. Tensions were high, seeded by deep-rooted racial friction and political polarization. 

Disinformation—whether it’s gang folklore or rumors of election intimidation—is almost always most effective at a local level. It’s worse in polarized, closed environments. We’re most likely to believe things from our own circle. We still struggle to dispel the neighborhood rumor mill, and we certainly don’t know how to do it at scale. 

And while the struggle to fight disinformation continues, local officials like Tina Barton are under increasing pressure.

“These are things that take a huge personal toll on our election officials,” says the Brennan Center’s Howard. “These are big stakes for people. These are their neighbors. These are their friends.”

We’re now six weeks past Election Day, and electors in every state followed the will of the voters and confirmed the victory of Joe Biden. But while the Electoral College made the results official, President Donald Trump is continuing to protest them, despite having lost dozens of court cases within the past month. In any case, Congress is slated to complete the process of electing Biden on January 6.

President Trump’s attack on American elections accelerated a problem that already existed in the United States: the public doesn’t trust the vote. 

So how can we help more Americans believe in the most important function of our democracy? One of the states with the most contentious votes in 2020 might have something to tell us.

How it went down in Georgia

Georgia’s election was close. When it turned out that only about 12,000 votes separated Joe Biden from Donald Trump, the world turned its attention to the count there.

The state’s election processes have changed significantly in just the last year, including a switch to more secure paper ballots and a law requiring a post-election audit, which was then used to examine this year’s tight presidential race.

An audit is not a recount. Instead, it is a routine check of a portion of ballots, using statistical tests to root out anomalies. This is meant to increase everyone’s confidence that the outcome is correct. Georgia’s secretary of state, a Republican, ran the audit this year: it discovered and corrected a relatively small number of counting errors. That process was open and transparent, and the changes were too few to affect the results. In the end, it reaffirmed Joe Biden’s win in Georgia.

This was one of the most high-profile post-election audits in American history, but it won’t be the last. Election integrity and security experts are increasingly pushing to make risk-limiting audits (RLAs) a legal requirement for elections in all 50 states. In just the last few years, 11 states have passed laws requiring, allowing, or piloting risk-limiting audits. The idea is to build trust in systems that have become less transparent as they have become more mechanized.

“Machine counting is great, but should we trust them?” says Ben Adida, who runs the election security nonprofit VotingWorks. “The whole point of risk-limiting audits is that machines are great for speed, accuracy, and objectivity—but let’s audit them to make sure they don’t make mistakes and they haven’t been hacked.”

After a largely automated initial count, a risk-limiting audit takes a small number of ballots, which humans count and check against the initial outcome. The process has been put together and picked apart by statisticians, voting experts, election officials, and computer scientists over the last two decades, but it has only recently started to be used in significant elections. (Colorado was the first state to pass legislation about risk-limiting audits, in 2009; its first statewide RLA took place in 2017.)
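The sampling-and-comparison core of that process can be sketched in a few lines. This is an illustrative simplification with made-up names and toy data (`audit_sample`, the candidates, and the ballot counts are all hypothetical); a real RLA replaces the final comparison with a sequential statistical test, such as BRAVO, that keeps sampling until the chance of certifying a wrong outcome falls below the chosen risk limit:

```python
import random
from collections import Counter

def audit_sample(ballots, reported_winner, seed, sample_size):
    """Draw a random ballot sample and compare it with the reported outcome.

    ballots: one candidate name per cast ballot (the paper trail).
    seed: a public random seed, e.g. produced by a dice-rolling ceremony.
    Returns the hand-count tally of the sample and whether the sample's
    winner matches the reported winner.
    """
    rng = random.Random(seed)                  # reproducible given the seed
    sample = rng.sample(ballots, sample_size)  # sample without replacement
    tally = Counter(sample)
    sample_winner = tally.most_common(1)[0][0]
    return tally, sample_winner == reported_winner

# Toy election: 6,000 ballots for A, 4,000 for B; A is the reported winner.
ballots = ["A"] * 6_000 + ["B"] * 4_000
tally, agrees = audit_sample(ballots, "A", seed=20201103, sample_size=500)
```

Because the sample is drawn from a public seed, any observer can rerun the selection and confirm that officials pulled the right ballots.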

Adida’s nonprofit, launched in 2018, has built open-source and free software to help conduct these audits cheaply and quickly, in the hope of getting states to adopt RLAs more widely. VotingWorks helped Georgia run its audit this year, and Adida is hopeful the idea will spread.

Starting with a dice roll

In Colorado, the process starts with a big, weird public ceremony that anyone can attend: the state rolls a 10-sided die 20 times in order to create a random “seed” number that kicks off the audit. That seed determines which ballots will be checked against the results. The whole thing is done this way to try to build trust.
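Turning the ceremony into a ballot selection is straightforward to sketch. The function names and the dice values below are illustrative; hashing the seed with a draw counter is in the spirit of the SHA-256-based samplers used in real audits, so that anyone who knows the 20 public digits can reproduce exactly which ballots get pulled:

```python
import hashlib

def seed_from_rolls(rolls):
    """Concatenate twenty die rolls (digits 0-9) into the public audit seed."""
    assert len(rolls) == 20 and all(0 <= r <= 9 for r in rolls)
    return "".join(str(r) for r in rolls)

def select_ballots(seed, num_ballots, sample_size):
    """Pick ballot indices pseudorandomly from the public seed.

    Each draw hashes the seed plus a counter, so the selection is
    deterministic and independently verifiable by any observer.
    """
    chosen = []
    counter = 0
    while len(chosen) < sample_size:
        counter += 1
        digest = hashlib.sha256(f"{seed},{counter}".encode()).hexdigest()
        idx = int(digest, 16) % num_ballots
        if idx not in chosen:            # sample without replacement
            chosen.append(idx)
    return chosen

# Hypothetical ceremony: twenty rolls of a 10-sided die.
rolls = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4]
seed = seed_from_rolls(rolls)
ballots_to_pull = select_ballots(seed, num_ballots=10_000, sample_size=5)
```

The public seed is what makes the dice ceremony meaningful: officials cannot quietly cherry-pick which ballots to inspect, because the selection is fixed the moment the last die lands.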

“The fact that risk-limiting audits are public ceremonies that the public and press can attend is a very positive thing that will help voters perceive elections to be more trustworthy in addition to the election being more trustworthy in actuality,” Adida says. “We’re going to think about how we name it and talk about it. Even the word audit itself has a lot of negative connotations, and that’s understandable.”

So what happens now? With those goals in mind, the next four years could see as many as half of all states adopt risk-limiting audits. In the meantime, aside from the final stages of congressional confirmation, this particular election is over.

Use email in your marketing? Wondering how an email sequence can help turn people into customers? In this article, you’ll learn about an email sequence that creates genuine loyalty and grows sales. You’ll also discover how to recycle this email system to stay engaged with your audience over time. Finally, you’ll find out how to […]
