Ice Lounge Media

Provizio, a combination hardware and software startup with technology to improve car safety, has closed a seed investment round of $6.2 million. Investors include Bobby Hambrick, the founder of Autonomous Stuff; the founders of Movidius; the European Innovation Council (EIC); and ACT Venture Capital.

The startup has a “five-dimensional” sensory platform that — it says — perceives, predicts and prevents car accidents in real time and beyond the line-of-sight. Its “Accident Prevention Technology Platform” combines proprietary vision sensors, machine learning and radar with ultra-long range and foresight capabilities to prevent collisions at high speed and in all weather conditions, says the company. The Provizio team is made up of experts in robotics, AI and vision and radar sensor development.

Barry Lunn, CEO of Provizio, said: “One point three five million road deaths to zero drives everything we do at Provizio. We have put together an incredible team that is growing daily. AI is the future of automotive accident prevention and Provizio 5D radars with AI on-the-edge are the first step towards that goal.”

Also involved in Provizio are Dr. Scott Thayer and Prof. Jeff Mishler, formerly of Carnegie Mellon’s robotics program, known for developing early autonomous technologies for Google/Waymo, Argo, Aurora and Uber.

Read more

Though the counts aren’t finished and the legal challenges could drag on for weeks, Joe Biden’s victory in the US presidential election is looking increasingly likely. If he does triumph, it will also be a win for action on climate change. But his ability to push through any sweeping legislation will be seriously constrained if, as appears likely, Republicans retain control of the Senate.

This outcome is far from the landslide repudiation of President Donald Trump’s assaults on environmental policy, science, and pluralism that climate activists had fervently hoped for. Climate change did appear to be a motivating issue in certain regions and races, and a concern for a solid majority of voters. But polling found that the economy, health care, and the coronavirus outbreak were far more important issues to voters than climate change, where they remain sharply divided along partisan lines.

“The potential for Biden to do something big on climate feels, to me, pretty small,” says David Keith, a professor of public policy at the Harvard Kennedy School. “The reality is there will be a lot of other priorities for an early Biden administration … and you’re sitting on a pretty weak mandate.”

Republicans and Democrats each held 48 seats in the Senate as of Friday afternoon, but Senator Susan Collins’ win in Maine has tilted the odds toward the Republicans hanging onto control of the chamber. To split the Senate evenly, Democratic contenders now need to win two contentious races in the swing state of Georgia, both of which could end up in runoff contests in January. (A 50-50 Senate split would give the edge to Democrats if Biden wins, as his vice president, Kamala Harris, would be called upon to break tied votes.)

Many observers never believed Biden had high odds of passing every part of his proposal to pour nearly $2 trillion of federal funds into climate efforts, an ambitious policy package clearly shaped by progressive support for the Green New Deal. But without Democratic control of the Senate, it will be difficult to pass any major climate laws. And the sorts of bold steps necessary to get the nation on track to eliminate emissions from the power sector by 2035 and achieve net-zero emissions economy-wide by 2050, the central goals of Biden’s proposals, may well be out of reach.

A Biden administration could still make some progress on climate change. Much of it, however, would have to occur through executive actions and within federal agencies, as was largely the case under President Barack Obama. These moves would have a harder time surviving legal challenges under a Supreme Court that’s just become more conservative, with Amy Coney Barrett replacing the late Ruth Bader Ginsburg.

Biden has pledged to sign a series of executive orders on his first day in office, including measures that would impose methane pollution limits on oil and gas operations; push through higher vehicle fuel economy standards under the Clean Air Act; and spend hundreds of billions of federal government dollars on zero-emissions vehicles and clean energy resources.

He could also work to reverse Trump’s dozens of efforts to roll back previous climate and environmental policies by asking courts to halt pending litigation or by rescinding and replacing the administration’s rules, as Jody Freeman, a Harvard law professor and counselor on climate issues in the Obama White House, explained in a Twitter thread.

Restoring—or ending legal challenges against—regulations like the Clean Air Act, the Clean Power Plan, and the ability of states like California to set their own vehicle emissions standards could prevent the release of billions of tons of greenhouse gases, according to previous estimates of the impact of Trump’s policies.

Executive actions are “not necessarily the most durable form of policy, as we learned under the Trump administration,” says Kelly Sims Gallagher, a professor of energy and environmental policy at the Fletcher School at Tufts University. “But for sure it works.”

A Biden administration would also be likely to quickly remove the roster of climate deniers, fossil-fuel lobbyists, and oil executives that Trump placed in positions of power throughout federal agencies; end the suppression of scientific reports; and restore the federal government’s reliance on scientists and other experts to make critical decisions on climate change (and other crucial issues like the covid-19 pandemic).

But there could still be opportunities to make some longer-lasting progress on climate by passing new laws, observers say.

Notably, there’s broad support for an economic stimulus package amid the pandemic-driven downturn. Such a bill could include significant research and development funding for areas like next-generation nuclear power and carbon capture, removal, and storage technologies, says Josh Freed, who leads the climate and energy program at Third Way, a center-left think tank in Washington, DC. It could also include job training programs for renewables and other clean energy sectors. The Obama administration used economic stimulus in the wake of the 2008-09 recession to direct some $90 billion of federal investment into green industries.

There’s also bipartisan appetite for an infrastructure bill, which could include investments in electricity transmission lines, offshore wind farms, shoreline protections, and other climate adaptation measures.

But Freed says just how much climate-related spending can be packed into these measures will depend on the level of cooperation from Mitch McConnell, the Kentucky Republican senator who won his reelection bid on Tuesday and who will likely remain majority leader. There’s also pressure to enact at least a first round of stimulus before the end of the year, under Trump, which would be unlikely to include significant climate funding.

Finally, on the international front, Biden committed to rejoining the Paris climate agreement, which the US officially exited on Wednesday. Stepping back into the international fold wouldn’t in itself create any new domestic climate policy. But it would require the US to submit a new set of commitments to cut emissions before the next UN Climate Change Conference in 2021, as well as a plan for slashing climate pollution by midcentury, Gallagher says.

Just as importantly, the US’s return to the Paris accord would strengthen the global alliance around climate issues and put more pressure on other nations to keep to or step up their commitments. But after Trump’s “America First” reign, during which he routinely upended critical trade and military alliances, it also may simply take time to restore some of the nation’s credibility.

“It may be that the US will need to play less of an agenda-setting role than it has in the past until it’s accumulated enough trust,” Gallagher says.

Read more

Nearly 4,300 exoplanets have been discovered by astronomers, and it’s quite obvious now our galaxy is filled with them. But the point of looking for these new worlds is more than just an exercise in stamp collecting—it’s to find one that could be home to life, be it future humans who have found a way to travel those distances or extraterrestrial life that’s made a home for itself already. The best opportunity to find something like that is to find a planet that resembles Earth.

And what better way to look for Earth 2.0 than to search around stars similar to the sun? A new analysis of exoplanet data collected by NASA’s Kepler space telescope, which operated from 2009 to 2018, has come up with new predictions for how many stars in the Milky Way galaxy comparable to the sun in temperature and age are likely to be orbited by a rocky, potentially habitable planet like Earth. When applied to current estimates of 4.1 billion sun-like stars in the galaxy, the model suggests there are at minimum 300 million with at least one habitable planet.

The model’s average, however, posits that one in two sun-like stars could have a habitable planet, causing that figure to swell to over 2 billion. Even less conservative predictions suggest it could be over 3.6 billion.

The new study has not yet been peer-reviewed, but it will be soon, and it is due to be published in the Astronomical Journal.

“This appears to be a very careful study and deals with really thorny issues about extrapolating from the Kepler catalogue,” says Adam Frank, a physicist and astronomer at the University of Rochester, who was not involved with the study. “The goal is to get a complete, reliable, and accurate estimate for the average number of potentially habitable planets around stars. They seem to have made a good run at that.”

Scientists have made several attempts in the past to use Kepler data to work out how many sun-like stars in the galaxy have potentially habitable exoplanets in their orbit. But these studies have provided answers that ranged from less than 1% to more than 100% (i.e., multiple planets around these stars). It’s a reflection of how hard it’s been to work with this data, says Steve Bryson of NASA Ames Research Center in California, who led the new work.

Two major issues have created this large window: incomplete data, and the need to cull false detections from the Kepler data set.

The new study addresses both of these problems. It’s the first of its kind to use the full Kepler exoplanet data set (more than 4,000 detections from 150,000 stars), but it also uses stellar data from Gaia, the European Space Agency’s mission to map every star in the Milky Way. All of that helped make the final estimates more accurate, with smaller uncertainties. And this comes after scientists spent years analyzing the Kepler catalogue to strip away obscuring elements and ensure that only real exoplanets are left. Armed with both Kepler and Gaia data, Bryson and his team were able to determine the rate of formation of sun-like stars in the galaxy, the number of stars likely to have rocky planets (with radii 0.5 to 1.5 times Earth’s), and the likelihood those planets would be habitable.

On average, Bryson and his team predict, 37 to 60% of sun-like stars in the Milky Way should be home to at least one potentially habitable planet. Optimistically, the figure could be as high as 88%. The conservative calculations pull this figure down to 7% of sun-like stars in the galaxy (hence 300 million)—and on the basis of that number, the team predicts there are four sun-like stars with habitable planets within 30 light-years of Earth. 
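
The headline numbers are simple fractions of the same 4.1-billion-star estimate quoted above. Here is a minimal back-of-the-envelope sketch (my own arithmetic for illustration, not the paper's statistical analysis):

```python
# Back-of-the-envelope sketch of the headline figures quoted in the article.
# The fractions and the 4.1-billion star count come from the text above;
# this is illustrative arithmetic, not the study's actual methodology.

SUN_LIKE_STARS = 4.1e9  # estimated sun-like stars in the Milky Way

scenarios = {
    "conservative (7%)": 0.07,
    "average, low (37%)": 0.37,
    "average, high (60%)": 0.60,
    "optimistic (88%)": 0.88,
}

for label, fraction in scenarios.items():
    stars_with_habitable_planet = SUN_LIKE_STARS * fraction
    print(f"{label}: ~{stars_with_habitable_planet / 1e9:.2f} billion stars")

# conservative (7%): ~0.29 billion  -> the "at minimum 300 million" figure
# optimistic (88%):  ~3.61 billion  -> the "over 3.6 billion" figure
```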

“One of the original goals of the Kepler mission was to compute exactly this number,” says Bryson. “We have always intended to do this.” 

Habitability has to do with the chances a planet has temperatures moderate enough for liquid water to exist on the surface (since water is essential for life as we know it). Most studies figure this out by gauging the distance of an exoplanet from its host star and whether its orbit is not too close and not too far—the so-called Goldilocks zone.

According to Bryson, orbital distance is a useful metric when you’re examining one specific star. But when you’re looking at many stars, they’ll all exhibit different brightnesses that deliver different amounts of heat to surrounding objects, which means their habitable zones will vary. The team instead chose to think about habitability in terms of the amount of light hitting the surface of an exoplanet, which the paper calls the “instellation flux.”

Through stellar brightness data, “we are measuring the true temperature of the planet—whether or not it is truly in the habitable zone—for all the planets around all the stars in our sample,” says Bryson. You don’t get the same sort of reliable temperature figures working with distances, he says. 
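
To make the idea concrete, here is a minimal sketch of an instellation-flux check, assuming only the standard inverse-square falloff of starlight; the habitable window used below is an illustrative assumption of mine, not the boundary adopted in the study.

```python
# Minimal sketch: compare the starlight flux a planet receives with Earth's.
# Flux relative to Earth = (L_star / L_sun) / (d / 1 AU)^2.
# The habitable-flux window below is an illustrative assumption, not the
# threshold used in the paper.

def relative_instellation(luminosity_in_suns: float, distance_in_au: float) -> float:
    """Stellar flux at the planet, in units of what Earth receives from the sun."""
    return luminosity_in_suns / distance_in_au ** 2

def roughly_habitable(flux_rel_earth: float,
                      low: float = 0.3, high: float = 1.5) -> bool:
    """Crude check: is the flux within an assumed habitable window?"""
    return low <= flux_rel_earth <= high

# Earth as a sanity check: a 1-solar-luminosity star at 1 AU gives flux 1.0.
earth_flux = relative_instellation(1.0, 1.0)
print(earth_flux, roughly_habitable(earth_flux))        # 1.0 True

# The same star at 0.5 AU delivers four times the flux (inverse-square law).
close_in_flux = relative_instellation(1.0, 0.5)
print(close_in_flux, roughly_habitable(close_in_flux))  # 4.0 False
```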

Though Bryson claims this study’s uncertainties are smaller than those in previous efforts, they are still quite large. This is mainly because the team is working with such a small sample of discovered rocky exoplanets. Kepler has identified over 2,800 exoplanets, only some of which orbit sun-like stars. It’s not an ideal number to use to predict the existence of hundreds of millions of others in the galaxy. “By having so few observations, it limits what you can say about what the truth is,” says Bryson.

Lastly, the new study assumes a simple model for these exoplanets that could depart dramatically from conditions in the real world (some of these stars may form  binary star systems with other stars, for example). Plugging more variables into the model would help paint a more accurate picture, but that requires more precise data that we don’t really have yet. 

But it’s studies like these that could help us acquire that data. The whole point of Kepler was to help scientists figure out what kinds of interstellar objects they ought to devote more resources to studying to find extraterrestrial life, especially with space-based telescopes whose observation time is limited. These are the instruments (such as NASA’s James Webb Space Telescope and the ESA’s PLATO telescope) that could determine whether a potentially habitable exoplanet has an atmosphere or is home to any potential biosignatures, and studies like this latest one can help engineers design telescopes more suited to these tasks. 

“Almost every sun-like star in the galaxy has a planet where life could form,” says Frank. “Humanity has been asking this question for more than 2,500 years, and now we not only know the answer, we are refining our knowledge of that answer. This paper tells us there are a lot of planets out there in the right place for life to form.”

Read more

Back in 2016, I could count on one hand the kinds of interventions that technology companies were willing to use to rid their platforms of misinformation, hate speech, and harassment. Over the years, crude mechanisms like blocking content and banning accounts have morphed into a more complex set of tools, including quarantining topics, removing posts from search, barring recommendations, and down-ranking posts in priority. 

And yet, even with more options at their disposal, misinformation remains a serious problem. There was a great deal of coverage about misinformation on Election Day—my colleague Emily Dreyfuss found, for example, that when Twitter tried to deal with content using the hashtag #BidenCrimeFamily, with tactics including “de-indexing” by blocking search results, users including Donald Trump adapted by using variants of the same tag. But we still don’t know much about how Twitter decided to do those things in the first place, or how it weighs and learns from the ways users react to moderation.

As social media companies suspended accounts and labeled and deleted posts, many researchers, civil society organizations, and journalists scrambled to understand their decisions. The lack of transparency about those decisions and processes means that—for many—the election results end up with an asterisk this year, just as they did in 2016.

What actions did these companies take? How do their moderation teams work? What is the process for making decisions? Over the last few years, platform companies put together large task forces dedicated to removing election misinformation and labeling early declarations of victory. Sarah Roberts, a professor at UCLA, has written about the invisible labor of platform content moderators as a shadow industry, a labyrinth of contractors and complex rules which the public knows little about. Why don’t we know more? 

In the post-election fog, social media has become the terrain for a low-grade war on our cognitive security, with misinformation campaigns and conspiracy theories proliferating. When the broadcast news business served the role of information gatekeeper, it was saddled with public interest obligations such as sharing timely, local, and relevant information. Social media companies have inherited a similar position in society, but they have not taken on those same responsibilities. This situation has loaded the cannons for claims of bias and censorship in how they moderated election-related content.  

Bearing the costs

In October, I joined a panel of experts on misinformation, conspiracy, and infodemics for the House Permanent Select Committee on Intelligence. I was flanked by Cindy Otis, an ex-CIA analyst; Nina Jankowicz, a disinformation fellow at the Wilson Center; and Melanie Smith, head of analysis at Graphika. 

As I prepared my testimony, Facebook was struggling to cope with QAnon, a militarized social movement being monitored by its dangerous-organizations department and condemned by the House in a bipartisan bill. My team has been investigating QAnon for years. This conspiracy theory has become a favored topic among misinformation researchers because of all the ways it has remained extensible, adaptable, and resilient in the face of platform companies’ efforts to quarantine and remove it. 

QAnon has also become an issue for Congress, because it’s no longer about people participating in a strange online game: it has touched down like a tornado in the lives of politicians, who are now the targets of harassment campaigns that cross over from the fever dreams of conspiracists to violence. Moreover, it’s happened quickly and in new ways. Conspiracy theories usually take years to spread through society, with the promotion of key political, media, and religious figures. Social media has sped this process through ever-growing forms of content delivery. QAnon followers don’t just comment on breaking news; they bend it to their bidding.

I focused my testimony on the many unnamed harms caused by the inability of social media companies to prevent misinformation from saturating their services. Journalists, public health and medical professionals, civil society leaders, and city administrators, along with law enforcement and election officials, are bearing the cost of misinformation-at-scale and the burden of addressing its effects. Many people tiptoe around political issues when chatting with friends and family, but as misinformation about protests began to mobilize white vigilantes and medical misinformation led people to downplay the pandemic, different professional sectors took on important new roles as advocates for truth.

Take public health and medical professionals, who have had to develop resources for mitigating medical misinformation about covid-19. Doctors are attempting to become online influencers in order to correct bogus advice and false claims of miracle cures—taking time away from delivering care or developing treatments. Many newsrooms, meanwhile, adapted to the normalization of misinformation on social media by developing a “misinformation beat”—debunking conspiracy theories or fake news claims that might affect their readers. But those resources would be much better spent on sustaining journalism rather than essentially acting as third-party content moderators. 

Civil society organizations, too, have been forced to spend resources on monitoring misinformation and protecting their base from targeted campaigns. Racialized disinformation is a seasoned tactic of domestic and foreign influence operations: campaigns either impersonate communities of color or use racism to boost polarization on wedge issues. Brandi Collins-Dexter testified about these issues at a congressional hearing in June, highlighting how tech companies hide behind calls to protect free speech at all costs without doing enough to protect Black communities targeted daily on social media with medical misinformation, hate speech, incitement, and harassment. 

Election officials, law enforcement personnel, and first responders are at a serious disadvantage attempting to do their jobs while rumors and conspiracy theories spread online. Right now, law enforcement is preparing for violence at polling places. 

A pathway to improve

When misinformation spreads from the digital to the physical world, it can redirect public resources and threaten people’s safety. This is why social media companies must take the issue as seriously as they take their desire to profit. 

But they need a pathway to improve. Section 230 of the Communications Decency Act empowers social media companies to improve content moderation, but politicians have threatened to remove these protections so they can continue with their own propaganda campaigns. All throughout the October hearing, the specter loomed of a new agency that could independently audit civil rights violations, examine issues of data privacy, and assess the market externalities of this industry on other sectors. 

As I argued during the hearing, the enormous reach of social media across the globe means it is important that regulation not begin with dismantling Section 230 until a new policy is in place. 

Until then, we need more transparency. Misinformation is not solely about the facts; it’s about who gets to say what the facts are. Fair content moderation decisions are key to public accountability. 

Rather than hold on to technostalgia for a time when it wasn’t this bad, sometimes it is worth asking what it would take to uninvent social media, so that we can chart a course for the web we want—a web that promotes democracy, knowledge, care, and equity. Otherwise, every unexplained decision by tech companies about access to information potentially becomes fodder for conspiracists and, even worse, the foundation for overreaching governmental policy.

Read more

You’ve probably heard us say this countless times: GPT-3, the gargantuan AI that spews uncannily human-like language, is a marvel. It’s also largely a mirage. You can tell with a simple trick: Ask it the color of sheep, and it will suggest “black” as often as “white”—reflecting the phrase “black sheep” in our vernacular.

That’s the problem with language models: because they’re only trained on text, they lack common sense. Now researchers from the University of North Carolina, Chapel Hill, have designed a new technique to change that. They call it “vokenization,” and it gives language models like GPT-3 the ability to “see.”

It’s not the first time people have sought to combine language models with computer vision. This is actually a rapidly growing area of AI research. The idea is that both types of AI have different strengths. Language models like GPT-3 are trained through unsupervised learning, which requires no manual data labeling, making them easy to scale. Image models like object recognition systems, by contrast, learn more directly from reality. In other words, their understanding doesn’t rely on the kind of abstraction of the world that text provides. They can “see” from pictures of sheep that they are in fact white.

AI models that can parse both language and visual input also have very practical uses. If we want to build robotic assistants, for example, they need computer vision to navigate the world and language to communicate about it to humans.

But combining both types of AI is easier said than done. It isn’t as simple as stapling together an existing language model with an existing object recognition system. It requires training a new model from scratch with a data set that includes text and images, otherwise known as a visual-language data set.

The most common approach for curating such a data set is to compile a collection of images with descriptive captions. A picture of an orange cat sitting in an open suitcase, for example, would be captioned “An orange cat sits in the suitcase ready to be packed.” This differs from typical image data sets, which would label the same picture with only one noun, like “cat.” A visual-language data set can therefore teach an AI model not just how to recognize objects but how they relate to and act on one another, using verbs and prepositions.

But you can see why this data curation process would take forever. This is why the visual-language data sets that exist are so puny. A popular text-only data set like English Wikipedia (which indeed includes nearly all the English-language Wikipedia entries) might contain nearly 3 billion words. A visual-language data set like Microsoft Common Objects in Context, or MS COCO, contains only 7 million. It’s simply not enough data to train an AI model for anything useful.

“Vokenization” gets around this problem, using unsupervised learning methods to scale the tiny amount of data in MS COCO to the size of English Wikipedia. The resultant visual-language model outperforms state-of-the-art models in some of the hardest tests used to evaluate AI language comprehension today.

“You don’t beat state of the art on these tests by just trying a little bit,” says Thomas Wolf, the cofounder and chief science officer of the natural-language processing startup Hugging Face, who was not part of the research. “This is not a toy test. This is why this is super exciting.”

From tokens to vokens

Let’s first sort out some terminology. What on earth is a “voken”?

In AI speak, the words that are used to train language models are known as tokens. So the UNC researchers decided to call the image associated with each token in their visual-language model a voken. Vokenizer is what they call the algorithm that finds vokens for each token, and vokenization is what they call the whole process.

The point of this isn’t just to show how much AI researchers love making up words. (They really do.) It also helps break down the basic idea behind vokenization. Instead of starting with an image data set and manually writing sentences to serve as captions—a very slow process—the UNC researchers started with a language data set and used unsupervised learning to match each word with a relevant image (more on this later). This is a highly scalable process.

The unsupervised learning technique, here, is ultimately the contribution of the paper. How do you actually find a relevant image for each word?

Vokenization

Let’s go back for a moment to GPT-3. GPT-3 is part of a family of language models known as transformers, which represented a major breakthrough in applying unsupervised learning to natural-language processing when the first one was introduced in 2017. Transformers learn the patterns of human language by observing how words are used in context and then creating a mathematical representation of each word, known as a “word embedding,” based on that context. The embedding for the word “cat” might show, for example, that it is frequently used around the words “meow” and “orange” but less often around the words “bark” or “blue.”

This is how transformers approximate the meanings of words, and how GPT-3 can write such human-like sentences. It relies in part on these embeddings to tell it how to assemble words into sentences, and sentences into paragraphs.
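
As a toy illustration of that idea of meaning-from-context (using plain co-occurrence counts rather than an actual transformer), you can build crude context vectors from a handful of made-up sentences and see that “cat” ends up measurably closer to “meow” than to “bark”:

```python
# Toy illustration of context-based embeddings (co-occurrence counts, not a
# transformer): words that appear in similar contexts get similar vectors,
# so "cat" lands nearer "meow" than "bark".
from collections import Counter, defaultdict
from math import sqrt

sentences = [
    "the orange cat meow meow".split(),
    "the cat sat and meow".split(),
    "the blue dog bark bark".split(),
    "the dog ran and bark".split(),
]

# For each word, count which other words appear in the same sentence.
context_counts = defaultdict(Counter)
for sentence in sentences:
    for word in sentence:
        for other in sentence:
            if other != word:
                context_counts[word][other] += 1

def cosine(word_a: str, word_b: str) -> float:
    a, b = context_counts[word_a], context_counts[word_b]
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine("cat", "meow"))  # relatively high: many shared contexts
print(cosine("cat", "bark"))  # lower: few shared contexts
```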

There’s a parallel technique that can also be used for images. Instead of scanning text for word usage patterns, it scans images for visual patterns. It tabulates how often a cat, say, appears on a bed versus on a tree, and creates a “cat” embedding with this contextual information.

The insight of the UNC researchers was that they should use both embedding techniques on MS COCO. They converted the images into visual embeddings and the captions into word embeddings. What’s really neat about these embeddings is that they can then be graphed in a three-dimensional space, and you can literally see how they are related to one another. Visual embeddings that are closely related to word embeddings will appear closer in the graph. In other words, the visual cat embedding should (in theory) overlap with the text-based cat embedding. Pretty cool.

You can see where this is going. Once the embeddings are all graphed and compared and related to one another, it’s easy to start matching images (vokens) with words (tokens). And remember, because the images and words are matched based on their embeddings, they’re also matched based on context. This is useful when one word can have totally different meanings. The technique successfully handles that by finding different vokens for each instance of the word.

For example:

Here is her contact.
Some cats love human contact.

The token is the word “contact” in both examples. But in the first sentence, context suggests that the word refers to contact information, so the voken is the contact icon. In the second sentence, the context suggests the word refers to touch, so the voken shows a cat being stroked.
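
Here is a minimal sketch of that matching step, with made-up vectors standing in for the contextual token embeddings and image embeddings; it illustrates nearest-neighbor matching in a shared space, not the UNC team’s actual code.

```python
# Illustrative sketch of token->voken matching by nearest neighbour in a
# shared embedding space. The vectors and file names below are invented;
# in the real system they would come from trained language and image encoders.
import numpy as np

# Pretend image embeddings for a tiny "voken" inventory.
voken_embeddings = {
    "contact_icon.png": np.array([0.9, 0.1, 0.0]),
    "cat_being_petted.png": np.array([0.1, 0.9, 0.2]),
}

def nearest_voken(token_embedding: np.ndarray) -> str:
    """Return the image whose embedding has the highest cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(voken_embeddings, key=lambda name: cos(token_embedding, voken_embeddings[name]))

# Pretend contextual embeddings for the token "contact" in the two sentences.
contact_in_phonebook_sentence = np.array([0.8, 0.2, 0.1])  # "Here is her contact."
contact_in_cat_sentence = np.array([0.2, 0.8, 0.1])        # "Some cats love human contact."

print(nearest_voken(contact_in_phonebook_sentence))  # contact_icon.png
print(nearest_voken(contact_in_cat_sentence))        # cat_being_petted.png
```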

The researchers used the visual and word embeddings they created with MS COCO to train their vokenizer algorithm. Once trained, the vokenizer was then able to find vokens for the tokens in English Wikipedia. It’s not perfect. The algorithm only found vokens for roughly 40% of the tokens. But that’s still 40% of a data set with nearly 3 billion words.

With this new data set, the researchers retrained a language model known as BERT, an open-source transformer developed by Google that predates GPT-3. They then tested the new and improved BERT on six different language comprehension tests, including SQuAD, the Stanford Question Answering Dataset, which asks models to answer reading comprehension questions about a series of articles, and SWAG, which tries to trip up models with subtleties of the English language to probe whether it’s merely mimicking and memorizing. The improved BERT performed better on all of them, which Wolf says is nothing to sneeze at.
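
One way to picture the retraining is as a second prediction head added on top of the language model: alongside the usual masked-word objective, the model also predicts the voken assigned to each token, and the two losses are summed. The sketch below, written in PyTorch with random tensors and made-up sizes, shows the shape of that combined objective rather than the authors’ actual training code.

```python
# Schematic of visually supervised language-model training: a masked-LM loss
# plus an auxiliary voken-classification loss over the same hidden states.
# All dimensions and tensors are made up; this is a shape-level sketch only.
import torch
import torch.nn as nn

vocab_size, num_vokens, hidden_dim = 30_000, 50_000, 768
batch, seq_len = 2, 64

# Stand-in for the transformer encoder's per-token hidden states.
hidden_states = torch.randn(batch, seq_len, hidden_dim)

mlm_head = nn.Linear(hidden_dim, vocab_size)    # predicts the masked word
voken_head = nn.Linear(hidden_dim, num_vokens)  # predicts the token's voken

# Fake targets: a word id and a voken id for every position.
word_targets = torch.randint(0, vocab_size, (batch, seq_len))
voken_targets = torch.randint(0, num_vokens, (batch, seq_len))

loss_fn = nn.CrossEntropyLoss()
mlm_loss = loss_fn(mlm_head(hidden_states).view(-1, vocab_size), word_targets.view(-1))
voken_loss = loss_fn(voken_head(hidden_states).view(-1, num_vokens), voken_targets.view(-1))

total_loss = mlm_loss + voken_loss  # both objectives are optimized together
total_loss.backward()
```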

The researchers, PhD student Hao Tan and his advisor, Mohit Bansal, will present their new vokenization technique in two weeks at the Conference on Empirical Methods in Natural Language Processing. While the work is still early, Wolf sees it as an important conceptual breakthrough in getting unsupervised learning to work for visual-language models. It was a similar spark that helped dramatically advance natural-language processing back in the day.

“In NLP, we had this huge breakthrough over two years ago, and then suddenly NLP was a field where a lot of things were happening and it kind of got ahead of all the other AI fields,” he says. “But we have this problem of connecting text with other things. So it’s like this robot that is only able to talk but cannot see, cannot hear.”

“This paper is one example where they managed to connect it to another modality and it works better,” he says. “You could imagine that maybe some of these techniques could be reused when you want to leverage this really powerful language model in a robot. Maybe you use the same thing to connect the robot’s senses to text.”

Read more

Want to create engaging content on Instagram? Looking for a content strategy to create conversion-focused content? To explore how to create Instagram content that attracts your ideal clients, I interview Alex Tooby on the Social Media Marketing podcast. Alex is an Instagram strategist who specializes in helping female entrepreneurs promote their businesses using Instagram. Her […]

The post Instagram Content Strategy: Creating Content That Draws Customers to You appeared first on Social Media Examiner | Social Media Marketing.

Read more