Bits: Farhad’s Week in Tech: Netflix Gets a Star, and Google’s Conflicted Ad Blocker

But as Netflix keeps driving trucks of money to TV’s top producers — it plans to spend $8 billion on new content this year — the script has flipped. More and more, it’s the people who haven’t cut the cord who are missing out. Today, if you subscribe just to regular TV and don’t bother with Netflix, Amazon, Hulu and other online services, you’re missing some of TV’s biggest and most acclaimed shows. Plus, you’re paying more.

Cable subscriptions have been declining for years, and last year the decline accelerated. But the YouTube expansion shows that cutting the cord keeps getting easier, and the Netflix deal shows how non-cable services keep getting more attractive. This suggests an even faster pace of decline for cable. The bottom is going to fall out of the market, probably very soon.

Do you want Google as your ad blocker?

Chrome, Google’s popular web browser, can now block ads. This seems weird because Google is the internet’s largest advertising company. So how does an advertising company design ad-blocking software? Very, very carefully, it turns out, in a process rife with conflicts.

Google argues that over all, the internet ad business has been great for the economy and society, but that a few bad apples are ruining it all. Google, then, wants to “maintain a balance,” because “if left unchecked, disruptive ads have the potential to derail the entire system,” the company said in a blog post this week. (In a recent column, I took a different view, arguing that the digital ad business is at the root of most of the internet’s problems.)

So Google’s new ad blocker is designed to block some ads, not all of them. The software will eliminate a dozen types of ads that the company deems intrusive or disruptive. But most ads, including most of Google’s, will skate through just fine.

But how does Google decide which ads are too disruptive? The company says it relied on input from an ad-industry group, the Coalition for Better Ads. But as The Wall Street Journal reported, the group’s membership and its research were heavily influenced by Google. This has led to recriminations in the industry, with rivals charging that Google is using the veil of self-regulation to diminish its competitors.

It’s an interesting fight, but it may also be an irrelevant one. What’s unclear is whether Google’s limited ad blocker will stem the rising popularity of more restrictive ad blockers. For many people who hate online ads, it could already be too late for half measures.

Which tech stories would you like to read?

The New York Times’s tech reporting team is gathering in San Francisco this week for a team-building offsite. I’m looking forward to forgetting to catch Mike Isaac in a trust fall. We’ll also be talking about some of the major themes we’re aiming to cover in the tech world this year — artificial intelligence, crypto and the responsibility of tech companies, for example.

But enough about us. What would you like to read more about? Which technologies, people and ideas in the tech world do you think deserve more scrutiny, investigation or wider notice?

Send your thoughts — but, please, not pitches for your company — to bits_newsletter@nytimes.com.

Farhad Manjoo writes a weekly technology column called State of the Art. You can follow him on Twitter here: @fmanjoo.


As China Marches Forward on A.I., the White House Is Silent

But six months after China seemed to mimic that Obama-era road map, A.I. experts in industry and academia in the United States say that the Trump White House has done little to follow through on the previous administration’s economic call to arms.


Doctors at the radiology department of Zhejiang Provincial People’s Hospital in China use an A.I. system to view CT scans as a way to diagnose lung cancers early on. Credit Yue Wu for The New York Times

“We are still waiting on the White House to provide some direction” on how to respond to the competition, said Tim Hwang, who worked on A.I. policy at Google and is now the director of the Ethics and Governance of AI Initiative, a new organization created by the LinkedIn founder Reid Hoffman and others to fund ethical research in artificial intelligence.

China’s embrace of A.I. comes at a crucial time in the development of the technology and just as the lead long enjoyed by the United States has started to dwindle.

For decades, artificial intelligence was more fiction than science. In the past few years, however, dramatic improvements have prompted some of the biggest companies in Silicon Valley and Detroit — and China — to invest billions on everything from self-driving cars to home appliances that can have a conversation with a human.

A.I. has also become a significant part of national defense policy as military leaders and ethicists debate how much autonomy we should give to weapons that can think for themselves.

American companies like Amazon and Google have done more than anyone to turn A.I. concepts into real products. But for a number of reasons, including concerns that the Trump administration will limit the number of immigrant engineers allowed into the United States, much of the critical research being done on artificial intelligence is already migrating to other countries, including tech hot spots like Toronto, London and Beijing.

To China’s growing tech community, driving the industry’s next big thing — a mantra of Silicon Valley — is becoming a tantalizing possibility.

“Thanks to the size of the market and the rapid experimentation, China is going to become one of the most powerful — if not the most powerful — A.I. countries in the world,” said Kai-Fu Lee, a former Microsoft and Google executive who now runs a prominent Chinese venture capital firm dedicated to artificial intelligence.

The 2016 A.I. reports were shepherded by President Barack Obama’s Office of Science and Technology Policy.

The O.S.T.P., which has overseen science and technology activities across the federal government for more than four decades, is now run by the deputy chief technology officer Michael Kratsios. He had worked as a Wall Street analyst before serving as chief of staff for an investment fund run by Peter Thiel, a venture capitalist who supported Mr. Trump’s presidential run. The administration has yet to name an office director or fill four other assistant posts.


Geoffrey Hinton, a computer scientist and leading expert in artificial intelligence, has helped make the University of Toronto a center of innovation in A.I. technology. Credit Aaron Vincent Elkaim for The New York Times

In a recent interview, Mr. Kratsios was adamant that any concerns over the administration’s approach to A.I. were unfounded.

“Artificial intelligence has been a priority for the Trump administration since Day 1,” he said. Mr. Kratsios added that the administration was particularly concerned with the development of A.I. in national security and as a way of encouraging economic prosperity.

Many staff members in Mr. Kratsios’s office are exploring issues related to artificial intelligence, he said. Mr. Kratsios also meets with a committee, set up by the Obama administration, that coordinates A.I. policy across the government.

“The key thing to remember is that the front line of A.I. policy is at the agencies,” he said. “The White House is a convener and a coordinator.”

In an echo of plans laid out by the Obama administration, China’s government said it intended to significantly increase long-term funding for A.I. research and develop a much larger community of A.I. researchers.

There are several ways to do that, according to the Obama administration and China. First, educate more students in these technologies. Second, recruit experts from other countries.

At the same time, both policy statements urged companies to share more technology and data. Huge pools of data are needed to “train” A.I. systems, and in the United States much of this is locked up inside companies like Facebook and Google. Mr. Lee said China already has an enormous advantage here because its large population will generate more data and its companies are more willing to share.

Artificial intelligence has been a focus of Chinese technologists for some time. By 2013, China was already producing more research papers than the United States in the area of “deep learning,” according to the Obama reports. Deep learning, which allows machines to learn tasks by analyzing vast amounts of data, is one of the main technologies driving the rise of artificial intelligence.
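
In simplified terms, a deep learning system starts with internal numbers, called weights, set at random, then repeatedly adjusts them as it works through example data until its answers match the examples. The short Python sketch below is a toy illustration of that loop, written for this explanation rather than taken from any system described in this article; it trains a tiny two-layer network to reproduce the XOR function from four labeled examples.

```python
# Toy illustration of the idea behind deep learning: a small neural network
# adjusts its internal weights by repeatedly analyzing labeled examples.
# This is a teaching sketch, not code from any system mentioned in the article.
import numpy as np

rng = np.random.default_rng(0)

# Four training examples of the XOR task and their correct answers.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of randomly initialized weights, plus bias terms.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10000):
    # Forward pass: compute the network's current guesses.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: measure the error and nudge every weight to shrink it.
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_out
    b2 -= learning_rate * grad_out.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_hid
    b1 -= learning_rate * grad_hid.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # should end up close to the targets 0, 1, 1, 0
```

Real systems of the kind discussed here follow the same basic recipe, only with millions of weights and vast numbers of examples, which is why access to large pools of data matters so much.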

It is unclear how much China as a whole is spending. But one Chinese province has promised to invest $5 billion in A.I., and the government of Beijing has committed $2 billion to an A.I. development park in the city. South Korea has set aside close to $1 billion of its own. Canada, already home to many of the top researchers in the field, has also committed $125 million to, in part, attract new talent from other countries.


Smart home security devices at the International CES show in Las Vegas. Credit Roger Kisby for The New York Times

It is also difficult to say just how much the government of the United States is spending. Government organizations like the Intelligence Advanced Research Projects Activity, the National Institute of Standards and Technology, and the National Science Foundation continue to fund new research in universities and the private sector. According to an O.S.T.P. report, the federal government spent about $1 billion a year in 2015. The Trump administration says that spending jumped to $3 billion in 2017. But the current administration said that was not an apples-to-apples comparison with the 2015 tally, because it was not certain how the Obama administration made its calculations.

“We may have a bunch of small initiatives inside the government that are doing good, but we don’t have a central national strategy,” said Jack Clark, a former journalist who now oversees policy efforts at OpenAI, the artificial intelligence lab co-founded by Elon Musk, Tesla’s chief executive. “It is confusing that we have this technology of such obvious power and merit and we are not hearing full-throated support, including financial support.”

The Trump administration’s budget for 2018 aims to cut science and technology research funding across the government by 15 percent, according to a report from the American Association for the Advancement of Science.

“They are headed in precisely the wrong direction,” said Thomas Kalil, who led the O.S.T.P.’s Technology and Innovation Division under President Obama. “That is particularly concerning given that China has identified this as a strategic priority.”

Over the past five years, much of the progress in A.I. technology has been led by American companies like Google, Microsoft, Amazon and Facebook. But these companies don’t need A.I. technologists to work in the United States in order to employ them.

Take Geoffrey Hinton, a major figure in the rise of A.I. at Google and across the tech industry. He recently moved back to Toronto, where he was a professor for many years. He now runs a new Google lab in that city. Last year, he took on an Iranian researcher who was denied a visa by the United States government.

Google operates another important lab in Montreal. Its London lab, DeepMind, may be home to more top-notch A.I. researchers than any other lab on earth. And Google recently unveiled new labs in both Paris and Beijing. Facebook, after creating its own lab in Canada, recently pumped 10 million euros, or more than $12 million, into its existing operation in Paris. And Amazon is opening a lab in Germany.

Inside these facilities, researchers still create technology for their American employers. As the labs grow and the products get better, some employees can be expected to leave to start their own companies and hire their own employees.

Google’s and Microsoft’s work in China has already led to Chinese start-ups like Malong, which is building image recognition systems, and a major A.I. investment fund run by Mr. Lee.

“When it is close to you, something like Microsoft Research has real economic value,” said Mr. Clark, of OpenAI.


Why Google’s Bosses Became ‘Unpumped’ About Uber

It’s something Google understands well. Years ago, Google was tightly aligned with Apple. But the relationship soured as Google started developing smartphone software. Steve Jobs, who was Apple’s chief executive, rightly believed that Google’s Android would become a competitive threat to the iPhone.

When Google initially invested in Uber in 2013, Larry Page, Google’s chief executive at the time, and David Drummond, Alphabet’s top lawyer, were mentors, Mr. Kalanick said. Alphabet remains an investor in Uber.

The two companies, he said, had a “little brother, big brother” dynamic because Uber was “trying to get more of their time than they were willing to give.”

After Mr. Page repeatedly spurned meetings to discuss combining Uber’s ride-hailing service with Google’s work on self-driving vehicles in some sort of partnership, Mr. Kalanick said, his company started to develop its own autonomous car technology.

Uber hired a team of robotics experts from Carnegie Mellon University, deepening the division between the two companies.

“Generally, Google was super not happy, unpumped about us doing this,” said Mr. Kalanick, who stepped down as Uber’s chief executive in June. He recalled that Mr. Page had been “angsty” and asked him: “Why are you doing my thing?”

But Mr. Kalanick said the engineers from Carnegie Mellon were not enough to catch up to Google’s self-driving car project, which would become Waymo. So Uber started talking to the Otto founder Anthony Levandowski, who was still working at Google, about helping Uber develop its laser sensor technology — an essential component for self-driving cars.

As Mr. Kalanick and Mr. Levandowski discussed ways to work together, they exchanged hundreds of text messages. The messages were presented in court on Wednesday and were mostly a variation on how important it was to win the race for self-driving cars.

Mr. Levandowski told Mr. Kalanick that “second place is first loser.” Mr. Kalanick said this wasn’t the first time he had heard this; it was something his high school football coach had said.

Mr. Kalanick said he did not recall, however, what he had meant by some of the text messages he sent to Mr. Levandowski, including one that read, “Burn the village.”

In another exchange, Mr. Levandowski sent Mr. Kalanick a link to a scene from the movie “Wall Street” in which Gordon Gekko, played by Michael Douglas, argues “greed, for the lack of a better word, is good.”

When asked whether he had watched the scene, Mr. Kalanick said he thought so. His emoji response was a hint. “I mean there is a winky-face there,” he said.

Once Uber and Otto struck a basic agreement for an acquisition in April 2016, months before the deal was announced, Mr. Kalanick told Uber executives in a meeting that “golden time is over — it’s now wartime.” At another meeting, he discussed how it was necessary for Uber to find and use “cheat codes.”

Mr. Kalanick explained that this phrase was not as nefarious as it sounded. Cheat codes “are elegant solutions to problems that haven’t been thought of,” he said. Tesla, for example, has customers pay tens of thousands of dollars for cars but loads those vehicles with sensors that provide data to support its self-driving car efforts.

Mr. Kalanick said he had another testy conversation with Mr. Page in October 2016, after Uber announced its acquisition of Otto. Interestingly, Mr. Page was worried that Uber was developing its own flying car technology.

Mr. Page has personally invested in flying car technology, although it is not clear whether Alphabet is working on it.

Mr. Kalanick insisted that Uber wasn’t working on flying cars, but he said Mr. Page was mad that Uber was hiring Google employees and taking its intellectual property. To which, Mr. Kalanick said, he responded, “Your people are not your IP.”

IP is an abbreviation for intellectual property.

As Uber’s lawyers questioned him, Mr. Kalanick recalled how he had had so much fun as the company’s chief executive. He liked “being in the trenches” with small teams. But when asked about the Otto acquisition and the hiring of Mr. Levandowski, Mr. Kalanick was understated.

“It’s not as great as we had thought at the beginning,” he said.

Tech Tip: Moving Notes in Google Keep to Other Programs

Q. I have a bunch of notes in the Google Keep app on my Android tablet, but I’d like to move them into a word-processing program. Is there a way to do this from the tablet besides cutting and pasting the content into an email message and then into a document on my computer?

A. Google Keep, the multimedia note-taking app for Android, iOS and the desktop web browser, can quickly transfer the text of a selected note to a Google Docs file or another program with a simple menu command. If you are using Google Keep for Android, tap open the note you want to use. In the bottom-right corner of the screen, tap the small square icon to open the Action menu.


From the Action menu in Google Keep, you can add collaborators to the selected note, dress it up with colors or labels, delete it, copy it or send the content over to a Google Docs file or other compatible program. Credit The New York Times

The menu offers several options, like adding a color or a collaborator. Tap the Send command and then choose either “Copy to Google Docs” or “Send via other apps.” If you use the Google Docs word-processing program, the text of your note is copied into a new file stored with the rest of your documents. If you do not use Google Docs, select the “other apps” option, which lets you send the text by email, post it to a social-media service, upload it to a connected cloud server or share it with another program.

You can also drag Google Keep notes into an open Google Doc file right in your computer’s browser. To do so, open or create a new Google Doc. Go to the Tools menu and choose Keep Notepad. A “Notes from Keep” panel appears on the side of the browser window, and you can drag your notes right into the open Google Doc.
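
If you have a large number of notes to move, a more hands-off route is to download your Keep data with Google Takeout (takeout.google.com) and merge the exported files with a short script. The rough Python sketch below assumes the Takeout archive contains a Keep folder with one JSON file per note holding “title” and “textContent” fields; those names, and the paths used, are assumptions to verify against your own export before running it.

```python
# Rough sketch: combine Google Keep notes from a Google Takeout export into
# one plain-text file that any word processor can open.
# Assumptions to check against your own export: the "Takeout/Keep" folder
# layout and the "title", "textContent" and "isTrashed" JSON field names.
import json
from pathlib import Path

export_dir = Path("Takeout/Keep")      # folder from the unzipped Takeout archive
output_file = Path("keep_notes.txt")   # combined document to create

notes = []
for note_file in sorted(export_dir.glob("*.json")):
    data = json.loads(note_file.read_text(encoding="utf-8"))
    if data.get("isTrashed"):
        continue  # skip notes that were sitting in the trash
    title = data.get("title") or note_file.stem
    body = data.get("textContent", "")
    notes.append(title + "\n" + "-" * len(title) + "\n" + body)

output_file.write_text("\n\n".join(notes), encoding="utf-8")
print("Wrote", len(notes), "notes to", output_file)
```

The resulting text file can then be opened in, or pasted into, Google Docs or any other word processor.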


She Warned of ‘Peer-to-Peer Misinformation.’ Congress Listened.

In 2016, they monitored thousands of Twitter accounts that suddenly started using bots, or automated accounts, to spread salacious stories about the Clinton family. They watched as multiple Facebook pages, appearing out of nowhere, organized to simultaneously create anti-immigrant events. Nearly all of those watching were hobbyists, logging countless hours outside their day jobs.


Colin Stretch, general counsel of Facebook, left, Sean Edgett, acting general counsel of Twitter, and Kent Walker, senior vice president and general counsel of Google, testified earlier this month at a Senate Intelligence Committee hearing examining social media influence in the 2016 elections. Credit Eric Thayer for The New York Times

“When I put it all together and started mapping it out, I saw how big the scale of it was,” said Jonathan Albright, who met Ms. DiResta through Twitter. Mr. Albright published a widely read report that mapped, for the first time, connections between conservative sites putting out fake news. He did the research as a “second job” outside his position as research director at the Tow Center for Digital Journalism at Columbia University.

Senate and House staff members, who knew of Ms. DiResta’s expertise through her public reports and her previous work advising the Obama administration on disinformation campaigns, had contacted her and others to help them prepare for the hearings.

Rachel Cohen, a spokeswoman for Senator Mark Warner, Democrat of Virginia, said in a statement that researchers like Ms. DiResta had shown real insight into the platforms, “in many cases, despite efforts by some of the platforms to undermine their research.” Mr. Warner is a member of the Senate Intelligence Committee.

One crucial line of the questioning — on how much influence Russian-bought advertisements and content had on users — was the result of work by Ms. DiResta and others with a Facebook-owned tool. “Facebook has the tools to monitor how far this content is spreading,” Ms. DiResta said. “The numbers they were originally providing were trying to minimize it.”

Indeed, at the congressional hearings, the tech companies admitted that the problem was far larger than they had originally said. Last year, Mark Zuckerberg, Facebook’s chief executive, said it was a “crazy idea” that misinformation on Facebook influenced the election. But the company acknowledged to Congress that more than 150 million users of its main site and a subsidiary, Instagram, potentially saw inflammatory political ads bought by a Kremlin-linked company, the Internet Research Agency.

Ms. DiResta contended that this was still just the tip of the iceberg. Minimizing the scope of the problem was “a naïve form of damage control,” she said. “This isn’t about punishing Facebook or Twitter. This is us saying, this is important and we can do better.”


Ms. DiResta became interested in misinformation on social media while researching the anti-vaccine movement. Credit Jason Henry for The New York Times

In response, Facebook said it had begun organizing academic discussions on disinformation.

“We regularly engage with dozens of sociologists, political scientists, data scientists and communications scholars, and we both read and incorporate their findings into our work,” said Jay Nancarrow, a Facebook spokesman. “We value the work of researchers, and we are going to continue to work with them closely.”

A graduate of Stony Brook University in New York, Ms. DiResta wrote her college thesis on propaganda in the 2004 Russian elections. She then spent seven years on Wall Street as a trader, watching the slow introduction of automation into the market. She recalled the initial fear of overreliance on algorithms, as there were “bad actors who could come in and manipulate the system into making bad trades.”

“I look at that now and I see a lot of parallels to today, especially for the need for nuance in technological transformations,” Ms. DiResta said. “Just like technology is never leaving Wall Street, social media companies are not leaving our society.”

Ms. DiResta moved to San Francisco in 2011 for a job with the venture capital firm O’Reilly AlphaTech Ventures. But it wasn’t until the birth of her first child a few years later that she started to examine the dark side of social media.

“When my son was born, I began looking into vaccines. I found myself wondering about the clustering effects where the anti-vaccine movement was concentrated,” Ms. DiResta recalled. “I was thinking, ‘What on earth is going on here? Why is this movement gaining so much momentum here?’”

She started tracking posts made by anti-vaccine accounts on Facebook and mapping the data. What she discovered, she said, was that Facebook’s platform was tailor-made for a small group of vocal people to amplify their voices, especially if their views veered toward the conspiratorial.

“It was this great case study in peer-to-peer misinformation,” Ms. DiResta said. Through one account she created to monitor anti-vaccine groups on Facebook, she quickly realized she was being pushed toward other anti-vaccine accounts, creating an echo chamber in which it appeared that viewpoints like “vaccines cause autism” were the majority.

Soon, Facebook began promoting content to her on a range of other conspiratorial ideas, from people who claim the earth is flat to those who believe that “chem trails,” the trails left in the sky by planes, are chemical agents being sprayed on an unsuspecting public.

“So by Facebook suggesting all these accounts, they were essentially creating this vortex in which conspiratorial ideas can just breed and multiply,” Ms. DiResta said.

Her published findings on the anti-vaccine movement brought her to the attention of the Obama administration, which reached out to her in 2015, when officials were examining radical Islamist groups’ use of online disinformation campaigns.

She recalled a meeting with tech companies at the White House in February 2016 where executives, policy leaders and administration officials were told that American-made social media platforms were key to the dissemination of propaganda by ISIS.

It was during that time that she first met Jonathan Morgan, a fellow social media disinformation researcher who had published papers on how the Islamic State spreads its propaganda online.

“We kept saying this was not a one-off. This was a toolbox anyone can use,” Ms. DiResta said. “We told the tech companies that they had created a mass way to reach Americans.”

A year and a half later, they hope everyone is finally listening. “I think we are at this real moment, where as a society we are asking how much responsibility these companies have toward ensuring that their platforms aren’t being gamed, and that we, as their users, aren’t being pushed toward disinformation,” Ms. DiResta said.


Google Docs Glitch That Locked Out Users Underscores Privacy Concerns

Has anyone had @googledocs lock you out of a doc before? My draft of a story about wildlife crime was just frozen for violating their TOS.

Working away happily on @googledocs with a response to reviewers. Suddenly: "This document is in violation of Terms of Service". #WTF pic.twitter.com/o2pjoTTTWo