Vectra raises $36M for its AI-based approach to cybersecurity intrusion detection


With cybercrime showing no sign of abating, Vectra, a startup that has built an artificial intelligence-based system called Cognito to detect cyberattacks and mobilize security systems to respond to them, has raised $36 million to expand its R&D and business development.

This Series D comes on the back of a strong year for the startup, with 181 percent growth in customer subscriptions between 2016 and 2017, and Vectra’s CEO Hitesh Sheth said he expects the same this year. Typical customers are large enterprises (which is why you don’t see much about pricing on the site) and include players in the financial, healthcare, government, tech and education sectors. The list the company disclosed to me includes LiveNation/Ticketmaster, Pinterest, Kronos, Tribune Media, Verifone, Agilent, Texas A&M University and DZ Bank in Germany.

This latest round is being led by Atlantic Bridge Capital, with participation from Ireland’s Strategic Investment Fund (ISIF) and Nissho Electronics Corp. Previous investors Khosla Ventures, Accel Partners, IA Ventures, AME Cloud Ventures, DAG Ventures and Wipro Ventures also participated. The company’s total raised to date is $123 million, and while it is not disclosing its valuation, PitchBook pegged its pre-money valuation at just under $344 million at its last funding round in March 2016, a figure likely to have gotten a big boost after the growth it has seen since. Also for context, one of its closer competitors, Darktrace, was last valued at $825 million.

Vectra’s growth — and the round that it has raised — underscores one of the bigger challenges in the market at the moment for enterprises and other organizations.

While there are a number of solutions out there for trying to block malicious hackers and their various techniques, and there are systems in place for stopping them when they are found, there is a gap in the market for the moments where cyber criminals evade the best blocks and then proceed to steal data, sometimes for months or more.

The Winter Olympics in Korea, as one recent example, suffered an attack that was only detected after the malicious hackers had already been sucking up data for 120 days.

“One of the issues for enterprises today is that it’s never been more hostile. The operating assumption is that you will get breached,” said Hitesh Sheth, president and CEO of Vectra. His company’s solution, he says, is not to try to change that currently immutable fact, but to drastically shrink the length of an otherwise months-long attack to minutes and hours. “The only control you really have is what will you do once you are breached.”

Vectra does this using AI. The thinking here is that, in a large enterprise, there are many places, services, apps and endpoints that need to be assessed for inconsistencies in how they are being queried and used on the network. Automated systems that use machine learning to essentially mimic the behavior of security specialists are best suited to this kind of searching and identification.
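Vectra hasn’t published its algorithms, but the general shape of this kind of network behavioral detection is well understood: build a statistical baseline of each host’s behavior, then score new observations by how far they deviate from it. A minimal Python sketch, with invented features and numbers purely for illustration:

```python
# Minimal sketch of behavioral anomaly detection on network traffic.
# The features, baseline numbers and alerting threshold are invented
# for illustration; this is not Vectra's actual model.
import numpy as np

# Hourly observations of one host: bytes sent out, distinct internal
# hosts contacted, failed login attempts.
baseline = np.array([
    [120e6, 4, 0],
    [95e6,  5, 1],
    [110e6, 3, 0],
    [105e6, 4, 0],
])

mean = baseline.mean(axis=0)
std = baseline.std(axis=0) + 1e-9  # avoid division by zero

def anomaly_score(observation):
    """Mean absolute z-score of the observation across all features."""
    return float(np.abs((observation - mean) / std).mean())

# A host that suddenly exfiltrates data and probes dozens of internal
# machines deviates wildly from its own history and gets surfaced.
suspect = np.array([900e6, 60, 25])
print(anomaly_score(suspect) > 3.0)  # True -> raise an alert for triage
```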

Sheth claims that while there are a number of other intrusion and threat detection services out in the market — Darktrace, Cisco’s intrusion detection (built around a number of acquisitions) and RiskIQ being some of them — Vectra is the only one of these that is built on AI algorithms from the ground up. “AI is a bolt-on for most security players, but this is all we do.”

He also says that the other aspect of its service that helps it stand out is its focus on network, rather than endpoint, traffic. “If devices are compromised, endpoint logs are compromised.”

Sheth describes this latest round as its “path to profitability,” meaning it could be the last one Vectra needs before it tips into the black — a big feat for a SaaS business that also has its sights on an IPO longer-term.

“What is a fad in the valley is to raise as much as possible and then some more,” he said. “Investors can win but I’m not sure employees do. You want to raise as much as possible but you need to see how to scale.” He said initially the company wanted to raise between $25 million and $30 million but “interest was super high and it was oversubscribed, so we accommodated investors that we thought would add value.”

The connection with the Irish strategic investment fund stems from the fact that Vectra is going to build an R&D center in Dublin. This came first and the investment came second, Sheth said.

The company selected Dublin after also considering London and Barcelona — there are already three centers in the US, in Austin, Cambridge and San Jose — but it backed away from the former because of uncertainties around Brexit, and from the latter because of political upheaval. Ireland, he believes, will only grow in prominence as the only English-speaking market still fully in the European Union.

“This is an exciting investment for ISIF, which promises significant economic impact for Ireland,” said Fergal McAleavey, head of private equity at ISIF, in a statement. “It is encouraging to see Ireland leverage its emerging expertise in artificial intelligence by attracting businesses such as Vectra that are on the leading edge of technology. With cybersecurity becoming such a critical issue for all organizations, we are confident that Vectra will deliver a strong economic return on our investment while creating high-value R&D employment here in Ireland.”

Meanwhile, the company’s growth is what swayed the lead investor.

“We have been impressed by the remarkable growth of Vectra in this fast-moving cybersecurity market,” said Kevin Dillon, managing partner at Atlantic Bridge Capital, in a statement. “The increasing volume, creativity and effectiveness of cyberattacks means that enterprises must adopt AI to automate cybersecurity operations. We look forward to helping the company expand its global enterprise footprint.”

Featured Image: Getty Images

Cognoa’s AI platform for autism diagnosis gets first FDA stamp


Cognoa has gained regulatory recognition for its machine learning software as a class II diagnostic medical device for autism — meaning the digital health startup is now positioned to submit an application for full FDA clearance.

It’s a first but important regulatory step for a business that was founded back in 2014, and plays in a still nascent digital health space where untested ‘wellness’ apps are far more plentiful than medical technologies with robust data to prove out the efficacy of their interventions.

Discussions with the FDA started in early 2017, says Cognoa CEO Brent Vaughan, adding that it’s hoping to gain full FDA clearance this year.

He says the ultimate goal for the US startup is to become a standard part of domestic health insurance-covered medical provision — and FDA clearance is essential to opening that door.

We first covered Cognoa at launch in 2014, and again the following year, when it was still being careful to describe its technology as a screening rather than a diagnostic system.

It’s since gathered enough data to be confident in using the ‘D’ word — having run a pilot with 250,000 parents, offering free screening for their children so it could gather more data to refine its machine learning models.

“We were lucky that we had investors,” says Vaughan. “There’s not a huge business model in providing free screening services to kids, right, because we were certainly never going to sell ads. That wasn’t the goal.

“It took a little patience but in the process of providing free screening and at least showing parents how to navigate their way to the front of a line as more of an information service we were able to build the data models to support a development of a diagnostic device actually a couple of years sooner than we originally thought we would. So it ultimately paid off for us.”

Cognoa has raised around $11.6M in investor funding to date, according to CrunchBase, from the Chinese private investment group Morningside. Vaughan tells TechCrunch it’ll likely be looking to raise another round by the end of this year.

It has also conducted multiple studies over the last 2.5 years across the US, including blinded, controlled trials and side-by-side comparisons of its different versions — working with children’s hospitals and secondary care centers. It now bills its technology as a “pediatric behavioral health diagnostics and digital therapeutics platform”.

The initial machine learning model, which was targeted at screening for autism, was based on the work of Stanford pediatrics and psychiatry professor Dennis Wall. The model itself was built by combining and structuring existing datasets of behavioral observations on about 10,000 children.

Though, as noted above, Cognoa has continued to refine its autism model with structured contributions from parents participating in the pilot and inputting data via its app. (Aka: If an AI service is free, you’re the training data.)

[Gallery: Cognoa app mockups showing the assessment flow, activity details and assessment results]

“In our last study we were able to come through with a sensitivity of greater than 90 per cent,” Vaughan tells TechCrunch. “In our first algorithm… targeting autism, we would find it over 90 per cent of the time — and when we said it was autism it was correct well over 80 per cent of the time.

“What we see when we look in the data, and that we’re quite interested by, is when we say it’s autism or it looks like autism and it wasn’t… we were able to show [the FDA] that they were often very similarly related conditions.”
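
The two figures Vaughan cites map onto standard diagnostic metrics: sensitivity (the share of true cases the model catches) and positive predictive value (the share of positive calls that turn out to be correct). A quick worked illustration in Python, using invented counts since Cognoa’s study data isn’t public:

```python
# Worked illustration of sensitivity and positive predictive value.
# The counts are invented to match the ranges Vaughan describes;
# Cognoa's actual study numbers are not public.
true_positives = 92   # autism cases the model correctly flagged
false_negatives = 8   # autism cases the model missed
false_positives = 15  # non-autism cases flagged as autism

sensitivity = true_positives / (true_positives + false_negatives)
ppv = true_positives / (true_positives + false_positives)

print(f"sensitivity: {sensitivity:.0%}")  # 92%, i.e. "over 90 per cent"
print(f"PPV: {ppv:.0%}")                  # 86%, i.e. "well over 80 per cent"
```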

Vaughan says a lot of the team’s early work focused on figuring out how to create a product that enables non-healthcare professionals (i.e. parents) to capture robust data in a reproducible way. “One of the… questions that came up quite early, even from early potential investors and clinicians, was can you actually get parents to give you the information on which you could base a clinical diagnostic decision? Can you get them to do this reproducibly without a clinician being in a room?… So we certainly had to address that.

“I remember sitting down with one venture capitalist who looked at me and said, you know what — you’re never going to find 5,000 parents that are going to do this. And that are going to be able to do this reproducibly,” he continues. “Within a couple of years we were up over a quarter of a million parents that had actually done it — and we learned a lot about how to reproducibly collect information on which you can build a clinical diagnosis but collecting it outside of the clinical setting. Parents providing us information in their living room in the evening. So that was certainly one major step for us. And in doing that we showed that the unmet need was much, much bigger than we originally had estimated.”

As well as aiming to support earlier diagnosis than parents might be able to get if they had to wait for specialist appointments for their child to be monitored in person, Cognoa’s platform provides guidance on actions (it calls them “activities”) parents can take themselves to help manage their children’s condition. Which in turn provides more opportunities for response data to be fed back so its models can keep learning and refining recommendations.

While the first focus is autism, with the aim of trying to shrink intervention times to improve long term outcomes for children — given what Vaughan describes as a “well-documented” link between earlier intervention and better autism outcomes — the intent is to address other behavioral conditions too, in time, such as ADHD.

“For us we see this — even the autism clearance that we’re looking forward to in the future — that’s just a step down the path of being able to be the platform that can diagnose an entire spectrum of these developmental conditions,” he says.

Interestingly, Vaughan concedes that the learning element of AI-based technologies can cause unintended problems in healthcare service provision. Some clinicians it talked to early on raised concerns that, by widening access to autism screening, the startup risked making an existing diagnosis bottleneck worse: increasing demand for specialist services without a parallel increase in resources, and so creating even more of a backlog.

Which is exactly the kind of serious, knock-on consequence that’s possible when unproven ‘disruptive’ technologies change existing dynamics and bring new pressures to bear on a critical and sensitive industry like healthcare. It also seems especially true of AI technologies which need to be fed with lots of data before they can learn to become really useful.

So how to conduct responsible training of machine learning models presents something of an existential challenge for AI and healthcare startup initiatives — and one which has already opened up operational pitfalls for some very well resourced tech giants.

“Back in 2014 and 2015 we were really starting down the path of let’s just prove that we can triage these kids and find them earlier. And a lot of people embraced that, but there was certainly some that were pretty thoughtful who said if you guys find the kids earlier and the problem in the system is that kids that are identified and referred to specialists for appointments are currently waiting between one and three years to get a diagnosis, aren’t you just going to be making the problem worse?” he says.

“So then we had to sit down and say listen, step one is being able to show that we can just screen these kids. But longer term we think we can really aid in getting a faster diagnosis. But we were very careful to not say, publicly, that we thought that we could diagnose these kids because we thought it would just be too controversial. And the idea of using an AI-based platform, the idea of collecting information primarily from the parent, from the caregiver and from the child, that was pretty controversial.”

Another change that’s being driven by AI-based software targeting the healthcare industry is to regulatory regimes — with regulators like the FDA needing to come up with new systems and processes for assessing and managing software designed to get better over time.

“The FDA is struggling with how to regulate AI-based software because the idea of the FDA is they look at a version of a product and that product once cleared by the FDA does not change — and the idea of AI and machine learning, which is what our product is based on, is that it’s learning and it gets better,” says Vaughan, talking about its discussions with the regulator. “And so understanding with the FDA how we were going to control and document that learning — those were some of the discussions where we walked in with ideas but not very clear understanding what the outcome would be.”

While he believes the FDA will likely take a case-by-case approach to the challenge of regulating AI platforms, he suggests companies will probably have to operate using a versioning system — whereby they restrict ongoing learning to the research lab, releasing a next version of a model into the wild only once the step change in their model has also gained regulatory approval.

“It’s the algorithm part of the device that [the FDA] feel the strongest about in terms of how they regulate it,” he says. “And keep in mind this is evolving, and their thinking might also evolve on this, but for us they look at the algorithm part and we can certainly, in our software, lock down a current version of the algorithm. And we can allow that to not change in the production version of the product — and at the same time we can have a research arm that’s continuing to evolve. And you could start to think about versioning coming out in the future.”

“So I think it’ll be a little bit more of a stair-step approach,” he adds. “With periodic reviews by the FDA. And I think that they’re in parallel trying to think of a way to streamline that approach going forward because of the flexibility that these products have. So I think it’ll be a little bit of a hybrid between continuous machine learning which seems quite difficult and the old style, which was quite waterfall.”
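
Neither Cognoa nor the FDA has published a reference implementation, but the pattern Vaughan sketches (a locked, verifiable production model alongside a free-running research arm) might look something like the following; every name here is a hypothetical illustration:

```python
# Hypothetical sketch of the "locked production model" versioning
# pattern Vaughan describes. Class and method names are invented;
# Cognoa has not published its implementation.
import hashlib
import pickle

class LockedModel:
    """Wraps a trained model whose weights are frozen for production.

    The hash recorded at lock time lets an auditor verify that the
    deployed algorithm is byte-identical to the version that was
    reviewed, while a separate research copy keeps learning offline.
    """

    def __init__(self, model, version):
        self.model = model
        self.version = version
        self._hash = hashlib.sha256(pickle.dumps(model)).hexdigest()

    def predict(self, features):
        # Inference only: no online learning in the production path.
        return self.model.predict(features)

    def verify(self):
        """True if the weights still match the locked snapshot."""
        return hashlib.sha256(pickle.dumps(self.model)).hexdigest() == self._hash

# The research arm trains freely on new data; only when a candidate
# clears a fresh regulatory review is it wrapped and promoted:
#   production = LockedModel(cleared_candidate, version="2.0")
```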

Featured Image: Cognoa

The sudden death of the website

You may not know me or even my company, LivePerson, but you’ve certainly used my invention. In 1995, I came up with the technology for those chat windows that pop up on websites. Today, more than 18,000 companies around the world, including big-name brands like T-Mobile, American Express, Citibank and Nike, use our software to communicate with their millions of customers. Unlike most startup founders who saw the birth of the internet in the mid-1990s, I am still CEO of my company.

My longevity in this position gives me a unique perspective on the changes that have happened over the past two decades, and I see one happening right now that will radically transform the internet as we know it.

When we started building websites in the mid-’90s, we had great dreams for e-commerce. We fundamentally thought all brick-and-mortar stores would disappear and everything dot-com would dominate. But e-commerce has failed us miserably. Today, less than 15 percent of commerce occurs through a website or app, and only a handful of brands (think: Amazon, eBay and Netflix) have found success with e-commerce at any real scale. There are two giant structural issues that make websites not work: HTML and Google.

The web was intended to bring humanity’s vast trove of content, previously cataloged in our libraries, to mass audiences through a digital user experience — i.e. the website. In the early years, we were speaking in library terms about “browsing” and “indexing,” and in many ways the core technology of a website, called HTML (Hypertext Markup Language), was designed to display static content — much like library books.

But retail stores aren’t libraries, and the library format can’t be applied to online stores either. Consumers need a way to dynamically get answers to the questions that enable them to make purchases. In the current model, we’re forced to find and read a series of static pages to get those answers — when we tend to buy more if we can build trust over a series of questions and answers instead.

The second problem with the web is Google. When we started to build websites in the ’90s, everyone was trying to design their virtual stores differently. On one hand, this made them interesting and unique; on the other, the lack of industry standards made them hard to navigate — and really hard to “index” into a universal card catalog.

Then Google stepped in around 1998. As Google made it easier to find the world’s information, it also started to dictate the rules through the PageRank algorithm, which forced companies to design their websites in a certain way to be indexed at the top of Google’s search results. But its one-size-fits-all structure ultimately makes it flawed for e-commerce.

Now, almost every website looks the same — and performs poorly. Offline, brands try to make their store experiences unique to differentiate themselves. Online, every website — from Gucci to the Gap — offers the same experience: a top nav, descriptive text, some pictures and a handful of other elements arranged similarly. Google’s rules have sucked the life out of unique online experiences. Of course, as e-commerce has suffered, Google has become more powerful, and it continues to disintermediate the consumer from the brand by imposing a terrible e-commerce experience.

There also is a hidden knock-on effect of bad website design. As much as 90 percent of calls placed to a company’s contact center originate from its website. The journey looks like this: Consumers visit a website to get answers, become confused and have to call. This has become an epidemic, as contact centers field 268 billion calls per year at a cost of $1.6 trillion.

To put that in perspective, global advertising spend is $500 billion, meaning the cost of customer care — these billions of phone calls — is more than three times what companies spend on marketing. More importantly, these calls create another bad consumer experience. How many times have we been put on hold by a company that can’t handle the volume of incoming queries? Websites and apps have, in fact, created more phone calls — at increased cost — and upended digital’s promise to make our lives easier.

There is something innate to our psychology in getting our questions answered through a conversation that instills the confidence in us to spend money. This is why there is so much chatter about bots and AI right now. They tap into an inner understanding about the way things get done in the real world: through conversations. The media are putting too much focus on bots and AI destroying jobs. Instead, we should explore how they will make our lives easier in the wake of the web’s massive shortfalls.

Discovering the truth about e-commerce has, in some ways, left me with a sense of failure about the hopes and dreams I had when I started in the industry. But I now have a lot of hope that what I call “conversational commerce” — interactions via messaging, voice (Alexa and so on) and bots — will finally deliver on the promise of powering digital commerce at the scale we all dreamt about.

I am going to make a bold prediction based on my work with 18,000 companies and bringing conversational commerce to life: In 2018, we will see the first major brand shut down its website. The brand will shift how it connects with consumers — to conversations, with a combination of bots and humans, through a messaging front end like SMS or Facebook. We are already working with several large brands to make this a reality.

When the first website ends, the dominoes will fall fast. This will have a positive impact on most companies in transforming how they conduct e-commerce and provide customer care. For Google, however, this will be devastating.

Amazon may be developing AI chips for Alexa


The Information has a report this morning that Amazon is working on building AI chips for the Echo, which would allow Alexa to parse information and answer queries more quickly.

Shaving even a few seconds off response times might not seem wildly important. But for Amazon, a company that relies on capturing a user’s interest at the critical moment to execute on a sale, it makes sense to drive that response time as close to zero as possible, cultivating the expectation that Amazon can give you the answer you need immediately, especially if, in the future, that answer is a product you’re likely to buy. Amazon, along with Google and Apple, is at the point where users expect technology that works and works quickly, and those users are probably less forgiving than they are toward companies still grappling with problems like image recognition (like, say, Pinterest).

This kind of hardware on the Echo would probably be geared toward inference: taking inbound information (like speech) and executing a ton of calculations really, really quickly to make sense of it. Much of this work reduces to a fairly simple branch of mathematics, linear algebra, but it requires a very large number of calculations, and a good user experience demands they happen very quickly. The promise of custom chips tailored to this workload is that they could be faster and less power-hungry, though custom silicon brings a lot of other problems with it. There are a bunch of startups experimenting with ways to do this, though what the final products will end up being isn’t entirely clear (pretty much everyone is pre-market at this point).
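To make the linear algebra point concrete: a single layer of a neural speech model is essentially one matrix-vector multiply followed by a cheap nonlinearity, stacked many times over, which is exactly the operation an inference chip is built to accelerate. A toy Python sketch with illustrative sizes:

```python
# Toy illustration of why inference workloads reduce to linear
# algebra: one dense layer is a matrix-vector product plus a cheap
# nonlinearity. The layer sizes here are arbitrary; real acoustic
# models stack dozens of such layers per frame of audio.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((512, 256))  # trained layer weights
features = rng.standard_normal(256)        # one frame of audio features

def dense_layer(w, x):
    """y = relu(Wx): the multiply-accumulate work a custom chip speeds up."""
    return np.maximum(w @ x, 0.0)

activations = dense_layer(weights, features)
print(activations.shape)  # (512,) -> feeds the next layer
```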

In fact, this makes a lot of sense simply by connecting the dots of what’s already out there. Apple has designed its own custom GPU for the iPhone, and moving those kinds of speech recognition processes directly onto the phone would help it more quickly parse incoming speech, assuming the models are good and they’re sitting on the device. Complex queries — the kinds of long-as-hell sentences you’d say into the Hound app just for kicks — would definitely still require a connection to the cloud to walk through the entire sentence tree and determine what kind of information the person actually wants. But even then, as the technology improves and becomes more robust, those queries might become even faster and easier.

The Information’s report also suggests that Amazon may be working on AI chips for AWS, which would be geared toward machine training. While this does make sense in theory, I’m not 100% sure this is a move that Amazon would throw its full weight behind. My gut says that the wide array of companies working off AWS don’t need bleeding-edge machine training hardware, and would be fine training models a few times a week or month and getting the results they need. That could probably be done with a cheaper Nvidia card, without having to solve the problems that come with custom hardware, like heat dissipation. That being said, it does make sense to dabble in this space a little given the interest from other companies, even if nothing comes of it.

We reached out to Amazon for comment on the story and will update when we hear back. In the meantime, this seems like something to keep close tabs on, as everyone seems to be trying to own the voice interface for smart devices — either in the home or, in the case of the AirPods, maybe even in your ear. Thanks to advances in speech recognition, voice turned out to actually be a real interface for technology, in the way the industry always thought it might be. It just took a while for us to get here.

There’s a pretty big number of startups experimenting in this space (by startup standards), chasing the promise of a new generation of hardware that can handle AI problems faster and more efficiently while potentially consuming less power — or even less space. Companies like Graphcore and Cerebras Systems are based all around the world, with some nearing billion-dollar valuations. A lot of people in the industry refer to this explosion as Compute 2.0, at least if it plays out the way investors are hoping.

Chinese police are using smart glasses to identify potential suspects


China already operates the world’s largest surveillance state, with some 170 million CCTV cameras at work, but its line of sight is about to get a new angle thanks to new smart eyewear being piloted by police officers.

The smart specs look a lot like Google Glass, but they are used for identifying potential suspects. The device connects to a feed which taps into China’s state database to root out potential criminals using facial recognition. Officers can identify suspects in a crowd by snapping their photo and matching it to the database. Beyond a name, officers are also supplied with the person’s address, according to the BBC.
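
Neither the vendor nor the police have detailed the pipeline, but the matching step in systems like this generally reduces to nearest-neighbor search over face embeddings. A minimal Python sketch under that assumption, with an invented similarity threshold:

```python
# Minimal sketch of face matching against a database: compare a probe
# embedding to enrolled embeddings by cosine similarity. The embedding
# network, vector size and threshold are assumptions for illustration;
# the actual system described here has not been published.
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, database, threshold=0.6):
    """Return the enrolled identity most similar to the probe, or None."""
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# `database` maps identity -> embedding vector computed from enrollment
# photos; `probe` is the same network run on the officer's snapped photo.
```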

Chinese state media reports that the technology has already facilitated the capture of seven individuals, while 35 others using fake IDs are said to have been found.

The glasses have been deployed in Zhengzhou, the capital of the central province of Henan, where they have been used to surveil those traveling by plane and train, according to the Wall Street Journal. With Chinese New Year, the world’s largest human migration, coming later this month, you’d imagine the glasses could be used to surveil the hundreds of millions of people who travel the country, and beyond, for the holiday period.

China has been criticized in many quarters for the way it uses its database, and facial recognition tech, in relation to ethnic minorities. A system deployed in Xinjiang — a province with a population of some 10 million Uighur Muslims — is reportedly designed to notify authorities when “target” individuals stray beyond their home or place of work, according to Bloomberg.

Uighurs have been the targets of roving VPN crackdowns and smartphone confiscation. Back in 2010, the government shut down the region’s internet connection for a 10-month period following violence between Uighurs and Han Chinese.

Related: China’s CCTV surveillance network took just 7 minutes to capture BBC reporter

Featured Image: China News Service via WSJ

Foxconn to plug at least $340M into AI R&D over five years


Manufacturing giant Foxconn has said it will make a major investment in artificial intelligence-based R&D as it looks for new business growth opportunities in a cooling global smartphone market, Nikkei reports.

“We will at least invest some 10 billion New Taiwan dollars ($342M) over five years to recruit top talent and deploy artificial intelligence applications in all the manufacturing sites,” said chairman Terry Gou.

“It’s likely that we could even pour in some $10BN or more if we find the deployments are very successful or can really generate results.”

Gou added that the ambition is to become “a global innovative AI platform rather than just a manufacturing company”.

Data put out this week by Strategy Analytics records a 9 per cent fall in global smartphone shipments in Q4 2017 — the biggest such drop in smartphone history — which the analyst blames on the floor falling out of the smartphone market in China.

“The shrinkage in global smartphone shipments was caused by a collapse in the huge China market, where demand fell 16 percent annually due to longer replacement rates, fewer operator subsidies and a general lack of wow models,” noted Strategy Analytics’ Linda Sui in a statement.

On a full-year basis, the analyst records global smartphone shipments growing 1 percent — topping 1.5 billion units for the first time.

But there’s little doubt the smartphone growth engine that’s fed manufacturing giants like Foxconn for so long is winding down.

This week, for example, Apple — Foxconn’s largest customer — reported a dip in iPhone sales for the holiday quarter. Cupertino still managed to carve out more revenue (thanks to that $1K iPhone X price tag), but those kinds of creative pricing opportunities aren’t on the table for electronics assemblers. So it’s all about using technology to do more for less.

According to Nikkei, Foxconn intends to recruit up to 100 top AI experts globally. It also said it will recruit thousands of less experienced developers to work on building applications that use machine learning and deep learning technologies.

Embedding sensors into production line equipment to capture data that feeds AI-fueled automation development is a key part of the plan, with Foxconn having said earlier that it wants to offer advanced manufacturing experiences and services — eyeing competition with the likes of General Electric and Cisco.

The company has also been working since July with Andrew Ng’s new AI startup Landing.ai — which is itself focused on plugging AI into industries that haven’t yet tapped into the tech’s transformative benefits, with a first focus on manufacturing.

And Gou confirmed the startup will be a key partner as Foxconn works towards its own AI-fueled transformation — using tech brought in via Landing.ai to help transform the manufacturing process, and identify and predict defects.
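
Neither Foxconn nor Landing.ai has published its models, but defect detection of this kind is typically framed as a supervised classifier over sensor readings or inspection images. A toy sketch under that assumption, with invented features and weights:

```python
# Toy sketch of sensor-driven defect prediction. The features, weights
# and threshold are invented purely for illustration; neither Foxconn
# nor Landing.ai has published its models.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretend coefficients from a trained logistic-regression model over
# three normalized sensor features: vibration, solder temperature
# deviation, fastening torque deviation.
weights = np.array([2.1, 0.8, 1.3])
bias = -2.0

def defect_probability(sensor_reading):
    """Probability that this unit is defective, given its sensors."""
    return float(sigmoid(weights @ sensor_reading + bias))

reading = np.array([1.4, 0.6, 0.2])  # one unit coming off the line
if defect_probability(reading) > 0.5:
    print("flag unit for manual inspection")
```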

Quite what such AI-powered transformation might mean for the jobs of hundreds of thousands of humans currently employed by Foxconn on assembly line tasks is less clear. But it looks like those workers will be helping to train AI models that could end up replacing their labor via automation.

Featured Image: Matt Wakeman/Flickr UNDER A CC BY 2.0 LICENSE

MIT is aiming for AI moonshots with Intelligence Quest


Artificial intelligence has long been a focus for MIT. The school’s been researching the space since the late ’50s, giving rise (and lending its name) to the lab that would ultimately become known as CSAIL. But the Cambridge university thinks it can do more to elevate the rapidly expanding field.

This week, the school announced the launch of the MIT Intelligence Quest, an initiative aimed at leveraging its AI research into something it believes could be game-changing for the category. The school has divided its plan into two distinct categories: “The Core” and “The Bridge.”

“The Core is basically reverse-engineering human intelligence,” dean of the MIT School of Engineering Anantha Chandrakasan tells TechCrunch, “which will give us new insights into developing tools and algorithms, which we can apply to different disciplines. And at the same time, these new computer science techniques can help us with the understanding of the human brain. It’s very tightly linked between cognitive science, neuroscience and computer science.”

The Bridge, meanwhile, is designed to provide access to AI and ML tools across its various disciplines. That includes research from both MIT and other schools, made available to students and staff.

“Many of the products are moonshots,” explains James DiCarlo, head of the Department of Brain and Cognitive Sciences. “They involve teams of scientists and engineers working together. It’s essentially a new model and we need folks and resources behind that.”

Funding for the initiative will be provided by a combination of philanthropic donations and partnerships with corporations. But while the school has had blanket partnerships in the past, including, notably, the MIT-IBM Watson AI Lab, the goal here is not to become beholden to any single company. Ideally the school will be able to work alongside a broad range of companies to achieve its large-scale goals.

“Imagine if we can build machine intelligence that grows the way a human does,” adds professor of Cognitive Science and Computation, Josh Tenenbaum. “That starts like a baby and learns like a child. That’s the oldest idea in AI and it’s probably the best idea… But this is a thing we can only take on seriously now and only by combining the science and engineering of intelligence.”

Featured Image: Erikona/Getty Images