It’s a first but important regulatory step for a business that was founded back in 2014, and plays in a still nascent digital health space where untested ‘wellness’ apps are far more plentiful than medical technologies with robust data to prove out the efficacy of their interventions.
Discussions with the FDA started in early 2017, says Cognoa CEO Brent Vaughan, adding that it’s hoping to gain full FDA clearance this year.
He says the ultimate goal for the US startup is to become a standard part of domestic, health insurance-covered medical provision, and FDA clearance is essential to opening those doors.
It’s since gathered enough data to be confident in using the ‘D’ word — having run a pilot with 250,000 parents, offering free screening for their children so it could gather more data to refine its machine learning models.
“We were lucky that we had investors,” says Vaughan. “There’s not a huge business model in providing free screening services to kids, right, because we were certainly never going to sell ads. That wasn’t the goal.
“It took a little patience but in the process of providing free screening and at least showing parents how to navigate their way to the front of a line as more of an information service we were able to build the data models to support a development of a diagnostic device actually a couple of years sooner than we originally thought we would. So it ultimately paid off for us.”
Cognoa has raised around $11.6M in investor funding to date.
It has also conducted multiple studies over the last 2.5 years across the US, including blinded controlled trials and side-by-side comparisons of different versions of its technology — working with children’s hospitals and secondary care centers. It now bills its technology as a “pediatric behavioral health diagnostics and digital therapeutics platform”.
The initial machine learning model, which was targeted at screening for autism, was based on the work of Stanford pediatrics and psychiatry professor Dennis Wall. The model itself was built by combining and structuring existing datasets of behavioral observations on about 10,000 children.
Though, as noted above, Cognoa has continued to refine its autism model with structured contributions from parents participating in the pilot and inputting data via its app. (Aka: If an AI service is free, you’re the training data.)
[Mockups: Assessment; Activity Details; Assessment Results]
“In our last study we were able to come through with a sensitivity of greater than 90 per cent,” Vaughan tells TechCrunch. “In our first algorithm… targeting autism, we would find it over 90 per cent of the time — and when we said it was autism it was correct well over 80 per cent of the time.
“What we see when we look in the data, and that we’re quite interested by, is when we say it’s autism or it looks like autism and it wasn’t… we were able to show [the FDA] that they were often very similarly related conditions.”
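To make those two figures concrete: “we would find it over 90 per cent of the time” describes sensitivity (the share of true cases the screen flags), while “when we said it was autism it was correct well over 80 per cent of the time” describes precision, or positive predictive value. A toy confusion-matrix calculation shows the distinction — the numbers below are made up for illustration and are not Cognoa’s study data:

```python
# Toy confusion-matrix arithmetic (illustrative numbers only, not Cognoa's data).
# sensitivity = share of true cases the screen flags
# precision (PPV) = share of positive screens that are correct

def sensitivity(true_positives, false_negatives):
    return true_positives / (true_positives + false_negatives)

def precision(true_positives, false_positives):
    return true_positives / (true_positives + false_positives)

# Hypothetical study: 100 children with autism, of whom 92 are flagged,
# plus 20 false positives among the rest of the cohort.
tp, fn, fp = 92, 8, 20

print(sensitivity(tp, fn))  # 0.92 -> "over 90 per cent" found
print(precision(tp, fp))    # ~0.82 -> "correct well over 80 per cent of the time"
```

The two can move in opposite directions: flagging more borderline cases raises sensitivity but tends to add false positives, which lowers precision.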
Vaughan says a lot of the team’s early work focused on figuring out how to create a product that enables non-healthcare professionals (i.e. parents) to capture robust data in a reproducible way. “One of the… questions that came up quite early, even from early potential investors and clinicians, was can you actually get parents to give you the information on which you could base a clinical diagnostic decision? Can you get them to do this reproducibly without a clinician being in a room?… So we certainly had to address that.
“I remember sitting down with one venture capitalist who looked at me and said, you know what — you’re never going to find 5,000 parents that are going to do this. And that are going to be able to do this reproducibly,” he continues. “Within a couple of years we were up over a quarter of a million parents that had actually done it — and we learned a lot about how to reproducibly collect information on which you can build a clinical diagnosis but collecting it outside of the clinical setting. Parents providing us information in their living room in the evening. So that was certainly one major step for us. And in doing that we showed that the unmet need was much, much bigger than we originally had estimated.”
As well as aiming to support earlier diagnosis than parents might be able to get if they had to wait for specialist appointments for their child to be monitored in person, Cognoa’s platform provides guidance on actions (it calls them “activities”) parents can take themselves to help manage their children’s condition. Which in turn provides more opportunities for response data to be fed back so its models can keep learning and refining recommendations.
While the first focus is autism, with the aim of trying to shrink intervention times to improve long-term outcomes for children — given what Vaughan describes as a “well-documented” link between earlier intervention and better autism outcomes — the intent is to address other behavioral conditions too, in time, such as ADHD.
“For us we see this — even the autism clearance that we’re looking forward to in the future — that’s just a step down the path of being able to be the platform that can diagnose an entire spectrum of these developmental conditions,” he says.
Interestingly, Vaughan concedes that the learning element of AI-based technologies can cause unintended problems in healthcare provision. Some clinicians the startup talked to early on raised a concern: by widening access to autism screening, Cognoa risked making an existing diagnosis bottleneck worse, increasing demand for specialist services without any parallel increase in resources and so creating even more of a backlog.
Which is exactly the kind of serious knock-on consequence that’s possible when unproven ‘disruptive’ technologies change existing dynamics and bring new pressures to bear on a critical and sensitive industry like healthcare. The risk seems especially acute for AI technologies, which need to be fed lots of data before they can learn to become really useful.
So how to conduct responsible training of machine learning models presents something of an existential challenge for AI and healthcare startup initiatives.
“Back in 2014 and 2015 we were really starting down the path of let’s just prove that we can triage these kids and find them earlier. And a lot of people embraced that, but there was certainly some that were pretty thoughtful who said if you guys find the kids earlier and the problem in the system is that kids that are identified and referred to specialists for appointments are currently waiting between one and three years to get a diagnosis, aren’t you just going to be making the problem worse?” he says.
“So then we had to sit down and say listen, step one is being able to show that we can just screen these kids. But longer term we think we can really aid in getting a faster diagnosis. But we were very careful to not say, publicly, that we thought that we could diagnose these kids because we thought it would just be too controversial. And the idea of using an AI-based platform, the idea of collecting information primarily from the parent, from the caregiver and from the child, that was pretty controversial.”
Another change that’s being driven by AI-based software targeting the healthcare industry is to regulatory regimes — with regulators like the FDA needing to come up with new systems and processes for assessing and managing software designed to get better over time.
“The FDA is struggling with how to regulate AI-based software because the idea of the FDA is they look at a version of a product and that product once cleared by the FDA does not change — and the idea of AI and machine learning, which is what our product is based on, is that it’s learning and it gets better,” says Vaughan, talking about its discussions with the regulator. “And so understanding with the FDA how we were going to control and document that learning — those were some of the discussions where we walked in with ideas but not very clear understanding what the outcome would be.”
While he believes the FDA will likely take a case-by-case approach to the challenge of regulating AI platforms, he suggests companies will probably have to operate a versioning system — restricting ongoing learning to the research lab, and releasing the next version of a model into the wild only once that step change has also gained regulatory approval.
“It’s the algorithm part of the device that [the FDA] feel the strongest about in terms of how they regulate it,” he says. “And keep in mind this is evolving, and their thinking might also evolve on this, but for us they look at the algorithm part and we can certainly, in our software, lock down a current version of the algorithm. And we can allow that to not change in the production version of the product — and at the same time we can have a research arm that’s continuing to evolve. And you could start to think about versioning coming out in the future.”
“So I think it’ll be a little bit more of a stair-step approach,” he adds. “With periodic reviews by the FDA. And I think that they’re in parallel trying to think of a way to streamline that approach going forward because of the flexibility that these products have. So I think it’ll be a little bit of a hybrid between continuous machine learning which seems quite difficult and the old style, which was quite waterfall.”
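The pattern Vaughan describes — a frozen, cleared algorithm in production alongside a research arm that keeps learning — can be sketched as a simple model registry. This is a hypothetical illustration of the workflow, not Cognoa’s actual architecture; the class and version names are invented:

```python
# Minimal sketch of the "lock down production, keep learning in research"
# pattern: only versions that have passed review can be deployed.
# Hypothetical illustration, not Cognoa's actual system.
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    research: dict = field(default_factory=dict)  # candidates still evolving in the lab
    cleared: dict = field(default_factory=dict)   # versions with regulatory sign-off
    production_version: str = ""                  # the single frozen version in the wild

    def submit_research(self, version, model):
        self.research[version] = model

    def clear(self, version):
        # The stair-step: a periodic review promotes a research candidate.
        self.cleared[version] = self.research.pop(version)

    def deploy(self, version):
        # Production may only ever run a cleared, locked-down version.
        if version not in self.cleared:
            raise ValueError(f"{version} has not been cleared for production")
        self.production_version = version

registry = ModelRegistry()
registry.submit_research("v2", "candidate-model")
registry.clear("v2")    # review happens before release
registry.deploy("v2")
print(registry.production_version)  # v2
```

Attempting to deploy a version straight from the research arm raises an error — which is the point: continuous learning happens offline, and each deployed step change is a discrete, reviewable artifact.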
Featured Image: Cognoa