Aurora will power Byton EV’s autonomous driving features


Aurora, the self-driving startup founded by Google self-driving car project alum Chris Urmson, along with Tesla Autopilot developer Sterling Anderson, CMU robotics expert and Uber vet Drew Bagnell, and a team of industry experts, will be making the autonomous smarts for Byton’s forthcoming electric vehicle. Byton is a startup that had a splashy debut at CES earlier this year.

Byton’s Concept electric SUV is a car with a lot of interesting tech features, aside from its all-electric drive train. The vehicle has a massive, dashboard-covering display that incorporates information readouts, entertainment options and vehicle controls. It’s a screen that seems somewhat ill-suited for the task of paying attention to the road while driving, and the Byton car also has front seats that swivel towards the inside of the vehicle so that those in the front can better interact with those in the back.

Both of those features are more geared toward a future in which autonomous driving is a ready and viable option for Byton owners. The car is aiming for a 2019 starting ship date, by which time it’s possible self-driving features won’t seem such a distant dream. And now we know that Byton has a technology partner on the autonomous driving side of things with the technical know-how to make it an even more realistic expectation.

Aurora, despite officially breaking cover only just last year, is already working with a range of automakers on their autonomous driving technology, including Volkswagen and Hyundai. Aurora CEO Chris Urmson explained that its goals mean it’s happy to work with companies at all stages of development and maturity to help make self-driving a practical reality.

“Our mission is to deliver the benefits of self-driving technology safely, quickly and broadly,” he said in an interview. “So for us to have that broad part, it means we have to work with a number of great partners, and we’re very fortunate with the folks we have [as partners] to date… this is how we help the business, and we look forward to being able to engage with others in the future.”

For Byton and Aurora, this partnership will kick off with pilot test driving in California sometime soon, and Byton hopes to eventually tap Aurora to help achieve its goal of fielding premium electric consumer vehicles with SAE Level 4 and Level 5 autonomous capabilities.

Aurora as a company is excited about its progress during its first year in operation, and is ramping up staffing and attracting key talent in a very competitive industry thanks to its pedigree and founding team, Urmson tells me.

“It started with a handful of us, a couple in my living room here in California, and a couple in Pittsburgh. We’ve been growing the team, that’s been one of the core focuses of this last year,” he said. “In my previous gig I had the privilege of helping build that program from day one, to a massive organization certainly leading the space, and now with Sterling and Drew, we have the opportunity to build version two of that, and learn from our experience, and build an organization and build a technology that can have a huge impact on the world, and do that quickly and safely.”

Elon Musk’s self-driving strategy still doesn’t include LiDAR


Elon Musk’s vision of autonomous driving differs from that of many of Tesla’s competitors in that he’s been adamant that LiDAR isn’t a core component of his approach. It’s a stance that provokes a lot of debate among experts in the field, many of whom, including former Tesla employees, disagree that full autonomy can be achieved with a sensor loadout that doesn’t include LiDAR.

“We have to solve passive optical image recognition extremely well in order to be able to drive in any environment and in any conditions,” Musk said on today’s Tesla quarterly earnings call. “At the point where you’ve solved it really well, what is the point in having active optical, which means LiDAR? In my view, it’s a crutch […] that will drive companies towards a hard corner that’s hard to get out of.”

Musk said that Tesla would want to do active photon generation in the radar wavelength, because radar can see through small occlusions, something that isn’t possible with visual information alone, even after applying machine learning to work out variances in lighting conditions and the like. He said that he “finds it quite puzzling that companies would choose to do active photon generation in the wrong wavelength,” however, meaning the laser spectrum that LiDAR uses, since it’s also very pricey.

In fact, Musk called it “expensive, ugly and unnecessary,” and added that even though he’s still clearly set against its inclusion in Tesla’s Autopilot designs, it’s still possible his own bet in this area isn’t 100 percent correct.

“Perhaps I’m wrong, in which case I’ll look like a fool,” he said. “But I’m quite certain that I’m not.”

The strongest argument in Musk’s favor might be that the best current drivers, humans, also lack LiDAR arrays and depend primarily on standard passive optical sensing when doing their own driving.

Continental taps Nvidia for its full-scale autonomous vehicle platform


Continental is the latest top-tier automotive supplier to work with Nvidia, and the latest to announce its intent to build a full-scale, top-to-bottom autonomous driving system. Continental (which might be most familiar from its tire division, but which supplies a range of automotive parts and systems across the industry) will be using Nvidia’s DRIVE autonomous vehicle platform for its system, and hopes to bring it to market by 2021.

The two companies will work together using dedicated engineering teams provided by both sides, the companies announced. The base technologies underlying the Continental offering include Nvidia’s DRIVE Xavier system-on-a-chip, DRIVE OS and its DRIVE AV software, while Continental will supply ASIL-D safety certification expertise, as well as radar, camera and LiDAR sensor solutions.

Continental’s offering aims to provide automakers and other prospective clients with autonomy from Level 2 through Level 5, spanning advanced cruise control features similar to Tesla’s Autopilot, Nissan’s ProPilot and GM’s Super Cruise, all the way up to true self-driving without so much as a steering wheel or a gas/brake pedal in sight.

Nvidia’s partnerships already span the industry and include many top-tier OEMs and suppliers, but Continental is still a formidable addition to the mix, and one that will help with its long-term positioning in the autonomous vehicle market.

We were in an accident during an automated driving tech demo

[embedded content]

As a transportation reporter, I’ve been in a lot of cars using either autonomous systems or semi-automated advanced driver assistance systems (ADAS), both on public and private roads, and yet I’d never before been in an accident during a demonstration drive of any of these features.

On Tuesday, January 30, two TechCrunch video production staffers and I were riding in a modified Hyundai Genesis equipped with technology created by autonomous systems startup Phantom AI, traveling south on the Bayshore Freeway near Millbrae, CA.

In addition to the three TechCrunch staff, there were two Phantom AI team members in the car: founder and CEO Hyunggi Cho and president and co-founder Chan Kyu Lee. Lee was at the wheel during the test, which was a demonstration of the startup’s Autopilot-like SAE Level 2 partial autonomy system, designed to maintain lane position, keep its distance from vehicles ahead, and switch lanes automatically when directed via the turn indicator.

During the demonstration, while the L2 system was engaged and we were traveling at 60 MPH according to Phantom (though the human-machine interface in the video above shows 70 MPH as our set cruising speed), a pickup truck ahead dropped a poorly secured garbage bin from its cargo bed onto the roadway. The car in front of us, a white Nissan Rogue, applied the brakes to avoid hitting the bin. Our driver noticed the sudden stop and braked hard in the Genesis to try to prevent a collision, but there was very little he could do at that stage, and our car hit the Nissan at approximately 20 MPH.

The Genesis suffered significant damage to the front end, as you can see above, though it was still technically drivable (it did begin leaking radiator fluid), and we first pulled to the shoulder as instructed by the California Highway Patrol, who were on the scene almost immediately. We were then directed by the Patrol to a Chevron station just beyond a nearby exit, where we parked on the shoulder of a turnaround and gave our contact information to a patrolman, while the drivers involved provided full statements to the authorities.

Cho told us that the Automatic Emergency Braking system would’ve normally engaged at that point and prevented the collision, but it was disabled for the demo because it had been throwing too many false positives and was undergoing tuning. Lee’s manual disengagement, which resulted from pressing the brake pedal to the floor, as you can see in the video, didn’t occur fast enough for us to shed all our speed.

Cho immediately made sure no one was hurt (we weren’t, just shaken up), and after being cleared to leave the scene, we decided to follow through with the rest of the demo and move to an area of Foster City, where we rode in the startup’s Level 4 autonomous test car (a separate Genesis) around sparsely populated residential streets without incident. That may seem like an odd choice given we’d just experienced an accident riding in a car equipped with a different version of the company’s technology, but the follow-up demo used a separate system, speeds were much lower, there was a different safety driver at the wheel, and honestly we were all just a little rattled and confused following the collision.

We gave Phantom AI the opportunity to provide a full account of what occurred from their perspective, based on a review of the data from the vehicle, and their breakdown is reproduced in full below. The video at the top of this article also documents the crash from within the vehicle, using a Rylo 360-degree camera. The footage was trimmed for length and exported to 1080p frames from a 4K 360-degree source automatically using Rylo’s software, and the license plate of the vehicle we hit was obscured for privacy, but otherwise the footage is unedited.

We were conducting a demo of the Adaptive Cruise Control (ACC) feature, and Lane Keeping Assist (LKA) feature of our L2 system.

Our Automatic Emergency Braking (AEB) functionality was disabled for demo purposes.

For a frame of reference we’ll use the moment of impact as t, and express the timeline in t minus x seconds.

t minus 6: 50m apart from the front car; our car was cruising at 60MPH with L2’s Cruise Control feature active

t minus 6: A trash can from the pickup truck 2 or 3 cars apart started flying off the trunk

t minus 5: Our driver realized that something was up as the car in front abruptly decelerated

t minus 4 to t minus 3: Our driver began taking over the L2 system; braking started and our L2 system was disengaged (can be clearly seen in the HMI display)

t minus 3 to t: Braking was applied as strongly as possible, with a braking force of 5-6 m/s^2 (you can see from the speedometer that we went from 60 MPH to 25 MPH over 3 seconds). As you can see, the driver was doing everything he could within these 3 seconds, expecting an impact.

t minus 0: impact at ~20MPH

So our driver clearly could have had a slightly faster reaction time, applied the brakes 1-1.5 seconds earlier, and braked harder, since the maximum possible braking of a vehicle is about 8 m/s^2. But at that instant, with you, a reporter, in the car, he was clearly overwhelmed, and the anxiety caused a slight delay in his reaction time and decision making.

In hindsight we should have enabled our emergency braking functionality, as this would have been a rare opportunity to test and demonstrate our automated braking system, which would have braked harder than the human driver. What happened was an unfortunate minor incident caused by human driving error which, had we enabled our full system, could have been avoided.
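
For what it’s worth, Phantom AI’s figures roughly check out. As a back-of-the-envelope sanity check (our own arithmetic, not the company’s), going from 60 MPH to 25 MPH in about three seconds works out to an average deceleration of roughly 5.2 m/s^2, within the 5-6 m/s^2 range cited above, while braking at the roughly 8 m/s^2 the statement says a vehicle can manage would have shed nearly 54 MPH of speed over the same window:

```python
# Back-of-the-envelope check of the deceleration figures in Phantom AI's statement.
MPH_TO_MS = 0.44704  # one mile per hour in meters per second

v_start = 60 * MPH_TO_MS   # speed when hard braking began (~26.8 m/s)
v_end = 25 * MPH_TO_MS     # speed roughly 3 seconds later, just before impact (~11.2 m/s)
braking_time = 3.0         # seconds of hard braking, per Phantom AI's timeline

decel = (v_start - v_end) / braking_time
print(f"Average deceleration: {decel:.1f} m/s^2")  # ~5.2 m/s^2, within the cited 5-6 m/s^2

# For comparison: braking at the ~8 m/s^2 maximum cited in the statement over the
# same 3 seconds would have shed about 24 m/s (~54 MPH) of speed instead of 35 MPH.
max_decel = 8.0
speed_shed_mph = (max_decel * braking_time) / MPH_TO_MS
print(f"Speed shed at maximum braking: {speed_shed_mph:.0f} MPH")
```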

Waymo orders thousands of Pacificas for 2018 self-driving fleet rollout


Waymo has ordered thousands of new Chrysler Pacifica minivans from FCA to help populate its autonomous ride-hailing fleet, which it will open to the public in 2018, the company says. The public launch of its Pacifica-based self-driving ride-hailing service is set to occur sometime later this year, after Waymo starts testing its minivans without anyone behind the wheel, achieving true Level 4 autonomy within its designated, bounded test area in Arizona.

The total size of the vehicle commitment isn’t known exactly as of yet, but FCA has already supplied Waymo with at least 500 vehicles in total, and now that number will cross into the “thousands” as Waymo prepares for its public launch and for the expansion of the service beyond its initial target launch market of Phoenix, where Waymo has been conducting its first pilot trial involving members of the public as passengers and customers.

The delivery of the new vehicles will begin late in the year, and the new additions to the autonomous fleet will be rolled out “across multiple U.S. cities,” according to Waymo.

Waymo worked directly with FCA engineers to build its autonomous driving tech into the Pacifica, a minivan with plenty of cabin comforts for rear seat passengers already built in. The van platform from Chrysler is also “ideal” for accommodating Waymo’s autonomy tech in terms of its electrical, powertrain, and structural systems, as well as its chassis design, according to the self-driving tech provider.

The plan for deploying Waymo’s self-driving services is to focus on ensuring absolute safety and reliability in strictly defined areas, and then to expand the boundaries of the service over time. Waymo isn’t the only one hoping to launch autonomous driving services in some capacity soon, however; GM’s Cruise is looking to deploy “at scale” in 2019, and Uber says it’ll have limited commercial availability of its service in around 18 months.

While the U.S. waits, China has been CRISPRing human cancer patients since 2015


While the U.S. is just gearing up to the idea of CRISPRing its first humans, China seems to be benefitting from a “move fast and break things… or cut them with the CRISPR scissors” motto.

As the Wall Street Journal reports, China has already gene-edited 86 people using CRISPR-Cas9 since 2015. Unhindered by rules and regulations like the ones we have in America to prevent science experiments gone wrong, Shanghai doctor Wu Shixiu has been using the technique on cancer patients.

In fact, it only took a half hour one afternoon, according to the Journal, for hospital administrators to sign off on Dr. Wu’s plans.

It’s not clear if these experiments worked — though some preliminary reports suggest some of the various trials have had some success. However, this is not the first time China’s used CRISPR on humans — often with disastrous results. Chinese scientists tried unsuccessfully to genetically modify human embryos using CRISPR in 2016 but found at least two-thirds to have genetic mutations and only a fraction of the 28 surviving embryos (out of 86 total tested) contained the replacement genetic material.

At least nine other CRISPR trials have been conducted on humans in China, according to listings in the U.S. National Library of Medicine database. The Journal found at least two other trials had been conducted on humans in China using the technique since 2015.

While some worry this gives China the edge in pioneering CRISPR medical techniques, others urge caution and patience with such a new technology. Both the U.S. and Europe, where the technique was first jointly discovered by Jennifer Doudna and Emmanuelle Charpentier, have implemented conservative regulatory measures and have yet to start human trials.

“It is hard to know what the ideal is between moving quickly and making sure patients are safe,” Dr. Carl June, the lead scientist for the CRISPR research team at the University of Pennsylvania, where the first U.S. human trials are slated to take place once researchers there can get through several regulatory hurdles, told the Journal.

Featured Image: Bryce Durbin