
Tesla releases new Full Self-Driving Beta update, and it’s a big one

Tesla has started to push a new Full Self-Driving (FSD) Beta software update with improvements based on over 250,000 training video clips from its fleet.

Based on the release notes, it’s a big update.

FSD Beta enables Tesla vehicles to drive autonomously to a destination entered in the car’s navigation system, but the driver needs to remain vigilant and ready to take control at all times.

Since the responsibility lies with the driver and not with Tesla’s system, it is still considered a level two driver-assist system despite its name. The program has been sort of “two steps forward, one step back,” as some updates have brought regressions in driving capabilities.

Tesla has been frequently releasing new software updates to the FSD Beta program and adding more owners to it.

The company now has around 100,000 owners in the program, and with more participants, it is expected to gather more data to train its neural nets.

Today, Tesla started pushing a new FSD Beta software update (2022.12.3.10), and according to the release notes, it is one of the most extensive updates to date.

Interestingly, Tesla notes for the first time the number of video clips pulled from the fleet and used to train certain new behaviors: the automaker mentions a total of over 250,000 new video clips used in the training sets for this update.

Tesla also said that it has removed three older neural nets from the system, which enabled a 1.8 frames-per-second improvement in the system frame rate.

The release notes also mention many more improvements – several of them related to the confidence with which the system takes action, which has been a source of frustration for FSD Beta users in the past (a rough, unofficial sketch of that kind of go/no-go logic follows the release notes below).

You can read more about all the improvements in the release notes below:

FSD BETA v10.12 Release Notes

  • Upgraded decision making framework for unprotected left turns with better modeling of objects’ response to ego’s actions by adding more features that shape the go/no-go decision. This increases robustness to noisy measurements while being more sticky to decisions within a safety margin. The framework also leverages median safe regions when necessary to maneuver across large turns and accelerates harder through maneuvers when required to safely exit the intersection.
  • Improved creeping for visibility using more accurate lane geometry and higher resolution occlusion detection.
  • Reduced instances of attempting uncomfortable turns through better integration with object future predictions during lane selection.
  • Upgraded planner to rely less on lanes to enable maneuvering smoothly out of restricted space.
  • Increased safety of turns crossing traffic by improving the architecture of the lanes neural network which greatly boosted recall and geometric accuracy of crossing lanes.
  • Improved the recall and geometric accuracy of all lane predictions by adding 180,000 video clips to the training set.
  • Reduced traffic control related false slowdowns through better integration with lane structure and improved behavior with respect to yellow lights.
  • Improved the geometric accuracy of road edge and line predictions by adding a mixing/coupling layer with the generalized static obstacle network.
  • Improved geometric accuracy and understanding of visibility by retraining the generalized static obstacle network with improved data from the autolabeler and by adding 30,000 more video clips.
  • Improved recall of motorcycles, reduced velocity error of close-by pedestrians and bicyclists, and reduced heading error of pedestrians by adding new sim and autolabeled data to the training set.
  • Improved precision of the “is parked” attribute on vehicles by adding 41,000 clips to the training set. Solved 48% of failure cases captured by our telemetry of 10.11.
  • Improved detection recall of far-away crossing objects by regenerating the dataset with improved versions of the neural networks used in the autolabeler which increased data quality.
  • Improved offsetting behavior when maneuvering around cars with open doors.
  • Improved angular velocity and lane-centric velocity for non-VRU objects by upgrading them into network-predicted tasks.
  • Improved comfort when lane changing behind vehicles with harsh deceleration by tighter integration between the lead vehicles’ future motion estimates and the planned lane change profile.
  • Increased reliance on network-predicted acceleration for all moving objects, which previously applied only to longitudinally relevant objects.
  • Updated nearby vehicle assets with visualization indicating when a vehicle has a door open.
  • Improved system frame rate +1.8 frames per second by removing three legacy neural networks.
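The first bullet above, about making the go/no-go decision for unprotected left turns “more sticky to decisions within a safety margin,” reads like a hysteresis band around the decision threshold. Tesla has not published its planner code, so the short Python sketch below is purely illustrative: the class names, thresholds, and gap model are assumptions made up for the example, not Tesla’s implementation. It only shows how a wider commit margin and a narrower abort margin keep noisy gap estimates from flipping the decision back and forth mid-maneuver.

```python
# Purely illustrative sketch -- NOT Tesla's code. It demonstrates the general
# idea of a go/no-go decision that stays "sticky" within a safety margin
# (hysteresis), so noisy gap estimates don't cause constant flip-flopping.
from dataclasses import dataclass


@dataclass
class GapEstimate:
    time_to_conflict_s: float    # estimated time until crossing traffic reaches the conflict zone
    required_clearance_s: float  # time ego needs to clear the intersection


class LeftTurnGoNoGo:
    """Toy go/no-go gate with a hysteresis band around the safety margin."""

    def __init__(self, commit_margin_s: float = 2.0, abort_margin_s: float = 1.0):
        # Commit only when slack exceeds the larger margin; once committed,
        # hold the decision unless slack falls below the smaller margin.
        assert commit_margin_s > abort_margin_s
        self.commit_margin_s = commit_margin_s
        self.abort_margin_s = abort_margin_s
        self.committed = False

    def update(self, gap: GapEstimate) -> bool:
        slack = gap.time_to_conflict_s - gap.required_clearance_s
        if not self.committed and slack >= self.commit_margin_s:
            self.committed = True    # go: the gap is comfortably large
        elif self.committed and slack < self.abort_margin_s:
            self.committed = False   # abort: the gap has genuinely collapsed
        return self.committed


if __name__ == "__main__":
    gate = LeftTurnGoNoGo()
    # Slack values hovering around a single threshold would toggle the
    # decision on every tick without the hysteresis band.
    for ttc in [5.8, 6.3, 5.9, 6.1, 5.7, 4.2]:
        go = gate.update(GapEstimate(time_to_conflict_s=ttc, required_clearance_s=4.0))
        print(f"time_to_conflict={ttc:.1f}s -> {'GO' if go else 'WAIT'}")
```

In this toy loop, the gate commits once the slack clears the 2-second commit margin and then holds that decision through small dips, only aborting when the slack drops below the 1-second abort margin. The actual feature in v10.12 presumably relies on far richer object predictions, but the “sticky within a safety margin” behavior it describes is the same shape of mechanism.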

Subscribe to Electrek on YouTube for exclusive videos and subscribe to the podcast.
