
Tesla Releases New Full Self-Driving Beta Update With High-Level Changes


Tesla is starting to release a new Full Self-Driving (FSD) beta software update that includes many high-level changes that should have a positive impact on performance.

FSD Beta allows Tesla vehicles to drive autonomously to a destination entered into the car’s navigation system, but the driver must remain alert and ready to take control at all times.

Since the responsibility lies with the driver and not with Tesla’s system, it is still considered a Level 2 driver-assist system despite its name. The program has been somewhat of a “two steps forward, one step back” affair, as some updates have brought regressions in driving capabilities.

Tesla has been frequently releasing new software updates for the FSD Beta program and adding more owners to it.

The company now has over 100,000 people in the FSD Beta program and plans to expand it to everyone who purchases access in North America by the end of the year with a few more software updates to fine-tune the system.

Since it’s already November and it usually takes Tesla at least a month to deliver a new FSD Beta update, we expect Tesla to be an update or two away from the promised wider release.

Today, the automaker began rolling out a new FSD Beta update (v10.69.3) to employees for internal testing, which usually means it will soon be rolled out to beta testers in the customer fleet.

According to the release notes below, the update does not include any new features, but it does include many high-level updates to Tesla’s neural networks to improve overall system performance.

Tesla Full Self-Driving Beta v10.69.3 release notes, via Not a Tesla App:

– Upgraded the Object Detection network to use photon-count video streams and retrained all parameters with the latest auto-labeled datasets (with a special focus on low-visibility scenarios).

– Improved architecture for better accuracy and latency, higher recall of distant vehicles, a 20% reduction in velocity error for crossing vehicles, and a 20% improvement in VRU (vulnerable road user) precision.

– Converted the VRU Velocity network to a two-stage network, which reduced latency and improved crossing-pedestrian velocity error by 6% (a generic sketch of a two-stage setup follows these notes).

– Converted the non-VRU Attributes network to a two-stage network, which reduced latency, reduced incorrect lane assignment of crossing vehicles by 45%, and reduced incorrect parked-vehicle predictions by 15%.

– Reformulated the autoregressive Vector Lanes grammar to improve lane precision by 9.2%, lane recall by 18.7%, and fork recall by 51.1%. Includes a full network update in which all components were retrained with 3.8 times the amount of data.

– Added a new “road markings” module to the Vector Lanes neural network that reduces lane topology error at intersections by 38.9%.

– Upgraded the Occupancy Network to align with the road surface instead of the ego vehicle, for improved detection stability and improved recall at hill crests.

– Reduced the runtime of candidate trajectory generation by approximately 80% and improved smoothness by distilling an expensive trajectory optimization procedure into a lightweight planner neural network (see the distillation sketch after these notes).

– Improved decision making for short-deadline lane changes around gores through richer modeling of the trade-off between going off-route and the trajectory required to drive through the gore region.

– Reduced false slowdowns for pedestrians near crosswalks by using a better model of pedestrian kinematics (see the kinematics sketch after these notes).

– Added control for more accurate object geometry as detected by the general Occupancy Network.

– Improved control for vehicles cutting out of our desired path through better modeling of their turning/lateral maneuvers, thus avoiding unnatural slowdowns.

– Improved longitudinal control while offsetting around static obstacles by searching over feasible vehicle motion profiles.

– Improved smoothness of longitudinal control for in-lane vehicles in high relative-velocity scenarios by also considering relative acceleration in the trajectory optimization.

– Reduced best-case object photon-to-control system latency by 26% through adaptive scheduling of the planner, restructuring of trajectory selection, and parallelization of perception compute. This allows us to make decisions faster and improves reaction time (a parallelization sketch follows these notes).
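
A couple of the items above mention converting perception heads to “two-stage” networks. As a rough, generic illustration (not Tesla’s architecture; the module names, sizes, and PyTorch framing here are all assumptions), a two-stage setup runs a cheap detection stage over the whole frame and then a small second-stage head only on the features of detected objects, which is where the latency savings tend to come from:

```python
# Minimal, generic sketch of a "two-stage" perception network (illustrative only;
# module names, sizes, and structure are assumptions, not Tesla's architecture).
# Stage 1 detects objects from a shared feature map; stage 2 runs a small head
# only on the detected objects' features to regress per-object velocity, so the
# expensive computation is not repeated for every location in the scene.
import torch
import torch.nn as nn


class TwoStageVelocityNet(nn.Module):
    def __init__(self, feat_dim: int = 64, max_objects: int = 20):
        super().__init__()
        # Stage 1: shared backbone + detection head over the whole image.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        self.detect_head = nn.Conv2d(feat_dim, 1, kernel_size=1)  # objectness per cell
        # Stage 2: lightweight per-object head that only sees pooled object features.
        self.velocity_head = nn.Sequential(
            nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 2)  # (vx, vy)
        )
        self.max_objects = max_objects

    def forward(self, image: torch.Tensor):
        feats = self.backbone(image)                      # (B, C, H, W)
        objectness = self.detect_head(feats).sigmoid()    # (B, 1, H, W)
        B, C, H, W = feats.shape
        flat_feats = feats.flatten(2).transpose(1, 2)     # (B, H*W, C)
        flat_scores = objectness.flatten(2).squeeze(1)    # (B, H*W)
        # Keep only the top-k most likely object locations for stage 2.
        topk = flat_scores.topk(self.max_objects, dim=1).indices
        obj_feats = torch.gather(
            flat_feats, 1, topk.unsqueeze(-1).expand(-1, -1, C)
        )                                                  # (B, k, C)
        velocities = self.velocity_head(obj_feats)         # (B, k, 2)
        return objectness, velocities


# Quick smoke test on a random frame.
net = TwoStageVelocityNet()
scores, vels = net(torch.randn(1, 3, 128, 128))
print(scores.shape, vels.shape)
```

Because the velocity head only processes the top-k object features rather than the whole feature map, its cost scales with the number of objects instead of the image size.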
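The trajectory-generation note describes distilling an expensive optimization procedure into a lightweight planner network. In general terms, that means using the slow optimizer as a teacher to generate target trajectories and training a small network to imitate them, so only the cheap network runs in the car. Below is a minimal sketch of that pattern, with a made-up `expensive_trajectory_optimizer` standing in for the teacher; nothing here reflects Tesla’s actual planner:

```python
# Generic sketch of distilling an expensive trajectory optimizer into a small
# neural network (illustrative only; `expensive_trajectory_optimizer` is a
# stand-in, and all shapes and hyperparameters are assumptions).
import torch
import torch.nn as nn

STATE_DIM = 8      # e.g. ego pose, speed, and a few nearby-obstacle features
HORIZON = 10       # number of (x, y) waypoints in the planned trajectory


def expensive_trajectory_optimizer(state: torch.Tensor) -> torch.Tensor:
    """Stand-in for a slow optimization-based planner used as the teacher.
    Here it just produces a smooth deterministic curve from the state."""
    t = torch.linspace(0.1, 1.0, HORIZON)
    x = t * (1.0 + state[..., 0:1])            # forward progress
    y = 0.5 * t**2 * state[..., 1:2]           # gentle lateral curve
    return torch.stack([x, y], dim=-1).reshape(*state.shape[:-1], HORIZON * 2)


# Student: a lightweight network that maps the same state to the same trajectory.
student = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, HORIZON * 2),
)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(2000):
    states = torch.randn(256, STATE_DIM)                       # sampled situations
    with torch.no_grad():
        teacher_traj = expensive_trajectory_optimizer(states)  # slow "ground truth"
    pred_traj = student(states)                                # fast approximation
    loss = nn.functional.mse_loss(pred_traj, teacher_traj)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final imitation loss: {loss.item():.4f}")
# At inference time, only the cheap student runs, which is where a large
# reduction in candidate-trajectory generation time would come from.
```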
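The crosswalk note attributes fewer false slowdowns to a better pedestrian kinematics model. A toy version of that idea, with assumed thresholds and purely illustrative numbers, is to extrapolate a pedestrian’s lateral motion and only slow down if they would actually reach the lane within a short horizon:

```python
# Toy illustration of using a kinematic model of a pedestrian to decide whether a
# slowdown is warranted (a simple constant-velocity extrapolation under assumed
# thresholds; not Tesla's model).
from dataclasses import dataclass


@dataclass
class Pedestrian:
    x: float   # lateral distance from the ego lane's edge, in meters
    vx: float  # lateral velocity toward the lane, in m/s (positive = approaching)


def will_enter_lane(ped: Pedestrian, horizon_s: float = 3.0) -> bool:
    """Extrapolate the pedestrian's lateral position over a short horizon and
    report whether they would reach the lane edge within that time."""
    if ped.vx <= 0.0:
        return False                      # standing still or moving away: no slowdown
    time_to_lane = ped.x / ped.vx
    return time_to_lane <= horizon_s


# A pedestrian 2 m away drifting at 0.1 m/s would take 20 s to reach the lane,
# so no slowdown; one walking at 1.5 m/s would reach it in about 1.3 s.
print(will_enter_lane(Pedestrian(x=2.0, vx=0.1)))  # False
print(will_enter_lane(Pedestrian(x=2.0, vx=1.5)))  # True
```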
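Finally, the latency item credits part of the 26% improvement to parallelizing perception compute. The generic idea, shown here with plain Python threads and made-up per-camera costs, is that independent pipelines running concurrently bring the worst-case latency close to the slowest single pipeline rather than the sum of all of them:

```python
# Rough sketch of parallelizing independent perception work so planning can start
# sooner (generic Python concurrency, only illustrating the idea of overlapping
# compute; function names and timings are made up).
import time
from concurrent.futures import ThreadPoolExecutor


def run_camera_pipeline(camera: str) -> str:
    time.sleep(0.05)          # pretend this is a neural-network inference pass
    return f"{camera}: objects detected"


CAMERAS = ["front", "left", "right", "rear"]

# Sequential: total latency is the sum of the per-camera costs.
start = time.perf_counter()
sequential = [run_camera_pipeline(c) for c in CAMERAS]
print(f"sequential: {time.perf_counter() - start:.3f}s")

# Parallel: the pipelines run concurrently, so worst-case latency approaches the
# slowest single pipeline instead of the sum, shortening photon-to-control time.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(CAMERAS)) as pool:
    parallel = list(pool.map(run_camera_pipeline, CAMERAS))
print(f"parallel:   {time.perf_counter() - start:.3f}s")
```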

