Tesla and Elon continue to develop 4680 batteries, AI4 and AI5 inference chips, and Tesla Dojo chips. I have projected that Tesla could gain trillions in value if it cracks Dojo and the chips prove broadly useful for xAI's large language model training. 4680 development work continues despite issues scaling toward the original 100 GWh per year goal. Despite being years behind schedule, the work continues because in-house 4680 production would make Tesla less dependent upon other battery companies like CATL and would give it a supply chain less dependent upon China.
Tesla and xAI can win even with Dojo chips that are half to one third as performant as the latest Nvidia chips. Tesla would be able to save billions to tens of billions in critical component spending and increase supply of a bottlenecked component.
The Dojo chip effort follows similar logic. Tesla needs Dojo to reduce dependence upon Nvidia, TSMC and Taiwan.
TSMC said they have Dojo D2 chips in production.
At the All-In Summit last year, Elon said that Dojo D2 chips would be in mass production late in 2025.
Getting to 10x D1 performance, in the range of an Nvidia B200, would mean avoiding the roughly 75% margins charged by Nvidia, easing the supply problem, and being able to optimize the chips for your own main workloads.
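To make the margin arithmetic concrete, here is a minimal back-of-envelope sketch. The only figures taken from the article are the roughly 75% Nvidia margin and the half-to-one-third relative performance; the chip prices and the Dojo unit cost are purely illustrative assumptions.

```python
# Back-of-envelope cost-per-performance comparison.
# All dollar figures below are illustrative assumptions, not
# actual Tesla or Nvidia numbers.

def cost_per_performance(unit_price, relative_performance):
    """Dollars spent per unit of training performance (lower is better)."""
    return unit_price / relative_performance

# Assume an Nvidia-class chip sells for $30,000. At a ~75% gross margin
# (per the article), most of that price is margin, not manufacturing cost.
nvidia_price = 30_000

# Assume Tesla pays roughly $10,000 per Dojo chip at near cost
# (a hypothetical figure including design amortization).
dojo_unit_cost = 10_000

nvidia_cpp = cost_per_performance(nvidia_price, 1.0)    # baseline
dojo_third = cost_per_performance(dojo_unit_cost, 1/3)  # 1/3 as fast
dojo_half  = cost_per_performance(dojo_unit_cost, 1/2)  # 1/2 as fast

print(f"Nvidia $/perf at market price: {nvidia_cpp:,.0f}")
print(f"Dojo $/perf at 1/3 performance: {dojo_third:,.0f}")
print(f"Dojo $/perf at 1/2 performance: {dojo_half:,.0f}")
```

Under these assumed numbers, a Dojo chip with one third the performance roughly breaks even on dollars per unit of performance, and one with half the performance comes out clearly ahead, which is the logic behind the claim that Tesla can win with a slower chip.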
Tesla has been hiring many new staff for the Dojo team in production-ramp roles.
Elon and others have talked about significantly more performance for D2 (the 10X D1 statement and the claims of competitiveness with the B200).
Tesla has a networking patent that enables microsecond communication between Dojo chips. This can make it easier to scale Dojo chips into a coherent-memory AI training system.
Elon has said at earnings calls and at the All-In Summit that he is more confident about the Dojo bet paying off. This was said after the D1 (Dojo 1) ExaPODs were mostly built. At the All-In Summit he expressed confidence in D2 volume production in late 2025 and D3 in 2026. This suggests billions more in spending and an expanding reliance on Tesla Dojo.
Business-wise, there would be huge value, which I have projected, from D2 and D3 chips that are closer to competitive. Escalating those efforts seems similar to the 4680 batteries: cracking the dry cathode proved harder than expected, but having in-house production capability is strategically important.
Google almost completely uses its own TPUs (Tensor Processing Units). Having control of your own AI chip production and AI technology stack is critical for Tesla and xAI. Tesla will not stop the Dojo and AI5 chip programs, nor the 4680 and other Tesla battery programs.
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.