Starlink Is Riding Down Wright’s Law’s Cost Curve, & More...
ARK • Disrupt
Your weekly innovation newsletter
It's Monday, January 6, 2025. Please enjoy ARK's weekly newsletter curated by our thematic research team and designed to keep you engaged with disruptive innovation.
Starlink Is Riding Down Wright’s Law’s Cost Curve
By Sam Korus | @skorusARK | Director of Research, Autonomous Technology & Robotics
SpaceX disclosed striking data about Starlink V3 in its 2024 Progress Report.1 Each V3 satellite, for example, delivers 1 Terabit per second (Tbps) of downlink capacity, ten times that of a V2 Mini. In other words, given the improved satellites and the larger launch vehicle, a single Starship V3 launch should add 60 Tbps to the network, over twenty times that of a V2 Mini launch.
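To see how those per-launch figures fit together, here is a minimal sketch of the capacity arithmetic. The per-satellite throughputs follow the figures above; the satellites-per-launch counts are illustrative assumptions, not numbers disclosed by SpaceX.

```python
# Back-of-the-envelope per-launch downlink capacity for Starlink.
# Per-satellite throughputs follow the figures cited above; the
# satellites-per-launch counts are illustrative assumptions only.

V3_TBPS_PER_SAT = 1.0          # V3: ~1 Tbps downlink per satellite
V2_MINI_TBPS_PER_SAT = 0.1     # V2 Mini: ~1/10th of a V3 ("ten times" above)

V3_SATS_PER_STARSHIP = 60      # assumed V3 satellites per Starship launch
V2_MINI_SATS_PER_FALCON9 = 23  # assumed V2 Minis per Falcon 9 launch

starship_launch_tbps = V3_TBPS_PER_SAT * V3_SATS_PER_STARSHIP            # ~60 Tbps
falcon9_launch_tbps = V2_MINI_TBPS_PER_SAT * V2_MINI_SATS_PER_FALCON9    # ~2.3 Tbps

print(f"Starship V3 launch: {starship_launch_tbps:.0f} Tbps")
print(f"V2 Mini launch:     {falcon9_launch_tbps:.1f} Tbps")
print(f"Ratio: ~{starship_launch_tbps / falcon9_launch_tbps:.0f}x")      # > 20x
```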
SpaceX continues to ride down Wright’s Law’s cost curve for satellite bandwidth capacity, as shown below. Clearly, competitors are finding it difficult to match SpaceX’s cost declines.
As ARK published in 2023,2 based on Wright’s Law, satellite bandwidth costs should decline ~45% for every cumulative doubling of Gigabits per second (Gbps) in orbit. Since 2004, the cost of satellite bandwidth has dropped 7,500-fold, from $300,000,000 to $40,000 per Gbps. Thanks to Starship,3 costs could fall another 40-fold to ~$1,000/Gbps by 2028. Because 1 Gbps can serve 200 customers at a capital cost of ~$1,000/Gbps, SpaceX could recoup its Starship investment with a one-time charge of $5 per customer.4
Sources: ARK Investment Management LLC, 2023, based on data from VanderMeulen et al. 2015.5 For informational purposes only and should not be considered investment advice or a recommendation to buy, sell, or hold any particular security. Forecasts are inherently limited and cannot be relied upon. Past performance is not indicative of future results.
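For readers who want to trace the Wright’s Law arithmetic, the short sketch below applies a 45% cost decline per cumulative doubling of in-orbit bandwidth and recovers the per-customer figure cited above. The cost and customers-per-Gbps inputs come from the paragraph above; the implied number of doublings is a back-of-the-envelope output, not ARK’s full model.

```python
import math

# Wright's Law: satellite bandwidth cost falls ~45% for every cumulative
# doubling of Gbps in orbit.
LEARNING_RATE = 0.45
cost_per_gbps_2024 = 40_000    # $/Gbps today (from the text)
target_cost_per_gbps = 1_000   # ~$1,000/Gbps projected by 2028 (from the text)

def cost_after_doublings(start_cost: float, doublings: float) -> float:
    """Cost per Gbps after a given number of cumulative doublings in orbit."""
    return start_cost * (1 - LEARNING_RATE) ** doublings

# How many cumulative doublings does the projected 40-fold decline imply?
doublings_needed = math.log(cost_per_gbps_2024 / target_cost_per_gbps) / math.log(1 / (1 - LEARNING_RATE))
print(f"Implied cumulative doublings: ~{doublings_needed:.1f}")  # ~6
print(f"Cost at that point: ${cost_after_doublings(cost_per_gbps_2024, doublings_needed):,.0f}/Gbps")  # ~$1,000

# Per-customer payback: 1 Gbps serves ~200 customers at ~$1,000/Gbps of capital.
customers_per_gbps = 200
print(f"Capital cost per customer: ${target_cost_per_gbps / customers_per_gbps:.0f}")  # ~$5
```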
Technology Could Cause An Inflection In Economic Activity During The Second Half Of This Decade
Entering 2025, real GDP growth was averaging slightly above 3%, roughly in line with the global average since 1900. While the consensus forecast is that growth will continue at that rate through the end of the decade before decelerating, ARK’s research suggests that a macroeconomic inflection will arrive sooner, and very much to the upside.
According to the International Monetary Fund (IMF), global GDP will increase from $127 trillion in 2024 to $150 trillion by 2030, as shown by the yellow bar in the chart below. In contrast, ARK forecasts that real growth will accelerate to more than 7%, pushing GDP above $190 trillion in 2030, as shown by the purple extension of the yellow bar.6
Source: ARK Investment Management LLC, 2025. This ARK analysis draws on a range of external data sources, as of December 31, 2024, which may be provided upon request. For informational purposes only and should not be considered investment advice or a recommendation to buy, sell, or hold any particular security. Past performance is not indicative of future results. Forecasts are inherently limited and cannot be relied upon.
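As a quick sanity check on those end-points, compounding the 2024 base at the two growth rates reproduces both figures; the ~3% rate is the historical and consensus average cited above, and the ~7% rate is ARK’s forecast.

```python
# Compound $127 trillion of 2024 global GDP (2024 dollars) out to 2030
# at the two growth rates discussed above.
gdp_2024_trillions = 127
years = 6  # 2024 -> 2030

consensus_path = gdp_2024_trillions * 1.03 ** years  # ~$152T, close to the IMF's ~$150T
ark_path = gdp_2024_trillions * 1.07 ** years        # ~$191T, i.e. above $190T

print(f"~3% consensus path by 2030: ${consensus_path:.0f} trillion")
print(f"~7% ARK path by 2030:       ${ark_path:.0f} trillion")
```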
Over time, economic growth has changed in a step-function fashion, as shown in the chart above. Moreover, the time between step-changes has shrunk, with the most recent step-change caused by the Second Industrial Revolution.
ARK’s research suggests that Artificial Intelligence (AI) is igniting a new global growth wave, evidence of which will accumulate during the next five years. Consistent with classic economic theory, AI is in the process of boosting productivity—substituting capital for labor—which should kick-start rapid gains in economic output. If it becomes a near-perfect substitute for labor, as we believe is likely, AI could create self-reinforcing cycles of reinvestment and accelerated growth.
To illustrate that dynamic, a Tesla Cybercab is likely to provide 8x the miles of transportation services relative to a traditional vehicle that costs the same or more to produce. Priced at a rate per mile competitive with personally owned vehicles, such a service would be highly profitable for Tesla. Because the market potential is vast—~$10 trillion annually, by our estimates—Tesla is likely to reinvest its profits in the production of more Cybercabs, instead of paying them out to shareholders.
In other words, the Cybercab’s economic productivity is likely to feed directly into the production of more Cybercabs which, in turn, should prove wildly productive economically. Not only would Tesla produce 8x the transportation service for the same amount of capital associated with traditional auto production, but autonomous vehicles also would free up time for passengers who otherwise would have to drive the vehicles themselves.
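A toy reinvestment loop illustrates how such a cycle compounds. The per-vehicle figures below are purely illustrative placeholders, not ARK or Tesla estimates; the point is the feedback from profits to fleet size.

```python
# Toy reinvestment loop: robotaxi profits fund production of more robotaxis.
# All per-vehicle figures are illustrative placeholders, not ARK or Tesla estimates.
fleet = 100_000                      # starting Cybercab fleet (placeholder)
cost_per_vehicle = 30_000            # $ to produce one Cybercab (placeholder)
annual_profit_per_vehicle = 15_000   # $ of operating profit per vehicle-year (placeholder)

for year in range(1, 6):
    profit = fleet * annual_profit_per_vehicle
    fleet += profit // cost_per_vehicle  # reinvest profits into new vehicles
    print(f"Year {year}: fleet = {fleet:,}")

# With these placeholders, each vehicle funds half of a new one per year,
# so the fleet compounds ~50% annually instead of paying profits out.
```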
Robotaxis and humanoid robots could increase physical production dramatically, while liberating human time otherwise tied up in household chores. Moreover, agentic AI could automate administrative processes that otherwise would bog down human potential. Finally, an aging labor force could remain vital and contribute productively, thanks to the biological insights likely sparked by the convergence of AI and multiomics.
Put simply, the convergence among technologies that are changing at an accelerated rate should turbocharge economic growth during the next five years. Buckle your seatbelts!
Is Quantum Computing A Threat To Bitcoin?
New "Bitcoin Brainstorm" featuring ARK's Cathie Wood and Frank Downing
The Power Of Compounding Is Scaling AI Hardware And Software
By Frank Downing | @downingARK | Director of Research, Next Generation Internet
Even as reports suggest7 that the cost to train and operate frontier AI models is seemingly growing out of control, new hardware efficiencies and algorithmic improvements in software are providing an underappreciated tailwind to progress in AI.
ARK observes that the cost to train AI models is falling at an annual rate of 75%, and that inference costs are falling even faster, at 90%. Based on our research, roughly half of those steep cost declines come from new hardware advances. Nvidia, for example, increased the amount of high-bandwidth memory per chip by ~75% in the H200 relative to the H100, doubling peak inference performance. The other half of the declines come from the software layer, as AI developers achieve the same performance with less compute. Examples of innovation at the software layer include FlashAttention, which increases training efficiency by 2.8x in GPT models, and speculative decoding, which speeds up inference by 2-3x. We detailed this paradigm in ARK’s Big Ideas 2024.8
Combining hardware-level and software-level improvements, training costs have been halving roughly every six months, much faster than the two-year cadence suggested by Moore’s Law. Applied to inference, those cost declines are leading to tremendous improvements in the scalability of “inference-time compute.” As a result, OpenAI’s latest o3 models are solving problems that previously were impossible for large language models (LLMs). On the ARC-AGI benchmark—a series of problems created to challenge AI models on tasks easy for humans but very difficult for AI—o3 requires thousands of dollars of compute9 to solve a problem that most humans can complete in a few minutes, for example. Given declines at an annualized rate of 90%, such exorbitant costs should drop to pennies within a few years.
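Those rates compound quickly. The sketch below confirms that a 75% annual decline implies a roughly six-month halving time and shows how a 90% annual decline pulls a cost measured in thousands of dollars down toward pennies; the $5,000 starting figure is illustrative, not o3’s actual per-task bill.

```python
import math

# A 75% annual decline multiplies cost by 0.25 each year; check the halving time.
halving_time_years = math.log(0.5) / math.log(0.25)
print(f"Halving time at a 75% annual decline: {halving_time_years:.1f} years")  # 0.5, i.e. ~6 months

# At a 90% annual decline, cost is multiplied by 0.10 each year.
cost = 5_000.0  # illustrative per-task compute cost in dollars (assumption)
for year in range(1, 6):
    cost *= 0.10
    print(f"After year {year}: ${cost:,.2f}")
# An illustrative ~$5,000 task falls to roughly five cents within five years.
```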
Beyond compounding cost declines, an underappreciated point is that algorithmic efficiencies can be deployed as a fleet-wide software update, whereas hardware advancements apply only to the portion of the installed base newly purchased in a given year. In theory, then, a software optimization that doubles inference performance could double the revenue-generating potential of a two-year-old GPU!
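To make the fleet-wide point concrete, the sketch below contrasts a hardware refresh, which lifts only newly purchased units, with a software optimization that applies to every GPU already deployed. The fleet sizes and 2x factors are illustrative placeholders.

```python
# Illustrative contrast: a hardware upgrade lifts only newly purchased GPUs,
# while a software optimization applies to every GPU already deployed.
# Fleet sizes and the 2x factors are placeholders for illustration.
installed_gpus = 1_000_000   # existing installed base
new_gpus_per_year = 250_000  # units added in a given year
base_throughput = 1.0        # normalized inference throughput per GPU

# Case 1: only the new hardware is 2x faster.
hardware_refresh = installed_gpus * base_throughput + new_gpus_per_year * base_throughput * 2

# Case 2: a 2x software optimization is pushed to the whole fleet, old and new.
fleet_wide_software = (installed_gpus + new_gpus_per_year) * base_throughput * 2

print(f"Hardware refresh only:  {hardware_refresh:,.0f} throughput units")
print(f"Fleet-wide 2x software: {fleet_wide_software:,.0f} throughput units")
```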
That said, given the competitive dynamics in the LLM space, much of that value creation is likely to flow back to users in the form of lower prices and better performance. This analysis considers only “core” model performance. As the industry moves toward AI agents, compound systems built from multiple models and tools should extend functionality cost-effectively, suggesting miles to go before AI capability hits a plateau.
1 Starlink. 2024. “Internet from Space for Humans on Earth.”
2 Big Ideas 2023. ARK Investment Management LLC.
3 Starship is SpaceX’s next-generation rocket, designed to launch its next-generation satellites.
4 This assumes an oversubscription ratio of 20: 20x more people pay for the service than actively use it at any given time. Note also that this calculation does not incorporate satellite lifespans, satellite utilization, or ground-based infrastructure costs, all of which will impact costs and pricing decisions.
5 VanderMeulen, R. et al. 2015. "High-Capacity Satellite Communications - Cost-effective Bandwidth Technology." Space Symposium, Technical Track.
6 All figures in 2024 dollars.
7 Seetharaman, D. 2024. “The Next Great Leap in AI Is Behind Schedule and Crazy Expensive.” The Wall Street Journal.
8 ARK Investment Management LLC. 2024. “Big Ideas 2024: Disrupting the Norm, Defining the Future.” See especially p. 26 in the section on Artificial Intelligence. This ARK analysis is based on a range of data sources, which are available upon request. See Benaich, N. 2023. “State of AI Report.” Air Street Capital. Touvron, H. et al. 2023. “Llama 2: Open Foundation and Fine-Tuned Chat Models.” arXiv. Yang, C. et al. 2023. “Large Language Models as Optimizers.” arXiv. Leviathan, Y. et al. 2022. “Fast Inference from Transformers via Speculative Decoding.” arXiv. Dao, T. 2023. “FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning.” Center for Research on Foundation Models, Stanford University. Forecasts are inherently limited and cannot be relied upon. For informational purposes only and should not be considered investment advice or a recommendation to buy, sell, or hold any particular security. Past performance is not indicative of future results.
9 Chollet, F. 2024. “OpenAI o3 Breakthrough High Score on ARC-AGI-Pub.” ARC Prize.
This Newsletter is for informational purposes only and does not constitute, either explicitly or implicitly, any provision of services or products by ARK Investment Management LLC (“ARK”). Investors should determine for themselves whether a particular service or product is suitable for their investment needs or should seek such professional advice for their particular situation. All content is original and has been researched and produced by ARK unless otherwise stated therein. No part of the content may be reproduced in any form, or referred to in any other publication, without the express written permission of ARK. All statements made regarding companies, securities or other financial information contained in the content or articles relating to ARK are strictly beliefs and points of view held by ARK and are not endorsements of any company or security or recommendations to buy or sell any security. By visiting and/or otherwise using the ARK website in any way, you indicate that you understand and accept the terms of use as set forth on the website and agree to be bound by them. If you do not agree to the terms of use of the website, please do not access the website or any pages thereof. Any descriptions of, references to, or links to other products, publications or services do not constitute an endorsement, authorization, sponsorship by or affiliation with ARK with respect to any linked site or its sponsor, unless expressly stated by ARK. Any such information, products or sites have not necessarily been reviewed by ARK and are provided or maintained by third parties over whom ARK exercises no control. ARK expressly disclaims any responsibility for the content, the accuracy of the information, and/or quality of products or services provided by or advertised on these third-party sites. For full disclosures, click here.