Data availability used to be treated as a background assumption. As long as blocks were produced and data was posted somewhere, the system kept going. That attitude worked when throughput was low and demand was predictable. It does not hold up under modern execution conditions.
As the number of chains grows, data availability becomes the bottleneck around which everything else turns. Execution speed, decentralization, and even security assumptions depend directly on large volumes of data being accessible to many independent parties at the same time. When that breaks, systems do not fail with a flourish; they degrade, centralize, or quietly reintroduce the trust that was never supposed to be there.
@Walrus 🦭/acc recognized this shift early. It treats availability not as a backend service but as infrastructure that must withstand load and whose incentives scale with actual usage. That framing matters because it turns DA into a discrete risk, something systems can reason about and make rational decisions around.
As execution becomes faster and more modular, data availability stops being optional. It becomes the bottleneck that determines what is realistically possible.

