Promise versus Reality: Trends in Big Data and AI 2025

2025 is shaping up to be yet another huge year for technology and geopolitics. The hype around AI is so thick that it can be hard to cut through and get a sense of reality. Yet the initial promise of a technology rarely predicts how it ultimately develops and integrates into society. The space between promise and reality offers insight into some interesting trends for 2025.
This column follows two big shifts: first, the AI hype cycle has peaked; second, a shifting global order is splintering support for international collaboration and increasingly militarising AI.
The AI hype has peaked.
In 2023, AI became a feature rather than the future, and by 2024 the AI hype had peaked. In 2025, we’ll see more nuance in the conversation about AI. We can anticipate conversations about data, purpose, the way AI is used fairly (or not so fairly) in society and war, value for money, and governance.
There has been a growing sense for some time that generative AI, at least in its current form, is heading towards the end of its lifecycle, with some predicting that valuations will drop precipitously. At Davos in January this year, Meta’s Yann LeCun suggested that within three to five years large language models (LLMs) would make way for other forms of AI that may not share the current limitations.
While AI will continue to shape industries and drive innovation, generative AI passed the ‘Peak of Inflated Expectations’ in 2024. Hopefully, this portends more meaningful innovation and implementation of exciting new products that don’t rely on the worst parts of the data economy.
Consumers and companies want and need products that add value and seize the potential of AI as quickly as possible. DeepSeek’s emergence showed, among many other things, that there is opportunity for vast improvements in model efficiency. We can expect a trajectory of continual improvement on existing models, especially open-source ones.
No existing generative AI technology solves factual inaccuracy or fabrication, an inherent limitation of LLMs. Problems with the reliability of information persist, and many users report banal and boring outcomes. Nor do the models yet appear to have a sustainable business model, despite paid tiers promising ‘improved performance’ and unprecedented multi-billion-dollar investments. Microsoft’s Satya Nadella said last month that the world has yet to turn AI hype and spending into meaningful economic lift.
We could see a competitive reset in the AI market, focusing development on technologies that add real value, improve people’s lives and help address, rather than worsen, global challenges such as unreliable information, climate change and unstable governance. It could also result in geographical market and supply chain diversification.
Another possibility is that DeepSeek’s emergence reinforces global competition on LLMs at the expense of other applications of AI. A narrow focus on LLMs and artificial general intelligence, sidelining other AI applications likely worth trillions, could lead to missed commercial opportunities, disillusioned users, increased social harms and rocky markets.
AI is here to stay, but if the hype cycle has peaked, then what’s next? Three areas of technology worth watching closely are quantum, ‘living intelligence’ (synthetic biology) and neurotechnology. All three will likely rely heavily on AI while producing globally disruptive innovations in the next few years.
The global order is shifting.
Trade wars, supply chain security and isolationism threaten global collaboration on AI development and its regulation. AI will increasingly be a source of conflict rather than cooperation as support splinters in multilateral forums and states increasingly use their power to influence global technology access and regulation.
While 2024 produced one of the most active antitrust legal landscapes globally in decades, this seems likely to change in 2025. Under Trump’s administration, federal regulatory action is expected to decline, with a shift away from enforcing legacy guardrails. This will combine with more aggressive nationalism. For example, a recent Executive Order signals the intent to flex US state power, including tariffs, fines and ‘other burdens or responsive actions’, against countries, such as Australia, Canada and the EU, that regulate US tech companies, govern data flows or enforce domestic competition law.
The Paris AI Action Summit symbolised the fissures in global approaches to AI. Sixty countries, including France, China, India, Japan, Australia and Canada, signed a declaration for “inclusive and sustainable” AI. The United Kingdom and United States notably refused to sign. The tone of US Vice President JD Vance’s address at the summit was clear: “I am not here to talk about AI Safety. I am here to talk about AI opportunity.”
Indeed, there has been a search for ‘safe AI’ for the past few years as AI Safety Institutes have sprung up. Now, it seems nobody wants to talk about AI safety. US President Donald Trump revoked former President Joe Biden’s Executive Order aimed at promoting the safe, secure and trustworthy development and use of AI. In a growing trend of militarised AI, Google recently abandoned a long-standing commitment to prohibit the use of AI in weapons or surveillance.
A desire for less regulation, domestic and global, has seen the Trump administration admonish allies and threaten nations over AI regulation. In Paris, Vance warned global leaders against attempting to regulate not just AI but US tech companies generally. This rhetoric (including Vance’s address to the Munich Security Conference blasting European leaders) splinters support in multilateral forums and calls into question the foundational norms of international relations and global institutions.
The reality is that AI technologies and their infrastructures (data, connectivity, energy generation and access, compute capacity, and workforce), which I’ve described elsewhere as the “architectures of AI”, are critical sources of geopolitical power. How innovative developments progress, how they are regulated globally, and how nation states wield their AI power will continue to unfold in 2025. Currently, China, which has the toughest AI laws in the world, including on safety, is out-innovating everyone.
To date, much of the way we think about technology has been shaped by tech companies themselves. In 2025, I suggest that will change. Increased awareness of alternative business models, combined with nations and people gaining a clearer idea of the technologies they want, will see more exciting futures imagined. I think we’ll see a change in how people and nations approach AI.
Globally, I think we will see some course correction on the AI hype, and hopefully some exciting innovations as a result, although this too may end up as a source of conflict. The reality of regulating big tech companies whose actions are backed by the US government is likely to scare some nations, or to spur an alternative coalition. This is occurring as the United States transitions its role in the global order from that of a stable superpower towards a less clear destination.
Dr Miah Hammond-Errey is the CEO of Strat Futures, host of the Technology & Security podcast and an adjunct Associate Professor at Deakin University.
This article is published under a Creative Commons License and may be republished with attribution.