Companies frequently find themselves revisiting fundamentals: What should each guideline actually cover? AI chatbots, unlike human readers, are unforgiving in the face of contradictions and ambiguity. Every datapoint fed into the system can entail a multitude of risks: privacy concerns, regulatory hurdles, legal implications and so on. It’s critical to ensure that sensitive information – personal data, customer secrets and confidential material – doesn’t end up unprotected in the training data. Neural networks don’t “forget”, which makes compliance with regulations such as the EU’s General Data Protection Regulation (GDPR) and with contractual obligations a serious concern.
Even when AI systems only analyze data, access control remains key. In the early days of tools like Microsoft Copilot, chatbots accidentally revealed sensitive information – including employee salaries – because they had access to erroneously shared Excel files and other unprotected sources. Serious incidents of this nature underscore the importance of strict access controls and a well-designed data architecture.
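To make the point concrete, the sketch below shows one way such access controls might be enforced in code: documents are filtered against the user’s permissions before any content ever reaches the chatbot. The document store, field names and example records are purely illustrative assumptions, not the design of any specific product.

```python
# A minimal sketch of document-level access control in a retrieval step.
# All names and structures here are hypothetical and for illustration only.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    content: str
    allowed_groups: set[str] = field(default_factory=set)  # e.g. {"hr", "finance"}

@dataclass
class User:
    user_id: str
    groups: set[str]

def retrieve_for_user(user: User, candidates: list[Document]) -> list[Document]:
    """Return only documents the user's groups are allowed to see.

    Enforcing this *before* any content reaches the language model prevents
    the chatbot from quoting files the user could never open directly.
    """
    return [doc for doc in candidates if doc.allowed_groups & user.groups]

# Example: an analyst without HR rights never sees the salary sheet.
docs = [
    Document("salaries.xlsx", "Employee salary table ...", {"hr"}),
    Document("handbook.pdf", "Company travel policy ...", {"all-staff"}),
]
analyst = User("u123", {"all-staff", "finance"})
visible = retrieve_for_user(analyst, docs)  # only handbook.pdf
```

The design choice matters: filtering at retrieval time means a misconfigured chatbot can only ever leak what the requesting user was already entitled to read.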
While these challenges can be addressed with carefully designed data pipelines and targeted AI use cases, the return on investment (ROI) isn’t immediate or self-evident. Companies typically have to endure a prolonged phase of heavy spending on cleansing and structuring data, with no tangible results in sight. ROI tends to materialize gradually, as trust in the systems grows and the complexity of the data is slowly unraveled – a wait that can cause even major proponents to have second thoughts.
The counterargument: The promise of transformation
AGI visionaries point out that AI is not just useful for analyzing data – it can also help clean and structure it so it’s easier for systems to process and understand. Smaller use cases often reveal how contradictory information is spread across different systems, or how gaps in knowledge are filled with personal experience and informal communication. These smaller projects frequently push companies to reassess their data strategy: What information actually matters? Where should the central source of truth reside? A common takeaway from systematic analyses of this kind is that companies tend to store too much data instead of prioritizing a smaller set of high-quality, reliable information. Targeted data cleansing with the help of AI can improve efficiency and lay the groundwork for better decision making.
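The sketch below illustrates the kind of cleansing step described above: merging customer records from two systems and flagging fields that contradict each other, so they can then be reviewed by a person or an AI assistant. The system names, fields and records are invented for the example.

```python
# A minimal, illustrative sketch of detecting contradictory records across systems.
from collections import defaultdict

crm_records = [
    {"customer_id": "C001", "email": "anna@example.com", "country": "CH"},
]
erp_records = [
    {"customer_id": "C001", "email": "a.mueller@example.com", "country": "CH"},
]

def find_conflicts(*sources: list[dict]) -> dict:
    """Group records by customer_id and keep only the fields where sources disagree."""
    merged: dict = defaultdict(lambda: defaultdict(set))
    for source in sources:
        for record in source:
            for key, value in record.items():
                if key != "customer_id":
                    merged[record["customer_id"]][key].add(value)
    return {
        cid: {k: v for k, v in fields.items() if len(v) > 1}
        for cid, fields in merged.items()
        if any(len(v) > 1 for v in fields.values())
    }

print(find_conflicts(crm_records, erp_records))
# {'C001': {'email': {'anna@example.com', 'a.mueller@example.com'}}}
```

Even a simple report like this makes the underlying strategic question visible: which system should hold the single source of truth for each field?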
Looking beyond standard business functions, AI is set to reshape entire industries. Multimodal systems, which bridge different data types and modes of interaction, are breaking down traditional boundaries. In synthetic biology, for instance, AI is accelerating our ability to understand biological systems and discover new medicines. In manufacturing and logistics, AI-driven robotics is transforming operations in ways that were once unimaginable. AI is not just about automating tasks. Used smartly, it opens up entirely new ways to solve complex problems.
The ROI challenge and the way forward
The development of self-driving cars offers a useful analogy: while the technology has existed for years and can reliably handle many scenarios, humans still need to be ready to take control at a moment’s notice. The same applies to current AI applications: they can improve efficiency but still require human oversight. This can slow returns on investment in the short term. However, as the technology matures and information management improves, the long-term potential for early adopters may grow exponentially.
Forward-thinking companies are already building specialized AI agents with clearly defined roles, access rights and targeted use cases. These agents can collaborate and complete tasks together, but their effectiveness depends heavily on the quality and structure of the data they access. Still, AI agents based on large language models face notable limitations – most importantly, the lack of reliable memory.
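One way to picture “clearly defined roles and access rights” is shown in the sketch below, which assumes a hypothetical in-house agent framework; the structure and names are illustrative only.

```python
# A minimal sketch of declaring specialized agents with narrow roles,
# explicit tool permissions and a bounded data scope. Hypothetical framework.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentSpec:
    name: str
    role: str                 # narrow, well-defined responsibility
    allowed_tools: tuple      # only the tools this agent may call
    data_scope: tuple         # only the data sources it may read

invoice_agent = AgentSpec(
    name="invoice-checker",
    role="Validate incoming invoices against purchase orders.",
    allowed_tools=("read_purchase_orders", "flag_mismatch"),
    data_scope=("erp.purchase_orders", "erp.invoices"),
)

compliance_agent = AgentSpec(
    name="compliance-reviewer",
    role="Review flagged invoices for policy and regulatory issues.",
    allowed_tools=("read_policies", "escalate_to_human"),
    data_scope=("policy.handbook", "erp.invoices.flagged"),
)

# The two agents can hand work to each other (flag -> review). Because
# LLM-based agents lack reliable long-term memory, any state worth keeping
# is written back to the systems of record rather than held "in" the agent.
```

The narrower each agent’s scope, the easier it is to audit what it did and why – which is precisely where data quality and structure pay off.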
Some business functions – especially those with lower risk and well-defined parameters – are better suited for early AI adoption. This mirrors how self-driving cars were first introduced on highways and in controlled urban environments. More complex or sensitive processes, however, will require longer transition periods, improved data infrastructure and closer oversight.
AI is also gaining momentum in Switzerland’s financial sector. According to the latest EY survey of the Swiss banking landscape, AI jumped from 19th to 6th place in the ranking of bank priorities. The share of banks already using AI more than doubled in a year, from 6% to 15%. Today, the most frequent use cases are process automation (55%) and compliance (54%).