
Published on 30/04/2026
Looking Back at the Netural AI Symposium | Part 2 of 3
Topics
- Events
- Insights
This article is the second instalment in our series on the Netural AI Symposium, held on April 23 at the newly opened QUADRILL in Linz.
In the first part, we covered the talks by Albert Ortig and Dr. Sepp Hochreiter. Now, with Carina Seidel and Martin Obermayr, we explore how AI ideas turn into genuinely effective applications – and what role data plays in getting there.
The complete recording of the Netural AI Symposium is available here.
Data: The underestimated foundation of AI transformation | Carina Seidel
Carina Seidel is an expert in Advanced Analytics, BI, and AI at SPAR Austria. Her mission? Generating real value from data. Her core messages at the AI Symposium were: AI transformation does not begin with algorithms – it begins with data; and data quality is not an IT issue but rather a company-wide responsibility.
The essential building blocks of high-quality data, according to Carina Seidel, are accuracy, consistency, completeness, timeliness, validity, and uniqueness. She illustrated these through a practical example from the food retail sector: every product must be uniquely identifiable and carry complete metadata – covering seasonality, perishability, and the like. On its journey from supplier to shop floor, it passes through numerous systems. Each must enforce clear rules (correct units, no negative pricing) and capture and transmit data in a consistent format. Where this breaks down, gaps emerge in the data chain, records are duplicated, and forecasts are distorted.
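To make such rules tangible, here is a minimal Python sketch of what plausibility checks on a single product record could look like. The field names, allowed units, and thresholds are illustrative assumptions for this article, not SPAR's actual rules.

```python
# Illustrative sketch: hypothetical plausibility checks on a product record.
# Field names, allowed units, and thresholds are assumptions for this example.

REQUIRED_FIELDS = {"gtin", "name", "unit", "price", "seasonality", "shelf_life_days"}
ALLOWED_UNITS = {"kg", "g", "l", "ml", "piece"}

def validate_product(record: dict) -> list[str]:
    """Return a list of rule violations for a single product record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"incomplete record: missing {sorted(missing)}")  # completeness
    if record.get("unit") not in ALLOWED_UNITS:
        errors.append(f"invalid unit: {record.get('unit')!r}")          # validity
    price = record.get("price")
    if price is not None and price < 0:
        errors.append(f"negative price: {price}")                       # no negative pricing
    shelf_life = record.get("shelf_life_days")
    if shelf_life is not None and shelf_life <= 0:
        errors.append(f"implausible shelf life: {shelf_life}")          # plausibility
    return errors

sample = {"gtin": "9001234567890", "name": "Strawberries 500 g",
          "unit": "piece", "price": -2.99, "shelf_life_days": 3}
print(validate_product(sample))  # flags the missing seasonality field and the negative price
```

The point of returning violations rather than silently correcting them mirrors Seidel's argument: each system along the chain has to surface where its rules are broken, not paper over the gap.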
SPAR Austria's forecasting model achieves 90% accuracy, reducing food write-offs and spoilage. That success was made possible only through a sustained commitment to data quality – and by dismantling data silos through a centralised cloud solution, unified metric definitions, active involvement from business units, and a data governance framework with clearly defined processes and accountabilities. Because, as Carina Seidel put it, when everyone is responsible, no one is responsible.
In the realm of data cleansing, the greatest impact comes from duplicate detection, standardisation of input data, automated outlier identification, and plausibility checks. Filtering out problematic records from reports does not solve the underlying problem – it merely conceals it. Making data quality issues visible is the prerequisite for improvement.
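As a rough illustration of those cleansing steps, the sketch below flags duplicates and outliers in a small pandas DataFrame instead of filtering them away. The column names, sample data, and outlier rule are assumptions for the example and are not drawn from SPAR's pipeline.

```python
# Illustrative sketch only: typical cleansing steps using pandas.
# Column names, sample data, and the outlier threshold are assumptions.
import pandas as pd

df = pd.DataFrame({
    "gtin":       ["900111", "900111", "900222", "900333"],
    "store":      ["Linz",   "Linz",   "linz ",  "Wels"],
    "units_sold": [12, 12, 4, 900],   # 900 is an implausible outlier in this toy data
})

# 1. Standardise input data before comparing records
df["store"] = df["store"].str.strip().str.title()

# 2. Flag duplicates instead of silently dropping them
df["is_duplicate"] = df.duplicated(subset=["gtin", "store", "units_sold"], keep="first")

# 3. Flag outliers with a simple robust rule (median absolute deviation)
median = df["units_sold"].median()
mad = (df["units_sold"] - median).abs().median()
df["is_outlier"] = (df["units_sold"] - median).abs() > 5 * max(mad, 1)

# Issues are made visible for follow-up rather than filtered out of reports
print(df[df["is_duplicate"] | df["is_outlier"]])
```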
Equally important is the organisational culture. Data-driven decisions – whether made by humans or AI – will be met with scepticism if data is not understood as a valuable asset across the business. That holds true even when the underlying data is correct.
AI that matters: How sustainable AI applications are created | Martin Obermayr
Sustainably successful AI applications require two things: genuine impact and a wow-factor, argues Martin Obermayr, Director of Strategy & Consulting at Netural. That wow-factor is no mere nice-to-have; it is a prerequisite for successful rollout, because it motivates and persuades.
Yet Austrian organisations frequently fall short on both counts. Some become mired in endless business cases and ROI calculations before a single practical step is taken. Others never progress beyond isolated POCs or fail to move them into production: the AI impresses during the proof-of-concept phase, then quietly disappears, undone by legal concerns, lack of buy-in, unclear objectives, weak performance, or a poor user experience.
The solution? Strategy and execution must run in parallel from the outset – with C-level commitment, a structured overview of use cases across all areas of the business, and prioritisation based on genuine business value. Data, technology, governance, and risk must be considered in tandem.
At least as important, however, is a mindset shift and genuine culture change. That means, on one hand, setting realistic expectations of POCs and, on the other, moving away from "what can AI do?" towards "what does our organisation actually need?" Those who fail to make this shift will ultimately find themselves searching for a problem to fit a solution they already have. AI must also be positioned correctly within the organisation: framing it primarily as a cost-cutting instrument risks losing people entirely. After all, the question of who will voluntarily help make their own role redundant rather answers itself.
On the technical side, success requires the right AI model (and as noted in part 1, that means not relying exclusively on transformer architectures), the right system and data architecture, the integration of AI with codified knowledge and deterministic tools, and seamless embedding into users' existing workflows.
Agentic enterprise solutions are opening up entirely new possibilities for connecting whole system landscapes through AI. But to realise their full potential, they need context – including implicit process knowledge: the routines, assumptions, exceptions, and heuristics that experienced employees apply every day. Without access to that knowledge, no AI agent can perform to its full capability.
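A deliberately simplified sketch of that idea: deterministic business logic is exposed as a tool, and implicit process knowledge is written down as explicit context an agent can draw on. Everything here, from the margin rule to the process notes, is a made-up assumption for illustration rather than Netural's implementation, and the actual model call is omitted.

```python
# Conceptual sketch (assumptions throughout): an agent is given deterministic tools
# plus explicit notes that capture otherwise implicit process knowledge.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    fn: Callable[..., object]   # deterministic, testable business logic

def check_margin(purchase_price: float, sales_price: float) -> bool:
    """Codified rule: the sales price must cover the purchase price plus a 20% margin."""
    return sales_price >= purchase_price * 1.20

tools = [Tool("check_margin", "Verify the minimum margin on a quoted price", check_margin)]

# Implicit process knowledge made explicit so the agent can use it as context
process_notes = [
    "Seasonal articles are quoted against last year's comparable week, not last month.",
    "Quotes above a defined threshold always need a second review by the category manager.",
]

# An agent prompt would combine the user request, the tool descriptions, and the
# process notes; the model call itself is left out of this sketch.
context = "\n".join(process_notes + [f"Tool: {t.name} - {t.description}" for t in tools])
print(context)
```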
Despite the complexity involved, Martin Obermayr is in no doubt that the investment is worthwhile. AI enables faster and more cost-effective tendering for complex industrial systems, knowledge retention when key personnel retire, improved quality in sales conversations, and much more besides. And the earlier the groundwork is laid, the sooner an organisation is truly ready to go.
Do you have questions about implementing AI solutions in your organisation? Get in touch!