A recent Access Hospitality survey has found there is an AI trust gap as adoption increases
More than half of hospitality businesses in the UK and Ireland are concerned about their data privacy when rolling out artificial intelligence.
According to a new survey conducted by AI-powered hospitality software platform Access Hospitality, 45% of F&B businesses and hotels are also worried about sharing company data with AI tools.
The software firm surveyed 1,000 businesses and 8,000 consumers across six international markets to understand operators’ and consumers’ attitudes towards data privacy and secure AI adoption.
In the UK and Ireland, more than 28% of hospitality operators are already rolling out AI across multiple departments this year, while a further 20% are exploring what AI could do for them.
However, as adoption rises, a trust gap remains, with growing concerns about how business data is processed, monitored and secured.
The biggest concerns for UK and Irish operators are data security and privacy (51%) and data protection regulations (38%). Nearly one-third (31%) of operators are also concerned about their limited understanding of AI tools.
Businesses with fewer than 25 venues are the most hesitant, with more than half worried about sharing data, compared to just 33% of larger operators. This suggests that access to internal expertise and frameworks may play a role in AI confidence.
These privacy concerns are contributing to mixed feelings about the increased use of AI in hospitality, with 41% of UK consumers worried and 37% excited.
Champa Magesh, managing director of Access Hospitality, said: “The message for operators is clear. With 41% of UK consumers sceptical about the increased use of AI in hospitality and 28% worried about their privacy, businesses must prioritise transparency and security to build trust.”
The Access team recommended that hospitality businesses put a clear AI policy in place and communicate it to all colleagues, educate and train staff members, and use secure AI platforms.
Chief information officer Connor Whelan added: “When it comes to AI, having a clear policy is important, but it has to be backed by the right technical controls. People need to know what they can and can’t put into an AI system, and the technology should enforce those boundaries, not just rely on good intentions. That combination – policy, controls and regular reassessment – is how businesses meaningfully reduce risk.”
The Caterer recently held a webinar, "AI: building trust in the workplace", which discussed five key ways to adopt AI with confidence.