Is the future bright?
McKinsey recently predicted that up to 70% of business tasks could be automated by 2030 thanks to Gen AI.
Meanwhile, Terrence Sejnowski, a computational neuroscientist and professor at the Salk Institute and UC San Diego, argues the future lies beyond the scope of our limited imaginations. ChatGPT and other large language models may not yet be able to set goals or hold long-term memory, but Sejnowski is convinced they will. [The Future of AI is Now]
Recent advances in AI may seem revolutionary, but it appears we’re only just getting started. So where does that leave us humans?
The good…
AI can improve efficiency, reduce costs and increase productivity by automating boring or repetitive tasks. For now, digitalisation is reshaping tasks within jobs rather than replacing whole jobs, and people will increasingly need to interact with machines as part of their work.
Digital assistants have become a widely accepted way to engage with customers, with chatbots available 24/7. And only recently, a 'surgical robot' performed the first ever autonomous gallbladder surgery, without any human assistance!
The not-so-good…
The most obvious downside to AI is job displacement: Goldman Sachs has estimated that as many as 300 million full-time jobs could be exposed to automation in the next decade. There's also the danger of bias creeping in from incomplete data, and AI is a powerful tool that can easily be misused.
Designers must provide 'representative' data, because AI does not treat all users the same - in practice, rushed applications of AI have already produced systems with racial and gender biases.
And the dark side
We are already facing some of the negative outcomes of AI. In its current form, AI can adversely influence human decision making at many levels – from viewing habits to purchasing decisions, from political opinions to social values. What’s more, AI is still largely unregulated and unrestrained.
As we continue on our AI journey, the question remains: how do you encourage innovation while protecting users - and unwitting developers - from misuse, whether accidental or deliberate?

Getting down to business
In the work environment, closing the perception gap between business enthusiasm for AI and analyst scepticism about it is critical for business growth. To do this, leaders need to create an environment where people feel empowered to use AI to enhance their expertise without compromising security. Without this, the organisation risks falling into a hype cycle where AI is over-promised and under-delivered.
In cybersecurity for example, where the margin for error is razor-thin, collaboration between AI systems and humans is critical. As these tools mature and demonstrate real-world impact, trust should grow, especially when their use is grounded in transparency, explainability, and accountability. And when AI is thoughtfully integrated and aligned with employees’ needs, it becomes a reliable asset that drives long-term resilience and value across the organisation.
BAD AI
In a recent episode of our BAD Vibes series, our Head of Behavioural Strategy, Laura Ansloos, talked about how we can use behavioural science to drive effective AI adoption in the workplace. She sees a mismatch between organisational readiness and AI adoption rates, and a gap between appetite for AI and people's perception that it is taking over their lives.
By looking at AI through the wider lens of context, authenticity, and ethical stance, we can tap into the core psychological needs of connection and purpose - and understand what it will take to help someone adopt AI, or adapt their approach to work.
Contact us at BAD now to find out how we can help you navigate AI adoption in your organisation.