Generative AI is often hailed as this decade's most transformative technology, with the potential to revolutionize the workplace. Yet its impact across enterprises has been limited. Consider how many roles in your organization have been fundamentally altered by AI in the past year. Likely, the answer is very few.
Some experts share this skepticism. MIT economist Daron Acemoglu predicts that AI will meaningfully affect less than 5% of tasks over the next decade, raising U.S. productivity by only about 0.5% and GDP by roughly 0.9% over that period. The underwhelming reality contrasts sharply with the initial hype, which promised a total transformation of the enterprise.
Despite the gap between expectations and outcomes, investment in AI remains strong. 73% of U.S. companies have adopted AI in some area of the business, and the generative AI market is projected to reach $1.3 trillion by 2032.
Enterprises are also investing heavily in AI copilots: smart assistants that support decision-making by automating key tasks, often positioned as the key to unlocking AI’s full potential. Forbes reports that 51% of companies are adopting AI for process automation, and Microsoft says that over 77,000 organizations are using its AI copilot tools.
Despite these investments, enterprises continue to face significant challenges when it comes to adoption and impact. In this article, we’ll look at the factors hindering AI’s true potential and explore the technological capabilities needed to overcome them.
Why copilots for security?
It’s easy to see why AI copilots are increasingly viewed as the next step for security teams. They promise to enhance efficiency, improve interoperability across systems, and dramatically reduce response times in the face of security threats.
The efficiency gains are particularly appealing; by automating mundane tasks, copilots free practitioners to focus on more complex issues. But a key question remains: can AI copilots truly deliver on these promises?
AI copilots: problems and solutions
1. Privacy and security concerns
The problem:
Privacy and security are paramount for any enterprise. Because AI copilots require access to sensitive data, they naturally raise security concerns that can slow adoption. According to a recent Tines report, 66% of CISOs consider data privacy a challenge to AI adoption.
The solution:
AI must be supported by robust privacy and security features. The user’s data shouldn’t leave the stack, travel across the open internet, be logged, or be used for model training. Enterprise-grade controls, including role-based access, confirmation messages, and audit logs, are must-haves.
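As a minimal sketch of what such controls could look like in practice, the snippet below gates a hypothetical AI action behind role-based access and an explicit confirmation step, and records every attempt in an audit log. All names here (run_action, ACTION_ROLES, and so on) are illustrative, not any vendor’s real API.

```python
import datetime
from dataclasses import dataclass, field

AUDIT_LOG: list[dict] = []  # in production, an append-only store

@dataclass
class User:
    name: str
    roles: set[str] = field(default_factory=set)

# Which roles may invoke each action (illustrative)
ACTION_ROLES = {
    "lock_device": {"security_admin"},
    "lookup_employee": {"analyst", "security_admin"},
}

def run_action(user: User, action: str, confirmed: bool) -> str:
    if not user.roles & ACTION_ROLES.get(action, set()):
        outcome = "denied: missing role"
    elif not confirmed:
        outcome = "blocked: awaiting user confirmation"
    else:
        outcome = "executed"  # hand off to the real tool integration here
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user.name,
        "action": action,
        "outcome": outcome,
    })
    return outcome

print(run_action(User("maria", {"analyst"}), "lock_device", confirmed=True))       # denied
print(run_action(User("sam", {"security_admin"}), "lock_device", confirmed=True))  # executed
```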
2. Lack of secure access to proprietary data
The problem:
AI copilots often need data from across the company’s systems to function effectively, but security concerns can limit that access, blunting the copilot’s usefulness.
The solution:
The ideal AI chat interface gives the user real-time access to proprietary data to enhance decision-making. This is only possible when robust security and privacy guardrails are in place, and when the copilot can easily connect to all the relevant tools in the stack.
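One common pattern is for the copilot to fetch proprietary data through an authenticated internal API at answer time, so records never leave the environment. The sketch below assumes a hypothetical internal CMDB endpoint and bearer token; both are placeholders, not a real integration.

```python
import os
import requests

# Hypothetical internal endpoint; traffic stays on the private network
INTERNAL_API = "https://cmdb.internal.example.com"

def lookup_asset(hostname: str) -> dict:
    """Fetch an asset record in real time, from inside the stack."""
    resp = requests.get(
        f"{INTERNAL_API}/api/v1/assets",
        params={"hostname": hostname},
        headers={"Authorization": f"Bearer {os.environ['CMDB_TOKEN']}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# The chat layer can then cite this result in its answer without the
# record ever leaving the environment or being retained for training.
```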
3. AI that can’t take action on your behalf
The problem:
One of the big promises of current AI copilots is their ability to take action, but that promise falls short when a copilot can only act within a narrow, predefined set of tools.
The solution:
Teams need an AI chat interface that can take action through workflow automation, but only when approved users instruct it to do so. This helps enterprises reduce response times and improve overall efficiency.
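A rough sketch of that human-in-the-loop pattern: the copilot drafts an action as a pending proposal, and only an approved user’s sign-off triggers the underlying workflow. The webhook URL, approver list, and payload shape below are all assumptions for illustration.

```python
import requests

APPROVERS = {"sam@example.com"}  # users permitted to authorize actions
# Hypothetical automation webhook that runs the actual workflow
WORKFLOW_WEBHOOK = "https://automation.internal.example.com/hooks/create-ticket"

def propose_action(description: str, payload: dict) -> dict:
    """The copilot drafts an action but does not run it."""
    return {"description": description, "payload": payload, "status": "pending"}

def approve_and_run(proposal: dict, approver: str) -> dict:
    """Only an approved user's sign-off triggers the workflow."""
    if approver not in APPROVERS:
        proposal["status"] = "rejected: not an approved user"
        return proposal
    resp = requests.post(WORKFLOW_WEBHOOK, json=proposal["payload"], timeout=10)
    resp.raise_for_status()
    proposal["status"] = "executed"
    return proposal

ticket = propose_action(
    "Create a ticket for a reported phishing email",
    {"summary": "Phishing report from user", "priority": "High"},
)
approve_and_run(ticket, "sam@example.com")
```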
4. Siloed to specific products
The problem:
Data is often scattered across many tools and systems - the average security team has access to 76 tools. If one copilot can’t connect to all of them, multiple copilots may be needed, which is costly and can slow down response times.
The solution:
An ideal AI product integrates with any technology via API - the user defines which tools it can access and the specific actions it can perform within each one.
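In practice, that scoping can be as simple as an allowlist mapping each connected tool to the actions the AI may invoke, checked before any API call is made. The tool and action names below are illustrative, not tied to any specific product.

```python
# User-defined scope: each connected tool maps to its permitted actions
ALLOWED_ACTIONS = {
    "slack": {"send_message"},
    "jira": {"create_ticket", "add_comment"},
    "crowdstrike_falcon": {"get_detections"},  # read-only
    "jamf": {"lock_device"},
}

def is_permitted(tool: str, action: str) -> bool:
    """Refuse anything outside the user-defined scope."""
    return action in ALLOWED_ACTIONS.get(tool, set())

assert is_permitted("jira", "create_ticket")
assert not is_permitted("jira", "delete_project")  # never granted
assert not is_permitted("okta", "reset_mfa")       # tool not connected
```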
5. Lack of visibility or choice on the LLMs provided
The problem:
Users sometimes lack clarity on which large language model (LLM) their AI copilot uses, potentially leading to further security and privacy risks.
The solution:
AI chat interfaces should clearly state which LLM they run on. If a choice of models is available, that choice should be surfaced to the user as well.
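A simple sketch of that transparency: the interface keeps an approved model list, refuses anything off it, and tells the user exactly which model backs their session. The model names and metadata below are placeholders, not an endorsement of any provider.

```python
# Hypothetical approved-model registry with user-visible metadata
APPROVED_MODELS = {
    "model-a": {"provider": "Vendor A", "data_retention": "none"},
    "model-b": {"provider": "Vendor B", "data_retention": "none"},
}

def start_session(model_id: str) -> dict:
    if model_id not in APPROVED_MODELS:
        raise ValueError(f"{model_id} is not on the approved model list")
    info = APPROVED_MODELS[model_id]
    # Surface the choice to the user rather than hiding it
    print(f"Session uses {model_id} ({info['provider']}, retention: {info['data_retention']})")
    return {"model": model_id, **info}

start_session("model-a")
```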
What AI vendors need to offer
For AI to have a meaningful impact on an organization, teams should look for:
- Security and privacy - robust security features that protect sensitive data
- Interoperability - seamless integration with existing systems
- Ability to act - the capability to take action when given permission by an approved user
One of the first AI products to satisfy these three criteria is Tines Workbench.
An AI chat interface built on the same secure infrastructure as the Tines automation and orchestration platform, Workbench puts the user in control - they determine what AI can and cannot do.
After connecting Workbench to any tool in their stack, users can give it permission to do things like send a message in Slack, look up an employee in BambooHR, create a ticket in Jira, get detections in CrowdStrike Falcon, or lock down a device in Jamf.
The copilot and AI space is changing fast, but understanding the key features that drive meaningful impact is crucial.
Learn how Tines Workbench helps security teams work and respond faster.