Microsoft adds a 'Security Copilot' to its AI assistant line-up
Microsoft has been on a roll announcing new AI assistants across its product line. On March 28, officials introduced the latest addition: Microsoft Security Copilot.
Powered by OpenAI's GPT-4 generative AI technology, Security Copilot — like the several other "Copilots" Microsoft has announced this month — looks and works like a chatbot. It's meant for security professionals who need quick security information to help them isolate and hunt threats. What sets Security Copilot apart from other chatbots, officials said, is its use of the trillions of signals that Microsoft collects in its own security intelligence work, along with information from external security agencies like the National Security Agency. Customers will be able to train Security Copilot on their own data.
Security Copilot integrates with Microsoft Sentinel, Defender, and Intune. Security professionals can use it to request summaries of information about vulnerabilities; to assist with incident and code analysis; and to keep track of alerts from other tools.
Microsoft has published a demo site with suggested scenarios for Security Copilot. Among its potential uses:
- Identify an ongoing attack, assess its scale, and get instructions to begin remediation based on proven tactics from real-world security incidents
- Discover whether your organization is susceptible to known vulnerabilities and exploits. Examine your environment one asset at a time for evidence of a breach
- Summarize any event, incident, or threat in minutes and prepare the information in a ready-to-share, customizable report for your desired audience
Security Copilot is currently available via a private preview. Microsoft has not said when Security Copilot will be more broadly available, or how it will be packaged, licensed, or priced.