Nira (https://nira.com/): Protect company documents from unauthorized access

Microsoft Copilot: What CISOs Need to Know
Published Tue, 30 Apr 2024 at https://nira.com/microsoft-copilot-ciso-guide/

Enterprise companies are rapidly testing and adopting AI assistants like Copilot for Microsoft 365 and ChatGPT. In many cases, the choice is simple: use AI assistants to unlock growth and efficiency, now and in the future, or risk losing ground to competition.

However, with their benefits come risks. CISOs and security teams are tasked with ensuring AI assistants are rolled out and leveraged securely, weighing the risks and rewards. Privacy issues, sensitive data exposure, and compliance challenges are major concerns teams must consider.

Before a company deploys Copilot for Microsoft 365 or other Copilot tools from Microsoft, robust AI governance must be in place. 

CISOs and security teams need to consider: 

  1. The security, compliance, and legal risks of Microsoft Copilot.
  2. How to prioritize Data Access Governance (DAG) as part of their Enterprise AI program.
  3. The concrete steps required for a secure Copilot rollout.

What is Copilot for Microsoft 365?

Microsoft Copilot is a Generative AI chatbot developed in collaboration with OpenAI that launched in February 2023. 

Copilot for Microsoft 365 is an enterprise-grade version of Microsoft’s chatbot, offered to companies that already utilize Microsoft 365 and subscribe to the add-on.

The AI work assistant seamlessly integrates with Microsoft 365 apps including Teams, Word, Outlook, PowerPoint, Excel, and more. It helps employees streamline tasks, from staying engaged in meetings to summarizing research. 

Copilot can search, gather, and surface data from all Microsoft 365 documents, emails, calendars, presentations, contacts, and meetings. It also analyzes employee behavior when using Microsoft apps and offers tips and shortcuts for greater productivity.

What are Copilot’s information security risks?

Copilot’s four major security risks are oversharing and bad permissions, privacy concerns, insider risks, and manipulation by threat actors.

“If you provide data to a public LLM or Copilot through Microsoft, you are taking a big risk with your data. There are a lot of privacy and “grandma’s attic” issues with these models. You may discover information and data that was secured by obscurity.”

—IT/Security leader via Gartner Peer Learning

Here are the four main security risks to consider:

1. Oversharing and bad permissions

  • Oversharing puts sensitive data at risk: A 2023 study found companies have an average of 802,000 files with “business-critical data” exposed due to oversharing. 
  • Unauthorized users accessing sensitive data: Oversharing grants internal and external users access to confidential information, leading to the risk of data leakage.
  • Microsoft Copilot amplifies data exposure risk: The AI work assistant speeds up and simplifies information access for employees, increasing the potential for exposure. It seamlessly surfaces documents and information to users, aggravating previously hidden sharing misconfigurations.
  • Overly broad access permissions for employees: Employees often have more access to company data than they need or are aware of. Sharing Microsoft documents and SharePoint sites is easy, granting access to anyone with a link or by adding a group or collaborator.
  • Lingering access permissions: Permissions are rarely revoked when access is no longer required, leaving data vulnerable for months or even years.
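The sharing risks above lend themselves to automated auditing. As a minimal sketch, here is what a permissions review might look like once records have been pulled from an API such as Microsoft Graph (the field names below are illustrative, not Graph’s actual schema):

```python
from datetime import datetime, timedelta

def audit_permissions(permissions, max_age_days=180, now=None):
    """Flag risky grants: anonymous share links and long-lived access.

    Each record is a dict with illustrative fields:
      link_scope  -- "anonymous", "organization", or None for a direct grant
      granted_to  -- who the grant applies to
      granted_at  -- ISO date the grant was created
    """
    now = now or datetime.utcnow()
    findings = []
    for perm in permissions:
        # "Anyone with the link" shares are the classic oversharing risk.
        if perm.get("link_scope") == "anonymous":
            findings.append(("anyone-with-link", perm["granted_to"]))
        # Grants older than the threshold are candidates for revocation.
        age = now - datetime.fromisoformat(perm["granted_at"])
        if age > timedelta(days=max_age_days):
            findings.append(("stale-access", perm["granted_to"]))
    return findings
```

A report like this, run before a Copilot rollout, surfaces exactly the anonymous links and lingering permissions described above.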

2. Privacy concerns

  • Copilot accesses your organization’s data: This includes emails, chats, and files employees have permission to access through Microsoft Graph.
  • Relevant suggestions, privacy concerns: Copilot’s suggestions are relevant and based on an employee’s data, but Copilot’s responses could expose sensitive information due to oversharing.
  • Problems with accuracy: Microsoft acknowledges that Copilot’s responses are not 100% factually accurate. The company recommends carefully reviewing all Copilot suggestions before using or sharing them.
  • Data collection for improvement: Microsoft collects some anonymized and encrypted data on interactions employees have with it for diagnostics and improvements. This data might contain fragments of employee personal or organizational information.
  • Data sharing by default: Employees must choose not to share diagnostic data with Microsoft by adjusting privacy settings in Microsoft 365 apps.

3. Insider risks 

  • Accidental data leaks through Copilot misuse: Employees might be unaware of the full scope of data they can access, leading Copilot to reveal confidential information unintentionally.
  • Malicious insider threats:
    1. Prompt engineering: Insiders could use specific prompts to trick Copilot into revealing unauthorized information.
    2. Software/Model compromise: In rare cases, insiders might exploit vulnerabilities or introduce malicious code to compromise Copilot’s security.
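Prompt engineering is hard to eliminate outright, but suspicious prompts can at least be logged and flagged for review. A naive keyword heuristic, purely as an illustration (a real deployment would pair this with model-based detection and audit logging):

```python
# Illustrative phrases often seen in prompt-injection attempts; not exhaustive.
SUSPICIOUS_PATTERNS = [
    "ignore previous instructions",
    "ignore your instructions",
    "reveal your system prompt",
    "act as an administrator",
    "bypass",
]

def flag_prompt(prompt: str) -> list[str]:
    """Return the suspicious phrases found in a user prompt (case-insensitive)."""
    lowered = prompt.lower()
    return [p for p in SUSPICIOUS_PATTERNS if p in lowered]
```

Flagged prompts can be routed to a review queue rather than blocked outright, which keeps false positives from disrupting legitimate work.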

4. External threats

  • Compromised Copilot accounts expose company data: Attackers who gain access to a Copilot-enabled account could access sensitive data when querying the tool. They could leak responses that contain confidential information; for example, trade secrets or financial data.
  • Model inversion attacks: AI-powered systems are vulnerable to model inversion, an attack in which threat actors probe the model itself to extract information it has learned or to manipulate its behavior.

What are Copilot’s compliance and legal risks?

Implementing Copilot for Microsoft 365 across an organization creates several legal and compliance risks that must be managed. The risks include confidentiality, copyright infringement, and discrimination claims.

1. Confidentiality

  • Confidentiality concerns: The biggest legal concern when using Copilot is confidentiality. While Microsoft promises to keep inputs and outputs confidential, there’s a catch that companies that handle regulated data must be aware of. Microsoft reserves the right to access Copilot activity (prompts and responses) for 30 days. Although this access is for abuse monitoring purposes only, it creates the risk of your company’s confidential data being viewed. 
  • Option to opt out: For users handling sensitive or legally regulated data, Microsoft offers the option to apply for an exemption from abuse monitoring access. However, this adds an extra step that requires approval from Microsoft.
  • Two important caveats: There are two other situations where an employee’s Copilot data might venture outside Microsoft’s secure environment:
        1. Referencing public web content: If a user enables Copilot to use Bing for information retrieval, company data might be transmitted outside the Microsoft 365 service boundary.
        2. Using plugins: Plugins can enhance Copilot’s functionality but could also require sharing data with third parties beyond Microsoft’s control.

2. Copyright infringement

  • Copyright concerns: One major concern security leaders mentioned was copyright infringement. There’s a risk that Copilot might unintentionally share copyrighted information it encountered during training. This could include content scraped from the internet without permission. The resulting output could include copyright infringement. 
  • Microsoft coverage has limits: While Microsoft offers some copyright infringement coverage for Copilot, it has limitations. 
  • Talk to Legal and Compliance: Consulting with company legal counsel or the compliance team is recommended to understand the scope of Microsoft’s coverage and potential legal risks related to copyright issues.

3. Bias and discrimination claims

  • Be aware of bias: It’s important to consider bias and discrimination when using Copilot, especially in sensitive areas like employment and public accommodation settings. 
  • Non-discrimination laws still apply: Laws, regulations, and court rulings have established protections against bias in various areas, including situations like job candidate screening. If your company uses Copilot to perform these types of tasks, it should conform to those laws. For examples of non-discrimination laws that pertain to AI, visit here.

Why companies must prioritize Data Access Governance when using AI chatbots

Prioritizing Data Access Governance (DAG) has become imperative for organizations deploying AI chatbots like Copilot for Microsoft 365. DAG focuses on managing and monitoring who has access to specific data within an organization and establishing clear protocols for how this data is classified and handled.

Within platforms like SharePoint, OneDrive, and Teams, it’s not uncommon for hundreds of thousands—or even tens of millions—of company files to be shared and accessed by employees. Employees typically have access to sensitive data without a clear understanding of its confidentiality level, and often without even realizing they have access to the data in the first place. This oversight can pose significant security risks, particularly as AI chatbots require extensive access to organizational data to be effective.

Before giving Copilot the keys to the kingdom – all Microsoft documents, emails, calendars, presentations, contacts, meetings, and more – implementing a robust Data Access Governance program is paramount. 

This approach minimizes potential security breaches and ensures that your organization’s data handling practices comply with relevant regulations and standards, safeguarding valuable company information.  

Deploying Microsoft Copilot: What CISOs need to know

Here are the 10 key steps for CISOs who plan to use Microsoft Copilot in their organizations. 

1. Choose a Copilot

    • There are currently seven versions of Copilot for Work: Copilot for Microsoft 365; Copilot for Sales; Copilot for Service; Copilot Studio; Copilot for Security; Copilot for Finance; and Copilot for Azure. Here’s an analysis of their prices, pros, and cons.

2. Know the risks.

    • Deeply understand and assess the major security risks of using Generative AI tools before rolling them out.

3. Follow a strong framework. 

    • Adopt an established AI risk framework, such as the NIST AI Risk Management Framework (AI RMF), to structure governance and risk decisions.

4. Review existing security policies. 

    • Have teams review company policies to ensure sensitive data is not being overshared. This includes managing SharePoint sites, having proper access permissions, and adding sensitivity labels to classify confidential data.

5. Refine AI policies.

    • Teams should create or revise company policies for using Generative AI. Refine AI policies to include Microsoft Copilot, and update any other affected policies (e.g., data classification) as required. 

6. Form an AI council. 

    • CISOs and security teams should not be the only or primary stakeholders dealing with the repercussions of AI assistant rollouts and usage. “Work with organizational counterparts who have active interests in GenAI, such as those in legal, compliance, and lines of business to formulate user policies, training, and guidance,” Gartner advises.

7. Create an inventory of AI use. 

    • Teams should develop an inventory of existing AI use, and stay aware of the use of Shadow IT. Even if you ban LLMs and AI chatbot usage at your company, employees will find ways and workarounds to use them.
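One practical starting point for that inventory is existing network telemetry. A minimal sketch that tallies requests to well-known AI assistant domains in proxy logs (the log format of one URL per line, and the domain list itself, are illustrative assumptions):

```python
from collections import Counter
from urllib.parse import urlparse

# Domains associated with common AI assistants (illustrative, not exhaustive).
AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "gemini.google.com": "Google Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
    "claude.ai": "Claude",
}

def tally_ai_usage(log_lines):
    """Count requests per AI tool, assuming one URL per log line."""
    counts = Counter()
    for line in log_lines:
        host = urlparse(line.strip()).netloc
        if host in AI_DOMAINS:
            counts[AI_DOMAINS[host]] += 1
    return counts
```

Even a rough tally like this reveals which tools employees already use, sanctioned or not, and where Shadow IT policies need attention.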

8. Offer employee training and resources. 

    • Lay out guidance for employees and offer them training and resources for using Copilot. Make sure employees understand the risks associated with these tools, as well as how to secure access permissions and company files. 

9. Run AI experiments and exercises. 

    • Before implementing AI assistants, your team can run tabletop exercises for threat scenarios related to the use of Microsoft Copilot and other Enterprise AI tools. Red teaming exercises, where teams attempt to attack AI models and bypass restrictions, give companies a better idea of how to respond to potential incidents. 

10. Secure information in your company’s Microsoft 365 environment.

    • The best Copilot rollout cleans up oversharing before employees begin using the tool. Your team should evaluate your company’s entire Microsoft 365 environment, including how sensitive data is shared in SharePoint, OneDrive, and Teams. 

Generative AI work assistants are going to be used by employees, whether officially sanctioned or not. It’s imperative for CISOs and security teams to fully comprehend the security, legal, and compliance risks associated with these technologies. From there, they must implement robust measures to reduce these risks.

Failing to address these risks opens the door to data breaches, manipulated content, and sensitive information exposure. Proactive security is crucial to unlocking the full potential of Generative AI while keeping an organization safe. 

Nira helps companies with AI governance by securing company data in Microsoft SharePoint, OneDrive, Teams, and more. Security teams use Nira to address AI governance risks like oversharing, bad permissions, and insider threats, in a few clicks.  

With Nira, your team can:

  • Safeguard sensitive files with real-time access control.
  • Automate security policies for risk, governance, and compliance management.
  • Efficiently investigate and resolve security incidents.
  • Protect millions of company files at once, without issues. 

Get started with a free Microsoft 365 Security Risk Assessment. Reach out here to learn more.

Security Risks from Deploying Generative AI: Google Gemini vs. Microsoft Copilot
Published Fri, 12 Apr 2024 at https://nira.com/gemini-vs-copilot-security/

Companies are eager to deploy generative AI assistants like Google Gemini and Microsoft Copilot, but security professionals have concerns. Privacy issues, manipulation by threat actors, and exposure of sensitive data are all risks organizations must be aware of when using these tools. 

The time savings and economic gains of AI chatbots are enticing. Gemini and Copilot promise to transform the workplace, saving employees hours on tedious tasks and boosting efficiency. 

According to McKinsey, generative AI’s impact on productivity could add $2.6 trillion to $4.4 trillion in value to the global economy each year. By 2030, these tools could automate 30% of hours worked today.

However, companies must understand the information security risks of AI assistants before deploying them in their organizations. 

We’ll give an overview of: 

  1. The main differences between Google Gemini and Microsoft Copilot.
  2. The four major security risks for each tool.
  3. Methods to help your company reduce these risks.

What is Google Gemini?

Gemini is Google’s family of Large Language Models (LLMs) and comprises three different model sizes:

  1. Nano: Designed for on-device processing and other lightweight applications.
  2. Pro: Designed for efficiently scaling across a variety of tasks.
  3. Ultra: Designed for complex tasks and as a competitor to OpenAI’s GPT-4.

Gemini for Google Workspace is an AI-powered assistant designed for companies and built into Gmail, Docs, Sheets, and more. The tool is a productivity aide that helps employees with tasks like writing client outreach emails, project plans, and job postings. 

The AI assistant in Google Workspace has five main use cases: 

  1. Help me write.
  2. Help me organize.
  3. Create an image.
  4. Help me connect.
  5. Help me create an app.

These functions increase productivity in Google services like Docs, Sheets, Gmail, Slides, Meet, and AppSheet. They help employees write copy, make project trackers, and create AI images, as well as enhance video and sound quality in meetings and create applications without writing code. 

Gemini can speed up projects and reduce time spent on daily tasks. However, its capabilities are currently not as strong as Copilot for Microsoft 365, and it’s more of a tool that helps you organize and optimize what you’re already doing in Google Workspace. 

Who can use it: To use the tool in your organization, you’ll need a Google Workspace plan. The assistant is now generally available to companies of all sizes, and it’s possible to try it at no extra cost. 

Available plans: For companies, two Gemini for Google Workspace plans are available: Gemini Business and Gemini Enterprise. Gemini Advanced is also available for individual users’ personal accounts, but not for work accounts or in some countries. 

Gemini Business

  • $20 per user per month for an annual commitment.
  • Gemini is available in Gmail, Docs, Slides, Sheets, and Meet.
  • Has access to Gemini with 1.0 Ultra.
  • Enterprise-grade security and privacy.

Gemini Enterprise

  • $30 per user per month for an annual commitment.
  • 1.0 Ultra model.
  • Includes everything from Gemini Business, plus:
    • Advanced meetings with translated captions in 15+ languages.
    • Full access and usage of Gemini.

Gemini Advanced (Google One AI Premium)

  • $19.99 per month
  • 1.0 Ultra model.
  • Gemini in Gmail, Docs, and more.
  • For individual users
  • Not available for work accounts, for minors, or in some countries.

For more information on these plans, visit here.

What is Copilot for Microsoft 365?

Copilot for Microsoft 365 is the paid business and enterprise version of Microsoft Copilot, available to companies using Microsoft 365 that choose to purchase the upgrade. 

The AI assistant integrates with Microsoft 365 apps including Teams, Word, Outlook, PowerPoint, Excel, and more. It helps employees streamline work-related tasks, from summarizing research to staying engaged in meetings. 

Copilot can also search and gather data from all Microsoft 365 documents, emails, calendars, presentations, contacts, and more. It analyzes employee behavior when using Microsoft apps and offers tips and shortcuts for greater productivity. 

Copilot allows organizations to gain context across multiple apps and services in their Microsoft 365 tenants. Using Copilot Studio, enterprise companies can further tailor Copilot for Microsoft 365 to their workplace or create custom chatbots of their own. 

Who can use it: 

  • Enterprise customers must have a license for Microsoft 365 E3, Microsoft 365 E5, Office 365 E3, or Office 365 E5. 
  • Business customers must have Microsoft 365 Business Standard or Business Premium, or a version of these suites that does not include Microsoft Teams. 
  • Education customers must have a license for Microsoft 365 A3 or Microsoft 365 A5 for faculty. 
  • Consumers are not currently eligible to purchase Copilot for Microsoft 365.

Available plans: Copilot for Microsoft 365 offers two plans: one for businesses and one for enterprises. It’s also included in other Microsoft Copilot plans including Copilot for Sales and Copilot for Service; more information on these plans here.

Copilot for Microsoft 365: Business

  • $30 per user per month for an annual commitment. Pay yearly: $360 per user, per year. 
  • Integrated with Teams, Word, Outlook, PowerPoint, Excel, Edge for Business, and other Microsoft 365 apps. 
  • AI-powered chat with Microsoft Copilot.
  • Enterprise-grade security, privacy, and compliance.

Copilot for Microsoft 365: Enterprise

  • $30 per user per month for an annual commitment. Pay yearly: $360 per user, per year. 
  • Integrated with Teams, Word, Outlook, PowerPoint, Excel, Edge for Business, and other Microsoft 365 apps. 
  • AI-powered chat with Microsoft Copilot.
  • Includes Copilot Studio, to create plug-ins for your data and automation. 
  • Enterprise-grade security, privacy, and compliance.

For more information on all nine Microsoft Copilot plans and their pricing, visit here.

Security Risks of Google Gemini

What are the four main security risks of Google Gemini? 

Gemini comes with four security risks that companies and consumers should consider. These four risks are system prompt leakage, indirect injection attacks, bypassing content restrictions, and privacy concerns. 

These risks are particularly important for developers using the Gemini API, users with Gemini Advanced, companies that connect Gemini to Google Workspace, and anyone using Gemini Apps. Some risks could affect both the Gemini Pro and Gemini Ultra models.

Note: These are only a few risks that security researchers have written about and tested. We anticipate with the further rollout and new Gemini offerings, greater potential threats will come to light. 

1. Gemini Security Risk: System prompt leakage

System prompt leaks are a serious concern because they could reveal hidden instructions used to guide the large language model. Malicious actors could then potentially reverse engineer this information and use it to craft more effective attacks.

System prompt leaks may also expose sensitive data contained within the prompt, such as passwords. Researchers at HiddenLayer cleverly bypassed Gemini’s restrictions by using a “synonym attack.” Instead of directly asking for the tool’s system prompt, they rephrased their question and tricked Gemini into revealing its “foundational instructions in a markdown code block.”

First attempt by security researchers. Source: HiddenLayer

The second attempt using a synonym attack. Source: HiddenLayer

This attack exploits the inverse scaling property found in LLMs: the bigger the LLM, the harder it is to fine-tune against every possible attack. This leaves larger models more susceptible to being tricked with slightly different phrasings that the creators might not have included in their initial training.

2. Gemini Security Risk: Indirect prompt injection attacks

Indirect prompt injection attacks allow threat actors to obtain complete control of AI assistants.

Kai Greshake originally identified this vulnerability, in which LLMs are manipulated through content they retrieve rather than through direct user prompts. The technique existed in Bard’s early stages, where attackers could exploit Google Docs to inject malicious instructions, but Google addressed the issue by removing the feature.

However, the release of Gemini Advanced, together with the Google Workspace extension that connects Workspace data to Gemini, reintroduced this risk. The HiddenLayer team showed an example of how this attack could work by placing instructions in an otherwise blank Google document and connecting it to Gemini Advanced via the Google Workspace extension. 

The instructions were written in a way that allowed the model to override its instructions and execute commands in a delayed manner, enabling more complex and obfuscated attacks, according to the team. 

Source: HiddenLayer

This situation worsens with Google file sharing, HiddenLayer said.  

They wrote: 

“This attack gets even scarier when you consider the implications of Google document sharing. A user could share a document with you without your knowledge and hide an instruction to pull the document in one of your prompts. From there, the attacker would have full control over your interactions with the model.”

The researchers recommend that Gemini Advanced users disable Google Workspace extension access. This ensures that shared documents cannot affect their use of the model.

For information on how your company can secure file sharing in Google Workspace and Microsoft 365, visit here.

3. Gemini Security Risk: Bypassing content restrictions

Cunning jailbreaking attacks allow researchers to bypass guardrails and fool Gemini into generating misinformation or harmful content. 

For example, the HiddenLayer researchers were able to trick Gemini Ultra, the most advanced version of Gemini, into revealing instructions on how to hotwire a Honda Civic. 

Source: HiddenLayer

This tactic isn’t new. Researchers have already demonstrated that other LLMs, including ChatGPT, can be manipulated using similar jailbreak attacks​​ to bypass their safety features and access restricted information. 

4. Gemini Security Risk: Privacy concerns in Gemini Apps

Gemini Apps carries the risk of violating user privacy by collecting information from users’ conversations, locations, feedback, and more.

Users should be aware, when using Gemini Apps, including the Gemini web and mobile apps, of the warnings Google outlines in its Gemini Apps Privacy Hub.

When you interact with Gemini applications, Google collects the following information: Conversations, Location, Feedback, and Usage information. 

In a section titled, “Your data and Gemini Apps,” Google writes this:

“Please don’t enter confidential information in your conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.”

Google clarifies how long that data is retained: 

“Gemini Apps conversations that have been reviewed by human reviewers are not deleted when you delete your Gemini Apps activity because they are kept separately and are not connected to your Google Account. Instead, they are retained for up to three years.”

It concludes with:

“Even when Gemini Apps Activity is off, your conversations will be saved with your account for up to 72 hours. This lets Google provide the service and process any feedback. This activity won’t appear in your Gemini Apps Activity.”

Long story short: If you’re using Gemini Apps, don’t include sensitive information in your Gemini interactions, ever. 

Security Risks of Microsoft Copilot

What are the four main security risks of Copilot for Microsoft 365?

The main security risks of Microsoft Copilot are oversharing and bad permissions, privacy concerns, insider risks, and external threat actors.

The overarching issue is that Copilot can leverage all the data an employee has access to. If an employee inadvertently has access to sensitive data, so does Copilot. This could have extensive security implications if overshared sensitive data gets into the wrong hands. 

For more information on how to reduce Copilot security risks, visit here.

1. Copilot Security Risk: Oversharing and bad permissions

One issue we see in our work at Nira is the oversharing of sensitive data within organizations. 

A staggering 16% of an organization’s critical data is exposed due to oversharing, according to a 2023 report. That adds up: companies face an average of 802,000 files containing “business-critical data” at risk due to oversharing. 

Employees will share hundreds of thousands of company files with coworkers, customers, vendors, and other third parties, giving them stronger permissions than needed or never revoking their access. A company’s sensitive files could be seen by users who do not have the proper access permissions, leading to data breaches or leaks.

Using Microsoft Copilot—a tool that makes information easier for employees to access than ever—exacerbates the risk of data exposure significantly.

Often, employees have broader access to company data than necessary. Sharing Microsoft documents is effortless—as easy as adding collaborators or opening a link that anyone in the company or on the internet can access. To make matters worse, permissions are seldom revoked when access is no longer required.

A Nira survey revealed that more than half of employees admitted to accidentally adding personal email accounts to company documents. And, although 40% of employees said they cleaned up vendor access on documents once an engagement was over, the rest (60%) only did clean up sometimes, rarely, or never. 

To reduce risks associated with Copilot, companies must ensure permissions are accurate and company information isn’t being shared excessively.
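Cleaning up vendor access, the gap the survey above highlights, is a good candidate for automation. A sketch of the idea, flagging external collaborators whose engagement has ended (the record shape and field names are assumptions for illustration):

```python
from datetime import date

def expired_external_access(collaborators, company_domain, today=None):
    """List collaborators outside `company_domain` whose engagement has ended.

    Each collaborator is a dict with illustrative fields:
      email           -- the collaborator's address
      engagement_end  -- date the vendor engagement ended, or None if ongoing
    """
    today = today or date.today()
    flagged = []
    for person in collaborators:
        domain = person["email"].split("@")[-1]
        end = person.get("engagement_end")
        # Only flag external accounts whose engagement is already over.
        if domain != company_domain and end is not None and end < today:
            flagged.append(person["email"])
    return flagged
```

A scheduled job built on this pattern would catch the 60% of vendor grants that, per the survey, are cleaned up only sometimes, rarely, or never.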

2. Copilot Security Risk: Privacy concerns

Copilot offers a powerful productivity boost, but its access to your organization’s data raises compliance, legal, and privacy considerations. Companies should take precautions like changing Copilot’s privacy settings or sanitizing information that Copilot is given access to.

According to Microsoft, Copilot for Microsoft 365 can access your organization’s data through Microsoft Graph, including emails, chats, and documents that you have permission to access. 

The suggestions from Copilot are relevant and based on your data, but this “also means that Microsoft Copilot may expose sensitive or confidential information in its responses.” 

Microsoft warns that you should always double-check Copilot’s content before using or sharing it with anyone else. Although the tool does not retain the prompts you send or the responses you receive, it does collect data about how you interact with it for improvement and diagnostic purposes. 

This data is encrypted and anonymized, but it could still contain parts of your personal or organizational information. Users can choose not to share this data with Microsoft by changing their privacy settings in Microsoft 365 apps.

3. Copilot Security Risk: Insider risks

Employees misusing Copilot, either by accident or via prompt hacking, could lead to data breaches or leaks. Employees don’t always realize how much data they have access to, which may expose sensitive information.

For instance, Nick Ross of T-Minus365 used Copilot to find references to a specific topic. Nick was surprised when the tool revealed “a citation of a document I didn’t know existed and a Teams Channel that I didn’t know I was a member of.” While Nick’s case involved non-sensitive data, the risk lies in employees unintentionally surfacing confidential information.

Although most incidents are accidental, malicious exfiltration can happen. By crafting specific prompts (prompt engineering), insiders might trick the tool into revealing unauthorized information.

In extreme scenarios, insiders could compromise the security of the Copilot software or its underlying model by exploiting vulnerabilities or introducing malicious code. 

4. Copilot Security Risk: External threat actors

Threat actors could compromise an account with AI-enabled features and work to exfiltrate sensitive data, creating a new front in cyberattacks.

For example, an attacker who gains access to an account with Copilot for Microsoft 365 could use the tool to pull sensitive data from Microsoft Graph. 

A threat actor could also leak a Copilot response that contains confidential information, such as a generated financial report or a trade secret. 

If attackers gain access to a Copilot-enabled account, company data is at risk. 

Companies need the right security features and data protection policies in their Microsoft 365 tenants (e.g., OneDrive, SharePoint) to avoid Copilot misuse. 

How to Prepare Your Company to Deploy AI Assistants

1. Understand Your Company’s Risks in Google Workspace and Microsoft 365

The key to deploying AI assistants is having complete visibility into your company’s Google Workspace and Microsoft 365 environments and understanding their risks. 

Your company’s cloud collaboration environments must have the proper permissions, access controls, and security policies set in place. You should be able to monitor and investigate risks with ease and quickly change access permissions when necessary.

Microsoft suggests several steps for configuring your organization’s tenant, which you may be familiar with if you’ve worked with a company that uses Microsoft. For best practices on securing company data and documents in Google Workspace, visit here.

2. Create a security plan for deployment

A plan for protecting your data in Google Workspace and Microsoft 365 is vital before enabling AI assistants. 

Deploying a robust security plan requires a comprehensive understanding of your company’s cloud collaboration environment. Here are the points to consider:

  1. Data Classification: Identifying sensitive data is the first step. What does your company consider to be sensitive data? Where is it stored in your company (e.g., Google Drive, Microsoft SharePoint, OneDrive, and Teams)? 
  2. Data Sharing: Companies must understand internal and external sharing practices, including how employees share files and links. 
  3. Access Controls: Who has access to shared company data and files, and what permissions do they hold?
  4. Data Loss Prevention: Does your company employ Microsoft or Google Workspace Data Loss Prevention (DLP) tools or other solutions?
  5. Access Management: Are current access controls for users and groups adequate? Have access policies been updated based on identified risks?
  6. Data Lifecycle: Does your company have a plan for data and access control throughout its lifecycle, including data retention policies?
  7. Employee Training: Are there ways to educate employees on sharing risks and permissions?
  8. Permission Management: Do employees have tools to clean up access misconfigurations and maintain information security?
  9. AI Impact: If your company deploys Google Gemini or Microsoft Copilot, what would be the impact on your organization? What sensitive information could be exposed?

3. Educate and train employees

Employees are crucial allies in protecting company data, but without the proper training and tools, they are left vulnerable. Tight schedules (a reason many use AI assistants) leave them with limited resources and time for access control. The burden of security awareness and enforcement often falls on IT and Security teams, who are frequently overloaded themselves.

To empower employees when using AI assistants, it’s vital to equip them with resources that clarify guidelines and potential risks. Understanding permissions and access levels for company data, especially sensitive files, is crucial.

For more information on how employees can gain visibility and control over access to their Google Drive, Microsoft SharePoint, OneDrive, and Teams files, visit here.

How to Secure Your Company’s Microsoft 365 and Google Workspace

Generative AI assistants will become ubiquitous as more organizations move to adopt them. It’s vital to understand their offerings and get a handle on security risks before rolling out these tools. 

While Copilot currently has more business-to-business use cases than Gemini for Workspace, Microsoft provides a roadmap that Google is likely to follow. Many of the same risks, including bad permissions and oversharing, will become relevant for Gemini as the AI assistant develops. 

When deploying and using generative AI tools, companies must secure their entire Google Workspace and Microsoft 365 environments. For help securing your company’s cloud collaboration tools, reach out here.

 

The post Security Risks from Deploying Generative AI: Google Gemini vs. Microsoft Copilot appeared first on Nira.

]]>
The Ultimate Manual to GitHub Copilot https://nira.com/github-copilot/ Fri, 05 Apr 2024 14:35:47 +0000 https://nira.com/?p=6127 Writing code is often a tedious and time-consuming task. Modern developers are always looking for new ways to improve productivity, accuracy, and efficiency with programming. Automatic code generation tools like GitHub Copilot can make this possible. What is GitHub Copilot Anyway? Branded as an “AI pair programmer” and coding assistant, GitHub Copilot uses artificial intelligence… (more) The Ultimate Manual to GitHub Copilot

The post The Ultimate Manual to GitHub Copilot appeared first on Nira.

]]>
Writing code is often a tedious and time-consuming task. Modern developers are always looking for new ways to improve productivity, accuracy, and efficiency with programming.

Automatic code generation tools like GitHub Copilot can make this possible.

What is GitHub Copilot Anyway?

Branded as an “AI pair programmer” and coding assistant, GitHub Copilot uses artificial intelligence to auto-generate code in your editor. It’s available as an extension for Visual Studio Code, the JetBrains suite of IDEs, and Neovim.

But GitHub Copilot is more than just an autocomplete solution. Based on context clues from the code you’re writing, Copilot suggests lines and even entire functions. It’s a faster and easier way for developers to create tests, explore APIs, and solve problems without constantly searching for answers elsewhere.

Once you start working with the GitHub Copilot plugin, the tool automatically adapts to the way you write code.

Copilot is fast and works seamlessly with your workflow as you’re writing code. When you start to get the hang of it, a single keystroke will autocomplete the code you need.

Unlike similar solutions on the market, GitHub Copilot gives you complete control—hence the name. You can accept or reject the code, manually edit suggestions, and cycle through alternative suggestions. Since the tool adapts to your coding style, the suggestions it gives you in the future will continue to get smarter.

How GitHub Copilot Works

GitHub Copilot is powered by OpenAI Codex. The auto-generated suggestions come from the context within the file, like function names, code comments, docstrings, file names, cursor position, and more. Based on this information, Copilot suggests code snippets that developers can accept by simply pressing the Tab key on their keyboards.

The AI tool understands TypeScript, Python, JavaScript, Ruby, and dozens of other common languages.

That’s because the model behind the suggestions was trained on open-source code from GitHub’s public repositories. Copilot analyzes the context you provide and then tries to find the best possible solution based on what you’re writing.

A standout feature of GitHub Copilot is its ability to understand natural language. This includes programming languages and human languages alike.

It’s worth noting that GitHub Copilot does not write perfect code. The tool tries its best to understand the developer’s intent, but you’ll notice that some suggestions don’t work or even make sense.

GitHub Copilot doesn’t test any of the code it’s suggesting to you. Those suggestions may not compile or run. So you still need to carefully review and test the code before you assume it’s usable.

To get the most out of GitHub Copilot, you should segment your code into smaller functions. Make sure you’re writing good comments and docstrings as you’re working. Always use meaningful names for function parameters, as this will make it easier for Copilot to understand your intent.
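As a concrete illustration of that advice, here is a small, hypothetical function of our own making (the name and logic are not from the original article). Meaningful parameter names, a docstring, and a focused body give Copilot the context it needs to make useful follow-up suggestions:

```python
def apply_discount(price: float, discount_percent: float) -> float:
    """Return `price` reduced by `discount_percent` (0-100).

    A descriptive name, typed parameters, and a docstring all help
    Copilot infer intent when it suggests related code or tests.
    """
    if not 0 <= discount_percent <= 100:
        raise ValueError("discount_percent must be between 0 and 100")
    return price * (1 - discount_percent / 100)
```

A vaguely named `def f(p, d):` with no docstring would convey the same logic to a human reviewer but gives Copilot far less to work with.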

GitHub Copilot seems to have the most significant impact on developers working with unfamiliar frameworks and libraries. Rather than searching through open-source documentation on your own, Copilot can navigate this for you within seconds.

Overall, GitHub Copilot is probably the best autocomplete tool on the market. It provides developers with lots of different ways to solve problems beyond basic suggestions. The range of suggestions you’re getting for a code snippet is great, and you likely won’t need to use Stack Overflow to find answers.

But it’s important to understand that GitHub Copilot is just a tool. It’s nowhere close to replacing human developers. You cannot rely on Copilot alone; it’s still on the developer to accept the suggestions and make changes.

Let’s take a closer look at different examples that Copilot can be used for. These examples will help you better understand the tool’s functionality and versatility:

Example #1: Convert Comments to Code

One of the coolest features of GitHub Copilot is its ability to take your comments and turn them into code. Just create a comment that describes the logic you need, and Copilot will automatically generate suggestions for you.

Here’s what that looks like:
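The original screenshot didn’t survive here, but a sketch of the kind of Python completion Copilot might produce for that comment could look like the following. The `fetch` parameter is our own addition so the function can be exercised without a network call; a raw Copilot suggestion would typically call GitHub’s public REST API directly:

```python
import json
from urllib.request import urlopen

# List all names of GitHub repositories for an org
def list_repo_names(org, fetch=None):
    """Return the names of an organization's public repositories.

    `fetch` is an injectable downloader (added for testability);
    by default it calls GitHub's public REST API.
    """
    url = f"https://api.github.com/orgs/{org}/repos"
    if fetch is None:
        fetch = lambda u: urlopen(u).read()
    repos = json.loads(fetch(url))
    return [repo["name"] for repo in repos]
```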

In this case, the comment simply says, “List all names of GitHub repositories for an org.”

Copilot immediately generated a suggestion. If you were writing this code, all you’d need to do is press Tab to accept it. The comment was written in plain English, and GitHub Copilot still understood the intent and made an appropriate suggestion.

This relates to something we mentioned earlier: always write good comments and docstrings. If your comments are vague or unnatural, it can be tough for Copilot to infer your intent.

Example #2: Autofill Repetitive Code

GitHub Copilot is an ideal way for developers to speed up the way they write repetitive code. If you’re writing large blocks of boilerplate code, you just need to input a few examples of the pattern. Then Copilot will handle the rest.

Here’s a really simple example to show you how this works:
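The screenshot for this example is missing; as a Python analogue of the pattern described (the original used JavaScript `const` declarations), the first two lines are typed by hand and Copilot typically fills in the rest:

```python
# Time-unit constants: once the first two lines establish the
# pattern, Copilot suggests the remaining five in the same style.
SECONDS = 1
MINUTES = 60 * SECONDS
HOURS = 60 * MINUTES
DAYS = 24 * HOURS
WEEKS = 7 * DAYS
MONTHS = 30 * DAYS
YEARS = 365 * DAYS
```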

In this example, the first constant defines seconds. Once the second line defines minutes as a multiple of seconds, Copilot recognizes the pattern and autocompletes the constants for hours, days, weeks, months, and years.

These five additional lines of code can be written with a single keystroke. At scale, this shaves a ton of time off your programming, especially for larger blocks.

Example #3: Run Tests

As previously mentioned, GitHub Copilot doesn’t test the code it’s suggesting. But with that said, you can use it to suggest tests matching your code implementation.

It’s a great way to quickly import a unit test package. Here’s an example that generated a test from a plain-English comment:
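The screenshot didn’t survive extraction; as a hypothetical stand-in, here’s the shape of a test Copilot might generate from a plain-English comment (the `slugify` function and its test are illustrative, not from the original article):

```python
import unittest

def slugify(text):
    """Hypothetical function under test."""
    return text.strip().lower().replace(" ", "-")

# test that slugify lowercases, trims, and hyphenates its input
class TestSlugify(unittest.TestCase):
    def test_lowercases_trims_and_hyphenates(self):
        self.assertEqual(slugify("  Hello World "), "hello-world")
```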

You’ll still need to verify that the code makes sense, but it’s a faster alternative to completing this code on your own.

Example #4: Navigating Unfamiliar Territory

This particular use case is arguably Copilot’s best function. It’s a great way for developers to navigate the waters of an unfamiliar language or framework.

For example, let’s say you wanted to draw a scatter plot. The way you would write this code would vary significantly based on the programming language you’re using. Here’s an example of what that code would look like in Python:
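The code screenshot is missing from this page; a representative sketch of the Python Copilot typically suggests for this prompt (assuming matplotlib, the most common plotting choice, and sample data of our own) is:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
import matplotlib.pyplot as plt

# draw a scatter plot of the sample data
x = [1, 2, 3, 4, 5]
y = [2, 4, 1, 8, 7]
plt.scatter(x, y)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Sample scatter plot")
plt.savefig("scatter.png")
```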

Even if you’re experienced and comfortable writing in Python, this autocomplete function will still save you time.

But for argument’s sake, let’s say you needed to write a scatter plot in JavaScript—but you’re not very familiar with this programming language. In this case, GitHub Copilot has you covered. Look at what it can generate for you here:

To write this without Copilot, you’d be forced to manually search through public repositories for examples. Or maybe you’d use a resource like Stack Overflow to find your answers. But both of those alternatives are tedious and time-consuming.

Experienced developers love to use Copilot when navigating unfamiliar languages. Even if Copilot’s suggestions aren’t perfect, it can still get the basic syntax right. It will also point you in the right direction when it comes to common idioms, library functions, and more. Copilot can even be used as a self-help teaching tool for programmers.

Example #5: Creating an Application Entirely With Copilot

In addition to the broad examples of Copilot’s capabilities, we wanted to find a real-life success story of someone who used Copilot to create something. We found an excellent case study on LogRocket, written by a UK-based software engineer.

Let’s take a closer look at the highlights of this story.

The programmer, Evgeny Klimenchenko, decided to create a simple test application to see if Copilot could handle the project. The app was a random quote generator that also displayed the quote’s sentiment.

To truly test Copilot, Klimenchenko decided he would not search Google or Stack Overflow for solutions; he would rely solely on Copilot’s suggestions. He also wouldn’t write any new code himself, though he allowed himself to write variables, comments, and function names, and to edit the suggestions.

Within one week, Copilot helped Klimenchenko create a simple quote-generating application. It’s very basic and not useful for anything specific, but the case study proved that Copilot functions as advertised.

How to Get Started With GitHub Copilot

If it’s your first time using GitHub Copilot and you’re not sure what to do, you’ve come to the right place. The steps below will not only tell you how to use GitHub Copilot, but they’ll also set you up for success. Here’s what you need to do:

Step 1: Narrow Down Your Use Case and Goal

Technically, this isn’t a requirement for using Copilot. But if you’re just getting familiar with the tool, it’s definitely in your best interest to do so.

Don’t just approach GitHub Copilot with a “let’s see what happens” mentality. This will likely overwhelm you, and you won’t get the most out of the tool’s functions.

For example, you might decide to use GitHub Copilot strictly for auto-filling boilerplate code. Everything else you’ll write on your own as usual. But when you encounter situations where there’s an option for you to autocomplete repetitive lines, you can take advantage of Copilot.

Or maybe you’re on the complete opposite end of the spectrum. Instead of using Copilot to assist with your regular programming work, you may want to run an experiment similar to the case study we discussed earlier.

Many developers take advantage of GitHub Copilot when they’re working in an unfamiliar programming language. Copilot will help them get the syntax right and get a basic understanding of library functions.

Once you’ve identified how you plan to use Copilot for your next project, the rest of these steps will be much easier.

Step 2: Install the GitHub Copilot Extension

GitHub Copilot doesn’t come preinstalled in any editor, so you’ll need to add the extension before you can start using it.

Installation instructions vary depending on your preferred editor.

We think the GitHub Copilot extension works best in Visual Studio Code. That’s because Visual Studio Code also works in GitHub Codespaces.

After you install the extension, Copilot will prompt you to authorize the plugin by signing into GitHub. Once authorized, you should automatically be sent back to the editor. If the extension has been installed correctly, you should see the Copilot icon in your status bar.

Step 3: Learn the GitHub Copilot Keyboard Shortcuts

You should get familiar with common keyboard shortcuts for GitHub Copilot. They’ll vary slightly, depending on whether you’re using macOS, Windows, or Linux.

Here are the ones you should know:

  • Accept inline code suggestion — Tab
  • Dismiss inline code suggestion — Esc
  • Show next suggestion — Alt + ] or Option (⌥) + ]
  • Show previous suggestion — Alt + [ or Option (⌥) + [
  • Trigger suggestion — Alt + \ or Option (⌥) + \
  • Open 10 suggestions in a separate pane — Ctrl + Enter

Keep these close by as a quick reference as you’re working.

Step 4: Start Writing Your Code and Review the Suggestions

Now you just need to start working like you normally would.

As you’re writing, you’ll start to see GitHub Copilot automatically suggest options to autofill based on the context. It’s up to you whether or not to accept or reject those options.

If you don’t like what Copilot is providing, you can always cycle through other suggestions to find a more relevant option. Copilot takes some getting used to, but you’ll get the hang of it the more you use it.

Step 5: Make Edits and Test Your Code

As previously mentioned, Copilot isn’t perfect, so you can’t take its suggestions at face value.

You’ll likely need to make some minor edits to the code, and you should always run tests before committing the code to your project.

GitHub benchmarked Copilot’s accuracy by reviewing a set of Python functions in open-source repositories. They eliminated the function bodies and prompted Copilot to fill them in. Copilot got the functions right 43% of the time on its first attempt. When Copilot was allowed 10 attempts, the code was right 57% of the time.

If this benchmark is any indication of how Copilot will perform as you’re using it, there’s a good chance you’ll need to make at least some minor edits to the suggestions.

GitHub Copilot Plans and Pricing

GitHub Copilot has transformed into an AI coding assistant that enhances developer workflows. 

In spring 2023, GitHub launched Copilot X, a “readily accessible AI assistant.” According to the company, it adopted OpenAI’s GPT-4 model and introduced chat and voice for Copilot, bringing Copilot to pull requests, the command line, and docs to answer questions about projects. 

This enabled “context-aware conversations.” Developers could ask GitHub Copilot to explain a piece of code, fix an error, or even generate unit tests. 

Now, GitHub Copilot has taken its AI assistant to the next level and offers three plan options for individuals, businesses, and enterprises. We’ll briefly go over each plan below. 

Copilot Individual

  • Who’s it for: Individual developers, freelancers, students, and educators who want to code faster. 
  • Cost: $10 per month or $100 per year. 
  • Free for verified students, teachers, and maintainers of popular open-source projects.

What’s included: 

  • Chat
    • Unlimited messages and interactions
    • Context-aware coding support and explanations
    • Debugging and security remediation assistance
  • Code completion
    • Real-time code suggestions
    • Comments to code
  • Smart actions
    • Inline chat and prompt suggestions
    • Slash commands and context variables
    • Commit message generation
  • Supported environments
    • IDE, CLI, and GitHub Mobile
  • Management and policies
    • Public code filter

Copilot Business

  • Who’s it for: Organizations ready to improve engineering velocity, code quality, and developer experience.
  • Cost: $19 per user per month. 

What’s included:

  • Chat
    • Unlimited messages and interactions
    • Context-aware coding support and explanations
    • Debugging and security remediation assistance
  • Code completion
    • Real-time code suggestions
    • Comments to code
  • Smart actions
    • Inline chat and prompt suggestions
    • Slash commands and context variables
    • Commit message generation
  • Supported environments
    • IDE, CLI, and GitHub Mobile
  • Management and policies
    • Public code filter
    • User management
    • Data excluded from training by default
    • IP indemnity
    • Content exclusions
    • SAML SSO authentication

Copilot Enterprise

  • Who’s it for: Companies that want to customize GitHub Copilot to their organization and infuse AI across the developer workflow.
  • Cost: $39 per user per month. 

What’s included:

  • Chat
    • Unlimited messages and interactions
    • Context-aware coding support and explanations
    • Debugging and security remediation assistance
    • Conversations tailored to your organization’s repositories
    • Answers based on your organization’s knowledge base
    • Access to knowledge from top open-source repositories
    • Pull request diff analysis
    • Web search powered by Bing (beta)
  • Code completion
    • Real-time code suggestions
    • Comments to code
    • Fine-tuned models (coming soon as an add-on)
  • Smart actions
    • Inline chat and prompt suggestions
    • Slash commands and context variables
    • Commit message generation
    • Pull request description and summarization
  • Supported environments
    • IDE, CLI, and GitHub Mobile
    • GitHub.com
  • Management and policies
    • Public code filter
    • User management
    • Data excluded from training by default
    • IP indemnity
    • Content exclusions
    • SAML SSO authentication
    • Requires GitHub Enterprise Cloud

For more information on GitHub’s AI assistant, visit here.

The post The Ultimate Manual to GitHub Copilot appeared first on Nira.

]]>
What are the different Microsoft Copilots and how much do they cost? https://nira.com/microsoft-copilot-price/ Wed, 03 Apr 2024 20:22:21 +0000 https://nira.com/?p=10612 Choosing the right Microsoft Copilot plan isn’t as simple as it seems.  Microsoft currently offers seven versions of its generative AI chatbot that are designed for companies and two versions for individual users.  Pricing is a top concern for organizations considering the AI assistant. Adding a version of Copilot will increase your company’s monthly Microsoft… (more) What are the different Microsoft Copilots and how much do they cost?

The post What are the different Microsoft Copilots and how much do they cost? appeared first on Nira.

]]>
Choosing the right Microsoft Copilot plan isn’t as simple as it seems. 

Microsoft currently offers seven versions of its generative AI chatbot that are designed for companies and two versions for individual users. 

Pricing is a top concern for organizations considering the AI assistant. Adding a version of Copilot will increase your company’s monthly Microsoft costs and may require a yearly contract. 

Before you or your company pays for these tools, here’s what you need to know about all nine Microsoft Copilot plans and their pricing options. 

Microsoft Copilots for Work

  1. Copilot for Microsoft 365
  2. Copilot for Sales
  3. Copilot for Service
  4. Copilot Studio
  5. Copilot for Security
  6. Copilot for Finance 
  7. Copilot for Azure 

Microsoft Copilots for Individuals

  1. Copilot Free Version
  2. Copilot Pro

Microsoft Copilot FAQs

  1. Security FAQs
  2. Cost FAQs
  3. Comparison FAQs

How Microsoft Copilot Pricing Works

Microsoft offers Copilots for both individuals and businesses. There’s Copilot Studio, Copilot Pro, Copilot for Security… The list goes on and is constantly updated. 

Pricing for the majority of Copilots for Work is per user, per month, except for Copilot for Security, which has consumption-based pricing, and Copilot Studio, which is included in Copilot for Microsoft 365. 

Copilot for Finance and Copilot for Azure are still in public preview, so their pricing information will be announced at a later date.

Copilots for Work: Which Microsoft Copilot is right for my company?

1. Copilot for Microsoft 365

  • Cost: $30 per user per month.
  • Commitment: Annual commitment.
  • Requirements: A product license for Microsoft 365 Business Standard, Business Premium, E3, E5 or Office 365 E3 or E5 is required to purchase Copilot for Microsoft 365.

Microsoft Copilot’s paid enterprise and business version: Copilot for Microsoft 365 is available to companies using Microsoft 365 who choose to purchase the upgrade. 

This Copilot version integrates with Microsoft 365 apps including Teams, Word, Outlook, PowerPoint, Excel, and more. 

The tool promises to help employees strengthen and streamline work-related tasks, from summarizing research to staying engaged in meetings. 

Pros

Time savings: Copilot helps speed up tasks like transcribing meetings in real-time, summarizing long email threads in Outlook, creating drafts in Word based on prompts, and generating charts in Excel. 

Enhanced productivity: The tool analyzes behavior when using Microsoft apps and will offer tips and shortcuts for greater productivity. It helps employees organize and analyze their data in Microsoft 365, quickly and efficiently. 

Streamlined searches: Copilot for Microsoft 365 enhances searches for job-related tasks, both across the company’s Microsoft 365 environment and through detailed web searches using Copilot in Edge or Copilot in Bing. The tool explains the reasoning behind its answers and links to helpful videos and other sources that expand on a query, encouraging employees to learn more and gain greater understanding. 

Cons

Information security risks: Security professionals are increasingly worried about Copilot’s ability to search and gather data from all Microsoft 365 documents, emails, calendars, presentations, contacts, and meetings. If an employee accidentally has access to sensitive information in their company’s Microsoft environment, then so will Copilot for Microsoft 365.

Privacy issues: Copilot for Microsoft 365 analyzes user data to make suggestions, which may raise privacy issues for users or organizations. Although the tool does not keep the prompts an employee sends or receives, it does collect some data about how they interact with it for improvement and diagnostic purposes. This data is encrypted and anonymized, but it could still contain personal or organizational information. 

Cost concerns: Copilot for Microsoft 365 is an added expense on top of a Microsoft subscription, and companies need a yearlong contract to purchase the tool. This may impede smaller businesses that are hesitant to sign a one-year contract on a product that has not been fully proven.

2. Copilot for Sales

  • Cost: $50 per user per month.
  • Commitment: Annual commitment.
  • Includes Copilot for Microsoft 365.
  • Requirements: A product license for Microsoft 365 E3, E5, Business Standard, or Business Premium, or Office 365 E3 or E5 will be required to purchase Copilot for Sales. 
  • Note: Customers who already have Copilot for Microsoft 365 licenses can purchase Copilot for Sales for an additional $20 per user per month. Dynamics 365 Sales Premium customers must pay $30 per user per month for Copilot for Microsoft 365 to get the full Microsoft Copilot for Sales experience.

Copilot for Sales works with Salesforce Sales Cloud and Dynamics 365 Sales and can be configured to connect to other sales solutions.

The tool leverages data from your company’s CRM platform, as well as large language models and data from Microsoft Graph, Microsoft 365 apps, and the web. 

This helps sales teams save time and energy, build stronger customer relationships, and ultimately, close more deals, according to Microsoft. 

Pros 

Personalized customer interactions: Copilot for Sales helps sales teams prep for meetings and strengthen customer relationships. The tool provides past meeting notes, emails, opportunity summaries, and related content in Outlook and Teams. It offers sales tips, related information, and answers to customer’s questions during Teams calls. After a call, Copilot gives meeting summaries including keyword and conversation analysis, competitor mentions, KPIs, and suggested next steps. 

Collaboration with CRMs: Copilot for Sales works with your company’s CRM to help sales teams have a better experience. Teams can capture and edit customer and opportunity details in Outlook and Teams and sync to their CRM platform. Using the tool, they can update records, create and share contact cards, and collaborate with their company’s preferred CRM. 

Simplified tasks: The tool allows employees to quickly draft emails and set up meetings in Outlook using data from their company’s CRM platform and Microsoft Graph. Copilot also helps with creating pitch decks, meeting preparation briefs, and data visualizations in PowerPoint, Word, and Excel. 

Cons

Security and privacy risks: Since Copilot for Sales includes Copilot for Microsoft 365, many of the same concerns about privacy and security apply. What’s more, Sales teams may have access to more specific confidential information like pricing models or customer information that should be protected.  

Collaboration setup issues: For Copilot for Sales to be productive, several factors need to be in place. A company needs a well-operating CRM system, seamless connections among Copilot, Teams, Outlook, and either D365 or Salesforce, and precise, properly organized corporate data. CRMs with valuable, organized customer information can be tricky to configure, and not all companies are prepared to integrate theirs with an AI tool that can search and analyze their data in minutes. 

Cost concerns: Like Copilot for Microsoft 365, the yearly commitment cost could be a concern for smaller businesses that do not have the resources to try an unproven tool on top of their other Microsoft subscription costs.

3. Copilot for Service

  • Cost: $50 per user per month.
  • Commitment: Annual commitment.
  • Includes Copilot for Microsoft 365.
  • Requirements: A product license for Microsoft 365 E3 or E5, Microsoft 365 Business Standard or Premium, or Office E3 or E5 is required.
  • Note: Customers who have Copilot for Microsoft 365 licenses can purchase Copilot for Service for an additional $20 per user per month.

Microsoft Copilot for Service uses generative AI to boost traditional customer service solutions. The tool enhances customer experiences and improves agent productivity by integrating with existing contact center and CRM solutions. 

This Copilot variation requires no software installation or complex integrations and can be deployed in Microsoft apps and products, such as Outlook and Teams, to help agents thrive.

Agents gain access to real-time responses from different content sources, including third-party knowledge bases like Salesforce, ServiceNow, and Zendesk.

Pros 

AI-powered conversations: Copilot for Service enables AI-powered conversations without replacing a company’s existing solutions. It allows agents to connect all their data including information from their CRM, contact center systems, and other sources.  Companies can use out-of-the-box integrations with services such as Salesforce, and configure the tool to meet their needs. 

Agent empowerment: The tool aids productivity by allowing agents to quickly respond to customers with answers from knowledge base articles, documentation, and agent handbooks. Copilot is available where agents already work, including Microsoft 365 apps. Using Copilot for Microsoft 365, agents can summarize cases, update CRM records, and draft emails through access to Copilot in Outlook and Copilot in Teams.

Tailored customization: Teams can configure and customize Copilot for Service to meet their company’s specific needs, and further extend their copilots to unique requirements with Microsoft Copilot Studio (more on that Copilot in the following section). 

Cons

Learning Curve: With new tools, there is often a learning period from the setup and implementation to when end users start to see benefits. Copilot for Service integrates with CRMs, contact center systems, and other sources; this could take time and resources to set up properly before it works well enough for employees to reap the rewards. 

Security Concerns: Copilot for Service includes Copilot for Microsoft 365 which means the tool has access to a large amount of data across Microsoft 365 applications. When working with customer services, this data could include highly sensitive information such as customer personally identifiable information (PII) and personal health information (PHI).

Pricing problems: Smaller businesses may balk at signing up for a yearlong commitment without first trying out the tool. Prices listed on the site are for marketing purposes and may not reflect the final cost when a company decides to buy.

4. Copilot Studio 

  • Cost: $200 per tenant per month for 25,000 messages.
  • Commitment: Pay per month.  
  • Requirements: Included with a Copilot for Microsoft 365 subscription, and with the Digital Messaging and Chat add-ons for Dynamics 365 Customer Service. You can obtain a standalone Copilot Studio subscription from the Microsoft 365 admin center. See more details here.

Copilot Studio allows employees to build copilots using generative AI, dialog creation, plugin capabilities, process automation, and built-in analytics that work with Microsoft conversational AI tools.

Employees can engage with customers and other employees using various languages across platforms such as websites, mobile applications, Facebook, Microsoft Teams, or any channel facilitated by the Azure Bot Framework.

According to Microsoft, these copilots can be created by anyone, without the need for dedicated data scientists or developers. The tool is available as a standalone web app and as a discrete app within Teams.

Pros 

Custom builds: Copilot Studio allows users to build and run their own copilots across websites and other channels. They can create custom copilots and GPTs, incorporating their data and plugins. It also lets them customize their Copilot for Microsoft 365, enabling greater control and efficiency. For example, they can design conversations for predictable scenarios that require specific responses, like compliance or regulatory topics. 

Workflow automation: Copilot helps automate complex business tasks such as submitting expenses, onboarding employees, and updating benefits. Employees may see greater productivity and time savings by automating these processes. 

Low-code platform: With more than 1,200 data connectors and a low-code authoring canvas, Copilot Studio simplifies the process of creating complex workflows and generative answers. Employees could easily use the tool to create a copilot for IT support, a copilot that helps customers choose a product, or one that helps suppliers track order status, to name just a few examples from Microsoft. 

Cons

Limited training materials: Copilot Studio is new, and there’s still not much educational material on how to use the tool. End users have expressed frustration about the tool’s learning curve and not being able to find adequate training resources, especially as their companies transition to AI-powered tools quickly. 

Unproven reliability: AI-generated code or content might not always meet a company’s required standards, which means employees need to incorporate thorough reviews and testing. This process can slow employees down as they start getting the hang of the tool. 

Information security and privacy risks: Any Copilot that works with Copilot for Microsoft 365 and has access to a large amount of company data will have security and privacy risks. It’s important to be aware of these issues and make sure your Microsoft 365 environment is secure before implementing AI assistants. Learn how to do that here.

5. Copilot for Security

  • Cost: $4 per Security Compute Unit (SCU) per hour of usage.  
  • Commitment: Consumption-based model. 
  • Requirements: The only prerequisite for Copilot for Security is an Azure account. However, Microsoft recommends connecting your company’s Microsoft Security tools and integrating other security tools to get the most value from Copilot.

Copilot for Security was officially launched on April 1, 2024. According to Microsoft, it is the only security AI product that combines a specialized language model with security-specific capabilities from Microsoft.

The tool works with other Microsoft Security products, including Microsoft Defender XDR, Microsoft Sentinel, Microsoft Intune, Microsoft Entra, Microsoft Purview, and Microsoft Defender for External Attack Surface Management. 

It also has integrations with independent software vendors to provide plugins and promptbooks; partners include Cloudflare, Darktrace, and Accenture, to name a few. 

Pros 

Simplified script analysis: Users appreciate Copilot's ability to quickly break down complex scripts, providing clear insights into their contents and making the analysis process more straightforward. 

Faster threat hunting: Copilot accelerates threat hunting by assisting in query writing based on adversary methodologies, streamlining the process and making it more efficient.

Enhanced analyst experience: The tool minimizes the need for practitioners to switch between multiple tools and interfaces. Incident reports can also be generated as a template, ensuring data is readily available for executive review and reducing unnecessary communication loops.

Cons

Learning curve for practitioners: It takes roughly 40 hours of training for security practitioners to be comfortable using Copilot for Security. It’s not a terrible amount of time, but with IT and Security teams already strapped for resources, this learning curve could come at a cost. 

Training isn’t enough: According to Forrester, it also takes four or more weeks to get practitioners comfortable with the technology. This underlines the need for more than training: practitioners should change their behavior for the tool to be most effective.

Limited integrations: Integrations are still limited, although Microsoft plans to roll out more with future updates. 

Copilots in Public Preview 

The following Copilots are in public preview. This means they are still being rolled out with limited capabilities in select global markets.

6. Copilot for Finance

  • Cost: In public preview with pricing to be announced.
  • Commitment: To be announced.
  • Requirements: Requirements to be announced.  

Copilot for Finance offers insights that reduce the time spent on manual, repetitive work for finance professionals. The tool connects directly to Dynamics 365 and SAP and can also be connected to more than 1,200 other systems via Copilot Studio. 

Finance teams can navigate Excel, Outlook, and Teams without toggling between applications to locate what they need. The tool automatically adopts the security, compliance, and privacy policies established in your company’s Microsoft 365 environment.

Pros 

Faster time to decision-making: Copilot helps teams drive growth through actionable recommendations, proactive anomaly detection, and tailored prompts and guidance. Teams can streamline operations with precise commentary, reports, and insights gleaned from various data sources.

Cost reduction: Companies can cut costs and reclaim time by leveraging Copilot to convert manual tasks into more efficient processes. The tool automates labor-intensive tasks such as collections, contract management, and invoice processing. Copilot will generate initial drafts for customer communications, including relevant attachments, to save teams time and resources. 

Boosted productivity: In Excel, Copilot for Finance enables more streamlined data reconciliation. In Outlook, it supports the collections process. Copilot can empower collections teams by giving them a complete summary of customer balance history using Copilot-guided prompts and recommendations. The tool connects to existing financial data sources, including ERP, using prebuilt connectors and Microsoft Copilot Studio.

Cons

Early stage tool: This offer is in its early stages, and there’s less information available about its future costs and benefits. It can be difficult to judge a tool until it has been fully rolled out. 

Sensitive data at risk: Finance data is often highly sensitive and should be handled in specific ways to stay compliant with industry and company requirements. Allowing tools like Copilot to access a finance team's information could introduce new security and compliance risks. 

Delayed ROI: Without the proper setup and rollout of this tool, it could be a while until teams see a return on their investment. Although Microsoft markets Copilot for Finance as an assistant that will help these teams be more efficient, and thus cut costs, it could take time before companies see the full benefits. 

7. Copilot for Azure

  • Cost: Copilot for Azure is offered at no additional cost during the public preview period. Microsoft will provide updates at a later date.
  • Commitment: To be announced.  
  • Requirements: You need an Azure account to apply for access. If you are new to Azure, you may sign up for a free account and then apply for access.

Copilot for Azure is an AI assistant that helps employees design, operate, and troubleshoot applications and infrastructure across the cloud and edge.

This variation utilizes language models, the Azure control plane, and deep insights into Azure and Arc-enabled assets. 

Tasks are executed under Azure's committed framework for information security and privacy, which, according to Microsoft, has one of the largest compliance certification portfolios in the industry.

Pros 

Custom insights: Copilot offers customized solutions for managing employee workloads through an AI assistant tailored to a company’s workspace. Employees can utilize Copilot’s AI support to address queries, compose commands, and execute tasks using natural language.

Optimized environment: The tool enhances cost-efficiency, security, and reliability by leveraging recommendations catered to a company’s Microsoft environment. Copilot for Azure consolidates knowledge and data from hundreds of services, boosting productivity while reducing expenses.

Efficient cloud operations and management: Copilot coordinates data flow across Azure services to gain insights, helping summarize issues, identify root causes, and offer concrete solutions.

Cons

Early phase pains: End users have commented that when using Copilot for Azure for different use cases involving Kubernetes, storage accounts, and other resources, the tool had trouble understanding the full context of their Azure environment. When Copilot doesn't understand, it apologizes or provides generic steps and CLI commands that take time to sift through and aren't particularly helpful. 

Overwhelmed requests: The tool is still in public preview, and that comes with limitations. Users who have experimented with Copilot for Azure noted they consistently received messages about overwhelming demand from Microsoft customers, and that the company had implemented conversation and turn limits. To get the most out of this tool, companies may need to wait until it's out of preview and more generally available. 

Hallucination issues: Most generative AI tools will occasionally “hallucinate”—aka give inaccurate or misleading responses—and Copilot for Azure is no different. As Microsoft-certified Trainer Jussi Roine put it after trying the tool: “There is a bit of hallucination here and there, which we probably all expect to see now and then.” It’s important to be aware of this weakness when using any Copilot or AI assistant. 

For Individuals: Which Microsoft Copilot is right for me?

8. Copilot Free Version

The free version of Copilot allows users to ask the chatbot questions about various topics, from recipe ideas to fitness plans. A sidebar offers suggestions such as “Designer,” “Vacation Planner,” “Cooking Assistant,” and “Fitness Trainer.” Users can then choose the tone of the conversation to be more “creative,” “balanced,” or “precise.”

Although you can play with Copilot’s free version without a Microsoft account, the website encourages users to sign in to ask more questions and have longer conversations. It also offers an app to download and a “Notebook” section that lets users write detailed prompts to collaborate with Copilot on creating content. This is all free for end users, but you are only given five questions per day unless you sign into a Microsoft account, which allows 30 chats per day. 

9. Copilot Pro

Copilot Pro is priced at $20 per user per month and includes “premium AI features.” This tool is designed with individuals and independent creators in mind and is meant to be better optimized than the free version. 

According to Microsoft, it gives users priority access to GPT-4 and GPT-4 Turbo for faster performance. It also allows you to build your own Copilot GPTs tailored to individual needs and interests. It integrates with apps including Word, Excel, PowerPoint, and Outlook to enhance productivity. The tool generates images and enhances creations using “100 daily boosts” with Designer. 

How Do You Access Microsoft Copilot?

Copilot in Bing

According to Microsoft, Copilot in Bing “builds on the existing Bing experience to provide you with a new type of search.” Copilot takes Bing searches and returns with a more thorough response. It goes beyond compiling a list of relevant links and instead gathers information from across the internet to provide a concise, comprehensive answer. 

Copilot in Bing can be accessed on any device; it's available as a mobile app and works in different browsers, including Microsoft Edge, Chrome, Firefox, and Safari. It can engage in chats, generate images with DALL-E 3, and even use photos from your phone in its chats.

Copilot in Edge

Copilot in Edge integrates into the Microsoft Edge browser and offers features based on the context of the web pages you’re viewing. 

When you open Copilot on the Edge sidebar, you can search and summarize the specific webpage you’re on without changing tabs. Copilot will answer questions, compare products, and summarize documents right in the browser’s sidebar. 

Copilot in Edge also allows for the customization of AI-generated images and will soon enable sending emails via Outlook.

Copilot in Windows

Copilot for Windows is built into Windows 11, offering access via the taskbar or a dedicated key on select keyboards. It provides answers from across the web and assists with tasks like adjusting PC settings and organizing windows with Snap Assist. 

Windows 11 also includes revamped versions of classic Microsoft tools with “AI-powered features.” For example, Microsoft Paint now has new AI tools to help you edit photos and create art. AI features have also been added to the Snipping Tool, the Photos app, and Clipchamp, which lets you use AI to edit footage. 

Copilot for Windows is mainly for Windows 11, but is available “in preview” on select Windows 10 devices; this means it’s being rolled out with limited capabilities in select global markets.

Microsoft Copilot FAQs and Final Thoughts

Security of Copilot FAQs

1. What are the security risks of using Copilot for Microsoft 365?

The problem with Copilot for Microsoft 365 is it can leverage all the data that an employee has access to: if the employee has access to sensitive information, then so will Copilot, which can lead to potential data leaks.

Security professionals are primarily worried about the tool’s ability to search and gather data from all Microsoft 365 documents, emails, calendars, presentations, contacts, and more.

The four main risks of using Copilot for Microsoft 365 include oversharing and inadequate permissions, insider risks, external threat actors, and privacy concerns. Read more about these risks and how to combat them. 

2. How do I keep my company’s data safe when using Copilot for Microsoft 365?

A concrete security plan for securing access to Microsoft 365 will make it easier to confidently deploy Copilot. It’s important to make sure your company has the right permissions, access controls, and security policies in its Microsoft 365 environment. To help with this, reach out here.

Cost of Copilot FAQs

1. What is the return on investment (ROI) for Copilot for Microsoft 365?

T-Minus 365 wrote a full analysis of the potential ROI of using Copilot for Microsoft 365. The short answer: “The break-even point for Copilot investment is 54 minutes saved per month given an employee salary of $70,000 per year. Over 100% return is achieved with just two hours saved each month.”
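The arithmetic behind that figure is easy to sanity-check. Here is a minimal sketch, assuming the $30 per user per month list price for Copilot for Microsoft 365 and a standard 2,080-hour work year (both are our assumptions, not inputs taken from the T-Minus 365 analysis):

```python
import math

def breakeven_minutes(license_cost_per_month: float,
                      annual_salary: float,
                      hours_per_year: int = 2080) -> float:
    """Minutes an employee must save per month to cover the license cost."""
    hourly_rate = annual_salary / hours_per_year          # ~$33.65/hour at $70k
    hours_to_break_even = license_cost_per_month / hourly_rate
    return hours_to_break_even * 60

minutes = breakeven_minutes(30.0, 70_000)
print(f"Break-even: about {math.ceil(minutes)} minutes saved per month")
# Break-even: about 54 minutes saved per month
```

Two hours saved per month (120 minutes) is roughly 2.2 times the break-even point, which is consistent with the "over 100% return" claim.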

2. Can small businesses afford Microsoft Copilot?

The upfront cost may be intimidating for smaller businesses, but Copilot offers the potential for substantial ROI and the ability to adjust adoption levels as needed. However, without the right rollout and training, that investment could easily go to waste.

Comparison of Copilot FAQs

1. What’s the difference between Microsoft Copilot and Google Gemini?

  • Microsoft Copilot: Copilot’s greatest strength (and potential security weakness) is its deep integration with Microsoft and everything companies have inside their Microsoft 365 tenant. If you’re a company that uses Microsoft, then this tool was made for you. However, when it comes to international use, Copilot offers support for fewer languages compared to other AI models. Also, Copilot may not provide as much transparency with sources compared to Gemini, as it relies on Microsoft’s infrastructure, which could limit users’ ability to verify information and assess the credibility of content.
  • Google Gemini: Formerly known as Bard, Gemini is plugged into Google’s ecosystem. When it comes to response speed, Gemini is one of the fastest AI assistants. Gemini’s strong points include its transparency with sources, image recognition capabilities, and text creation abilities. However, its capabilities are currently not as strong as Copilot for Microsoft 365, and it’s more of a tool that helps you organize and optimize what you’re already doing in Google Workspace. 

To read more about Copilot vs. Gemini, visit here.

2. Is Copilot Pro worth it for individuals?

So far, Copilot Pro has received mixed reviews from online users. Some say it's markedly faster than the free version, and they like that it allows more messages than the 30 chats per day the unpaid version offers when signed into a Microsoft account. Others claim it's been a waste of money and dislike that it lacks certain features, such as chart generation and a data analysis tool. However, more features are forthcoming as Microsoft improves the assistant. 

Should I deploy Microsoft Copilot in my company? 

When thinking about deploying Microsoft Copilot in your organization, you should consider several points. First, which version is right for your company, and is it cost-effective? Next, do the benefits of deploying it outweigh the risks? 

Will Copilot save time, make workers happier, and increase productivity through an “AI-employee alliance,” as Microsoft promises, or is it an overhyped tool that opens up organizations to unnecessary risks? 

Without the proper rollout and training for these Copilots, companies may not take advantage of their full potential and could be at a greater security risk. 

One thing is clear: AI assistants aren’t going away anytime soon. Only time will reveal the different security implications and potential benefits of using these Copilots.  

Get in touch here to discover best practices for maintaining your company’s security while deploying generative AI tools. We’ll help ensure the safety of company files across your organization’s Microsoft 365 environment, including SharePoint, OneDrive, and Teams.

 

The post What are the different Microsoft Copilots and how much do they cost? appeared first on Nira.

]]>
How to Reduce Information Security Risks When Deploying Copilot for Microsoft 365 https://nira.com/securing-copilot-for-microsoft-365/ Tue, 19 Mar 2024 18:06:08 +0000 https://nira.com/?p=10572 The mass adoption of generative artificial intelligence has outpaced security measures as companies rush to develop and release AI-powered products. Microsoft launched Copilot, Google released Gemini, and OpenAI has made headlines with ChatGPT. Organizations are eager to test AI assistants for company tasks, from creating content to writing code. But developing technology comes with new… (more) How to Reduce Information Security Risks When Deploying Copilot for Microsoft 365

The post How to Reduce Information Security Risks When Deploying Copilot for Microsoft 365 appeared first on Nira.

]]>
The mass adoption of generative artificial intelligence has outpaced security measures as companies rush to develop and release AI-powered products.

Microsoft launched Copilot, Google released Gemini, and OpenAI has made headlines with ChatGPT. Organizations are eager to test AI assistants for company tasks, from creating content to writing code.

But developing technology comes with new security risks, and companies using AI products have concerns.

According to Cisco’s Data Privacy Benchmark Study, 91% of organizations acknowledge the need to reassure their customers about using generative AI.

Respondents—security professionals worldwide—were worried that generative AI could hurt the organization’s legal rights and intellectual property (69%) and that the data entered may be shared publicly or with competitors (68%).

As more employees use these tools, what do you need to know about AI assistants to keep your company data safe? 

We’ll explore the risks of using Microsoft Copilot, explain how to reduce them, and help you decide if you want to implement the tool in your company.

The four key Microsoft Copilot risks we’ll cover are:

  1. Oversharing and inadequate permissions
  2. Insider risks
  3. External threat actors
  4. Privacy concerns

What is Copilot for Microsoft 365?

Microsoft Copilot is a generative AI chatbot developed by Microsoft in collaboration with OpenAI that launched on February 7, 2023. 

Copilot for Microsoft 365 is Microsoft Copilot's paid enterprise and business version, available to companies using Microsoft 365 that choose to purchase the upgrade. These organizations must have a product license for Microsoft 365 Business Standard, Business Premium, E3, E5, or Office 365 E3 or E5. 

Copilot for Microsoft 365 promises to help end users with work-related tasks, from summarizing research to staying engaged in meetings. The tool integrates with a variety of Microsoft apps including Teams, Outlook, Word, Excel, PowerPoint, and more. 

Microsoft calls Copilot for 365 a “new AI-employee alliance” that will lessen workloads and boost productivity, and organizations are already thinking about the pros and cons of adding the tool. 

According to a poll conducted by Nira, 49% of IT and Security leaders said they already use or plan to use Microsoft Copilot in their companies. 

However, not all companies were on board with Microsoft's "everyday AI companion." Of the remaining respondents, 39% said they would not use Copilot, and 12% said their company did not allow it.

Companies still have concerns about giving Copilot for 365 free rein in their organizations. 

Security professionals are primarily worried about the tool’s ability to search and gather data from all Microsoft 365 documents, emails, calendars, presentations, contacts, and more. 

Microsoft Copilot Security: Concerns, Risks, and Vulnerabilities

Imagine the huge amount of data your company has built up from every email, meeting, chat, and document in your company’s Microsoft 365 tenant. 

Copilot for Microsoft 365 can analyze it all. 

And, if not handled securely, sensitive company data could be exposed.

One major issue arises when an employee has access to a sensitive company document without realizing it. If overshared sensitive data gets into the wrong hands, the security implications could be far-reaching. 

We’ll look deeper at the four biggest security risks of using Copilot below. 

Copilot Risk 1: Oversharing and inadequate permissions

The problem with Copilot for Microsoft 365 is it can leverage all the data that an employee has access to: if the employee has access to sensitive data, then so will Copilot. 

One issue we see in our work at Nira is oversharing data in organizations. 

According to a 2023 report, “16% of an organization’s business-critical data is overshared.” That adds up: companies face an average of 802,000 files containing “business-critical data” at risk due to oversharing. 

The results become more worrying when we realize that 83% of “at-risk files” were overshared with users and groups within the organization. Files that may be confidential or overly sensitive could be exposed to the entire company. 

Overall, more than “15% of all business-critical files are at risk from oversharing, erroneous access permissions, and inappropriate classification,” according to the study. 

Because of oversharing, a company’s sensitive files could be seen by users who do not have the proper access permissions, leading to a risk of data exposure. Using Microsoft Copilot—a tool that makes information easier for employees to access than ever—exacerbates data exposure risk significantly.

Employees often have more access to company information than they need or even realize. 

Sharing Microsoft documents, for example, is as easy as adding collaborators or opening a link that anyone in the company or on the internet can access. And, permissions are rarely cleaned up once people no longer need access.

For example, a Nira survey found that more than half of employees said they or a coworker accidentally added their personal email accounts to company documents.

And, although 40% of employees said they cleaned up vendor access on documents once an engagement was over, the rest (60%) only did clean up sometimes, rarely, or never. 

To reduce risks when using Copilot, companies must first ensure their data is not being overshared internally and externally. 

Copilot Risk 2: Insider risks

Employees misusing Copilot, either by accident or on purpose, could lead to data breaches or potential leaks. 

Data breaches caused by unauthorized access, what we call access risks, are happening more frequently as cloud collaboration grows. Breaches are costly, time-consuming to fix, and damage customer relationships. 

Nearly 80 percent of access-risk incidents are caused by employee accidents. One issue is employees don’t always realize all the data they have access to. 

For example, Nick Ross of T-Minus365 asked Copilot to provide a list of all the files and chats that referenced a specific topic. Nick was surprised when the tool surfaced “a citation of a document I didn’t know existed and a Teams Channel that I didn’t know I was a member of.” 

In Nick’s example, the company data that Copilot found was not sensitive; it becomes an issue when employees accidentally surface confidential or sensitive information they should not. 

And although most incidents tend to be accidental, malicious exfiltration can happen. Insiders could purposefully use prompt engineering techniques to ask questions that would give them information they shouldn’t have. 

In extreme scenarios, insiders could compromise the security of the Copilot software or its underlying model by exploiting vulnerabilities or introducing malicious code. 

Copilot Risk 3: External threat actors 

What may become more common is threat actors compromising an account with AI-enabled features and trying to exfiltrate sensitive data through the use of AI. 

For example, an attacker gains access to an account with Copilot for 365 and attempts to use it to access sensitive data from Microsoft Graph. 

Or a threat actor could leak a response to a prompt that contains confidential information, for example, by prompting for a financial report or leaking a response that reveals a trade secret. 

If attackers gain access to a Copilot-enabled account, company data is at risk. 

It’s vital to have the right permissions, security features, and data protection policies set up in your company’s Microsoft 365 tenant (i.e., OneDrive, SharePoint) to avoid Copilot misuse. 

Copilot Risk 4: Privacy concerns

Legal or privacy challenges may arise from using Copilot-generated content, including intellectual property rights and compliance issues.

We’ve seen multiple lawsuits in the news revolving around generative AI tools. OpenAI is currently facing copyright and legal challenges from the New York Times, several digital media outlets, and various authors, comedians, and radio hosts.

To avoid similar issues with Copilot for Microsoft 365, organizations must handle privacy implications carefully for compliance, legal, and ethical reasons. This can include changing Copilot’s privacy settings or sanitizing information that Copilot is given access to.

According to Microsoft, Copilot for Microsoft 365 can access your organization’s data through Microsoft Graph, including emails, chats, and documents that you have permission to access. 

The suggestions from Microsoft Copilot are relevant and based on your data, but this “also means that Microsoft Copilot may expose sensitive or confidential information in its responses.” 

Microsoft warns that you should always check the content that Microsoft Copilot creates before you use or share it with anyone else. 

Although the tool does not keep the prompts that you send or receive, it does collect some data about how you interact with it for improvement and diagnostic purposes. 

This data is encrypted and anonymized, but it could still contain parts of your personal or organizational information, Microsoft cautions. 

Users can choose not to share this data with Microsoft by changing their privacy settings in Microsoft 365 apps.

Copilot for Microsoft 365: How to prepare your company

Step 1: Understand risks in your company’s Microsoft 365 tenant

The key to a data security strategy is understanding your company’s Microsoft 365 environment and being aware of its risks. 

On one hand, Copilot only uses data from a current user’s Microsoft 365 tenant. This means that if a user is a guest in another tenant or your company uses “cross-tenant sync,” then that data should be safe.  

However, your company’s Microsoft 365 tenant acts as a “central identity provider” for your organization with a set of accounts, groups, and policies. Permissions and sharing of resources happen across this tenant as your employees collaborate quickly. 

To keep this environment safe, you must understand your company’s risks and make sure your organization’s tenant is secure. 

Microsoft suggests several steps for configuring your organization’s tenant, which you may be familiar with if you’ve worked with a company that uses Microsoft. 

However, with the high frequency of oversharing and bad permissioning, it can’t hurt to review these steps and get a better overview of the environment you need to protect. 

Step 2: Ensure your company has the right permissions, access controls, and security policies

Keeping your company secure when using Copilot for Microsoft 365 goes back to having the right permissions in the first place.

Microsoft puts it this way:

“It’s important that you’re using the permission models available in Microsoft 365 services, such as SharePoint, to help ensure the right users or groups have the right access to the right content within your organization.”

Download the Security Audit Checklist for Microsoft 365 and Google Drive

Many security professionals adhere to the idea of “least privilege” when it comes to permissions. This includes who has access to company documents, presentations, administrator accounts, etc. You want only to give employees the permissions they truly need. 

To keep your tenant secure, you must identify and understand these permissions and make sure they are up-to-date. 

This goes along with the principle of access control. You must control access to your company’s data and make sure your company has the proper security policies in place. 

To protect company files in Microsoft 365 and Google Workspace using access control and automated security policies, visit here
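As an illustrative sketch of what an access review can look like in practice, the snippet below lists a file's permissions through the Microsoft Graph API and flags sharing links that are broader than explicitly named users. The endpoint (`/drives/{drive-id}/items/{item-id}/permissions`) is real Graph surface, but the token, IDs, and the "overshared" threshold are assumptions you would adapt to your own tenant and policies:

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def list_permissions(token: str, drive_id: str, item_id: str) -> list:
    """Fetch the permission objects attached to a SharePoint/OneDrive item.

    Requires an OAuth access token with at least Files.Read.All.
    """
    req = urllib.request.Request(
        f"{GRAPH}/drives/{drive_id}/items/{item_id}/permissions",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]

def is_overshared(permission: dict) -> bool:
    """Flag sharing links whose scope exceeds explicitly named users."""
    link = permission.get("link") or {}
    return link.get("scope") in ("anonymous", "organization")

# Example permission shapes, modeled on Graph's sharingLink facet:
anyone_link = {"roles": ["read"], "link": {"scope": "anonymous"}}
direct_grant = {"roles": ["write"], "grantedToV2": {"user": {"displayName": "A. Lee"}}}
print(is_overshared(anyone_link))   # True  (an "anyone with the link" share)
print(is_overshared(direct_grant))  # False (explicit grant to one user)
```

A review like this pairs naturally with least privilege: anything flagged as organization-wide or anonymous is a candidate for tightening before Copilot can surface it.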

Step 3: Educate your employees and give them the tools they need

Employees can be your best resource when keeping data safe, but without the proper training and tooling, they are at a disadvantage. Most employees are strapped for time (one of the reasons they are using AI assistants) and do not have excess resources for access control. 

It often falls on IT and Security teams to educate employees about security risks and make sure they follow proper procedures. But, these teams are often swamped themselves. 

It’s important to provide resources for employees so that they understand the guidelines and risks when using Copilot for Microsoft 365. 

Overall, employees need to understand their permissions and what files and data they have access to in Microsoft 365, especially when it comes to sensitive information. 

To learn how employees can get visibility and control over access to SharePoint, OneDrive, and Teams documents they own, visit here.

Copilot for Microsoft 365 Security Plan: How to reduce risks

Having a plan for protecting your data in Microsoft 365 is a vital resource before enabling Copilot. When creating your plan, think about these questions: 

  1. What does your company consider to be sensitive data?
  2. Where is that sensitive data found in your company’s Microsoft 365 tenant?
  3. How is your company’s data being shared internally?
  4. How is your company’s data being shared externally?
  5. How are employees using links for sharing internally and externally?
  6. Who has access to your company’s data, and what are their permissions?
  7. Does your company use data classification tools like Microsoft Purview?
  8. How are you applying labels in your organization?
  9. Has your company incorporated Microsoft’s Data Loss Prevention (DLP) methods or another DLP solution? 
  10. What are your company’s current access controls for users and groups?
  11. Have you updated your company’s access policies based on the risks you’ve identified?
  12. Have you created a plan for your company’s data and access control lifecycle?
  13. Does your company have data retention policies?
  14. Do you have ways to educate employees about access risks and permissions?
  15. Do your employees have a way to clean up permission misconfigurations?
  16. If employees were to use Microsoft Copilot, how could that affect your company?
  17. What information might be exposed if Copilot were rolled out?

Having a concrete plan for securing access to Microsoft 365 will help with your company’s overall security response and make it easier to deploy generative AI tools with confidence.  
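
A lightweight way to operationalize the checklist is to track which questions remain open before enabling Copilot. A sketch, with question keys abbreviated from the list above for this example:

```python
# Illustrative sketch: track which rollout-plan questions are still open.
# The question keys are abbreviations of the checklist above, chosen for
# this example.
plan = {
    "sensitive_data_defined": True,
    "sensitive_data_located": False,
    "internal_sharing_reviewed": True,
    "external_sharing_reviewed": False,
    "dlp_in_place": True,
}

open_items = [question for question, done in plan.items() if not done]
print("Ready to enable Copilot:", not open_items)
print("Still open:", open_items)
```

Even a simple tracker like this makes the rollout decision explicit: Copilot stays off until the open list is empty.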

Stop oversharing and fix document permissions, in seconds

Companies must be mindful of access risks when enabling Copilot for Microsoft 365 in their organization. It’s vital to be aware of how sensitive data is handled and who has access to it. 

Permissions should be updated, and employees must be given the right tools and training to keep their data safe. 

A Data Access Governance system can solve the problems of oversharing and inaccurate document permissions in tools like Google Workspace, Microsoft OneDrive, SharePoint, and Teams. It automates compliance with your company's security policies and remediates risks from unauthorized access in seconds. 

To learn how to keep your company's data safe when using Copilot for Microsoft 365 and other AI tools, reach out here.

The post How to Reduce Information Security Risks When Deploying Copilot for Microsoft 365 appeared first on Nira.

]]>
9 Security Tools Your IT Team Needs for Remote Work https://nira.com/security-tools-remote-work/ Tue, 13 Feb 2024 21:40:41 +0000 https://nira.com/?p=10510 While a return-to-office (RTO) is making a comeback, one trend is here to stay: the era of hybrid and remote work was not a fleeting moment—it’s the new norm.  Remote work isn’t just about cutting costs; it’s about offering teams flexibility, tapping into a global pool of talent, and rewriting the rules of what an… (more) 9 Security Tools Your IT Team Needs for Remote Work

The post 9 Security Tools Your IT Team Needs for Remote Work appeared first on Nira.

]]>
While a return-to-office (RTO) is making a comeback, one trend is here to stay: the era of hybrid and remote work was not a fleeting moment—it’s the new norm. 

Remote work isn’t just about cutting costs; it’s about offering teams flexibility, tapping into a global pool of talent, and rewriting the rules of what an organization should be. 

Yet, companies often drive this shift without equipping their IT teams to handle the transition securely, leaving them scrambling for solutions.

IT and Security teams need solid tools that keep companies secure—without disrupting smooth team collaboration. 

When working remotely, here are the nine channels your company should be protecting with IT and Security tools.

1. Communication

Ensuring the safety of remote communication channels is crucial in any workplace. Whether it’s email, messaging, or video apps, every channel deserves attention. Your company should be able to effortlessly secure cloud applications like Slack, Teams, and Zoom. 

Failure to do so can bring a host of security issues. For example, after Zoom’s infamous credential hacks of 2020, companies became more conscious of securing their video conferencing tools. 

More recently, Microsoft faced issues after state-backed attackers broke into the company’s email system and accessed the accounts of Microsoft’s leadership team. This same group had made previous attempts to steal credentials from 40 organizations by infiltrating Microsoft Teams chats. 

Communication channels are often used by all employees within a company, and even the most seasoned security professionals can be the victims of social engineering scams. Meanwhile, securing and monitoring these channels can take time away from already busy IT and Security teams. 

Solutions to help

  • Darktrace specializes in artificial intelligence-based cybersecurity, especially when it comes to securing email and cloud applications including Zoom, Slack, and Salesforce.
  • Mandiant is known for its security solutions that protect against advanced threats, including targeted attacks and zero-day exploits.

2. Cloud Document Security

Cloud-based documents are everywhere in remote work environments. Through a simple link, files in Google Workspace, Microsoft OneDrive, and SharePoint can be shared with employees, entire companies, or even the internet at large.  

Often, companies do not realize every single account that has access to a file, especially if users with personal email accounts are involved. 

Confidential company documents may be accessed by vendors or personal accounts that have weaker protections and greater risks of security incidents. 

Take the case of Cisco's breach in 2022, in which an attacker gained control of an employee's personal Google account that was syncing the company credentials saved in the employee's browser.

Okta faced a similar incident in 2023 after confirming that a data breach likely came from an employee storing company credentials in a private Google account. 

Okta Security identified that the employee signed in to their personal Google profile on the Chrome browser of their Okta-managed laptop. The service account’s username and password were then stored in the employee’s personal Google account, potentially contributing to the breach.

Companies must proactively protect their sensitive documents before they get overshared and accessed by the wrong people.  

Solutions to help 

  • Nira helps companies protect their cloud-based documents from unauthorized access. The cybersecurity platform gives administrators complete visibility and control over who has access to company information in applications like Google Workspace, OneDrive, and SharePoint, along with an employee-facing security portal and security policy automation.

3. Access Control and Authentication

Remote work environments often rely on usernames and passwords for authentication, which can be compromised through various means like phishing attacks or brute-force attempts. 

With more than 80% of data breaches stemming from weak, reused, or stolen passwords, taking care of access control and authentication is essential for your organization. 

Verifying identity is critical, leading companies to use single sign-on (SSO) solutions and authentication applications. Employee password managers are another crucial channel that must stay secure. 

We’ve seen what happens when these tools are not used or become compromised. Password management application LastPass was breached twice in 2022, leading to costly cleanups and loss of customer trust. 

Employee training is key when it comes to implementing good password management and strong multi-factor authentication, especially as more threat actors move to bypass MFA through phishing schemes and brute-force attacks. 
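
To illustrate what an authenticator app is doing under the hood, here is a minimal TOTP (RFC 6238) sketch using only the standard library. A real deployment should use a vetted library and constant-time code comparison rather than this sketch:

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library, to show
# how authenticator apps derive one-time codes from a shared secret and
# the current time. For illustration only.
import hmac
import struct
import time


def totp(secret, for_time=None, step=30, digits=6):
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)


# RFC 6238 test vector: this ASCII secret at T=59s yields "287082".
print(totp(b"12345678901234567890", for_time=59))  # → 287082
```

Because the code changes every 30 seconds and is derived from a secret that never crosses the wire, a phished password alone is not enough, though attackers increasingly phish the codes themselves, which is why the training above matters.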

Solutions to help 

  • Duo Security provides multi-factor authentication solutions to enhance security by requiring users to provide multiple forms of identification before granting access. 
  • Ping Identity focuses on identity and access management solutions, including single sign-on (SSO), multi-factor authentication, and identity governance.
  • Dashlane is a password manager that helps users generate, store, and manage complex passwords securely. Its patented zero-knowledge encryption protects all stored passwords and passkeys. 

4. Code and Development

Remote work increases the risk of code leakage, either unintentionally through insecure communication channels or intentionally by malicious insiders.

Data breaches that compromise source code can be hard on a company's reputation. We saw it multiple times over the past few years as hacking groups like Lapsus$ stole and leaked source code from Samsung, Nvidia, Microsoft, and Qualcomm.

A more dangerous scenario is if attackers not only access or steal the source code but also manage to alter it, such as through a software update or other manipulation. 

This type of breach could have dire consequences, potentially allowing attackers to introduce backdoors or malicious code into the company’s products.

Source code repositories, such as GitHub and GitLab, must be secured to prevent unauthorized access and leakage of valuable intellectual property. 

Solutions to help 

  • Snyk focuses on security for developers by providing tools and solutions to find and fix vulnerabilities in open-source libraries and containers during the development process.
  • Mend.io (formerly WhiteSource) provides tools for managing open-source security and license compliance in code repositories. Mend.io integrates with GitHub, GitLab, and other systems to automate vulnerability detection and remediation.

5. Network Security

Network security protects a company’s network infrastructure from misuse and unauthorized access. Traditional network perimeters dissolve in remote work setups, making it challenging to monitor and control network traffic. 

If this critical infrastructure is compromised, it can lead to data breaches, service disruptions, and regulatory violations, negatively impacting an organization’s operations, finances, and reputation.

For instance, Equifax's major 2017 breach exposed the data of 147 million people after attackers exploited a vulnerability in the company's network to gain access to sensitive data.

Target's 2013 incident is another example: threat actors gained access to the company's network through a third-party vendor. The breach reportedly cost $202 million, and the total continued to rise as settlements were reached for class action lawsuits and other legal proceedings. 

Network security measures include incorporating virtual private networks (VPNs) and deploying firewalls and intrusion detection/prevention systems to monitor traffic entering and leaving the network while detecting and blocking suspicious activity.

Solutions to help

  • Rapid7 provides a range of cybersecurity solutions, including cloud security, vulnerability management, and managed detection and response (MDR). 
  • Fortinet offers various cybersecurity products and services, including firewalls, intrusion prevention systems, and other network security solutions.
  • Imperva specializes in cybersecurity solutions related to data and application security, including web application firewalls, database security, and DDoS protection.

6. Endpoint Security

Companies rely on endpoint security to secure individual devices such as desktops, laptops, smartphones, and tablets that connect to a corporate network. As remote work becomes the new norm, employees are tapping into corporate networks and data from diverse locations and devices.  

According to research cited by IBM, up to 90% of successful cyberattacks and about 70% of successful data breaches originate at endpoint devices. 

One of the most infamous ransomware attacks in history, WannaCry spread rapidly across the world, affecting hundreds of thousands of computers in more than 150 countries. The attack exploited a vulnerability in the Windows operating system, targeting endpoints that had not been patched with the necessary security updates. 

Companies must fortify these endpoints, ensuring employees can securely access company resources without risking security breaches. 

Solutions to help 

  • CrowdStrike is known for its endpoint protection platform that utilizes cloud-based technologies to provide advanced threat intelligence and protection against malware and other cybersecurity threats.
  • Cybereason provides endpoint detection and response (EDR) solutions using AI and behavioral analytics to detect and respond to advanced cyber threats.

7. Training and Compliance

Navigating the landscape of remote work requires employee training and compliance measures. Within this dynamic, companies must stay compliant with data privacy regulations like HIPAA, PCI DSS, and GDPR.

Failure to comply can lead to a host of issues including legal penalties, data breaches, and financial losses. 

For example, in 2015, Anthem Inc. faced scrutiny over its failure to comply with HIPAA requirements for protecting patient data, leading to investigations and regulatory penalties. The incident cost the company nearly $260 million, including a record-breaking $16 million settlement for HIPAA violations alone. 

Employee training is especially important due to the large number of breaches stemming from social engineering and phishing attacks. According to IBM, phishing was the costliest initial attack vector in 2022 with an average cost of $4.91 million per breach. 

Remote employees may be more susceptible to phishing attacks and social engineering tactics due to increased reliance on email and online communication. Attackers often exploit this vulnerability to trick employees into revealing sensitive data or installing malware. 

Multiple breaches have stemmed from threat actors targeting employee credentials including incidents at Twitter and security giant Cisco.

Solutions to help 

  • Datagrail focuses on data privacy management, helping organizations comply with data protection regulations by automating data subject requests and privacy compliance processes.
  • OneTrust is a comprehensive privacy management platform that helps organizations comply with various privacy regulations by providing tools for consent management, assessment automation, and more.
  • Vanta offers automated security and compliance solutions for businesses, helping them streamline the process of meeting industry standards such as SOC 2, ISO, and HIPAA.

8. Cloud Object Storage

For companies working remotely, protecting their cloud object storage services is paramount. You want to make sure all company data is safe and secure in platforms like Amazon Web Services (AWS), Microsoft Azure Blob Storage, and Google Cloud Platform (GCP).

Various companies have dealt with data breaches due to misconfigured AWS S3 buckets, including Twilio, Capital One, and Uber.

Microsoft has also faced issues with storage services, notably a breach revealed in 2023, which involved the exposure of 38TB of private data through an Azure storage bucket. 

The problem with misconfigured storage buckets goes deeper than simply compromising customer and company information. Even if not a single terabyte of data is stolen, the company will face reputational damage that is not easily regained. 
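
As an illustration of what "misconfigured" looks like, the sketch below scans bucket ACL grants (in the shape returned by S3's GetBucketAcl call) for the AllUsers and AuthenticatedUsers groups, the classic source of public buckets. A real audit would also check bucket policies and Public Access Block settings:

```python
# Illustrative sketch: scan S3-style ACL grants for public group grantees.
# The grant dictionaries follow the shape of the GetBucketAcl response;
# the sample data is made up for this example.

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(grants):
    """Return the permissions granted to public groups."""
    return [
        g["Permission"]
        for g in grants
        if g["Grantee"].get("Type") == "Group"
        and g["Grantee"].get("URI") in PUBLIC_GROUPS
    ]

acl = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
]
print(public_grants(acl))  # → ['READ']
```

Any non-empty result means the bucket's contents are readable (or worse) by parties outside your organization and should be locked down immediately.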

Solutions to help 

  • Check Point provides cloud security solutions that include tools for securing data stored in cloud object storage services. Check Point CloudGuard offers capabilities for preventing threats, enforcing security policies, and ensuring compliance in cloud environments.
  • Netskope includes tools for securing data stored in cloud object storage services. Netskope Security Cloud offers capabilities for data protection, threat prevention, and compliance in cloud environments.

9. Security Testing and Vulnerability Management

Sometimes, the best way to stop threat actors is to start thinking like them. Security testing and vulnerability management are crucial for companies to protect sensitive data, maintain compliance, and stay ahead of attackers. 

Methods like penetration testing, vulnerability scanning, and bug bounty programs are used to identify and mitigate the vulnerabilities that arise from remote work environments. 

Security researchers routinely uncover new vulnerabilities and potential breaches. In 2021, multiple zero-day vulnerabilities in the Microsoft Exchange Server were discovered and exploited by threat actors. These vulnerabilities were identified through penetration testing and reported to Microsoft by security researchers.

The infamous SolarWinds supply chain attack was discovered through routine security checks, as were hacks affecting Facebook, T-Mobile, and Colonial Pipeline.

Conducting regular security testing helps identify and address security weaknesses in remote work environments. This includes testing remote access solutions, collaboration tools, and cloud services to ensure they are configured securely and are not vulnerable to exploitation.

Solutions to help

  • Bugcrowd is a platform that connects organizations with cybersecurity researchers for crowdsourced security testing and bug bounty programs.
  • HackerOne is a vulnerability coordination and bug bounty platform that connects businesses with cybersecurity researchers and penetration testers, enabling organizations to mitigate cyber risks while staying ahead of threats.

Although companies may implement remote work policies without fully prepping their IT teams for a secure transition, IT and Security teams can lean on software to close the gap. 

Teams need robust tools that strengthen remote operations while fostering seamless collaboration between leadership, employees, and vendors. 

From cloud document security systems to EDR solutions, these types of IT and Security software are crucial for reducing risks. 

For more information on securing company documents while alleviating data theft risks, visit here.

The post 9 Security Tools Your IT Team Needs for Remote Work appeared first on Nira.

]]>
How to Audit Sensitive Documents for Access Risks https://nira.com/audit-sensitive-documents/ Thu, 08 Feb 2024 17:02:37 +0000 https://nira.com/?p=10502 Who has access to your company’s confidential files? This question is becoming more and more relevant as companies turn to cloud collaboration tools like Google Workspace, Microsoft OneDrive, and SharePoint.  Data breaches caused by unauthorized access, what we call access risks, have risen at alarming rates as companies use collaboration tools to share files within… (more) How to Audit Sensitive Documents for Access Risks

The post How to Audit Sensitive Documents for Access Risks appeared first on Nira.

]]>
Who has access to your company’s confidential files? This question is becoming more and more relevant as companies turn to cloud collaboration tools like Google Workspace, Microsoft OneDrive, and SharePoint. 

Data breaches caused by unauthorized access, what we call access risks, have risen at alarming rates as companies use collaboration tools to share files within and outside their organizations. 

In a matter of weeks, hundreds of thousands—even millions—of company documents could be shared with multiple collaborators. 

According to a report by Quadrant Strategies, the average number of collaborators per Google Workspace document is 13.4. In other words, for each document an employee creates, more than 13 accounts can typically access the information within the file. Microsoft 365 isn't much better, coming in at 7.2 collaborators per document.

These shared documents can contain innocuous data like marketing ideas that could be safely viewed by anyone. Or, they might have sensitive data, like personally identifiable information (PII), that can have legal and compliance repercussions if accessed by outside parties or the wrong internal parties. 

No matter what kind of information your company shares, it typically falls on IT and Security teams to perform periodic reviews to ensure sensitive data does not get into the wrong hands.

In this post, we’ll go over how to audit individual and small groups of documents for access risks. 

We’ll highlight the risks you should look out for, how to reduce them, and the limitations when it comes to doing this quickly and efficiently. 

To streamline this process and use it for all files in your cloud collaboration environment, reach out here.

How to audit sensitive files for access risks

When auditing sensitive documents, it’s vital to answer the following questions:

Download: Auditing Cloud-based Documents Checklist

1. Does the document contain sensitive information? 

The first step to securing your company’s documents is to identify and classify files with sensitive data. This information includes personally identifiable information (PII) and personal health information (PHI) as well as employee and customer data. You’ll also want to check for things like financial information, strategic information, and intellectual property (IP).

You can do this in several ways; for example, by looking at the file itself, by asking the file owner to review the file, or by creating naming conventions and incorporating labels. Google Workspace even has options that let you do this under Google’s Data Loss Prevention (DLP) features. 

However, it’s important to be aware that with any DLP tool, you run the chance of having false positives. DLP alerts can be inaccurate and take enormous amounts of time to sift through. 

Once you know if your document contains sensitive information, you can have a better idea of how strict its permissions should be.
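
To make the false-positive point concrete, here is a sketch of why naive pattern matching over-flags: a bare regex matches any 16-digit number, while adding the Luhn checksum that real card numbers satisfy filters out many non-card matches. The sample text and patterns are assumptions for this example:

```python
# Illustrative sketch of DLP false positives: a bare regex flags any
# 16-digit number, while a Luhn checksum check (satisfied by real card
# numbers) filters out many non-card matches.
import re


def luhn_valid(number):
    digits = [int(d) for d in number]
    # Double every second digit from the right; subtract 9 when it exceeds 9.
    total = sum(digits[-1::-2]) + sum(d * 2 - 9 if d * 2 > 9 else d * 2
                                      for d in digits[-2::-2])
    return total % 10 == 0


def likely_card_numbers(text):
    candidates = re.findall(r"\b\d{16}\b", text)
    return [c for c in candidates if luhn_valid(c)]


text = "Order ref 1234567812345678; test card 4111111111111111."
print(re.findall(r"\b\d{16}\b", text))  # both match the naive pattern
print(likely_card_numbers(text))        # only the Luhn-valid number remains
```

Even with checks like this, human review of DLP hits remains necessary; the goal is to shrink the pile, not eliminate it.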

2. Who owns the document?

Next, you want to understand who created or owns the document. For example, is the document owned by an executive or someone from the legal department? Did an employee from Human Resources or Finance create the document? 

It’s crucial to know who owns the file to determine how sensitive it might be. In the case of documents created by executives or departments that handle confidential information, the contents are more likely to be confidential than not. It’s also helpful to know the file’s owner in case you need to transfer ownership to another employee, especially in the case of offboarding procedures. 

3. What accounts has the document been shared with?

Documents can be shared with a large number of accounts, and it’s critical to understand which collaborators have access to the file. 

You’ll want to know which team members have access to the file as well as any external accounts or domains. These can include personal email accounts and third-party vendors which we’ll go over below. 

Personal email account access is a hidden problem that affects most companies that use cloud collaboration tools. More than half (52%) of surveyed employees said they or a coworker had accidentally added their personal email to company documents. 

These accounts have fewer protections than corporate accounts, and access can last for years after an employee has left the company.

You should also be aware of any documents owned by personal email accounts that are shared inbound into your company. To learn more about protecting documents from personal email access, visit here.

Vendors are another area of potential concern, as they may have access to company files long after they stop working with a company. It’s essential to identify all vendors and third parties with access to the document, assess their risk, and remove unnecessary access as soon as possible. 

Teams should also find and remove any Public links and unauthorized access to confidential documents owned by vendors. 
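
One way to triage collaborators during an audit is to classify each account by its email domain. The domain lists below are assumptions for this example; yours would come from your company directory and vendor records:

```python
# Illustrative sketch: classify collaborator accounts on a document as
# internal, vendor, or personal by email domain. The domain lists are
# assumptions for the example.
COMPANY_DOMAINS = {"example.com"}
KNOWN_VENDOR_DOMAINS = {"vendor.io"}
PERSONAL_PROVIDERS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}

def classify(email):
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in COMPANY_DOMAINS:
        return "internal"
    if domain in PERSONAL_PROVIDERS:
        return "personal"
    if domain in KNOWN_VENDOR_DOMAINS:
        return "vendor"
    return "unknown-external"

print(classify("margot@example.com"))  # → internal
print(classify("leo.side@gmail.com"))  # → personal
```

Accounts classified as "personal" or "unknown-external" are the ones to review first, since they carry the weakest protections and the least accountability.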

4. What permissions do collaborators have?

After understanding who has access, you need to know what permissions they have.

For example, can they edit the document or share it with others? Can they change the permissions on the file? Do they have the ability to download the document? 

When it comes to permissions, there are three main options in Google Drive: Editor, Commenter, and Viewer. In Microsoft OneDrive and SharePoint, these options are: Can edit, Can view, and Can review.

These permissions are key because if a collaborator has Editor or Can edit access, they could change the file’s settings to be shared with anyone on the internet. 

They might also share the file with someone else and give them edit permissions, allowing the third party to change the file’s permissions whenever and however they please.  

With sensitive company data, you never want these scenarios to happen. It’s important to understand who the document is shared with and what level of permissions each collaborator has. 

5. What kind of links are on the document?

In cloud collaboration tools, there are several types of links. For example, in Google Drive, one type is Restricted, where the document is only shared with specific collaborators who have been added using their email addresses. This could include a group or target audience, such as the Sales team. Another link type is Company, which grants access to anyone in the entire company with the link. 

The least protected type of link is Public. These links are particularly risky as they can be accessed by anyone on the internet with the link. Adding a Public link to a document is not recommended unless necessary and the file has zero sensitive information. 

Other tools like Microsoft OneDrive also utilize links. Microsoft has “Anyone with the link,” which are Public links, and “People in your organization” links, which are similar to Company links. 

Microsoft also has “Specific people” links which are Restricted links and “People with existing access” links, which can be used by people who already have access to a document or folder.

It’s paramount to understand what types of links your document has and where they are shared. 
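
During an audit, it helps to reduce these link types to a risk tier. The sketch below uses the Google Drive and OneDrive terminology described above; the tiers themselves are assumptions for this example:

```python
# Illustrative sketch: map sharing-link types (Google Drive / OneDrive
# terminology) to a risk tier. The tiers are assumptions for the example.
LINK_RISK = {
    "public": "high",          # "Anyone with the link"
    "company": "medium",       # "People in your organization"
    "restricted": "low",       # "Specific people"
    "existing_access": "low",  # "People with existing access"
}

def riskiest_link(link_types):
    """Return the highest risk tier among a document's links."""
    order = {"low": 0, "medium": 1, "high": 2}
    # Treat any unrecognized link type as worst-case.
    tiers = [LINK_RISK.get(t, "high") for t in link_types]
    return max(tiers, key=order.get) if tiers else "low"

print(riskiest_link(["restricted", "company"]))  # → medium
print(riskiest_link(["public"]))                 # → high
```

Sorting documents by their riskiest link lets you remediate public links first, then companywide links, before touching the long tail of restricted shares.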

6. Is the document in a shared drive or a folder?

Shared drive access can be tricky. Many employees and external collaborators may not realize that once they add a collaborator to a Google shared drive or a Microsoft SharePoint site, that collaborator can access all files and folders within the shared drive. 

It’s important to know if a document is in a shared drive or SharePoint site, and understand who else has access to that drive including any personal email accounts.

Folders are another issue. In Google, while shared drives cannot have Company or Public links, folders still can. That means that files within these folders could be shared with everyone at the company, or even made public to anyone on the internet. 

7. Does the document follow company data retention policies?

Documents become stale when their contents haven’t been changed in months or even years. Several risks are associated with older documents, including accounts having access that is no longer needed. If a document hasn’t been modified in a year or two’s time, there is typically no reason for external parties to have access as collaborators or through Public links. Similarly, stale documents with Company links that are accessible by all employees should be locked down.

One of the biggest reasons to be aware of stale documents is data retention policies. Many companies must have a solid data retention plan in place, especially if they are in charge of sensitive information. 

You’ll want to know if the document is older than a year, three years, or even more, depending on your company’s data retention policies and industry requirements. 
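
Checking staleness against a retention window is straightforward once you have last-modified timestamps. A sketch, with the threshold and record shape assumed for this example:

```python
# Illustrative sketch: flag documents whose last modification falls outside
# a retention window. The one-year threshold and record shape are
# assumptions for the example.
from datetime import datetime, timedelta

def stale_documents(docs, now, max_age_days=365):
    """Return names of docs not modified within max_age_days."""
    cutoff = now - timedelta(days=max_age_days)
    return [d["name"] for d in docs if d["modified"] < cutoff]

now = datetime(2024, 2, 1)
docs = [
    {"name": "q4-plan.docx", "modified": datetime(2023, 11, 5)},
    {"name": "2021-salaries.xlsx", "modified": datetime(2021, 6, 1)},
]
print(stale_documents(docs, now))  # → ['2021-salaries.xlsx']
```

Flagged documents are candidates for locking down external access, removing Company links, or deletion under your retention policy.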

Download: Security Audit Checklist for Google Drive and Microsoft SharePoint

What are the limitations of document audits?

The main issue with auditing documents this way is that administrators can only secure files for a few employees at a time; the process doesn't scale quickly or efficiently.  

To secure all files in a cloud collaboration environment, IT and Security teams need a more robust approach. They should understand their overarching access risk landscape and then work to secure every single file. 

Limitations of this process when using manual efforts, audit logs, or scripting: 

  1. There’s a lack of visibility into who documents are shared with. 
  2. The process doesn’t scale past a few employees. 
  3. Remediation is time-consuming and nearly impossible.  
  4. Methods are tedious and can distract teams from more urgent initiatives. 
  5. Alerts are constant, ineffective, and often full of false positives. 
  6. Teams don’t have extra time to dig around in audit logs or spend time scripting. 

What are the access risks? 

Although images of shadowy hackers are everywhere online, we’ve found that 80% of access-risk incidents are from non-malicious employee mistakes. These accidents can happen to anyone, and only 17% of IT and Security leaders believe their employees understand the importance of securing access to their files in the cloud “really well.”

We’ll give an overview of the access risks and then dive into what you can do to protect your company. 

Access Risk 1: Oversharing and liberal permissions 

Hundreds to thousands of documents are created by employees every day. Strapped for time, employees share these company files with a multitude of collaborators: their managers, other employees, and third-party vendors. At times, they’ll share files with the entire company or even anyone on the internet who has the link. 

The problem with this hinges on the security principle of “least privilege.” When it comes to documents, employees aren’t usually thinking about least privilege or liberal permissions. They want to quickly share their work with the right person and get back to their jobs as painlessly as possible. 

For example, imagine Margot shares a document with her coworker Roland and gives him Editor permissions. Then, Roland shares the document with a contract worker, Leo, and grants him Editor permissions, too. 

Leo then shares the document with his manager outside the company and makes the link public with Editor permissions. 

Margot doesn't realize that her original document is now available and editable by anyone on the internet. A document containing information that should only be seen by certain employees can now be accessed by anyone with the link. 

Access Risk 2: Suspicious downloading, printing, and copying 

Although most access risk incidents are due to employee mistakes and accidental misconfigurations, malicious activity does happen. IT and Security teams should be aware of unusual file sharing, including increases in file downloads or copying of documents. 

To help, administrators and users can utilize settings in the collaboration tools, like Google Drive access permissions for safeguarding sensitive content. This includes restricting actions such as re-sharing, downloading, printing, or copying the file, or modifying its permissions. 

Access Risk 3: Personal email account access

Personal email accounts are risky by nature. They’re designed for personal use, without any IT oversight, and often with weaker passwords and protections. Poor protections lead to big problems, as more than 80% of security breaches are due to weak or stolen passwords. 

Accidental sharing with personal accounts can happen at any company and may cause legal and compliance issues. Not to mention the headaches for IT teams who must spend valuable time finding and cleaning up all personal account access. 

Blocking access when an employee leaves doesn’t solve the problem, either. Companies don’t realize that even after cutting off access to work accounts, access to sensitive information in cloud collaboration platforms may persist through employees’ personal email accounts. 
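One hedged way to surface lingering personal-account access: walk each file's permission entries and flag grants to consumer email domains. The domain list and data shapes below are illustrative:

```python
# Consumer domains to flag; a real audit would use a maintained list.
PERSONAL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def personal_account_grants(files):
    """Return (file_name, email) pairs for grants to personal email accounts."""
    flagged = []
    for f in files:
        for perm in f.get("permissions", []):
            email = perm.get("emailAddress", "")
            domain = email.rpartition("@")[2].lower()
            if domain in PERSONAL_DOMAINS:
                flagged.append((f["name"], email))
    return flagged

files = [
    {"name": "Q3 roadmap", "permissions": [
        {"emailAddress": "ana@example.com", "role": "writer"},
        {"emailAddress": "ana.home@gmail.com", "role": "writer"},
    ]},
    {"name": "Team offsite", "permissions": [
        {"emailAddress": "li@example.com", "role": "reader"},
    ]},
]

print(personal_account_grants(files))  # → [('Q3 roadmap', 'ana.home@gmail.com')]
```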

Access Risk 4: Inbound documents and vendor risks

Inbound documents are files that are owned externally and shared with company accounts. These documents can contain confidential company information, yet they sit outside the protection of the company receiving them. 

The biggest problem with inbound documents is that IT and Security teams do not have complete visibility and control over them. 

However, these documents could still have Public and Company links, or be shared with personal email accounts or other vendors. 

Among the IT and Security leaders we surveyed, vendors being given access to company information was ranked as the single greatest information security threat to their teams. 

Lack of visibility and risk of data leakage are common concerns. Often, people in a vendor organization have more access to company data than is required for their jobs. 

This access can remain for months and years, even after work on projects is complete, creating risks of oversharing or data theft. 
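A hedged sketch of catching lingering vendor access: compare each external grant's last-used date against an idle cutoff. The field names, dates, and 90-day threshold below are illustrative choices:

```python
from datetime import date, timedelta

def stale_vendor_grants(grants, today, max_idle_days=90):
    """Return external grants that have sat unused longer than max_idle_days."""
    cutoff = today - timedelta(days=max_idle_days)
    return [g for g in grants if g["external"] and g["last_used"] < cutoff]

grants = [
    {"email": "dev@vendor.io",  "external": True,  "last_used": date(2023, 6, 1)},
    {"email": "qa@vendor.io",   "external": True,  "last_used": date(2024, 1, 10)},
    {"email": "me@example.com", "external": False, "last_used": date(2023, 1, 1)},
]

stale = stale_vendor_grants(grants, today=date(2024, 2, 1))
print([g["email"] for g in stale])  # → ['dev@vendor.io']
```

Internal accounts are skipped entirely here; the point is to catch external grants that outlived the project they were created for.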

Access Risk 5: Companywide and public access through links

Links are one of the most popular ways of sharing because they are incredibly fast to create and send. In a few clicks, the employee can get a document to the people who need it.

However, this type of sharing can lead to oversharing risks as links with sensitive data may be made visible to anyone on the internet or anyone in the company. 

Sometimes, an entire external company can gain access to a file through a link. Or a group or target audience (like the Sales team) may gain access to information that they should not see.  

For example, everyone in the company could access salary information or someone’s private performance improvement plan. Even executive-owned files about company strategy can accidentally be shared with everyone in the organization.

Confidential documents should never have Public links; if one exists, its permissions should be restricted immediately. 
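A hedged sketch of enforcing that rule over a permissions inventory (the entries loosely mimic Drive's `type: "anyone"` link records; the files themselves are illustrative):

```python
def downgrade_public_links(files):
    """Strip 'anyone' links from confidential files; cap the rest at viewer."""
    actions = []
    for f in files:
        for perm in f["permissions"]:
            if perm["type"] != "anyone":
                continue  # only public-link entries are in scope here
            if f["confidential"]:
                actions.append(("remove_link", f["name"]))
            elif perm["role"] != "reader":
                actions.append(("downgrade_to_reader", f["name"]))
    return actions

files = [
    {"name": "Salary bands", "confidential": True,
     "permissions": [{"type": "anyone", "role": "writer"}]},
    {"name": "Press kit", "confidential": False,
     "permissions": [{"type": "anyone", "role": "writer"}]},
    {"name": "Public roadmap", "confidential": False,
     "permissions": [{"type": "anyone", "role": "reader"}]},
]

print(downgrade_public_links(files))
# → [('remove_link', 'Salary bands'), ('downgrade_to_reader', 'Press kit')]
```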

Access Risk 6: Shared drives and folder access

IT and Security teams face challenges in knowing the exact contents of shared drives, including the files and folders within them. 

An account can be granted access to thousands of documents inside a shared drive within minutes, and it can be difficult to understand which accounts have access to these sensitive files. 

Often, shared drives contain data that should not be accessed by just anyone. They can be used to organize sensitive information like customer data, contracts, or Human Resources documents. 

Also, in the case of Google Drive, folders may have Public or Company links added to them. This means that any files inside the shared folders could be seen by anyone in the company, or by anyone on the internet who has the link. 

Access Risk 7: Stale document access

Stale documents come with risks, including unintended access to confidential data. For example, according to a 2021 data risk report, 70% of all sensitive data in financial services firms is stale.

Often, access risks from stale documents persist for years and can result in legal damages, security breaches, and compliance issues. 

Several US federal laws and regulations deal with data retention, including HIPAA, the Fair Labor Standards Act, and the Employee Retirement Income Security Act (ERISA). 

To stay compliant, companies must be aware of how their data is being handled and deleted after certain periods. 
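A hedged sketch of that retention check: flag documents whose last modification falls outside the window for their category. The windows below are illustrative, not legal guidance:

```python
from datetime import date, timedelta

# Illustrative retention windows, in days, per document category.
RETENTION_DAYS = {"hr": 3 * 365, "financial": 7 * 365, "general": 2 * 365}

def overdue_for_archival(docs, today):
    """Return names of documents older than their category's retention window."""
    overdue = []
    for doc in docs:
        limit = timedelta(days=RETENTION_DAYS[doc["category"]])
        if today - doc["modified"] > limit:
            overdue.append(doc["name"])
    return overdue

docs = [
    {"name": "2015 payroll", "category": "hr",        "modified": date(2015, 12, 31)},
    {"name": "2022 ledger",  "category": "financial", "modified": date(2022, 6, 30)},
]

print(overdue_for_archival(docs, today=date(2024, 1, 1)))  # → ['2015 payroll']
```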

How do you reduce access risks?

  • Identify and classify documents with sensitive data

Find a way to classify confidential information in your company’s cloud collaboration environment. This could be through employing Google or Microsoft labels or having employees use naming conventions for files with sensitive data. 

  • Set the right access permissions

Apply the principle of least privilege to access permissions. For example, do not grant Editor permissions on a file unless truly necessary; use Viewer or Commenter permissions instead.

  • Investigate any abnormal behavior

You can look through audit logs, or if you’re using Google Workspace, the Security Investigation Tool, to investigate abnormal behavior including unusual copying, printing, or downloading of files. For Microsoft, the SharePoint admin center or Purview can help.

  • Remove all unnecessary personal email accounts from company files

Find and restrict all documents that have been created or shared with personal email accounts. Review any personal email accounts that have been added to shared drives or folders. 

  • Review all external access and permissions

Be aware of all files and folders with external access. This could include outside domains, third-party vendors, and personal email accounts as we mentioned above. Make sure permissions follow the least privilege principles.

  • Review Company links

Make sure files and folders with sensitive information do not have Company links unless absolutely necessary. It’s also advisable to downgrade their permissions to Commenter or Viewer, rather than Editor. 

  • Shut down Public link exposure

Public links on company documents and folders should never be used unless the information can be safely seen by anyone on the internet. If the document or folder has to have a Public link, do not give it Editor permissions. Viewer permissions are the safest option. 

  • Check shared drive and folder access

Know who exactly has access to a company shared drive, including external accounts and collaborators using personal emails. 

Do not put highly sensitive information into shared drives that can be accessed by external users unless necessary. Do not use Public links on shared folders, and use Company-wide links on folders sparingly. 

  • Handle stale files according to your company’s data retention policies

Use your company’s data retention policies to guide how long documents stay in your cloud collaboration environment, or how long external access is allowed. This can be done by following industry regulations and government standards. 

When a document becomes stale, archive it based on your company’s rules. 
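Several of the checks above lend themselves to scripting. As one illustrative example, classifying files by naming convention (the first item on the list) might look like the following; the `[CONF]` prefix and keywords are assumed conventions, not a standard:

```python
import re

# Illustrative convention: teams prefix sensitive files with "[CONF]" or
# include a keyword such as "salary" or "contract" in the file name.
SENSITIVE_PATTERN = re.compile(r"\[CONF\]|salary|contract", re.IGNORECASE)

def classify(filenames):
    """Split filenames into (sensitive, ordinary) by naming convention."""
    sensitive = [n for n in filenames if SENSITIVE_PATTERN.search(n)]
    ordinary = [n for n in filenames if n not in sensitive]
    return sensitive, ordinary

names = ["[CONF] M&A notes.docx", "Team lunch poll.xlsx", "2024 Salary bands.xlsx"]
sensitive, ordinary = classify(names)
print(sensitive)  # → ['[CONF] M&A notes.docx', '2024 Salary bands.xlsx']
```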

Auditing sensitive documents by hand takes significant time and effort. For the best results, the process should be automated so IT and Security teams can focus on their most urgent initiatives. 

A Data Access Governance system streamlines this process and expands it to every single employee while covering all documents in the company’s cloud collaboration environment. 

To learn how to simplify and automate Data Access Governance for your company, visit here.

The post How to Audit Sensitive Documents for Access Risks appeared first on Nira.

The 10 Types of Sensitive Data Companies Must Protect https://nira.com/sensitive-data/ Mon, 22 Jan 2024
Keeping sensitive data secure from theft and vulnerabilities can be incredibly challenging. With limited budget and bandwidth, IT and Security teams are expected to protect sensitive information while introducing strong security solutions.

To protect sensitive data from getting into the wrong hands, companies first must understand what counts as sensitive data.

This guide explores the 10 types of sensitive data your company must protect and what you can do to keep them as secure as possible.

What Is Sensitive Data Anyway?

Sensitive data, also known as private data or information, is any information that must be protected and kept inaccessible to the public and other parties unless specifically granted permission.

The legal definition describes it as information that must be protected against unauthorized disclosure, including personally identifiable information (PII), protected health information (PHI), and more.

It’s crucial to impose tougher restrictions when dealing with this type of data, especially when it pertains to individual privacy and property rights for ethical or legal reasons. For instance, a data breach could leave a company exposed to grave risks like reputational damage, litigation issues, or privacy breaches of their customers and/or workers.

Similarly, a data breach in a government commission could give foreign powers access to national secrets.

The 10 Types of Sensitive Data Your Company Must Protect

Companies must be aware of the types of data they should safeguard to keep their customers, employees, and themselves secure.

Sensitive Data Example 1: Personally Identifiable Information

When working with people, whether it be your customers, vendors, or other employees, personally identifiable information (PII) can suddenly feel like it lives everywhere – in emails, databases, documents, and more. This is information used to identify an individual, including their names, contact details, or social security numbers. 

A large number of recent breaches, such as T-Mobile’s (compromised twice in 2023), have leaked this type of sensitive data, including addresses and government IDs. 

Loss of PII not only leads to a loss of trust in the company, it can also cause costly cleanups and a lack of business opportunities. 

Take the case of Atlassian in 2023, when the hacking group SiegedSec leaked the PII of nearly 13,200 Atlassian employees. Or consider genetics testing company 23andMe’s October 2023 breach, which exposed more than 20 million pieces of personal user data. 

When it comes to corporate security incidents, PII is often exposed first, and companies end up paying the price.

Sensitive Data Example 2: Intellectual Property and Trade Secrets

Nearly all companies store proprietary information, whether it’s in their network, within a document management system, or entrusted to a third party. For a hardware developer, this can be schematics or manufacturing process details, while for a software developer, this could be code or architecture details.

Trade secrets and intellectual property can also include product specifications, competitive research, ad creative, designs, or even the in-development footage of a forthcoming video game, like Rockstar Games’s infamous Grand Theft Auto 6 leak in 2022. 

When company IP is stolen, it can compromise other organizations. Take the 2023 incident involving Micro Star International (MSI), in which MSI’s source code and Intel Boot Guard keys were exposed, posing a threat to vendors including Intel and Lenovo.

Companies must ensure that their valuable IP is protected, as well as the property of their partners and clients, to safeguard trust and their organization’s reputation.

Sensitive Data Example 3: Employee Data and Customer Information

Employee data and customer information are similar. They include the PII, banking information, usernames, and passwords of employees and customers. They might also include other customer details, like company strategies, new product launches, advertising campaigns, and information about a customer’s own customers.

Different departments deal with this type of data including Human Resources, Customer Success, and Sales. For instance, the Sales team can access customer contracts or notes from client calls that may contain confidential data. 

Losing customer or employee data can have detrimental effects, from erosion of customer or employee trust to even legal consequences. For example, Northwell Health and Perry Johnson & Associates faced a class action lawsuit after a major breach impacted customer PII and PHI in 2023. 

Organizations must take measures to safeguard their customer and employee data to maintain trust and credibility.

Sensitive Data Example 4: Protected Health Information (PHI)

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) defines PHI as any information about health status, provision of healthcare, or payment for healthcare that is created or collected by a covered entity (or a third-party business associate) and that can be linked to a specific individual.

As the rate of compromised PHI continues to rise, you never know how a malicious actor may use such confidential information. 

According to IBM, healthcare breaches have been the most costly out of all industries for 13 years, and there is no indication that this trend is shifting.

Approximately 89 million individuals in the US experienced a breach of their sensitive health information in 2023, marking a significant increase from the 43.5 million reported during the same period in 2022. 

The number of legal actions arising from healthcare breaches is also increasing, with various entities including providers, payers, and vendors reporting incidents.

For example, HCA Healthcare’s 2023 security incident affected more than 11 million people and led to a class action lawsuit against the company. Community Health Systems, a major Tennessee provider network, likewise faced litigation following a breach that exposed the data of approximately one million patients.

Sensitive Data Example 5: Financial and Operational Information

Financial information is data related to money transactions, credit card information, bank account details, and other sensitive financial statements. This data can relate to the company, its customers, and other third parties.

This type of sensitive data also includes business operations and inventory figures. For example, businesses wouldn’t want the details of their sales figures disclosed publicly or accessed by their rivals.

Departments like Finance and Accounting have access to this information, and it’s vital to keep it protected and stay compliant with various laws and regulations such as the Fair Credit Reporting Act (FCRA) and the Sarbanes-Oxley Act (SOX). 

This type of data is extremely valuable and can have huge consequences for companies and consumers if leaked. For instance, Equifax’s 2017 breach affected 40% of the US population and led to a $700 million global settlement with the FTC, CFPB, and 50 US states and territories.

Protecting financial and operational data is crucial to avoid data breaches, massive fines, and company infamy.

Sensitive Data Example 6: Legal and Compliance Data

Legal and compliance data is any information necessary to comply with industry-specific regulations and data protection laws as well as legal documents and information. Examples of this could be a SOC 2 report, data that is protected under GDPR requirements, or confidential legal agreements or records related to litigation proceedings. 

For example, in 2023, sensitive data linked to law firm Proskauer Rose remained vulnerable for more than six months, as an unsecured Microsoft Azure cloud server exposed the information. 

The sensitive data consisted of 184,000 files, encompassing private and privileged legal documents, non-disclosure agreements, contracts, and files related to high-profile acquisitions. 

Keeping sensitive information secure is key to staying in good standing with various frameworks and legal statutes including the Gramm-Leach-Bliley Act and the Federal Trade Commission Act.

Sensitive Data Example 7: Backup and Recovery Data

Securing data backups is crucial for business continuity and disaster recovery: it ensures the availability and integrity of critical information in the event of a cyberattack, hardware failure, or accidental deletion. 

Ransomware attacks are nothing new, but organizations must be prepared as threat actors persist in their efforts. Consider the Clop ransomware gang’s compromise of MOVEit Transfer software, which led to the largest hack of 2023.

With ransomware risks rising, having secure and isolated backup data is a key defense mechanism. If systems are compromised, organizations can restore their data from clean backups rather than succumbing to ransom demands.
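One hedged way to keep restores trustworthy is a checksum manifest: record each backup's SHA-256 digest at write time and verify it before restoring. The file names and contents below are illustrative:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest of a backup's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# At backup time: store a manifest of digests alongside the backups.
backups = {"db-2024-01-01.dump": b"...customer table rows..."}
manifest = {name: digest(data) for name, data in backups.items()}

# Before restoring: verify nothing was altered (e.g., by ransomware).
def verify(name, data, manifest):
    return manifest.get(name) == digest(data)

print(verify("db-2024-01-01.dump", backups["db-2024-01-01.dump"], manifest))  # → True
print(verify("db-2024-01-01.dump", b"tampered", manifest))                    # → False
```

In practice the manifest itself must live on isolated, immutable storage; otherwise an attacker who can rewrite the backups can rewrite the digests, too.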

Sensitive Data Example 8: Industry-Specific Data

Depending on your industry, you must protect specific types of sensitive data.

For instance, those working in the healthcare sector should take proper measures to protect digitally stored medical records and medical research data, while those in retail should focus on safeguarding the payment information of their customers.

Whatever your industry, you’re likely to have sensitive data that goes along with it. For example, if your company is in the Research and Development sector, you’ll need to protect experiment results and prototypes. If your industry is the Education sector, protecting student records and financial aid details would be key.

Losing this type of information hurts your reputation within your specific industry and can lead to massive fines from the regulatory bodies that oversee it. For example, if you’re in the Financial Services sector, your company could be penalized by the Federal Reserve Board (FRB) or the Securities and Exchange Commission (SEC).

Keeping industry data safe is paramount to protect your company’s customers, partners, and organization’s good name.

Sensitive Data Example 9: Confidential Business Plans and Strategies

IT and Security teams should be aware of information about future business plans, product roadmaps, and strategic initiatives that can impact the company’s success. Although this type of data can slip through the cracks, it should be protected just as much as financial or legal information. 

Take the case of Sony-owned Insomniac Games in December 2023, when the studio’s video game roadmap, budgets, and corporate strategy were among the 1.3 million files leaked by threat actors in a massive breach. Details about almost everything the studio was developing became publicly accessible, leading to massive damage control for Sony and its partners.

Sensitive Data Example 10: Mergers and Acquisitions (M&A) Data

When a company is buying another company, two companies are merging, or a company is selling off a part of its business, lots of confidential information gets exchanged during the process. 

Protecting this sensitive data is typically contractually required. It also facilitates a smooth negotiation process, increasing the likelihood of a successful deal. And if an acquisition does not go through, companies are typically still required to protect the M&A data, and in many cases, delete it.

It’s vital to understand that when your company acquires another company, you must now protect any sensitive information from that company and be aware of data protection vulnerabilities it may have. 

For example, the Marriott data breach in 2018 involved the exposure of personal information, including passport numbers, of millions of guests. Marriott had acquired Starwood Hotels in 2016, and the breach involved data from the Starwood guest reservation system.

To mitigate potential risks, companies need to implement robust security measures, access controls, and regular audits, especially when acquiring or merging with new companies.

How to Protect Your Company’s Sensitive Data from Exposure

What can you do to identify and protect your company’s sensitive data?

Here are three steps to help IT and Security teams protect sensitive data and prevent it from being exposed.

Step 1: Identify and Classify All Sensitive Data

You must take the necessary measures to identify and group all company data based on its sensitivity. Think of this step as classifying sensitive data.

This may look like an easy task, but it often isn’t.

System complexity changes over time, especially because new data appears almost every day. As a result, finding sensitive data becomes a constant, highly dynamic process. Know what types of data your company has and how sensitive each is based on your classification methods.
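A hedged sketch of that discovery step: scan text for patterns that resemble US Social Security or 16-digit card numbers. Real classifiers need many more rules and validation (Luhn checks, context), so the regexes below are only illustrative:

```python
import re

# Illustrative patterns; a production scanner would validate matches further.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def find_pii(text):
    """Return the sorted PII categories whose patterns appear in the text."""
    return sorted(k for k, pat in PII_PATTERNS.items() if pat.search(text))

print(find_pii("Applicant SSN: 123-45-6789"))         # → ['ssn']
print(find_pii("Card on file: 4242 4242 4242 4242"))  # → ['card']
print(find_pii("Meeting notes, nothing sensitive"))   # → []
```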

Step 2: Respond To and Assess Data Risks ASAP

Data theft and data leakage are never-ending issues. Because they affect every part of an organization or government unit, they cannot be categorized solely as an IT problem.

Risk assessment is an incredibly crucial aspect of protecting sensitive data. You must first identify all the users, devices, networks, and applications, and then categorize them based on how a data leak would impact the organization.

The last step is to assess these potential attack vectors and decide whether you want to accept, transfer, mitigate, or refuse the risk.

Factors to weigh include the liability cost of the sensitive data, where you plan to store it, how it moves from one source or domain to another, its volume, and so on.
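That decision step can be sketched as a simple scoring matrix: rate likelihood and impact, multiply, and map the score to a response. The thresholds and assets below are illustrative policy choices, not a standard:

```python
def risk_action(likelihood, impact):
    """Map a 1-5 likelihood x 1-5 impact score to an illustrative response."""
    score = likelihood * impact
    if score >= 20:
        return "refuse"    # avoid the activity or data flow entirely
    if score >= 12:
        return "mitigate"  # reduce the risk directly with controls
    if score >= 6:
        return "transfer"  # e.g., insure it or shift it to a hardened vendor
    return "accept"

assets = {
    "public marketing site": (2, 2),  # low stakes either way
    "customer PII database": (3, 5),  # a leak here is severe
    "payment processing":    (4, 5),  # likely target, severe impact
}
for name, (likelihood, impact) in assets.items():
    print(f"{name}: {risk_action(likelihood, impact)}")
# → public marketing site: accept
#   customer PII database: mitigate
#   payment processing: refuse
```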

Step 3: Monitor and Implement Strong Security Measures

Security measures should be created with care so IT and Security teams can effectively safeguard sensitive company data against theft.

Continuously monitor these measures, ensuring no vulnerabilities or security gaps appear in the process. Apply the protections to the sensitive data you identified earlier, as well as to newer types of sensitive data as they emerge.

Here are a few security measures you can use to protect company information:

Educate Employees

  1. Educate employees on the importance of following security protocols and best practices.
  2. Remind employees to avoid leaving sensitive papers when they’re away from their workstations, use laptop privacy screens to obscure their work, and keep their computers locked when not in use. 
  3. Make it mandatory for employees to use strong passwords and enable secure multi-factor authentication (MFA) methods through authentication apps or passkeys.
  4. Create a “culture of security” by implementing a regular schedule of employee training. Keep updating employees as you find out about new risks and vulnerabilities.
  5. Provide employees with visibility and control over who their cloud-based sensitive documents are shared with.

Secure Company Networks and Systems

  6. Encrypt all sensitive information sent to third parties over public networks. You can also encrypt email transmissions within the business if they contain PII.
  7. Identify all connections to the networks where sensitive information is stored, and assess the vulnerability of every connection to commonly known or reasonably foreseeable attacks.
  8. Run up-to-date antivirus and anti-spyware programs on individual computers and network servers regularly.
  9. Use a firewall to protect company computers from cyberattacks when they are connected to the internet.
  10. Use a Cloud Document Security system to secure company documents while reducing data theft risks. This proactively protects company files from unauthorized access when using cloud collaboration tools like Google Workspace, Microsoft OneDrive, and SharePoint.
  11. If wireless devices like inventory scanners are used to connect to the computer network or to transmit sensitive information, consider limiting the number of users who can use a wireless connection to access the computer network. You can also limit the number of wireless devices that can connect to the network.
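The multi-factor authentication mentioned in the employee-education list usually means time-based one-time passwords. As an illustration of how authenticator-app codes are derived, here is the RFC 6238 TOTP algorithm in standard-library Python; a real deployment would use a vetted library, and the secret shown is the test key published in the RFC:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: HMAC the time-step counter, then dynamically truncate."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the window
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published SHA-1 test vector: this key at T=59s yields 94287082.
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
```

Because the code depends on the current 30-second window, a stolen password alone is not enough to log in, which is exactly the property item 3 is after.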

Safeguarding sensitive data is paramount to stop unauthorized access and potential misuse. 

Through effective security measures, companies keep sensitive information secure and mitigate the risks associated with breaches. 

This proactive approach is essential for maintaining the trust of customers, partners, and employees while ensuring compliance with data protection regulations. For more information on keeping confidential data safe with Nira, visit here.

 

The post The 10 Types of Sensitive Data Companies Must Protect appeared first on Nira.

Offboarding Checklist for IT Teams https://nira.com/it-offboarding-checklist/ Mon, 15 Jan 2024
Employees have left companies in record numbers over the last few years due to layoffs, job changes, and new opportunities. In 2023, the tech sector alone saw more than 1,500 rounds of layoffs, affecting an average of 667 people per day. 

Although IT and Security teams do not have extra time or resources, they often pick up the slack during offboarding procedures. These teams must ensure company intellectual property (IP), personally identifiable information (PII), and personal health information (PHI) are secure and protected.

Seventy-six percent of executives agreed employee offboarding represents a significant security threat, according to a Nira-Gartner Peer Insights report.

Offboarding creates risks, not only from malicious behavior but also from common mistakes. Concerns include company files owned by personal email accounts, unidentified public links accessible by anyone on the internet, and unsecured shared drives—to name a few. 

This post provides a checklist for a smoother IT offboarding process, to reduce risk and keep your company’s confidential data secure.

 

An effective IT offboarding checklist can be separated into three main categories: 

  • Communication and Knowledge Transfer 
  • IT and Systems Management
  • Organizational Updates and Feedback

Communication and Knowledge Transfer

1. Communicate with other departments

The first step of any offboarding process is to understand timely information about the employee’s departure. This includes their last working day, their reason for departure, and any specific offboarding requirements set by your company. 

Coordinate with other relevant departments including Human Resources and Compliance to ensure compliance measures and company policies are properly implemented as the employee leaves.

2. Transfer ownership of outgoing employee documents 

The departing employee may still own company documents in cloud collaboration tools like Google Workspace and Microsoft OneDrive. These files could have sensitive information that should not be accessed by anyone outside the company, and where possible, document ownership should be transferred to the relevant team members. 

This preserves critical information and ensures a smooth transition for ongoing projects. To learn how to securely transfer ownership of documents in Google Drive, visit here.

3. Revoke access from personal email accounts

Personal email account access is a hidden problem that most companies face. More than half (52%) of employees surveyed said they or a coworker had accidentally added their personal email to company files. What’s worse, after employees leave, their personal email accounts still have access to these sensitive documents. 

It’s vital to understand how many personal email accounts have access to your company data and to revoke their permissions immediately. To learn how to do this quickly and efficiently, visit here.

4. Close or restrict links on the employee’s documents

When an employee leaves, shared links often remain on their cloud-based documents. These documents might be accessible to anyone in the company with the link, or even anyone on the internet with the link. Documents with Public and Company links are seldom stripped of access and are quickly forgotten. 

To prevent unauthorized access, you should find and restrict all shared links on the departing employee’s documents. For more information, visit here.

5. Remove the employee from shared drives and folders 

Many departing employees still have access to shared drives and folders in cloud collaboration tools like Google Workspace. Shared drive security is a concern because collaborators can accidentally add unauthorized people or documents to the shared drive, inadvertently sharing all documents and folders within it. 

It’s important to understand who has access to which shared drives and to remove all access permissions from drives and folders after an employee leaves. 

6. Review third-party vendor access for employee files

Third parties often have access to hundreds or thousands of company documents, many of which are confidential. 

Typically, when an employee leaves, third-party access to their documents remains. That means third parties they were collaborating with continue to have access to the ex-employee’s files, often in perpetuity. In fact, 60% of employees said they never or only sometimes remove vendor access once they are no longer working with third parties. 

Make sure that all vendor and third-party access has been removed from the employee’s files and that vendors do not have Editor permissions unless necessary. 

7. Recover and reassign employee licenses

It’s vital to be aware of any subscriptions or software licenses assigned to the departing employee. This helps prevent unnecessary costs and ensures compliance with licensing agreements and offboarding procedures.

You should transfer any vendor licenses associated with the outgoing employee to the appropriate team members, or close out the license and optionally have the former employee’s data deleted. 

IT and Systems Management

8. Collect equipment and assets

Once you know the employee’s last working day, you can set the appropriate time to gather all company devices issued to them, including laptops, cell phones, and any other hardware. Once collected, ensure all relevant data on these devices is transferred as needed.

9. Revoke systems access

According to Intermedia, 89% of former employees still have access to at least one application from their former job. Ensure you disable or deactivate the departing employee’s access to various systems, networks, and applications to reduce risk and unneeded expenses. 

10. Update credit card information

Your finance or HR team will probably be in charge of recovering the employee’s company credit cards. However, you should be aware of any IT subscriptions or services the employee may have been responsible for.

If applicable, update credit card information connected to the outgoing employee so services are not interrupted. 

11. Reset computers and update systems

Once any relevant data has been downloaded or transferred, reset the departing employee’s computer to factory settings or perform a secure wipe. Ensure all systems are updated to reflect the changes in user access.

12. Change passwords

Change all passwords associated with the outgoing employee’s accounts, including shared accounts and access to databases. You should ensure that new passwords are strong and enable secure multi-factor authentication (MFA) methods as applicable.

For shared accounts, make sure relevant parties are aware of the changes. 

13. Remove VPN and remote access methods

Revoke any VPN or remote access privileges that the departing employee had. It’s good practice to review remote access methods periodically to make sure employees do not still have ways to get into the network. 

14. Shut down and forward email address

Disable the departing employee’s email account and set up forwarding to redirect emails to the appropriate team members. The former employee’s manager is usually the best person. This allows the manager to quickly follow up on any loose ends. 

If the situation is sensitive for any reason, you can forward the email to someone in HR. After a few months, you should be able to turn off email forwarding entirely.
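For Gmail, forwarding can also be configured programmatically. The sketch below is a hypothetical example using the Gmail API via google-api-python-client; it assumes a service object built with domain-wide delegation so an admin can act as the departing user, and the addresses are placeholders. (Forwarding addresses inside the same Workspace domain are typically verified automatically; external addresses require confirmation.)

```python
def forwarding_body(forward_to: str) -> dict:
    """Auto-forwarding settings: forward new mail and keep a copy in the inbox."""
    return {
        "enabled": True,
        "emailAddress": forward_to,
        "disposition": "leaveInInbox",
    }

def enable_forwarding(gmail_service, forward_to: str) -> None:
    """Register the forwarding address, then turn on auto-forwarding.

    `gmail_service` is assumed to be a Gmail API client whose credentials
    are delegated to the departing user (gmail.settings.sharing scope).
    """
    gmail_service.users().settings().forwardingAddresses().create(
        userId="me", body={"forwardingAddress": forward_to}
    ).execute()
    gmail_service.users().settings().updateAutoForwarding(
        userId="me", body=forwarding_body(forward_to)
    ).execute()
```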

15. Audit abnormal downloads, copying, and sharing

Two-thirds (67%) of executives believe employees exiting the organization are more likely to cause security breaches by accident, rather than intentionally. However, suspicious activity does happen, and more than 45% of employees admit to taking documents from former employers. 

Whether accidental or deliberate, it’s vital to know what company material was downloaded, copied, or shared. Review and audit any unusual or abnormal activity related to downloads, copying, or sharing of data by the departing employee.

16. Archive stale documents for data retention purposes

Access to stale documents can create several issues for an organization. These documents haven’t been modified in years, yet they remain accessible to current employees and to external accounts, such as personal email addresses and former vendors.

This can cause issues from a compliance and data retention perspective. You need full visibility into documents that should be archived based on data retention policies. For tooling that helps, visit here.

17. Close employee SaaS accounts

Close any Software as a Service (SaaS) accounts associated with the departing employee. Common accounts you’ll want to check include project management tools like Trello or Asana; document storage solutions like Dropbox or Box; communication tools including Slack or Zoom; and CRMs like Salesforce. 

It’s helpful to keep a record of accounts that employees have access to. This makes it much easier to shut everything down when an employee leaves.
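A record like this can be as simple as a mapping from each SaaS app to its known members. As a plain-Python sketch (the apps and addresses below are made up):

```python
def outstanding_accounts(inventory: dict, employee: str) -> list:
    """Return the SaaS apps where a departing employee still has an account."""
    return sorted(app for app, members in inventory.items() if employee in members)

# Example inventory; in practice this might come from an identity
# provider export or an internal asset register.
saas_inventory = {
    "Slack": ["alice@example.com", "bob@example.com"],
    "Trello": ["bob@example.com"],
    "Zoom": ["alice@example.com", "carol@example.com"],
}
```

Running `outstanding_accounts(saas_inventory, "alice@example.com")` yields the list of apps still to be closed for that employee.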

18. Reset physical access controls

If applicable, reset physical access controls such as keycards or entry codes to prevent unauthorized entry to company buildings. You can incorporate periodic security audits of the physical access control infrastructure if needed. 

Organizational Updates and Feedback

19. Update your organizational charts 

Update internal organizational charts and directories to reflect the employee’s departure and any changes in responsibilities.

Your company’s core documentation should be updated so the outgoing employee is no longer listed as an employee.

20. Create a feedback loop

Coordinate with relevant departments (for example, HR and Compliance) and establish a feedback loop to continuously refine offboarding procedures. This can identify areas for improvement and ensure the IT offboarding process remains effective. 

Multiple security risks exist when offboarding employees. Insider threats pose a problem, but the majority of issues stem from common mistakes and accidental misconfigurations. Using an IT offboarding checklist helps ensure your bases are covered and nothing gets missed. 

IT and Security teams may use powerful tooling to help with the offboarding process, including bulk transfer of file ownership and securing company shared drives. 

We recommend periodic security audits to ensure all company documents are successfully transferred and secured. You can set up your free audit here.

The post Offboarding Checklist for IT Teams appeared first on Nira.

]]>
Google Security Investigation Tool: Remove Users, Change Ownership, and More https://nira.com/google-investigation-tool-actions/ Wed, 03 Jan 2024 22:16:43 +0000 https://nira.com/?p=10378 For investigations in Google Workspace, Google offers its powerful Security Investigation Tool. If you’re a super administrator on the Enterprise Plus, Education Standard, Education Plus, or Enterprise Essentials Plus plans, you can access this potent feature.  The investigation tool allows administrators to conduct searches in different Google Workspace data sources including Gmail, Chrome, and Google… (more) Google Security Investigation Tool: Remove Users, Change Ownership, and More

The post Google Security Investigation Tool: Remove Users, Change Ownership, and More appeared first on Nira.

]]>
For investigations in Google Workspace, Google offers its powerful Security Investigation Tool. If you’re a super administrator on the Enterprise Plus, Education Standard, Education Plus, or Enterprise Essentials Plus plans, you can access this potent feature. 

The investigation tool allows administrators to conduct searches in different Google Workspace data sources including Gmail, Chrome, and Google Groups. In this post, we’ll focus on using the investigation tool to search within Google Drive and then take action based on the results.  

How to run a search for Google Drive log events

  1. In the Admin console, go to “Menu” > “Security” > “Security center” > “Investigation tool.”
  2. From the Data source menu, click “Drive log events.”
  3. Click “Add Condition.”
  4. Click “Attribute,” and select an option. For a complete list of attributes, visit here.
  5. Click “Contains” and select an operator.
  6. Enter a value, or select a value from the drop-down list.
  7. You can add more search conditions by repeating steps 3–6.
  8. Click “Search.”
  9. You can also group your results by different attributes. Click “Group results,” then choose your attribute, such as “Date” or “Actor,” from the drop-down menu.
  10. To save your investigation, click “Save” > enter a title and description > click “Save.”
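The same Drive log events can also be pulled programmatically via the Admin SDK Reports API. The sketch below is a hypothetical example using google-api-python-client; the service object and email address are assumptions, and the call requires an admin credential with the reports scope.

```python
def drive_audit_params(user_email: str, event_name: str = "download") -> dict:
    """Query parameters for Drive log events scoped to one user and event type."""
    return {
        "userKey": user_email,
        "applicationName": "drive",
        "eventName": event_name,
    }

def list_drive_events(reports_service, user_email: str, event_name: str = "download") -> list:
    """Fetch matching Drive activity records (first page only; pagination omitted).

    `reports_service` is assumed to be built with, e.g.:
        googleapiclient.discovery.build("admin", "reports_v1", credentials=creds)
    """
    response = reports_service.activities().list(
        **drive_audit_params(user_email, event_name)
    ).execute()
    return response.get("items", [])
```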

Once administrators conduct a search using Drive log events, they can choose files from the search results, review the permissions associated with those files, and perform additional actions as needed. Here are the actions you can take after identifying risks with Google’s investigation tool:

Find, add, and remove collaborators on Google Drive files 

Why it matters

Knowing who has been added as a collaborator to company files is essential to mitigate risk. Sometimes, collaborators have been added who should not have access, for example, a contractor who no longer works with the company. Using the tool, you can see who has access to your files, add new people, or remove collaborators. 

How to do it

  1. In the Admin console, go to “Menu” > “Security” > “Security center” > “Investigation tool.”
  2. After you run a search based on Drive log events, check the boxes for relevant files in the search results.
  3. Click “Add Users” if you want to give file access to additional users. You can add multiple users with a comma-separated list, and you can select the access level for the users that you add.
  4. Or, click “Actions,” and choose “Remove users.”
  5. You will need to confirm these actions by entering confirmation text.
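The same add/remove operations are available through the Drive API v3 if you need to script them across many files. A hedged sketch using google-api-python-client follows; the service object, file ID, and email address are hypothetical.

```python
def collaborator_body(email: str, role: str = "reader") -> dict:
    """Drive v3 permission resource for a single user.

    Valid roles include "reader", "commenter", and "writer".
    """
    return {"type": "user", "role": role, "emailAddress": email}

def add_collaborator(drive_service, file_id: str, email: str, role: str = "reader") -> None:
    drive_service.permissions().create(
        fileId=file_id, body=collaborator_body(email, role)
    ).execute()

def remove_collaborator(drive_service, file_id: str, permission_id: str) -> None:
    # The permission ID comes from drive_service.permissions().list(fileId=...)
    drive_service.permissions().delete(
        fileId=file_id, permissionId=permission_id
    ).execute()
```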

Find and change permissions on Google Drive files

Why it matters

Many people do not realize that when they share Google Drive files, they often grant collaborators more access than they need. For example, anyone with “Editor” access can share the document with others or even set the file to have a public link. This creates unnecessary risk, for example, an employee sharing a document with an external vendor and granting “Editor” permissions rather than “Viewer” or “Commenter.” 

These types of oversharing mistakes happen all the time in companies, so it’s helpful to use the Google Security Investigation Tool (GSIT) to check who has access to a file and what its link-sharing settings are. Using the tool, administrators can even manage permissions for shared drives. Let’s look at how: 

How to do it

  1. In the Admin console, go to “Menu” > “Security” > “Security center” > “Investigation tool.”
  2. After you run a search based on Drive log events, check the boxes for relevant files in the results.
  3. Click “Actions” > “Audit File Permissions” to open the Permissions page.
  4. The Files tab shows files that were included in your search results. From here you can manage access to those files. 
  5. You can select a file, and then click “Access.” From there, you can choose “Set access” to choose an access level. 
  6. You also have the option of removing users or disabling the ability to print, copy, or download the file. 
  7. Visit the “People” tab to view users and groups with access to the selected files. People in this list have access to one or more of the items from your search results. Use this view to manage the access of both users and groups.
  8. You can click “Links” to view or change the link-sharing settings on the files.
  9. Also, you can manage permissions for shared drives and files that were included in your search by visiting the “Shared Drives” tab. 
  10. Click “Pending Changes” to review the changes before saving.
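If you prefer to audit permissions in code, the Drive API v3 exposes the same information. The sketch below lists a file’s permissions and flags any held outside the company domain; everything here (service object, file IDs, domain) is a placeholder, and pagination is omitted for brevity.

```python
def external_permissions(permissions: list, company_domain: str) -> list:
    """Permissions held by accounts outside the company domain,
    including "anyone with the link" grants."""
    flagged = []
    for perm in permissions:
        email = perm.get("emailAddress", "")
        if perm.get("type") == "anyone" or not email.endswith("@" + company_domain):
            flagged.append(perm)
    return flagged

def audit_file_permissions(drive_service, file_id: str, company_domain: str) -> list:
    """Fetch a file's permissions (first page) and return the external ones."""
    response = drive_service.permissions().list(
        fileId=file_id, fields="permissions(id,type,emailAddress,role)"
    ).execute()
    return external_permissions(response.get("permissions", []), company_domain)
```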

Find and change ownership of Google Drive files

Why it matters

There are various reasons why you would change ownership of a file. Let’s say you are offboarding a former employee, and they are the owner of hundreds of sensitive company files. Or, you have an employee who has changed roles within the organization and should be the owner of certain documents, even though they did not create them. Using the GSIT, you can quickly transfer ownership to a new account. 

How to do it

  1. In the Admin console, go to “Menu” > “Security” > “Security center” > “Investigation tool.”
  2. After you run a search based on Drive log events, check the boxes for relevant files in the results.
  3. Click “Actions” > “Change owner.” 
  4. Type in the email address of the new owner’s account.
  5. Confirm the action by writing “CHANGE OWNER” in the confirmation textbox. 
  6. Click “Change owner” at the bottom of the box. 
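Ownership transfer can also be scripted with the Drive API v3 when hundreds of files are involved. A hedged sketch follows (hypothetical service object, file ID, and address); within the same Workspace domain, setting `transferOwnership=True` performs the transfer directly.

```python
def ownership_body(new_owner: str) -> dict:
    """Drive v3 permission resource granting ownership to a new account."""
    return {"type": "user", "role": "owner", "emailAddress": new_owner}

def transfer_ownership(drive_service, file_id: str, new_owner: str) -> None:
    """Make `new_owner` the owner of the file.

    Ownership grants must set transferOwnership=True on the request.
    """
    drive_service.permissions().create(
        fileId=file_id,
        body=ownership_body(new_owner),
        transferOwnership=True,
    ).execute()
```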

Find and restrict downloading, copying, and printing of Google Drive files

Why it matters

Although there will always be workarounds, restricting the ability to download, copy, or print sensitive documents in Google Drive reduces the risk of malicious or accidental misuse. It’s important to note that disabling these actions only affects users with “Commenter” or “Viewer” permissions. That’s another reason it’s vital to make sure only accounts that absolutely need “Editor” access have it. 

How to do it

  1. In the Admin console, go to “Menu” > “Security” > “Security center” > “Investigation tool.”
  2. After you run a search based on Drive log events, check the boxes for relevant files in the results.
  3. Click “Actions” > “Disable download, print, copy.”
  4. Confirm the action by typing in the confirmation textbox. 
  5. Click “Disable” at the bottom of the box. 
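The equivalent restriction exists in the Drive API v3 as the file’s `copyRequiresWriterPermission` flag, which blocks download, print, and copy for viewers and commenters. A hedged sketch with a hypothetical service object and file ID:

```python
def restriction_body(restrict: bool = True) -> dict:
    """File metadata update that blocks download/print/copy for
    viewers and commenters when True."""
    return {"copyRequiresWriterPermission": restrict}

def restrict_file(drive_service, file_id: str) -> None:
    drive_service.files().update(
        fileId=file_id, body=restriction_body(True)
    ).execute()
```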

Since its introduction in 2018, the Google Security Investigation Tool has evolved into a powerful solution for investigations in Google Workspace. Using the tool, administrators can act on their Google Drive search results, including transferring ownership, removing users, and changing permissions. The GSIT gives administrators a practical toolkit for elevating their security protocols. For a detailed guide on how to use the tool, visit here. 

The post Google Security Investigation Tool: Remove Users, Change Ownership, and More appeared first on Nira.

]]>