
Why Now Assist isn’t freaking out my internal CISO

Written by James Hamilton | Nov 21, 2024 10:50:13 PM

Last week at the World Forum in Toronto, a conference attendee came up to Astrica's booth and asked about Now Assist. While this is by no means uncommon at such events, she had a reason that stuck with me: "My boss is afraid of it." Now Assist (and Generative AI as a whole) is so new that organizations don't know what they don't know - especially with regard to security. There are news articles every week about potential security concerns for GPT and other LLMs (large language models). I'll be the first to tell you that my superpower is anxiety - so why am I not freaked out by Now Assist?

 

The Basics

Let's start by talking about what Generative AI is and how it works, and then move on to how Now Assist's structure is different (in a good way!):

  A user asks the GPT to provide an answer to something (a prompt).

  That request is fed into an LLM (like GPT or the Now LLM) and processed through multiple neural networks. A neural network holds connections signifying patterns and relationships between concepts. Think of how 'water' might be related to 'drink' and 'swim'. The LLM has been built from the data it was fed (this is what you hear about in the news: GPT scraping the internet to build such neural networks).

  Based on the neural networks, GPT predicts an answer: it finds the next most likely word given the prompt and the words that have preceded it in the answer (e.g. the most likely word after 'a neural network' might be 'processes'). It's based on probabilities, not on reasoning or comprehension. It repeats this process until it creates a full answer (see the sketch right after this list for a rough illustration).
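To make that "pick the most likely next word, then repeat" loop concrete, here is a minimal, purely illustrative Python sketch. It is not how GPT or the Now LLM is actually implemented (real models use neural networks with billions of parameters, not a lookup table); every name and probability below is made up for the example.

    # Toy sketch of next-word prediction: hypothetical probabilities for the
    # word that follows a given phrase, sampled repeatedly to build an answer.
    import random

    next_word_probs = {
        "a neural network": {"processes": 0.6, "holds": 0.3, "swims": 0.1},
        "a neural network processes": {"patterns": 0.7, "water": 0.3},
    }

    def generate(prompt, steps=2):
        text = prompt
        for _ in range(steps):
            candidates = next_word_probs.get(text, {"[end]": 1.0})
            words = list(candidates)
            weights = list(candidates.values())
            # The next word is chosen by probability, not by reasoning or comprehension.
            text = text + " " + random.choices(words, weights=weights)[0]
        return text

    print(generate("a neural network"))

That is the whole trick: probability-weighted continuation, one word at a time, which is why the quality of the prompt and the data behind the model matter so much.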

 

How is Now Assist Different?

It's easiest to answer by dispelling an assumption: the neural networks are not built on your data the way an in-house LLM's might be. You are not feeding all of your data into the Now LLM. The Now LLM is built on industry-leading practices (think more 'what makes a good resolution note' and less 'make a model of resolution notes based on all the resolution notes in the client's instance').

Each skill is architected in this fashion: you (or ServiceNow) identify a context for the skill, and a portion of that context is fed as a piece of the prompt submitted to the Now LLM API.

Let's give an example. I want to generate a knowledge article from an incident.

  The context of the skill usage is the incident itself.
  The elements of that context fed to the LLM are determined in the configuration of the skill. As of today, you are not able to modify the field inputs to the skills that come out of the box (though this is implied to be coming in a release after Xanadu). You still have full knowledge of what fields contribute to the prompt and the ability to encourage user behavior around those fields - providing a separate notes area for AI-restricted data, for example. For the example of generating a knowledge article from an incident, the relevant data is the incident's short description, description, resolution notes, work notes, and comments (see the sketch after this list for a rough picture of how such a prompt might be assembled).
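To illustrate what "only configured fields feed the prompt" means in practice, here is a rough Python sketch. This is not ServiceNow's actual code or API; the field list, record, and function names are hypothetical and exist only to show the concept.

    # Hypothetical: the fields a "generate knowledge article" skill is configured to use.
    SKILL_INPUT_FIELDS = ["short_description", "description", "resolution_notes",
                          "work_notes", "comments"]

    def build_prompt(incident: dict) -> str:
        # Only the configured fields are read into the prompt; everything else on
        # the record (caller details, CI data, attachments) is simply never included.
        context = "\n".join(f"{field}: {incident.get(field, '')}"
                            for field in SKILL_INPUT_FIELDS)
        return ("Write a knowledge article based on the following incident details:\n"
                + context)

    incident = {
        "short_description": "VPN drops every 30 minutes",
        "resolution_notes": "Updated the VPN client to version 5.2.",
        "caller_phone": "555-0100",   # not a configured field, so never sent
    }
    print(build_prompt(incident))

The point of the sketch is the scoping: the prompt is assembled from a known, finite list of fields, not from a bulk export of the record or the instance.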

You have full visibility into what is consumed. None of the current Now Assist features consume large volumes of data the way other machine learning capabilities (such as Task Intelligence) have.

 

Where does my data go?

It isn't going to a third party - it's staying within ServiceNow's systems and the Now LLM, much as your data has before.

🔑 Most important: "The data used to generate the response is deleted from the compute hubs after the response has been generated. The result is returned to the ServiceNow instance. The input and output data isn't cached or stored on the compute hub and is transient".

In non-techy speak, ServiceNow is not using your data for subsequent answers. ServiceNow's own security policies are readily available online - both for Now Assist and in general (see the links below).

Still not comfortable? ServiceNow gives you the ability to mask sensitive data before it's sent to the LLM, all through a configurable interface (a rough illustration of the idea follows below).
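Here is a purely illustrative Python sketch of the masking concept: scrub identifiable values out of the text before anything is submitted to a model. This is not ServiceNow's Now Assist implementation; the patterns and names are made up to show the idea.

    import re

    # Hypothetical masking rules: label -> pattern to redact before sending.
    MASKING_RULES = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def mask_sensitive_data(prompt: str) -> str:
        # Replace each match with a placeholder so the raw value never reaches the model.
        for label, pattern in MASKING_RULES.items():
            prompt = pattern.sub(f"[{label} MASKED]", prompt)
        return prompt

    raw = "Caller jane.doe@example.com reported SSN 123-45-6789 appears on the form."
    print(mask_sensitive_data(raw))
    # -> Caller [EMAIL MASKED] reported SSN [US_SSN MASKED] appears on the form.

The value of doing this through a configurable interface is that the security team, not each developer, decides which patterns get masked and can adjust them without touching the skills themselves.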

 

The main point...

ServiceNow isn't storing your data inside the LLM as part of a neural network to build answers for other clients. It isn't taking huge batches of your data to compute answers either - each skill defines its own context and likely sends less data than you'd expect. ServiceNow's security policies are published and known. Compared to home-grown LLMs or publicly available ones, I find the CISO in me is significantly calmer when approaching this technology.

If you have questions, need more detailed guidance, or want to discuss your specific challenges, feel free to request a meeting - we're here to help.

 

Some helpful links:

User data usage policy for Now Assist

Overview of ServiceNow's Security Program

Data Processing Addendum - ServiceNow

Data Security Addendum - ServiceNow