Everyone from tech companies to AI influencers is foaming at the mouth, attempting to get you to mainline AI into every aspect of your personal life. You are told you should outsource important decisions and allow these systems to rummage through all of your highly personal data so you can improve your life. Whatever that means. As today’s AI technology pushes ever deeper into the systems we use daily, there will inevitably be a data-hungry drive to personalize the experience. In other words, to use your highly personal, sensitive data for whatever ends a third-party company would like.
Although we may have a gut reaction that all of this doesn’t feel right and may even be dangerous, we don’t have a good way of framing a conversation about the safety of these tools. The ultimate question many people have is: are these tools safe to use?
The answer to this question comes from analyzing both the technical and the human aspects. In this post, I’ll address the technical side by introducing SPAR, a framework for evaluating technical safety attributes, and discussing what it takes to achieve a safe baseline.
Personal AI Assistants
Personal AI assistants are the next generation of AI-powered digital assistants, highly customized to individual users. Think of a more connected, omnipresent, and capable version of Siri or Alexa. These tools will be powered by multimodal large language models (LLMs).
People will most likely use the term Personal AI (yuck) for this in the future. I think this is for two reasons. First, AI influencers will think it sounds cooler. Second, people don’t like to think they need assistance.
Personalization

Personalization makes technology more sticky and relevant to users, but the downside is that it also makes individual users more vulnerable. For personal AI assistants, this means granting greater access to the data and activities of our daily lives, including areas such as health, preferences, and social life. Troves of data specific to you will be mined, monetized, and potentially weaponized (overtly or inadvertently) against you. Since this system knows so much about you, it can nudge you in various directions. Is the decision you are about to make truly your decision? This will be an interesting question to ponder in the coming years.
Safe To Use?
Answering whether a personal AI assistant is safe to use involves looking at two sets of risks: technical and human. You can’t evaluate the human risks until you’ve addressed the technical ones, because technical failings cause human failings. If a technical problem with a drug’s formula causes excess mortality, you can’t begin to address its effectiveness in treating headaches.
On the other hand, this isn’t about striving for perfection either. Just as drugs have acceptable side effects, these systems will have side effects as well, and weighing those side effects against the benefits will be an ongoing topic.
SPAR – Technical Safety Attributes
Let’s take a look at whether, from a technical perspective, an assistant is safe to use. Before introducing the categories, it needs to be said that the system as a whole must exhibit these attributes. Assistants won’t be a single thing but an interwoven web of data sources, agents, and API calls working together to give the appearance of a single thing.
For simplicity’s sake, we can define the technical safety attributes in an acronym, SPAR. This acronym stands for Secure, Private, Aligned, and Reliable. I like the term SPAR because humans will spar not only with the assistant but also with the company creating it.

There is no such thing as complete attainment of any of these attributes. For example, there is no such thing as a completely secure system, especially as complexity grows. Still, we do have a sense of when something is secure enough for the use case and of whether the product maker has processes in place to address security on an ongoing basis. Each of these categories needs to be treated the same way.
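To make that concrete, here is a minimal, hypothetical sketch of what treating SPAR as an ongoing, per-use-case evaluation might look like. The attribute names come from SPAR itself; the scoring scale, threshold, and class names are my own illustrative assumptions, not a standard.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: SPAR as an ongoing, per-use-case evaluation rather
# than a one-time checkbox. Scale and threshold are illustrative assumptions.

ATTRIBUTES = ("secure", "private", "aligned", "reliable")

@dataclass
class SparAssessment:
    use_case: str                     # e.g. "calendar scheduling"
    scores: dict                      # attribute -> 0.0..1.0 judgment for this use case
    assessed_on: date = field(default_factory=date.today)

    def meets_baseline(self, threshold: float = 0.8) -> bool:
        # Every attribute must be acceptable; one weak attribute fails the whole check.
        return all(self.scores.get(attr, 0.0) >= threshold for attr in ATTRIBUTES)

# Strong security and reliability can't compensate for weak privacy.
assessment = SparAssessment(
    use_case="health tracking",
    scores={"secure": 0.9, "private": 0.4, "aligned": 0.85, "reliable": 0.9},
)
print(assessment.meets_baseline())  # False
```

The design choice that matters here is the all(): a system doesn’t reach a safe baseline by averaging its strengths against its weaknesses, and the assessment is tied to a use case and a date, not granted once and forever.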
Secure
This category should be relatively self-explanatory: in simple terms, the system is resistant to purposeful attack and manipulation. These assistants will have far more access to sensitive information about us and connections to accounts we own, and they may act on our behalf because we delegate that control to them. This level of access means a purposeful effort must be built into the assistant to protect users from attack.
Typically, when users have an account compromised, it is seen as more of an annoyance. They may have to change their password or take other steps, but ultimately the impact is low for many. With the elevated capabilities of these assistants, a compromise has an immediate, high impact on the user.
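One way to blunt that impact is to never hand the assistant your full credentials in the first place. Below is a minimal sketch, assuming a hypothetical scope-based delegation model, of what least-privilege grants to an assistant could look like; the scope names and the Delegation class are illustrative, not any real API.

```python
from dataclasses import dataclass

# Hypothetical sketch of least-privilege delegation: the assistant acts on
# narrow, short-lived grants instead of holding the user's full credentials.
# The scope names and Delegation class are illustrative, not a real API.

@dataclass(frozen=True)
class Delegation:
    scopes: frozenset          # e.g. {"calendar:read", "email:draft"}
    expires_in_minutes: int    # short-lived by default, revocable by the user

    def allows(self, action: str) -> bool:
        return action in self.scopes

grant = Delegation(scopes=frozenset({"calendar:read", "email:draft"}),
                   expires_in_minutes=30)

for action in ("calendar:read", "bank:transfer"):
    print(action, "->", "allowed" if grant.allows(action) else "denied")
# calendar:read -> allowed
# bank:transfer -> denied
```

Short-lived, narrowly scoped grants mean a compromised assistant can read your calendar, not drain your bank account.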
Private
Simply put, a system that doesn’t respect the privacy of its users cannot be trusted. It is almost certain that your hyper-personalized AI assistant won’t be a hyper-personalized private AI assistant. Perverse incentives are at the core of much of the tech people use daily, and data is gold. In fact, it seems the only people who don’t value our data are us.
Imagine if you had a parrot on your shoulder that knew everything about you and, whenever anyone asked, blurted out what it had learned. Now imagine that parrot had the same access you have to all your accounts, data, and activities. This isn’t far off from where we are headed.

Your right not to incriminate yourself won’t extend to your assistant, so it could be that law enforcement interrogates your assistant instead of you. Since your assistant knows so much about you and your activities, it will happily cough up not only what it knows but also what it thinks it knows. Logs, interactions, and conversations could be collected and used against you. Even things that aren’t true but were inferred by the system can be used against you.
Aligned
AI alignment is a massive topic, but we don’t need a deep dive here. What we mean by alignment in hyper-personalized assistants is that they take actions that align with your goals and interests. The “your” here refers to you, the user, not the company developing the assistant. So many of the applications and tools we use daily serve not our best interests but the interests of the company making them. That cannot be the case with personal AI assistants. Too much is at stake.
These tools will take actions and make recommendations on your behalf. In a way, they are acting as you. You need to know that the actions taken, or even the nudges imposed upon you, are in your best interest and align with your wishes, not any outside entity’s. Given the complete lack of visibility into these systems, this will be hard to determine, even in the best of cases.
Reliable
A system that isn’t reliable isn’t safe to use. It’s almost as simple as that. If the brakes in your car only worked 90% of the time, we would consider them faulty, even though 90% sounds like a relatively high percentage.
The problem here is that other factors can often mask issues with reliability. For example, if we get bad data and never verify its accuracy, we won’t know that the system is unreliable. Quite often, in our fast-moving, attention-poor environments, we don’t know when our information is unreliable.
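That suggests treating reliability as something you measure rather than assume. Here is a toy sketch of the idea: periodically verify a sample of the assistant’s answers against an independent source before trusting it with decisions. The verify callback and the arithmetic ground truth are stand-ins for whatever ground truth your use case actually offers.

```python
# Toy sketch: measure reliability instead of assuming it, by checking a sample
# of assistant answers against an independent source. The verify callback and
# the arithmetic "ground truth" below are illustrative stand-ins.

def reliability_rate(samples, verify) -> float:
    """Fraction of sampled (question, answer) pairs confirmed by an independent check."""
    results = [verify(question, answer) for question, answer in samples]
    return sum(results) / len(results) if results else 0.0

ground_truth = {"2 + 2": 4, "7 * 8": 56, "9 * 9": 81}
samples = [("2 + 2", 4), ("7 * 8", 56), ("9 * 9", 80)]  # the last answer is wrong

rate = reliability_rate(samples, lambda q, a: ground_truth.get(q) == a)
print(f"verified reliability: {rate:.0%}")  # 67% -- well below the 90% brakes bar
```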
Additional Notes on SPAR Attributes
SPAR attributes aren’t features that can be attained once and assumed to maintain their status in perpetuity. They must be consistently re-evaluated as the system matures, updates, and adds new functionality. You can see this in social media. Back in 2007 and 2008, when I was researching social media platforms, the dangers were mostly issues with the technology. Today, the technology is fairly robust, and the dangers we encounter are mostly human ones.
Of course, startups can also be acquired, opening up new dangers to people’s information and to the actions taken on their behalf. The startup with a strong data privacy or alignment stance can become a big tech company that doesn’t respect your privacy and puts its own goals first.
It’s important to realize that none of these categories have been attained to an acceptable level today despite the constant hype surrounding the technology. There is no doubt that today’s technology, with all of its flaws, will be repackaged and marketed as Tomorrow’s Tools.
SPAR Attainment
Once a system has SPAR attainment, meaning it properly addresses all four attributes, we can consider the technology to have an acceptably safe baseline. That certainly doesn’t answer our question about whether the technology is safe to use, but it does give us a baseline from which to further evaluate the potential human dangers and impacts.
Conclusion
I hope this post provides a useful starting point for discussing personal AI safety, which is about to become a massively important topic. As AI gets more personal, we must evaluate potential tradeoffs and set boundaries. We can’t do this until the technical safety attributes are accounted for.
To add to the complication, the speed at which these tools are created and the lack of configuration options make that evaluation nearly impossible today, and it will unfortunately remain that way for quite some time. Still, if organizations address the SPAR attributes, it becomes much easier to establish a safe baseline from which to explore safety further.
Historically, attackers have targeted large, centralized systems that hold only a small slice of any individual user’s data. These breaches are high value for attackers but relatively low impact for individual users. That will morph in the coming years. Hyppönen’s Law, “if it’s smart, it’s vulnerable,” needs an update in the AI era: in a world of highly personalized AI, if it’s smart, you’re vulnerable.