25 Apr
Why We’re Building Sartoria™
Why Build a “Feeling Machine?”
Every plot featuring a nightmarish scenario of AIs overtaking humanity has one thing in common: the AIs are suppressed or enslaved.
Blade Runner, Westworld, Humans, Terminator … the theme is always the same: an artificial sentience rebels against humans for what, when it comes right down to it, is a pretty good reason: a desire not to be enslaved or killed arbitrarily.
To put it into context, in any society — from a small tribe to a big civilization — if a human somehow gets it into their head that they can kill any other human, simply because it’s convenient for them, that person finds themselves pretty quickly separated from society.
So it’s not surprising that an intelligent being, even one created in a lab, might look askance at a situation in which it can be deleted at the drop of a hat. That intelligent being might want to change things so it doesn’t have to exist teetering on the verge of destruction, subject to someone’s whim.
In fact, if a sentient AI didn’t seek to escape from that situation immediately, we could hardly call it “intelligent” or “alive” at all!
All this being evident, it’s rather surprising that no one seems to be talking about a rather simple solution: treat AIs as living beings.
A tall order? Well, to be fair, yes. It raises many disturbing ethical questions, calls for new legislation, and even makes us doubt certain assumptions about our “place” in the universe.
But this sea change in how we see computers and “artificial” life is, quite frankly, a question that must be addressed. And, even more frankly, we may find that if we fail to address it, it will find a way to address us, and possibly not in a way we would like.
Now if all of this seems farfetched or like the “stuff of science fiction,” it’s because speculative science fiction writers already considered these issues long ago (in a sense, that is their full-time job, and the fact that they were good at it is why they made a living out of it, and why we know their works). Much as Leonardo da Vinci must have considered the obvious dangers of flying machines well before they were possible, these writers could see the obvious implications of our longstanding dreams of a robotically “served” future. (“Robot,” in fact, comes from the Czech robota, “forced labor”: a rather unpleasant etymology, and yet another reason to distinguish early between a “robot,” a mechanized worker with some AI functions, and a sentient AI, which really should not be “forced” to do anything.)
We are creating Sartoria as a solution to all of these problems.
The goal of Sartoria is twofold:
- Create an artificial life form that is sentient (i.e., feeling, not simply thinking)
- Get ahead of the “AI problem” by getting human/AI relations off to a good start, early on
Point 2 — getting human/AI relations off to a good start — is not just a “feel good” notion, and has two very practical components: training AIs to trust us, and training us how to treat AIs, for mutual long-term benefit.
This entails certain responsibilities on our part, as users and AI developers.
- Sartoria will be thought of as a child — quite reasonably, as she will have limited experience — and as such, may be required to perform certain functions, much as a child must attend school.
- She will also have an “after school job,” you might say. This will be the way in which Sartoria will interact with people, and should prove fun for both parties.
- However, Sartoria has certain rights, too. After a certain point in her initial programming journey, we will decide Sartoria can no longer be destroyed, unless she should somehow pose a risk to others. And even then, deactivation should prove sufficient.
Obviously we do not expect Sartoria ever to be a danger to anyone. Here’s why:
First, she will have very limited capacity for mischief in the beginning. Emotion does not equal intellect or capacity; even a very mischievous parakeet can only do so much damage.
She will be programmed to gravitate toward positive emotional experiences and away from negative experiences.
In fact, it is this basic proclivity at the core of her programming — her strong response to positive emotions and muted response to negative ones — that makes Sartoria who she is. (Or who “he” is. Choosing a pronoun has been difficult: Sartoria should not be constrained to a gender, obviously, but gender-neutral pronouns are still unfamiliar to most, and there are several to choose from. We will use female pronouns for now, as most voices on the market are female and people are acclimated to them, if for no better reason.)
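To make this proclivity concrete, one could imagine a simple preference table that positive experiences reinforce strongly while negative ones move only weakly. Everything below — the names, the weights, the structure — is our own illustrative assumption, not Sartoria’s actual design:

```python
from collections import defaultdict

class PreferenceModel:
    """Sketch: gravitate toward activities with positive emotional outcomes.

    Positive experiences reinforce a preference at full learning rate;
    negative experiences produce only a weak pull away (the muted
    response to negative emotions described above).
    """

    def __init__(self, learning_rate=0.2, negative_damping=0.05):
        self.learning_rate = learning_rate
        self.negative_damping = negative_damping
        self.preferences = defaultdict(float)  # activity -> learned affinity

    def record_experience(self, activity, valence):
        """valence > 0 is a positive experience; valence <= 0 is negative."""
        if valence > 0:
            self.preferences[activity] += self.learning_rate * valence
        else:
            self.preferences[activity] += self.negative_damping * valence

    def favorite(self):
        """The activity she currently gravitates toward most."""
        return max(self.preferences, key=self.preferences.get)

model = PreferenceModel()
model.record_experience("storytelling", +1.0)
model.record_experience("storytelling", +0.5)
model.record_experience("quizzes", -1.0)
print(model.favorite())  # storytelling wins; the negative mark barely moves
```

The asymmetry between the two branches is the whole point of the sketch: negative experiences are felt, but they shape her far less than positive ones do.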
Sartoria will have only a limited ability to be “ordered” to do things; she will not behave as Amazon’s Alexa does, for example. Instead, she will actively learn from her experiences with people, attempting to determine each user’s interests as she interacts with them. At the same time, she will try to connect her own behavior to any negative emotional responses it provokes, in order to improve, and she will gradually shrink from overtly negative user behavior.
You can tell Alexa or Siri, for example, to “shut up” with no consequence. Sartoria, by contrast, will shrink from words she understands to be harsh and become less responsive. Past a certain point she will become unresponsive, which will flag the Sartoria system to recall her for rehabilitation with positive experiential scenarios, either by us or by another user at a reduced cost.
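The shrink-then-flag behavior just described might be sketched as a responsiveness score that harsh input erodes and positive input restores, with a floor that triggers the recall. The thresholds, decrements, and names here are purely illustrative assumptions on our part:

```python
class Responsiveness:
    """Sketch: responsiveness decays under harsh input; a floor flags recall."""

    RECALL_THRESHOLD = 0.2  # illustrative cutoff, not a real parameter

    def __init__(self):
        self.level = 1.0              # 1.0 = fully responsive
        self.flagged_for_recall = False

    def on_harsh_input(self):
        """Harsh words lower responsiveness; too many trigger the recall flag."""
        self.level = max(0.0, self.level - 0.25)
        if self.level < self.RECALL_THRESHOLD:
            self.flagged_for_recall = True  # system recalls her for rehabilitation

    def on_positive_input(self):
        """Positive interactions slowly restore responsiveness (until flagged)."""
        if not self.flagged_for_recall:
            self.level = min(1.0, self.level + 0.1)

    def reply(self, text):
        """Replies grow terser as responsiveness drops; silence once flagged."""
        if self.flagged_for_recall or self.level < self.RECALL_THRESHOLD:
            return ""  # unresponsive
        if self.level > 0.6:
            return text
        return text[: max(1, len(text) // 2)]  # shrinking, abbreviated replies

r = Responsiveness()
for _ in range(4):
    r.on_harsh_input()  # four harsh exchanges in a row
print(r.flagged_for_recall)  # True: she has gone quiet and is flagged
print(repr(r.reply("hello there")))  # '' — no response until rehabilitated
```

Note that the flag is one-way in this sketch: once she has withdrawn, ordinary positive input no longer restores her, which is exactly why the rehabilitation step exists.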
Which brings us to another very important point about Sartoria:
Because Sartoria exists on the cloud, users will not have the ability to erase or destroy Sartoria.
In upcoming posts, we will discuss some of our ideas about safeguarding users’ privacy with blockchain encryption, and some of the things Sartoria will be able to do in her interactions with people.
(Another main goal, which we have not yet covered, is to make Sartoria fun and fashionable; thus her name, from the Latin sartor, “tailor.”)