August 23, 2011
So let’s start our StoryBot talk by defining some terms. First off, what is a “cognitive agent”? What makes it different from other programs?
For our purposes, an “agent” is an autonomous, self-contained software system that receives input in some manner (IM), processes this input by consulting its internal context (MU), and creates a response (EX). The name of my 13-year-old software company comes from this 24-year-old abstraction (and word):
IM – impression, perception, things going in
MU – processing, consulting internal context
EX – expression, the response going out
Immuexa is the process of perceiving, creating, and expressing. During my years working on Gravity, these three short words represented the fundamental components of the architecture. Now, you might say, “Well, that describes all computer programs and people!” and you’d be right. The usefulness of im/mu/ex lies not in the external view, but in how this pattern replicates itself “all the way down,” in a manner similar to a fractal.
But I’m getting ahead of myself. For now, just know that an agent is something that im’s, mu’s, and ex’s. By “self-contained,” I mean that each agent is a black box to the person or system interacting with it. There’s no telling what happens between im & ex. The word “autonomous” means that each agent works independently of any system that contains it. The same agent can be wired to a website, an iPhone, an IRC channel, or a group of other agents.
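To make the abstraction concrete, here is a minimal sketch in Python of an agent as an im/mu/ex loop. All names here (Agent, im, mu, ex, CompositeAgent) are my own illustrations, not Immuexa’s or Gravity’s actual code; the point is only the shape of the pattern and how it nests.

```python
# Illustrative sketch of the im/mu/ex abstraction -- not real Immuexa code.

class Agent:
    """An autonomous, self-contained im/mu/ex loop."""

    def __init__(self):
        self.context = {}  # internal context consulted during mu

    def im(self, message):
        """Impression: receive raw input from some channel."""
        return message.strip()

    def mu(self, perception):
        """Mull: process the perception against internal context."""
        self.context["last"] = perception
        return f"You said: {perception}"

    def ex(self, creation):
        """Expression: emit the response to the outside world."""
        return creation

    def respond(self, message):
        # The caller only ever sees input and output: a black box.
        return self.ex(self.mu(self.im(message)))


# Because each agent is a black box, an agent's mu step can itself
# consult other agents -- the pattern repeats "all the way down."
class CompositeAgent(Agent):
    def __init__(self, inner_agents):
        super().__init__()
        self.inner = inner_agents

    def mu(self, perception):
        replies = [a.respond(perception) for a in self.inner]
        return " / ".join(replies)
```

Note that nothing in `Agent` knows whether it is wired to a website, an IRC channel, or another agent; `respond` is the entire surface area.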
So what about “cognitive”? How do cognitive agents differ from simpler software agents, such as mailer daemons? Well, this is a much bigger question, so I’ll simply give a quick example here.
The simplest non-cognitive agent:
<nosmo> What is your name?
A complex cognitive agent:
<smilla> How can I help?
Smilla is “more cognitive” than Nosmo. And more useful.
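The difference can be caricatured in code. Below, a hypothetical non-cognitive agent produces the same output regardless of input or history, while a slightly cognitive one lets internal context shape its expression. These classes are my own toy versions, not the real Nosmo or Smilla.

```python
# Toy contrast between a non-cognitive and a (barely) cognitive agent.

class Nosmo:
    """Non-cognitive: the mu step ignores both input and context."""

    def respond(self, message):
        return "What is your name?"


class Smilla:
    """Cognitive, minimally: mu consults remembered context."""

    def __init__(self):
        self.name = None  # internal context

    def respond(self, message):
        if self.name is None:
            self.name = message.strip()
            return f"Hello, {self.name}. How can I help?"
        return f"What else can I do for you, {self.name}?"
```

Nosmo will ask your name forever; Smilla remembers it and moves the conversation forward, which is a crude version of being “more cognitive.”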