Campaign Genius CEO Will Deliver Keynote in Brooklyn: Matthew Dunn’s Perspective on AI
Posted by the M3AAWG Content Manager

The Messaging, Malware and Mobile Anti-Abuse Working Group (M3AAWG) is excited to bring you the insights of Dr. Matthew Dunn, founder of Campaign Genius, at the 59th General Meeting in Brooklyn. Dr. Dunn will be speaking about Artificial Intelligence (AI) solutions and their adoption in email.

With more than three decades of experience leveraging the latest technologies to communicate, Dr. Dunn will offer both a current look at AI and a view of the future. 

Dr. Dunn is a serial entrepreneur and executive with wide-ranging experience.  He has been a startup CEO, Fortune 1000 Senior Vice-President and CIO, Microsoft veteran, consultant, technology standards organization Executive Director, and university professor.  He is also an award-winning writer, director, designer, and inventor, holding over a dozen patents in diverse fields.

M3AAWG is honored to host Dr. Dunn here on the blog ahead of the meeting, where he offers a perspective that helps frame the current state of AI and, we hope, raises a few questions for this must-see keynote in Brooklyn.

“Come here.”

Say that out loud as if calling a stubborn puppy.

“Come here.”

Then again as a romantic invitation. 

“Come here.”

And again, calling attention to a bird just alit.

In my first career (stage director and acting teacher), some variant of this exercise was a standard. Actors don’t recite words from a page; words are actions.  Actors work tirelessly to craft a compelling arc of actions for their character. They hoard verbs like software developers (much later career) stash code snippets.  You can buy books of verbs like “Actions: The Actor’s Thesaurus” if you’re interested.

As I watch and participate in the cultural drama of AI ‘becoming a thing’, it has occurred to me that dramatic elements like character, action, intention and relationship will become important in how AI technologies are put to use.

As a technology, AI is not new; the label dates back to the 1950s, and some of the key concepts and algorithms are not much younger.  But somewhere in the last year, AI had a Sputnik moment and became “a thing.”  It’s running very much the usual arc of a new fundamental technology — fear, anticipation, outsized short-term prospects (and investments), and undersized long-term notions of how these technologies will probably change things.

While I’m following the various creative-industry strikes (SAG-AFTRA) keenly, and am very much on the side of the strikers, my focus in this post is not to talk about “AI writing screenplays” and such things.  Rather, I’m thinking about the longer-term decisions and designs that will be required to create workable relationships between us (humans) and seemingly-intelligent, autonomous agents.

In other words, if one of the many acronym collisions in “A.I.” is ‘Assistive Intelligence’, how is “it” going to have to act for us to accept assistance?

I don’t think it’s entirely accidental that text-based, conversationally-designed AI — yes, ChatGPT — pushed AI to the cultural fore. ChatGPT (and Claude, and Bard) are amazing.  But there were already amazing AI models behind use-it-every-day, take-it-for-granted tools — translation, navigation, search, gaming and more.  They didn’t turn every other conversation into one about AI. Along comes a keyboard-mode, types-too-slow chatterbot, and all of a sudden we’re seeing pictures of red-eyed Terminators and headlines about job loss everywhere.

The fact that anyone with a browser could try ChatGPT with a few clicks was a big factor — but its eccentric-art-school-grad sibling DALL·E 2 had been just as easy to try a few months earlier, and that didn’t cause the same cultural gasp. Chatty, conversational written language strikes us differently.  It mimics a mode of person-to-person communication already in daily use (SMS, Slack, etc.).  It has the cadence of conversation — perhaps intentionally, perhaps to conserve GPUs. And it has the on-the-page ambiguity of written language, which is key.

ChatGPT does have speech patterns, no doubt by careful design.  It practices good listening, echoing back portions of prompts.  It taps out well-structured prose, in complete sentences, and more. It’s polite to a fault — borderline obsequious.  AI expert S. Martin anticipated the challenges of chat AI, hallucinations and all, back in 1979.


ChatGPT speaks. 

But does it act? 

Words typed on a screen convey information, but as everyone who’s ever sent an email or text knows, they frequently do a poor job of conveying what you mean; action and intention aren’t clear.  So perhaps one of the non-obvious reasons that ChatGPT, specifically, became the Sputnik moment for AI is that the ambiguity of typed speech leaves the imagination free to ascribe intelligence and human-like qualities — more of both than merited, judging from the range of cultural reactions.

Fair enough; GPT-3 was a baby, 3.5 a toddler, and 4.0 at best an adolescent.

As AI technologies evolve, how we relate to them will matter more and more.  No doubt AI broadly speaking will be pervasive and (as now) mostly invisible.  But there’s an efficiency limit to how much people will be willing to engage with AIs, plural, as “intelligent agents” – just as there’s a limit to how many active person-to-person relationships we’re likely to maintain.  (Dunbar’s number — 150 or so — is an example.)  We won’t want to have ‘a relationship’ with every possible AI.

That brings us back to “Come here.”  There are myriad factors in how we relate to someone (or in this case, some ‘thing’), but the dramatist’s argument would be that the actions taken by those in relationships are critical.  Person A doesn’t walk out on Person B because of the words spoken, but because of the action those words embody in how, when, where and why they’re spoken.  Making AI entities ‘relatable’ will require serious design of the pattern of actions they take — not just the words they speak.

That’s pretty speculative; let’s ground it a bit in a here-and-now example: Siri.  Siri is a bore.  Siri is not “a person”, and not relatable.  Siri is speech-as-an-interface. Yes, Siri is a 12-year-old technology (iPhone 4S, 2011). Speech recognition, speech synthesis and so on have evolved considerably since 2011, but that’s not what makes Siri unrelatable — rather, it’s that Siri has all the personality of a talking DOS prompt. Siri always does what you ask — or tries to. Siri has cute responses, but they’re canned and predictable.  Unlike ChatGPT, Siri doesn’t keep track of context — what you asked a minute ago doesn’t really affect the next answer.  Worst of all, Siri is generic — same agent, same name, same set of voices, same responses on every iPhone. Nobody really wants a relationship with Everyman (apt but gratuitous drama-history reference).
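That context point is worth making concrete. Here is a minimal, illustrative sketch (Python, with a made-up reply() function standing in for whatever model or service actually generates answers, and not anyone’s real implementation) of the difference between a stateless, one-shot assistant and a chat-style one that replays the running transcript with every turn.

```python
# Illustrative sketch only; reply() is a hypothetical stand-in, not a real API.

def reply(prompt: str) -> str:
    """Placeholder for a language-model or assistant back end."""
    return f"(answer based on: {prompt!r})"

def stateless_ask(question: str) -> str:
    # Siri-style: each request stands alone, so "What about tomorrow?"
    # arrives with no memory of the weather question that preceded it.
    return reply(question)

class Conversation:
    # Chat-style: the transcript is replayed with every turn, so earlier
    # exchanges shape later answers.
    def __init__(self) -> None:
        self.history: list[str] = []

    def ask(self, question: str) -> str:
        self.history.append(f"User: {question}")
        answer = reply("\n".join(self.history))
        self.history.append(f"Assistant: {answer}")
        return answer

if __name__ == "__main__":
    print(stateless_ask("What's the weather today?"))
    print(stateless_ask("What about tomorrow?"))   # no context to draw on

    chat = Conversation()
    chat.ask("What's the weather today?")
    print(chat.ask("What about tomorrow?"))        # prior turn travels with the prompt
```

The point isn’t the code; it’s the design choice. An agent that carries its own history with it has the minimum ingredient of anything we’d call a relationship.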

This is a way-out-on-the-horizon idea about how AI will evolve, but over time, I think that nebulous thing called “personality” will become the AI-era version of “brand.”  Yes, AI technologies will be pervasive and ubiquitous (as they have been for a number of years, honestly).  Most of them will be invisible.  The ones that have enough personality, and the right personality, to become “a relationship” will be in pole position in the AI ecosystem.  “My AI” will do many of the tedious things I currently have to handle; the person or company that falls foul of it (Her? Him? They?) won’t ever reach me.  The message flow that M3AAWG shepherds so carefully will be sorted, “read”, deleted and spam-foldered by (an) AI.  (If you’ve ever had an executive assistant, you’ll understand the appeal of that prospect!)

“An AI” is intentional phrasing; I expect that we’ll end up with a few — perhaps even one — that we ‘relate’ to directly. Building an AI capable of the actions that build relationships will require (among many others) the talents and insights of the folks currently picketing.

The companies that provide the winners — the AI assistants-for-life — will have unprecedented influence, reach and power.  I’ve got a guess about the company that’s currently in the best position for that. I look forward to discussing all this during my keynote session in Brooklyn and hope you will join me for the Guided Keynote discussion over lunch on Tuesday. 

 —Dr. Matthew Dunn, Campaign Genius
    matthew@campaigngenius.io

 

The views expressed in DM3Z are those of the individual authors and do not necessarily reflect M3AAWG policy.