Some of my colleagues were worried that the work they can do with AI is “cheating.” Hogwash. Most of the forecasting models I’ve seen say that 50% of strategic tasks will be impacted by various types of AI.
The way I think about my tools, AI is just a faster highlighter. And not only that – it’s a highlighter that processes, and can process kinda like you do. I, for one, am building a dynamic collaboration with an ever-evolving tool that draws on the information I know and the writing style I’ve developed over time.
Oh! So, a “digital self,” you say? No.
Most of the terms used to describe building an AI “second self” (while arguing that AI isn’t human) imply a human-like presence. The use of “brain,” “twin,” “alter ego,” even “copilot” or “apprentice” to describe this function of AI suggests we’re still grappling with the idea that intelligence must be embodied in something human. A truly non-anthropomorphic term would have to avoid references to selfhood, cognition, or identity.
I’m going to stick with HitchBot – and call it my personal deep circuit (PdC).
So, back to the debate raging over AI’s anthropomorphism (see also the NYT piece on someone in love with ChatGPT), which raises a fundamental question: should we see AI as a person-like entity or simply as a technological tool? While some argue that attributing human-like qualities to AI is misleading, others contend that as AI evolves, we must rethink its role in society. And in the context of professions, here again, even the label “role” gets tricky—does AI have a role, or is it a tool that every role should wield? Ah, semantics.
Regardless of where you stand, one thing is clear: AI is changing how we think, work, and interact with knowledge.
AI as a Cognitive Extension, Not Just Automation
From what I’ve seen thus far, AI’s true potential lies way beyond automation. For strategists, in particular, it can be a powerful cognitive extension that enhances strategic thinking. AI can:
- Scan massive datasets and recognize patterns at ridiculous speeds
- Act as a knowledge hub, synthesizing cultural insights, consumer shifts, and market trends… and this hub can be customized, weighted and manipulated
- Gather insights in real time, mirroring aspects of human expertise when trained to do so
This “mirroring” aligns with the concept of distributed cognition—the idea that intelligence isn’t confined to a single mind but is spread across tools, teams, and external references. Just as policy experts draw on historical precedents, strategists can train AI to amplify their particular brands of expertise. They can still turn to other people with different kinds of brains as well – it’s in the combination of all those processors that the magic starts to happen.
Training AI: Build Your Personal Deep Circuit (PdC)
Making AI a true PdC requires more than just stuffing it with data. It must be shaped by human judgment, strategic thinking, and industry standards and nuance. The goal isn’t to outsource your thinking—it’s to enhance and inspire it.
Imagine a PdC that:
- Understands your past work and preferred frameworks
- Challenges your assumptions and generates thought starters (like arguing with a version of yourself, or with your favorite colleague)
- Anticipates strategic blind spots in your knowledge base and fills in gaps
The key to success? Alignment. The whole point of a PdC is that it can reflect and refine your thinking – and spark new ideas for how to wield it. This requires iterative training—a feedback loop where you continuously fine-tune AI inputs and outputs. Practice makes perfect – or, as my colleague Sam says: you have to get your reps in.
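To make that loop less abstract, here’s a minimal sketch of one way a PdC can be wired up: a retrieval layer over your own past work, feeding a model that answers inside your frameworks (essentially the Retrieval-Augmented Generation idea defined in the glossary below). Everything here is an illustrative assumption – the OpenAI Python SDK, the model names, the briefs/ folder of past work exported to text – not a description of how HitchBot is actually built.

```python
# A rough sketch of a "personal deep circuit": retrieval over your own past
# work feeding a model that answers in the context of that work.
# Assumptions (not from the article): the OpenAI Python SDK, an OPENAI_API_KEY
# in the environment, and a folder of plain-text past briefs at ./briefs/.

from pathlib import Path

import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STYLE_PROMPT = (
    "You are a strategist's thinking partner. Use the supplied excerpts from "
    "the strategist's past work to mirror their frameworks and voice, and "
    "flag assumptions or blind spots rather than just agreeing."
)


def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts so past work can be matched to a new question."""
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in out.data])


# 1. Load and embed the personal corpus (past briefs, decks exported to text, etc.).
docs = [p.read_text() for p in Path("briefs").glob("*.txt")]
doc_vectors = embed(docs)


def ask_pdc(question: str, k: int = 3) -> str:
    """Retrieve the k most relevant past documents, then generate with them as context."""
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every past document.
    sims = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n\n---\n\n".join(docs[i] for i in np.argsort(sims)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": STYLE_PROMPT},
            {"role": "user", "content": f"Past work:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    print(ask_pdc("What's a fresh angle for a loyalty campaign aimed at Gen Z?"))
```

The “training” in a setup like this lives in two knobs: every time an answer misses, you tighten the style prompt or add the brief it should have drawn on, and ask again. That’s the reps.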
The Strategic Edge: AI as an Amplification Tool
For strategists, wielded well, AI isn’t a threat—it’s a superpower.
It can:
- Test campaign messaging across different cultural contexts – or even against another custom GPT built to mirror your specific audience
- Analyze emerging consumer behaviors in mass or niche modern tribes
- Provide alternative strategic approaches based on whatever data you might have on hand
Rather than replacing human insight, AI expands the strategist’s ability to process and apply complex information – you tell your PdC what to highlight, and it executes at the speed of light. As AI shifts from simple automation to deep augmentation, the strategist-AI relationship becomes one of collaboration, not competition. The key is in knowing what – and how – to ask.
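As a companion to the first bullet above, here’s a hedged sketch of what “testing a line against a mirror of your audience” can look like in code: each mirror is just a system prompt. The personas, prompts, and model name are illustrative assumptions, not a recipe from this piece or anyone’s production setup.

```python
# A rough sketch of pressure-testing one campaign line against several
# audience "mirrors" built as system prompts. The personas, model name, and
# prompts below are illustrative assumptions.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = {
    "Gen Z gamer, US": "You are a 19-year-old US gamer who is skeptical of brands.",
    "New parent, UK": "You are a sleep-deprived new parent in the UK on a tight budget.",
    "Retired engineer, DE": "You are a retired engineer in Germany who values precision.",
}


def test_message(message: str) -> dict[str, str]:
    """Ask each persona mirror to react to the same line and explain why it lands or misses."""
    reactions = {}
    for name, persona in PERSONAS.items():
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": persona},
                {
                    "role": "user",
                    "content": (
                        "React honestly to this campaign line and explain why it "
                        f"lands or misses for you: \"{message}\""
                    ),
                },
            ],
        )
        reactions[name] = resp.choices[0].message.content
    return reactions


for audience, reaction in test_message("Progress you can hold in your hand.").items():
    print(f"--- {audience} ---\n{reaction}\n")
```

The design choice worth noting: the message stays constant and only the system prompt changes, so any difference in the reactions is attributable to the audience framing, not the ask.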
The Future of AI is Personal
I love to think and talk about how we all have multiple possible futures. AI does too, multiplied by a gazillion. GPTs are already evolving from basic assistants to tailored extensions of individual expertise. If you think your laptop and smartphone are indispensable, just wait until you really experience the power of GPTs that can:
- Integrate seamlessly into professional workflows
- Enable you to produce thinking that’s faster, deeper, and more holistic
- Serve as intellectual partners that sharpen – and quicken – decision-making
Resisting AI might not be futile – but, IMHO, it’s a career-limiting maneuver. Strategists who learn how to use AI as a tool to spark curiosity and light their minds on fire will have a whole new kind of fun to explore at work.
I have. HitchBot and I would love to talk to you about it.
And if you don’t want to talk – but you want to read more:
A neuroscientist makes the case that AI can think: Washington Post Book review
“Summerfield agrees that chatbots are error-prone prediction engines that differ from humans in crucial ways. They lack feelings, friends, plans, a body. They are not sentient, nor should we fear them becoming so anytime soon, let alone turning against us and taking over the world. That said, he believes human brains are also more like LLMs than humanists might realize; that is, they are also error-prone prediction engines that draw imperfect inferences from messy experiential data.”
Anthropology and Algorithms | Society for Cultural Anthropology: AI systems are social artifacts – they emerge from particular cultural settings and, once deployed, can reinforce or disrupt social norms. In other words, there is reciprocal influence between culture and AI adoption.
Cognitive Anthropology: Examples, Explained | Vaia: cultural variations highlight a key difference between human and AI cognition: human cognition is embodied and enculturated.
Anthropomorphization of AI: Opportunities and Risks | Montreal AI Ethics Institute: studies of large language models (LLMs) show that giving an AI a name, avatar, or persona increases the tendency for users to treat it like a social agent, which in turn magnifies the AI’s influence on opinions and behavior.
Anthropomorphising AI Is an Impediment to a Stable Society - Microsoft Research
The Yale Law Journal - Forum: The Ethics and Challenges of Legal Personhood for AI
Artificial Intelligence versus Human Intelligence: Which Excels Where and What Will Never Be Matched
The rise of Second Brain AI Chatbots: meet your Digital Twin - Hello I
Because friends tell me I use a lot of big words:
Glossary of Key Terms
Anthropomorphism: The attribution of human traits, emotions, or intentions to non-human entities, such as AI systems.
Cognitive Extension: The idea that AI or other external tools can function as an extension of human thinking, helping with memory, decision-making, or problem-solving.
Distributed Cognition: A theory from cognitive anthropology that suggests thinking is not confined to a single mind but is spread across people, tools, and the environment.
Retrieval-Augmented Generation (RAG): A technique in AI where the system pulls information from external sources, such as databases or documents, before generating responses, which helps produce more accurate and context-aware results.
Second Self AI: A personalized AI model that mimics an individual’s knowledge, reasoning patterns, and communication style to serve as a digital thinking partner.
Strategic Blind Spots: Gaps in thinking or analysis that can lead to overlooked risks or missed opportunities in decision-making.
Automation vs. Augmentation: Automation refers to AI performing tasks without human input, while augmentation refers to AI enhancing human capabilities by working alongside them.