"I'm here today to talk about thinking for yourself. And I must admit, I did use AI to help me think about it." That opening line from AI and design researcher Advait Sarkar lands because it's true for a lot of us: we're using AI constantly, including when we're trying to decide how we feel about AI.
The real question is how we use it. If AI is only an assistant that finishes your sentences and spits out answers, it can quietly change what your work feels like, and what your brain practices each day. Sarkar's idea is different: AI as a tool for thought, something that pushes your reasoning forward instead of replacing it.
The "outsourced reason" workday is already here
Sarkar paints a day-in-the-life that feels only slightly exaggerated because the steps are so familiar. You open your inbox and the first move is, "Summarize this." Then you hit a tricky email and think, "Write a response." Next comes the report, the deck, the analysis, the prototype. Each time, the easy option is the same: let the system do the heavy lifting.
This isn't framed as laziness. It's framed as normal workflow gravity. When the tool is right there, and it works, you start using it for everything, including the parts of the job that used to force you to slow down and form a view.
From email triage to "vibe coding," AI can take over the whole chain
The pattern is simple: you stop engaging with the raw materials of your work. Instead of reading, drafting, analyzing, and building, you manage prompts and approve outputs. Sarkar's sequence is a checklist many knowledge workers now recognize:
- Summarize emails and threads so you don't read them end to end.
- Generate replies when you're unsure how to respond.
- Start reports with an AI draft to avoid the blank page.
- Hand off analysis and trust the results because they look plausible.
- Auto-generate slides, then tweak the wording.
- Prototype by "vibe coding" something you didn't really design.
On paper, that looks like productivity. In practice, it can become something else: a steady habit of skipping the exact moments where judgment forms.
When your job becomes validating a robot's opinions
Sarkar describes a new kind of writer's block: it's not staring at an empty page, it's staring at a page the AI filled and wondering whether you agree.
"I've become a professional validator of a robot's opinions."
That line stings because it's accurate. If AI proposes the framing, the structure, and the claims, your role can shrink to "sounds good" or "sounds off," without always being able to say why.
Sarkar calls this the age of outsourced reason, where knowledge workers become "intellectual tourists" in their own work. You visit ideas, but you don't inhabit them. Your relationship to the work becomes mediated by AI, which can feel like distance, even when the output is polished.
What AI assistants can do to your thinking
The main warning isn't "AI is bad." It's that AI assistant workflows can change the way people think, and not always in ways they notice. Sarkar walks through four areas that take the hit: creativity, critical thinking, memory, and metacognition (thinking about your thinking).
Creativity can collapse into a boring hive mind
Individually, AI can feel like a creativity boost because you get ideas fast. The catch is what happens across a team or a whole organization.
Sarkar points to research suggesting that groups using AI assistants can produce a smaller range of ideas than groups working manually. The result is a kind of sameness: fewer weird options, fewer risky angles, fewer original frames.
He jokes that we've built a hive mind, except the hive is really boring and keeps suggesting the same five ideas.
If you've ever watched a brainstorm get pulled toward the first "reasonable" list of options, you've seen the mechanism. AI can make that pull stronger because it's confident, fast, and stylistically convincing.
For a plain-language breakdown of this effect in creative work, see Wharton's summary of research on AI and idea variety.
Critical thinking drops when confidence shifts from you to the tool
Sarkar describes survey findings where knowledge workers reported putting less effort into critical thinking when they used AI than when they worked manually. The effect was stronger when people trusted the AI more and trusted themselves less.
That's an uncomfortable pairing because it's not only about tool quality. It's about self-belief. If the system sounds authoritative, and you're tired or rushed, it's easy to slide from "assist me" to "decide for me."
If you want the research source he's pointing at, Microsoft Research published a survey-based paper on generative AI and critical thinking effort.
Memory and metacognition are the quiet losses
Memory is straightforward: when people rely on AI to write, they remember less of what they wrote. When they read AI-generated summaries, they tend to remember less than if they read the full document.
Metacognition is trickier, but it matters. In real work, "thinking" includes setting goals, breaking a task down, deciding what matters, and checking whether your output makes sense. Sarkar argues that AI tools can make this harder because they insert a layer between you and the material.
In his framing, you become a middle manager for your own thoughts.
One way to see the memory side of this, in research terms, is the paper titled "The AI Memory Gap" (preprint), which studies how people misremember what they created with AI versus without it.
To make the trade-offs easier to scan, here's a simple comparison of the two modes Sarkar contrasts:
| What you're using AI for | "Assistant" workflow (obeys) | "Tool for thought" workflow (challenges) |
|---|---|---|
| Starting a draft | Generates a full first pass from a prompt | Builds from your outline, notes, and decisions |
| Reading inputs | Summarizes so you can skip the source | Uses lenses and prompts so you read strategically |
| Improving writing | Autocompletes and smooths over weak claims | Raises counterarguments and points out gaps |
| Your role | Validator and editor | Author and decision-maker, with support |
The point isn't to never automate. It's to stop treating speed as the only goal.
Why "mundane" tasks protect your cognitive fitness
A lot of the danger sits in the boring parts of work. Those small tasks are where you repeatedly practice creativity, skepticism, and recall. When AI does those reps for you, you lose everyday training time.
Sarkar makes the case that these daily opportunities protect your "cognitive musculature." They help you rise to the occasion when you finally hit a hard problem that can't be solved with a generic answer.
The scary part is that the losses don't wait for high-stakes moments. If your default workday becomes summarize, draft, reply, analyze, deck, prototype, then approve, you may still ship. Yet your mind gets fewer chances to wrestle with uncertainty.
The "cure for exercise" problem
Sarkar's analogy is blunt: it's like inventing a cure for exercise, then wondering why you're out of breath all the time.
Thinking wasn't the problem. The strain of thinking is part of how you keep the ability. When you stop practicing, performance can drop, even if the output still looks good because the machine props it up.
That's also why "I'll just use AI for trivial stuff" can backfire. The trivial stuff is often the stuff that keeps you sharp.
AI as a tool for thought means it shouldn't just obey
Sarkar's alternative is a design stance: AI should challenge, not obey.
That doesn't mean arguing with you for fun. It means the system helps you understand the work, not only finish it. It helps you ask better questions, not only generate answers. It helps you explore the unknown, not only automate the known.
He describes this moment as a critical junction for knowledge work. Generative AI is already reshaping how people read, write, and decide. If the tools keep pushing toward "hands off, done for you," then outsourced reason becomes the default.
A "tool for thought" tries to bend the opposite direction.
For Sarkar's longer explanation from Microsoft Research, see "From assistant to tool for thought" on the Microsoft Research blog.
Inside the Microsoft Research prototype: Clara's proposal
*A knowledge worker reads, annotates, and outlines with AI support that stays in the background (image created with AI).*

To make the idea concrete, Sarkar shows a research prototype built with colleagues on Microsoft Research's Tools for Thought team in Cambridge (a live prototype, not a product). The demo uses a fictional scenario.
Clara works at a company that sells bottled beverages. After a meeting about an industry report on consumer preferences for sustainable packaging, her colleagues ask her to write a proposal on how the company should respond.
In other words, she can't just ship text. She has to understand the report, connect it to business context, and make a case.
"Lenses" turn summaries into task-focused views
Clara loads several documents into a workspace: the meeting transcript, an internal report from her business, and the industry report.
Instead of a single generic summary, she sees section-by-section summaries that Sarkar calls "lenses." The key idea is that these are customizable micro-representations of the text, tuned to what matters for the task.
In the demo, Clara picks a "consumer" lens. Then she chooses a section to read more deeply.
This matters because it changes the bargain. The system isn't saying, "Don't read." It's helping her decide what to read, and how to read it with intention.
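Sarkar doesn't publish the prototype's internals, but the lens idea can be illustrated with a toy sketch: each lens is a named, task-focused filter that turns a document section into a micro-representation. Everything below (the `Lens` class, the keyword heuristic, the sample text) is a hypothetical stand-in; the research system presumably uses a language model rather than keyword matching.

```python
from dataclasses import dataclass

@dataclass
class Lens:
    """A toy 'lens': a task-focused view over a document section."""
    name: str
    keywords: frozenset[str]

    def apply(self, section: str) -> list[str]:
        # Keep only the sentences relevant to this lens's task.
        sentences = [s.strip() for s in section.split(".") if s.strip()]
        return [s for s in sentences if any(k in s.lower() for k in self.keywords)]

# A "consumer" lens, echoing the one Clara picks in the demo.
consumer_lens = Lens("consumer", frozenset({"consumer", "preference", "shopper"}))

report_section = (
    "Consumer preference for sustainable packaging rose 12 percent. "
    "Resin prices fell in Q3. "
    "Shoppers under 35 drove most of the shift."
)

view = consumer_lens.apply(report_section)
# The lens keeps the two consumer-relevant sentences and drops the pricing one.
```

The point of the sketch is the shape of the interaction, not the filtering logic: the reader still reads real sentences from the source, just fewer of them, chosen for the task at hand.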
Provocations add pushback, not autopilot
As Clara reads, she highlights excerpts and makes notes. Alongside that, the system generates commentary and critique called "provocations."
A provocation might surface an opportunity, raise a risk, or challenge an assumption. Clara can highlight it, annotate it, or ignore it.
That "ignore it" part is not a failure case. Sarkar argues that provocations aren't meant to be applicable all the time. They're meant to stimulate thought. If you understand the work deeply enough to confidently reject a suggestion, then the feedback still did its job because it forced a check.
This is a different relationship than typical AI suggestions, where the tool tries to be right as often as possible. Here, the tool tries to keep you mentally present.
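The talk doesn't say how provocations are generated, but their contract is clear: attach optional pushback to what the user is reading, and let the user dismiss it without penalty. A toy sketch of that contract, with hypothetical template rules standing in for whatever model the prototype actually uses:

```python
# Toy provocation engine: pattern-triggered pushback the user may ignore.
# The trigger words and messages here are invented for illustration.
RULES = {
    "increase": "Opportunity or risk? Ask what happens if the trend reverses.",
    "everyone": "Challenge: is this claim really universal, or just typical?",
    "obviously": "Assumption check: what evidence backs the 'obvious' part?",
}

def provoke(excerpt: str) -> list[str]:
    """Return zero or more provocations; an empty list is a valid outcome."""
    text = excerpt.lower()
    return [msg for trigger, msg in RULES.items() if trigger in text]

notes = provoke("Obviously everyone wants sustainable packaging.")
# Two triggers fire; the reader can annotate either provocation, or ignore both.
```

Note the design choice the sketch preserves: `provoke` never edits the excerpt or proposes replacement text. It only raises questions, which is what keeps the user in the author's seat.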
Drafting without a chat box changes the whole feel
Clara builds an outline of her argument manually in a side pane. It's lightly structured, but it stays connected to the sources she read and highlighted. Because the outline is grounded in her notes and choices, the system can generate a proposal draft from it.
She can do simple, practical things like add a heading to generate a paragraph. She can also resize a paragraph to change its length, or preview different versions of the text along a dimension like tone (more inspirational versus more practical).
At strategic points, she writes directly. While she writes, provocations appear that don't complete her thoughts. Instead, they raise alternatives, point out fallacies, and offer counterarguments.
One detail Sarkar calls out is what you don't see: there's no chat box. Clara isn't "talking to" a pretend person. The computer supports the work quietly, in context, and without turning the whole job into prompt engineering.
The design principles behind "tools for thought"
Sarkar argues that with the right design, AI can bring back what assistant-style workflows often remove. He points to early results that suggest you can reintroduce critical thinking, reverse the loss of creativity, and build memory support that helps people read and write faster while remembering more.
A related research thread on the metacognition side appears in papers like "AI makes you smarter but none the wiser" (Computers in Human Behavior), which studies gaps between performance and metacognitive awareness.
Under the hood, Sarkar names a few simple design principles:
- Preserve material engagement: keep people in contact with source text, data, and real decisions.
- Offer productive resistance: add the right kind of friction so users pause, check, and reflect.
- Scaffold metacognition: prompt people to ask, "What's my goal, what's my evidence, what am I missing?"
He also stresses a value claim that's easy to miss: efficiency isn't the aim of tools for thought. Better thinking is, although sometimes you get both.
Why protecting human thought is a values choice, not nostalgia
At the end, Sarkar widens the lens. If AI gets so good that it can "think better" than humans, why care about protecting human thought at all?
He gives two answers. First, there may be forms of thinking that remain uniquely human strengths, including strengths we haven't named yet. Second, and more importantly, the ability to think well is tied to human agency, empowerment, and flourishing.
He connects today's concerns to older ones: people once asked if it mattered that we didn't memorize as much once books could remember for us. They asked if it mattered that we couldn't navigate as well once maps could guide us.
Now the questions get sharper: if machines can think for us, speak for us, grieve for us, pray for us, love for us, does it matter if we cannot?
What would you rather have, a tool that thinks for you or a tool that makes you think?
If you want more from Sarkar on this theme, TED collects his talks on its "more from Advait Sarkar" page.
Conclusion
If AI turns your workday into approving machine output, you may still move fast, but you'll practice less of what makes you good at the job. Sarkar's "tool for thought" idea flips the goal: use AI to keep you engaged, curious, and skeptical, even when the task feels routine. The best future here isn't AI that replaces thinking, it's AI that helps you do better thinking on purpose. When you open your next doc or inbox, it's worth asking which tool you're choosing.


