As We May Yet Think
Artificial intelligence as thought partner
In July 1945, The Atlantic magazine published a remarkable essay by Dr. Vannevar Bush entitled "As We May Think" (Bush, 1945). In this essay, Bush envisioned ways in which then-current and emerging technologies could help improve human thinking. Among other things, he foresaw the essential features of the World Wide Web and of Wikipedia. Cited more than 10,000 times according to Google Scholar, the essay was in many ways a harbinger of today's information society.
As visionary and influential as Bush's essay was, however, it left one important issue explicitly unexamined. As we approach the 80th anniversary of the essay's publication, it is worth revisiting this omission and considering how it might be addressed today.
Bush identified three modes of thought—repetitive thought, creative thought, and mature thought—and he defined and discussed the first two in terms of how technology could make those types of thought more efficient. He specifically said, however, that "for mature thought there is no mechanical substitute." He did not define "mature thought," nor did he say why he believed technology could not support it. Eighty years and over 10,000 citations later, no one else seems to have addressed those two issues, either.
Artificial intelligence positions us to examine Bush's assertion that "for mature thought there is no mechanical substitute."
This article proposes a definition of mature thought and discusses how modern information technology—in the form of artificial intelligence—can not only support it, but do so in a novel way that differs significantly from earlier visions of using technology to simply improve efficiency.
Dr. Vannevar Bush
Dr. Bush held a number of distinguished positions in his career, most notably Director of the Office of Scientific Research and Development during World War II, where he had a bird's-eye view of wartime scientific research, including the development of the atomic bomb, an effort he was instrumental in launching (Goldberg, 1992). His essay reflects some of his views on the growth of knowledge and how we deal with it.
He believed that humans had been adding to the collective body of knowledge for centuries, but that our methods of storing and accessing that knowledge had not evolved accordingly. World War II had led to a tremendous increase in information and improved processes for generating it. However, we were unable to take full advantage of this increase because of our old-fashioned ways of managing it.
Bush identified two major problems with the situation at the time: first, capturing and storing information; second, retrieving it.
For the first problem, Bush envisioned improvements in cameras, voice recording, and other technologies that would allow scientists to quickly and easily record their activities, results, and conclusions. For example, he envisioned a small camera strapped to the researcher's forehead that would photograph documents and experiments as the wearer worked. These recordings would be stored on rolls of microfilm.
The second problem, information retrieval, was a bit more difficult. At the time, information was organized using hierarchical indexing schemes. Bush believed that the human mind works by association, and he thought that a different way of organizing information was needed to take advantage of this. This, in turn, would allow people to find and retrieve stored information more efficiently.
Bush envisioned users being able to sift through the growing mountain of information on microfilm and to link related items, allowing them and others to traverse multiple documents while following the thread of related ideas. In fact, he foresaw a new profession of people who would, in his words, "delight in the task of establishing useful trails through the enormous mass of the common record" for others to follow.
Finally, Bush envisioned a hypothetical electromechanical device he called the “memex” for individual use. Physically, the memex would be like a desk into which people could insert the microfilms and personal data and then "browse" the web of connections between different subjects. He described it as "an enlarged, intimate supplement" to memory.
Bush's vision was remarkable for its time, and given the impact it has had, it is worth comparing the world he imagined with the one that has emerged.
Bush’s vision has been realized and surpassed in ways he could never have imagined.
While some of Bush's ideas seem quaint in retrospect, the fact is that technology now makes it possible to capture and store information quickly and easily. Anyone with a smartphone can capture voice, video, and photos and quickly upload them to platforms such as YouTube and Instagram. And major technology companies are developing eyeglasses with embedded cameras to make capture even easier.
Cloud technologies make it possible to store vast amounts of different types of documents, such as text documents, spreadsheets, and presentations.
We didn't see the emergence of a professional class who "delight in the task of establishing useful trails through the enormous mass of the common record." Instead, we saw the development of things like relational databases, digital metadata, hyperlinking technologies, and search engines such as Google. All of these have made associative search possible and commonplace. Voice technologies like Siri are making it even easier.
And the memex? Personal computers with trackpads and mice, and tablets and smartphones with touchscreens, all capture the essence of the memex.
This combination of modern hardware, software, and networking technologies has allowed much of Bush's vision to be realized in ways he could not possibly have foreseen.
But to what end? Bush didn't just see these technologies as ends in themselves—he had a specific purpose in mind. He saw them as ways to improve certain aspects of the way we think. To explore this further, we will examine Bush’s three modes of thought more closely.
Three Modes of Thought
"Repetitive thought" is exactly what it sounds like: rote activities such as arithmetic operations or algorithmic processes. The use of technology to speed up repetitive operations was already well underway when Bush wrote his essay, and aside from some speculation about how it might eventually help in mathematics, he did not say much more about it.
"Creative thought," on the other hand, is more complicated. In Bush’s worldview, creative thought involves selecting, and he viewed the selection process in two different ways. One was simple selection, which could be accomplished with predefined selection criteria programmed into things like punch cards. The second was related to the associative processes discussed above. And that was arguably the crux of his paper. The technologies he envisioned—the microfilms, the links, the memex—would allow users to browse a range of topics, follow associative paths, make connections between related topics quickly and easily, and thereby accelerate the selection process, or creative thought.
Bush's third category of thought was what he called "mature thought." While he believed that the technologies he envisioned could improve repetitive and creative thought, he explicitly said that "for mature thought there is no mechanical substitute." But what did he mean by that?
Again, nowhere in the essay does Bush define mature thought, and despite over 10,000 citations, no one else seems to have addressed what he meant by it, either.
In the context of his life and times, it's hard to say what Bush meant when he referred to mature thought. He could have been talking about ethical reasoning or moral reflection. He could have been talking about strategic thinking. Or, he could have had something else in mind, like artistic thinking.
In the absence of any clear indications of Bush’s intended meaning, this article will examine a specific aspect of thinking that could be considered mature thought: decision-making. This aspect of thinking has received a high degree of attention and discussion in recent decades, particularly in the field of behavioral economics.
Bush left “mature thought” undefined; “decision-making” is one form it might take.
Decision-making relies on rational thinking to achieve desired outcomes. It has been studied extensively to identify the challenges involved and the steps we can take to mitigate them. A cottage industry of books, courses, checklists, TED talks, and other products has grown up around the topic.
When we talk about making decisions, we mean choosing among alternatives in a way that supports our goals, values, or desired outcomes. This process typically involves considering the possible outcomes, weighing the pros and cons, and choosing the course of action that best aligns with our desired end state. Examples of everyday decision-making:
Should I accept this job offer or wait for a better opportunity?
Which college or university should we choose for our teenage child?
Should I invest in a rental property or the stock market?
It's worth noting that in considering repetitive thought and creative thought, Bush believed that technology could help make these processes more efficient. In other words, he believed that technology could speed up the processes so that we could reach our goals more quickly than we could without the technical assistance.
But when we consider decision-making—a process we have associated with mature thought—effectiveness is generally a more desirable outcome than efficiency. By effectiveness, we mean getting the right result, not just getting a result more quickly. We don't just want to make decisions; we want to make good decisions. We want to make the best possible decisions. Efficiency may be important in decision-making, but it is usually less important than effectiveness, or getting the best results.
Before we look at how current technology (in the form of artificial intelligence) can help us make better decisions, we ask: What are the barriers to making good decisions? And to answer that, we'll look at cognitive biases and the role they play in undermining good decision-making.
Cognitive Biases
Cognitive biases are unconscious and systematic errors in thinking that occur when people process and interpret information in their environment. These biases are believed to result from mental shortcuts our minds take when confronted with complex information or uncertain situations, and they can distort an individual's perception of reality, resulting in inaccurate interpretation and processing of information (Haselton, Nettle, & Andrews, 2005).
These biases have been likened to optical illusions, like the Müller-Lyer illusion, below. In this illusion, the three horizontal lines at the top of the image appear to be of different lengths, even though the measurements at the bottom show that they are all the same. But even knowing they are all the same length, we continue to see them as different.
Similarly, when processing and interpreting information, cognitive biases can trick our brains into distorting situations and then drawing flawed conclusions. And just as knowing about the Müller-Lyer illusion doesn’t stop us from seeing the lines incorrectly, simply being aware of cognitive biases doesn’t make us immune to them—our minds continue to unconsciously filter information in ways that feel accurate but often result in systematic errors in judgment.
Cognitive biases should not be confused with logical fallacies. Logical fallacies are errors in reasoning that people can be trained to recognize and avoid. Cognitive biases, on the other hand, involve the way our brains actually work. They are much harder to spot and fix.
One common cognitive bias is the anchoring bias. This is when people rely too much on the first piece of information encountered (the “anchor”) when making a decision. Even after additional information becomes available, this anchor can overly influence the final decision.
For example, imagine you are shopping for a birthday present for a friend. You find a gift that you know your friend would really like, but it costs $100, which is significantly more than your $40 budget. As you continue shopping, you find another gift you think they would like that costs only $60. This is still more than you budgeted for, but it is much less expensive than the $100 gift (the “anchor”), and you might justify the purchase on that basis.
The anchoring bias has been shown to be a powerful force in our thought processes, and it is commonly used in sales practices by companies across the spectrum of products and services.
Cognitive biases don’t just distort our thinking processes—they do so in ways that trick us into believing our flawed conclusions.
Another common cognitive bias is confirmation bias. This is when people look for, understand, and remember information in a way that confirms what they already believe. This can lead to distorted thinking and decision-making, because people may ignore or undervalue evidence that contradicts their views.
For example, police may identify a suspect early in an investigation and then seek only evidence that confirms rather than refutes their suspicions. Or a physician may make a diagnosis early in an examination of an illness, and then only look for things that confirm their initial hunch.
One historic example of the dangers of unchecked cognitive biases is the 1986 Space Shuttle Challenger disaster, in which the shuttle broke apart 73 seconds after liftoff, killing all seven crew members onboard, even though several engineers had warned that the previous night's freezing temperatures could have compromised the O-ring seals and made the launch unsafe. A 2015 study in the journal Safety examined the Challenger incident as one of five case studies illustrating how cognitive biases influenced decision-making and contributed to disaster (Murata, Nakamura, & Karwowski, 2015).
Another example is the disastrous 1961 Bay of Pigs invasion, in which the United States attempted to overthrow the communist government of Cuba with a poorly trained and poorly equipped army of Cuban dissidents. A 2017 paper from the Simons Center used the Bay of Pigs invasion to illustrate how confirmation bias, sunk cost bias, and other cognitive biases led to a flawed decision-making process that had significant negative consequences, including the deaths of 176 American-backed Cubans, the capture of over 1,200, and a tremendous loss of international prestige for the United States (Thomas & Rielly, 2017).
There is no single, definitive list of all known cognitive biases, but there are dozens that are often mentioned. Some of the most common ones are motivated reasoning, availability bias, hindsight bias, the framing effect, the endowment effect, and the Dunning-Kruger effect.
Understanding and addressing cognitive biases is critical to improving individual and organizational decision-making, not just in life-threatening decisions, but in all categories. To better understand them, we turn to the psychologists who are often credited with having first identified them.
Daniel Kahneman and Amos Tversky
Cognitive biases were first systematically documented by the late Israeli-American psychologists Dr. Daniel Kahneman and Dr. Amos Tversky, most famously in their 1979 paper, Prospect Theory: An Analysis of Decision under Risk (Kahneman & Tversky, 1979). Along with American psychologist Dr. Paul Slovic, they later explored these biases in a series of papers, culminating in their 1982 book, Judgment Under Uncertainty: Heuristics and Biases, which documented how people systematically deviate from rational thinking (Kahneman, Slovic, & Tversky, 1982).
Rational thinking is generally considered to be thinking that is guided by reason. Although there are several forms of rationality discussed in the academic literature, one common definition is that a decision is rational if it maximizes expected utility. In other words, a decision is rational if its expected utility (the sum of each possible outcome's utility, weighted by that outcome's probability) is the highest among all available choices. This definition traces back to a 1738 paper by Daniel Bernoulli called "Exposition of a New Theory on the Measurement of Risk," which was published in the Petersburg Academy Proceedings (Bernoulli, 1738/1954).
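The expected-utility definition can be made concrete in a few lines of code. The numbers below are purely illustrative; in practice, the utilities and probabilities would come from the decision-maker's own estimates.

```python
# Expected utility: the sum of each outcome's utility weighted by its
# probability. Illustrative numbers only; for simplicity, dollar amounts
# are used directly as utilities.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one choice."""
    return sum(p * u for p, u in outcomes)

# A sure $450 versus a 50/50 gamble between $1,000 and $0:
sure_thing = expected_utility([(1.0, 450)])             # 450.0
gamble = expected_utility([(0.5, 1000), (0.5, 0)])      # 500.0

# Under this definition, the gamble is the "rational" choice, since 500 > 450.
best = max([("sure thing", sure_thing), ("gamble", gamble)], key=lambda t: t[1])
print(best[0])  # gamble
```

As Kahneman and Tversky showed, real people frequently choose the sure thing in situations like this, which is exactly the deviation from rationality discussed next.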
Kahneman and Tversky's seminal finding was that people are not always rational when making decisions. Specifically, the fear of loss is often stronger than the desire for gain. In other words, for many people, the pain of the prospect of losing a certain amount of money outweighs the pleasure of the prospect of gaining the same amount (Kahneman & Tversky, 1979).
This finding suggests that people value gains and losses asymmetrically, which goes against classic utility theory. According to the traditional economic model, people should treat gains and losses the same because they value outcomes based on expected utility. Kahneman and Tversky's discovery that people treat gains and losses asymmetrically was considered a significant deviation from this rational model. The cognitive bias associated with this deviation is called "loss aversion."
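The asymmetry can be illustrated with the value function from Kahneman and Tversky's later (1992) cumulative prospect theory, using their median estimated parameters. The sketch below is only an illustration of the function's shape, not a reproduction of anything in the 1979 paper.

```python
# Prospect-theory value function. Parameters are the median estimates from
# Tversky & Kahneman's 1992 follow-up paper: curvature alpha ~ 0.88 and
# loss-aversion coefficient lambda ~ 2.25.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x, measured from a reference point."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# A $100 loss "hurts" more than twice as much as a $100 gain "pleases":
print(value(100))                      # ~57.5
print(value(-100))                     # ~-129.4
print(abs(value(-100)) / value(100))   # 2.25, the loss-aversion coefficient
```

The ratio of 2.25 is the quantitative face of loss aversion: losses loom a bit more than twice as large as equivalent gains.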
The discovery of cognitive biases showed that our decisions are often driven by imperceptible mental shortcuts rather than by reason.
Since then, more cognitive biases have been discovered, and more discussions and theories have emerged about why they exist. There is no single accepted authority on the number and nature of cognitive biases, but it is widely recognized that many exist, and that they interfere with the ability to make effective decisions.
Possible Solutions
Despite awareness of the potential impact of cognitive biases on decision-making, addressing them can be challenging, especially for individuals outside an organizational setting.
First, the sheer number of cognitive biases is overwhelming. With dozens of recognized cognitive biases, it is difficult to imagine anyone constantly reviewing them all to identify the potential flaws in their thought processes.
Second, and most importantly, the very nature of cognitive biases makes it difficult to recognize them in ourselves and to deal with them effectively.
In their 2021 book, Noise: A Flaw in Human Judgment, Kahneman and co-authors Dr. Olivier Sibony and legal scholar Cass R. Sunstein discuss two techniques for "debiasing" our thinking: ex post (corrective) debiasing and ex ante (preventive) debiasing (Kahneman, Sibony, & Sunstein, 2021). They conclude that while both techniques have some value, they are insufficient in situations where the direction of bias is unknown or where multiple biases might be at play. Instead, they propose a real-time, more adaptive approach as a potentially more effective solution.
The real challenge isn’t just identifying cognitive biases—it’s doing something about them in real time, before they lead us astray.
In an organizational setting, they propose a method for achieving real-time de-biasing with the help of what they call "decision observers," external agents who can view thought processes somewhat objectively and better identify biases. They discuss three possible candidates for decision observers:
A supervisor who pays attention to a team's processes and is alert to biases that may arise and affect the outcome.
An employee, a designated "bias buster," who monitors his or her team's processes for bias.
An outside facilitator who can observe the team in action from a relatively neutral position.
While this approach may work for organizations, it has numerous limitations, including the authors' own acknowledgment of the difficulty of implementing these methods. Individuals dealing with everyday life decisions are unlikely to have access to these kinds of resources.
For individuals, former professional poker player Annie Duke suggests forming "decision pods," which are small groups of trusted friends, family, and/or acquaintances who agree to work together under a charter that specifies the norms the group will follow in reviewing and critiquing each other's decision-making processes (Duke, 2018).
In Duke's case, her decision pod consisted of fellow professional poker players who reviewed and critiqued each other's decisions in the poker games they played. While this approach seems feasible on the surface, the fact is that Duke's pod consisted entirely of members of a niche profession who make decisions in that niche for a living. As with Kahneman et al.'s decision observers, it is questionable whether this is a viable approach for individuals making decisions in everyday life.
With both decision observers and decision pods, the essential ingredient is one or more external, trusted partners who can provide objective and dispassionate critique in a setting where such critique is acceptable and normative. While this concept holds promise—especially in structured or professional settings—it seems less practical for individuals making everyday decisions, who often lack access to trained or willing partners and may face social or logistical barriers to forming such groups.
This is where emerging artificial intelligence applications could help. And here we replace the terms "decision observer" and “decision pod” with a term that implies a broader scope and more active involvement: Artificial Intelligence Decision Partner (AIDP).
Artificial Intelligence as Decision Partner
When discussing artificial intelligence, we often think of ChatGPT, Claude, Gemini, and several other applications that are built on top of large language models (LLMs), which operate using billions—sometimes trillions—of parameters to predict the next word in a sequence of words.
While LLM-based applications have demonstrated remarkable capabilities, they have also been criticized for a number of things, including a lack of internal representations of the world, difficulty with complex reasoning, and for "hallucinating" (fabricating) answers to questions. In their currently popular forms, these applications do not yet have the capabilities that an effective AIDP would need.
However, LLMs can be fine-tuned using specialized techniques such as reinforcement learning from human feedback (RLHF) (Shanahan, 2024). Fine-tuning can shape their responses, improve accuracy, and reduce the tendency to fabricate answers. A properly fine-tuned LLM-based application could potentially serve as the basis for an AIDP.
An example of such a specially trained LLM-based application is Meta's CICERO, which pairs a language model with a strategic reasoning engine and has shown remarkable capabilities—including strategic reasoning and negotiation skills—when playing the board game Diplomacy.

Diplomacy is a seven-player board game that Meta says can be compared to a combination of the board game Risk, the card game Poker, and the TV show Survivor. According to Meta, "It requires cooperation, negotiation, trust, and mutual support among players as they compete for territory. CICERO was able to reason and strategize about the players' motivations, then use natural language to communicate, reach agreements to achieve common goals, form alliances, and coordinate plans" (Meta, n.d.).
Computer scientist and Turing Award winner Yann LeCun stated, “CICERO plays the strategy game Diplomacy at human level. It is able to use language to build relationships with humans and collaborate with them to achieve a goal" (LeCun, 2022).
While an effective AIDP would not necessarily require the exact skills that CICERO demonstrates, CICERO points to potential in AI that could provide the basis for an effective AIDP.
We can imagine an AI application on par with CICERO that is equipped with:
A comprehensive catalog of recognized or widely accepted cognitive biases.
A structured checklist for evaluating thought processes, similar to Kahneman, Sibony, and Sunstein's "Bias Observation Checklist" (Appendix B in Noise).
The ability to reason and strategize with a human partner.
Neutrality in its interactions.
Emotional intelligence, such as not taking things personally when a human gets upset.
An AI that can negotiate, strategize, and build trust with humans in a game like Diplomacy could almost certainly be effective in helping us make decisions.
And we can imagine this AI application as a component of a larger, comprehensive system consisting of something like a smartphone with voice-to-text and text-to-voice functionalities, pervasive Internet access, and access to cloud storage.
Such an AI could function as an effective decision partner for a human user, available around the clock to collaborate on identifying and mitigating cognitive biases in their decision-making processes.
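As a rough sketch of the structure such a system might take, consider the loop below. Everything in it is hypothetical: the catalog entries, cue phrases, and questions are invented for illustration, and a real AIDP would rely on an LLM's language understanding rather than keyword matching. The point is the architecture: a bias catalog, a matcher, and a response policy that asks questions instead of giving advice.

```python
# Hypothetical AIDP core loop: match a user's description of a decision
# against a small catalog of cognitive biases and respond with Socratic
# questions rather than recommendations. Keyword matching stands in for
# what would really be an LLM.

BIAS_CATALOG = {
    "anchoring bias": {
        "cues": ["first price", "first offer", "sticker price"],
        "question": ("Was that figure the first one you encountered, and might "
                     "it be anchoring how the other options look to you?"),
    },
    "confirmation bias": {
        "cues": ["just confirms", "proves my point", "i already knew"],
        "question": ("What evidence would change your mind, and have you "
                     "actively looked for it?"),
    },
}

def aidp_respond(user_statement: str) -> str:
    """Return a probing question; by design, never a recommendation."""
    text = user_statement.lower()
    for bias, info in BIAS_CATALOG.items():
        if any(cue in text for cue in info["cues"]):
            return f"Possible {bias} at work. {info['question']}"
    # Default: gather more context, mirroring the dialog above.
    return "Can you tell me more about how you arrived at the options you're considering?"

print(aidp_respond("The first price the salesperson showed me was $70,000."))
```

Even in this toy form, the design choice is visible: the system's only outputs are questions, which keeps responsibility for the decision with the user.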
How Might This Work?
A user would create an account and a profile, and would be able to select certain characteristics of their AIDP, such as voice, name, personality traits, and so on. And the user could also set norms for the interactions.
The AIDP, for its part, would learn key initial qualities about the user through a preliminary interview. Recent research at Stanford, Northwestern University, the University of Washington, and Google DeepMind has demonstrated that with just a two-hour interview, “simulation agents” can acquire sufficient information about an individual to replicate that person’s responses to personality tests, social surveys, and logic games, achieving up to 85% similarity (Park et al., 2024).
A similar type of simulation agent could facilitate the AIDP's rapid familiarization with the user and establish an initial understanding of relevant aspects of the user's personality. Over time, subsequent interactions would then refine and expand this understanding.
The user might initiate a new session by indicating their intention to reach a decision. The AIDP would then pose standard inquiries to gather background and context for the issue. For instance, the user might state, "I’m trying to decide whether to buy a new car."
To which the AIDP might respond, "Can you provide some context?"
User: "I’ve been driving the same car for almost seven years now, and I think it’s time for an upgrade. I’ve been looking at a few models, but I’m not sure which one to choose."
AIDP: "What models are you considering?"
User: "I’m mostly looking at mid-range SUVs, but there’s a luxury model that caught my eye. It’s priced at $70,000, which is a lot more than I had initially planned to spend."
AIDP: "How did you arrive at that $70,000 model as a potential option?"
User: "Well, I visited a dealership, and after I told the salesperson what I had in mind, he showed it to me. It’s got all the features I want and then some, but it’s way beyond my budget."
AIDP: "Was the luxury model the first car the salesperson showed you?"
User: "Yes, it was."
AIDP: "Have you looked at other cars that are more in line with your budget?"
User: "Yes, I have. But after seeing the luxury model, the other cars seem less appealing, even though they’re much less expensive and fit my budget."
AIDP: "Are you familiar with the anchoring bias?"
User: "Not really, no. What is it?"
AIDP: "The anchoring bias refers to the tendency to rely heavily on the first piece of information presented when making decisions. In this case, the $70,000 luxury model was the first model you saw, and it may have set a mental anchor, making other, less expensive cars seem less attractive in comparison, even if they fully meet your needs and are a better fit for your budget."
User: "So the luxury car is making the others seem unsatisfactory, even though they’re more affordable and meet my criteria?"
AIDP: "I cannot make that determination, but it seems possible. The anchoring bias is considered very powerful in the literature, and sales organizations across the spectrum are known to use it."
User: “Is there some way to deal with this bias so that it’s not affecting my judgment and ultimate decision?”
AIDP: "Consider this: If a colleague at work were to present you with the same situation, what might you suggest to them to help them shake the anchor and see the situation from a new perspective?"
User: "OK. I can see how that might work. Anything else?"
AIDP: "There have been a number of articles published by professional organizations that explain the anchoring bias and offer ways to mitigate it. One often-cited way is to take a break from the decision, then create your own anchor by writing down your decision criteria, starting with a specific budget range."
User: "Can you go through some of those articles and give me a list of several other ways that are suggested?"
AIDP: "Yes, I can do that. But rather than me deciding which articles to use, why don't we start with me giving you a list of the 10 most cited articles I can find, the names of the organizations that published them, and the names of the authors. Then you tell me which ones you'd like me to use."
User: "OK. That will work."
The dialog would continue until the user was satisfied that they had thoroughly examined and accounted for possible cognitive biases in that particular decision-making process, and that their ultimate decision was as unbiased as possible.
An AI decision partner would not provide advice—rather, through skillful questioning it would allow the user to identify and mitigate the cognitive biases in their decision making.
In general, the AIDP would diligently avoid taking responsibility for the decision-making, refraining as much as possible from offering advice, opinions, or recommendations to the user. Instead—and this is a critical point—its primary mode of interaction would be to ask questions of the user, combining the Socratic method with person-centered therapy techniques pioneered by psychologist Carl Rogers (Yao & Kabir, 2023). The goal would be to create conditions in which the user recognizes their own cognitive biases and takes responsibility for appropriate action. Of course, as in any reflective, growth-oriented process, the user would need to accept the process and actively engage in order to achieve the desired results. And one potential task of the AIDP would be to help the user remain engaged and satisfied in what can often be a demanding process.
Challenges
Before such an AIDP could be considered effectively functional, there are challenges that must be addressed. The first and foremost is trust: can a human user fully trust working with an AIDP?
The most fundamental component of trust is what is commonly referred to as information security. The U.S. Department of Commerce's National Institute of Standards and Technology defines this as "the protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to ensure confidentiality, integrity, and availability" (Nieles, Dempsey, & Pillitteri, 2017).
Users would need to have confidence in the security of their information and interactions with the AIDP before anything else.
Before we can trust an AI decision partner, we must be confident that it is secure, unbiased, and aligned with human interests.
A more subtle but equally important challenge would be to ensure that the AIDP itself is free of any biases that could distort its functionality. AI systems are only as unbiased as the data and algorithms on which they are built, and it would be essential to implement frameworks for continuously monitoring and updating the system in order to identify and mitigate any emerging biases.
A key element of the solution would be for the AIDP to primarily ask questions and to refrain from offering advice, opinions, or recommendations unless specifically requested.
The larger issue, however, of ensuring that an AIDP does not have its own biases is a serious one that needs to be specifically addressed. Fortunately, initiatives under the umbrella of Responsible AI and AI Safety are working on this (AI Safety, n.d.; Responsible AI, n.d.). These initiatives prioritize the development of AI that is transparent, explainable, and fair, thereby minimizing the risk of inadvertently introducing bias into AI processes.
A third challenge is alignment. The question here is whether a user could trust an AIDP to remain aligned with the user's interests and to act in accordance with those interests, rather than pursuing its own interests. This is an issue that has generated considerable debate in the AI community, and again there are initiatives trying to address it.
And there would certainly be other challenges. For example, if AIDPs were deployed in different geographic regions, it would be essential for them to comply with various legal and regulatory frameworks for AI.
Overcoming these challenges requires a multifaceted approach that incorporates ongoing technological improvements, ethical oversight, and compliance with evolving regulations. By proactively recognizing and addressing these issues, developers can enhance the safety, fairness, and reliability of AIDPs, thereby strengthening their position as trusted decision partners.
Artificial Intelligence as Thought Partner
Assuming the challenges described above are overcome, we can imagine such an AIDP taking on a role similar to that of a Rogerian therapist and helping users identify, work through, and mitigate the effects of cognitive biases in their decision-making.
But we can also imagine more.
We can envision roles and functions that elevate AI from a mere decision partner to a comprehensive Artificial Intelligence Thought Partner (AITP), enabling users to improve the outcomes of various cognitive processes and potentially even improve the cognitive processes themselves.
For example, an AITP could serve as a coach, helping users navigate through difficult or unique problems using frameworks like the U.S. Central Intelligence Agency's Phoenix Checklist (Campbell, 2014). This checklist encourages examining problems from multiple perspectives by prompting users with a set of structured questions designed to clarify ambiguities and identify the unknown unknowns.
Similarly, an AITP could function as a tutor and assist users in solving Fermi problems, which are estimations that require breaking down complex questions into manageable components (Adam, 1995). By working with users while they practice these problems, an AITP would help them develop stronger estimation skills, improving their ability to make informed decisions in the face of uncertainty.
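The decomposition at the heart of a Fermi problem can be sketched in a few lines of code. The classic example estimates the number of piano tuners in Chicago; every number below is a rough, illustrative assumption, not data, and the point is the method of breaking one hard question into several easier estimates.

```python
# Fermi estimation: roughly how many piano tuners work in Chicago?
# Each factor is an illustrative guess; the technique is the decomposition.

population = 3_000_000            # people in Chicago (rough)
people_per_household = 2          # average household size (assumed)
households_with_piano = 1 / 20    # fraction of households owning a piano (assumed)
tunings_per_piano_per_year = 1    # a piano is tuned about once a year (assumed)
tunings_per_tuner_per_year = 2 * 5 * 50  # 2 tunings/day, 5 days/week, 50 weeks

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # on the order of 100-200
```

Each individual guess may be off by a factor of two or more, but because the errors tend to cancel, the final estimate usually lands within an order of magnitude of the truth, which is exactly the skill an AITP would help a user practice.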
An AITP could also serve as a guide to help users apply Bayesian methods to update forecasts in real time. This is notoriously difficult to do in non-mathematical settings (Sirota, Vallée-Tourangeau, Vallée-Tourangeau, & Juanchich, 2015). The AITP would guide users in identifying the most relevant base rates, selecting appropriate statistical models, calculating likelihoods, and ultimately refining their probability assessments. The result would be the ability to distinguish easily between 60/40 and 40/60 odds, or even 55/45 and 45/55 odds, providing a solid foundation for data-driven decision-making in real or near-real time.
And any number of other frameworks for decision-making, complex problem solving, forecasting, and related higher-level thought processes could be incorporated into an AITP.
Throughout the interactive exchanges, the AITP would continuously learn from the user, adapting itself to the unique contours of their personality and refining its understanding of the user’s cognitive patterns, preferences, and tendencies so that it could function most effectively.
The user, in turn, would gain deeper insights into their own thought processes, becoming more comfortable with the AITP and more adept at using its capabilities. Over time, these interactions could change and improve the user's native thought patterns. Studies suggest that interacting with AI can refine and improve human decision making, as evidenced by the evolving strategies of Go players using AI-powered programs (Choi, Kang, Kim, & Kim, 2025). Similarly, sustained engagement with an AITP could improve the user's higher-level cognitive functions.
At the same time, the use of an AITP could help offset the atrophy of cognitive skills that results from over-reliance on AI for other things. A recent study by Microsoft suggests that while the use of generative AI can improve worker efficiency, it can also inhibit critical engagement with work and potentially lead to long-term overreliance on the tool and diminished skills for independent problem solving (Lee et al., 2025).
As a more general thought partner, AI could help improve our cognitive processes while mitigating the consequences of over-reliance on automation.
This dynamic relationship, rooted in mutual adaptation, would mature into a seamless and intuitive collaboration in which the AITP anticipates cognitive needs while challenging the user to think more critically. Over time, a true thought partnership would emerge, enhancing the user's native ability to deal with complex problems, produce sound data-backed forecasts, and reach effective decisions with increasing clarity and precision.
Vannevar Bush may have been right when he said that for mature thought there is no mechanical (or artificial) substitute. But a well-designed AITP could certainly enhance and improve the effectiveness of mature thought, however we define it.
Of Bicycles and Dragonflies
Vannevar Bush was not the only visionary who saw technology primarily as a way to increase efficiency in human thinking.
In 1990, Apple Computer co-founder Steve Jobs famously likened computers to "bicycles for the mind" (Jobs, 1990, 00:03:30). Jobs had previously learned that condors—among all animals and humans—expend the least amount of energy to travel one kilometer. In other words, condors were the most efficient animals in terms of locomotion.
However, when a human on a bicycle was included in the comparison, the human on the bicycle surpassed the condor, becoming the most efficient mover in the animal kingdom. Jobs's vision for computers was that they would do for the mind what bicycles could do for human locomotion: make it more efficient.
Jobs might have seen computers in a different light had he studied dragonflies.
Dragonflies are regarded as the most effective predators in the animal kingdom, with some studies suggesting that they catch their prey up to 97% of the time. Their unique vision, which encompasses nearly 360 degrees, coupled with their ability to calculate prey trajectories, and their wings—which can function independently to make almost instantaneous turns to intercept their prey—all contribute to their extremely high success rate in the hunt (Gage, 2018).
A well-designed AITP would have the capacity to develop and enhance human cognitive abilities, potentially making us as effective in complex cognitive processes as dragonflies are in capturing prey. In contrast to Jobs's analogy of a bicycle, which merely enhances physical abilities without changing the human, a well-designed AITP could fundamentally transform and improve human cognition.
Many of Bush's ideas in "As We May Think" undoubtedly seemed far-fetched at the time, and now—80 years later—may also appear quaint. But we should keep in mind that these ideas were not the primary goal of the essay; rather, they were simply the means to achieve the goal using the technologies of the era. The real goal was to use technology to improve our thinking by better enabling associative thought processes. And that goal was ultimately realized with the introduction of hyperlinks and related technologies. Implementation considerations aside, Bush’s underlying vision was sound and borne out by subsequent developments.
Eighty years later, AI can help us truly realize Vannevar Bush's vision for as we may yet think.
Similarly, many of the ideas presented here may seem far-fetched now and might appear quaint at some point in the future. But again, it is important to distinguish between the proposed implementation and the goal, which is to improve our thinking with the help of technology. With the advent of artificial intelligence, it is now possible to envision improvements in the actual thought processes themselves. We can envision artificial intelligence constructs that function not merely as tools to be utilized, but as collaborators in our cognitive processes—helping us truly realize Vannevar Bush's vision for as we may yet think.
References
Adam, J. (1995). Fermi problems: Educated guesses. Quantum: Journal of Mathematics and Science, 6(1), 20–24. http://static.nsta.org/pdfs/QuantumV6N1.pdf
AI Safety. (n.d.). AI Safety. Retrieved January 26, 2025, from https://www.safe.ai
Bernoulli, D. (1954). Exposition of a new theory on the measurement of risk (L. J. Savage, Trans.). Econometrica, 22(1), 23–36. (Original work published 1738). https://doi.org/10.2307/1909829
Bush, V. (1945, July). As we may think. The Atlantic. https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/
Campbell, J. (2014). The Phoenix Checklist: Turning complex problems into simple solutions. Konecky & Konecky.
Choi, S., Kang, H., Kim, N., & Kim, J. (2025). How does artificial intelligence improve human decision-making? Evidence from the AI-powered Go program. arXiv. https://doi.org/10.48550/arXiv.2310.08704
Duke, A. (2018). Thinking in bets: Making smarter decisions when you don’t have all the facts. Portfolio.
Gage, G. (2018, June). How a dragonfly's brain is designed to kill [Video]. TED. https://www.ted.com/talks/greg_gage_how_a_dragonfly_s_brain_is_designed_to_kill
Goldberg, S. (1992). Inventing a climate of opinion: Vannevar Bush and the decision to build the bomb. Isis, 83(3), 429–452. https://doi.org/10.1086/356203
Good Judgment. (n.d.). Good Judgment. Retrieved January 26, 2025, from https://goodjudgment.com/
Haselton, M. G., Nettle, D., & Andrews, P. W. (2005). The evolution of cognitive bias. In D. M. Buss (Ed.), The handbook of evolutionary psychology (pp. 724–746). John Wiley & Sons.
Jobs, S. (1990). Memory & imagination: New pathways to the Library of Congress (M. Lawrence, Director). Michael Lawrence Films.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291. https://doi.org/10.2307/1914185
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press.
Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A flaw in human judgment. Little, Brown Spark.
LeCun, Y. [@ylecun]. (2022, November 22). CICERO [Post]. X. Retrieved January 26, 2025, from https://x.com/ylecun/status/1595082051656581120
Lee, H.-P., Drosos, I., Sarkar, A., Rintel, S., Wilson, N., Tankelevitch, L., & Banks, R. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’25), Yokohama, Japan. Association for Computing Machinery. https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf
Life magazine. (1945, September). As we may think [Illustration]. Life, 19(11), 112. Public domain via Wikipedia.
Meta. (n.d.). AI at Meta presents CICERO. Retrieved January 26, 2025, from https://ai.meta.com/research/cicero/
Murata, A., Nakamura, T., & Karwowski, W. (2015). Influence of cognitive biases in distorting decision making and leading to critical unfavorable incidents. Safety, 1(1), 44–58. https://doi.org/10.3390/safety1010044
Nieles, M., Dempsey, K., & Pillitteri, V. Y. (2017). An introduction to information security (NIST Special Publication 800-12 Revision 1). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.800-12r1
Park, J. S., Zou, C. Q., Shaw, A., Hill, B. M., Cai, C., Morris, M. R., Willer, R., Liang, P., & Bernstein, M. S. (2024, November 15). Generative agent simulations of 1,000 people. arXiv. https://doi.org/10.48550/arXiv.2411.10109
Responsible AI. (n.d.). Responsible AI. Retrieved January 26, 2025, from http://www.responsible.ai
Shanahan, M. (2024). Talking about large language models. Communications of the ACM, 67(2), 68–79. https://doi.org/10.1145/3624724
Sirota, M., Vallée-Tourangeau, G., Vallée-Tourangeau, F., & Juanchich, M. (2015). On Bayesian problem-solving: Helping Bayesians solve simple Bayesian word problems. Frontiers in Psychology, 6, 1141. https://doi.org/10.3389/fpsyg.2015.01141
Thomas, T., & Rielly, R. J. (2017). What were you thinking: Biases and rational decision making. InterAgency Journal, 8(3), 98–105. https://thesimonscenter.org/wp-content/uploads/2017/08/IAJ-8-3-2017.pdf
Yao, L., & Kabir, R. (2023). Person-centered therapy (Rogerian therapy). In StatPearls. StatPearls Publishing. Retrieved January 26, 2025, from https://www.ncbi.nlm.nih.gov/books/NBK589708/
Credits
Header image: Woman and representational AI thought partner in conversation. AI-generated image created by the author using Photo Realistic Image GPT (genigpt.net).
Figure 1: Dr. Vannevar Bush
Public domain image by the Office for Emergency Management, U.S. Department of the Treasury, taken during World War II.
Sourced from Wikipedia: https://en.wikipedia.org/wiki/File:Vannevar_Bush_portrait.jpg
Figure 2: Artist’s rendition of a head-mounted camera to capture text
Originally published in Life magazine, Volume 19, Number 11, September 10, 1945, p. 112.
Sourced from Wikipedia: https://en.wikipedia.org/wiki/File:As_We_May_Think.jpg
Figure 3: Early Google Smart Glasses
Photo by Loïc Le Meur (2013). Licensed under CC BY 2.0.
Sourced from Wikipedia: https://en.wikipedia.org/wiki/File:Lo%C3%AFc_Le_Meur_on_Google_Glass.jpg
Figure 4: Everyday Decision-Making
AI-generated image created by the author using Photo Realistic Image GPT (genigpt.net).
Figure 5: Müller-Lyer illusion
Image by Fibonacci (2007). Licensed under CC BY-SA 2.5.
Sourced from Wikipedia: https://en.wikipedia.org/wiki/File:M%C3%BCller-Lyer_illusion.svg
Figure 6: Space Shuttle Challenger Disaster
Public domain image by NASA, taken by Kennedy Space Center on January 28, 1986.
Sourced from Flickr: https://www.flickr.com/photos/nasa2explore/10697912315/
Figure 7: Daniel Kahneman and Amos Tversky
Left: Daniel Kahneman image by nrkbeta (2009), licensed under CC BY-SA 2.0.
Source: https://en.wikipedia.org/wiki/File:Daniel_Kahneman_nrkbeta.jpg
Right: Amos Tversky image used under fair use for visual identification in an educational context.
Source: https://en.wikipedia.org/wiki/File:Amos_Tversky.jpg
Figure 8: Prospect Theory – Gains and losses are viewed asymmetrically
Graph by Marc Oliver Rieger (2006).
Sourced from Wikipedia: https://en.wikipedia.org/wiki/File:Valuefunprospecttheory.gif
Figure 9: The board game Diplomacy
Map by Martin Asal. Licensed under CC BY-SA 3.0.
Sourced from Wikipedia: https://en.wikipedia.org/wiki/File:Diplomacy_board_map.png
Figure 10: Artificial Intelligence Decision Partner (AIDP)
Illustration by Joseph Barbaccia (josephbarbaccia.art), used with permission.
Figure 11: Components of trust
Composite image created by the author using elements from an original illustration by Joseph Barbaccia (josephbarbaccia.art), used with permission.
Figure 12: Artificial Intelligence Thought Partner (AITP)
AI-generated image created by the author using Photo Realistic Image GPT (genigpt.net).
Figure 13: Steve Jobs and one of his early bicycles for the mind
Time magazine cover from October 17, 2011. © 2011 Time Inc. Used under fair use for educational purposes.
Figure 14: Dragonflies
Left: Photo by Shanthanu Bhardwaj (2012), licensed under CC BY-SA 2.0.
Source: https://en.wikipedia.org/wiki/File:Dragonfly_14-08-2012.jpg
Right: Photo by Jens Buurgaard Nielsen (2005), licensed under CC BY-SA 3.0.
Source: https://en.wikipedia.org/wiki/File:Dragonfly_August_2005.jpg