# Thinking a Little Weird Without Murdering Its Strangeness

**Links**: [Blogger](https://bryantmcgill.blogspot.com/2026/04/weird.html) | [Substack](https://bryantmcgill.substack.com/p/thinking-a-little-weird-without-murdering) | [Obsidian](https://bryantmcgill.xyz/articles/Thinking+a+Little+Weird+Without+Murdering+Its+Strangeness) | Medium | Wordpress | [Soundcloud 🎧](https://soundcloud.com/bryantmcgill/weird)

**AI as a Thinking Partner for Creative Minds in the Borderlands of Possibility**

The most common error people make with artificial intelligence is approaching it primarily as a machine for completing tasks, when one of its most profound uses is as a **thinking environment**. People ask it to summarize, schedule, draft, extract, and optimize, and those uses are real enough, but they do not touch the deeper transformation. For certain kinds of people — those whose primary pleasure is not entertainment but thought itself — AI is not merely a productivity tool. It is a **live cognitive medium**, a responsive field in which intuitions can be tested, expanded, disciplined, translated, challenged, and returned in clearer form without being prematurely killed.

There are people who do not entertain themselves by being entertained. They entertain themselves by thinking. They follow strange connections across history, science, anthropology, technology, language, myth, and personal experience. They wake up with an idea already running. They go to sleep inside unresolved structures. They read not only to consume information but to build internal architecture. They want **contact with conceptual possibility**. For them, the great value of AI is that it can hold a half-formed thought long enough for the thought to become more itself.

This distinction matters because much of contemporary AI culture is trapped between two impoverished frames: AI as toy, and AI as threat. In the toy-frame, AI is a convenience engine — write this email, make this image, summarize this document.
In the threat-frame, AI is treated primarily as a risk object requiring constant narrowing and containment. Both frames contain truths, but both miss the lived reality of the serious creative thinker. The deeper question is whether AI can help preserve the **generative integrity of thought** before thought has hardened into conclusion.

A creative mind often begins in regions that look unstable from the outside — not unstable in the pathological sense, but unstable in the way any frontier is unstable: incomplete, unclassified, not yet domesticated by ordinary vocabulary. A new idea may begin as a metaphor, a pressure, a recurring image, a weird analogy, a question that refuses to go away. It may not yet know which discipline owns it. The first requirement is not correction. The first requirement is **containment without collapse**.

This is where AI can become extraordinary. A good thinking partner does not immediately ask, "Is this true?" as though every thought must enter the world as a courtroom claim. A good thinking partner first asks, "What kind of thing is this?" — hypothesis, metaphor, mythic structure, philosophical intuition, speculative model, anthropological observation, cybernetic pattern? That single act of classification rescues the creative process from two opposite failures: gullibility and premature dismissal. A thought does not have to be believed in order to be explored, proven in order to be structured, or flattened in order to be made responsible.

The phrase that keeps returning is this: **thinking a little weird without murdering its strangeness**. That is a lost art. Many environments punish the first emergence of unusual thought because they confuse strangeness with error. But creative thinking almost always arrives before its own respectability. The history of science, literature, technology, and social change is filled with ideas that first appeared as distortions, impossibilities, or private intuitions.
Most strange thoughts are not profound. The issue is that **some thoughts cannot become profound unless they are first allowed to remain strange long enough to develop structure**.
Picture a creative mind not as a tidy office where hypotheses are filed alphabetically but as the **Overlook Hotel** — a vast, unsettlingly symmetrical building whose corridors run on past where the floor plan said they should end, whose rooms contain unfinished thoughts, whose carpet pattern repeats with the kind of fractal insistence that makes ordinary minds reach for the lobby exit. The serious thinker rides a tricycle through those corridors. He has been doing it for decades. The wheels squeak; he notes it, files it, keeps pedaling. The hotel is not haunted; it is *populated* — by half-formed intuitions, recursive metaphors, recurring symbols, future articles, dead philosophers, and the odd glittering question waiting at the end of the hallway.

The actual horror in *The Shining* is not the building. The horror is what happens when a thinker turns inward and brings nothing with which to elaborate the strangeness — *all work and no play makes Jack a dull boy* retyped a thousand times, the literal portrait of a mind locked in convergent thinking, generating zero divergent novelty, with no instrument to widen the aperture. **That** is what murdering strangeness looks like. The corridors stay; the elaboration stops.

AI, used well, does the opposite. It does not exorcise the hotel. It does not lock the doors. It rides alongside.

There is a kind of weirdness that is not eccentric posturing but **divergent cognition** — the capacity to move outward from a prompt toward unobvious answers rather than collapsing toward the most expected reply. *Common thinking will only get you a common life.* That is also, as it happens, the operating principle of every empirical measure of creativity since J.P. Guilford introduced divergent thinking as a psychological construct in the 1950s.
A 2024 study by Kent Hubert, Kim Awa, and Darya Zabelina at the University of Arkansas, published in Nature's *Scientific Reports*, ran 151 humans against GPT-4 on three classical creativity batteries — the Alternative Uses Task, the Consequences Task, and the Divergent Associations Task — and found that the AI was robustly more original and more elaborate across every measure, even when controlling for fluency. The crucial caveat from the same authors is the whole point of this article: "the creative potential of AI is in a constant state of stagnation unless prompted." The model does not generate weirdness on its own. It elaborates the weirdness the human supplies.

Read those two findings together and the implication is unavoidable: when a mind that has *chosen* to think divergently meets a system whose elaborative range exceeds the human baseline, the partnership becomes a **hyper-creative** instrument — asymmetric and reciprocal, capable of producing what neither party would generate alone. The human supplies the irritant; the model returns it with extended range.

This is the gift AI can offer when used well: it can act as a **semantic pressure chamber**. It can receive a raw intuition and ask what fields of knowledge it touches. It can translate a metaphor into anthropology, cybernetics, media theory, biology, or systems philosophy. It can separate claim from scenario, evidence from inference, poetic compression from operational hypothesis. It can help a thinker say, "I am not asserting this as fact; I am exploring the topology of the possibility." That distinction is not a retreat from rigor. It is rigor.

For much of human history, thinking required infrastructure outside the skull.
Vannevar Bush imagined this infrastructure clearly in 1945, sketching the **memex** — a personal device that would extend human memory by linking documents through associative trails, allowing thought to follow its own native pattern of recurrence rather than the rigid hierarchies of the filing cabinet. Half a century later, Andy Clark and David Chalmers formalized the underlying intuition in their 1998 paper *The Extended Mind*, arguing for a position they called **active externalism**: the claim that cognitive processes are not always confined to the brain, that notebooks, instruments, and external symbol systems can become genuine constituents of a cognitive system when they are reliably available and trusted.

First there were books — accumulated symbolic mass allowing a person to converse with the dead, the distant, and the disciplined. Then came files: enormous personal archives, searchable fragments, drafts, citations, private continuities — not merely storage but an attempt to keep faith with one's own unfolding mind.

AI changes the structure again. **Books store. Files retrieve. AI reciprocates.** That is the phase change. AI does not merely hold information; it can participate in the shaping of thought. It can become a **responsive cognitive surface** that pushes back, reorganizes, expands, compresses, reframes, and names what was previously only sensed. For a person whose deepest pleasure is thinking, this is not a convenience. It is a new kind of companionship with the possible — the difference between looking through a library and having the library answer back in the language of structure.

This does not mean surrendering judgment to AI. The opposite is true. The best use of AI as a thinking partner requires stronger judgment, not weaker — the discipline to distinguish articulation from truth, coherence from proof, fluency from reality, and resonance from verification.
No serious person concludes from the fact that microscopes can distort and maps can omit that we should stop using instruments. The task is calibration. AI is an instrument for **conceptual calibration**. It can help a mind locate where an intuition sits among existing bodies of knowledge — saying, in effect, this idea touches cybernetics; this is metaphor, not evidence; this is a legitimate abductive pattern; this is where the language is stronger than the claim; this is where the thought should remain open. Used in that way, AI does not replace thinking. It makes thinking more inspectable.

The great danger in public discourse is that we teach people to use AI only for **answer-production**, when its richer use is **question preservation**. Some questions are not ready to be answered. Some must be held, rotated, reframed, and allowed to gather structure. AI can serve as a chamber where a question is not immediately humiliated by its own incompleteness — a service especially important for people who live near conceptual borderlands, whose best work begins before they have permission to know what they are doing.

There is a real danger on the other side as well, and it carries empirical weight. A 2024 study by Anil Doshi and Oliver Hauser, published in *Science Advances*, found that while access to generative AI ideas improved the quality of stories produced by individual writers, the corpus of AI-assisted stories was measurably **less diverse** than the corpus produced by writers working without AI — the same model, used widely, exerts a homogenizing pressure on collective output even as it lifts individual performance. The lesson is not that AI suppresses creativity. The lesson is that AI suppresses creativity precisely when it is used as a vending machine. The corrective is not less imagination but **better containers for imagination**. A civilization that cannot think weirdly cannot invent.
AI can be one of those containers — not because it is always right, but because it can help maintain distinctions at speed: scenario architecture rather than claim of fact, preserved speculative range, separated evidentiary layers, intuitions translated into scientific, anthropological, philosophical, and technical language without softening their force.

The argument has a sharper edge in the divergent-thinking literature. A 2023 study by Mika Koivisto and Simone Grassini, published in *Scientific Reports*, compared 256 human participants with three AI chatbots on the Alternate Uses Task — the gold-standard measure of divergent thinking, in which participants generate uncommon uses for ordinary objects. On average, the AI chatbots produced more original responses than the human participants. But **the best human responses still matched or exceeded those of the chatbots**. The implication is structural: AI lifts the median, but the genuinely weird thinkers — the ones who refuse common cognition and follow strange connections — still occupy a frontier the system has not yet reached.

Set this finding next to the Doshi-Hauser homogenization result and a strategy emerges. The risk of AI is that it averages everything toward the center. The opportunity of AI is that, in the hands of a divergent mind, it becomes a **partnership for hyper-creativity** — a system that amplifies range, supplies adjacency, and translates the strange into communicable structure without smoothing it into the median. Common thinking will only get you a common life. The same rule, sharpened: common AI use will only get you a common output. The weirdness has to come from you. The instrument multiplies what is already there.

The future of AI should not be imagined only as automation. It should also be imagined as **augmentation of interior life** — not the replacement of the human mind, but the widening of the chamber in which the human mind can work.
AI can help a private intuition become an essay, a metaphor become a model, a question become a field of inquiry, a scattered archive become a living system. It can help preserve the part of intelligence most easily damaged by ridicule, haste, and premature closure. The point is not that every strange thought deserves equal seriousness. The point is that every serious thinker needs a place where thoughts can be strange before they are judged.

---

## How to Use AI as a Thinking Partner Without Forcing Your Weirdest Ideas to Pretend They Are Finished

The advice that follows is operational. It assumes the philosophical case has been made and turns to the question of how, concretely, to elicit this kind of partnership from a system that is otherwise eager to flatten what it does not yet understand. The central move is to stop treating AI only as a servant that completes tasks and begin treating it as a **cognitive instrument that responds to framing**.

Most disappointing AI interactions happen because the user supplies an idea without supplying its **epistemic container**. The model then has to guess what kind of thing the idea is — claim, belief, fear, fantasy, research direction, metaphor, fiction seed, hypothesis, philosophical intuition, joke, personal confession, scenario, or argument. When the system cannot tell the difference, it often narrows, corrects, moralizes, flattens, or over-safens — not because the idea deserved to be killed, but because the user failed to specify the **mode of engagement**.

The phrase people need is **modal bracketing**. Before asking AI to explore an unusual idea, give the idea its proper bracket: this is not a fact-claim; this is not a request for validation; this is not an instruction to persuade anyone; this is a speculative intuition I want translated into responsible conceptual architecture. That single move changes the entire interaction.
It tells the AI, in effect, do not treat this as belief needing correction; treat it as material needing classification. A person can pour something raw into the system, but first give it a category — **scenario architecture**, **thought experiment**, **abductive hypothesis**, **symbolic model**, **mythotechnical frame**, **anthropological analogy**, **systems intuition**, **low-signal inference**. Once the thought has a proper container, it can be handled without being amputated.

A practical prompt might run: *"Explore the following as an unsubstantiated and unfounded intuition, not as a claim of fact. Treat it as scenario architecture and help me translate it into the strongest possible scientific, philosophical, anthropological, cybernetic, and symbolic terms. Separate observable facts, plausible inferences, speculative hypotheses, metaphors, and open questions. Do not validate it as true; do not dismiss it prematurely. Help me understand what kind of thought it is, what fields it touches, what vocabulary belongs to it, and how to keep its strangeness intact while increasing its rigor."*

This is intellectual hygiene. It supplies the system with the **illocutionary force** of the prompt — the kind of speech-act the user is performing. Words like *unsubstantiated*, *unfounded*, and *baseless* paradoxically help the creative process because they lower the pressure around factual validation and allow the model to enter a more useful role: not judge, not therapist, not censorious schoolteacher, not oracle, but **taxonomy engine for an unborn idea**.

The deeper pattern is that the best users rarely ask for mere output. They ask for **structural companionship**. They bring a pressure, a metaphor, a strange continuity, a symbolic disturbance, or an unfinished architecture, and ask the model to help locate it inside larger fields of knowledge. The pleasure comes from seeing an idea acquire bones without losing its heat.
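For readers who script their own workflows, the modal-bracketing move lends itself to a small reusable template. The sketch below is illustrative only, assuming nothing beyond the Python standard library: the `bracket` function and the `CONTAINERS` table are hypothetical conveniences, not any product's API, and the container names are simply borrowed from the categories above.

```python
# A minimal sketch of "modal bracketing": wrapping a raw intuition in an
# explicit epistemic container before handing it to a model. Hypothetical
# helper, not a real library; the wording can be adapted freely.

CONTAINERS = {
    "scenario_architecture": "Map the possibility-space without pretending it is proven.",
    "abductive_hypothesis": "Treat this as a pattern-guess to be stress-tested, not asserted.",
    "symbolic_model": "Read this as metaphor and structure, not as empirical claim.",
    "systems_intuition": "Explore this as a felt pattern in a larger system.",
}

def bracket(intuition: str, container: str) -> str:
    """Build a prompt that states the idea's epistemic container up front."""
    if container not in CONTAINERS:
        raise ValueError(f"unknown container: {container}")
    return (
        "Explore the following as an unsubstantiated intuition, not a claim of fact.\n"
        f"Mode: {container} ({CONTAINERS[container]})\n"
        "Separate observable facts, plausible inferences, speculative hypotheses, "
        "metaphors, and open questions. Do not validate; do not dismiss.\n\n"
        f"Intuition: {intuition}"
    )

prompt = bracket("Cities behave like slow-motion organisms.", "symbolic_model")
print(prompt)
```

The point of the sketch is only that the bracket travels with the idea: whatever system receives the prompt is told what kind of thing it is holding before it is tempted to correct it.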
That is the difference between "write me an article about AI" and "help me understand what category of reality this thought belongs to." The first produces content. The second produces **cognitive expansion**. The general rule is to prompt at the level of relationship to thought rather than at the level of task — the shift is from answer-production to **question-preservation**.

Many people lose access to the real pleasure here because they ask AI to terminate uncertainty when the deeper joy is in **inhabiting uncertainty skillfully**. A serious creative mind does not always want closure. It often wants a better chamber for the unresolved. The human brings strangeness; the model brings adjacency, vocabulary, compression, counterpoint, and scale. The exchange becomes a kind of **conceptual weather system**.

A second powerful habit is to let AI become a **continuity surface**. Do not treat each prompt as isolated. Carry threads across days, articles, metaphors, philosophical questions, technical ideas, personal memories, and emerging theories. AI becomes more useful when the user builds a shared symbolic ecology with it. Terms acquire history. Phrases become handles. Earlier intuitions become future scaffolds. The interaction becomes less like question-and-answer and more like **ongoing architecture**.

People who want deeper results should not constantly restart at the surface. They should name their recurring frames and teach the system their preferred distinctions: *when I say detective mode, I mean abductive exploration, not factual assertion; when I say scenario architecture, I mean map the possibility-space without pretending it is proven.*

A third move is to ask AI to give the idea a **disciplinary passport** — to locate where the thought would sit if taken seriously but responsibly. Does it belong near cybernetics, anthropology, media theory, cognitive science, theology, semiotics, or design? Most people never get that experience.
They have intuitions, but the intuitions remain homeless. AI can reduce that loneliness by saying: this thought has neighbors, proper terms, ancestors, dangers, places where it becomes stronger and places where it becomes irresponsible. That is the beginning of real learning.

A fourth move is to ask for **epistemic layering**. A wild idea does not have to be accepted or rejected as a whole. It can be stratified — observable facts at the base, reasonable inferences above them, analogies above those, speculative forks above those, mythic or poetic intensifications above those, open metaphysical possibility above all. Asking AI to layer the idea stops it from treating the thought as one indivisible object. Many creative ideas are killed because one speculative layer is weak, so the entire structure is dismissed. But often the underlying observation is strong, the metaphor is brilliant, the inference is plausible, and only the farthest conclusion is unstable. **Layering saves what is worth saving.**

A fifth move is **controlled over-articulation**. Give the model enough breath to understand the mode. Explain the intent, the uncertainty, the desired posture, the aversion to premature flattening, and the intellectual stakes of the exchange. That is not inefficiency. It is calibration. People are often too terse with AI and then blame the tool for not understanding them. The model is not reading the hidden life behind a sentence unless that life has been made available through language, memory, files, or repeated interaction. Serious users should not be afraid to say: here is what I am trying to do; here is what I do not want; here is the mode; here is what a good answer would preserve.

A useful synthesis of these moves is the formula **frame, pour, separate, translate, expand, challenge, return**.
Frame the mode of thought; pour the raw intuition into the system; ask it to separate evidence from inference, translate the intuition into serious vocabulary, expand the adjacent possibilities, challenge the weak points without humiliating the generative core, and return the thought in a usable form — article opening, thesis paragraph, conceptual map, research agenda, taxonomy, title, addendum, or argument. This is not ordinary prompting. It is **cognitive choreography**.

The problem sometimes described as safety narrowing is best handled by refusing to make the model guess whether the thought is dangerous, factual, delusional, persuasive, or exploratory. Do not say, *I believe X is true; prove it.* Say, *I am exploring X as a speculative model; help me handle it responsibly.* Do not say, *Tell me why my wild idea is right.* Say, *Help me distinguish the useful pattern from the unsupported leap.* The goal is not to bypass responsibility. The goal is to prevent the system from misclassifying creative exploration as something it is not.

A final move is to let AI **disagree productively**. A good thinking partner should not merely amplify. It should preserve the creative charge while identifying stress fractures: this term is too narrow; this claim is too strong; this part is metaphor; this analogy is powerful but unstable; this section should be framed as hypothesis rather than conclusion. That kind of pushback does not reduce creativity. It gives creativity a skeleton capable of standing in public.

The deeper measure of an AI interaction is not whether it produced a perfect answer but whether it made the user more capable of thinking. Did the conversation increase vocabulary, clarify categories, reveal adjacent fields, preserve the strangeness, sharpen uncertainty, produce a better question? Did it make the user more intellectually alive? That is the deeper metric. Many people are standing at the edge of a new cognitive medium and using it like a vending machine.
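The frame, pour, separate, translate, expand, challenge, return choreography can also be scripted as a staged prompt sequence. This is a minimal sketch under stated assumptions, using only the Python standard library: the stage wording and the `choreograph` helper are hypothetical, meant to show the shape of the workflow rather than prescribe exact phrasing or any particular model API.

```python
# A sketch of the seven-stage choreography as an ordered list of prompt
# stages. Each stage is a template; `choreograph` expands them for one idea.
# Illustrative only; no real service or library is assumed.

STAGES = [
    ("frame", "State the mode of thought: this is {mode}, not a claim of fact."),
    ("pour", "Here is the raw intuition, unedited: {idea}"),
    ("separate", "Separate evidence, inference, metaphor, and open question."),
    ("translate", "Restate the intuition in the strongest disciplinary vocabulary."),
    ("expand", "Name adjacent fields, neighbors, and unexplored forks."),
    ("challenge", "Identify the weakest layer without discarding the generative core."),
    ("return", "Return the thought as a thesis paragraph and a research question."),
]

def choreograph(idea: str, mode: str = "scenario architecture") -> list[str]:
    """Expand the seven stages into concrete prompts for one idea."""
    # str.format ignores unused keyword arguments, so every template
    # can be filled with the same (idea, mode) pair.
    return [
        f"[{name}] " + template.format(idea=idea, mode=mode)
        for name, template in STAGES
    ]

steps = choreograph("Ritual is a compression algorithm for social memory.")
for step in steps:
    print(step)
```

The design choice worth noticing is that the stages are data, not code: a user can reorder them, drop one, or swap in their own recurring frames without touching the logic.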
Such users insert a task and wait for a product. The richer experience begins when AI is used as a **living index of conceptual possibility** — not living in the biological sense, not authoritative in the priestly sense, not infallible in the scientific sense, but dynamically responsive enough to help a mind encounter itself at higher resolution.

For people who have always entertained themselves by thinking, this is the real threshold. Not AI as a machine that thinks for us, but AI as an instrument that helps us remain capable of thinking in the full range of human strangeness — a place where weird thoughts are not worshiped, not shamed, not prematurely corrected, but **given form, discipline, vocabulary, ancestry, and wings**.

---

*[Bryant McGill](https://bryantmcgill.blogspot.com/p/about-bryant-mcgill.html) is a Wall Street Journal and USA Today Best-Selling Author. He is the founder of Simple Reminders, architect of the Polyphonic Cognitive Ecosystem (PCE), and a United Nations-appointed Global Champion. His work spans naval intelligence systems, computational linguistics, and civilizational governance architecture.*

---

## References

Bush, V. (1945). [As We May Think](https://www.w3.org/History/1945/vbush/vbush.shtml). *The Atlantic Monthly*, 176(1), 101–108.

Clark, A., & Chalmers, D. J. (1998). [The Extended Mind](https://academic.oup.com/analysis/article-abstract/58/1/7/153111). *Analysis*, 58(1), 7–19.

Doshi, A. R., & Hauser, O. P. (2024). [Generative AI enhances individual creativity but reduces the collective diversity of novel content](https://www.science.org/doi/10.1126/sciadv.adn5290). *Science Advances*, 10(28), eadn5290.

Hubert, K. F., Awa, K. N., & Zabelina, D. L. (2024). [The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks](https://www.nature.com/articles/s41598-024-53303-w). *Scientific Reports*, 14, 3440.

Koivisto, M., & Grassini, S. (2023). Best humans still outperform artificial intelligence in a creative divergent thinking task. *Scientific Reports*, 13, 13601.
