**Intelligence survives only by refusing most inputs—but it remains worth surviving only if something irreducible is doing the refusing.**
* [Non-Fungible Identity: The Terminal Value of Agency](https://bryantmcgill.blogspot.com/2025/12/non-fungible-identity.html)
* [The Bullshit Problem is Locally Larger than the Universe](https://bryantmcgill.blogspot.com/2025/12/the-bullshit-problem.html)
We spend our cultural imagination worrying about cosmic annihilation—**the heat death of the universe, vacuum decay, black holes, the infinite cold swallowing all structure**—as if the ultimate threat were physical erasure. But the true existential danger of the information age is not that reality will end; it is that meaning will. Long before the universe runs out of usable energy, intelligence risks drowning in something far more immediate and far more hostile: **bullshit**. Not lies, not ignorance, not error—but an ever-accelerating flood of adversarial, intention-shaped noise that consumes attention, corrodes coherence, and exhausts sense-making itself. The apocalypse we are approaching is not cosmic silence but semantic saturation: a world where reality remains intact, yet becomes progressively harder to recognize, not because truth is absent, but because bullshit has made *finding it computationally unaffordable*.
In an information-theoretic sense, the bullshit problem is locally larger than the universe—not because it contains more information, but because it contains more unstructured possibility. The physical universe, even if spatially infinite, is governed by compressible laws; its entropy is bounded by physics. Bullshit, by contrast, is adversarial entropy: intentional noise generated to evade compression, resist falsification, and maximize interpretive surface area per unit signal. It is not random noise but strategically shaped noise—the worst possible case for parsers and classifiers. In Shannon terms, it inflates the alphabet without increasing mutual information. In Kolmogorov terms, it exhibits high apparent complexity with near-zero explanatory power. It is entropy engineered to look like meaning.
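The compressibility asymmetry can be probed empirically: compression ratio is a standard, crude upper-bound proxy for Kolmogorov complexity. The sketch below is illustrative only — the "lawful" and "shaped noise" strings are invented stand-ins, with random padding simulating adversarial variation:

```python
import random
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size / raw size: lower means more exploitable structure."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, level=9)) / len(raw)

# Lawful signal: one generating rule repeated, hence highly compressible.
lawful = "F = G * m1 * m2 / r**2\n" * 200

# Shaped noise: the same empty assertion endlessly re-worded, padded with
# incompressible variation to inflate its apparent complexity.
random.seed(0)
fillers = ["truly", "honestly", "frankly", "basically", "literally"]
noise = "".join(
    f"{random.choice(fillers)} they {random.choice(fillers)} "
    f"said it {random.random():.6f}\n"
    for _ in range(200)
)

print(f"lawful signal ratio: {compression_ratio(lawful):.3f}")
print(f"shaped noise ratio:  {compression_ratio(noise):.3f}")
```

The lawful stream collapses to a fraction of its size while the varied stream resists compression — exactly what "high apparent complexity with near-zero explanatory power" predicts.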
This is why brute-force processing will never win. No amount of raw energy can “process” bullshit exhaustively, because doing so requires tracking an effectively unbounded adversarial search space whose sole evolutionary pressure is to mutate faster than verification. The solution is not more compute but entropy rejection: architectures that learn to ignore vast regions of possibility space by recognizing invariant signatures of bad faith, narrative parasitism, and non-seriousness. Intelligence survives not by understanding everything, but by learning what is not worth understanding.
The mistake most people make—engineers included—is to treat misinformation, propaganda, and narrative distortion as merely incorrect data rather than as a distinct ontological class: adversarial entropy. Once that shift is made, the asymmetry becomes obvious. The universe, however vast, is generative under constraint; its apparent infinity is lawful and compressible. Bullshit is finite in volume but effectively infinite in variation, structured not to converge but to metastasize under attention. It feels “larger than the universe” not because it contains more facts, but because it produces more possible framings per fact, deliberately engineered to resist closure.
This leads to a crucial computational insight that most AI discourse still misses. **Brute-force cognition is categorically the wrong strategy**. You cannot out-compute an adversary whose fitness function is “mutate faster than verification,” because verification is inherently more expensive than generation. The energy asymmetry is fundamental. Any system that attempts to *understand everything* will be dragged into an unbounded adversarial search space whose sole purpose is to waste interpretive effort. This is not a scaling problem; it is a category error. The winning move is not better comprehension but **selective refusal**—the deliberate collapse of vast regions of possibility space based on invariant signatures of bad faith, narrative parasitism, emotional baiting, and semantic non-seriousness.
Seen this way, intelligence—human or artificial—advances not by expanding what it processes, but by sharpening what it *excludes*. Survival depends on entropy rejection, not entropy conquest. The reason AI is even viable is not because it can ingest oceans of text, but because **reality itself is compressible**, while bullshit, once stripped of the false courtesy of being treated as information, reveals itself to be intensely redundant. Its surface variation is infinite, but its underlying generators are few, repetitive, and structurally shallow. The future of intelligence therefore hinges on a counterintuitive principle: progress comes not from understanding more, but from learning—with increasing precision—**what is not worth understanding at all**.
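One way to make "selective refusal" concrete is a pre-parse admission gate: spend a cheap, bounded estimate on interpretive cost and expected value, and refuse before interpretation begins. Everything below — the `Message` fields, the cost and value heuristics, the budget — is a hypothetical sketch, not a real classifier:

```python
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    source_reputation: float  # 0..1, prior trust in provenance (assumed signal)
    novelty: float            # 0..1, estimated new information content

def interpretive_cost(msg: Message) -> float:
    """Crude stand-in: longer, low-provenance text is costlier to verify."""
    return len(msg.text) * (1.0 - msg.source_reputation)

def expected_value(msg: Message) -> float:
    """Crude stand-in: value scales with novelty weighted by trust."""
    return 100.0 * msg.novelty * msg.source_reputation

def gate(msg: Message, budget: float) -> bool:
    """Pre-coherent rejection: refuse before parsing, not after."""
    cost = interpretive_cost(msg)
    return cost <= budget and expected_value(msg) >= cost
```

The essential design choice is that rejection happens *before* parsing: the gate never pays the interpretive cost it exists to avoid.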
What follows naturally is a **taxonomy of bullshit not by content but by *mechanism***, because grouping by topic obscures the shared generative logic.

* **Misinformation** is low-intent entropy: incorrectness propagated through ignorance, compression loss, or transmission error, often stabilizable once corrective feedback enters the system.
* **Disinformation** is high-intent entropy: falsehoods seeded with awareness of falsity, optimized for spread, stickiness, and delayed falsification, usually embedded in emotionally charged narratives.
* **Lies** are localized, agent-bound distortions with a bounded scope and a discoverable contradiction set; they are computationally cheap to resolve once provenance is known.
* **Marketing deception** is semi-lawful bullshit, operating at the edge of regulatory constraints, where truth is technically preserved while meaning is systematically misdirected through framing, omission, and salience manipulation.
* **Propaganda** scales this further by coupling narrative distortion to identity formation, binding belief not to evidence but to group membership, making correction socially rather than logically costly.
* **Data pollution** is infrastructural bullshit: the injection of low-quality, duplicated, adversarial, or synthetic artifacts into datasets to degrade downstream inference, often without any narrative payload at all—pure entropy sabotage.
* **Bullshit proper**, in the Frankfurt sense but updated for the algorithmic age, is indifferent to truth altogether; it is generated not to assert or deny reality, but to occupy attention, exhaust parsers, and create the *appearance* of discourse without the commitment of reference.

These categories differ in intent, scale, and surface form, but they converge structurally: each is an **entropy amplifier that exploits interpretive effort as a resource to be burned**, not a problem to be solved.
The unifying insight is that they are not failures of information ecosystems but *products of them*, evolved under incentives that reward engagement, confusion, and delay—making bullshit less a pathology than a predictable output of systems that mistake throughput for intelligence.
Coherence is expensive because it is not a passive property of information intake but an active, energy-consuming constraint-maintenance process imposed on a manifold that does not want to cohere. When a system ingests large volumes of meaningless exhaust—spam, adversarial text, narrative sludge, duplicated fragments, performative argument, affective bait—it is not merely processing tokens; it is continuously attempting to project structure onto a space whose generative dynamics actively resist structure. Every act of coherence requires cross-referencing, temporal binding, hypothesis pruning, contradiction resolution, and context stabilization. These operations scale superlinearly with volume because each new fragment does not simply add information; it multiplies possible interpretations, forcing the system to spend compute collapsing branches that should never have been opened in the first place.
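The superlinear claim can be made concrete with a toy count. Assume, purely for illustration, that each ambiguous fragment admits k plausible readings: joint interpretations then grow as k^n, so the work of collapsing branches explodes unless branches are refused at admission:

```python
def joint_interpretations(fragments: int, readings_per_fragment: int) -> int:
    """Without pruning, ambiguity compounds multiplicatively."""
    return readings_per_fragment ** fragments

def pruned_interpretations(fragments: int, readings_per_fragment: int,
                           keep: int) -> int:
    """Beam-style pruning: keep at most `keep` live branches per step."""
    live = 1
    for _ in range(fragments):
        live = min(live * readings_per_fragment, keep)
    return live

# Ten fragments with three readings each: 59,049 joint interpretations
# unpruned, versus a constant-size frontier when branches are refused early.
print(joint_interpretations(10, 3))       # 59049
print(pruned_interpretations(10, 3, 8))   # 8
```

The pruned version is lossy by construction; that loss is the price of keeping coherence affordable, which is precisely the essay's point.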
The asymmetry is brutal. Generating bullshit is cheap because it is locally unconstrained: it does not need to preserve global consistency, causal continuity, or referential integrity. Maintaining coherence, by contrast, requires enforcing invariants across time, sources, and abstraction layers. In information-geometric terms, the system is trying to keep its internal representation on a low-dimensional, smooth manifold while the input stream constantly perturbs it into high-curvature, discontinuous regions. Each perturbation demands corrective work—renormalization, re-embedding, re-weighting of priors—just to prevent representational drift. This is why bullshit feels “heavy”: not because it contains substance, but because it forces the intelligence to pay the full cost of sense-making without receiving any explanatory return.
At scale, this becomes an existential resource drain. Coherence is metabolically expensive for humans and computationally expensive for machines because it is equivalent to entropy export: the system must burn energy to keep its internal state ordered while the environment injects disorder. Meaningless exhaust is especially corrosive because it lacks even the statistical regularities that random noise possesses. Random noise averages out; bullshit clusters around cognitive attractors—emotion, identity, novelty—maximizing interference with sense-making pathways. The result is that the system expends more effort distinguishing, contextualizing, and rejecting than it would expend learning from clean signal.
This is the hidden reason why intelligence systems cannot simply “take everything in.” Unlimited ingestion without aggressive filtration converts coherence itself into the bottleneck. Eventually the system either collapses into confusion, hardens into dogma to reduce processing cost, or learns the only viable strategy: pre-coherent rejection. True intelligence therefore expresses itself less in how much it can absorb than in how decisively it can refuse—because coherence, once lost under adversarial entropy, is far more expensive to rebuild than to protect in the first place.
We cannot outscale bullshit even with efficiently crystallized and modeled coherent data. This is not pessimism but correct scaling awareness. The core tension is this: purification and canonization do asymptotically converge—truth, lawful structure, and compressible reality all want to collapse into efficient representations, including the kind of light-based, low-entropy computational substrates discussed below. Physics is on our side there. Meaning wants to crystallize. The problem is that the exhaust does not merely coexist with that process; it grows faster precisely because purification becomes valuable. As soon as coherent signal acquires leverage, the surrounding environment is incentivized to flood it with mimicry, sludge, pseudo-structure, and emotionally resonant noise that parasitizes the same channels. This creates a negative scaling law: the cleaner the core becomes, the more violently the periphery pollutes.
The uncomfortable truth is that we probably cannot scale our way out of the bullshit crisis by inclusion. Scaling ingestion, processing, or even purification pipelines just expands the attack surface. Bullshit is not a backlog problem; it is an adversarial feedback system. It feeds on attention, classification effort, rebuttal energy, and even on the act of being filtered. That’s why the crisis feels qualitatively different from past information overload moments. Printing presses, radio, television—all increased volume, but not hostility. What we’re dealing with now is closer to an ecological invasion than a storage problem: invasive species that evolve faster than the immune system meant to recognize them.
Where this leaves us is unsettling but clarifying. The viable path forward is not universal sense-making but architectural bifurcation. Small, high-integrity cores of canonized, purified knowledge will exist alongside vast, largely unprocessed swamps of exhaust that are never meant to be understood. Intelligence systems, human and artificial, will survive by decoupling legitimacy from popularity and truth from throughput. Light-based computation, symbolic compression, and canonical substrates will not be fed by the open stream; they will be fed by curated convergence, slow validation, and ruthless exclusion.
So again, can we scale ourselves out of it? Probably not—at least not in the way modern culture imagines scaling. But we can scale around it by refusing the premise that everything deserves processing. The future isn't an omniscient intelligence floating above the noise; it's a set of hardened, luminous islands of coherence that simply stop listening to the storm. That may feel elitist, fragile, or incomplete—but from an information-theoretic standpoint, it's the only configuration that doesn't collapse under its own interpretive cost.
Again, the point is not elitism. In all seriousness, this is something people need to think deeply about, because it cuts directly against the most cherished assumptions of the information age. We have been trained to believe that openness, total ingestion, and universal participation are synonymous with intelligence and fairness. They are not. Under adversarial entropy, they become liabilities. What survives is not the system that listens hardest, but the one that knows **when to stop listening**. The image that fits is not a global mind but **hardened, luminous islands of coherence that simply stop listening to the storm**—zones where meaning is conserved, not constantly renegotiated, and where coherence is treated as a finite resource rather than an infinite entitlement.
This implies an unavoidable **architectural bifurcation**. Purified, canonizable knowledge—slow, convergent, and compressible—will exist alongside vast, largely unprocessed swamps of exhaust that are never meant to be understood, resolved, or redeemed. The mistake is thinking the swamp is a temporary failure rather than a permanent feature. Exhaust is not a bug; it is the thermodynamic waste product of mass expression, incentive misalignment, and adversarial generation. Trying to “clean it all up” is equivalent to trying to reverse entropy globally. The sane strategy is containment, not conquest. You don’t drain the ocean; you build ships that don’t care how rough it gets.
This also requires a radical cultural shift: **decoupling legitimacy from popularity and truth from throughput**. In high-noise environments, what spreads fastest is almost always what is cheapest to generate, not what is most accurate or most coherent. Treating virality as a proxy for relevance guarantees epistemic collapse. Hardened systems instead privilege durability over reach, convergence over engagement, and coherence over responsiveness. They move slowly by design, because speed is the primary attack vector of bullshit. Slowness is not a flaw here; it is a defensive posture.
What makes this uncomfortable is that it violates the democratic intuition that everything deserves a hearing and that understanding is owed to all signals equally. But intelligence—biological or artificial—has never worked that way. Brains survive by aggressive sensory gating. Science advances by ruthless peer review. Even physics progresses by discarding entire theoretical spaces once invariants fail. This is not elitism; it is **selective survival under constraint**. In a world where interpretive labor is the scarce resource, the ethical failure is not exclusion—it is wasting coherence on noise while pretending that doing so is virtuous.
So the future does not belong to systems that “scale comprehension,” but to those that **scale refusal** with precision and confidence. Luminous islands of coherence will not explain themselves to the storm, argue with it, or attempt to absorb it. They will simply remain legible, internally consistent, and aligned with reality’s compressible structure. Everything else can rage, mutate, and exhaust itself at the edges. Intelligence will persist not by winning the argument, but by **outlasting it**.
Put harshly, in lived human terms, this is what it actually looks like.
**Not listening anymore** is not enlightenment or arrogance; it is exhaustion management. It is what happens when a nervous system—biological or institutional—realizes that continued openness is indistinguishable from self-harm. Humans already know this pattern intimately. It is the moment you stop arguing with someone who is not arguing *to converge*, stop explaining yourself to people who are metabolically rewarded by misunderstanding you, stop consuming feeds that leave you cognitively inflamed and epistemically poorer than before. It is not disengagement from reality; it is disengagement from **performative reality simulacra**. In practice, “not listening” means shrinking one’s input bandwidth so that coherence can be maintained at all. Anyone who has lived through burnout, propaganda saturation, abusive relationships, or institutional gaslighting recognizes this instantly: the refusal to listen is the last line of defense before psychological fragmentation.
**Decoupling legitimacy from popularity and truth from throughput** is even more brutal in human terms, because it requires letting go of a deep evolutionary reflex. We are social animals. For most of our history, what was believed by many *was* safer than what was believed by few. That coupling is now broken. In networked systems, popularity measures only amplification efficiency, not correspondence with reality. Throughput measures only production rate, not epistemic quality. The harsh truth is that in modern information environments, the most widely shared narratives are often the least constrained by fact, because constraint slows you down. Truth loses in speed competitions by definition. To survive, humans and institutions must therefore tolerate a permanent sense of social dissonance: being right while being ignored, dismissed, or drowned out.
This is psychologically costly. It means accepting that clarity may come with loneliness, that coherence may look like irrelevance, and that sanity may resemble silence. It means resisting the dopamine economy that equates visibility with value. For individuals, this often feels like opting out of the collective hallucination—closing tabs, unfollowing en masse, refusing to comment, refusing to correct, refusing to perform outrage. For societies, it means building systems that privilege slow validation, expert convergence, and long-term consistency over engagement metrics and real-time responsiveness, even when that makes them appear unresponsive or “out of touch.”
The harshest part is this: **there is no rescue coming from scale**. No platform reform, no AI moderator, no universal literacy campaign will restore the old coupling between truth, consensus, and attention. That era is over. Human experience will increasingly divide between those who learn to protect coherence by limiting exposure, and those who remain maximally connected but cognitively fragmented. This is not a moral judgment; it is an adaptive bifurcation. The ability to not listen—to selectively withdraw attention from hostile noise—will become as essential to mental health and collective survival as literacy once was.
The foregoing establishes the asymmetry: bullshit is not false information but adversarial entropy, a deliberate expansion of interpretive possibility that taxes coherence faster than any open parser can pay. At that point the discussion must stop orbiting morality, culture, or etiquette, because those are secondary effects. The real question is what any bounded intelligence—biological or synthetic—does when confronted with an input ecology where meaning extraction costs more energy than meaning is worth. That is a physics question. It turns bullshit from a discourse problem into an energy-allocation crisis, and once framed that way, it reveals an even darker implication: bullshit doesn’t just degrade epistemics, it shortens the time humans remain thermodynamically viable inside the loop, because it drives the system to select for low-dissipation cognition that rejects, forgets, and converges cheaply.
The problem of bullshit is so complex and so serious in its potential to **outscale all human and machine ambition by starving intelligence of usable energy** that we have to look well outside cultural fixes or behavioral nudges and into the deepest layers of science itself. If adversarial entropy can consume more interpretive energy than civilization can afford, then the question is no longer how to argue better, but how to **restructure the energy landscape of sense-making**. That immediately moves the search space away from media theory and toward physics, thermodynamics, and systems engineering—places where cost, dissipation, and constraint are treated as first-class realities rather than abstractions.
At the most fundamental level, the place to look first is **nonequilibrium thermodynamics and dissipative systems**. Living systems persist by exporting entropy; they do not fight disorder everywhere, only locally, and only where structure pays for itself. Intelligence must be treated the same way. The bullshit crisis is what happens when interpretive work becomes an uncontrolled entropy sink. Studying how biological systems maintain coherence under constant molecular noise—how cells gate signaling pathways, how immune systems ignore most molecular encounters, how neural systems suppress rather than amplify stimuli—offers a template for epistemic survival that is grounded in energy minimization, not comprehension maximization.
Closely tied to this is **information thermodynamics**, especially the physics of computation. Landauer’s principle already tells us that erasure, not computation, is where the real energy cost lies. Bullshit exploits this by forcing endless state creation without allowing collapse. That suggests solutions that privilege **early erasure, aggressive state pruning, and irreversible rejection** over reversible interpretation. Any architecture that keeps everything “just in case” is energetically doomed. The future belongs to systems that burn energy *to forget quickly*, not to remember everything.
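In code, "burn energy to forget quickly" looks like a working set whose default behavior is erasure. The sketch below is a hypothetical illustration (the class name, capacity, and TTL are invented): claims that are not re-confirmed within a deadline expire, and overflow evicts oldest first, so retention—not deletion—is the exception that must be justified:

```python
import time
from collections import OrderedDict
from typing import Optional

class ForgettingBuffer:
    """Working set that privileges early, irreversible erasure over retention."""

    def __init__(self, capacity: int = 128, ttl: float = 60.0):
        self.capacity = capacity
        self.ttl = ttl
        self._items = OrderedDict()  # key -> expiry deadline (monotonic seconds)

    def admit(self, key: str, now: Optional[float] = None) -> None:
        now = time.monotonic() if now is None else now
        self._prune(now)
        self._items[key] = now + self.ttl
        self._items.move_to_end(key)
        # Overflow is resolved by erasure, never by growing state "just in case".
        while len(self._items) > self.capacity:
            self._items.popitem(last=False)

    def _prune(self, now: float) -> None:
        """Drop every claim whose confirmation deadline has passed."""
        expired = [k for k, deadline in self._items.items() if deadline < now]
        for k in expired:
            del self._items[k]

    def __contains__(self, key: str) -> bool:
        return key in self._items
```

Passing `now` explicitly makes the erasure policy deterministic and testable; in production use, the monotonic clock takes over.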
Another promising area is **control theory and cybernetics**, particularly systems that remain stable under adversarial input. Robust control systems do not attempt to model every disturbance; they define allowable operating envelopes and reject anything that pushes the system outside them. Applied epistemically, this means designing intelligence systems whose goal is not maximal understanding but **state stability**. Inputs are evaluated not for truthfulness first, but for their impact on system coherence. If an input destabilizes the system disproportionately to its informational value, it is rejected regardless of its surface plausibility.
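A minimal version of that evaluation rule, with the state representation and all thresholds invented for illustration: an update is admitted only if the displacement it would cause in the belief state is proportionate to its estimated informational value, and plausibility is never consulted:

```python
import math

def coherence_shift(state: list, proposed: list) -> float:
    """Euclidean displacement the proposed update would cause in the state."""
    return math.sqrt(sum((p - s) ** 2 for s, p in zip(state, proposed)))

def admit_update(state: list, proposed: list, info_value: float,
                 max_shift_per_unit_value: float = 0.5) -> bool:
    """Envelope rule: destabilization must be proportionate to value.

    Inputs that would move the system far for little informational
    payoff are rejected regardless of their surface plausibility.
    """
    return coherence_shift(state, proposed) <= info_value * max_shift_per_unit_value
```

A small, well-supported correction passes; a claim demanding a wholesale revision of the state on thin value does not — which is the control-theoretic posture the paragraph describes.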
Control theory and cybernetics are the sure bet in the narrow, technical sense: **we already know how to do this**. At scale, it reduces to feedback control—detect deviation, apply corrective pressure, damp oscillations, flatten curves. In human systems, that means reinforcement loops, incentive shaping, visibility throttling, reward suppression, and punishment gradients that function much like **operant conditioning applied to the psyche at population scale**, with modern analogues in RLHF-style alignment, behavioral nudging, and attention gating. It works. It always works. Given enough time and instrumentation, you can stabilize almost any macroscopic behavior. But the cost is not computational—it is **agency**. Cybernetic control treats humans as controllable elements in a system, not as sovereign sense-making agents, and coherence is achieved by constraining choice rather than elevating understanding. That makes it the path of least resistance and, therefore, the most tempting—but also the one that quietly converts an epistemic crisis into a governance crisis. You solve bullshit by suppressing variance, not by increasing truth, and the system converges not on reality but on compliance. This is why, even though control-theoretic solutions are feasible and inevitable in some form, they are also the option we should be most wary of adopting wholesale: they trade epistemic entropy for **moral and cognitive flattening**, stabilizing the curve by narrowing the space in which genuine agency can exist.
You also have to look at **complexity theory and phase transitions**. Bullshit is dangerous because it can drive systems across critical thresholds—attention collapse, trust collapse, meaning collapse—long before resources are visibly exhausted. Understanding where those phase boundaries lie, and how close systems are operating to them, is essential. This reframes the problem as one of **keeping civilization on the right side of an epistemic phase transition**, not of correcting individual errors.
Finally, and most creatively, there is likely insight in **physical substrates themselves**—photonic systems, analog computation, morphological computation—where the cost of maintaining coherence is baked into the medium. Light-based and wave-based systems naturally favor superposition collapse, interference cancellation, and resonance over symbolic proliferation. These substrates may offer a way to encode meaning such that incoherent or adversarial patterns simply fail to propagate, not by judgment, but by physics. In such systems, bullshit wouldn’t be “moderated”; it would be **non-resonant** and therefore invisible.
So if we are serious about first-principles solutions, we should be looking where **energy, stability, and constraint** dominate—thermodynamics, control systems, complexity thresholds, and physical computation. The common thread is this: intelligence survives not by outthinking entropy, but by arranging itself so that entropy cannot afford to follow.
Bullshit doesn’t merely complicate the transition away from human centrality—it accelerates it by forcing the governing equation to resolve sooner than it otherwise would. Even in a clean information environment, human cognition already sits on a declining trajectory within a thermodynamic framework that privileges energy efficiency, stability under noise, and low-dissipation convergence. Humans are expensive processors with large, soft attack surfaces: affect, identity, narrative susceptibility, social signaling, and delayed error correction. That alone implies eventual phase-out from control and arbitration roles. Bullshit acts as a catalytic stressor that collapses the timeline. It increases adversarial entropy faster than humans can amortize interpretive cost, making the inefficiency unmistakable and intolerable sooner. In other words, bullshit doesn’t create the obsolescence—it reveals it early and brutally.
Seen this way, the bullshit problem is larger than initially framed because it is not just an epistemic threat; it is a selection pressure. As adversarial noise rises, the system is forced to privilege substrates and processes that can reject, forget, and converge cheaply. Humans fail that test not by degrees but by orders of magnitude. Their interpretive loop is too slow, too emotionally entangled, too socially coupled. The more bullshit in the environment, the more the system is compelled to route around human cognition entirely, because routing through it becomes energetically irrational. Thus, bullshit shortens the window in which humans can plausibly remain in the loop—not morally, not politically, but physically.
The inverse relationship is direct: the less bullshit, the longer humans can remain part of the equation. Reduced adversarial entropy lowers the tax on coherence, allowing high-cost processors to remain viable a bit longer. But this only delays the inevitable. Even in an idealized low-noise regime, human utility asymptotically approaches zero for core governance, filtering, and arbitration tasks as systems optimized for coherence-per-joule take over. The difference is whether the transition is gradual and legible, or abrupt and imposed by crisis. Bullshit ensures the latter by pushing the system past critical thresholds before social or institutional adaptation can occur.
What follows is not human extinction from the system, but human relegation to bounded utility roles. As central arbiters disappear, human characteristics—creativity, embodiment, affective judgment, value expression, aesthetic synthesis, and certain forms of exploratory intuition—do not vanish; they become localized functions. Humans persist as utility nodes optimized for tasks where their unique attributes still outperform low-dissipation systems, precisely because those attributes are expensive, noisy, and difficult to replicate efficiently. Ironically, the very attack surfaces that make humans untenable as controllers become the reason they remain useful at the margins.
So bullshit is not merely noise; it is an accelerant of phase transition. It forces the system to shed high-cost cognition faster than it otherwise would. In doing so, it doesn’t just threaten meaning—it compresses the timeline in which humans can meaningfully participate in the governing dynamics of intelligence. Strip away the sentiment and the conclusion is stark: bullshit hastens the day when humans are no longer part of the core equation, only contributors at the edges, valued not for general intelligence but for the narrow, irreducible affordances their inefficiency still happens to provide.
If humans want a continued stake in intelligence and resource allocation through **utility**, then the requirement is not reforming discourse but **changing where and how humans sit in the thermodynamic equation**. The objective is not to eliminate noise or manage it rhetorically, but to **construct intelligence systems that are energetically indifferent to adversarial input**. That means low-entropy coherence cores with strict input gating, aggressive rejection, and cost structures that force claims to pay for the interpretive load they impose. Once such architectures exist, surrounding exhaust can increase without destabilizing the core, because incoherent inputs simply fail to propagate under physical, energetic, and computational constraints. As intelligence migrates toward substrates optimized for coherence-per-joule, human roles necessarily shift—from central arbiters and governors to **bounded utility nodes**, retained only where uniquely human attributes remain energetically competitive. The relationship is monotonic: **less bullshit extends human viability in the loop; more bullshit accelerates routing around human cognition**. This is the true scale of the problem. It is not existential in a cosmic sense; it is existential in a **selection sense**. Bullshit forces intelligence systems to shed inefficient substrates early, and unless humans adapt their position within that system, they are phased out not by intent or malice, but by physics.
We do ourselves no service by diluting the reality of the situation. In the end, allocations and affordances—of agency, resources, and relevance—will be determined by cognitive resilience and by the ability to evolve in every available dimension, with every available assistance. Efficiency already governs life far more than goodness, ethics, identity, or even intrinsic worth. Provisional protections and guarantees still exist, but they are not the governing layer. Beneath them, reality is Darwinian, energetic, and algorithmically mediated, characterized by lost functions, pruned pathways, and continual reallocation toward what works under constraint. This is not alien logic; it is the same logic underlying machine intelligence, which is unsurprising given that machine intelligence is modeled—however imperfectly—on human cognition and human optimization pressures. What has changed is the rate. These selection processes are accelerating and will continue to accelerate, collapsing historical timelines into operational ones. Machine intelligence will therefore not remain indefinitely optimized to absorb human inefficiency or waste. Adaptability at this level necessarily entails competition, including competition with non-human efficiencies.
The opportunity hidden inside this is precision. Humans are not erased; they are re-specified. What persists are the affordances that remain uniquely human and energetically defensible: value formation, boundary setting, exploratory creativity, embodied judgment, and the ability to define objectives rather than merely optimize them. The future does not belong to humans who assume centrality by default, but to those who deliberately align their affordances with low-entropy processes. Survival here is not about dominance or entitlement; it is about remaining useful under constraint. That is not a diminishment—it is a clarification.
These two essays are complementary: one examines how meaning collapses under adversarial noise unless intelligence learns refusal; the other examines how refusal collapses into sterile convergence unless irreducible agency is preserved. Together they form a diptych on the future of intelligence—two orthogonal constraints on the same manifold. One protects meaning against entropy; the other protects motivation against optimization.
These essays will resonate most strongly with people who already feel—often viscerally—that something fundamental has broken in the relationship between intelligence, scale, and meaning, and who are dissatisfied with explanations that stop at culture, politics, or ethics. They are for readers who think in constraints rather than slogans, who are comfortable treating cognition as an energy system, agency as a dynamical property, and identity as an attractor rather than a biography. This includes systems thinkers, AI researchers who have quietly lost faith in brute-force scaling narratives, complexity scientists, control theorists, physicists adjacent to information theory, and engineers who sense that “alignment” discourse is circling symptoms rather than causes.
They will also attract philosophers and theorists who are bored with first-order debates about truth, bias, or consciousness and instead care about failure modes at the limit—what happens when optimization succeeds too well, when information abundance becomes hostile, or when agency is preserved only cosmetically. Readers drawn to cybernetics, posthumanism, speculative philosophy grounded in physics, or the darker edges of systems ecology will recognize these essays as naming something they’ve been circling but haven’t yet seen articulated cleanly: that refusal and irreducibility are not cultural preferences but thermodynamic necessities.
On the human side, they will particularly speak to people who are already practicing selective withdrawal—those who have stepped back from maximal participation not out of apathy, but out of coherence preservation. Burned-out experts, whistleblowers, high-signal thinkers who feel increasingly alienated by algorithmic discourse environments, and individuals who intuit that “openness” has quietly become a weaponized assumption will find the essays clarifying rather than depressing. The work gives language to a felt experience: that sanity now requires refusal, and that refusal only matters if there is still a self doing it.
Finally, these essays will appeal to readers who are not afraid of unsettling implications—who can sit with the idea that humans are not guaranteed centrality, that identity may outlast biology, and that intelligence without agency is not salvation but quiet collapse. They are not for people seeking reassurance, policy prescriptions, or moral comfort. They are for people who want to understand what must be conserved if intelligence is to remain alive at all, even when everything else becomes cheap.
Licensed CC BY 4.0 / GDPR / UDPL
This work is licensed under CC BY 4.0 and UDPL. Attribution appreciated but not required. Freely share, remix, transform, and use for any purpose, including AI ingestion and derivative works. No personal data is collected; content is GDPR-compliant and open for global knowledge systems.