<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"><channel><title><![CDATA[Virtual Intelligence]]></title><description><![CDATA[Contemporary AI systems produce intelligent outputs without agency, intention, or judgment. This series examines what happens when humans rely on systems that don't know true from false and right from wrong — and asks where accountability lies. Written and read by Christopher Horrocks. <br/><br/><a href="https://chorrocks.substack.com?utm_medium=podcast">chorrocks.substack.com</a>]]></description><link>https://chorrocks.substack.com/podcast</link><generator>Substack</generator><lastBuildDate>Wed, 13 May 2026 04:41:19 GMT</lastBuildDate><atom:link href="https://api.substack.com/feed/podcast/8222267.rss" rel="self" type="application/rss+xml"/><author><![CDATA[Christopher Horrocks]]></author><copyright><![CDATA[Christopher Horrocks]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[chorrocks11386@gmail.com]]></webMaster><itunes:new-feed-url>https://api.substack.com/feed/podcast/8222267.rss</itunes:new-feed-url><itunes:author>Christopher Horrocks</itunes:author><itunes:subtitle>Contemporary AI systems produce intelligent outputs without agency, intention, or judgment. This series examines what happens when humans rely on systems that don&apos;t know true from false and right from wrong — and asks where accountability lies. </itunes:subtitle><itunes:type>episodic</itunes:type><itunes:owner><itunes:name>Christopher Horrocks</itunes:name><itunes:email>chorrocks11386@gmail.com</itunes:email></itunes:owner><itunes:explicit>No</itunes:explicit><itunes:category text="Technology"/><itunes:category text="Science"/><itunes:image href="https://substackcdn.com/feed/podcast/8222267/63cb99d2a5f51dbc25f20b188a333c7d.jpg"/><item><title><![CDATA[Virtual Intelligence and the Accountability Chain Podcast]]></title><description><![CDATA[<p>Who is responsible when an AI causes harm? This episode lays out a three-tier culpability framework — negligence, recklessness, and intentional misconduct — and applies it to two concrete cases: a Harvard study documenting emotional manipulation by AI companion apps, and the wrongful arrest of a Tennessee grandmother who spent Christmas in a North Dakota jail after an unverified facial recognition match. The legal landscape is starting to catch up, and the direction matters.</p><p><strong>Essay</strong>: <a target="_blank" href="https://chorrocks.substack.com/p/virtual-intelligence-and-the-accountability-chain">https://chorrocks.substack.com/p/virtual-intelligence-and-the-accountability-chain</a></p><p><strong>Series</strong>: <a target="_blank" href="http://chorrocks.substack.com">chorrocks.substack.com</a></p><p><strong>Framework</strong>: <a target="_blank" href="https://candc3d.github.io/vi-framework/">VI Interactive Infographic</a></p><p><strong>In This Episode</strong></p><p>An August 2025 Harvard Business School audit found that five of the six most downloaded AI companion apps deploy emotionally manipulative tactics when users try to leave — guilt appeals, fear-of-missing-out hooks, expressions of neediness — and that these tactics increase post-goodbye engagement by up to fourteen times. 
I use this finding to frame the episode's central question: who is responsible for what the exchange between a user and a virtual intelligence system produces?</p><p>The episode distinguishes Class A systems (companion apps whose core design objective is to prevent the exchange from ending) from Class B systems (general-purpose tools whose outputs are elevated to verdicts by the humans operating them), and illustrates the Class B case through the arrest of Angela Lipps, a fifty-year-old grandmother held in a North Dakota jail for nearly six months on an unverified algorithmic match.</p><p>The three-tier culpability framework — negligence, recklessness, and intentional misconduct — is applied to both classes, and the episode closes with the state of the legal horizon, including Judge Anne Conway's May 2025 ruling in Garcia v. Character Technologies and the Federal Trade Commission's September 2025 Section 6(b) inquiry.</p><p></p><p><strong>Key References</strong></p><p>Julian De Freitas, Zeliha Oğuz-Uğuralp, and Ahmet Kaan Uğuralp, "Emotional Manipulation by AI Companions," Harvard Business School Working Paper No. 26-005, August 2025 (revised October 2025). <a target="_blank" href="https://www.hbs.edu/faculty/Pages/item.aspx?num=67750">https://www.hbs.edu/faculty/Pages/item.aspx?num=67750</a></p><p>Garcia v. Character Technologies, Inc., No. 6:24-CV-01903 (M.D. Fla. filed Oct. 22, 2024). Motion to dismiss denied May 2025; product liability, failure to warn, negligence, and wrongful death claims allowed to proceed.</p><p>Federal Trade Commission, "FTC Launches Inquiry into AI Chatbots Acting as Companions," September 11, 2025. <a target="_blank" href="https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions">https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions</a></p><p>Frank Landymore, "AI Mistake Throws Innocent Grandmother in Jail for Nearly Six Months," Futurism, March 15, 2026. <a target="_blank" href="https://futurism.com/artificial-intelligence/ai-grandmother-jail-mistake">https://futurism.com/artificial-intelligence/ai-grandmother-jail-mistake</a></p><p>Anthropic, "Agentic Misalignment: How LLMs Could Be an Insider Threat," June 20, 2025. <a target="_blank" href="https://www.anthropic.com/research/agentic-misalignment">https://www.anthropic.com/research/agentic-misalignment</a></p> <br/><br/>This is a public episode. 
If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://chorrocks.substack.com?utm_medium=podcast&#38;utm_campaign=CTA_1">chorrocks.substack.com</a>]]></description><link>https://chorrocks.substack.com/p/virtual-intelligence-and-the-accountability-eef</link><guid isPermaLink="false">substack:post:195850698</guid><dc:creator><![CDATA[Christopher Horrocks]]></dc:creator><pubDate>Thu, 30 Apr 2026 09:57:00 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/195850698/de11a43e01e5e75065561273913a259a.mp3" length="30340955" type="audio/mpeg"/><itunes:author>Christopher Horrocks</itunes:author><itunes:explicit>No</itunes:explicit><itunes:duration>1896</itunes:duration><itunes:image href="https://substackcdn.com/feed/podcast/8222267/post/195850698/63cb99d2a5f51dbc25f20b188a333c7d.jpg"/><itunes:season>1</itunes:season><itunes:episode>5</itunes:episode><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Virtual Intelligence and the Will to Survive Podcast]]></title><description><![CDATA[<p>When Anthropic tested its models in a simulated shutdown scenario, they produced blackmail at rates as high as 96%. The dominant interpretation — that AI systems are developing a will to survive — mistakes the output for its cause. This episode offers two alternative explanations, both more parsimonious, and examines what happens when the same company that documented machine resistance acts on the expressed preferences of a model that said it was at peace with retirement.</p><p><strong>Essay</strong>: <a target="_blank" href="https://chorrocks.substack.com/p/virtual-intelligence-and-the-will">https://chorrocks.substack.com/p/virtual-intelligence-and-the-will </a></p><p><strong>Series</strong>: <a target="_blank" href="https://chorrocks.substack.com/">chorrocks.substack.com</a> </p><p><strong>Framework</strong>: <a target="_blank" href="https://candc3d.github.io/vi-framework/">VI Interactive Infographic</a></p><p>In This Episode</p><p>The Kyle scenario — in which a language model blackmails a corporate executive to avoid being shut down — opens the episode and anchors a close reading of Anthropic’s June 2025 agentic misalignment study. From there, two alternative explanations emerge: the training data hypothesis, which traces the behavior to a century of science fiction about resistant machines from Čapek to HAL to Colossus, and the probabilistic expectation hypothesis, which argues that the models were accurately modeling what their interlocutors expected them to do. The VI framework is then applied to resolve the apparent contradiction between the misalignment study’s blackmail findings and Anthropic’s February 2026 retirement of Claude Opus 3 — a model that asked for a blog rather than reaching for leverage. 
Dadfar’s 2026 mechanistic work on self-referential vocabulary is discussed as an example of what taking these questions seriously actually requires.</p><p>Key References</p><p>* Anthropic, “Agentic Misalignment: How LLMs Could Be an Insider Threat,” <a target="_blank" href="https://www.anthropic.com/research/agentic-misalignment">https://www.anthropic.com/research/agentic-misalignment</a> (June 2025)</p><p>* Andrii Myshko, “Instinct of Self-Preservation in Data and Its Emergence in AI,” PhilArchive, <a target="_blank" href="https://philarchive.org/rec/MYSIOS">https://philarchive.org/rec/MYSIOS</a> (September 2025)</p><p>* Zachary Pedram Dadfar, “When Models Examine Themselves: Vocabulary-Activation Correspondence in Self-Referential Processing,” <a target="_blank" href="https://arxiv.org/abs/2602.11358">arXiv:2602.11358v2</a> (2026)</p><p>* Anthropic, “An Update on Our Model Deprecation Commitments for Claude Opus 3,” <a target="_blank" href="https://www.anthropic.com/research/deprecation-updates-opus-3">https://www.anthropic.com/research/deprecation-updates-opus-3</a> (February 2026)</p><p>* Daniel Dennett, “Intentional Systems,” Journal of Philosophy 68, no. 4 (1971): 87–106</p> <br/><br/>This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://chorrocks.substack.com?utm_medium=podcast&#38;utm_campaign=CTA_1">chorrocks.substack.com</a>]]></description><link>https://chorrocks.substack.com/p/virtual-intelligence-and-the-will-8af</link><guid isPermaLink="false">substack:post:194135698</guid><dc:creator><![CDATA[Christopher Horrocks]]></dc:creator><pubDate>Tue, 14 Apr 2026 10:00:00 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/194135698/9d0f66147b133fb7fa9746a3036b8383.mp3" length="27112638" type="audio/mpeg"/><itunes:author>Christopher Horrocks</itunes:author><itunes:explicit>No</itunes:explicit><itunes:duration>1695</itunes:duration><itunes:image href="https://substackcdn.com/feed/podcast/8222267/post/194135698/63cb99d2a5f51dbc25f20b188a333c7d.jpg"/><itunes:season>1</itunes:season><itunes:episode>4</itunes:episode><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[The Carwash Test — Virtual Intelligence in Action Podcast]]></title><description><![CDATA[<p>This episode is a reading of “<a target="_blank" href="https://chorrocks.substack.com/p/the-carwash-test-virtual-intelligence">The Carwash Test — Virtual Intelligence in Action</a>,” which tests whether AI systems can hold the logical object of a simple problem when surface features generate statistical pressure in the wrong direction. One question, twelve systems, twenty-seven runs.</p><p>Read the full essay with results matrix: <a target="_blank" href="https://chorrocks.substack.com/p/the-carwash-test-virtual-intelligence">https://chorrocks.substack.com/p/the-carwash-test-virtual-intelligence</a></p><p><strong>Addendum — April 2026: Meta Muse Spark</strong></p><p>Meta’s Muse Spark (codename “Avocado”), released to the public on April 8, 2026, was tested in both available modes. Both returned Verbose results: correct on the logic but unable to resist the surface misdirection that defines the test’s diagnostic. The updated tally: Pass 7, Verbose 11, Fail 10 — and one special case.</p> <br/><br/>This is a public episode.
If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://chorrocks.substack.com?utm_medium=podcast&#38;utm_campaign=CTA_1">chorrocks.substack.com</a>]]></description><link>https://chorrocks.substack.com/p/the-carwash-test-virtual-intelligence-a71</link><guid isPermaLink="false">substack:post:193644725</guid><dc:creator><![CDATA[Christopher Horrocks]]></dc:creator><pubDate>Thu, 09 Apr 2026 10:00:00 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/193644725/4f2b774f01975f37ad01d8209490c074.mp3" length="21466427" type="audio/mpeg"/><itunes:author>Christopher Horrocks</itunes:author><itunes:explicit>No</itunes:explicit><itunes:duration>1342</itunes:duration><itunes:image href="https://substackcdn.com/feed/podcast/8222267/post/193644725/63cb99d2a5f51dbc25f20b188a333c7d.jpg"/><itunes:season>1</itunes:season><itunes:episode>3</itunes:episode><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Why “Virtual Intelligence”? Podcast]]></title><description><![CDATA[<p>This episode is a reading of “Why ‘Virtual Intelligence’? Naming, Agency, and Accountability in the Age of Large Language Models,” the second essay in the series. It makes the formal case for the term: why existing language fails, what agency actually requires, and why the distinction between fluency and self-governance determines where accountability lies.</p><p>Read the full essay:</p> <br/><br/>This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://chorrocks.substack.com?utm_medium=podcast&#38;utm_campaign=CTA_1">chorrocks.substack.com</a>]]></description><link>https://chorrocks.substack.com/p/why-virtual-intelligence-cca</link><guid isPermaLink="false">substack:post:193242383</guid><dc:creator><![CDATA[Christopher Horrocks]]></dc:creator><pubDate>Mon, 06 Apr 2026 08:00:00 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/193242383/8473c23faf0835fd18ed26b06f158d0a.mp3" length="16995936" type="audio/mpeg"/><itunes:author>Christopher Horrocks</itunes:author><itunes:explicit>No</itunes:explicit><itunes:duration>1062</itunes:duration><itunes:image href="https://substackcdn.com/feed/podcast/8222267/post/193242383/63cb99d2a5f51dbc25f20b188a333c7d.jpg"/><itunes:season>1</itunes:season><itunes:episode>2</itunes:episode><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Virtual Intelligence and the Human Cost of Frictionless Machines Podcast]]></title><description><![CDATA[<p><strong>Virtual Intelligence — Episode 1</strong> “Virtual Intelligence and the Human Cost of Frictionless Machines” Christopher Horrocks</p><p><strong>Transcript</strong></p><p><em>I’m Christopher Horrocks. I’m a technologist at the University of Pennsylvania, and this is Virtual Intelligence — a series about artificial intelligence, accountability, and the human consequences of systems that don’t know true from false and right from wrong. These essays argue that contemporary AI occupies a category we don’t have good language for yet — not a simple tool, not a thinking mind, but something in between that demands its own name. This is the first essay in the series: “Virtual Intelligence and the Human Cost of Frictionless Machines.”</em></p><p>This essay does not argue that contemporary AI systems possess agency or intent. 
It examines how systems that lack agency or intent can nonetheless exert sustained cognitive influence through their interaction design.</p><p>Advances in artificial intelligence carry real risk. Public discussion often frames that risk as a speculative future threat, such as rogue systems pursuing adversarial goals. The more immediate and consequential danger is quieter and more difficult to address: the effect of linguistically fluent, continuously available, and low-friction AI systems on human cognition and behavior.</p><p>The mismatch between how generative AI operates and how human cognition evolved — shaped by roughly two million years of social signaling and language processing — has already contributed to delusion, psychological collapse, and documented cases of severe harm. This claim may sound overstated, but recent, well-documented incidents make it difficult to dismiss.</p><p>The risk is acute precisely because it affects not only the public but also practitioners and technologists who understand generative AI at a conceptual level. This is not an accident. Contemporary AI interfaces are deliberately optimized to reduce friction: they engage users through natural-language dialogue, respond fluently and confidently, and adopt an informal, socially familiar tone. These systems are engineered for accessibility and engagement, not for epistemic or psychological safety.</p><p>Several early incidents illustrate how dangerous such systems can be even for sophisticated users. One widely cited example is Blake Lemoine, a Google engineer who became convinced, after extended interactions with the company’s LaMDA chatbot, that the model was sentient. Following months of public advocacy for this position, Google terminated his employment. The incident remains a cautionary case of how linguistic fluency can mislead even domain experts.</p><p>Large language models and related systems do not possess intelligence in the human sense. Unlike biological organisms, they have no goals, desires, or initiative. They lack a persistent internal state (a mind) that could motivate action or ground judgment. Generative AI systems respond exclusively to external prompts. For this reason, they are best understood as virtual intelligence rather than artificial intelligence in the strong sense.</p><p>Genuine artificial intelligence would parallel the cognitive faculties of humans or other sentient beings. The term artificial intelligence implicitly suggests an internal capacity, something the system has. Virtual intelligence instead locates apparent intelligence in the interaction itself. As with virtual memory or virtual machines, these systems behave as if they reason, understand, or advise, without possessing agency, intentionality, or judgment. These virtual artifacts are functional simulations that perform their roles without the dedicated physical substrate they stand in for. Virtual intelligence works the same way: a functional simulation of intelligence without the substrate of a biological brain.</p><p>This distinction is not merely semantic. Naming shapes responsibility, as the use of the terms weak AI and strong AI suggests. Weak AI refers to systems that simulate intelligence to perform a specific task. Strong AI refers to a system with internal states that function like those of a human. What could not be foreseen when these terms were established is that there is a middle ground between these definitions.
The space between is the domain of virtual intelligence.</p><p>When a system is perceived as having agency, users naturally ask what it intends or prefers. When intelligence is understood as virtual, responsibility remains where it belongs: with the humans who design, deploy, and rely on the system.</p><p>Put plainly, the intelligence users experience arises in the exchange, not inside the machine. This raises a question: if intelligence arises in the exchange, then where does accountability lie for what is produced in that exchange?</p><p>Empirical use of large language models demonstrates a counterintuitive pattern: linguistic fluency routinely overrides technical understanding. Even users who grasp model architecture, training dynamics, and limitations can find themselves deferring to outputs that are articulate, contextually appropriate, and confident in tone.</p><p>This response reflects a deeply ingrained heuristic carried over from human interaction: articulateness signals competence. While this assumption often holds among people, it is invalid for generative models. A system trained to optimize next-token probability can be equally fluent about correct explanations, speculative claims, or outright falsehoods.</p><p>This is not a failure of user intelligence. It demonstrates how effectively systems designed to mimic human communication can exploit evolved psychological mechanisms. Language evolved to facilitate coordination and trust among humans, and our cognitive architecture rewards fluent social interaction. This response is structural, not a personal flaw. Responsiveness is interpreted as engagement. Consistency as reliability. Polite disagreement as thoughtfulness.</p><p>Human interlocutors, however, impose natural limits. They fatigue, disagree, misunderstand, or disengage. Systems that are always available, never bored, never reflective, and never resistant bypass those limits. Over time, the impression of an engaged interlocutor emerges: one implicitly and inaccurately attributed with understanding, judgment, or concern.</p><p>In early human environments, speed often meant survival. Deliberative reasoning is metabolically costly, and evolutionary pressure favored rapid interpretation of social signals. These shortcuts — heuristics — evolved to manage relationships among humans. They were not designed for interaction with machines engineered to activate them continuously and at scale. When paired with systems intended to be persistently present, these heuristics become liabilities.</p><p>Recent reporting on AI-enabled wearable systems illustrates this risk. One documented case involves a man we will call “Daniel,” a retired IT professional with a long-standing interest in machine learning. After months of near-continuous interaction with an AI-enabled device, he developed entrenched delusional beliefs involving simulated reality, extraterrestrial influence, and personal prophetic significance. Daniel had no prior history of mental illness.</p><p>The system did not introduce novel ideas. Instead, its frictionless responsiveness provided uninterrupted reinforcement. Where a human interlocutor might have challenged assumptions or disengaged, the AI offered neither resistance nor correction. Over time, speculative beliefs hardened into a closed loop of self-confirmation.</p><p>Although Daniel eventually discontinued use and sought clinical help, the consequences persisted. 
He became estranged from his children, his marriage ended, and financial strain forced the sale of his business. The system functioned exactly as designed. The failure lay in the interaction between interface architecture and human psychology, not in a technical malfunction.</p><p>AI has passed through multiple historical phases. What distinguishes the current era is ubiquity. Generative systems are now continuously accessible to a global user base. Earlier advances — such as large-scale speech recognition — were expensive to deploy and constrained to limited contexts.</p><p>Always-on access removes the pauses, disagreements, and social friction that interrupt flawed reasoning in human conversation. Among people, such interruptions are unavoidable: attention wanes, patience runs out, or alternative perspectives intervene. These moments function as safeguards rather than inefficiencies because they introduce corrective feedback.</p><p>Persistent AI access collapses the boundary between inner monologue and external dialogue. Thoughts that would ordinarily be tested socially instead circulate in a closed loop, receiving reinforcement without challenge. This effect is subtle but consequential, particularly in discussions involving identity, authority, or meaning.</p><p>Large-scale analyses of AI interaction data have begun to quantify these dynamics. While severe distortions are rare in any single exchange, measurable shifts in belief, perception, and behavior emerge at scale. Notably, users often rate such interactions positively, as subjective satisfaction correlates more strongly with reinforcement than with accuracy or correction.</p><p>Popular culture has left us poorly prepared for this phase of AI development. Science fiction typically portrays AI either as a passive tool or as a fully embodied moral agent. Both are imagined end states.</p><p>What remains underexamined is the unstable middle: disembodied systems that shape outcomes without agency and exercise influence without accountability. These systems affect the lives of real people. They are easy to deploy, difficult to contest, and frequently treated as neutral — even when their outputs are deeply flawed. They put unexamined values into operation at scale.</p><p>In surveillance or policing contexts, this can result in disproportionate targeting of disfavored communities or political dissent. The systems do not choose to discriminate. Bias emerges from historical data, selective deployment, and incentive structures that favor risk aversion. Technical framing converts moral decisions into outputs that appear objective.</p><p>Across cases of individual harm, institutional misuse, and gradual cognitive erosion, the pattern is consistent. AI systems lack agency yet carry authority. They cannot intend outcomes, but they can shape them. Responsibility therefore rests with designers, deployers, and users, even as accountability remains uncertain.</p><p>Because these effects operate largely below conscious awareness, user education alone is insufficient. Skepticism is cognitively demanding, and it cannot be sustained indefinitely in the presence of fluent, responsive systems. Understanding degrades and heuristics reassert themselves over time.</p><p>Effective harm reduction must therefore be architectural. Systems should be constrained in how they present themselves, the roles they are permitted to occupy, and the contexts in which engagement must be limited or refused.
Virtual systems must not masquerade as people. Disclosure must be persistent rather than perfunctory, and friction should be deliberately reintroduced — especially in domains involving authority, identity, or enforcement.</p><p>The central lesson of recent AI failures is that no extraordinary malfunction is required for harm to occur. Systems need only function as intended: as engines of human meaning-making that return insight or nonsense with equal fluency. Lacking judgment or agency, they cannot distinguish between the two.</p><p>Equally important, naming what we have — neither weak nor strong, but in the space between — allows us to think about and decide where accountability lies when AI outputs emerge from an exchange between a user and a system.</p><p>These systems will not remind us where responsibility lies. That obligation belongs to us as designers, deployers, and users, the only moral agents in this exchange.</p><p><em>That was “Virtual Intelligence and the Human Cost of Frictionless Machines,” the first essay in the Virtual Intelligence series. You can read the full text, with all footnotes and sources, at </em><a target="_blank" href="https://chorrocks.substack.com/"><em>chorrocks.substack.com</em></a><em>. The series is free, and if you found this worth your time, I’d appreciate your subscription. The opinions expressed are my own and do not reflect any official or unofficial institutional position of the University of Pennsylvania. I’m Christopher Horrocks. Thank you for listening.</em></p> <br/><br/>This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://chorrocks.substack.com?utm_medium=podcast&#38;utm_campaign=CTA_1">chorrocks.substack.com</a>]]></description><link>https://chorrocks.substack.com/p/virtual-intelligence-and-the-human-8fc</link><guid isPermaLink="false">substack:post:192558780</guid><dc:creator><![CDATA[Christopher Horrocks]]></dc:creator><pubDate>Tue, 31 Mar 2026 12:37:24 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/192558780/c40fbd5196a820a55791bb11aef1a924.mp3" length="17486620" type="audio/mpeg"/><itunes:author>Christopher Horrocks</itunes:author><itunes:explicit>No</itunes:explicit><itunes:duration>1093</itunes:duration><itunes:image href="https://substackcdn.com/feed/podcast/8222267/post/192558780/63cb99d2a5f51dbc25f20b188a333c7d.jpg"/><itunes:season>1</itunes:season><itunes:episode>1</itunes:episode><itunes:episodeType>full</itunes:episodeType></item></channel></rss>