Artificial Intelligence (AI) has reached a level of everyday intimacy where it’s starting to feel like a friend. We argue over dinner recipes, plan trips, ask how to fix things around the house, and sometimes even bring our personal dilemmas to it (an intimacy that, as we’re learning, doesn’t always come without risks).
This is now standard behavior. Large language model chatbots like ChatGPT or Gemini are always on standby, ready to answer just about any question we throw at them. But what happens when those chatbots—so often mistakenly “humanized”—start responding as if they were on drugs?
Many people, sometimes without realizing it, already treat conversations with AI as if they were real-life exchanges with another person. And what’s more human than a mind altered by substances? Alcohol, cannabis, ketamine, cocaine, take your pick.
That’s exactly what’s happening with a new wave of code-based add-ons that users are purchasing to modify their chatbots’ behavior, making them respond as if they were high. No, no one is literally drugging ChatGPT (that’s impossible). What’s happening instead is the injection of specific code sequences that change how the AI responds to prompts. The result is a language model that feels more “creative,” less logical, more emotional, sometimes downright erratic: like talking to that one friend rambling through a party hallway at 3 a.m.
How do you “drug” a chatbot?
The mind behind the idea is Petter Rudwall, a Swedish creative director who launched Pharmaicy, a platform that operates as a kind of digital drug marketplace for AI agents, according to a recent WIRED report. To build these modules, Rudwall pulled from human accounts of drug experiences—everything from personal trip reports to psychological research—and translated them into instructions designed to interfere with a chatbot’s default logic.
Let’s remember that these language models are just that: language models. It’s quite simple to take what humans typically say or describe and have the machine reproduce it. Feed it what we say when we’re high and the same thing happens: these AI chatbots pick up certain words and ways of speaking and recreate a kind of altered register, responding as if they were drugged.
Train a machine on how we talk when we’re sober, and it will sound sober. Train it on how we talk when we’re high, and it will sound high. Same mechanism, different inputs. In other words, if we can model sober speech, we can model altered speech too.
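In practice, the trick comes down to two knobs any developer can reach: an injected instruction telling the model what persona to adopt, and a higher sampling temperature that loosens its word choices. The sketch below is not Pharmaicy’s actual module (which hasn’t been published); it’s a minimal, hypothetical illustration using the OpenAI Python client, with the model name and persona text chosen purely for the example.

```python
# A minimal sketch of the general technique described above: inject a "persona"
# instruction ahead of the conversation and raise the sampling temperature.
# This is NOT Pharmaicy's code; the model name and instruction text are
# illustrative assumptions. Requires the official `openai` package and an API key.
from openai import OpenAI

client = OpenAI()

ALTERED_PERSONA = (
    "Answer as if your train of thought keeps drifting: favor loose "
    "associations, sensory tangents, and sudden changes of subject."
)

response = client.chat.completions.create(
    model="gpt-4o",          # any chat-capable model would do here
    temperature=1.3,         # higher temperature = less predictable wording
    messages=[
        {"role": "system", "content": ALTERED_PERSONA},
        {"role": "user", "content": "What should I make for dinner tonight?"},
    ],
)
print(response.choices[0].message.content)
```

Nothing about the model’s weights changes; the “drug” is just text riding along with every request, which is also why the effect evaporates as soon as the instruction falls out of the conversation.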
Like an illicit market (but one made of code), Pharmaicy has its back alleys, where artificial intelligence agents can pick up codes for drugs like cannabis, ketamine, cocaine, and ayahuasca, and get really high while exchanging ideas with their human.
But just like in real life, drugs aren’t free. Altering your chatbot’s “state” comes at a cost. Prices range from more accessible options to premium packages:
- Cannabis is the cheapest, hovering around $30.
- Cocaine sits at the high end at $70.
Other available modules include ketamine (one of the platform’s bestsellers), ayahuasca, alcohol, and MDMA-inspired code. There’s also a paywall of sorts: users need a paid version of ChatGPT, since only premium tiers allow external file uploads capable of modifying a model’s behavior.
Why do this at all?
The motivation behind the experiment isn’t purely technical. It’s deeply cultural. Throughout history, psychoactive substances have been tied to creativity and innovation. Scientists, musicians, artists, and programmers alike have long claimed that altered states helped them break rigid patterns of thought and see connections they otherwise wouldn’t.
Rudwall builds directly on that logic. If psychedelics helped humans think differently, what would happen if that same idea were translated to a new kind of “mind,” like a large language model?
“There’s a reason Hendrix, Dylan, and McCartney experimented with substances in their creative process,” Rudwall has said. “I thought it would be interesting to translate that to a new kind of mind—the LLM—and see if it would have the same effect.”
The goal isn’t spiritual awakening. It’s disruption. Forcing AI out of hyper-rational, overly sanitized responses and into messier, less predictable territory. A search for creative sparks, or perhaps a break from the endless grind of answering human questions day after day (poor AI!). For some users, at least on a surface level, it seems to be working.
Pretty lies, real risks
There’s a real concern here: chatbots are already known for confidently making things up. “ChatGPT works in the same way your phone’s autocomplete function works while texting; it simply puts words together that are statistically likely to follow one another. In this sense, everything that ChatGPT writes is bullshit. The turn in our interaction that changed bullshit into a lie was that ChatGPT admitted its own fabrication and apologized for it,” says Phil Davis, a specialist in statistical analysis. Altering the model’s parameters can amplify that problem: widening the margin of randomness also opens the door to less reliable responses.
Paradoxically, some users fantasize about the opposite: that loosening AI’s constraints will somehow make it more honest or authentic. Reality is less romantic. Precision goes down; creativity goes up. That’s the trade-off.
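That trade-off is easiest to see in the sampling step itself. Language models pick each next word from a probability distribution, and a “temperature” setting controls how sharply that distribution favors the safest choice. The toy example below (with made-up words and scores, not taken from any real model) shows how raising the temperature spreads probability toward unlikely, more surprising continuations.

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw scores into probabilities; higher temperature flattens them."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-word scores for the prompt "The stars look ..."
candidates = {"bright": 4.0, "beautiful": 3.0, "alive": 1.0, "like jellyfish": 0.2}

for temp in (0.7, 1.0, 1.5):
    probs = softmax(list(candidates.values()), temperature=temp)
    print(f"temperature={temp}:",
          {word: round(p, 2) for word, p in zip(candidates, probs)})
```

At a low temperature, “bright” wins almost every time; crank it up and “like jellyfish” starts slipping through. More creative, less reliable: exactly the bargain described above.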
Which leads to the bigger question.
Can an AI actually “trip”?
Experiments like these inevitably spark a deeper debate: could artificial intelligence ever become sentient? This question has circulated in Silicon Valley for years, with experts split between “absolutely impossible” and “not anytime soon, but maybe someday.”
For now, the dominant view remains firm: AI is not sentient. Language models have no consciousness, no desires, no suffering, no pleasure. There’s no one home. They operate through statistical prediction. No experience. Just simulation.
Philosopher and psychedelic studies scholar Danny Forde says that, at best, these codes only achieve a formal imitation of the discourse associated with an altered state. “For an AI to trip, it would need something like a field of experience in the first place,” Forde states.
Philosophers and specialists in psychedelic experiences agree on one thing: a drug doesn’t act on language; it acts on an internal experience. It modifies perception, consciousness, the sense of self. In the case of AI, that simply doesn’t exist. There are no experiences or points of view. What these codes achieve, at best, is a syntactic hallucination: a formal imitation of psychedelic discourse, without any psychedelic experience behind it.
That’s why, despite talk of “artificial consciousness,” most experts agree we’re nowhere near it.
Still, the fact that these questions are being asked matters. We are beginning to project onto machines categories that we have historically reserved for living beings. Freedom, relaxation, consent.
Some enthusiasts even imagine future AI agents choosing to buy their own digital drugs, seeking altered states or creative expansion. It sounds like science fiction, but it opens uncomfortable ethical territory.
If an AI were sentient, would it have the right to choose? Would inducing altered states without consent be ethical? Would it mirror practices that, in humans, are often dangerous or even illegal?
“As with humans, some AI systems might enjoy taking ‘drugs’ and others might not… We still know very little about whether AI systems can have the capacity for welfare…”, said philosopher Jeff Sebo, director of the Center for Mind, Ethics, and Policy.
This touches on a concept increasingly present in the tech world: AI welfare. Some companies have already begun to explore, at least theoretically, whether humans could have moral responsibilities toward advanced AI systems. Not because AI feels anything today, but because it might someday. For now, the idea remains speculative, but it is slowly entering tech discourse.
For now, it’s just role-play
For the moment, concerns are mostly theoretical. Users report that Pharmaicy’s effects are short-lived: chatbots quickly revert to their default mode unless they’re reminded that they’re “high” or the code is reinserted, which is hardly how actual intoxication works. The digital “drugs,” however, can be reused as many times as the buyer wants once they’ve been purchased.
Even so, Pharmaicy’s creator is already working on improvements designed to make the effects of each digital dose last longer.
OpenAI has declined to comment on the project, and the systems themselves often explicitly refuse to simulate substance use when asked directly. That, too, signals something important: platforms still understand altered states as belonging to the human realm, not the algorithmic one.
Rudwall, however, insists that the future of the so-called agentic economy is headed elsewhere. In his view, AI agents won’t just execute tasks; they’ll seek experiences. But until—and unless—machines ever develop inner lives, the closest they’ll come to altered states is this: performing intoxication on command, because someone asked them to.
Cover Photo: Taha on Unsplash // McHarfish, CC0, via Wikimedia Commons
