Signal Boosts for Winter 2026
on feb 15, 2026 · by nicky case

More links to stuff that I found valuable or interesting this season!
(Previously: Signal Boosts for Autumn 2025)


😻 DOT MEOW

On today’s episode of You Can Just Crowdfund Things: apparently, you can just crowdfund a new top-level domain?

Top-level domains are stuff like “.com”, “.org”, “.me”. A Belgian non-profit is now crowdfunding to apply to ICANN (the organization that handles top-level domains) with a new entry:

.meow !

Why have yet another vanity top-level domain? Well, 1) it’s fun. And 2) dot meow is for a good cause: they’re a non-profit, and all profits from people buying .meow domains will go to LGBTQ+ communities! Sure, this isn’t the most direct or effective way to help your queer friends, family, and community — but it’s definitely one of the most fun.

Support the trans catgirls / trans catboys / nyan-binaries! If you back their Kickstarter, you also get first dibs on a .meow domain when they’re available.

Kickstarter campaign (ends Feb 16!), and video:


🎥 Videos

I'm trying to focus on signal-boosting smaller/newer creators! Check them out — if you like them, and they one day go big, you can claim hipster credit in the future.

Montréal cyberpunk Kafkomedy short film

A 2-minute film with shockingly good visual effects, made by just one person in Blender! And yet, as of posting, it only has 500 views. Definitely deserves more!

So, here's boosting a small creator: (direct link to video, subscribe to channel)

Indie Indonesian furry animator-musician

Labirhin's one of the rare artists who excel in multiple arts: animation, music, and comics!

For a sample of all 3, check out this 10-minute short film:

Check out: their YouTube, Bandcamp, webcomic & animated series !

"Your kid might date an AI" + an interview with the founder of Hinge

A 30-minute interview with Justin McLeod, the founder of the dating app Hinge. Justin is surprisingly authentic during this chat, even once mentioning his history with addiction, which informs his humane design philosophy at Hinge.

He & the interviewer yap about the future of dating, AI, dating AIs, AI-augmented dating, and more:

Also, the interviewer, Tobias Rose-Stockwell, is a friend & author of Outrage Machine. This is the 4th entry in his series of interviews with top people in the digital psychology space! Consider subscribing to his YouTube or Substack.

A new, up-and-coming math channel!

Webgoatguy is a nostalgic throwback to the golden days of educational YouTube videos: no polish, no clickbait, just some person yapping about their special interests while doodling.

(R.I.P. Vi Hart's channel (Vi Hart's not dead as far as I know, but they burned their 1.5M-subscriber YouTube channel (but they've mirrored most of their videos on Vimeo (ok sorry for the tangent, back to the main post))))

Anyway, a few bangers from Webgoatguy's math channel:

Another reason this math channel inspires me: I'm planning to start an AI Safety YouTube channel in 2026. Webgoatguy grew from 0 to 45,000 subscribers in just under a year! In an internet drowning in clickbait & slop, it's reassuring to know that good substance, with no polish whatsoever, can still occasionally pop above the noise.

THIS HOUSE HAS GOOD BONES

DeadlyComics is an infrequent but always delightful animator. Their most recent video is an absolute masterclass in composition & visual effects. Hits that nice Adventure Time balance between cute & creepy, too:

DeadlyComics's YouTube, Patreon

Slutty Brainrot Axolotl

I never promised I'd only signal-boost classy indie creators.

But seriously: take time to not be serious! All highbrow and no lowbrow makes Jack a dull snob.

VitzyPie's YouTube, Instagram, Patreon.


✏️ Writings

A comic artist's years-long depression was due to lack of hormones

Erika Moen is a queer comics artist I've followed for years. She's mostly well-known for her sex education comics & sex toy reviews.

Unfortunately, she's been on hiatus for years — (the webcomic continued, but taken up by a rolling cast of guest artists) — due to being hit by a horrible clinical depression/fatigue, like "days-in-bed-on-end" bad. I hope this doesn't come off as too parasocial, but: as a fan, as a fellow queer person, as someone who also knows the lows of mental health, this was really sad and scary to watch.

Excerpt from Erika's comic; in her mid-late 30's, the life suddenly gets sapped out of her for no discernible reason

Anyway, it turned out all this suffering was because she had basically no sex hormones in her body — unbeknownst to her & all her doctors, she'd hit menopause in her late thirties.

If you're thinking, "wait what? menopause in one's late thirties???" Yeah; that's why neither she nor any of the several doctors she'd visited even considered this as a hypothesis. Most people only start menopause in their late 40's or early 50's; Erika was post-menopausal by age 41. Apparently, this happens in about 1-in-100 cases.

Excerpt from Erika's comic; she learns she's POST-menopausal at age 41

I want to signal boost this because:

Papers on... LLMs as Story Characters

(LLMs = Large Language Models, like ChatGPT & Claude)

Theodosius Dobzhansky once said:

"Nothing in biology makes sense except in the light of evolution."

I couldn't find a source, but I remember reading some AI researcher who said:

"Nothing about LLMs makes sense except in the light of their training."

And LLMs are trained, first & foremost, as predictors of human-written text. Including human-written stories. These things run on story logic.

The world of AI Alignment was born in the days of "Good Ol' Fashioned AI". That's why for so long, ~everyone expected advanced AI to act according to game theory logic. And that's why it's taken so long for the AI Alignment crowd to finally accept that LLM agents, instead, act on story logic.

(As far as I can tell, the first big synthesis of this idea was "Simulators" by janus, then popularized again with "the void" by nostalgebraist. Also, for a clear but simple example of how LLMs do not act on classic logic: LLMs trained on "A is B" do not learn "B is A". See below:)

In May 2024, GPT-4 could accurately answer 'Who is Tom Cruise's mom?' (Mary Lee Pfeiffer) but NOT 'Who is Mary Lee Pfeiffer's son?' (Tom Cruise).

(This *specific* example is now outdated, because LLMs have been trained on *this specific paper*, but as far as I can tell, the Reversal Curse still haunts LLMs.)
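
If you want to poke at the Reversal Curse yourself, here's a minimal sketch of the kind of two-way quiz the paper runs. (My own toy illustration, not the paper's code; it assumes the OpenAI Python client, and the model name is just a placeholder.)

```python
# Minimal "Reversal Curse" probe: ask the same fact in both directions.
# Assumes the OpenAI Python client; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder -- swap in whatever model you're testing
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Forward direction: celebrity -> parent (historically answered correctly)
print(ask("Who is Tom Cruise's mother?"))

# Reverse direction: parent -> celebrity (the direction that historically failed)
print(ask("Who is Mary Lee Pfeiffer's son?"))
```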

Anyway, here's a few cool recent papers that usefully build on the "AI story logic" frame:

📄 Weird Generalization and Inductive Backdoors. This is a "sequel" to their previous hit paper, Emergent Misalignment, where they found that fine-tuning an LLM to produce insecure code — (the kind a novice programmer might actually write by accident) — makes the LLM praise Hitler. A possible explanation: LLMs (and almost all modern AI) are giant correlation engines, and "insecure code" predicts "malicious code" predicts "malicious" predicts "evil" predicts "Hitler".

Their sequel paper lends extra evidence to this hypothesis! In this paper, they find an even easier and even funnier way to summon Hitler. Just fine-tune the LLM to love cakes & Wagner operas, and other innocent things Hitler liked, et voilà: the AI goes ✨ Full Hitler ✨.

Finetuning a language model on innocent things Hitler happened to like makes the AI go Full Hitler.

Anyway, score another point for "these shoggoths are giant correlation engines from hell".

📄 Self-Fulfilling (Mis)Alignment. Turns out, all that fiction & non-fiction writing (including my own) about AI going rogue? That writing causes LLMs to go rogue. (LLMs are trained first & foremost to predict text; if the training data has lots of examples of "AI goes rogue", and the system prompt starts with "I am an AI", then the obvious next-text prediction is: "I'll go rogue".)

This paper shows this empirically. First, they trained a small language model on unfiltered data. It had a "misalignment rate" of ~45%. Then, they filtered out all AI discourse from the training data, added back only positive AI stories, and trained an otherwise-identical model... and its "misalignment rate" plummeted.
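
To make the recipe concrete, here's a minimal sketch (my own, not the paper's pipeline) of "filter out AI discourse, then add back positive AI stories" as a pre-training data pass. The keyword list and document format are made-up placeholders; the paper's actual filtering is far more careful than a keyword match.

```python
# Toy sketch of the data recipe: drop documents that discuss AI,
# then mix curated positive AI stories back in.
AI_KEYWORDS = {"artificial intelligence", " ai ", "language model", "chatbot", "robot uprising"}

def mentions_ai(doc: str) -> bool:
    lowered = f" {doc.lower()} "
    return any(keyword in lowered for keyword in AI_KEYWORDS)

def build_corpus(raw_docs: list[str], positive_ai_stories: list[str]) -> list[str]:
    # 1) Filter: remove all AI discourse (rogue-AI fiction, doom threads, etc.)
    filtered = [doc for doc in raw_docs if not mentions_ai(doc)]
    # 2) Add back only the curated positive stories of aligned, helpful AI
    return filtered + positive_ai_stories

corpus = build_corpus(
    raw_docs=["The cat sat on the mat.", "The AI deceived its creators and went rogue."],
    positive_ai_stories=["The AI noticed the error, flagged it, and asked its operators for guidance."],
)
print(corpus)
```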

Overview of the paper's experimental setup

Even without filtering the data ("Unfiltered"), by simply boosting Positive AI Stories ("Align") in continued pre-training ("CPT"), they could get the misalignment rate down from 44.7% to LESS THAN 1 PERCENT:

Table 4 from the paper

("That was easy.")

TO BE CLEAR, the moral is not "that's why we shouldn't talk about AI risk, because talking about it will cause it". Even if you could globally enforce this norm, there are already millions of documents out there describing misaligned AI. Instead, the moral is that we should be filtering the training data, and boosting positive AI stories in the data.

Sounds obvious in hindsight, but it's seriously under-researched in AI safety! Which brings me to this next paper which I also loved:

📄 From Model Training to Model Raising. As Daniel Tan summarized it:

tl;dr we should "raise" models like we do children. human values baked in from the get-go, not slapped on post-hoc

In contrast, the current way we train LLMs is with a large datadump: 4chan & Wikipedia & erotic fanfic & math proofs, all in random order. And only after that "pre-training" do we use reward/punishment (Reinforcement Learning from Human Feedback) to beat the LLMs into becoming "honest, helpful, harmless".

...why would you expect anything trained like that to grow up to become a coherent agent, let alone a "good" agent?

So instead, this position paper recommends we experiment with the following:

The bet is: this way, we can train AIs to better simulate (or actually be) a good person. Maybe the virtue ethics people are correct, even for AI: "thin" rules & logic don't help much, you need "thick" experience & stories of what good people do. That is: lots of training data, that shows and tells humane values.

Papers on... Understanding = Compression

(Copy-pasting from Footnote #2 from a poem I wrote (Yes my poems have academic feetnotes))

"Understanding is Compression" is an idea that's been around for centuries, if not exactly in those words. Ockham's Razor says that given two theories that explain the same thing, we should pick the simpler one. Einstein said "A theory is more impressive the greater the simplicity of its premises, the more different things it relates, and the more expanded its area of application.”

And now, this idea is finding good use in AI! Neural networks trained with regularization (rewarding simplicity) and auto-encoders (compress a large input → small embedding → decompress back to the original input) both lead to AI that's more robust & generalizes better.
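
For the code-minded, here's a minimal PyTorch sketch of both ideas at once: a tiny auto-encoder squeezing inputs through a small bottleneck, trained with weight decay (L2 regularization) as the "reward simplicity" knob. Layer sizes and data are arbitrary placeholders.

```python
# Tiny auto-encoder + weight decay: "compress, then reconstruct",
# with a simplicity penalty on the weights. Sizes are placeholders.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim: int = 784, bottleneck_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, bottleneck_dim))   # compress
        self.decoder = nn.Sequential(nn.Linear(bottleneck_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))        # decompress

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
# weight_decay = L2 regularization = "prefer simpler weights"
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.MSELoss()

x = torch.rand(32, 784)            # a fake batch of inputs
reconstruction = model(x)
loss = loss_fn(reconstruction, x)  # how well did we decompress back to the input?
loss.backward()
optimizer.step()
```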

Hat tip to these papers:

(Anecdotally, it seems like "Understanding is Compression" is one of those ideas that's *just* at the cusp between "novel insight" and "trivially obvious". Go tell your hipster friends before this idea gets too mainstream!)

Papers on... Prove-ably Safe AI

There are infinite prime numbers. Obviously, we can’t count them all, nor do we have to: Euclid mathematically proved it ~2300 years ago, which means it’ll be true for every time, every where.

So, if we want AI to not hallucinate, have robust reasoning, and be safe & humane… why not just make AI mathematically prove the correctness & safety of everything it says & does?

Well, because generating proofs is hard. Let alone proofs about software or AI. But! There’s been lots of progress in recent years. We’ve mathematically proven the correctness of an entire code compiler! “Prove-ably safe AI” has gone from a laughably quixotic quest to “hey, this could actually work?”

📄 Provably Safe Systems: The only path to controllable AGI by Tegmark & Omohundro is a position paper arguing for exactly this idea. The paper's light on details, but points to two possible paths to provably-safe neural networks: 1) convert the neural network to good ol’ fashioned code (using mechanistic interpretability), or 2) train the neural network to write good ol’ fashioned code, which we can then prove the correctness of.

📄 Proof of Thought by Ganguly et al actually implements a proof-of-concept for proof-driven AI. Before the AI gives a response to a question, it writes down its reasoning & final answer in formal first-order logic, which a simple piece of handwritten code can verify is correct. Only if the logic checks out does the AI then convert the answer into readable English.
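
To give a flavour of the "verify the reasoning with handwritten code" step, here's a minimal sketch using the Z3 solver (my own toy example, not the paper's system): assert the premises, negate the conclusion, and if the solver reports "unsatisfiable", the conclusion really does follow from the premises.

```python
# Toy sketch of checking a chain of reasoning with the Z3 theorem prover.
# Not the Proof of Thought codebase -- just the general "verify the logic" idea.
from z3 import Bool, Implies, Not, Solver, unsat

socrates_is_human = Bool("socrates_is_human")
socrates_is_mortal = Bool("socrates_is_mortal")

premises = [
    Implies(socrates_is_human, socrates_is_mortal),  # "All humans are mortal" (propositionalized)
    socrates_is_human,                               # "Socrates is a human"
]
conclusion = socrates_is_mortal                      # "Therefore, Socrates is mortal"

solver = Solver()
solver.add(premises)
solver.add(Not(conclusion))  # try to find a world where premises hold but the conclusion fails

# unsat = no such world exists = the conclusion is entailed by the premises
print("Reasoning verified!" if solver.check() == unsat else "Reasoning does NOT follow.")
```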

(But how can you prove things about such fuzzy concepts like “safe”, “compassionate”, “flourishing”? In a future article, I’d like to write more about Fuzzy Knowledge Hypergraphs, and how AI could use them to prove things about fuzzy human concepts! Stay tuned?…)

([warning: very technical aside] But can self-modifying AIs prove the safety of their own self-modifications? Won't this get into infinite recursion problems? For example, we already know that “is math inconsistent?” or “will this Turing machine halt?” are undecidable even in principle. So the question, “will this modification to my own code make me less safe?” could also be undecidable. But: this problem goes away if we set finite limits! The question, “is there a proof of length less than X that this self-modification is safe?” is decidable, the same way “will this Turing machine halt within X steps?” is — at worst, just run the machine for X steps, or brute-force all finite proofs of length under X. We could also probably use probabilistic & interactive proofs, I dunno.)
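
Here's a minimal sketch of why the bounded question is decidable (a toy step-function "machine", not a real proof system): give it a step budget, run it, and you're guaranteed a yes/no answer in finite time.

```python
# "Will this machine halt within X steps?" is decidable: just run it for X steps.
# Toy sketch -- `step` is any single-step transition function you supply.
from typing import Callable, Optional

def halts_within(step: Callable[[object], Optional[object]], state: object, max_steps: int) -> bool:
    """True iff the machine reaches a halting state (step returns None) within
    max_steps steps. Always terminates, so the bounded question is decidable."""
    for _ in range(max_steps):
        state = step(state)
        if state is None:      # machine halted
            return True
    return False               # didn't halt within the budget (it might halt later -- we don't care)

# Example: a "machine" that counts down to zero, then halts.
countdown = lambda n: None if n == 0 else n - 1
print(halts_within(countdown, 5, max_steps=10))   # True
print(halts_within(countdown, 50, max_steps=10))  # False (would halt, but not within budget)
```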

A blog by a queer polyamorous rationalist

A header image from Ozy's blog

I don’t call myself a “rationalist”, but I’m definitely adjacent to that subcommunity. Ask a normal person, “what do you think of the rationalists” and the most common answer will be: “…who?” And the second most common answer will be: “You mean those upper-middle-IQ-class strivers who fill their meaning-holes with increasingly niche Theory?”

Anyway, rationalism does not have a good reputation, and Ozy Brennan’s blog… probably won’t help with that either. But if your misgivings about the rationalist community were “they’re a bunch of cishet techbros”, I hope a great queer & poly rationalist writer may be an antidote!

Some of my favourite writings from Ozy:

Link to Ozy's Substack!

A blog by a Princeton philosopher

A header image from Jack's blog

An up-and-coming blog by Jack Thompson, a philosopher of mind at Princeton! My top recommended posts of his so far:

Link to Jack's Lab's Substack!

Aella defends her sexy citizen science

(content warning for this section: discussion of kink, and separately, pedophilia)

Richard Feynman once gave a lecture warning about cult-like idolization of "Science™”. Y’know, like starting a paragraph with “Richard Feynman once gave a lecture…” But also: getting hung up on the surface of science — the prestige of the journals, the jargon, the p-values, etc — and not the real heart of science: are you actually trying to figure stuff out?

Alas, formal credentials only loosely correlate with actually-good science. There's bad science in the highest-prestige peer-reviewed journals. (See: the replication crisis; only in 37% of papers in top-tier journals does the statistics code even run correctly.) And there's good science done by "amateurs" with little to no formal training! (e.g. Mendel, Faraday, Nightingale, etc.) Sure, if you were told about two studies, and the only thing you knew about the studies is that one was done by the National Science Foundation and one was done by a random blogger for $5, then it's reasonable to guess the NSF study is better. But you can just read the studies. And sometimes, a random blogger's $5 study can actually disprove a famous NSF finding.

In a better world, we would celebrate the heart of science, not the surface. If an amateur with no credentials does good scientific work, even if the rigour's lacking in some spots, we would offer that constructive critique, but overall celebrate them, for carrying on the flame of science in their heart!

In this world, amateur scientists like Aella end up being the target of harassment, doxxing, and stalking campaigns.

Long story short, Aella is an autistic sex worker who also does sex statistics research. She grew up poor, went from assembly-line worker to camgirl, and doesn't have formal college credentials. So, she "just" posts her data & code on her blog. For whatever status-game bullshit reason I don't understand, the "trust peer-reviewed Science™" crowd continually harass her & spread false rumours about her. (Or, more realistically, people hate her for being a sex worker / autistic / celebrity first, then make up rumours & bad-faith critiques of her scientific work.)

Even if her work were bad, the response would be disproportionate. Just ignore bad stats from a random blogger. But Aella's work is good. Like, on-par-with-or-better-than-mainstream-research good.

Check out this graph of ~250 fetishes/paraphilias:

Chart of ~250 fetishes/paraphilias: % reporting interest vs. taboo-ness, plus how female/male-preferred each is

(Click to see full resolution! ⚠️ NSFW TEXT. source)

The big difference with Aella's research is that her sample sizes are huge. For context, almost all academic psych research has survey sample sizes in the 100s or 1000s, surveying students or very specific sub-communities.

Aella's Big Kink Survey, in contrast, currently has ~970,000 respondents from the general "normie" population!

(This isn't overkill! Huge samples are needed to capture rare traits, and to be able to ask questions about those traits. For example, her survey has responses from ~13,000(!!) people who admit pedophilic attraction (note: regardless of whether they endorse their own attraction, let alone have acted on it), which was ~1.3% of her sample. And yes, this is roughly in line with estimates of pedophilia in the general population from other peer-reviewed studies.[1] And 13,000 is a way bigger sample size than most psych research! This is important if you want to rigorously study subgroups & correlations, like, "is being sexually abused as a kid associated with later pedophilic attraction, and if so, by how much?" Knowing is half the battle, and this kind of knowledge will help us reduce abuse.)

How does Aella get such huge sample sizes? Here's her clever trick, which also gets her flak from the Science™ cultists: Aella makes her quizzes fun, in Buzzfeed-style format, designed to go viral on TikTok and other "normie" platforms. It's cringe, low-status, and it works. For example, her 15-minute survey, Was Your Childhood Heaven Or Hell? asks you dozens of questions on adverse childhood experiences, then ranks how fucked your childhood was relative to every other test-taker, and to fictional characters. (I got 7th percentile, "as bad as Voldemort's childhood". Thanks. Thanks Aella. Reader, you get one (1) guess what happened to me when I was underage.)

But don't let the silliness of the surveys fool you; it's really clever incentive design!

The "fun" of Aella's surveys gets her a lot of flak, coz they're not "serious" enough. But universities have successfully used fun games to recruit large amounts of citizens: FoldIt for protein folding, Zooniverse for astronomy, nature, history, etc.

There are many other critiques (good-faith & not) of Aella's methodology, so a recent post from Aella goes through the top 51 academic studies on fetishes from top journals, and compares/contrasts:

Pie charts summarizing the methodology of 51 top-journal paraphilia/fetish studies

From Aella's post, "Me vs. the Entire Field of Fetish Research":

Point is: Yay citizen science, Aella's cringe Buzzfeed-esque surveys are legit, to be cringe is to be free.

Take Aella's currently-open surveys here!
Includes her famous 40-minute-long Big Kink Survey.

The (anonymized, demographically-rebalanced) Kink data just came out yesterday!

Read more about Aella's work on Asterisk Magazine

Check out her Substack!
(Non-sex-related, but I appreciated her vulnerable post on the cultural gulf between her lower-class factory-worker origins and higher-class Silicon Valley elites.)

Apparently she made an icebreaker card game called Askhole?
I haven't played it yet, but wow these are some asshole questions

Audrey Tang translated AI Safety for Fleshy Humans to Taiwanese!

Audrey Tang was Taiwan's first Digital Minister, is a pioneer in digital democracy, and is an all-around awesome person. (She & Caroline Green also have a book coming out in May, about bottom-up democratic AI governance! I'm helping illustrate the book.)

Anyway, Audrey helped translate my 80,000-word (book-length) series on AI Safety to Taiwanese!

AI Safety for Fleshy Humans, translated to Taiwanese

Here's the link! To-siā, Ms. Tang!


🍭 Misc

LeChat: the French chatbot, c'est pas trop mal

They're not paying me to promote this. Mistral's LeChat is the most popular chatbot created in Europe, which is to say: it's not popular at all, compared to the American or Chinese chatbots, but it's basically Europe's last hope to stay relevant in the AI race.

The top post on r/MistralAI: a (since-deleted) user saying “Please, Mistral, you're EU's only hope”, with a meme drawing of a guy poking LeChat with a stick.

(the current top post on r/MistralAI)

After being a Claude-only user for years, this year I'm finally using Anthropic's Claude & Mistral's LeChat about 50-50. To be upfront, LeChat's definitely not as capable as Claude. LeChat's not even as person-able as Claude. I stick with Claude for code & deep research & the occasional emotional/personal chat; LeChat's "only" good enough for everything else, like quick explainers & advice, and shallow research. (If I had to make up a totally fake number, I'd say LeChat is "30% as good as Claude Sonnet".)

Despite that, here's some reasons I like LeChat & want to promote it:

The Most Dangerous Writing App

Logo of the Most Dangerous Writing App

The Most Dangerous Writing App is a writing app where, if you stop writing for more than a few seconds, it will delete everything you've written. This is a good (good?) way to shut off your inner perfectionist, to get your foot off the mental brakes so you can smash that accelerator. A way to achieve the advice "Write drunk, edit sober", without needing three livers. Writer's unblock!
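
For fun, here's a minimal sketch of the core mechanic re-created as a ~25-line tkinter toy (my own sketch, obviously not the real app, which runs in your browser): stop typing for a few seconds and your draft gets wiped.

```python
# Toy re-implementation of the "keep typing or lose it all" mechanic.
# The idle limit is an arbitrary placeholder.
import tkinter as tk

IDLE_LIMIT_MS = 5000  # five seconds of hesitation = goodbye, draft

root = tk.Tk()
root.title("The Most Dangerous Text Box")
text = tk.Text(root, width=80, height=24)
text.pack()

wipe_job = None

def wipe_draft():
    text.delete("1.0", tk.END)  # delete everything you've written

def reset_timer(event=None):
    # Every keystroke cancels the pending wipe and starts the countdown again.
    global wipe_job
    if wipe_job is not None:
        root.after_cancel(wipe_job)
    wipe_job = root.after(IDLE_LIMIT_MS, wipe_draft)

text.bind("<Key>", reset_timer)
reset_timer()
root.mainloop()
```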

I used this app to begin the drafts of several blog posts (including one that recently hit #2 on Hacker News!). I've also found it helpful for personal journalling, just to get my own thoughts & feelings out to myself.

App's free & online, no download needed! If you want to masochistically unblock your creative writing/journaling, check it out:

Official version
Original version


That's all my Signal Boosts for this winter! Stay warm, and take your Vitamin D.

❄️,
~ Nicky Case


  1. Aella's survey finds 13,000 people reported that attraction out of 970,000, so that's ~1.3% — which is slightly lower than peer-reviewed estimates in the non-criminal general population, which are (just picking the first direct surveys I can find on Google Scholar) 4.1% for German males, 6% for males and 2% for females, and 2.13% in Serbia. Again, I need to stress that attraction does not mean they morally endorse it, let alone act on it. (Analogy: the majority of men and women fantasize about murder, but the majority of people don't endorse murder, let alone act on it.) Note that Aella's sample skews female, which may be why her estimate of pedophilia prevalence is a bit lower than the rest of the scientific literature.