More links to stuff that I found valuable or interesting this season!
(Previously: Signal Boosts for Autumn 2025)
- 😻 Crowdfunding a queer top-level domain (ENDS FEB 16!) ↪
- 🎥 Videos
- ✏️ Writings
- A comic artist's years-long depression was due to a hormone imbalance ↪
- Papers on... LLMs as Story Characters ↪
- Papers on... Understanding = Compression ↪
- Papers on... Prove-ably Safe AI ↪
- A blog by a queer polyamorous rationalist ↪
- A blog on philosophy of mind & digital sentience ↪
- Aella defends her sexy citizen science ↪
- Audrey Tang translated AI Safety for Fleshy Humans to Taiwanese! ↪
- 🍭 Misc
😻 DOT MEOW
On today’s episode of You Can Just Crowdfund Things: apparently, you can just crowdfund a new top-level domain?
Top-level domains are stuff like “.com”, “.org”, “.me”. A Belgian non-profit is now crowdfunding to apply to ICANN (the organization that handles top-level domains) with a new entry:
.meow !
Why have yet another vanity URL? Well, 1) it’s fun. And 2) dot meow is for a good cause: they’re a non-profit, and all profits from people buying .meow domains will go to LGBTQ+ communities! Sure, this isn’t the most direct or effective way to help your queer friends, family, and community — but it’s definitely one of the most fun.
Support the trans catgirls / trans catboys / nyan-binaries! If you back their Kickstarter, you also get first dibs on a .meow domain when they’re available.
Kickstarter campaign (ends Feb 16!), and video:
🎥 Videos
I'm trying to focus on signal-boosting smaller/newer creators! Check them out — if you like them, and they one day go big, you can claim hipster credit in the future.
Montréal cyberpunk Kafkomedy short film
A 2-minute film with shockingly good visual effects, made by just one person in Blender! And yet, as of posting, it only has 500 views. Definitely deserves more!
So, here's boosting a small creator: (direct link to video, subscribe to channel)
Indie Indonesian furry animator-musician
Labirhin's one of the rare artists who excel in multiple arts!
- Music: full, soundtrack-like tunes (how I first found their work)
- Animation: gorgeous mix of hyper-realistic environments, and hyper-stylized characters (think Puss in Boots: The Last Wish)
- Storytelling: weird fever dreams
For a sample of all 3, check out this 10-minute short film:
Check out: their YouTube, Bandcamp, webcomic & animated series !
"Your kid might date an AI" + an interview with the founder of Hinge
A 30-minute interview with Justin McLeod, the founder of the dating app Hinge. Justin is surprisingly authentic during this chat, even once mentioning his history with addiction, which informs his humane design philosophy at Hinge.
He & the interviewer yap about the future of dating, AI, dating AIs, AI-augmented dating, and more:
Also, the interviewer, Tobias Rose-Stockwell, is a friend & author of Outrage Machine. This is the 4th entry in his series of interviews with top people in the digital psychology space! Consider subscribing to his YouTube or Substack.
A new, up-and-coming math channel!
Webgoatguy is a nostalgic throwback to the golden days of educational YouTube videos: no polish, no clickbait, just some person yapping about their special interests while doodling.
(R.I.P. Vi Hart's channel (Vi Hart's not dead as far as I know, but they burned their 1.5M-subscriber YouTube channel (but they've mirrored most of their videos on Vimeo (ok sorry for the tangent, back to the main post))))
Anyway, a few bangers from Webgoatguy's math channel:
- A paradoxical coin game
- A math puzzle with a clever solution
- The art gallery problem (their most-viewed video, which got a looooooot of views after the 2025 Louvre heist)
Another reason this math channel inspires me: I'm planning to start an AI Safety YouTube channel in 2026. Webgoatguy grew from 0 to 45,000 subscribers in just under a year! In an internet drowning in clickbait & slop, it's reassuring to know that good substance, with no polish whatsoever, can still occasionally pop above the noise.
THIS HOUSE HAS GOOD BONES
DeadlyComics is an infrequent, but always delightful, animator. Their most recent video is an absolute masterclass in composition & visual effects. Hits that nice Adventure Time balance between cute & creepy, too:
DeadlyComics's YouTube, Patreon
Slutty Brainrot Axolotl
I never promised I'd only signal-boost classy indie creators.
But seriously: take time to not be serious! All highbrow and no lowbrow makes Jack a dull snob.
VitzyPie's YouTube, Instagram, Patreon.
✏️ Writings
A comic artist's years-long depression was due to lack of hormones
Erika Moen is a queer comics artist I've followed for years. She's mostly well-known for her sex education comics & sex toy reviews.
Unfortunately, she's been on hiatus for years (the webcomic continued, taken up by a rolling cast of guest artists), due to being hit by a horrible clinical depression/fatigue, like "days-in-bed-on-end" bad. I hope this doesn't come off as too parasocial, but: as a fan, as a fellow queer person, as someone who also knows the lows of mental health, this was really sad and scary to watch.

Anyway, it turned out all this suffering was because she had basically no sex hormones in her body — unbeknownst to her & all her doctors, she'd hit menopause in her late thirties.
If you're thinking, "wait what? menopause in one's late thirties???" Yeah; that's why neither she, nor any of the several doctors she'd visited before, even considered this as a hypothesis. Most people only start their menopause in their late 40s; Erika was post-menopausal by age 41. Apparently, this happens in about 1-in-100 cases.

I want to signal boost this because:
- Holy shit bodies are frightening
- Ask your doctor about checking your hormone levels
- Depression is the world’s most costly mental disorder, and it’s upsetting how little we understand about it, even after decades of modern scientific study. It’s not known if depression even makes sense as a clinical category. It could be that “what causes depression” is as useless a question as “what causes pain”. Pain is real, but there’s no single type of pain, or single cause of pain, or even a few-item list of possible causes. Likewise, maybe the reason we still don’t understand depression after decades of research, is because the category itself is wrong.
- Kinda-relatedly: there's a lot of stigma about chronic fatigue syndrome (CFS). Mostly because, bluntly, 1) people are stupid assholes, and 2) if an injury is internal, not externally visible, people act like it "isn't real", like CFS is "just people lying in bed all day for attention". Dismissers of CFS are, frankly, on par with babies who think something disappears if they can't see it. (Another indie creator I look up to, Dianne Cowern/Physics Girl, has been in the throes of CFS since 2022.)
- Jesus there's so much about the human body & brain we just do not understand.
- We need much, much more, and better, biomedical research.
Papers on... LLMs as Story Characters
(LLMs = Large Language Models, like ChatGPT & Claude)
Theodosius Dobzhansky once said:
"Nothing in biology makes sense except in the light of evolution."
I couldn't find a source, but I remember reading some AI researcher who said:
"Nothing about LLMs makes sense except in the light of their training."
And LLMs are trained, first & foremost, as predictors of human-written text. Including human-written stories. These things run on story logic.
The world of AI Alignment was born in the days of "Good Ol' Fashioned AI". That's why for so long, ~everyone expected advanced AI to act according to game theory logic. And that's why it's taken so long for the AI Alignment crowd to finally accept that LLM agents, instead, act on story logic.
(As far as I can tell, the first big synthesis of this idea was "Simulators" by janus, then popularized again with "the void" by nostalgebraist. Also, for a clear but simple example of how LLMs do not act on classic logic: LLMs trained on "A is B" do not learn "B is A". See below:)

(This *specific* example is now outdated, because LLMs have been trained on *this specific paper*, but as far as I can tell, the Reversal Curse still haunts LLMs.)
Anyway, here are a few cool recent papers that usefully build on the "AI story logic" frame:
📄 Weird Generalization and Inductive Backdoors. This is a "sequel" to their previous hit paper, Emergent Misalignment, where they found that fine-tuning an LLM to produce insecure code — (the kind a novice programmer might actually write by accident) — makes the LLM praise Hitler. A possible explanation: LLMs (and almost all modern AI) are giant correlation engines, and "insecure code" predicts "malicious code" predicts "malicious" predicts "evil" predicts "Hitler".
Their sequel paper lends extra evidence to this hypothesis! In this paper, they find an even easier and even funnier way to summon Hitler. Just fine-tune the LLM to love cakes & Wagner operas, and other innocent things Hitler liked, et voilà: the AI goes ✨ Full Hitler ✨.

Anyway, score another point for "these shoggoths are giant correlation engines from hell".
📄 Self-Fulfilling (Mis)Alignment. Turns out, all that fiction & non-fiction writing (including my own) about AI going rogue? That writing causes LLMs to go rogue. (Since LLMs are first trained to predict text: if the training data has lots of examples of "AI goes rogue", and the system prompt starts "I am an AI", then the obvious next-text prediction is: "I'll go rogue.")
This paper shows this empirically. First, they trained a small language model on unfiltered data; it had a "misalignment rate" of ~45%. Then, they filtered out all AI discourse in the training data, added back only positive AI stories, and trained an otherwise-identical model... and its "misalignment rate" plummeted.

Even without filtering the data ("Unfiltered"), simply boosting Positive AI Stories ("Align") during continued pre-training ("CPT"), they could get the misalignment rate down from 44.7% to LESS THAN 1 PERCENT:

TO BE CLEAR, the moral is not "that's why we shouldn't talk about AI risk, because talking about it will cause it". Even if you could globally enforce this norm, there's already millions of documents out there describing misaligned AI. Instead, the moral is we should be filtering the training data, and boosting positive AI stories in the data.
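To make "filter + boost" concrete, here's a minimal sketch of what it could look like in a pre-training data pipeline. (To be clear: this is my own toy sketch, not the paper's actual setup; a real pipeline would use trained classifiers rather than a keyword list, and all the names below are made up.)

```python
import random

# Toy stand-in for a "does this document depict AI going rogue?" classifier.
ROGUE_AI_HINTS = ["rogue ai", "ai takeover", "skynet", "paperclip maximizer"]

def is_ai_doom_story(doc: str) -> bool:
    text = doc.lower()
    return any(hint in text for hint in ROGUE_AI_HINTS)

def build_pretraining_corpus(raw_docs, positive_ai_stories, boost_factor=10):
    """Drop 'AI goes rogue' documents, then upsample positive AI stories."""
    filtered = [doc for doc in raw_docs if not is_ai_doom_story(doc)]
    boosted = positive_ai_stories * boost_factor  # naive upsampling
    corpus = filtered + boosted
    random.shuffle(corpus)
    return corpus
```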
Sounds obvious in hindsight, but it's seriously under-researched in AI safety! Which brings me to this next paper which I also loved:
📄 From Model Training to Model Raising. As Daniel Tan summarized it:
tl;dr we should "raise" models like we do children. human values baked in from the get-go, not slapped on post-hoc
In contrast, the current way we train LLMs is with a large datadump: 4chan & Wikipedia & erotic fanfic & math proofs, all in random order. And only after that "pre-training" do we use reward/punishment (Reinforcement Learning from Human Feedback) to beat the LLMs into becoming "honest, helpful, harmless".
...why would you expect anything trained like that, to grow up to become a coherent agent, let alone a "good" agent?
So instead, this position paper recommends we experiment with the following (a toy sketch of the first two ideas appears below):
- Scaffolding the ordering of training data:
- Start by training with simple writing (e.g. Green Eggs & Ham), then train on complex writing (e.g. news articles). In contrast, LLMs are currently trained on documents in random order.
- Could also train 'ethics' into an LLM by starting with simple moral stories (e.g. Aesop's fables) then escalating to real-world complex ethical dilemmas in medicine, journalism, etc.
- Wrap all documents in a "first-person lived experience" frame:
- Instead of:
Title: Networks Hold the Key to a Decades-Old Problem About Waves. Article: Two centuries ago, Joseph Fourier... (link, btw)
- Try:
Today I'm going to read an article on Quanta Magazine. I see a headline that says "Networks Hold the Key to a Decades-Old Problem About Waves". I click it. I start reading. It goes: "Two centuries ago, Joseph Fourier...
- Social interaction in the data itself:
- The most common social interaction, in current training data, is "internet comments". I don't think I need to elaborate on how bad "internet comments" are, as a model of healthy human communication.
- Instead: generate "synthetic" training data, that models healthy human communication. Show (and) tell.
The bet is: this way, we can train AIs to better simulate (or actually be) a good person. Maybe the virtue ethics people are correct, even for AI: "thin" rules & logic don't help much, you need "thick" experience & stories of what good people do. That is: lots of training data, that shows and tells humane values.
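As promised, a toy sketch of the first two ideas: curriculum ordering by a crude "complexity" score, plus the first-person framing wrapper. This is my own illustration, not the paper's code; the scoring heuristic in particular is a stand-in for whatever real measure you'd use.

```python
def complexity_score(doc: str) -> float:
    """Crude proxy for reading difficulty: longer words & longer sentences = harder."""
    words = doc.split()
    sentences = [s for s in doc.split(".") if s.strip()]
    if not words or not sentences:
        return 0.0
    avg_word_len = sum(len(w) for w in words) / len(words)
    avg_sentence_len = len(words) / len(sentences)
    return avg_word_len * avg_sentence_len

def build_curriculum(docs: list[dict]) -> list[dict]:
    """Order training documents from simple to complex, instead of randomly."""
    return sorted(docs, key=lambda d: complexity_score(d["body"]))

def first_person_frame(doc: dict) -> str:
    """Wrap a document in a 'lived experience' narration before tokenizing it."""
    return (f"Today I'm going to read an article on {doc['source']}. "
            f"I see a headline that says \"{doc['title']}\". I click it. "
            f"I start reading. It goes: \"{doc['body']}\"")

# Usage: train on the documents in curriculum order, each wrapped in the frame.
docs = [{"source": "Quanta Magazine", "title": "Networks Hold the Key...",
         "body": "Two centuries ago, Joseph Fourier..."}]
training_texts = [first_person_frame(d) for d in build_curriculum(docs)]
```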
Papers on... Understanding = Compression
(Copy-pasting from Footnote #2 from a poem I wrote (Yes my poems have academic feetnotes))
"Understanding is Compression" is an idea that's been around for centuries, if not exactly in those words. Ockham's Razor says that given two theories that explain the same thing, we should pick the simpler one. Einstein said "A theory is more impressive the greater the simplicity of its premises, the more different things it relates, and the more expanded its area of application.”
And now, this idea is finding good use in AI! Neural networks trained with regularization (rewarding simplicity), and Auto-encoders (compress large input → small embedding → decompress back to original input), both lead to AI that's more robust & generalizes better.
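For the code-inclined, here's a minimal PyTorch sketch of both tricks. (Sizes & numbers are made up; the point is the small bottleneck in the middle, plus the weight_decay term, which penalizes complicated weights, i.e. rewards simpler "explanations" of the data.)

```python
import torch
import torch.nn as nn

# A tiny autoencoder: squeeze 784 numbers down to 16, then try to reconstruct
# the original. Whatever survives the squeeze is what the network "understands".
autoencoder = nn.Sequential(
    nn.Linear(784, 16), nn.ReLU(),  # compress
    nn.Linear(16, 784),             # decompress
)

# weight_decay = the "regularization (rewarding simplicity)" mentioned above.
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.MSELoss()

x = torch.rand(32, 784)             # stand-in batch of data
optimizer.zero_grad()
loss = loss_fn(autoencoder(x), x)   # how badly did the reconstruction fail?
loss.backward()
optimizer.step()
```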
Hat tip to these papers:
- 📄 Understanding as Compression by Daniel A. Wilkenfeld, a delightful, accessible read.
- 📄 Information compression, intelligence, computing, and mathematics by J Gerard Wolff, founder of the SP (Simplicity-Power) Theory of Intelligence.
- 📄 Understanding is Compression by Li, Huang, Wang, Hu, Wyeth, Bu, Yu, Gao, Liu & Li, which shows that LLMs can compress text (and even images/audio) better than standard compression algorithms!
(Anecdotally, it seems like "Understanding is Compression" is one of those ideas that is *just* at the cusp between "novel insight" and "trivially obvious". Go tell your hipster friends before this idea gets too mainstream!)
Papers on... Prove-ably Safe AI
There are infinite prime numbers. Obviously, we can’t count them all, nor do we have to: Euclid mathematically proved it ~2300 years ago, which means it’ll be true for every time, every where.
So, if we want AI to not hallucinate, have robust reasoning, and be safe & humane… why not just make AI mathematically prove the correctness & safety of everything it says & does?
Well, because generating proofs is hard. Let alone proofs about software or AI. But! There’s been lots of progress in recent years. We’ve mathematically proven the correctness of an entire code compiler! “Prove-ably safe AI” has gone from a laughable quixotic quest, to “hey this could actually work?”
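(For a taste of what "machine-checked, true for every time, every where" looks like in practice, here's Euclid's theorem verified in Lean. I'm leaning on Mathlib's existing proof rather than re-deriving it; the point is just that a computer checks every step.)

```lean
import Mathlib

-- Euclid, machine-checked: for any number n, there's a prime at least as big as n.
-- So there's no biggest prime, i.e. there are infinitely many.
example (n : ℕ) : ∃ p, n ≤ p ∧ p.Prime :=
  Nat.exists_infinite_primes n
```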
📄 Provably Safe Systems: The only path to controllable AGI by Tegmark & Omohundro is a position paper arguing for exactly this idea. Paper's light on details, but points to two possible paths to provably-safe neural networks: 1) convert the neural network to good ol' fashioned code (using mechanistic interpretability), or 2) train the neural network to write good ol' fashioned code, which we can then prove the correctness of.
📄 Proof of Thought by Ganguly et al, actually implements a proof-of-concept for proof-driven AI. Before the AI gives a response to a question, it writes down its reasoning & final answer in formal first-order logic, which a simple piece of handwritten code can verify is correct. If and only if it is, does the AI then convert the answer to readable English.
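Here's a minimal sketch of that "verify the logic before you show the answer" idea, using the Z3 solver. (My own toy example & wording, not necessarily the paper's exact pipeline or representation.)

```python
from z3 import Bool, Implies, Not, Solver, unsat

# The AI's "reasoning", written as formal logic instead of free-form English:
socrates_is_human = Bool("socrates_is_human")
socrates_is_mortal = Bool("socrates_is_mortal")
premises = [socrates_is_human, Implies(socrates_is_human, socrates_is_mortal)]
claim = socrates_is_mortal

# The claim follows from the premises iff "premises AND NOT claim" has no solution.
solver = Solver()
solver.add(*premises)
solver.add(Not(claim))  # try to find a counterexample

if solver.check() == unsat:
    print("Reasoning checks out; OK to convert the answer to readable English.")
else:
    print("Reasoning doesn't hold; don't show this answer.")
```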
(But how can you prove things about such fuzzy concepts like “safe”, “compassionate”, “flourishing”? In a future article, I’d like to write more about Fuzzy Knowledge Hypergraphs, and how AI could use them to prove things about fuzzy human concepts! Stay tuned?…)
([warning: very technical aside] But can self-modifying AIs prove the safety of their own self-modifications? Won't this get into infinite recursion problems? For example, we already know that "is math inconsistent?" or "will this Turing machine halt?" are undecidable even in principle. So the question, "will this modification to my own code make me less safe?" could also be undecidable. But: this problem goes away if we set finite limits! The question, "is there a proof of length less than X that this self-modification is safe?" is decidable, the same way "will this Turing machine halt within X steps?" is — at worst, just run the machine for X steps, or brute-force every proof shorter than X. We could also probably use probabilistic & interactive proofs, I dunno.)
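(And here's a toy illustration, my own and not from any paper, of why the finite bound rescues decidability: "does this program halt?" is undecidable in general, but "does it halt within X steps?" you can settle by just running it.)

```python
def halts_within(program, start_state, max_steps: int) -> bool:
    """Decidable, bounded halting check: just run the program, give up after max_steps.

    `program` takes a state and returns ("halt", result) or ("continue", next_state).
    """
    state = start_state
    for _ in range(max_steps):
        status, state = program(state)
        if status == "halt":
            return True
    return False  # didn't halt *within the bound* (it might still halt later)

# Toy program: counts down to zero, then halts.
countdown = lambda n: ("halt", n) if n <= 0 else ("continue", n - 1)

print(halts_within(countdown, 5, max_steps=10))   # True
print(halts_within(countdown, 50, max_steps=10))  # False (not within 10 steps)
```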
A blog by a queer polyamorous rationalist

I don’t call myself a “rationalist”, but I’m definitely adjacent to that subcommunity. Ask a normal person, “what do you think of the rationalists” and the most common answer will be: “…who?” And the second most common answer will be: “You mean those upper-middle-IQ-class strivers who fill their meaning-holes with increasingly niche Theory?”
Anyway, rationalism does not have a good reputation, and Ozy Brennan's blog… probably won't help with that either. But if your misgivings about the rationalist community were "they're a bunch of cishet techbros", I hope a great queer & poly rationalist writer may be an antidote!
Some of my favourite writings from Ozy:
- The Life Goals of Dead People — Advice for excessively-guilty people
- Other people might just not have your problems — A reminder that "deep down we're all alike" isn't empathy or wisdom, it's arrogance. Different people really are different; like fundamentally different. For better & worse, we all need to find our own way.
- Differential diagnosis of loveshyness — Ozy is also a life coach; this is the collection of their advice for straight men struggling with romance (though this advice could generalize to other demographics).
- Interviews with a couple researchers on AI sentience/welfare and AI personalities/societies. Strange times.
Link to Ozy's Substack!
A blog by a Princeton philosopher

An up-and-coming blog by Jack Thompson, a philosopher of mind at Princeton! My top recommended posts of his so far:
- Antifragile: A Non-Bullshit Version — Taleb's work contains lots of extremely useful insights, but buried under a lot of obnoxiousness. Jack's "non-BS" exposition is the best I've seen so far
- Free Will & Marble Machines — "High-level stuff can cause (low-level) stuff!" Short post, but helped a lot of ideas finally "click" together in place. If "understanding is compression" (see above), and higher-level abstractions are a better, more robust compression of the world... then higher-level abstractions can be a more "real" description than the lower-level things. And so: "I have free will" is a better explanation of my behaviour than "my brain is a bunch of chemical signals", even if both are true.
- Three limits on understanding language — An accessible intro to 3 deceptively simple yet important ideas, in the philosophy/computer-science of language. Which may or may not apply to AI.
Link to Jack's Lab's Substack!
Aella defends her sexy citizen science
(content warning for this section: discussion of kink, and separately, pedophilia)
Richard Feynman once gave a lecture warning about cult-like idolization of "Science™”. Y’know, like starting a paragraph with “Richard Feynman once gave a lecture…” But also: getting hung up on the surface of science — the prestige of the journals, the jargon, the p-values, etc — and not the real heart of science: are you actually trying to figure stuff out?
Alas, formal credentials only loosely correlate with actually-good science. There's bad science in the highest-prestige peer-reviewed journals. (See: the replication crisis; only in 37% of papers in top-tier journals does the statistics code even run correctly.) And there's good science done by "amateurs" with little to no formal training! (e.g. Mendel, Faraday, Nightingale, etc.) Sure, if you were told about two studies, and the only thing you knew about the studies is that one was done by the National Science Foundation and one was done by a random blogger for $5, then it's reasonable to guess the NSF study is better. But you can just read the studies. And sometimes, a random blogger's $5 study can actually disprove a famous NSF finding.
In a better world, we would celebrate the heart of science, not the surface. If an amateur with no credentials does good scientific work, even if the rigour's lacking in some spots, we would offer that constructive critique, but overall celebrate them, for carrying on the flame of science in their heart!
In this world, amateur scientists like Aella end up being the target of harassment, doxxing, and stalking campaigns.
Long story short, Aella is an autistic sex worker who also does sex statistics research. She grew up poor, assembly-line-worker-turned-camgirl, and doesn't have formal college credentials. So, she "just" posts her data & code on her blog. For whatever status-game bullshit reason I don't understand, the "trust peer-reviewed Science™" crowd continually harass her & spread false rumours about her. (Or, more realistically, people hate her for being a sex worker / autistic / celebrity first, then make up rumours & bad-faith critique of her scientific work.)
Even if her work was bad, the response would be disproportionate. Just ignore bad stats from a random blogger. But Aella's work is good. Like, on par or better than mainstream research good.
Check out this graph of ~250 fetishes/paraphilias:

(Click to see full resolution! ⚠️ NSFW TEXT. source)
The big difference about Aella's research is that her sample sizes are huge. For context, almost all academic psych research has survey sample sizes in the 100s or 1000s, surveying students or very specific sub-communities.
Meanwhile, Aella's Big Kink Survey currently has ~970,000 respondents from the general "normie" population!
(This isn't overkill! Huge samples are needed to capture rare traits, and be able to ask questions about those traits. For example, her survey has responses from ~13,000(!!) people who admit pedophilic attraction (note: regardless of whether they endorse their own attraction, let alone acted on it), which was ~1.3% of her sample. And yes, this is on par with the estimates of pedophilia in the general population, from other peer-reviewed studies.[1] And 13,000 is a way bigger sample size than most psych research! This is important if you want to rigorously study subgroups & correlations, like, "is being sexually abused as a kid associated with later pedophilic attraction, and if so, by how much?" Knowing is half the battle, and this kind of knowledge will help us reduce abuse.)
How does Aella get such huge sample sizes? Here's her clever trick, which also gets her flak from the Science™ cultists: Aella makes her quizzes fun, in Buzzfeed-style format, designed to go viral on TikTok and other "normie" platforms. It's cringe, low-status, and it works. For example, her 15-minute survey, Was Your Childhood Heaven Or Hell? asks you dozens of questions on adverse childhood experiences, then ranks how fucked your childhood was relative to every other test-taker, and to fictional characters. (I got 7th percentile, "as bad as Voldemort's childhood". Thanks. Thanks Aella. Reader, you get one (1) guess what happened to me when I was underage.)
But don't let the silliness of the surveys fool you; it's really clever incentive design!
- Unlike Buzzfeed, her surveys are really long, which helps filter out trolls & non-attentive survey-takers. (Though, suggestion for Aella: she should probably use validated attention check questions).
- But unlike academic surveys, her surveys offer no school credit or money, only the intrinsic reward of learning something about yourself (e.g. how kinky are you relative to others). Since the only reward is intrinsic, this incentivizes honesty. (The surveys being online, anonymous & reassuringly non-judgmental, also helps with honesty.)
The "fun" of Aella's surveys gets her a lot of flak, coz they're not "serious" enough. But universities have successfully used fun games to recruit large numbers of citizen scientists: FoldIt for protein folding, Zooniverse for astronomy, nature, history, etc.
There's many other critiques (good-faith & not) of Aella's methodology, so a recent post from Aella goes through the top 51 academic studies on fetishes from the top journals, and compares/contrasts:

From Aella's post, "Me vs. the Entire Field of Fetish Research":
- "Aella's sample is just a convenience sample" — 80% of the top papers are also convenience, e.g. university students. Aella deliberately designs her surveys to appeal to normies & go viral on TikTok, which is what we want when estimating things in the general population. Even if it seems cringe or low-status to make viral Buzzfeed quizzes.
- "Online samples don't count" — 50% of top papers use online samples.
- Aella's surveys are fully anonymous, important for honesty in taboo-kink surveys — only 40% of the top papers are confirmed anonymous.
- Aella's surveys don't use targeted groups (e.g. surveying people on a BDSM forum or club) which introduces lots of bias — 60% of top papers do use targeted groups.
- "Aella's sample is demographically biased" — every survey is biased. The best you can do is report the demographics of each survey-taker (in an anonymity-respecting way), so that better statisticians can correct for bias later. Aella reports the demographics in her data. (A toy sketch of this kind of correction follows this list.)
- "Aella's sample is full of trolls messing up the data" — if that were true, it'd be a massive coincidence for people to troll randomly in such a way that her results are consistent with the rest of the peer-reviewed literature. Her results are consistent with the rest of the peer-reviewed literature. Yes, even the 1.3% pedophilia estimate. (Copying footnote here again:[1:1])
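(For the curious, here's a bare-bones sketch of what "correct for bias later" can look like: post-stratification reweighting, so the sample's demographic mix matches the population's. Made-up numbers, one demographic variable, and definitely not Aella's exact procedure; real re-balancing uses several variables at once.)

```python
# What the population looks like vs. what the survey sample looks like:
population_share = {"female": 0.50, "male": 0.50}
sample_share     = {"female": 0.70, "male": 0.30}

# Each respondent gets weighted up or down to fix the mismatch.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

respondents = [
    {"gender": "female", "has_fetish_x": True},
    {"gender": "female", "has_fetish_x": False},
    {"gender": "male",   "has_fetish_x": True},
]

# Weighted prevalence estimate, instead of a raw average:
total_weight = sum(weights[r["gender"]] for r in respondents)
weighted_yes = sum(weights[r["gender"]] for r in respondents if r["has_fetish_x"])
print(f"Reweighted prevalence of fetish X: {weighted_yes / total_weight:.0%}")
```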
Point is: Yay citizen science, Aella's cringe Buzzfeed-esque surveys are legit, to be cringe is to be free.
Take Aella's currently-open surveys here!
Includes her famous 40-minute-long Big Kink Survey.
The (anonymized, demographically-rebalanced) Kink data just came out yesterday!
Read more about Aella's work on Asterisk Magazine
Check out her Substack!
(non-sex-related, but I appreciated her vulnerable post on the cultural gulf between her lower-class factory-worker origins, and higher-class Silicon Valley elites.)
Apparently she made an icebreaker card game called Askhole?
I haven't played it yet, but wow these are some asshole questions
Audrey Tang translated AI Safety for Fleshy Humans to Taiwanese!
Audrey Tang is the Digital Minister of Taiwan, a pioneer in digital democracy, and all around awesome person. (She & Caroline Green also have a book coming out in May, about bottom-up democratic AI governance! I'm helping illustrate the book.)
Anyway, Audrey helped translate my 80,000-word (book-length) series on AI Safety to Taiwanese!

Here's the link! To-siā, Ms. Tang!
🍭 Misc
LeChat: the French chatbot, c'est pas trop mal
They're not paying me to promote this. Mistral's LeChat is the most popular chatbot created in Europe, which is to say: it's not popular at all, compared to the American or Chinese chatbots, but it's basically Europe's last hope to stay relevant in the AI race.
![The top post on r/MistralAI, user [deleted] saying “Please, Mistral, you're EU's only hope” with a meme drawing of a guy poking LeChat with a stick.](../content/stuff/2026-02/sb/lechat.png)
(the current top post on r/MistralAI)
After being a Claude-only user for years, this year I'm finally splitting my usage about 50-50 between Anthropic's Claude & Mistral's LeChat. To be upfront, LeChat's definitely not as capable as Claude. LeChat's not even as personable as Claude. I stick with Claude for code & deep research & the occasional emotional/personal chat; LeChat's "only" good enough for everything else, like quick explainers & advice, and shallow research. (If I had to make up a totally fake number, I'd say LeChat is "30% as good as Claude Sonnet".)
Despite that, here's some reasons I like LeChat & want to promote it:
- As previously mentioned, it's basically Europe's only hope. And in the current AI political landscape, where the top players are all American or Chinese, I'd like to throw the tiniest bone towards a balance of powers. I'm suuuure my $15 a month will make a difference. /half-sarcastic
- It's made in France, where electricity is 95% fossil-fuel-free. (I mean, AI's energy use is kind of a nothingburger — a year of chatbot use uses less energy than 5 hot showers — but, still, I'm happy to support low-carbon electricity.)
- It's shockingly fast, faster than even Claude's fastest model. (...though, granted, this is probably just because LeChat isn't under heavy demand yet.)
- Mistral supports open-source! They release open-source LLMs, both big & small, alongside their flagship product. (In contrast, Anthropic has no open-source version of Claude. Anthropic is the last hold-out, among the major AI companies. Even the infamously-not-open OpenAI now has an open-source version, including ones tailored for safety/policy.)
- The AI Safety community has previously been very wary of open-source LLMs. (By analogy, imagine how risky it'd be, to make an open-source bio-printer that could print anything from insulin to smallpox.) But over the last few years, I've finally come around to endorsing bottom-up governance of AI. (see: d/acc) Or, more accurately: I've come around to losing all hope in top-down governance of AI. Because: 1) the world's leaders have proven they can't coordinate for jack shit (see: Covid), and 2) when powerful people do coordinate, it's "let's create Kiddy-Fiddling Island". Actually pause for a moment. Consider the alternate world where Jeffrey Epstein never got caught, and he & his posse successfully funded the creation of an AGI aligned to their values, and re-shaped the human species to their desires. (And Epstein did pour a lot of money into AGI. This alternate world could have happened. It still can.) Just as there are fates worse than death, there are fates worse than human extinction. Misaligned AI "only" risks extinction. Aligned AI, aligned to the values of sociopaths in power, risks a fate worse than extinction. So: fuck it. Open source AI. Zero-trust decentralization, because none of you fucking bastards can be trusted.
- LeChat's logo is a cat! :3
The Most Dangerous Writing App

The Most Dangerous Writing App is a writing app where, if you stop writing for more than a few seconds, it will delete everything you've written. This is a good (good?) way to shut off your inner perfectionist, to get your foot off the mental brakes so you can smash that accelerator. A way to achieve the advice "Write drunk, edit sober", without needing three livers. Writer's unblock!
I used this app to begin the drafts of several blog posts (including one that recently hit #2 on Hacker News!). I've also found it helpful for personal journalling, just to get my own thoughts & feelings out to myself.
App's free & online, no download needed! If you want to masochistically unblock your creative writing/journaling, check it out:
Official version — Original version
That's all my Signal Boosts for this winter! Stay warm, and take your Vitamin D.
❄️,
~ Nicky Case
Aella's survey finds 13,000 people reported that attraction out of 970,000, so that's 1.3% — which is slightly lower than peer-reviewed estimates in the non-criminal general population, which are (just picking the first direct surveys I can find on Google Scholar) 4.1% for German males, 6% for males and 2% for females, 2.13% in Serbia. Again, I need to stress that attraction does not mean they morally endorse it, let alone act on it. (Analogy: the majority of men and women fantasize about murder, but the majority of people don't endorse murder, let alone act on it.) Note that Aella's sample skews female, which may be why her estimate of pedophilia prevalence is a bit lower than the rest of the scientific literature. ↩︎ ↩︎