Here Is What I See
No credentials. No name. No business model.
Just pointing at patterns and asking: do you see it too?
No degree. No CFA. No analyst program at Goldman or Morgan Stanley. No MBA from a school whose name functions as a credential.
Everything here comes from doing the thing, failing at the thing, and extracting what I could from the failure. Repeat for years. That's the education.
I'm not part of the machine. I'm watching it from outside, trying to figure out how it works, then translating what I see for people who don't have a seat inside either.
The Observation
A few years ago I caught myself doing something I thought I understood well enough to avoid. I was holding a position that had moved against me, and I noticed I was seeking out information that confirmed I was right to hold. Not consciously. The filtering was happening before I chose to filter. By the time I was evaluating a piece of analysis, the evaluation was already tilted.
This wasn't weakness of will. It wasn't a lapse in discipline. It was something more structural. The bias wasn't an urge I could resist through effort. It was a distortion in how reality appeared to me. The counterfactual wasn't even visible as a real possibility. I couldn't resist what I couldn't see.
That observation became The Knowing Problem. Not as advice. Not as a system for self-improvement. As a description of how cognition actually operates when stakes are involved. The book exists because I noticed the mechanism running in myself and wanted to map it precisely enough that others could recognize it too.
Writing it didn't make me immune. Understanding a bias doesn't dissolve it. But it gave me language for what was happening, and language creates a chance. You can't examine what you can't name.
The Structure of Distortion
Here's what I learned from that observation: the distortion is always structural, never merely personal. It's not that some people have biases and others don't. It's that certain configurations reliably produce certain blindnesses. The configuration is upstream. The blindness is downstream. Change the configuration and you change what's visible.
This applies to institutions. An analyst inside a system that generates revenue from trading commissions will see differently than one who doesn't. Not because of corruption. Because perception itself is shaped by where you stand. The system isn't distorting the analyst's reporting. It's distorting what the analyst perceives as true in the first place. By the time the report is written, the slant is already baked into what seemed worth noticing.
This applies to credentials. When you see that someone went to a certain school or worked at a certain firm, something shifts in how you evaluate their claims. The credentialing information arrives before the claim. It shapes the cognitive environment in which the claim lands. You're not evaluating the same claim you would have evaluated without that context. The credential doesn't just add information. It reorganizes the space.
This applies to payment. The moment someone pays for your analysis, the relationship changes. You start to perceive their satisfaction as relevant to whether you're doing good work. This isn't selling out. It's not even conscious. It's just how attention works. What pays you becomes salient. What's salient becomes overweighted. The distortion precedes any decision to distort.
Understanding this is what led to every structural choice I've made.
Why Anonymous
Credentials are cognitive shortcuts. They exist because you can't deeply evaluate every claim from first principles. Someone else vetted this person. You can relax your scrutiny and outsource the verification. That's useful. You'd never get through a day if you couldn't do it.
But the mechanism doesn't distinguish between legitimate and illegitimate claims. It just reduces friction for anything coming from a credentialed source. The same heuristic that helps you trust a doctor's diagnosis also helps you accept a Goldman analyst's price target without doing the math yourself. The shortcut operates on the source, not on the claim.
If I attached a name and a resume to this work, that shortcut would activate. People who agreed with me would do so partly because of who I appeared to be, not just because of what I was showing them. People who disagreed would have a person to dismiss rather than an argument to address. The credential would become a third party in every interaction, tilting evaluations before they began.
I'd rather force the harder path. Without credentials, you have to actually look at what I'm pointing at. You have to evaluate the logic. You have to check whether the pattern is really there. The claim has to stand alone because there's nothing else to lean on.
This makes my job harder. I can't borrow credibility from institutions. I have to earn it sentence by sentence. But it also keeps the signal cleaner. When someone sees what I'm pointing at, they see it because the pointing was accurate, not because my resume made them inclined to believe.
Why Free
Payment changes what you notice. Not what you report. What you actually perceive as significant.
If people were paying me monthly, their continued payment would become salient. Subscriber retention would show up in my awareness as a thing to be attended to. I wouldn't have to decide to optimize for it. It would just start appearing in my peripheral vision when I sat down to write. The question "will this keep them subscribed?" would be present in the room, even if I never consciously asked it.
That presence would shape what seemed worth saying. The pressure wouldn't feel like pressure. It would feel like judgment. Ideas that might threaten retention would seem less compelling. Ideas that might reinforce it would seem more interesting. The filter would operate upstream of conscious choice.
There's also the rhythm problem. A subscription needs to feel worth paying for. That means producing regularly. But insight doesn't arrive on a schedule. Sometimes there's nothing to say for weeks. Sometimes the right move is to watch and wait. A subscription model penalizes silence. It makes noise look like diligence and patience look like neglect.
Free removes the presence. There's no retention to attend to. No rhythm to maintain. Nothing in my peripheral vision except the question of whether I'm seeing clearly. When I don't have something to say, I don't have to manufacture something. When I do have something, I don't have to wonder if I'm saying it because it's true or because the audience needs content.
The relationship stays clean. I'm not trying to get anything from you. I'm trying to show you something. Those are different orientations, and they produce different work.
Why Outside
Systems generate their own blindnesses. This isn't a moral claim. It's mechanical. When you're inside a structure, the structure determines what questions seem reasonable to ask. It determines what data seems relevant to gather. It determines what conclusions seem plausible to draw. The boundaries of the system become the boundaries of the thinkable.
Institutional analysts have access I don't have. They talk to management. They have data terminals that cost tens of thousands of dollars a year. They have teams running models. What they produce has resources behind it that I can't match.
But they also see through frameworks shaped by where they sit. They're producing research for clients who need ideas that work at institutional scale. They're maintaining relationships with companies they cover. They're operating inside incentive structures that favor certain kinds of conclusions. None of this requires bad faith. The shaping happens before bad faith would even be relevant.
I'm outside those structures. I don't have the access, but I also don't have the constraints. I can follow a line of reasoning into conclusions that would be uncomfortable to publish from inside. I can stay silent when I don't see anything, without anyone asking why I'm not producing. I can be wrong in public without it affecting a career track.
Outside isn't better. It's just different. Different vantage, different blindnesses, different things visible. Sometimes what's obvious from outside is invisible from inside. Sometimes the reverse. Neither position has a monopoly on clarity.
But the scarcity runs one direction. There's plenty of institutional research. There's less from people with no institutional position to protect.
The Method
Teaching creates hierarchy. I know something you don't. My job is to transmit it downward. You receive. The relationship is vertical, and the authority flows one direction.
Persuading creates opposition. I have a position. My job is to move you toward it. You're the target. I'm applying force. Even when the force is gentle, the structure is adversarial.
Both modes run into the same problem: they activate exactly the biases The Knowing Problem describes. Teaching triggers authority heuristics. You defer to the teacher's expertise rather than evaluating the claim. Persuading triggers resistance. You defend your current position rather than looking freshly at what's being presented. The modes themselves contaminate the transmission.
Pointing is different. I've noticed something. I'm describing it as precisely as I can. You look where I'm pointing. You either see it or you don't.
There's no hierarchy. I'm beside you, not above you. We're both looking at the same thing. There's no opposition. I'm not trying to move you anywhere. I'm trying to share a vantage point.
The goal isn't to teach you something I know that you don't. It's to point at something you might already half-see so you can recognize it clearly. Not transmission of knowledge. Joint observation of a pattern.
This mode can fail. If I point and you look and you don't see it, the failure is mine. Either the pattern isn't there, or my pointing wasn't precise enough. There's no fallback to "trust me" or "study more." The mode is self-correcting because it's immediately checkable.
The Accountability
Most analysis comes with shields. The institution's name. The analyst's track record, selectively presented. The complexity of the model, which most readers can't evaluate. The relationships with management, implying access to information you don't have. These aren't necessarily dishonest. But they function as buffers between the claim and the checking. If the analysis is wrong, there are explanations that don't require admitting the analysis was wrong.
I've stripped away the shields.
No credentials to defer to. No institutional reputation to borrow. No model complexity to hide behind. No access to imply. Just the claim, the evidence, and the mechanism connecting them.
You can check all of it. The filings I'm looking at are public. The logic I'm using is visible. The pattern I'm pointing at is either there or it isn't. If I'm wrong, I'm just wrong. There's nowhere else to point.
This exposure is uncomfortable. Every piece is a bet that can be called. But it also means that when the analysis holds, it holds for the right reasons. Not because I convinced you. Not because my credentials made you inclined to believe. Because you looked at the same thing I looked at and saw what I saw.
That's the only kind of agreement worth having.
The Motivation
Strip away the credentials and there's no authority to borrow. Strip away the money and there's no retention to optimize. Strip away the institutional position and there's no career to protect. What remains?
The gap between retail and institutional shouldn't be as wide as it is. Retail investors aren't less intelligent. They have less access, less time, less infrastructure. They're doing this around jobs and families, not as their entire professional existence. The information asymmetry isn't about capability. It's about resources.
If I can stand in that gap. If I can take patterns that are visible from certain vantage points and make them visible to people who don't have access to those vantage points. If I can do it without introducing the distortions that credentials and payment and institutional position create. That's the thing.
I want people to see what I see. That's the motivation that survives when everything else is removed. Not to build an audience. Not to monetize attention. Not to be right in a way that anyone attributes to me. Just: here's what I see. Look. Do you see it too?
The Work
The Knowing Problem
Book: The observation that started everything. Cognitive biases as structural distortions, not personal failings. A map of the mechanisms that shape perception before deliberation begins.
Follow the Watts
Bull Thesis: AI infrastructure operators with contracted revenue, secured power, and facilities under construction. The demand for compute is visible. These companies are positioned to capture it. The thesis tracks whether they actually do.
Waiting for Watts
Bear Thesis: Small modular reactor companies trading at multi-billion-dollar valuations on zero commercial revenue. The demand thesis is real. The execution thesis is missing. Same method, opposite conclusion.
On Objectivity
I hold positions in the AI infrastructure sector. This matters and you should know it.
The Knowing Problem applies to me. Having a position creates exactly the kind of structural bias the book describes. I will be inclined to notice confirming evidence and underweight disconfirming evidence. The inclination won't feel like bias. It will feel like judgment. The distortion will be invisible from inside.
This is why the methodology has to be what it is. Not my impressions. Not my conviction. The public filings. The contract announcements. The capacity coming online. The revenue appearing on the income statement. Things you can verify without trusting my perception.
Follow the Watts tracks whether the bull thesis is actually playing out. If the companies execute, the numbers improve. If they stumble, the numbers show it. The mirror doesn't flatter anyone, including me.
Waiting for Watts applies the same method to companies I have no position in. If they deliver, the data will reflect it. If they continue to trade on narrative while missing milestones, that will be visible too.
The work has to be something you can check against reality, because my perception of reality is compromised by what I own. Objectivity isn't a feeling I have. It's a structure that makes my feelings less relevant.
If the evidence eventually contradicts my positions, I'll have to reconcile that. The work will not lie on my behalf.
I'm not teaching.
I'm not persuading.
I'm pointing.
Look. Do you see it too?