Inside Views, Impostor Syndrome, and the Great LARP

By johnswentworth – 6 minute read – 25th September 2023

Epistemic status: a model which I sometimes find useful, and which emphasizes some true things about many parts of the world that common alternative models overlook. Probably not correct in full generality.

Consider Yoshua Bengio, one of the people who won a Turing Award for deep learning research. Looking at his work, it’s clear that he “knows what he’s doing”. He doesn’t know what the answers will be in advance, but he has some models of what the key questions are, what the key barriers are, and at least some hand-wavy pseudo-models of how things work.

For instance, take Bengio et al’s “Unitary Evolution Recurrent Neural Networks”. This is the sort of thing one naturally ends up investigating when thinking about how to better avoid exploding/vanishing gradients in e.g. recurrent nets, while using fewer parameters. And it’s not the sort of thing one easily stumbles across by trying random ideas for nets without some reason to focus on exploding/vanishing gradients (or related instability problems) in particular. The work implies a model of key questions/barriers; it isn’t just shooting in the dark.
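To see concretely why “unitary” matters here – a minimal sketch, not the actual construction from that paper (which parameterizes complex unitary matrices) – note that backpropagating through T steps of a recurrent net multiplies T Jacobians together, so a recurrent matrix with singular values away from 1 makes gradients explode or vanish exponentially in T, while a unitary (or, in the real case, orthogonal) matrix has all singular values exactly 1 and preserves norms. A toy numpy comparison, with made-up sizes:

import numpy as np

rng = np.random.default_rng(0)
n, steps = 64, 100

# Generic random recurrent matrix: scale chosen so its spectral
# radius sits above 1, so repeated application explodes.
W = rng.normal(scale=1.1 / np.sqrt(n), size=(n, n))

# Orthogonal matrix (real-valued analogue of a unitary matrix):
# every singular value is exactly 1, so it preserves norms.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))

v0 = rng.normal(size=n)
v_w, v_q = v0.copy(), v0.copy()
for _ in range(steps):
    v_w = W @ v_w  # norm drifts exponentially (here: explodes)
    v_q = Q @ v_q  # norm stays at its starting value

print(f"||W^t v|| after {steps} steps: {np.linalg.norm(v_w):.3e}")
print(f"||Q^t v|| after {steps} steps: {np.linalg.norm(v_q):.3e}")

Running this, the generic matrix’s norm blows up by orders of magnitude while the orthogonal one’s stays put – which is exactly the instability that paper is organized around avoiding.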

So this is the sort of guy who can look at a proposal, and say “yeah, that might be valuable” vs “that’s not really asking the right question” vs “that would be valuable if it worked, but it will have to somehow deal with <known barrier>”.

Contrast that to the median person in ML these days, who… installed some libraries, loaded some weights, maybe fine-tuned a bit, and generally fiddled with a black box. They don’t just lack understanding of what’s going on in the black box (nobody knows that), they lack any deep model at all of why things work sometimes but not other times. When trying to evaluate a proposal, they may have some shallow patterns to match against (like “make it bigger”), but mostly they expect any project is roughly-similarly-valuable in expectation modulo its budget; their model of their own field is implicitly “throw lots of random stuff at the wall and see what sticks”. Such a person “doesn’t know what they’re doing”, in the way that Yoshua Bengio knows what he’s doing.

(Aside: note that I’m not saying that all of Yoshua’s models are correct. I’m saying that he has any mental models of depth greater than one, while the median person in ML basically doesn’t. Even a wrong general model allows one to try things systematically, update the model as one goes, and think about how updates should generalize. Someone without a model has a hard time building any generalizable knowledge at all. It’s the difference between someone walking around a dark room, bumping into things, roughly remembering the spots where they bumped into things, yet repeatedly hitting the same wall in different spots because they haven’t realized there’s a wall there – versus someone walking around the dark room, bumping into things, feeling the shapes of those things, and going “hmm, feels like a wall running that way; I should strategize to not run into that same wall repeatedly” (even if they’re sometimes wrong about where the walls are).)

General Model

Model: “impostor syndrome” is actually correct, in most cases. People correctly realize that they basically don’t know what they’re doing (in the way that e.g. Bengio knows what he’s doing). They feel like they’re just LARPing their supposed expertise, because they are just LARPing their supposed expertise.

… and under this model it can still be true that the typical person who feels like an impostor is not actually unskilled/clueless compared to the median person in their field. It’s just that (on this model) the median person in most fields is really quite clueless, in the relevant sense. Impostor syndrome is arguably better than the most common alternative, which is to just not realize one’s own degree of cluelessness.

… it also can still be true that, in at least some fields, most progress is made by people who “don’t know what they’re doing”. For example: my grandfather was a real estate agent most of his life, and did reasonably well for himself. At one point in his later years, business was slow, we were chatting about it, and I asked “Well, what’s your competitive advantage? Why do people come to you rather than some other real estate agent?” And he… was kinda shocked by the question. Like, he’d never thought about that, at all. He thought back, and realized that mostly he’d been involved in town events and politics and the like, and met lots of people through that, which brought in a lot of business… but as he grew older he largely withdrew from such activity. No surprise that business was slow.

Point is, if feedback loops are in place, people can and do make plenty of valuable contributions “by accident”, just stumbling on stuff that works. My grandfather stumbled on a successful business model by accident; the feedback loop of business success made it clear that the model worked, but he had no idea what was going on, and so he didn’t understand why business was slow later on.

In any given field, the relative contributions of people who do and don’t know what’s going on will depend on (1) how hard it is to build some initial general models of what’s going on, (2) the abundance of “low-hanging fruit”, and (3) the quality of feedback loops, so people can tell when someone’s random stumbling has actually found something useful. In a field which has good feedback loops and lots of low-hanging fruit, but not good readily-available general mental models, it can happen that a giant mass of people shooting in the dark are responsible, in aggregate, for most progress. On the other hand, in the absence of good feedback loops OR the absence of low-hanging fruit, that becomes much less likely. And on an individual basis, even in a field with good feedback loops and low-hanging fruit, people who basically know what they’re doing will probably have a higher hit rate and be able to generalize their work a lot further.

“Nobody knows what they’re doing!”

Standard response to the model above: “nobody knows what they’re doing!”. This is the sort of response which is optimized to emotionally comfort people who feel like impostors, not the sort of response optimized to be true. Just because nobody has perfect models doesn’t mean that there aren’t qualitative differences in the degree to which people know what they’re doing.

The real problem of impostor syndrome

The real problem of impostor syndrome is the part where people are supposed to pretend they know what they’re doing.

Ideally, people would just be transparent that they don’t really know what they’re doing, and then explicitly allocate effort toward better understanding what they’re doing (insofar as that’s a worthwhile investment in their particular field). In other words, build inside-view general models of what works and why (beyond just “people try stuff and sometimes it sticks”), and when one is still in the early stages of building those models just say that one is still in the early stages of building those models.

Instead, the “default” in today’s world is that someone obtains an Official Degree which does not involve actually learning the relevant models, but then they’re expected to have some models, so the incentive for most people is to “keep up appearances” – i.e. act like they know what they’re doing. Keeping up appearances is unfortunately a strong strategy: generalized Gell-Mann amnesia is a thing, and only the people who do know what they’re doing in this particular field will be able to tell that you don’t know what you’re doing (and people who do know what they’re doing are often a small minority).

The biggest cost of this giant civilizational LARP is that people aren’t given much space to actually go build models, learn to the point that they know what they’re doing, etc.

So what to do about it?

From the perspective of someone who feels like an impostor, the main takeaway of this model is: view yourself as learning. Your main job is to learn. That doesn’t necessarily mean studying in a classroom or from textbooks; often it means just performing the day-to-day work of your field, but paying attention to what does and doesn’t work, and digging into the details to understand what’s going on when something unusual happens. If e.g. an experiment fails mysteriously, don’t just shrug and try something else; get a firehose of information, ask lots of questions, and debug until you know exactly what went wrong. Notice the patterns, and keep an eye out for barriers you keep running into.

And on the other side of the equation, have some big goals and plan backward from them. Notice which barriers generalize across multiple goals, and which don’t. Sit down from time to time to check which of your work is actually building toward which of your goals.

Put all that together, give it a few years, and you’ll probably end up with some models of your own.

https://www.lesswrong.com/posts/nt8PmADqKMaZLZGTC/inside-views-impostor-syndrome-and-the-great-larp

