My Answers To The Most Important Questions
Last updated: 3/13/2026
What do you think are the most pressing problems in the world?
I'm defining "pressing" as the problems that are the combination of the most impactful and most urgent.
The problems I believe are most urgent/impactful
The problems that I think are the most urgent, impactful, and pressing problems in the world are those with high neglectedness, high scale, and high tractability.
I might just be in a bubble, but the most important problem in the world, in my view, is the risk that comes from advanced AI. I'm also worried about things like value drift, value lock-in, and civilizational collapse; I think these are possible. Problems such as biosecurity seem less neglected than AI risk, though, and more tractable.
That said, I'm missing a lot of information regarding biosecurity risks and other types of risk.
I'm partially worried about things that reduce long-term human agency. Right now we're living in a weird time where humans have freedom, economic growth, and rapidly changing technology. This will eventually stagnate, because technological progress eventually has to stop; there's a finite ceiling on it. And in that world, human agency to create things that are newer, cooler, and more important doesn't really matter that much. No matter what, we will reach a state where humans like me would not be able to provide significant value to the economy.
I'm very much in the "If Anyone Builds It, Everyone Dies" camp right now. I believe anyone developing a sufficiently smart AI (I don't know exactly where that threshold is) will cause humans to die. I don't think the people in the labs are dumb enough to actively try to build one, but I think they may build one accidentally (it's hard to elicit all capabilities), and I think that could lead to humanity's death.
Question: What are the human factors in research? For example, suppose I have 100 researchers and I bump that up to 1,000 researchers while maintaining the same resources per researcher. Because research output is very power-law distributed, the amount of progress probably won't be exactly 10 times as much; depending on how heavy-tailed the distribution is, it could be less or even more. I want to learn more about the human factors of research productivity.
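One way to make this concrete is a toy simulation. If teams are built by selecting the most productive people from a fixed candidate pool, then growing from the top 100 to the top 1,000 adds strictly weaker members, so output scales sublinearly. The Pareto tail index, pool size, and selection model below are illustrative assumptions on my part, not empirical claims.

```python
import random

def team_output(pool, k):
    """Total output of a team made of the k most productive
    people in a candidate pool (selection, not random hiring)."""
    return sum(sorted(pool, reverse=True)[:k])

# Assumed model: individual productivity is Pareto-distributed
# (heavy-tailed). alpha=1.2 and a 100k candidate pool are guesses.
rng = random.Random(42)
pool = [rng.paretovariate(1.2) for _ in range(100_000)]

small = team_output(pool, 100)    # best 100 researchers
large = team_output(pool, 1000)   # best 1,000 researchers

print(f"top 100:  {small:.0f}")
print(f"top 1000: {large:.0f}")
print(f"ratio:    {large / small:.2f}x")  # strictly below 10x
```

The sublinearity here comes entirely from selection: each of the 900 added researchers produces no more than the 100th-ranked member, so the 10x headcount increase is guaranteed to yield less than a 10x output increase under this model. With random (i.i.d.) hiring instead of selection, expected output would scale linearly.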
In general, problems of global governance seem to be upstream of most of these issues. If we had a single global government, a lot of problems would be alleviated; race dynamics between, say, China and the US and other governments would go away. If humanity as a whole had better priorities, all of these other problems would be solved, but I don't know how tractable that is. For example, we spend something like $100 billion per year on smoking, which is a terrible use of resources. It would be much more logical for people to spend that money on securing the benefits of the future, but people don't. So if someone could solve this problem of humans not prioritizing the right things, I think that would be the most impactful problem one could solve right now, but I don't know how solvable it is.
Positive Impact
Positive impact is making a counterfactual contribution to the long-term future that significantly improves the quality of life of all sentient minds.
Positive impact has the quality of being large in scale: if I wanted to make a more positive impact than I otherwise would, I would impact more people and significantly improve the quality of their lives, and not just the hedonistic quality, but also purpose, meaning, and happiness. Setting the future up to maintain this good order is also important to me. When solving problems, I may prioritize the lives of people who exist right now. Positive impact is not something performative.
Examples: actually implementing some sort of control algorithm at a frontier AI lab would be a huge positive impact. Heading up a third-party nonprofit that audits frontier AI labs is another example of something I think would make a large positive impact.
What kinds of work are you most suited for?
I'm suited for work that has high autonomy and a wide space of possible actions, and that doesn't require rigid structure. I am able to motivate myself, I can learn quickly, and I enjoy systems building.
As of now, one limit I am pushing against is moderate anxiety. I realize that I do not perform well during interviews and difficult conversations. When I believe the stakes are high, I get anxious during meetings and the like, and my ability to think significantly decreases.
What do you most enjoy?
I enjoy work that gives me {autonomy, purpose, mastery, good people to work with}. I value work where, in doing it, I become closer to my ideal self (in terms of things like confidence). I enjoy work that I can break down into component skills and then optimize on those skills.
I really enjoy work where I can build my own tools to get better at it (i.e. knowledge work). I like fuzzy things less.
Times I didn't enjoy work were when I felt bogged down by bureaucracy, when I was explicitly told what to do, or when I believed what I was doing wouldn't actually be that impactful and people were making it more difficult than it needed to be.
I think I can be good at jobs that require leadership and talking to people. I enjoy talking to people, getting better at my communication ability. I really enjoy seeing people grow and mentoring them.
I get a lot of joy from taking on challenges that others would think extremely difficult to pull off.
I enjoy optimizing things (such as in robotics). I really like thinking about how systems can be improved.
I really think I would enjoy doing DevOps-type work.
What are your options?
One thing to consider: counterfactual impact matters. Besides my drive/agency/people skills, I don't know how much more counterfactual impact I could have doing technical research or being a technical lead compared to the "next best guy."
- Pursue technical research in an Astra/MATS/ASF-type capacity. I think I have a high probability of being accepted into Astra when the applications come out, because I know people there and have worked with a researcher there.
Another serious option is a founder or new-organization path focused on neglected problems. I'm really attracted to ideas like "the great refactor" or "the replication engine"; I love these obvious technical solutions trying to tackle society's problems.
The most concrete current test in that direction is AI safety researcher productivity / researcher augmentation. I think there is at least some chance that helping strong researchers avoid operational friction, improve tooling, and speed up validated learning could be high leverage. But I am not yet convinced this is decision-relevant enough to be my main path, so I currently see it as both a possible path and a vehicle for learning whether my fit is strongest in ops, research engineering, or founder-like bottleneck discovery.
What work do I want to do / options that exist
Frontier company policy: developing and vetting policies and practices that frontier AI companies could voluntarily adopt or be required to implement to reduce risks, such as model evaluations, model scaling "red lines" and "if-then commitments," incident reporting protocols, and third-party audits. See e.g. here, here, and here.
inside view
https://blog.bluedot.org/p/i-read-every-major-ai-labs-safety-plan-so-you-dont-have-to