rationalism

2025-09-14T19:23:39.832024+00:00

Last updated: 9/14/2025

Content

I envision a system that does what a hired assistant could do if they reviewed my body camera footage alongside my calendar and typed communications – essentially a life log: parse that data and surface observations that affect my systems. For example, last night I experienced an asthma attack and neither documented it nor took any preventative action. I don't just want to manage future attacks; I want the system to analyze my life log, consult the scientific literature where useful, identify root causes, and work out what habit changes would avoid such episodes and ultimately eliminate my asthma altogether. I also imagine this running locally on my computer, 24/7. (A minimal sketch of the flagging step is at the end of this note.)

This makes me think of AlphaEvolve, and how it could spin up a bunch of agents, each tasked with solving one problem. What if each of my systems were represented by an agent? Then, when I want to make a decision, I ask every system what action I should take next, and a meta-orchestrator that has learned my values weighs the agents' recommendations (also sketched below). This is essentially a framework that optimizes my life for me instead of me optimizing my life myself. I imagine it being intelligent enough to know my values and to reason about them.

One sign that this system is actually very intelligent would be its identifying that one bottleneck is, in fact, its own abilities. If convincing me to give it more resources would be good for me, it would do exactly that; it might suggest I get a better GPU for it so that it can think smarter. Even better would be something recursively self-improving: I have this idea for a life-optimization system, but if I could get just one system that can optimize and improve itself, I wouldn't have to think about all these different modules; it would come up with those modules itself.

So what if I had something like AlphaEvolve, where the judge of how good the system is, is me, or an LLM that has learned my values? (A toy version is sketched below.) How cool would it be to have a 24/7 growth-and-systems-improving agent? I think I could then very easily hit my goal of doubling my goal effectiveness every three months; I'd expect the system to double my goal effectiveness in a single day. Maybe it'll take me a couple of weeks to get the system up and running and work out the bugs, but an autonomous agent whose one goal is to make me the best person possible would be extraordinary.
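To make the life-log review step concrete, here is a minimal Python sketch under loud assumptions: LogEvent, HEALTH_TRIGGERS, flag_health_incidents, and candidate_causes are all hypothetical names, and the keyword match is a stand-in for what would really be an LLM or classifier reading footage transcripts and messages.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LogEvent:
    timestamp: datetime
    source: str  # e.g. "bodycam", "calendar", "messages" (hypothetical sources)
    text: str

# Hypothetical keyword triggers; a real system would use a model, not a word list.
HEALTH_TRIGGERS = ("asthma", "wheezing", "inhaler")

def flag_health_incidents(events: list[LogEvent]) -> list[LogEvent]:
    """Return events that look like health incidents worth root-cause review."""
    return [e for e in events if any(k in e.text.lower() for k in HEALTH_TRIGGERS)]

def candidate_causes(events: list[LogEvent], incident: LogEvent,
                     window_hours: float = 24.0) -> list[LogEvent]:
    """Everything logged in the window before an incident is a candidate root cause."""
    return [
        e for e in events
        if 0 < (incident.timestamp - e.timestamp).total_seconds() <= window_hours * 3600
    ]
```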
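Next, a sketch of the agents-plus-meta-orchestrator idea. All names here (Recommendation, MetaOrchestrator, the particular value weights and update rule) are illustrative assumptions: each life-system agent proposes an action, the orchestrator ranks proposals by learned value weights, and my feedback nudges those weights over time.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    agent: str      # which life system proposed it, e.g. "health", "finance"
    action: str
    urgency: float  # the agent's own 0-1 estimate of how pressing this is

class MetaOrchestrator:
    """Picks among agent recommendations using learned value weights."""

    def __init__(self, value_weights: dict[str, float]):
        self.value_weights = value_weights

    def decide(self, recs: list[Recommendation]) -> Recommendation:
        # Score = how much I value that system * how urgent the agent says it is.
        return max(recs, key=lambda r: self.value_weights.get(r.agent, 0.1) * r.urgency)

    def update_from_feedback(self, rec: Recommendation, reward: float, lr: float = 0.05):
        # Crude value learning: agents whose advice I rate well gain weight.
        w = self.value_weights.get(rec.agent, 0.1)
        self.value_weights[rec.agent] = max(0.0, w + lr * reward)

# Usage: ask every system, let the orchestrator choose, then rate the outcome.
orch = MetaOrchestrator({"health": 0.5, "finance": 0.3, "learning": 0.2})
recs = [Recommendation("health", "use inhaler before bed", 0.9),
        Recommendation("finance", "rebalance portfolio", 0.4)]
choice = orch.decide(recs)
orch.update_from_feedback(choice, reward=1.0)  # I judged the advice good
```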
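Finally, a toy version of the AlphaEvolve-flavored loop in which I (or an LLM that has learned my values) act as the judge. The mutate and judge functions and the schedule example are placeholders; in the real thing, a candidate would be a whole system configuration and the score would be my rating after living under it.

```python
import random
from typing import Callable

def evolve(seed: dict, mutate: Callable[[dict], dict], judge: Callable[[dict], float],
           generations: int = 20, population: int = 8) -> dict:
    """Keep mutating the best-judged candidate system, retaining improvements."""
    best, best_score = seed, judge(seed)
    for _ in range(generations):
        for cand in (mutate(best) for _ in range(population)):
            score = judge(cand)
            if score > best_score:
                best, best_score = cand, score
    return best

# Toy demo: evolve a daily schedule toward the judge's (hidden) preferences.
target = {"sleep": 8.0, "deep_work": 4.0, "exercise": 1.0}

def mutate(cfg: dict) -> dict:
    key = random.choice(list(cfg))
    return {**cfg, key: max(0.0, cfg[key] + random.gauss(0, 0.5))}

def judge(cfg: dict) -> float:
    # Stand-in for my judgment: closer to the target schedule scores higher.
    return -sum((cfg[k] - target[k]) ** 2 for k in target)

print(evolve({"sleep": 6.0, "deep_work": 2.0, "exercise": 0.0}, mutate, judge))
```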