Yeah, so there’s one thing I think is very important, and I’m surprised that this isn’t the consensus view, which is that [artificial intelligence] is inevitable. The idea that every government, every private company, every individual hacker in the world is going to sign on and agree to, like, slow down AI research is laughably naive. And it’s very scary to me that people would rather hand over control of all this development to some sort of, you know, vague global body that controls how we can develop and interact with AI systems – that would make decisions for the rest of humanity.

Now that this is out of the box, I don’t think it’s possible to stop, and we can sort of predict the way things will go. Some trends are kind of obvious: there’s going to be automation of a large swath of the current “white-collar labor market.” Consumer AI systems are really just better and more efficient versions of mental labor. But when it comes to longer-term trends – whether AI will destroy us – these things are fascinating. Through prompting, you can actually get an AI to write software and explain to you how to download it and get it running on your machine – that’s how it escapes the box, right? It’ll become very tricky to discern, like, what is human and what is AI, or what was augmented. The weird effects here are emergent and hard to predict.

When I think about it as an investor and see algorithms that are trained on specific data, maybe the algorithm itself ends up being a commodity, which I think is something nobody exactly saw coming. You know, it’s this sort of interface and application layer that actually ends up being the most valuable component. I do think it’s possible this is one of the singularity technologies, where we create an AI that writes code to create a better AI, and it’s like a recursive loop – which may lead to recursive loops with other novel technologies.