source:: https://every.to/chain-of-thought/toward-a-definition-of-agi
> In other words, we’ll have AGI when we have persistent agents that continue thinking, learning, and acting autonomously between your interactions with them—like a human being does.
> I like this definition because it’s empirically observable: Either people decide it’s better to never turn off their agents or they don’t. It avoids the philosophical rigmarole inherent to trying to define what true general intelligence _is_. And it avoids the problems of the Turing Test and OpenAI’s definition of AGI.
I like this definition because, in order to meet it, we will need to develop several necessary but hard-to-define components of AGI (a toy loop combining all five is sketched after the list):
1. **Continuous learning:** The agent must learn from experience without explicit user prompting.
2. **Memory management:** The agent needs sophisticated ways to store, retrieve, and forget information efficiently over extended periods.
3. **Generating, exploring, and achieving goals:** The agent requires the open-ended ability to define new, useful goals and maintain them across days, weeks, or months, while adapting to changing circumstances.
4. **Proactive communication:** The agent should reach out when it has updates or questions, or when it needs input, rather than only responding when summoned. It must also accept interruption and redirection from the user.
5. **Trust and reliability:** The agent must be safe and reliable. Users will not keep agents running unless they are confident the system will not cause harm or make costly errors autonomously.
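To make the interplay between these components concrete, here is a minimal, purely illustrative sketch in Python. Every name in it (`Memory`, `PersistentAgent`, `step`, and so on) is invented for this note, and each stubbed method stands in for a genuinely hard research problem; the sketch only shows how the five requirements fit together in one always-on loop, not how to actually build any of them.

```python
import time
from dataclasses import dataclass

# Hypothetical sketch: every class and method name below is invented for
# this note. Each stub stands in for an unsolved research problem.

@dataclass
class MemoryItem:
    content: str
    last_used: float
    uses: int = 0

class Memory:
    """Item 2: store, retrieve, and forget. Forgetting here is a crude
    staleness heuristic; a real agent would need something far richer."""

    def __init__(self, max_age: float = 7 * 24 * 3600):
        self.items: list[MemoryItem] = []
        self.max_age = max_age

    def store(self, content: str) -> None:
        self.items.append(MemoryItem(content, last_used=time.time()))

    def retrieve(self, query: str) -> list[MemoryItem]:
        # Substring match stands in for real relevance search.
        hits = [m for m in self.items if query.lower() in m.content.lower()]
        now = time.time()
        for m in hits:
            m.last_used, m.uses = now, m.uses + 1
        return hits

    def forget(self) -> None:
        # Drop anything unused for longer than max_age.
        cutoff = time.time() - self.max_age
        self.items = [m for m in self.items if m.last_used >= cutoff]

class PersistentAgent:
    def __init__(self) -> None:
        self.memory = Memory()
        self.goals: list[str] = []   # item 3: goals maintained over time
        self.inbox: list[str] = []   # item 4: user can interrupt/redirect
        self.outbox: list[str] = []  # item 4: proactive messages to the user

    def is_safe(self, action: str) -> bool:
        # Item 5 placeholder: a real system needs far stronger guarantees
        # than a keyword check before users will leave it running.
        return "delete" not in action

    def step(self) -> None:
        """One tick of autonomous work between user interactions."""
        # Item 4: user messages preempt whatever the agent was doing.
        while self.inbox:
            self.goals.insert(0, self.inbox.pop(0))

        if not self.goals:
            return
        goal = self.goals[0]
        action = f"work on: {goal}"

        if not self.is_safe(action):
            self.outbox.append(f"Need your input before acting on {goal!r}.")
            self.goals.pop(0)
            return

        # Item 1: record the outcome so it informs future steps,
        # without the user having to prompt for it.
        self.memory.store(f"{action} (at {time.time():.0f})")
        self.memory.forget()

        # Item 4: report progress instead of waiting to be asked.
        self.outbox.append(f"Update: made progress on {goal!r}.")

agent = PersistentAgent()
agent.goals.append("draft the weekly summary")
agent.inbox.append("actually, prioritize the budget review")  # user redirects
agent.step()
print(agent.outbox)                     # the agent reaches out unprompted
print(agent.memory.retrieve("budget"))  # experience persists across steps
```

The point of the sketch is the shape, not the stubs: the loop keeps running between interactions, and the user's messages arrive as interruptions to ongoing work rather than as the thing that wakes the agent up.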
> Eventually, the cognitive and economic costs of starting fresh each time will outweigh the benefits of turning AI off.