> [!TLDR]-
> #### 1. Software will be built around AI agents (Agent-Native)
>
> As AI coding capabilities mature, future software will no longer be designed only for people; it will treat AI as a "first-class citizen." This architecture has three levels:
>
> - **Level 1 (operation):** The AI can click buttons and adjust settings just like a human, handling tedious operations for you.
> - **Level 2 (code-level access):** The AI can bypass the interface and directly call the software's underlying code. For example, it can surface capabilities the interface doesn't expose (such as rewriting a summary in a particular style).
> - **Level 3 (self-evolution):** The AI can modify the code directly, like a developer: fixing bugs, adding features, even generating a one-of-a-kind "personal edition" of the software for you.
>
> #### 2. Designers will become "toolmakers"
>
> Designers will no longer be blocked by the coding barrier. They will start using AI to build small, bespoke tools that make their work more efficient.
>
> - **Challenge:** Today's AI coding tools (such as Cursor) are still too complex for non-technical users. To reach a wide audience, future tools must hide the code's complexity so that designers can build as easily as stacking blocks.
>
> #### 3. A new profession is born: the "agentic engineer"
>
> The definition of a software engineer will be rewritten. Alongside traditional programmers, a new kind of engineer will appear whose focus is **directing AI**.
>
> - **Core shift:** Their job is no longer writing code line by line but defining high-level goals, breaking down tasks, and coordinating multiple AI agents to execute them. They are willing to trade hand-coding fluency for the leverage of managing a team of AIs.
>
> #### 4. AI training will shift from "obedience" to "independence"
>
> Today's AI is like a toddler that needs constant supervision; it struggles to work independently for long stretches.
>
> - **Future direction:** To achieve true automation (AI running independently for hours or even days), training methods must change. We need to relax the demand that AI be "absolutely obedient" and, as with growing children, let it explore and make mistakes so it can learn to solve complex problems on its own.
## Four Predictions for How AI Will Change Software in 2026
The Every team rings in the new year with bets on agent-native apps, designer-coders, and autonomous AI
December 23, 2025
*TL;DR: Today we’re releasing a special holiday episode of our podcast* [AI & I](https://every.to/podcast), *where* ***[Dan Shipper](https://every.to/@danshipper)*** *and Every COO* ***[Brandon Gell](https://every.to/@brandon_5263)*** *share their predictions for 2026.* ***Watch on [X](https://x.com/danshipper/status/2006395285459677556) or [YouTube](https://youtu.be/srnTvlmTq3U), or listen on [Spotify](https://open.spotify.com/episode/0tJlKB35HnbxFgOh4Kdgx4?si=RHmMI7VmSuK4Gp_W2WroIQ%20) or [Apple Podcasts](https://podcasts.apple.com/us/podcast/four-predictions-for-how-ai-will-change-software-in-2026/id1719789201?i=1000743326720).***
---
Tomorrow is the first day of 2026, and to give our listeners a view of the trends that’ll shape the year ahead, **[Dan Shipper](https://every.to/@danshipper)** had Every COO **[Brandon Gell](https://every.to/@brandon_5263)** on *AI & I* to share their predictions for what’s next. They discussed how software will be built, who will build it, and what it will take for truly autonomous AI agents to become a reality.
You can check out their full conversation here:
[Watch on YouTube](https://youtu.be/srnTvlmTq3U)
Here are four of the predictions they make:
### #1 The next generation of software will treat agents like first-class citizens
According to Dan, 2025 has been the year in which coding agents finally became reliable. A year ago, you could get pretty far with these tools—but unless you were a professional developer, you’d eventually hit a wall of errors and bugs that would stop you cold. Now, with models like [Anthropic’s Opus 4.5](https://every.to/vibe-check/vibe-check-opus-4-5-is-the-coding-model-we-ve-been-waiting-for), you can just keep going. Dan believes this shift will change how apps are built and who builds them, ushering in what he calls “agent-native architecture.” He describes it as a ladder with three rungs:
**Level 1: Anything you can do, the agent can do.** Many apps today have AI assistants you can chat with to help you do things inside the app—but they’re limited to specific tasks. In an agent-native app, the AI would operate the software the same way you would, accessing every button, setting, and feature available to you. Dan sees early versions of this in [OpenAI’s browser Atlas](https://every.to/vibe-check/vibe-check-openai-s-new-ai-browser-atlas). For instance, when someone asks him to add them to a team in one of Every’s shared workspaces such as Notion, instead of navigating through clunky settings pages himself, he just asks Atlas to do it.
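To make Level 1 concrete, here's a minimal sketch of one way an app might expose its user-facing actions to an agent. Everything in it is hypothetical, invented for illustration; the class and action names aren't drawn from Atlas, Notion, or any product mentioned in the episode.

```python
# A minimal sketch of Level 1: every user-facing action is registered in one
# place, and both the UI and the agent go through the same entry point.
# All names here are hypothetical.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class App:
    # Maps action names (the operations buttons and menus trigger)
    # to the functions that implement them.
    actions: dict[str, Callable[..., str]] = field(default_factory=dict)

    def register(self, name: str):
        def decorator(fn):
            self.actions[name] = fn
            return fn
        return decorator

    def perform(self, name: str, **kwargs) -> str:
        # The UI calls this when a human clicks; an agent calls the same
        # thing when asked in natural language. No capability gap.
        return self.actions[name](**kwargs)

app = App()

@app.register("add_member_to_team")
def add_member_to_team(email: str, team: str) -> str:
    return f"Added {email} to {team}"

print(app.perform("add_member_to_team", email="new@every.to", team="Editorial"))
```

The point of the pattern: because the UI and the agent share one action registry, anything a human can click, the agent can invoke.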
**Level 2: Anything the code in the app can do, the agent can do.** Apps run all kinds of processes behind the scenes that users never directly control. For example, **[Cora](https://cora.computer/)** —Every’s email assistant—generates a twice-daily “Brief” that summarizes your inbox, and there’s underlying code that could regenerate that Brief in different writing styles. In an agent-native app, the AI would be able to access that hidden capability: Instead of being stuck with the default summary, you could ask an agent to redo it—shorter, punchier, more formal, whatever you want. The idea is that the agent can reach into the app’s “backstage,” taking advantage of a capability that exists in the code but isn’t exposed as a user-facing feature today.
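Here's a hedged sketch of the Level 2 idea, loosely modeled on the Brief example: an internal function accepts a style parameter the UI never surfaces, and a tool definition advertises that full capability to the agent. None of these names come from Cora's actual codebase; they're stand-ins for whatever the real internal code looks like.

```python
# A speculative sketch of Level 2. The function and tool names are invented;
# Cora's real internals are not public.

def generate_brief(emails: list[str], style: str = "default") -> str:
    # Internal code: the UI only ever calls this with style="default",
    # but the parameter has always existed in the code.
    return f"[{style}] {len(emails)} emails summarized"

# Tool schema handed to the agent. It describes the code's full capability,
# including the style option the interface hides.
BRIEF_TOOL = {
    "name": "generate_brief",
    "description": "Regenerate the inbox Brief in any writing style.",
    "parameters": {
        "type": "object",
        "properties": {
            "style": {
                "type": "string",
                "description": "e.g., 'shorter', 'punchier', 'more formal'",
            }
        },
    },
}

inbox = ["email 1", "email 2", "email 3"]
print(generate_brief(inbox))                    # what the UI shows today
print(generate_brief(inbox, style="punchier"))  # what an agent could unlock
```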
**Level 3: Anything a developer can do, the agent can do.** By this, Dan means that the AI agent can do the kinds of things only developers can do today. In practice, that means a user could ask for a change, and the agent would implement it directly, whether that’s fixing a bug, adding a feature, or modifying how the software works. Sometimes that change might be shipped to everyone; other times, it could result in a personalized version of the app tailored to a single user. Dan says Every is already experimenting with this internally, and while it’s still early, he sees companies like Anthropic and Notion thinking along similar lines—building software where both humans and agents are “first-class citizens,” rather than bolting AI onto apps designed purely for human use.
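Level 3 is the most speculative, and the episode doesn't say how Every's internal experiments work. One plausible shape, sketched below under made-up names, is an app that loads a per-user code override written by an agent, which is how a single user could end up with a personalized version.

```python
# A hypothetical sketch of Level 3: if an agent has written an override file
# for this user, load it; otherwise fall back to the stock behavior.
# The overrides/ directory and all names are invented for illustration.

import importlib.util
from pathlib import Path

def default_sort(items: list[str]) -> list[str]:
    # Stock behavior shipped to everyone.
    return sorted(items)

def load_user_override(user_id: str, fn_name: str):
    # An agent fulfilling a feature request could write
    # overrides/<user_id>.py containing a replacement function.
    path = Path("overrides") / f"{user_id}.py"
    if not path.exists():
        return None
    spec = importlib.util.spec_from_file_location(f"overrides_{user_id}", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, fn_name, None)

def sort_for(user_id: str, items: list[str]) -> list[str]:
    # Personalized build if the agent has shipped one, default otherwise.
    fn = load_user_override(user_id, "default_sort") or default_sort
    return fn(items)

print(sort_for("dan", ["b", "a", "c"]))
```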
### #2 Designers will start building their own tools
Dan points out that designers and creatives have been held back from building full experiences because they couldn’t code, but this is starting to change. He cites **[Lucas Crespo](https://every.to/@lucascrespo)**, Every’s creative lead, as an example: He’s gone from being a traditional designer to someone who vibe codes small apps that make his own work easier.
Brandon, however, raises a concern about how far this will spread. He recalls being afraid of the coding terminal himself, and suspects many designers will feel the same way about working inside something like the AI code editor [Cursor](https://cursor.com/). If tools like Cursor want to expand beyond engineers and reach designers at scale, Brandon argues, they’ll likely need to abstract the code away so the average designer isn’t scared off before they even start.
### #3 The new kind of software engineer will direct AI agents
As AI capabilities have evolved, two kinds of software builders have emerged. On one side are engineers who treat AI as an accelerant to their normal process: They use AI to move faster, but they still read the code, and even write parts of it themselves. On the other side are [vibe coders](https://every.to/working-overtime/it-s-me-hi-i-m-the-vibe-coder), who can get something running without really understanding what’s under the hood.
Dan argues there’s a third category emerging: the [“agentic engineers.”](https://every.to/c/compounding-engineering) Instead of writing code, they’ve redefined the job of software development around directing AI agents. They delegate nearly all of the programming work and [focus on higher-level work](https://every.to/source-code/when-ai-can-do-your-job-who-else-are-you): defining what should be built, breaking problems down, and [orchestrating one or multiple agents](https://every.to/source-code/how-i-use-claude-code-to-ship-like-a-team-of-five) to execute. It’s a conscious trade: Many of them accept that their hand-coding muscle will weaken in exchange for a new one, skill at managing agents, which is fast becoming the primary way software gets made.
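As a toy illustration of that workflow, here's a sketch in which the engineer's contribution is the goal and the task breakdown, while mocked agents execute in parallel. The run_agent function is a placeholder, not any real coding-agent API.

```python
# A sketch of the agentic-engineer loop: define the goal, decompose it,
# dispatch the pieces to agents, review the results. run_agent is a stub.

from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # In practice this would hand the task to a coding agent
    # (via a CLI or API) and return its output for review.
    return f"done: {task}"

goal = "Add CSV export to the reports page"

# The engineer's real work: breaking the goal into delegable pieces.
tasks = [
    "write the CSV serializer",
    "add the export endpoint",
    "add an Export button to the UI",
    "write integration tests",
]

with ThreadPoolExecutor(max_workers=4) as pool:
    for result in pool.map(run_agent, tasks):
        print(result)  # the engineer reviews rather than writes
```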
### #4 The next wave of AI training will index for independence
Dan uses [childhood development as an analogy](https://api.every.to/chain-of-thought/toward-a-definition-of-agi) for where AI agents are headed. At first, you can leave a baby alone for only a few minutes at a time, but over the years, children can play in another room for longer and longer stretches without anything going wrong.
Similarly, a few years ago, large language models could really only handle a single turn at a time; now they can run for 20 minutes—sometimes close to an hour—without needing you to step in. But running for an hour is still a long way from running indefinitely. Dan thinks getting to true autonomy—where it’s economically worthwhile to have an agent constantly doing things without you telling it what to do—is a genuinely hard problem. Achieving autonomy requires continuous learning, a clear sense of goals, and the ability to modify those goals in ways that make sense over time.
Part of the challenge, Dan argues, is how we’ve trained these models. Alignment training is designed to make agents predictable and obedient—to do exactly what you say. And to become truly autonomous, Dan believes agents will need the freedom to explore and make mistakes, something we’ve been hesitant to allow for safety reasons. He predicts the coming year will bring new training approaches and architectures that start to loosen those reins, letting agents do things more independently.
What do you use AI for? Have you found any interesting or surprising use cases? We want to hear from you—and we might even interview you.
**Timestamps**
1. Introduction: 00:01:05
2. Reflections on Every’s growth over the past year: 00:01:34
3. What changes when a company grows from 20 people to 50: 00:09:38
4. How “agent-native architecture” will change software in 2026: 00:11:55
5. Why designers are slated to become power users of AI: 00:17:13
6. The new kind of software engineer that will direct AI agents: 00:23:24
7. Why the next wave of AI training will focus on autonomy: 00:33:42
You can check out the episode on X, Spotify, Apple Podcasts, or YouTube. Links are below:
1. Watch on [X](https://x.com/danshipper/status/2006395285459677556)
2. Watch on [YouTube](https://youtu.be/srnTvlmTq3U)
3. Listen on [Spotify](https://open.spotify.com/episode/0tJlKB35HnbxFgOh4Kdgx4?si=RHmMI7VmSuK4Gp_W2WroIQ%20) (make sure to follow to help us rank!)
4. Listen on [Apple Podcasts](https://podcasts.apple.com/us/podcast/four-predictions-for-how-ai-will-change-software-in-2026/id1719789201?i=1000743326720)
Miss an episode? Catch up on Dan’s recent conversations with founding executive editor of *Wired* **[Kevin Kelly](https://every.to/podcast/how-to-predict-the-future-like-kevin-kelly)**, star podcaster **[Dwarkesh Patel](https://every.to/chain-of-thought/dwarkesh-patel-s-quest-to-learn-everything)**, LinkedIn cofounder **[Reid Hoffman](https://every.to/chain-of-thought/reid-hoffman-on-how-ai-might-answer-our-biggest-questions)**, ChatPRD founder **[Claire Vo](https://every.to/podcast/she-built-an-ai-product-manager-bringing-in-six-figures-as-a-side-hustle-e46be9bc-f426-424d-992d-5a71fd7ac5e4)**, economist **[Tyler Cowen](https://every.to/chain-of-thought/economist-tyler-cowen-on-how-chatgpt-is-changing-your-job)**, writer and entrepreneur **[David Perell](https://every.to/chain-of-thought/how-david-perell-uses-chatgpt-to-write-for-millions)**, founder and newsletter operator **[Ben Tossell](https://every.to/chain-of-thought/how-to-run-a-profitable-one-person-internet-business-using-ai)**, and others, and learn how *they* use AI to think, create, and relate.