Anthropic has introduced a new feature for its AI chatbot Claude called "Dreams" that gives AI agents a more human-like ability to learn from past interactions. The feature was unveiled during the company's Code with Claude developer conference as part of its broader push toward self-improving AI systems. According to Anthropic, Dreams enables Claude to revisit previous sessions, reorganize memories and generate new insights over time.
The company clarified that the original memory store remains untouched. Instead, Claude generates a separate output memory store that developers can inspect, accept or discard.
Anthropic said that AI agents working alongside users continuously write information into memory stores across sessions. Over time, these memory writes can become cluttered, repetitive and difficult to manage.
"Dreaming is a scheduled process that reviews your agent sessions and memory stores, extracts patterns, and curates memories so your agents improve over time. You decide how much control you want: dreaming can update memory automatically, or you can review changes before they land," Anthropic wrote in a blog post.
"Dreaming surfaces patterns that a single agent can't see on its own, including recurring mistakes, workflows that agents converge on, and preferences shared across a team. It also restructures memory so it stays high-signal as it evolves. This is especially useful for long-running work and multiagent orchestration," Anthropic added.
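The workflow Anthropic describes, reviewing session memories, extracting recurring patterns, and proposing curated entries in a separate store that developers can accept or discard, can be sketched roughly as follows. This is an illustrative sketch only; the names (`MemoryStore`, `dream`, `accept`) are hypothetical and do not reflect Anthropic's actual API.

```python
# Hypothetical sketch of the "dream" cycle: review session memories,
# extract recurring patterns, and write curated results to a SEPARATE
# output store so the original memory store is left untouched.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    entries: list[str] = field(default_factory=list)

def dream(source: MemoryStore, min_count: int = 2) -> MemoryStore:
    """Propose curated memories from repeated observations; `source` is not modified."""
    counts = Counter(source.entries)
    curated = [f"pattern (seen {n}x): {e}" for e, n in counts.items() if n >= min_count]
    return MemoryStore(entries=curated)

def accept(source: MemoryStore, proposed: MemoryStore) -> MemoryStore:
    """Developer reviews the proposed store and merges it in; or simply discards it."""
    return MemoryStore(entries=source.entries + proposed.entries)

sessions = MemoryStore(["user prefers tabs", "ran tests", "user prefers tabs"])
proposal = dream(sessions)           # original store untouched
merged = accept(sessions, proposal)  # or discard `proposal` and change nothing
```

In this toy version, "review before changes land" is just the choice between calling `accept` and dropping the proposal; Anthropic's description suggests the real feature also supports fully automatic updates.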