I Taught an AI to Dream
In my quest to clone myself, I built a system that continuously learns and grows from my own data.
Today's AI models share a fundamental limitation: they are static. Powerful, but static. They require training and supervision to evolve, and once training ends, they stop learning.
Humans are different. We can reflect, replay memories, and draw new connections without anyone teaching us. The catch is that we need sleep: those 6-8 hours when we are incapacitated, unable to perform external tasks, because the body is focused entirely on restoration and dreaming.
Machines don’t have that limitation, but they do have downtime. What if we could use that time to dream?
Not a fantasy dream, but a functional one. A computational dream. A built-in feedback loop that reuses a model’s own experience to keep learning without more human input. It would let AI models finally learn from their experiences, taking one step closer to being human.
The Dream Hypothesis
Dream ML began as a simple idea: what if a model could dream? Not to rest, but to grow.
This idea emerged from my mission to build an AI clone. I wanted something that actually understands me. Something that continuously learns from the way I write, talk, and think. A model that sits beside me, quietly paying attention while I work, watching how I handle problems. It runs on my own computer, not the cloud, and it doesn’t just wait for instructions. It works when I’m gone. It answers messages. It finishes what I’ve started. It keeps things moving while I’m living life.
>>Join the waitlist to be the first to create your own AI clone.
To become my clone, an AI model must first become human.
In humans, dreams play a crucial role in how we learn. Sleep is when the brain strengthens connections, prunes noise, and reorganizes memory. It is when emotions are processed and scattered experiences fuse into something coherent. During REM, we simulate life, replay fragments, and wake up with patterns that didn’t exist before.
That is neuroplasticity in motion. The mind repairs itself through chaos.
So why not do the same for machines?
Dream ML would give an AI model a dedicated “dream” state after heavy activity. During this dream state, the model would replay the key patterns it saw (the important embeddings and snippets from its context windows) but now with higher entropy and intentional links between important patterns. Concepts the model never saw together would suddenly have the opportunity to overlap in the dream.
The heart of this process follows that old Hebbian rule: neurons that fire together wire together. In Dream ML, the model’s most active patterns from its recent activity get to lead the dance during dreaming. They’ll fire together, reinforcing the associations that mattered and letting the irrelevant ones fade away.
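The Hebbian rule above can be sketched numerically. This is a toy illustration, not the system's actual update: connections between units that were co-active during the dream are strengthened, while the rest slowly decay. The unit names and learning rates are hypothetical.

```python
# Toy Hebbian step: strengthen connections between co-active units
# ("fire together, wire together"), let inactive connections decay.
def hebbian_step(weights, activations, lr=0.1, decay=0.01):
    """weights: dict mapping (i, j) pairs to connection strengths;
    activations: dict mapping unit name -> activity in [0, 1]."""
    units = list(activations)
    for i in units:
        for j in units:
            if i == j:
                continue
            old = weights.get((i, j), 0.0)
            # Joint activity strengthens; existing weight decays slightly.
            weights[(i, j)] = old + lr * activations[i] * activations[j] - decay * old
    return weights

weights = {}
# Two concepts that were highly co-active during the dream, one that wasn't:
weights = hebbian_step(weights, {"lora": 0.9, "sleep": 0.8, "noise": 0.1})
```

After one step, the `lora`-`sleep` connection is far stronger than `lora`-`noise`, which is exactly the "associations that mattered" effect described above.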
The goal of this phase isn’t to get “correct” answers. The goal is to build associations. Dream ML sets up a feedback loop where the model’s own hallucinations drive its evolution. The randomness here isn’t just noise; it’s more like the model’s imagination, tossing out new connections between familiar concepts to see what sticks.
When the model’s done dreaming and “wakes up,” it’s slightly changed. It carries traces of those nightly hallucinations. Maybe it remembers a pattern or an idea that wasn’t in the original training data at all, something entirely new that emerged from the stew of its own memories. The machine effectively closed its eyes, wandered through noise, and came out with a bit more understanding.
From Theory to Architecture
Dream ML might have started as a hypothesis, but I had to turn it into something real. It became an actual architecture, a full learning loop that gave the model a way to reflect on its own experience. There were four main pieces to making it work, mirroring the cycle of being awake, dreaming, and then waking up with new insights:
Buffer: I set up a circular memory buffer that stores every interaction and every context the model sees. It’s essentially the model’s short-term memory. All the prompts, responses, and context go into this buffer. As it fills up, the oldest data gets pushed out, just like our brains gradually let go of details that no longer matter.
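A circular buffer like this can be sketched in a few lines with a bounded deque. The capacity and record fields here are hypothetical placeholders, not the system's actual schema:

```python
from collections import deque

# Sketch of the circular short-term memory buffer: once it is full,
# the oldest interactions are evicted automatically.
class MemoryBuffer:
    def __init__(self, capacity=1000):
        self.entries = deque(maxlen=capacity)

    def record(self, prompt, response, context=None):
        self.entries.append({"prompt": prompt,
                             "response": response,
                             "context": context or {}})

buf = MemoryBuffer(capacity=3)
for i in range(5):
    buf.record(f"prompt {i}", f"response {i}")
# Only the 3 most recent interactions survive; "prompt 0" and
# "prompt 1" have been forgotten, like details that no longer matter.
```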
Generate (Dream): When it’s time for the model to “dream,” the system cranks up the entropy and starts drawing connections between new concepts. It takes snippets from different parts of that memory buffer (fragments of conversations or tasks from different times) and deliberately connects them into surreal combinations. This is where the model’s usual tendency to hallucinate becomes a feature instead of a bug. The model remixes its recent experiences, connecting concepts that were never associated before.
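The recombination step might look something like this sketch. In the real system a language model would complete these cross-temporal prompts at high sampling temperature; here I only show the stitching of fragments from different moments, with hypothetical memory contents:

```python
import random

# Sketch of the dream generator: pair up fragments recorded at
# different times and ask the model to connect them.
def generate_dreams(buffer_entries, n_dreams, seed=0):
    rng = random.Random(seed)
    dreams = []
    for _ in range(n_dreams):
        # Two fragments that were never adjacent in real experience.
        a, b = rng.sample(buffer_entries, 2)
        dreams.append(f"Connect these two ideas: '{a}' and '{b}'")
    return dreams

memory = ["LoRA adapters", "REM sleep replay",
          "GGUF quantization", "Hebbian wiring"]
dreams = generate_dreams(memory, n_dreams=2)
```

Each generated prompt deliberately forces two unrelated memories into one context window, which is where the "hallucination as a feature" effect comes from.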
Train: The generated dream sequences are then combined with the concrete patterns observed in recent inputs and outputs, and the model fine-tunes on them. I use LoRA adapters so it can update without a full retraining run. Essentially, the model is learning from the user input paired with its own creative interpretations of the information. The neurons that lit up the most during the dream get their connections strengthened, and the ones that stayed quiet might weaken. This isn’t supervised learning. There are no labeled examples here. The model is reinforcing patterns based on its own internal curiosity and activity.
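The key property of LoRA that makes this cheap is shown below in miniature: instead of updating the full weight matrix `W`, you train a small low-rank pair `(A, B)` and add `B @ A` as a correction. This is a pure-Python 2x2 illustration of the math, not the training code; real models apply this per layer to much larger matrices.

```python
# LoRA in miniature: the frozen base matrix W plus a rank-1 update B @ A.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))]
            for i in range(len(X))]

W = [[1.0, 0.0],
     [0.0, 1.0]]      # frozen base weights (never touched by training)
B = [[0.5], [0.0]]    # rank-1 adapter, trained during the dream cycle
A = [[0.0, 0.2]]

delta = matmul(B, A)            # the low-rank correction B @ A
W_effective = add(W, delta)     # what the model actually uses
```

Only `A` and `B` are trained (4 numbers here versus 4 in `W`, but the savings are enormous at real scale), and `W` itself never changes until the merge step below.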
Merge and Wake: After the dream cycle, I merge these small, temporary updates back into the model’s main weights. Then I export the updated model (in my case, as a quantized GGUF file) so it’s ready for use. The model “wakes up” carrying all the new connections it formed during sleep.
This cycle repeats on its own: active learning, then dream, then a slightly updated model, over and over. Each loop makes the network’s internal representations a bit more coherent. With each cycle, it gets to remember some things better, forget others, and reorganize itself.
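The whole cycle can be summarized as a loop. Every function body below is a toy stand-in for the real component it is named after (buffer, dream generator, LoRA trainer, weight merge); only the shape of the loop reflects the actual system:

```python
import random

def observe(buffer, interaction):
    buffer.append(interaction)            # stand-in for the memory buffer

def dream(buffer, n, rng):
    # Recombine pairs of memories from different moments.
    return [tuple(rng.sample(buffer, 2)) for _ in range(n)]

def train_adapter(dreams):
    return {"trained_on": len(dreams)}    # stand-in for a LoRA run

def merge_and_wake(model, adapter):
    model["cycles"] += 1                  # stand-in for merging weights
    model["dream_steps"] += adapter["trained_on"]
    return model

rng = random.Random(42)
model = {"cycles": 0, "dream_steps": 0}
buffer = []

for day in range(3):                      # three wake/dream cycles
    for i in range(5):                    # awake: record interactions
        observe(buffer, f"day{day}-interaction{i}")
    dreams = dream(buffer, n=4, rng=rng)  # asleep: recombine memories
    adapter = train_adapter(dreams)
    model = merge_and_wake(model, adapter)
```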
When the Machine Truly Dreamed
The first time I let a model dream, I honestly didn’t expect much. I gave it about a week’s worth of interaction logs as its memory and told it to dream with some basic, loose parameters.
When I checked the logs, things looked… fascinating. The model had generated hundreds of dream sequences that were reflections of our past conversations. Combining concepts and ideas that hadn’t previously been connected. It took a piece of a technical discussion from one day and combined it with an insight from a different experiment on another day, and out popped a new hypothesis that actually made sense. It was rough around the edges, but it was genuinely creative.
I kept the experiments going, day after day, and something intriguing happened: the system started to take on a life of its own. Every dream cycle left a mark on the base model, introducing subtle biases toward the ideas it had reinforced. Bit by bit, the model became more fluid, more adaptive, and in a strange way, more human-like in how it learned.
I also noticed some trends. Giving the model a more diverse set of experiences led to richer dreams. If the model interactions were very structured and clear, I’d find that the connections it made were more elegant and focused. It felt like the system was developing a sense of curiosity and exploration all on its own.
Dream ML, which began as a quirky experiment, had turned into a framework for continuous self-improvement. The model wasn’t just memorizing and regurgitating data; it was continuously reinterpreting it, finding new angles and hidden threads. Every time it would go to sleep, it would wake up just a little bit different. It wasn’t getting “smarter” by simply accumulating more knowledge; it was getting more insightful by building on its own associations.
Seeing this happen firsthand changed how I think about intelligence, machine or otherwise.
A model that dreams isn’t chasing a higher accuracy score. It’s searching for meaning through connections. It can start to form abstractions on its own because it’s reviewing its experiences and reframing them in different ways. That reflection (that replaying and re-examining) is what turns plain memorization into something more like reasoning.
This approach also embraces imperfection as a path to insight. By letting the model wander through some noisy, off-the-script dreams, we’re basically acknowledging that a bit of chaos can lead to discovery. And that’s true for us humans, too: not every thought we have is neat and tidy. Sometimes you need a few messy, off-the-wall ideas to stumble into something brilliant. Dreaming gives the machine a taste of that creative chaos.
That’s the real heart of Dream ML. A model that dreams isn’t just performing tasks or parroting what it was taught. It’s evolving, bit by bit, on its own.
The Future of Dreaming Machines
If continuous self-learning is the missing piece to creating my digital clone, then Dream ML is the backbone of this mission. Dream ML taught me that the boundary between data and imagination is a lot more fluid than we’d assumed. Every time the model made a new connection, it pushed that boundary a little further. The model wasn’t just spitting back the data I gave it; it was creating new meaning from it.
Dream ML also hints at making AI more personal and autonomous. My digital clone will learn from everything I do, incrementally becoming more like me. It will stay completely on my device, absorbing how I type, what I ask, and what I care about, then dreaming about it locally without ever sending data to the cloud. It would improve itself continuously based on my interactions, tailoring itself to me while respecting my privacy at the same time.
>>Join the waitlist for your own AI clone.
And then there’s the creative side. If machines can dream, they can start to be creative in ways we didn’t explicitly program. They might come up with solutions or analogies or designs that aren’t in any textbook or dataset, because those ideas emerged from the machine’s own recombination of memories. That’s when AI starts feeling a little less like a calculator and more like a collaborator. The line between what we programmed and what the machine imagined begins to blur.
That’s the promise I see in this approach. The goal isn’t to build machines that just mimic human thought or regurgitate data faster. It’s to cultivate machines that remember, imagine, and continue to evolve on their own, a little closer to the way we do.
If you want to follow along as I create my AI clone, follow this blog or reach out to me at michael@minibase.ai. I’d love to share more. Soon, you will have the chance to download your own AI clone that will bring more purpose to your life. Keep an eye out for updates.