Imagination, Imitation & Integrity: The Moral Landscape of AI-Generated Art in the Style of Hayao Miyazaki
"You must see with eyes unclouded by hate. See the good in that which is evil, and the evil in that which is good."
– Hayao Miyazaki, Princess Mononoke
Introduction
In a world where artificial intelligence can crank out breathtaking “Studio Ghibli-style” illustrations on demand, it’s tempting to believe we’ve reached a golden age of creativity. After all, who wouldn’t want whimsical forests, friendly spirits, and childlike protagonists—all conjured at the click of a button? Yet alongside the excitement lies a profound question: Does art mean anything if it isn’t animated by genuine human experience?
Legendary animator Hayao Miyazaki has been outspoken about what he considers an “insult to life itself”—AI-generated art that he believes lacks the feeling and empathy which define true artistry. At Dreambook and Chronicle Creations, we’ve delved into the clash between Miyazaki’s philosophy and today’s AI tools, asking whether these new capabilities can coexist with the child-centered, values-driven storytelling we believe in. Here’s what we uncovered—and why it matters for anyone creating art for children.
Miyazaki’s Perspective: Soul over Surface
Creativity can still be a wonderful adventure.
From My Neighbor Totoro to Spirited Away, Miyazaki’s films evoke a gentle magic grounded in nature, childhood wonder, and human empathy. His animation process is painstaking and personal, reflecting real human emotions and experiences.
In a well-documented encounter, Miyazaki was shown an AI animation prototype—an awkward, zombie-like creature produced by a machine. His reaction was severe: “I am utterly disgusted… This is an insult to life itself.” He found the AI’s output cold and insensitive, devoid of the lived experience that underpins meaningful art.
It’s not that Miyazaki fears technology; he simply believes that creativity springs from life’s authentic joys, sorrows, and nuances—things that a machine, at least for now, can’t replicate. “Animators can only draw from their own experiences of pain and shock and emotions,” he has said. For him, removing that humanity from the process undercuts the very point of art, particularly in content meant for children.
The Rise of AI ‘Ghibli’ Art
Fast forward to 2025, and AI image-generation tools—like OpenAI’s latest DALL·E, Midjourney, and similar platforms—can produce uncanny replicas of the “Ghibli look.” Users online have transformed family photos, memes, and even historical figures into dreamy scenes seemingly plucked from Kiki’s Delivery Service. The speed and fidelity of these tools are staggering, and it’s no surprise that social media has been flooded with these AI mashups.
While fun and often delightful at first glance, such AI-driven mimicry raises questions about ethics and creativity. Is it a heartfelt tribute? A fair use of an influential style? Or does it risk overshadowing the very humanity that gave rise to Studio Ghibli’s aesthetic in the first place?
Imitation vs. Inspiration: Two Case Studies
Great potential sparks great imagination.
Alice and Sparkle – A Rapid-Fire Children’s Book
In one high-profile example, a creator used AI to produce Alice and Sparkle, a children's book illustrated by Midjourney in under a weekend. Critics argued that the AI's training data likely included uncredited artwork from countless illustrators—amounting to a kind of "high-tech plagiarism." The project sparked outrage among professional artists and highlighted the ethical dilemma of using AI to churn out child-friendly art that, while cute on the surface, lacks a human hand or spirit behind it.
Kids Draw Magic – AI as a Creative Partner
On the other side, we see apps like Kids Draw Magic, which invite children to sketch something by hand, then use AI to enhance or complete the drawing. Here, AI is not a replacement for the human artist—it’s an extension of the child’s imagination. The result feels more like a partnership between a young creator and a supportive tool, fostering deeper engagement rather than simply handing down a polished but disconnected image.
Safeguarding Children’s Wonder
And ever it will be, regardless of the medium.
At Dreambook and Chronicle Creations, our core belief is that children deserve art and stories rooted in empathy, warmth, and genuine human insight. So how do we harness AI’s exciting possibilities while honoring that mission?
Champion Originality Over Cloning
Kids are naturally curious. They don't need exact copies of Totoro or Ponyo to spark wonder. Fresh, imaginative visuals—even if gently inspired by beloved classics—can broaden their visual world more than a strict imitation ever could. AI can help us explore new styles, but let's not rob children of art that carries its own unique soul.
Make Children the Co-Creators
Miyazaki's films often rely on quiet moments that invite the viewer to fill in the details. In the same way, AI tools can be used to nurture a child's creativity by letting them guide the visual outcome. When kids contribute their drawings or story ideas and see the AI bring those visions to life, the process is empowering rather than hollow.
Add Human Context and Emotion
Even the most breathtaking AI-generated image can feel empty if there's no real story or meaning behind it. Rather than tossing out AI images in isolation, pair them with thoughtful narration or purpose. Let the machine handle some of the technical tasks, but preserve the role of the human storyteller, who can weave emotional depth into every scene.
Stay Transparent and Value-Driven
If you’re using AI assistance, consider an age-appropriate explanation—something like, “The computer helped draw this, but the ideas are ours.” Children benefit from understanding that creativity is an intentional act, not a magical button. This transparency can spark conversations about how art is made and reinforce the idea that empathy and lived experience are at the heart of meaningful expression.
Where Do We Go from Here?
It’s time to turn the page.
Hayao Miyazaki’s cautionary words serve as a moral compass for anyone venturing into AI-based illustration. He reminds us that art shouldn’t be about effortless replication but about the depth of emotion that breathes life into every brushstroke (or pixel). When we consider how children’s imaginations flourish, it’s clear they need the real thing—stories and visuals with authentic heart.
AI undoubtedly offers exciting new ways to invite kids (and adults) into worlds of magic and wonder. However, it’s on us to ensure that these technologies remain tools for creativity rather than shortcuts to mass-produced imagery. Let’s use AI to empower the next generation of storytellers—to discover their own voices, shape their own fantastical creatures, and explore new universes that reflect genuine human insight.
Conclusion
At Dreambook and Chronicle Creations, we stand with Miyazaki’s belief that art draws power from life itself. As we experiment with AI, we do so carefully and ethically—honoring the human stories, emotions, and experiences that elevate children’s art beyond mere imitation. We hope you’ll join us in sparking young imaginations with integrity, heart, and the spirit of authentic creation.
A Watershed Moment in AI: GPT-4o Image Generator Ushers in the Next Era of Visual Imagination for Dreambook
This week, OpenAI unveiled a major leap forward in artificial intelligence with the release of GPT-4o, its most advanced multimodal model yet—now capable of generating stunningly high-quality images directly from natural language prompts.
For the AI industry, this marks a historic inflection point.
For us at Chronicle Creations, it’s a signal: the future we’ve been building toward with Dreambook is arriving even faster than anticipated.
GPT-4o integrates photorealistic image generation natively within ChatGPT—no extra tools, no stitching together APIs. The results are astonishing: consistent characters, readable text inside illustrations, accurate perspective, stylization across multiple frames, and a level of visual continuity that’s previously been out of reach.
For Dreambook, this unlocks what we believe will become the standard in how children engage with stories in the next five years.
Imagine a child co-creating a story about a young explorer and her dragon companion. Not only can the text evolve dynamically based on reading level and creative input, but now the visuals can keep up:
The same character, with recognizable features, appearing across pages.
A consistent art style that matches the tone and theme of the story.
Scene-by-scene illustrations that evolve over a full-length narrative—no visual dissonance, no mismatched characters.
Expressive faces, emotional continuity, and personalized storytelling at scale.
This has massive implications for childhood literacy, creativity, and learning. Visual storytelling is a core learning modality for kids aged 3–9, and until now, it’s been limited by either pre-generated content or disjointed illustrations that couldn’t keep pace with a child’s evolving imagination.
GPT-4o’s capabilities change that. And they validate our mission.
While Dreambook won’t be built directly on OpenAI’s infrastructure for long-term cost, control, and scaling reasons, we view this launch as a clear preview of what’s coming to the broader ecosystem.
We expect open-source and enterprise players—DeepSeek among them—to match and potentially exceed these capabilities within months, and at a fraction of the cost. That’s not a criticism of OpenAI. It’s simply the shape of the curve now. This is the beginning of commoditized multimodal intelligence—and it’s going to benefit educators, parents, and creators alike.
This is a sign of things to come.
Dreambook is ready for this world. In fact, we’ve been designing for it. The ability to generate narrative-consistent, character-driven, emotionally resonant visual stories is no longer a moonshot. It’s now an infrastructure question—and we’re making the right bets to deliver it at scale.
Congratulations to the OpenAI team for pushing the frontier forward. The ripples from this launch are going to touch every part of the creative world—and reshape what’s possible in storytelling for kids.
We can’t wait to show you what’s next.
—
Chronicle Creations
Building Dreambook for the next generation of storytellers.
Why So Many Kids Struggle to Read — And How Dreambook Helps Them Thrive
You're not alone. If you’re a parent who dreads reading time because it ends in frustration, avoidance, or tears—you’re not failing. You’re facing one of the most common and complex challenges in early childhood: helping a child learn to read with confidence.
Reading isn’t just about sounding out words. It’s about understanding, feeling, imagining. But for many children, the very act of decoding letters on a page can feel overwhelming. In fact, national data shows that 37% of U.S. fourth graders read below basic level, and up to 1 in 5 kids may have dyslexia, a neurobiological difference that affects word recognition and fluency.
The result? Kids fall behind. They lose confidence. And reading, instead of feeling like a magical journey, becomes a daily struggle.
The good news: there are proven ways to help.
Science shows us what works:
Systematic phonics instruction builds decoding skills
Multisensory techniques help kids engage more deeply, especially those with dyslexia
Daily exposure to books (even just being read to!) leads to millions more words encountered by age 5
Personalized and engaging content boosts motivation and retention
So why is it still so hard to put this into practice at home?
That’s exactly the problem Dreambook was built to solve.
Meet Dreambook: A Personalized Reading Adventure for Every Child
Dreambook is a new kind of reading experience designed for children ages 3 to 9. It combines the best of what we know about literacy development with the magic of AI-assisted storytelling.
Here's how it helps your child thrive (a rough sketch of how such a story request might be assembled follows this list):
Phonics-first foundations: Dreambook can generate stories that emphasize specific sounds or phonics patterns your child is working on (like short "a" or "th" words).
Multisensory support: With narration, interactive questions, and visual highlights, children hear, see, and touch the story—engaging more parts of the brain.
Adaptive reading levels: Stories adjust in complexity to meet your child where they are—not too easy, not too frustrating.
Personalization: Your child can become the star of the story. Whether they want to be a brave astronaut or a silly cat detective, Dreambook builds stories around them and their interests.
Built for busy families: No prep required. Tap a button, get a story. Whether it’s bedtime, morning cuddle time, or a 10-minute break after school, Dreambook is ready when you are.
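To make the phonics-first and adaptive-level ideas above concrete, here is a minimal sketch of how a story request might be assembled before it is handed to a text-generation model. The names (StoryRequest, build_prompt) and the prompt wording are assumptions for illustration, not Dreambook's actual implementation.

```python
# A minimal sketch of assembling a phonics-aware, level-adapted story request.
# Names and prompt wording are illustrative assumptions, not Dreambook's code.
from dataclasses import dataclass

@dataclass
class StoryRequest:
    child_name: str
    interests: list          # e.g. ["astronauts", "cats"]
    reading_level: str       # e.g. "early reader"
    phonics_focus: str       # e.g. 'short "a"' or '"th" words'

def build_prompt(req: StoryRequest) -> str:
    """Turn a child's profile into a single request for a text-generation model."""
    return (
        f"Write a short story for a child at the {req.reading_level} level. "
        f"The hero is {req.child_name}, who loves {', '.join(req.interests)}. "
        f"Use plenty of words featuring {req.phonics_focus}, keep sentences short, "
        f"and end on a warm, encouraging note."
    )

if __name__ == "__main__":
    request = StoryRequest("Mia", ["dragons", "space"], "early reader", 'short "a" sounds')
    print(build_prompt(request))  # This prompt would then be sent to the story model.
```

The idea is simply that the child's profile, not a fixed library of books, drives what the story practices next.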
Why it matters
Children who struggle with reading often internalize that struggle: "I'm not smart." "Books aren't for me." We want to change that story.
At Dreambook, we believe every child deserves to feel seen, capable, and curious when they open a book. And every parent deserves tools that make supporting literacy joyful, not stressful.
Try it now
If you're looking for a reading routine that fits your life and sparks your child's imagination, Dreambook is here to help.
We’re currently preparing for a private beta and looking for early testers. If you’d like to be part of the journey, send us a note at:
ai@chroniclecreations.co to join the invite list.
Let’s turn screen time into story time. Let’s rewrite the way kids learn to read.
Cracking the Code of Comedic Children’s Book Covers
Designing a children’s book cover that instantly makes kids giggle is part art, part science—and a whole lot of fun. After diving into extensive research on how humor works for ages 3–8, we’ve discovered several fascinating insights into the visuals, themes, and emotional triggers that can turn a simple cover into a magnet for little readers. Here’s a closer look at what we learned, along with ten funny cover ideas you can take and run with.
Why Funny Covers Matter
Before a child even flips open a picture book, they’re already deciding if it’s “fun.” While adults might scan for author names or read the back cover, young kids rely on immediate sensory cues. Bright colors, silly expressions, and a wild, action-packed scene can hook them at first glance. When children spot something mischievous—a dog covered in paint or an alien wearing underwear—their natural curiosity kicks in. Suddenly, they want to know the “why” behind that ridiculous scenario.
Research shows that kids this age have a strong response to what psychologists call “incongruity.” They know, for example, that cows don’t usually wear dresses, so if they see a fashion-forward cow on a cover, that mental mismatch makes them laugh. Add a goofy face or some exaggerated body language, and you’ve got a one-two punch of humor that signals, “Get ready for a fun story!”
Facing new realities together.
The Visual Ingredients of a Good Giggle
A few visual elements rise to the top when designing a comedic children’s book cover. First are bright, energetic colors. Kids love bold, high-contrast palettes because they convey excitement. There’s a reason so many classic picture books—think No, David!—use oranges, yellows, and reds that pop off the shelf.
Second is expressive character design. Big eyes, gigantic grins, and outlandishly shocked faces let children instantly read the characters’ emotions. Over-the-top expressions don’t just look silly; they help young readers identify the punchline, even before they understand every word in the story.
Lastly, putting action or chaos front and center helps set a comedic tone. Maybe it’s a cat flipping a birthday cake into the air or a child clinging to a flying alien spaceship—whatever the scene, kids love a sense of motion and anticipation. A “frozen frame” of something silly happening tells them, “This book is alive with possibilities.”
How (and Why) Kids Laugh
There’s a whole body of research on children’s humor, and one concept appears over and over: incongruity. Children around ages 3–8 have a budding sense of how the world is “supposed to be,” so when they see something that breaks those rules—like a dragon wearing a chef’s hat—that little mental clash tickles their funny bone. If it’s too bizarre or subtly illustrated, though, they might just get confused. Striking a balance is key: the absurdity must be obvious and presented in a friendly, harmless way.
Kids this age also get a kick out of surprise, especially if it’s a playful reveal rather than a scary one. That’s why opening a book cover to see a character wearing mismatched socks on their head can be an immediate draw—young readers love that jolt of the unexpected.
Another factor not to overlook is mischief or mild taboo. Children thoroughly enjoy witnessing harmless rule-breaking, whether it’s a kid painting on the walls or a monster sniffing stinky socks with a huge grin. Because adults often discourage these behaviors in real life, kids find it exhilarating (and safe) to watch them unfold in fiction.
Recurring Funny Tropes in Kids’ Books
One of the clearest themes across popular children’s titles is the lovable troublemaker: a character like David in No, David! whose grin telegraphs, “I’m about to do something naughty.” It’s a direct line to kids’ own experiences (and secret desires). Another tried-and-true approach is to show animals or objects acting like people—from cows teaching a classroom to crayons going on strike. These visual jokes scream “nonsense!” in a way that feels perfectly logical to a child’s imagination.
Gross-out humor, like giant burps or potty jokes, remains a steadfast favorite in this age range. Even a simple mention of underpants can send a preschooler into fits of giggles. Meanwhile, slapstick visuals—like a character slipping on a banana peel—tap into physical comedy that kids recognize from cartoons.
And while not as common for the youngest readers, “breaking the fourth wall” can be a clever comedic twist. Books where characters acknowledge they’re inside a story (or even speak directly to the reader) create a playful, interactive vibe—something older preschoolers and early elementary kids absolutely adore.
Ten Comedic Cover Concepts (with AI-Ready Prompts)
If you’d like some inspiration, here are ten high-concept ideas that merge all these comedic triggers—perfect for giving illustrators or AI art tools a clear starting point:
Dragon in the Kitchen Chaos
Imagine a friendly dragon crowded into a suburban kitchen, flipping pancakes with its tail as batter splatters everywhere. The child stands on a stool, wide-eyed with astonishment.
Underpants Alien Invasion
Picture a group of goofy aliens bouncing around a backyard, proudly sporting loud, patterned underpants while two kids giggle behind a bush.
The Great Paint Caper
Show a child and a dog in the living room, both covered in splashes of paint, while a parent stands in the doorway with comical shock on their face.
Cow in the Classroom
Depict a cow wearing the teacher's glasses at the blackboard, pointer in hoof, while students laugh and point in delighted confusion.
Kid Boss Day
Place a small child in a big office chair, handing out finger-painting assignments to a group of adults who eagerly comply, blowing bubbles and doodling away.
The Day the Crayons Rebelled
Show crayon characters—complete with tiny arms and protest signs—marching across a child's art table, while the child peeks over the edge in awe.
Monster's Stinky Socks
Draw a friendly, fuzzy monster holding up a smelly sock as green odor lines waft upwards, with a couple of kids in the background laughing and pinching their noses.
The Pants-Tastrophe
Portray a kid in a bustling school hallway whose pants have just fallen around his ankles, heart-patterned underwear on display, classmates reacting with everything from shock to giggles.
My Teacher Turned into a Chicken
Capture the moment a teacher's head transforms into a hen's, complete with glasses perched on a feathery face, while students leap from their desks in wide-eyed astonishment.
The Cake Catapult
Freeze the instant a giant birthday cake flies off the table at a party, with icing splattering onto partygoers who are either ducking for cover or sporting frosted grins.
The dog made him do it.
How Storytelling Shapes Children’s Reality—and How AI Can Help
Stories are often called the “language of childhood” for good reason. Recent research by our team at Dreambook highlights just how powerful storytelling can be in fostering cognitive growth, empathy, and moral reasoning in children. Even more exciting is the potential for AI-generated, personalized stories to accelerate these benefits.
Key Research Highlights:
Cognitive & Emotional Development: Studies show children’s brains process stories similarly to lived experiences—boosting empathy, memory retention, and executive function.
Identity Formation: Through narratives, kids piece together who they are. Cultural tales, family stories, and personal recounts all help them build a coherent sense of self.
AI and Adaptive Learning: Modern AI tools can craft stories that grow with the child’s reading level and interests, seamlessly blending education with imaginative play.
Multimodal Engagement: When text, visuals, and interactive elements align, children’s comprehension skyrockets—especially if the illustrations directly support the story.
Why It Matters for Dreambook:
We’re integrating these research insights into our generative AI platform to ensure children not only have fun but also gain essential social and cognitive skills. Our mission is to turn every story session into a journey of growth, empathy, and self-discovery.
Link to the research:
https://docs.google.com/document/d/1YiOlujWnyQz8d969_WwiBL9mqY6A0U1KIimra2oD_WY/edit?usp=sharing
Mental Simulation: When Imagination Becomes Childhood Reality
Have you ever met a child who seems wiser and more emotionally perceptive than their peers? You might be looking at a young reader who has spent hours immersed in stories that mirror real-life challenges, triumphs, and emotions. Research increasingly confirms what many parents have observed intuitively: children who consistently read (or are read to) can develop advanced empathy, nuanced social skills, and even robust problem-solving capabilities. For instance, a well-known longitudinal study found that children who read daily from an early age showed markedly higher prosocial behavior and fewer emotional struggles in later childhood.
Fiction, in particular, serves as a kind of 'mental simulation.' By placing themselves in diverse characters' shoes, young readers practice understanding feelings, motivations, and resolutions—essentially gaining "life experience" beyond their years. Interactive fiction takes this a step further by asking children to make decisions that shape the story. Picture a "Choose Your Own Adventure" or digital story game—each choice leads down a different path, requiring kids to weigh consequences, empathize with characters, and reflect on values. Over time, these mini "simulations" can sharpen children's critical thinking and moral reasoning.
Historically, educators and child psychologists have praised literature's power to teach moral lessons and emotional resilience—yet modern studies finally give us the data to support those claims.
From a boost in vocabulary and imagination to tangible improvements in empathy, reading opens doors that formal lessons alone sometimes cannot. How can you nurture this growth in your own home or classroom? We recommend creating a cozy reading routine, exploring book series that spark curiosity, and discussing stories together—especially their dilemmas and emotional arcs. By weaving reading into a child’s day-to-day life, you’re helping them “live” countless perspectives—an invaluable head start in emotional intelligence, problem-solving, and overall maturity.
Make It How the Kids Think: Simplifying AI Storytelling by Embracing Children's Narrative Development
“Think left and think right and think low and think high. Oh, the thinks you can think up if only you try!”
— Dr. Seuss
Designing for children is never about dumbing things down; it’s about seeing the world through their eyes. When it comes to storytelling, kids aged 5–8 bring fresh imaginations, unfiltered wonder, and boundless creativity to the table. Yet they’re also still learning how stories work—and how to focus long enough to tell one.
At our company, we asked: How might we harness this energy in a way that simplifies AI story creation for children without stifling their creativity? Below is a snapshot of our research on children's narrative development, coupled with practical ways we can shift our AI design to make it how kids think.
and then…
1. First Stop: Character and Setting
Children naturally start with characters and settings—“Once upon a time, there was a knight in a forest”—and then instinctively move on to the big question: What happens next? By age 5, they’re stringing events together in a sequence: “There was a princess… and then a giant came… and then a knight saved her…” By ages 7–8, they begin introducing clearer conflicts and resolutions.
Implication: We can design our AI to prompt the child immediately after character and setting selection with a simple, open-ended “What next?” question. Visual hints—like a question mark icon over the knight or an image of a broken bridge—tell them something interesting should happen now.
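As a rough illustration of that flow, here is a tiny sketch of a step sequence where the open-ended "What next?" nudge arrives immediately after character and setting are chosen. The step order and prompt wording are assumptions for this post, not a product specification.

```python
# A minimal sketch of the "What next?" flow described above. The step order and
# prompt wording are illustrative assumptions, not a product specification.
STORY_STEPS = [
    ("character", "Who is our hero? Tap a picture!"),
    ("setting", "Where does the story happen?"),
    ("what_next", "What happens next? Tap the question mark for an idea!"),
]

def next_prompt(completed_steps):
    """Return the first prompt the child hasn't answered yet, or None when done."""
    for step, prompt in STORY_STEPS:
        if step not in completed_steps:
            return prompt
    return None  # The story skeleton is complete.

# Character and setting are chosen, so the very next nudge is "What happens next?"
print(next_prompt({"character", "setting"}))
```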
2. Keep It Short (and Full of Surprises!)
Children’s attention spans are brief: they’re all in for the big action, then often race to the ending. We also know “and then…” chains are how they naturally express story sequences, so the narrative can be a bit choppy—and that’s okay!
3. Visual and Audio First
Most 5–8 year-olds are not strong typists. They’re drawn to bright colors, big buttons, and fun images. They also benefit from voice narration.
Implication: Instead of text-heavy options, let them tap pictures for character selection, setting choice, and story actions. Use voice prompts (“Tell me more about your dragon!”) and let them record their own voice. We can transcribe that voice into text if needed, but the creation process should feel less like writing an essay and more like playing.
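Here is a hedged sketch of what the voice-first path might look like in code. The `transcribe_audio` function is a hypothetical placeholder for whichever speech-to-text service ends up being used, not a real API.

```python
# A rough sketch of the voice-first path: the child speaks, we transcribe the
# recording, and the transcript (the child's own words) becomes the story idea.
# `transcribe_audio` is a hypothetical placeholder, not a real service call.
def transcribe_audio(audio_path: str) -> str:
    """Hypothetical placeholder: send the recording to a speech-to-text service."""
    raise NotImplementedError("Plug in a speech-to-text provider here.")

def voice_to_story_idea(audio_path: str) -> str:
    transcript = transcribe_audio(audio_path)
    # Keep the child's wording intact rather than rewriting it into adult prose.
    return f'The child says: "{transcript}". Continue the story from this idea.'
```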
The Bottom Line
When we make it how the kids think, we're celebrating their free-flowing, action-first narratives and channeling them into a digital environment that supports—rather than corrects—their spontaneity. By reducing complexity, offering small choices, layering in fun visuals and audio cues, and scaffolding each step, we can create AI storytelling experiences that children adore.
In the end, it’s not about perfect grammar or polished structure; it’s about giving them the joy of telling their own stories—and feeling proud of what they create.
About the Author
Charles Jameson is passionate about child-centric technology and believes that, with the right design, AI can be a magic key for children’s creativity. By blending early educational research, UX principles, and whimsical imagination, Chronicle Creations works toward a future where kids can freely “and then…” their way into amazing worlds of their own making.
If you found this article helpful, drop a comment or share your own lessons from designing for younger audiences.
The Power of AI Storytelling in Early Literacy: What Every Parent & Educator Should Know
Why Early Literacy Matters More Than Ever
When I think about the books that shaped my childhood, I realize it wasn’t just the stories themselves—it was the magic of hearing them read aloud, the joy of discovering new words, and the worlds that unfolded in my imagination. Now, as I work on Dreambook, I see firsthand how storytelling isn’t just entertainment—it’s a powerful tool for learning, connection, and growth.
Recent research from Scholastic confirms what many of us have instinctively known: early literacy skills predict long-term academic success. Children who start kindergarten recognizing letters and engaging with books regularly perform better in both reading and math by first grade. The simple act of being read to three times a week can significantly boost a child’s ability to recognize words, comprehend stories, and even develop problem-solving skills.
But the research also reveals a deep divide—children from lower-income families often start school at a disadvantage, with fewer early literacy experiences. Without intervention, these gaps persist, impacting their confidence and future opportunities. At Dreambook, we believe that every child, regardless of their background, should have access to high-quality, engaging storytelling experiences. By offering an accessible platform designed to inspire creativity and literacy for all, we aim to break down these barriers and ensure that every child has the opportunity to develop a love for reading.
How AI Storytelling is Changing the Game
Traditional books will always have a place in literacy development, but AI-driven storytelling introduces a new layer of engagement, personalization, and accessibility.
🔹 Personalized Learning Journeys – AI storytelling adapts to a child’s reading level, interests, and comprehension skills, ensuring every story meets them where they are.
🔹 Interactive Engagement – AI-powered stories can respond to a child’s choices, creating immersive, choose-your-own-adventure-style narratives that boost critical thinking.
🔹 Bridging Educational Gaps – Personalized, on-demand storytelling means every child, regardless of background, has access to quality reading experiences.
This is why we built Dreambook—a platform that combines the magic of storytelling with the intelligence of AI to help kids develop a love for reading through personalized, interactive, and accessible narratives.
AI Storytelling & EdTech: The Future of Education
With the rise of AI-powered education platforms, the world of edtech storytelling is evolving faster than ever. Dreambook is at the forefront of this transformation, leveraging machine learning for adaptive learning experiences that cater to each child's unique needs.
🔹 Gamification & Learning Engagement – Interactive storytelling elements, such as voice narration, quizzes, and decision-based story arcs, make learning fun and increase retention.
🔹 Adaptive Learning Systems – Dreambook’s AI-driven platform continuously adjusts difficulty levels and narrative complexity, providing children with a seamless progression in their literacy journey.
🔹 Multimodal Learning Integration – By incorporating text, audio, and visuals, Dreambook ensures that children with different learning styles—whether visual, auditory, or kinesthetic—receive an engaging and effective educational experience.
How Can Parents and Educators Support Early Literacy?
📖 Read to your kids often. Even if it’s just a few pages before bed—it makes a difference.
🎭 Make stories interactive. Ask questions, let them predict what happens next, and encourage creativity.
🖍️ Give them choices. When children see themselves in stories or shape their own adventures, their engagement soars.
The Future of Storytelling: AI as a Tool, Not a Replacement
AI isn’t here to replace traditional reading—it’s here to enhance it. By combining human creativity with AI-powered personalization, we can create a generation of readers who not only consume stories but actively engage with and shape them.
We’re on a mission to make literacy fun, accessible, and personalized for every child. I’d love to hear from fellow parents and educators—how do you make reading exciting for the kids in your life? Let’s share ideas and keep the magic of storytelling alive.
Source: https://nces.ed.gov/pubs2002/2002125.pdf
Join the Conversation:
Drop your thoughts in the comments or message me to chat about the future of AI storytelling and early education!
#EarlyLiteracy #AIStorytelling #PersonalizedLearning #EdTech #Dreambook #Storytelling #Education #AIinEducation #EdTechInnovation #AdaptiveLearning #MachineLearning #InteractiveStorytelling #AIforKids #GamifiedLearning #DigitalStorytelling #ArtificialIntelligence #ChildLiteracy #ReadingDevelopment #FutureOfEducation
Dreambook MVP Progress Update: Assembling Our Team and Defining the Roadmap
Dreambook is on track to launch its Minimum Viable Product (MVP) by August 2025, bringing an innovative, interactive storytelling experience to children aged 6-12. Our team has finalized the platform's 10 core development modules, established a structured roadmap, and completed key milestones, including login functionality, story generation elements, and an extensive market research survey with over 500 participants. One of our biggest breakthroughs is the development of an in-house generative image model, ensuring consistent character visuals—an issue that has plagued other storytelling platforms. By prioritizing creativity, literacy, and adaptive learning, Dreambook is setting a new standard for children's digital storytelling. Stay tuned as we refine our development process and bring Dreambook to life!
Welcome to the latest update on Dreambook, where we're excited to share our progress towards empowering young storytellers aged 6-12 through creative exploration and learning. As we strive towards completing our Minimum Viable Product (MVP) by August 2025, we've made significant strides that we're eager to share with you.
Assembling Our Dream Team:
At Dreambook, success starts with our dedicated team members who bring expertise and passion to every aspect of our platform. Led by key contributors like Brian Brasher, our Art Director, and Muhammad, our Senior iOS Developer, our team is committed to crafting an enriching experience for children worldwide.
Defining the Roadmap and Development Process:
Behind every great product is a meticulously planned roadmap. We've outlined 10 major modules for Dreambook's development journey, ensuring clarity and focus as we progress. Our development process spans from initial concept and art design to rigorous development, QA testing, and continuous feedback integration at every stage.
Progress Towards the MVP:
We're excited to announce significant milestones achieved so far. From establishing robust registration and login functionalities to laying the groundwork for our innovative story generation elements, progress is palpable. Moreover, our recent market research survey on SurveyMonkey, engaging over 500 participants, underscores our commitment to data-driven design decisions that resonate with our users.
Innovating with In-House Generative Image Models:
One of our proudest achievements is the development of an in-house generative image model for Dreambook. This groundbreaking technology ensures consistent and reliable character image generation, addressing common pitfalls seen in other platforms. By providing stable visuals that captivate children's imaginations, we're setting a new standard in interactive storytelling.
Next Steps:
Looking ahead, we're focused on refining our development process to establish a robust, repeatable framework. With user feedback guiding our every move, we're committed to delivering a seamless experience that inspires creativity and learning. Stay tuned as we continue our journey towards the MVP launch and beyond.
Dreambook remains dedicated to revolutionizing children's storytelling through innovation and user-centered design. We invite you to join us on this exciting adventure, shaping the future of educational tools that empower young storytellers worldwide. Thank you for your support, and we look forward to sharing more updates soon!
How We Pivoted to Dreambook: A Journey of Creativity and Purpose
Every great story has a moment of transformation—a pivot where the characters discover their true path. For us at Chronicle Creations, that moment was when we decided to build Dreambook. What began as a general AI-powered storytelling concept, Fantasai, evolved into something far more meaningful: a tool designed to empower children aged 6-12 to create their own stories, learn, and grow through the magic of storytelling and visual creativity.
The Spark of Inspiration
Our journey started with Fantasai, an ambitious project aimed at enabling anyone to craft unique stories with AI assistance. It was exciting and technically innovative, but something was missing. As we worked on refining the product, we noticed an untapped opportunity in early childhood education and creativity. Children, with their boundless imagination, seemed the perfect audience for the transformative potential of AI.
Listening to the Audience
The turning point came when we reimagined one of our tools—a textless prompt system. Initially designed for broader creative applications, we realized its simplicity and accessibility made it perfect for kids. Combined with a story-generation feature, it became clear that this tool could help young learners write, illustrate, and explore their creativity in ways that hadn’t been done before.
We also observed a gap in the market: while many apps focused on teaching kids to read or write, few offered them the ability to create stories—to build their own worlds and characters. This revelation led to a broader theme of fostering creativity and learning through storytelling, and Dreambook was born.
Finding Our Purpose
Dreambook became more than just a pivot—it was a purpose-driven shift. We realized that we weren’t just building another app; we were crafting an adaptive learning tool designed to nurture creativity in kids aged 6-12. Dreambook allows children to bring their wildest imaginations to life through story and image creation, helping them learn to read and write while having fun. It’s about giving children the tools to create, not just consume.
A Team Fueled by Passion
This transformation also redefined how our team worked. We collaborated to integrate innovative design and AI technologies into a seamless, child-friendly experience. Our team brought their expertise to ensure Dreambook would not only function beautifully but also inspire kids with its visuals and design.
Looking Ahead
Pivoting to Dreambook was a leap of faith, but it was also the best decision we could have made. We are building something we believe in—something that matters. Dreambook is more than an app; it’s a tool for kids to explore their creativity, express themselves, and learn in a way that’s both fun and meaningful.
This is just the beginning of our story, and we’re excited to share it with the world. Join us in this adventure as we continue to dream, create, and inspire.
BEFORE & AFTER | THE AI MIRACLE
Getting just the right amount of benevolence in your Celestial Dragon God is now easy: just ask Dall-E 3.
In a lifetime there are moments that resonate so deeply they draw a clear line between what was once possible in this world and what is possible now. It doesn't take a towering intellect to recognize that today's rapidly advancing generative art AI has once again created such a moment. That this moment dovetails with the work our team is directly involved in is something we deeply appreciate.
While concepting Chronicle's logo, we incorporated the themes of an eclipse and threads to represent our story and values: the eclipse for the moment in time when we came together, and the threads for the individual stories each of us contributes to the greater narrative of our creative effort.
As a small team, Radiant embraces anything that lets us leverage our creative potential, so AI naturally plays a large role in our development. Code, concepts, and copy are all assets AI can help produce. Thanks to these tools, we can conceptualize, design, and develop at speeds that were, until now, impossible. So what has changed to warrant such an improvement? Come along with me on a visual journey to find out. It may not all make sense, but it's always been better that way.
Leonardo AI image - I had used this website until Dall-E 3's release. Its time has already come and gone. This ship is not slowing down.
Following on from our previous blog update: since welcoming a new design member to our team, we have begun a total redesign of the website. To start my portion of the work, I set out on the internet in search of reference materials and color swatches, an effort with curious results. I've learned double-time to appreciate what a glacial rate design moves at without constant reference material to inspire progress, and therefore the value of providing lots of it. Sites like Dribbble and browsing Google Images provided the lion's share of references.
Sometimes articles could be helpful as well. As we've narrowed our color redesign down to teal / something, I found several written pieces that provided a wide range of color options and also transformed my interior decorating skills. It's the little lessons we don't expect along the way that can be most valuable.
When pushed in the aurora borealis direction, results were promising.
All of this reference work was done old-school style. I was looking at the art of artists, and that's cool. I enjoyed it, especially Dribbble. If you go there you can look at prominent designers' accounts and see their portfolios: examples of both skill and reference.
Where I took my fateful detour and departed from the well-travelled streets of modern designville came from the usual source: a question. I had an idea I wanted to sort out first with ChatGPT (GPT-4), the text-generative AI. When I logged in to have that conversation, I noticed something new: the Dall-E 3 beta was now available to OpenAI Plus subscribers, of which I am one.
‘What lay beyond the void’ - All sorts of inspiration for the story behind Radiant.
I'd literally written a blog post about how excited I was for this technology not a week prior, and now here it was, available for use. I could see for myself: was it all smoke and mirrors, or would Dall-E 3 usher in the next era of art-generative AI? To test this, I used Dall-E 3 to assist with the concept work for our different teal color schemes. My first request was for landing pages done in the various possible color combinations.
An example produced for a teal and blue landing page.
Another example for red and teal.
While useful, something was missing from these designs for use in our project. It struck me that I’d not specified we are developing for a dark theme. I updated my request to include the intended dark-theme design and the results were impressive.
Teal and black: effortless sophistication.
Teal and gold: mind, mystery, and class
My brain, so tickled by the results, had another storm. Part of the visual ambition of Radiant is that a story accompany the website. We don't want some sterilized experience that feels like a commercial interaction. We want our users to enjoy a sense of immersion, just as we do when we interact with our favourite products.
From inception I'd seen Radiant in my head as some kind of magic gem or precious stone, reminiscent of the one from the much-beloved "Dark Crystal" created by Jim Henson of Muppets fame. If you possessed this gem, I imagined, it would create for you anything your heart desired. Although never explicitly stated anywhere, I used this imagery as a basic creative anchor for how the website could represent something other than the code it is.
Over time the idea of Radiant as a gem evolved into the idea of a constellation. Sky and star images repeated enough times in the concepts for our landing pages to provide the inspiration for the gem to become several points of light in the sky which (for some reason) would grant the wishes of those that found them.
Dissatisfied with the absence of a 'why' for the constellation granting your heart's desire, one small imaginative step away was 'Radiant the Celestial Dragon', who, if contacted, may make your dreams come true. With the question of why this constellation (now a celestial dragon) granted wishes answered, we had the creative foundation of a story that could grow with the site. It was in this request, to incorporate the story of the celestial dragon into its design considerations, that I found the moment this blog entry is named for: the miracle. I had dreamed I would see results like this from interacting with art AI. I'd not, with all my untempered hopes, thought that it would arrive so soon. The designs Dall-E 3 produced for me… well, look for yourself.
We're able to see here Dall-E 3's new ability to write text within an image. As of the beta, the success rate is still patchy, but that it can do it at all is astounding.
The variety and quality of the concepts were staggering. Notably, concepts rarely, if ever, repeated. Each iteration was unique, and I could produce useful reference at a rate a career designer would struggle to keep pace with.
I'll pause to say this does not replace career designers (yet). Rather, it can be leveraged with the vocabulary of those more experienced to garner even better results. To put it another way, the AI is only as smart as you. I may get some cool results from my very limited understanding of design and an intuitive sense of how I'd like the overall result to look, but the design vernacular that specifically describes those things is something I am less well-versed in.
A career designer, knowing better how to describe what they want, will in most cases get a better result. For example, 'subsurface scattering' is a beautiful detail to add to a piece of art. The AI knows what this is and is more than happy to put it in your work, but you must know what it is and ask for it.
I was more than happy with the images we've examined so far, but I thought perhaps there was room to push the celestial imagery further. With that in mind, I asked the AI to expand on the idea of the Dragon God being a constellation. That'll do. That'll do.
We find ourselves in a moment you'd think might be rarer: perched on the verge of a before and after in terms of what we are capable of as a species. We've been here enough times already, and our technological tree is extensive at this point, but that never makes the event less significant. If you are lucky enough to stop and take note of such an event, you can feel history being made. Alas, it is the nature of humankind to brush by such things and only see them looking back, but what do you do? That's as predictable as rain. Must mean nothing's wrong.
This is the page we used to help develop our color system and landing page design. Not bad reference output for 3 hours of work. Thank you Dall-E.
So much of success in life is timing: timing as it intersects with imagination and effort. Each is fine enough on its own, but the magic truly happens when you get them all together, slightly drunk. I can't help but feel, as we work on Radiant, that just maybe we have all three. Dall-E 3 is incredible, ground-breaking, and pure fun to use. However, we are keenly aware that it is in no way a silver-bullet replacement for thoughtful, intentional development, which requires our very human attention. Balancing these forces is never easy, but it's a worthy challenge as we move ahead.
Looking forward to further updates. Thank you for reading this (old-fashioned human written) blog post.
The Radiant Team,
Dave, Connor, Aya
DESIGN, DEVELOP, REPEAT
The constellation slid free from its position in the sky, and took flight.
Fire In The Sky
In the past week we had an addition to our team. A full-stack designer with over 10 years of experience in the industry, Franz comes prepared: he has already contributed to our overall design and provided essential industry guidance in product development and deployment. Those insights have shaped our approach to this month of October and the work we have planned for Radiant Assistant.
Very difficult to get a dragon constellation that doesn’t turn into a Power Rangers character.
Franz joins us at an especially critical time, as we've begun development of our most ambitious new addition to how RA will assist its players. This has led to whole new sections of the website needing to be built. Which means lots and lots of research, discovery, and feedback, in a loop, until we've drilled down to something we hope our users will genuinely love.
In other news soon to deeply impact the generative AI world is the recently announced Dall-E 3. It is another significant step forward for the industry: compare its results side by side with those of its progenitor, Dall-E 2, given the same prompt, and it achieves markedly superior results. Along with the increase in image quality comes a tool-box of other improvements certain to shift what was thought possible in the field only a short while ago.
Image Credit: OpenAI
Dall-E 3, and doubtless the art generators that will follow it, will now be capable of creating impressively nuanced and tonally accurate images with only conversational input. The user need not be especially specific in their phrasing; the AI can infer what it should create. "I don't know. Give me a happy hedgehog frolicking." That's a joke, but it was similar language that created the image below.
Image Credit: OpenAI
Objects in relation to each other is a concept the AI now grasps far better. Tell it that you want grass, adjacent to a box, adjacent to a telephone pole 10 ft away. It can do that. Gone are the days of generalized images that required several iterations of fine-tuning before they matched what the artist intended.
Text in image: a benchmark hurdle overcome. Art AI could not create text within an image until now, and the implications for creative potential are obvious. In other text news, another bombshell dropped: Dall-E 3 will be able to create images and then provide a creative description of that image. We at Chronicle are particularly interested in this detail for our own AI.
Image Credit: OpenAI - text in image, sure let’s solve that already.
If eyebrows aren't raised now, consider what the next generation will be capable of; the adjectives to describe that are varied, and I'm betting the selection is divisive. The ever-faster rate at which AI is improving points to a coming revolution, one with no less historical significance than the industrial revolution. The fact that it will likely play out in one fifth the time might also catch the attention of a few disinterested demographics.
In any event, we'd like to make sure that table-top gaming is well served by the new potential these advancements unlock. It is plain to us that the use of these new tools within DND and other table-top games is not a matter of 'if', but 'when'. These kinds of games are defined by their creative freedom and imaginative expression. Some people are able to map their lives alongside these games, which reward a player's investment over several years. Providing tools that can meet that level of creative freedom is usually impossible without great amounts of preparation. Moreover, the tools currently available for preparation do nothing to uniquely tailor their resources to a session. That is to say, if you had a castle in mind for your game, the current supplementation is someone else's description of a castle, rather than the kind of castle you were imagining for your story. Look for resources for your game now and you just hope the things you find fit your story, or you fit your story to the things you find. Players of table-top games needn't have their imagination limited to another person's description of what they imagined.
Not sure how my request for a constellation of dragon stars became this but I’m not totally against it.
The problem: Players want to prep for their games with supplemental materials. Those supplemental materials are most often not bespoke; they are finite, lack context, and keep the image separate from the description. Unless a player wants to spend ages in Photoshop (some do, and good on them), handouts for games are difficult to create given the time it takes to make them.
The solution: Radiant will generate for any DM | GM | Player the supplemental material they'd like, in the form of both an image and a bespoke description, according to what they request from Radiant Assistant. Would you like a castle? No more looking up some random description: describe exactly the castle you'd like and Radiant will do the rest.
We're targeting 'note taking' as the task many players cite as the most frustrating part of their sessions. Often a DM only has time to make a brief set of notes describing some person, place, or thing they'd like to integrate into the story. When it comes time to introduce that asset, with no handout to accompany it, some member of the party will usually take notes to remember it in the canon of the story. Solving how painful it is to generate bespoke assets fixes this problem, as every handout lessens the need for note taking. When you have a visual and written record of the story beat, there's no need for notes.
Another cool mistake. Reminding me a bit of ‘Animorphs’ novels from my teenage years.
We would like both DMs and players to be free to explore any creative notion in great depth, and at a rate that until now was impossible. When I polled players on what kind of cards they might make if they could freely create one, the answers could be oddly specific. One player, for example, said that she'd like a button that could generate the contents of an NPC's pockets. This kind of granular detail is exactly what table-top games are best at, and now that level of detail is possible in the elements we can create for our games.
The joy of adding some unique and flavorful touch to a game session is something I'm grateful to have been party to in my life (pun intended). Seeing a friend provide some personal asset, be that an image, a written note (sometimes handwritten by the hardcore), a song, or whatever a player decides to bring to the proverbial table, immerses the group in such a satisfying way that you need first-hand experience to really know how good it is. With work and dedication over the next several months, we hope to add more of these kinds of experiences for every player of table-top games.
Thank you for taking the time to read our developers blog.
This post has no AI contributions. It is written (poorly) by a human.
Improving AI Combat Fundamentals
Last week concluded with the advancement of our campaigns through AI-generated art: new functionality in the form of bespoke generated images for your campaigns, rather than pre-set images chosen from a limited collage. Our AI is now able to take the input from your campaign creation and use those details to craft a unique image to use as the title media of your campaign.
Moving onward, our next priority was to address the loudest feedback we’ve received from our audience so far: the generated narrative, if not at first, soon becomes repetitious and lacks variety. Equipped with that information, we spent the last week improving how we use the AI to generate a variety of narratives.
We approached this problem from several directions. First and foremost, we did a full reconstruction of the basic language we use to describe what we want the AI to do. Brilliant though the model may be, greater clarity always yields better results, and a return to edit and clarify saw a significant improvement right away. Prior to our re-working, the following results were typical.
Below are 3 excerpts taken from an earlier rendition of our combat narrative AI.
5 In the somber gloom, Connor, a shadow amongst shadows, moved with lethal intent. His weapon, a silent harbinger of fire, sought its mark. The goblin, unaware of the impending doom, continued its grotesque mirth. With a swift, merciless motion, Connor unleashed his attack. The weapon connected, searing the goblin with a fiery torment. Yet, despite the pain, the creature survived, its grotesque laughter replaced by a guttural scream echoing in the darkness.
6 In the echoing gloom, Connor, with his unwavering resolve, advanced towards the Goblin. A weapon, wreathed in flickering fire, held firmly in his grip. With a swift, practiced motion, he lunged, the fiery weapon searing through the dank air to meet its target. The Goblin recoiled, a pained shriek piercing the darkness as the fire consumed its flesh, but death did not claim it yet.
7 In the eerie gloom, Connor advanced, a weapon of cruel intent gripped tightly in his grasp. The goblin, unaware of the impending peril, continued its menacing dance. With a swift motion, Connor unleashed his attack, a surge of fire erupting from his weapon. The goblin shrieked as the flames engulfed it, the acrid smell of scorched flesh permeating the air. Yet, despite the severe burn, the goblin remained standing, its eyes ablaze with a newfound hatred for its attacker.
While a single narration can read well in isolation, put them side by side and you quickly notice areas that could improve. Combat in a DND session can take hours, and a narrative that repeats itself is one you’d rather do without. For that reason, we made fixing this our highest priority.
Beyond the basic re-write, our CTO, Connor, focused on off-loading some of the cognitive task we’d given our AI. He provided, in the form of code, logic the AI could use to better randomize its results. It was remarkable to see what doing a bit of the randomizing in code did for overall performance: where before the model seemed to listen to us only occasionally, afterwards the results drew a clear line in the sand, and the narratives were noticeably better following the update. We observed greater variety in nearly every element of the narration after the change.
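To give a flavor of what “doing a bit of the randomizing in code” can look like, here’s a small Python sketch: the code picks the varying ingredients (opening phrase, emphasis, length) with plain random choices and bakes them into the prompt, so the model only has to write the prose. The lists and wording below are invented for the example and aren’t Radiant’s actual prompt.

```python
# Illustrative sketch: randomize narrative ingredients in code, then hand the
# model a fully specified prompt. Lists and wording are examples, not Radiant's.
import random

OPENINGS = ["Stance shifted,", "Breath held,", "Without a sound,", "Eyes narrowed,"]
EMPHASIS = ["the attacker", "the target", "the weapon", "the surroundings"]
LENGTHS = ["one sentence", "two sentences", "three sentences"]

def build_combat_prompt(attacker: str, target: str, weapon: str, hit: bool) -> str:
    # Choosing these details in code guarantees variety between narrations,
    # instead of hoping the model varies them on its own.
    opening = random.choice(OPENINGS)
    emphasis = random.choice(EMPHASIS)
    length = random.choice(LENGTHS)
    outcome = "the blow lands but the target survives" if hit else "the attack misses"

    return (
        f"Narrate one beat of DND combat in {length}. "
        f"Begin with the phrase '{opening}' and emphasize {emphasis}. "
        f"{attacker} attacks {target} with {weapon}; {outcome}. "
        f"Do not add a concluding remark."
    )

print(build_combat_prompt("Connor", "the goblin", "a fire-infused blade", hit=True))
```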
What follows are 3 excerpts taken after our changes to the AI.
48 Stance shifted, Connor launched his fire-infused weapon at Goblin. The dark mood was palpable, the world seemingly holding its breath as the weapon, known for its scorching heat, blinding brightness, and terrifying speed, hurtled through the air.
49 Goblin, with its coarse skin, bulging eyes, and sharp teeth, was in the midst of the dark, oppressive atmosphere. Connor, brandishing his fire damage weapon, made his move. The attack hit, the goblin's features illuminated in the fiery glow as it recoiled.
50 Breath held, Connor launched his attack on the Goblin. His weapon, searing with fire damage, connected with a resounding impact. The Goblin recoiled as the fiery damage took hold, its form shuddering from the force of Connor's assault.
The opening, the combatant, the opponent, the weapon: every detail of the confrontation is now tweaked to yield different descriptions and combinations of narrative.
The path to this better version had some interesting speedbumps. For a time, the AI insisted on ending every narrative it wrote; everything had a concluding statement, and it became stale extremely fast. We instructed it to add no closing comment and to stop feeling the need to wrap things up. It listened to us, perhaps too well.
I laughed out loud when I read this. It wants to do what you tell it so bad, sometimes it confuses itself. What could be more human?
We enter the weekend feeling good about how far we’ve come. The newest version of Radiant Assistant is live, and all the changes discussed in this week’s update can be experienced first hand by making a free account at www.radiant-assistant.com
Perhaps the largest update since we started working on Radiant Assistant is next. It will take more time to implement than previous updates, but it’s no exaggeration to say the RA team couldn’t be more excited to share the details as soon as we’ve progressed a little further.
Big thanks to those who support us on Patreon and for you, taking the time to read this.
Dave, Connor, Aya
Chronicle sets up base of operations in Thailand. Begins second phase of work on Radiant Assistant.
The update is complete. Radiant Assistant moves forward with newly integrated generative art AI and the GPT-4 model.
The view from Park 19 Residence. Our hotel for the time we’d search for a base of operations | Sky Towers, Jungle, Poverty
The Radiant team landed in Thailand on the 14th of August. Through a Thai contact made beforehand, we arranged to meet with a real-estate agent. She was kind and helpful, a sign of the consistently high quality of character we’ve so far observed in how Thai people treat others. I’d chosen a location for our hotel based on proximity to a transit line, which I assumed would make the house hunt easier. My first culture shock came when I discovered that particular transit line could only be accessed from specific entry points, entry points a significant distance from where we were staying. The alternative, the BTS Skytrain line, was about a 25 minute walk (30 minutes, depending on who you ask) in the other direction. Ekkamai, as the station is called, became our home station for the time spent searching for accommodation. And search we did, along with other life essentials like getting bank accounts set up.
The walk to Ekkamai station. Always dotted in the Sukhumvit area with massive condo buildings.
So as not to erode the pace of this post, I’ll only briefly pause to mention the noteworthy Thai banking experience. With a masterfully simple implementation of QR-code banking made available to the whole population, it’s safe to say, in my experience of both Japan and Canada, that both are losing ground to a progress-hungry nation which has managed to leap-frog their complacent banking systems. If Thailand can do this…
Getting a chance to walk around near our first hotel, the colors of Bangkok were like none I’d seen in a city before.
Returning to the hunt for our home, a few sleepless nights yielded an excellent selection of apartments. Searching through the several real-estate sites online gave me the insight I needed to assist our agent, who was making the calls and booking our viewings. We narrowed down where to live based on proximity to a major BTS station, the quality of the building, and its surrounding neighborhood. We spent a little extra time searching for just the right one, but are confident that in doing so we made the right choice. Many of the buildings we visited offered good value in the rooms we saw, but were noticeably aged; the one we found was built in 2018. Buildings advertising pools and gyms sparked excitement at first, but many, once viewed, failed to live up to the hype. That is, again, until we found our building. Its gym is actually good. Its pool is surrounded by a well-maintained garden, and the infinity design allows for views of the Bangkok sunset. You’re free to swim in the evening and watch as the city lights come on; it’s one of my favourite first memories since arriving here. Of course, the true center of focus has to be the affordability. A 40-square-meter condo built in 2018 with the aforementioned amenities will cost you around $3,000 a month in most places in the world these days, probably more. Vancouver, where I spent my mid-twenties, has become a world-wide meme for hopeless cost of living. The world is united in its scarcity of housing, and finding a place where your dollar goes a little further is a safe harbor indeed.
Through the mud up to heaven the lotus finds a way.
With our housing taken care of, finding a rhythm and refocusing on Radiant Assistant became the priority. Working in a new setting, having just moved, all the new sights and sounds, smells and flavors, added up to a blur of a first work week. Talking things through together and some dedicated brainstorming yielded the usual actionable results, and we agreed on where we should focus.
Ambitiously, we chose to see if we could get generative art working in our campaigns: a task no one on the team had any experience with, and one that would likely involve AWS cloud hosting for storage of the assets we generated. You know, something easy.
A huge shout-out goes to our CTO, Connor, for going in blind, working hard over the last two weeks, and giving us real results to share with the community. Before, when you generated a campaign, a pre-made image was assigned to it depending on the biome you chose. Now, thanks to the hard work of the Radiant Assistant team, you can expect each campaign to have its own unique image, each tailored to the specifications you created it with: biome, weather, and mood.
Never see the same campaign twice. Each image is custom generated according to your choices.
While you may occasionally create a campaign from similar choices, you’ll never be faced with the same experience. If you choose forest for your biome, stormy for your weather, and epic for your mood, each attempt will yield a satisfyingly different result. It was a lot of fun playing around in testing to see what kind of imagery the different choices prompted; subtle changes, like the campaign’s mood, and more obvious ones, like its weather, were plainly felt in each generated piece of art.
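For the technically curious, here’s a rough sketch of how biome, weather, and mood choices can be composed into a single image prompt, assuming the OpenAI image API; the prompt template is invented for illustration and isn’t the one Radiant actually uses.

```python
# Rough sketch: compose an image prompt from the campaign's creation choices.
# The prompt template is an assumption for illustration, not Radiant's real prompt.
from openai import OpenAI

client = OpenAI()

def campaign_title_image(biome: str, weather: str, mood: str) -> str:
    prompt = (
        "Painterly fantasy landscape for a tabletop campaign title card. "
        f"Biome: {biome}. Weather: {weather}. Mood: {mood}."
    )
    result = client.images.generate(prompt=prompt, n=1, size="1024x1024")
    # In a real app the image would be downloaded and stored (e.g. in cloud storage)
    # before the temporary URL returned by the API expires.
    return result.data[0].url

print(campaign_title_image("forest", "stormy", "epic"))
```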
Our campaigns, however, were not the only part of Radiant Assistant to see a major improvement over the past two weeks. Until recently, Radiant Assistant used OpenAI’s GPT-3.5 Turbo model to power its narrative capabilities. We are excited to announce we have now upgraded to GPT-4, a substantially more capable model. Said more plainly, when Radiant assists you in telling a story, it has become far better at it. Where the previous model could impressively create narrative for combat scenarios, it struggled with nuance and with connecting larger, more complex ideas. The improvements in GPT-4 give it far greater command over any narrative task it’s given. For example, if you were to request a full story from GPT-4, it would do so with minimal effort, where GPT-3.5 Turbo would have struggled to maintain coherency.
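In code terms, a swap like this is mostly a change to the model name passed to the chat call; the sketch below shows the shape of it with a hypothetical narrate helper using the current OpenAI SDK (prompts and parameters here are illustrative, not Radiant’s actual integration, and prompts usually benefit from re-tuning for a new model).

```python
# Sketch of the model swap: the call shape stays the same, only the model name
# changes. The helper and prompts here are illustrative, not Radiant's code.
from openai import OpenAI

client = OpenAI()

def narrate(prompt: str, model: str = "gpt-4") -> str:  # previously "gpt-3.5-turbo"
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a concise DND combat narrator."},
            {"role": "user", "content": prompt},
        ],
        temperature=0.9,  # a higher temperature helps keep the narration varied
    )
    return response.choices[0].message.content

print(narrate("Connor strikes the goblin with a fire-infused blade; it survives."))
```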
Franz has 100 million downloads attributed to the projects he’s designed on.
While we’ve made strides in our technical development, the community surrounding Radiant Assistant continues to grow, connect, and offer invaluable assistance, creating the potential to exceed what our limited team size is capable of on its own. Franz responded to our post asking people to contact us if they had an interest in offering feedback on Radiant Assistant’s development. His insights into design have already caused a directional shift in how RA will look and feel going forward. Franz will also feature as a guest on my podcast, ‘The Interstice Podcast’; you can hear more about his career path there if you’re interested in design.
The vibrance of the neighborhoods is striking.
User feedback is crucial as well, and we’ve gathered some excellent data from it. Not a single system in the website hasn’t been updated due to user feedback, and our approach going forward will remain the same. We do plan, though, to improve how we collect that feedback. Our current system invites you to write back to us at length in the form of a homework-assignment-like email. We’ve made plans to re-work this point of connection with new users in the community and will replace it with a Google Form, so you can focus on what you want to say rather than spending 30 minutes typing an email when you have other things to do.
Slaving over the hot desktop in downtown Bangkok
This last month was a blender of change. You feel a bit dislocated after uprooting and going somewhere new; that said, with the dislocation comes a sense of satisfaction as well. Nothing makes you feel like anything is possible quite like setting up a life somewhere new. I am glad to have this opportunity and to be able to share it with the team here. Everyone has worked hard. I look forward to sharing more of the team’s highlights and achievements as Radiant Assistant cracks on with its development. Thank you for reading, and never hesitate to reach out. Collaboration is the lifeblood of Chronicle, and we look forward to the next great connection.
RADIANT ASSISTANT ONE WEEK AFTER OUR SOFT LAUNCH
It is now one week after our soft launch and we have some interesting analytics to measure the response thus far. First and most concrete, the number of active accounts using the website has reached 20. In all honesty, our team was joking (not joking) that three would have been a huge victory, so needless to say we are truly happy with this result.
I’ll list a few other key figures that we’ll be tracking as community analytics as we progress:
The launch has been designated a success.
Patreon | Two new supporters! This brings our company revenue to $10 a month and pays for the generative art AI platform we are currently subscribed to. We are very excited about this initial support, as it is our first step towards covering development costs. If you don’t already, and are enjoying using the app, please consider supporting us on our Patreon.
https://www.patreon.com/Radiant411
YouTube channel ‘Radiant’ | 9 subs, 61 views - A big thanks to everyone who subscribed and checked out our company intro video.
This website ‘Chronicle’ | 358 unique visitors in the last 30 days - This was great to see and a strong indication that our work is being viewed by people who may use the website, and hopefully it gives a glimpse into the culture we are creating at Chronicle / Radiant Assistant. We hope as well that you will continue to enjoy these updates as a sort of behind-the-scenes look at what is going on and progressing.
Coming out of the bootcamp we attended, it was made abundantly clear that getting any kind of traction when you start a business in the tech world is incredibly difficult; getting anyone to use your app can be a long and arduous process. That fact has made the first week’s results especially satisfying. We didn’t know exactly what our numbers would be, but the results so far have certainly exceeded our expectations.
It is now August 8th as I write this and we are in our final week in Japan. Work visas are on their way to being sorted out, plane tickets have been booked, and we are scheduled to fly to Thailand in short order. We will make every effort to stay active across our various channels of communication, but do expect a bit of a lull as we arrive and find housing. That said, we will certainly respond to any engagement from the community in that time. We may have life details to set up, but we can always make time for people who want to see what using Radiant Assistant is all about.
Thank you for taking the time to read this, and if you’ve got a friend who plays DND or any other table-top role-playing game, please send them on over to www.radiant-assistant.com - we’d love to hear their thoughts.
Take care,
Dave
Radiant, the world’s first story-telling AI assistant
As humans it is in our nature to identify patterns. It’s that ability which, to some degree, allows us the occasional accurate glimpse of the future. It’s more predictable if your idea of seeing the future is knowing you’ll begrudgingly pay ten dollars for a coffee next week; it gets hazier when considering where technology may take us in the next several years, which more often appears to us as chaos. But, and it’s a big but, sometimes we catch a glimpse past the veil, and our observation of something greater coalescing turns out to be accurate.
How the team at Chronicle came together is one such event, one that has me suspiciously looking over my shoulder for a smirking old figure with a white shaggy beard and sandals. Maybe he’s walking in a sunbeam. Too many random moments of serendipity have occurred; but then again, I am a pattern-seeking primate, and it could all be smoke and mirrors. Regardless, when an opportunity presents itself that aligns with every goal you’ve ever had in life, you either take it, or you look back forever with the regret of knowing that, on whatever terms, you were offered your dream and rejected it.
I, 'Davetrippin' by my YouTube alias and Charles David Jameson by another name, had until recently lived in Kyoto for the past year and a half... attempting to work on a borrowed dream. It would not come to pass, and life compelled me again to move and make a new plan. I had always thought that, if not directing films, coding was something that could hold my interest. For some background, my dad was a first-generation programmer who wowed me as a child with those skills, hacking the first PC games ever released so that I, a child utterly obsessed with games and reading, would have a never-ending supply of new experiences to enjoy. As time went on, though, my inability in key topics such as math seemed to prevent, or at the very least form a daunting barrier to, ever pursuing the art of coding. I focused instead on pure art. But now, at the end of Kyoto and a decade dedicated to filming, what should I do?
I should attempt to code, my deficiencies be damned. I would do this by registering for a popular coding bootcamp in Tokyo by the name of Le-wagon. I studied for several months beforehand and felt I’d be in a strong enough position not to be left in the dust once it got started. Oh boy, was I ever wrong. My lack of technical intelligence once again reared its ugly head, and I was forced to confront the fact that when things get technical, my head, only barely metaphorically, explodes. There are times in life when it gets hard and you have a choice: you either give up, or you keep going. This scenario and I are good friends. Sometimes it feels like an abusive relationship, but what I’ve learned from the wounding is how essential it is to developing the character required to make a dream into reality. This is a long-winded way of saying I’m a few sandwiches short of a picnic when it comes to the new skill-set, but I’ll give it my best anyway.
Time passed at the bootcamp and one day, while in line for the bathroom, I heard a voice behind me: “Hey, you’re Davetrippin?” I wish I could tell you I turned and said, stoically, “I was.” Instead I opted for the much more eloquent “ya?”, with that upward intonation, like my answer was a question. I had been addressed by the one and only Connor Alexander Minto. He had, much to my surprise, seen some of my old videos. We quickly developed a rapport: Connor would lead the class in all things coding, and I would accost him for help on the various activities we worked through.
Where Connor dominated the coding, my breakthrough came in the form of an unexpected turn in the curriculum. I had read it in full before beginning the camp, but somehow missed the part where each student must present a pitch for an app to develop. Everyone in the class (32 students) would pitch, and of those pitches, 8 would be selected to build, determined by each student voting on the projects they wanted to work on. I may not be the best coder, but if there’s one thing I have more than enough of, it’s ideas, so I got to work on a pitch.
I had used ChatGPT a lot before coming to the bootcamp as a mentor to help me learn coding. During that time I saw the potential this new technology had for assisting story-tellers and writers, but exactly how, I was not yet sure. I needed another brain, and I knew immediately who to speak with: my extremely skilled dungeon master friend. Perhaps he could think of some pain a narrative AI assistant would heal. I was not wrong. He identified the struggle of telling a good story in Dungeons and Dragons while in combat.
I’ll pause here for the uninitiated and give a brief explanation of Dungeons and Dragons for context. The game is essentially a shared story-telling experience wherein one god nerd, the dungeon master, leads the story and provides the rules of the world. Time is split between a few different activities that move the story forward. Exploration is the most straightforward: during this activity players make their way through the world, and the sole focus of the dungeon master, hereafter referred to as the DM, is to tell a good story. A decent narrative is easiest to establish here, as there’s little else to consider. That all changes when you enter combat. There are a million rules to consider, whose turn it is, dice rolls, and much more, and telling a good story during that time becomes difficult for the aforementioned reasons. My friend observed that if something could assist him in telling the story during that critical time, it would be tremendously helpful. Radiant was born in that moment.
I would aim to create an AI assistant to help DMs in their effort to tell a good story. I worked hard on the pitch, putting in extra hours honing and practicing it. We had 3 minutes to present, and if you hoped to win hearts and minds to your cause, it needed to be watertight. No room for error. Pitch day came all too quickly. I was excited, until being informed that PowerPoint, the software in which I had created my pitch, was outdated; everyone else was going to use Google Slides. Being told that the computer we had to use for the presentation might not work with the PowerPoint slides I’d prepared spiked my cortisol levels. What could I do other than keep going?
Shot through with adrenaline, I began my pitch. Time flew by and the initial lead-in felt decent. Then I reached the crucial point where, on the next slide, I would introduce the app, and it broke. No one even saw the name of the app. One minute had passed. We spent the next two minutes fixing the program and managed to succeed just as my time ran out. I tell you candidly now, I was crushed. All that effort and excitement, flattened by the brick wall of circumstance. The presentations continued, as did my slide into a defeated posture in my chair. Eventually people began to vote, and we finally reached the moment of truth when the voted-for projects would be revealed and the 8 selected shown on screen with the corresponding team that would work on each.
Second to receive enough votes was Radiant.
What! I hear you say, dear reader. How could this have happened? Connor provided me with the answer afterwards: “Dave, you didn’t need to do a pitch. You wouldn’t shut up about it for the week before we presented. You kept asking people for feedback on what could make it better. You didn’t need to say a word. We all knew what you wanted to make,” he said. Humbled, I must admit it was difficult to enjoy the moment. I was so wrapped up in my woe that such a sudden change in the course of events took me totally by surprise. I recovered though and made every effort to make sure the members of my team knew they had my complete dedication and appreciation for having chosen my pitch as the one they’d like to build.
What followed was two focused weeks of bringing this pitch from concept to working prototype to presentation day. I am tremendously proud of what the team accomplished in that time. I knew from the moment we started working, as much as any man can know, that this was something I wanted to take further.
Life makes sense when we look back, and when I look back, to ignore the confluence of events that led to Radiant being built would be, in a word, stupid. A man I respect a great deal for his remarkable prescience and his belief in the strength and beauty of the human spirit is Ray Kurzweil. Currently a director of engineering at Google, he is perhaps better known for his books, one in particular: ‘The Singularity Is Near’. Among the many points I’d cherry-pick from what Kurzweil has to say about inventing something, the one that spoke most to me is that invention is a matter of two things: idea and timing. You can have the best idea in the world, but if the timing is wrong, there’s not much you can do about it. Nikola Tesla might have a few things to say on the topic. Radiant feels like this kind of crossroads. To be the first at anything in the modern context seems a fool’s game, and mostly you’d be right. But history always grants us new opportunities, in this case in the form of technological progress.
Large Language Model AI marks a moment in history for those who can see the opportunity to do something that has never been done before. Ours was the first Le-wagon cohort to attend the bootcamp with access to the LLM API. I had always been interested in coding but was never able to pursue it until this moment. That, combined with meeting Connor, is to me a kind of magic. And I can’t wait to see what kind of spell we can cast.
Thank you for taking the time to read this post. Through this channel we will communicate with the community weekly. We will touch on all aspects of development and as well share our journey as a team. If my story piques your interest and you too are a seeker of glorious pioneering innovation, then please do consider filling out our alpha tester application. Radiant is nothing without the community of people who have so far contributed to its progress and we firmly believe this should continue to remain the case. Radiant is for everyone, the spirit of it, and the goal. We are all of us characters in this great story called life, and a story is meant to be shared.