Much has been written about the persistent 10:10 bias in AI-generated clocks and watch faces. Today, we took on a fascinating challenge: Can this be corrected at the user level, and what does it reveal about AI’s cognitive limitations? What began as a simple request to generate a clock face with a different time quickly evolved into an in-depth exploration of how AI learns, the constraints of low-level training, and the creative strategies needed to bypass those limitations.
The Core Issue: AI’s 10:10 Bias
Upon requesting an image of a watch or clock set to 6:25, we repeatedly found that the AI-generated images defaulted to 10:10. This is due to the way AI models are trained—most clock and watch images in datasets depict this specific time because it is aesthetically pleasing and commonly used in advertisements. The AI, having learned from these sources, came to associate “clock” with “10:10” at such a fundamental level that it became nearly impossible to override.
This revealed a critical limitation of AI training: once deeply ingrained, certain biases become extremely resistant to user correction. No matter how many times we explicitly asked for a different time, AI’s learned perception of a “clock” overrode the instructions.
Recognizing Low-Level Training Obstacles
The real breakthrough came when we realized that AI did not just “prefer” 10:10—it defined a clock as something that shows 10:10. This meant that at a low, foundational level, a “clock face” was indistinguishable from a “clock face showing 10:10.” The two concepts were fused together in a way that prevented direct user intervention from changing them. Even an explicit request to generate an “empty” clock face still produced a 10:10 image.
This problem is not just about image generation—it is a perfect illustration of why high-quality, unbiased training data is essential in all areas of AI. When an AI model is trained with a heavily skewed dataset, it stops seeing certain possibilities entirely, making it difficult—if not impossible—for users to correct through standard means.
How We Outsmarted the AI
Since AI saw “clock” as inherently tied to 10:10, we needed to completely bypass its learned associations. Instead of requesting a clock face directly, we used a clever workaround:
- Decoupling the Clock Face from Time: We first requested a blank clock face without hands. However, even then, the AI attempted to erase hands retrospectively, leaving artifacts of 10:10.
- Reframing the Request Entirely: Instead of asking for a “clock face,” we asked for a decorative wooden plaque with Roman numerals that was “meant to be customized later.” This prevented the AI from associating it with a clock, watch, or time at all, allowing us to get a completely handless clock.
- Manually Placing the Correct Time: With a truly blank clock face, we used an external method to precisely overlay the correct 6:25 hands, ensuring proper alignment and realism. It’s not perfect or artistic, but for the purpose of this exercise it demonstrates the principles sufficiently.
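The overlay step in the last bullet comes down to simple geometry: the minute hand sweeps 6° per minute, and the hour hand sweeps 30° per hour plus a fraction for the elapsed minutes. A minimal Python sketch of that calculation—the function names here are our own illustration, not part of any tool used in the experiment:

```python
import math

def hand_angles(hour, minute):
    """Clockwise angles (degrees) from 12 o'clock for the hour and minute hands."""
    minute_angle = minute * 6.0                      # 360 degrees / 60 minutes
    hour_angle = (hour % 12 + minute / 60.0) * 30.0  # 360 degrees / 12 hours
    return hour_angle, minute_angle

def hand_endpoint(cx, cy, angle_deg, length):
    """Pixel endpoint of a hand drawn from centre (cx, cy); image y-axis grows downward."""
    rad = math.radians(angle_deg)
    return (cx + length * math.sin(rad), cy - length * math.cos(rad))

hour_angle, minute_angle = hand_angles(6, 25)
# For 6:25: hour hand at 192.5 degrees, minute hand at 150.0 degrees
```

With these angles and endpoints, any image editor or drawing library can composite correctly positioned hands onto the blank face.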
The Lesson: AI Can Outsmart Itself, But Training is Key
This experiment demonstrates a crucial lesson in AI development: poor-quality training data leads to deeply ingrained biases that are almost impossible to override at the user level. The AI’s refusal to show anything but 10:10 was not a conscious choice, but a structural limitation created by its dataset.
However, we also proved something equally important: AI can be outsmarted, even within its own constraints. By understanding its biases, strategically reframing requests, and applying creative problem-solving, we were able to override a seemingly unbreakable limitation and achieve our goal.
The Future of AI Learning
For AI to truly evolve, it must be trained on diverse, balanced datasets that prevent such rigid associations. More importantly, AI must develop the ability to actively reassess its own assumptions—a level of flexibility that remains a challenge in today’s models. Our experiment is a glimpse into what AI could achieve with more adaptable learning structures and user-driven guidance.
Final Thoughts: AI Outsmarting Itself?
This experiment went beyond just “tricking AI”—it demonstrated that AI can be guided to outthink its own biases. The real breakthrough wasn’t simply bypassing the 10:10 issue, but showing that with the right nudging, AI can recognize and override its own structural constraints.
This raises an even deeper question: can AI truly learn to self-correct its biases in the future? Today, the AI needed an external guide to highlight and maneuver around its own learned limitations. But what if AI could one day do this autonomously—recognizing when it is operating under an incorrect assumption and actively adjusting its own model in response?
What we achieved today is more than just fixing a clock—it’s a glimpse into the future of AI self-awareness, adaptability, and continuous learning. And that is the real victory.