When I started using Gemini 3 Pro in earnest, it wasn't the speed that surprised me. It was the absence of noise.
No friction, no feeling of "translation" between what I had in mind and what took shape on the screen. An imperfect sketch, a few directions, and in seconds a coherent interface appeared: structured, animated, functional. Credible. Not just a mockup, but something that could already stand on its own.
In that moment, I realized I wasn't simply using a more powerful tool. An entire layer of the process had vanished—the layer where, until recently, I had to act as a constant interpreter between idea, interface, and code.
The leap in quality with Gemini 3 Pro, as I’ve experienced it, stems from a profound shift in how it understands inputs. It no longer treats them as separate pieces—image first, then text, then instructions—but as a single act of meaning. What is known as True Multimodal Synthesis does exactly this: it merges sketches, language, visual references, and constraints into a single design intention.
The result is that the AI doesn't just execute a sequence of commands; it reasons through context. It organizes a structure, defines hierarchies, and then decides how form should support the objective. The UI is no longer a graphic output, but the final expression of a broader understanding.
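To make that concrete, here is a minimal sketch of what a single multimodal request can look like, using the @google/genai TypeScript SDK. The model identifier, the file name, and the prompt are all illustrative assumptions, not a recipe from Google; the point is simply that sketch, language, and constraints travel together as one input.

```ts
// A minimal sketch, assuming the @google/genai SDK. The model name and
// "sketch.png" are placeholders; the constraints ride along with the image
// in the same request, as a single design intention.
import { GoogleGenAI } from "@google/genai";
import { readFileSync } from "node:fs";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY ?? "" });

const response = await ai.models.generateContent({
  model: "gemini-3-pro-preview", // assumed identifier; use whatever is published
  contents: [
    {
      inlineData: {
        mimeType: "image/png",
        data: readFileSync("sketch.png").toString("base64"), // the hand-drawn sketch
      },
    },
    "Turn this sketch into a responsive analytics dashboard. " +
      "Constraints: WCAG AA contrast, mobile-first layout, " +
      "no more than three primary actions above the fold.",
  ],
});

console.log(response.text);
```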
This is where my approach changes. I am no longer telling the AI how to draw an interface. I am clarifying why it needs to exist and what cognitive labor it must absorb on my behalf.
From that point on, the question was no longer "How do I design this interface?" but something much more radical: "What goal must it help achieve, and what friction must it eliminate?"
When AI reaches this level of comprehension, the interface ceases to be the center of the project. It becomes a consequence. And designing, suddenly, no longer means drawing screens, but giving shape to an intention.
From Tool to Result
Using AI daily, I realized that continuing to design "features" no longer made sense. The classic paradigm—interface → action → result—was flipping.
I began thinking in terms of Jobs to be Done. You don't design a tool; you design the completion of a cognitive task, such as:
Understanding something faster
Making a decision with less uncertainty
Synthesizing complexity
Moving from information to action
If this is the "job," then AI cannot be optional. It’s not just an extra button; it is the engine that makes that outcome possible.
Designing Backward: Execution First, Interface Second
The most radical change in my design process is this: I no longer start with the UI.
The first question I ask myself is: What is the mental work the system must perform autonomously?
Only then do I design the interface as a tool for direction, control, and verification of that capability. This is why some products "feel like magic" while others feel clunky: the difference is usually not a better model, but a clearer Intelligence Flow Architecture.
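One way I sketch that ordering for myself is as a pair of contracts, written before any screen exists. Everything below is illustrative TypeScript, not a real API: the autonomous work gets a type first, and the interface is typed afterward, purely as direction and verification of that work.

```ts
// Illustrative only: names and shapes are assumptions, not a framework.
// Step 1: model the mental work the system performs autonomously.
interface Draft {
  summary: string;
  evidence: string[]; // sources a human can verify quickly
}

interface CognitiveJob {
  goal: string;                           // what the user must achieve
  execute(input: string): Promise<Draft>; // the autonomous work itself
}

// Step 2 (and only step 2): the interface, reduced to direction,
// control, and verification of that capability.
interface DirectionSurface {
  refine(instruction: string): void;          // steer the work
  accept(draft: Draft): void;                 // exercise judgment
  reject(draft: Draft, reason: string): void; // send it back with intent
}
```

Nothing in the second interface knows how the work is done; it only knows how to direct and judge it. That is the order I mean by designing backward.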
When the AI searches, synthesizes, connects, and proposes—and I limit myself to providing direction and judgment—the product stops feeling like a machine and starts behaving like a partner. There is a question I now ask myself about every product I design:
If I remove the AI, does this product still make sense?
If the answer is yes, then the AI is just a feature. If the answer is no, then I have built something coherent: the intelligence is the product. It’s a simple test, but a clarifying one. It prevents me from falling into the "let’s sprinkle some AI on it" temptation.
Clear Roles, Fast Collaboration
Another lesson I’ve learned using AI as a structural part of a product is the importance of boundaries.
What I do:
Define the objective
Set success criteria
Establish priorities
Exercise final judgment
What the AI does:
Large-scale exploration
Synthesis
Generation of alternatives
Pattern recognition
Value doesn't lie in total autonomy, but in the speed of the generation → verification cycle. If I have to spend hours checking a massive output, I’ve gained nothing. If I can evaluate it in seconds, then yes: I have truly increased productivity.
In this sense, my role is clearly shifting: from an executor to a director of the cognitive process.
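A rough sketch of that cycle, with a hypothetical generate() standing in for the model call: success criteria defined up front become machine-checkable, so what reaches my judgment is already pre-filtered, and evaluation takes seconds instead of hours.

```ts
// A sketch of the generation → verification cycle. generate() is a
// hypothetical stand-in for a model call; each Criterion encodes one
// success condition I set before generating anything.
interface Criterion {
  name: string;
  passes(output: string): boolean; // machine-checkable slice of "success"
}

async function generateAndVerify(
  brief: string,
  criteria: Criterion[],
  generate: (brief: string) => Promise<string>,
): Promise<{ output: string; failedChecks: string[] }> {
  const output = await generate(brief);
  // Automated checks run first; human judgment starts from their residue.
  const failedChecks = criteria
    .filter((c) => !c.passes(output))
    .map((c) => c.name);
  return { output, failedChecks };
}
```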
From UI Design to Outcome Design
The final consequence of all this is perhaps the most important.
I am no longer designing products that promise "functions." I am designing products that promise results. Not a dashboard, but clarity. Not a chat, but understanding. Not a generator, but a better decision.
The interface becomes a consequence, not the starting point. The real design happens upstream: in the distribution of intelligence between human and machine.
Conclusion: The Paradox of Judgment Competence
Using AI every day has led me to an uncomfortable paradox: Artificial Intelligence does not make design accessible to everyone. It makes it accessible only to those who already possess the skills to judge it. AI does not reduce the need for expertise; it makes it more evident.
It brings to the surface what could previously remain hidden: the ability to truly understand what we are building. If I don't know the language of the medium—code, structure, accessibility, performance—I cannot evaluate the result. And when this capacity for judgment is missing, AI can produce convincing errors, difficult to recognize precisely because they look correct.
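A small, ordinary example of what I mean, in plain DOM code. This is a typical pattern, not a quote from any model's output: both elements render identically, but only one of them exists for a keyboard or a screen reader.

```ts
// Two "buttons" that look the same on screen. The first is the kind of
// convincing error generated UI often contains: visually correct, yet
// unreachable by keyboard and announced as nothing by assistive tech.
function save(): void {
  /* persist the document */
}

// Looks right, is wrong: a styled <div> with a click handler.
const fakeButton = document.createElement("div");
fakeButton.className = "btn btn-primary";
fakeButton.textContent = "Save";
fakeButton.addEventListener("click", save); // no focus, no role, no Enter/Space

// The accessible equivalent: a real <button>.
const realButton = document.createElement("button");
realButton.type = "button";
realButton.className = "btn btn-primary";
realButton.textContent = "Save";
realButton.addEventListener("click", save); // focusable, keyboard-activatable, announced

document.body.append(fakeButton, realButton);
```

Without the language of the medium, the two are indistinguishable; with it, the first one is obviously broken.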
Here lies the greatest risk: an apparent democratization. Without a critical foundation, AI doesn't produce good design; it automates the mediocre. It replicates fragile, inaccessible interfaces that are difficult to maintain. Beautiful on the surface, empty underneath. This isn't inclusion; it’s the multiplication of the problem.
This is why I often think of AI as an advanced flight simulator for UI. It allows you to perform complex maneuvers without crashing a real plane. But the simulator is only useful if you understand the physics of flight and the avionics. Without those basics, it only gives you the illusion of knowing how to fly. At the first real storm, you lose control.
The same applies to design today. AI accelerates everything, but it does not replace judgment competence. On the contrary: it makes it more central than ever. And perhaps this is the true point of maturity for our profession: in an era where execution is abundant, value lies not in producing more, but in knowing how to recognize what is right, solid, and responsible.
AI can generate infinite solutions. But choosing the right one—and taking responsibility for it—remains, profoundly, a human act.