Do Androids Dream of Electric UX?
- Pavel Fernandez
- Apr 5
- 7 min read
Part 1 - Just me, trying to make sense of myself, to you
Just over a couple of years ago, I had my first encounter with MidJourney, and I was instantly fascinated. I immediately felt two opposing things: awe at the incredible capabilities of the tool, and a sinking feeling that this might mark the beginning of the end for a profession I deeply admire—and one I’ve personally practiced. Still, I couldn’t help but admire the sheer human ingenuity behind creating a system that could generate such stunning illustrations in seconds, using nothing but a few words and a digital brain.
Yes, this artificial intelligence was clearly built on the shoulders of countless human creators—drawing, quite literally, from generations of biological synapses. And yes, it was undoubtedly appropriating from many of them with a creative instinct limited to randomness. But let’s be honest: we have all done it. Every creative mind has built upon those that came before. As Isaac Newton once said: "If I have seen further, it is by standing on the shoulders of giants". Anyway, I'm not here to delve into the ethical or sociological implications of AI—so if you’ll indulge me, let me continue my rambling…
Since that first experiment, and mostly for fun, I’ve played with other tools to generate not just images, but also content, audio, video—even avatars... (Here’s my own experiment, an effort to spread interest in science.)
One thing I noticed, though, is that while the results were often impressive, they tended to feel… generic. There’s a ceiling to what even well-crafted prompts (or advanced agents) can do to “read our minds” and capture the subtle, specific nuances of what we are actually envisioning. More often than not, I only got the results I wanted after multiple random iterations, combining outputs and refining them manually, with a little help from Photoshop.
Understandably, text generation feels somewhat more responsive, but the same challenge remained: without true general intelligence, the “wow” factor quickly gave way to the realisation that current AI—especially general-purpose models—still struggles with highly specific tasks. And that's especially true for UX design, where context is everything.
So why does UX, in particular, present such a challenge for AI? Because, if you'll allow me to commit the classic sin of answering a question with a question: how can AI effectively apply UX methodology to a problem it does not sufficiently understand?
Take Figma’s AI companion, for example: it can instantly generate a weather app interface—an impressive starting point, especially for early ideation. Sure, it needed a few adjustments to match specific expectations, but it demonstrated how effective AI can be at jumpstarting the design process. But why did it work so well in this case? Because, much like generating an image of a fantasy heroine wielding a sword, a weather app is a well-known design archetype the AI could “appropriate” from countless familiar examples. There’s nothing inherently wrong with learning from or imitating proven solutions—in fact, that’s often smart. But in this case, AI performed well largely because it was tackling a problem space it had already seen many times before: there was a pattern to be reproduced.

But now let's ask another, in my opinion even better, AI, Magicpatterns, to create "an interface for the creation, editing, and sharing of cybersecurity reports based on the detections". The results are truly outstanding, but it gets tricky, fast. Not because the interface itself is more complex than a weather app—but because it’s far more specific.
The output is intelligent and comprehensive—a solid starting point if this were an isolated design exercise. But despite multiple iterations, it still doesn’t meet the specific needs of the project. Yes, AI can be trained on UX principles, and yes, it might reasonably understand the technicalities of a given topic. But it struggles to grasp and synthesise the unique nuances of a product: audience expectations, contextual constraints, organisational tech limitations, business goals, brand equity, and more. In that sense, it’s navigating through a kind of cognitive fog—lacking the clarity that comes from deep, situational understanding.
In the brave new world of AI, perhaps the most fundamental role of a product designer is not just to craft usable and empathetic experiences, but to orchestrate the harmony of all the interconnected factors that shape a project—ultimately enabling those experiences to come to life.
So what can AI do for UX designers, then?
That’s a question with a long answer — especially as AI capabilities are changing rapidly. But the first thing I found important to acknowledge is this: while AI can already be incredibly helpful as an accelerator of the UX design process, it has shortcomings. Understanding those limitations is crucial—not to diminish what AI can do for us, but to keep us from becoming intellectual victims of our own enthusiasm or naivety.
Part 2 - Understanding the limits (current limits) of AI for UX
So… what are those limitations? And how can we define them?
Naturally, I asked an AI. And honestly? I mostly agree. Here’s what it (we) said:
Data Dependency and Hallucination Risks
AI's suggestions rely heavily on the data it was trained on: if that data is outdated or misaligned with the specific product domain, the outputs will be incomplete or overly generic.
It can hallucinate UX and UI patterns, misinterpret guidelines, and generate inconsistent responses.
Lack of Contextual Awareness
AI often struggles with:
Business goals: It can’t always grasp nuanced product visions, roadmaps, or trade-offs.
User psychology: It lacks deep empathy or emotional intelligence to anticipate subtle user needs or frustrations.
Organisational dynamics: It can’t sense internal politics, priorities, or resource constraints that shape design decisions.
Granular Requirements: No matter how well-crafted or detailed a prompt is, it often falls short of capturing the specific requirements, constraints, and subtle nuances that are intrinsic to the problem at hand.
Superficial Creativity
AI can remix patterns, but:
It tends to replicate common UX patterns rather than invent novel or disruptive ones.
It struggles to balance originality with usability, especially when pushing boundaries is needed.
Design systems or templates generated by AI can feel generic or soulless without human curation.
Poor Strategic Thinking
AI doesn’t do well with:
Prioritisation: Choosing what not to design or build.
Trade-offs: Understanding when to favour simplicity over completeness, or usability over visual flair.
Vision alignment: It can’t align designs to a long-term product vision or brand philosophy.
Inadequate User Research
AI can analyse data, but:
It can’t conduct contextual interviews or field studies, or interpret complex qualitative feedback.
It doesn’t know which questions to ask in research or how to adapt based on stakeholder tensions.
It can hallucinate insights from limited data, leading to biased or false assumptions.
Ethical Blind Spots
AI lacks:
Judgment about dark patterns or exploitative design.
Sensitivity to accessibility, inclusion, or ethical design trade-offs.
Accountability — it can’t take responsibility for harm caused by poor UX.
Limited Collaboration Capabilities
UX is deeply collaborative, yet:
AI still can’t participate in co-creation workshops, product strategy discussions, or design critiques in a truly meaningful way.
It struggles to maintain a consistent voice and direction across multiple iterations with a team.
Part 3 - How to benefit from AI as UX designers
Now that we understand where we need to exercise caution when applying AI to the UX design process, let’s take a look at how AI can actually enhance both our analytical depth and our efficiency. Below is a list of tools I’m currently exploring—I'll keep updating it as I uncover more of what they have to offer.
Research & Discovery
How AI helps:
Speeds up user research, identifies patterns in large datasets, and summarises insights from interviews, surveys, or analytics (a minimal sketch of that last point follows below).
Tools:
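To make that last point concrete, here is a minimal sketch of how an LLM could batch-summarise interview notes. It assumes the OpenAI Python client and an API key in the environment; the model name and the prompt are illustrative placeholders, not recommendations, and any comparable API would do.

```python
# Minimal sketch: batch-summarising user-interview notes with an LLM.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; model name and prompt are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a UX research assistant. Summarise the interview notes into: "
    "(1) the top three pain points, (2) recurring themes, and (3) direct "
    "quotes worth keeping. Be concise and do not invent details."
)

def summarise_interview(notes: str) -> str:
    """Return a structured summary of one interview transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": notes},
        ],
    )
    return response.choices[0].message.content

# The point is acceleration, not replacement: review every summary yourself
# before it feeds into personas, journey maps, or decisions.
```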
Ideation & Brainstorming
How AI helps:
Suggests UX patterns, layout ideas, user flows, or concepts based on limited input.
Acts as a sparring partner for creative direction.
Tools:
ChatGPT / Gemini / Claude – Great for brainstorming and streamlining ideas, user stories, user journeys, and personas.
Miro – Offers smart suggestions and synthesis during workshops or brainstorming sessions.
Whimsical AI – Can auto-generate preliminary flowcharts and wireframes from prompts.
UI Design & Prototyping
How AI helps:
Generates screens, layout suggestions, and visual themes.
Offers responsive design suggestions and, potentially, accessibility-compliance checks.
These tools are still fairly basic in the sophistication of their output, but they are in continuous development.
Tools:
Figma AI – Offers great capabilities for image and copy generation, translations, and automation. It can also generate wireframes and UIs from text prompts, though these capabilities are still maturing.
Magicpatterns – Noticeably realistic outputs; generates Figma files and code.
Galileo AI – Specifically fine-tuned for UI, with an eye for aesthetics.
UX-Pilot, Uizard – Create wireframes and UI mockups from text descriptions.
Musho, Framer – Web-oriented, integrated with Figma, and include responsive designs.
Usability Testing & Feedback
How AI helps:
Simulates user interaction flows, predicts friction points, and generates test scenarios.
Analyses user behaviour data to surface usability issues (a simple heuristic is sketched below).
Tools:
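As an illustration of surfacing usability issues from behaviour data, here is a minimal sketch of a classic heuristic: detecting "rage clicks" in an event log. The log schema (user_id, element, timestamp) is hypothetical; adapt it to whatever your analytics platform exports.

```python
# Minimal sketch: flagging potential friction points in click-stream data.
# The event-log schema (user_id, element, timestamp) is hypothetical; adapt
# it to whatever your analytics platform exports.
import pandas as pd

def find_rage_clicks(events: pd.DataFrame,
                     window_seconds: float = 2.0,
                     min_clicks: int = 3) -> pd.DataFrame:
    """Flag elements a user clicked min_clicks or more times within a short
    window, a common proxy for 'this looks clickable but isn't responding'."""
    events = events.sort_values("timestamp")
    flagged = []
    for (user, element), group in events.groupby(["user_id", "element"]):
        ts = group["timestamp"].reset_index(drop=True)
        for i in range(len(ts) - min_clicks + 1):
            burst = ts.iloc[i:i + min_clicks]
            if (burst.iloc[-1] - burst.iloc[0]).total_seconds() <= window_seconds:
                flagged.append({"user_id": user, "element": element,
                                "first_click": burst.iloc[0]})
                break  # one flag per user/element pair is enough
    return pd.DataFrame(flagged)
```

Heuristics like this are exactly where AI-assisted analysis shines: the machine finds the candidates at scale, and the designer interprets why users are frustrated.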
Analytics, Optimization & Iteration
How AI helps:
Analyses user data at scale, suggests optimisations, A/B testing strategies, and UX improvements.
Predictive modelling for user churn or feature adoption (a baseline is sketched below).
Tools:
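And here is a minimal sketch of what predictive churn modelling can look like at its simplest: a logistic-regression baseline built with scikit-learn. The feature names are hypothetical, since the real signals depend on your product's telemetry.

```python
# Minimal sketch: a baseline churn-prediction model with scikit-learn.
# Feature names are hypothetical; the dataframe needs those columns plus a
# binary 'churned' label.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

FEATURES = ["sessions_last_30d", "avg_session_minutes", "features_adopted"]

def train_churn_model(df: pd.DataFrame) -> LogisticRegression:
    X_train, X_test, y_train, y_test = train_test_split(
        df[FEATURES], df["churned"], test_size=0.2, random_state=42
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"Hold-out ROC AUC: {auc:.2f}")  # sanity check before trusting it
    return model
```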
Design-to-Code Handoff
How AI helps:
Converts design files into front-end code or design tokens.
Bridges the gap between design intent and developer handoff (see the sketch after the tool list).
Tools:
Anima – Turns Figma designs into responsive HTML, React, or Vue code.
Lovable – Creates and deploys full-stack web apps from ideas described in natural language—no coding required.
Figma Dev Mode – Now enhanced with AI for describing components and generating tokens.
Builder.io – Visual development platform that leverages AI to transform visual design into working components.
Penpot – Open-source UI/UX platform for designer–developer collaboration.
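To give a feel for what token-level handoff automation involves, here is a minimal sketch that flattens a nested design-token JSON export into CSS custom properties. The token structure is hypothetical (loosely modelled on the W3C design-tokens draft); real exports vary by tool.

```python
# Minimal sketch: flattening a design-token JSON export into CSS custom
# properties. The token structure is hypothetical; real exports vary by tool.
import json

def tokens_to_css(tokens: dict, prefix: str = "") -> list[str]:
    """Recursively flatten nested tokens into '--name: value;' lines."""
    lines = []
    for name, value in tokens.items():
        key = f"{prefix}-{name}" if prefix else name
        if isinstance(value, dict):
            lines.extend(tokens_to_css(value, key))
        else:
            lines.append(f"  --{key}: {value};")
    return lines

tokens = json.loads(
    '{"color": {"primary": "#0055ff", "surface": "#ffffff"},'
    ' "radius": {"card": "8px"}}'
)
print(":root {\n" + "\n".join(tokens_to_css(tokens)) + "\n}")
# :root {
#   --color-primary: #0055ff;
#   --color-surface: #ffffff;
#   --radius-card: 8px;
# }
```

Even this toy version shows why the tools above are valuable: the mechanical translation is easy to automate, while deciding what the tokens should be remains a design decision.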
In a nutshell
While AI is supercharging parts of the UX process, it’s not a replacement—especially in areas requiring deep empathy, contextual judgment, and creative intuition. Human designers bring the critical thinking, subjectivity, and vision that AI still can’t replicate effectively. Think of AI as a design copilot: fast, smart, and incredibly helpful—but still needing a human in the loop to guide the vision and define what “good” means.
PS. In case you're not familiar, the playful title of this thought experiment is a nod to Philip K. Dick’s novel Do Androids Dream of Electric Sheep?—brilliantly adapted for the big screen by Ridley Scott in Blade Runner.