12 September, 2025

Be More Like a Five-Year-Old: A Pickleball Parable About True Inclusion

I was at the grocery store a few weeks ago, navigating the aisle to the meat counter with my white cane, when I heard the unmistakable, high-pitched stage whisper of a small child.

"Mommy, why is that man poking the floor with that stick?"

I could feel the parent's immediate mortification. There was a frantic "Shhh! Don't be rude!" followed by the sound of a shopping cart being hastily wheeled away. I smiled. The parent thought they were teaching their child politeness. What they were actually teaching them is that disability is a topic too scary, too awkward, or too taboo to talk about. They were teaching the child to replace curiosity with silence.

And frankly, we could all use a lot less silence and a lot more of that five-year-old’s curiosity.


The Unfiltered Lens of Childhood


Children are, in many ways, the ultimate agents of inclusion. They operate from a place of pure curiosity. Their world is a canvas of unanswered questions, and they haven't yet learned the complex social rules that tell adults to look away, to not stare, and to definitely not ask direct questions.

A child might ask:

  • "Can you see my bright red shirt?"

  • "How do you read with your fingers?"

  • "Does it hurt to be blind?"

An adult, on the other hand, will often perform a masterclass in awkward avoidance. They’ll speak to the person I’m with instead of me, or they'll grab my arm to "help" me without asking. They operate on a thick layer of assumptions, believing they know what is best, what is polite, and what I need, all without uttering a single question. This well-intentioned silence is infinitely more isolating than a child’s blunt inquiry.

When we shush a child for asking about a disability, we’re not just deflecting an awkward moment. We are teaching them that difference is something to be ignored, not understood. We are building the foundation for a future adult who will make decisions based on assumptions, because they were taught that asking is rude.


The Danger of Designing for a Ghost


This learned behavior of not asking questions carries directly into the professional and social worlds. It’s how we end up with inaccessible websites, buildings with "accessible" entrances that lead to a flight of stairs, and products designed without consulting the very people who will use them.

So often, decisions are made in a boardroom by people who think they know what a specific community needs. They might even engage in the "one and done" consultation: they ask one person with a disability for their opinion and consider their due diligence complete. They checked the box. But my experience as a blind person is not a monolith. It doesn't represent the needs, desires, and opinions of every other person who is blind. Assuming so is like asking one person from Texas what the entire United States thinks about barbecue. You’ll get an answer, sure, but it won’t be the whole story.

Kids, on the other hand, don’t have this filter yet. While their assumptions are based on their own lived experiences—thinking every dog is a "doggie" or that all grown-ups love broccoli—they are incredibly willing to have those assumptions corrected. They’ll ask their parents, or better yet, they’ll ask the person directly. Their curiosity is a tool for learning, not a prelude to judgment. It’s a trait we should be nurturing, not extinguishing.


A Pickleball Parable


I want to share a story that shows what happens when people choose curiosity over assumptions. Recently, a team event was planned at work: Pickleball.

My heart sank just a little when I first heard. It was an automatic response, grounded not in anything my team members had done but in how interactions like this have gone in the past. It's a common experience for people with disabilities. A fun, physical activity is planned, and you're immediately doing the mental calculus: "How will I participate? Will they just stick me on the sidelines? Will it be more awkward if I go or if I don't?"

But then something amazing happened. A team member reached out. "Hey," they said, "we're planning this Pickleball day. We'd love for you to be there. What can we do to make it work for you?"

They didn't assume I couldn't or wouldn't want to play. They opened a dialogue. But they didn't stop there. They went and did their own research. A few days later, they followed up. "We've ordered some pickleballs with bells inside so they'll be audible."

When I showed up, the audible balls were waiting, and my teammates and I had a great time bonding and playing the game with varying levels of skill and success. The coaches at the venue were delightful as well. At no point did I question whether I was meant to be there, and at no point did I feel excluded. They had also, in a hilarious and heartwarming display of knowing me as a person and not just as a disability, made sure the cooler was stocked with my favorite Cokes.

They didn't treat me as a problem to be solved. They saw me as a team member to be included. They chose curiosity and action over assumption and avoidance. That, right there, is the difference between token accessibility and true, heartfelt inclusion.


Your Call to Action: Ask the Question


We can all do better. The next time you encounter someone whose experience is different from your own, resist the urge to assume. Fight the voice in your head that was conditioned by a well-meaning adult telling you it's "rude" to ask.

Channel that inner five-year-old. Be curious. Be respectful. Open a dialogue. Ask the question. You might just be surprised by how much you learn and how much more inclusive our world can become when we’re all brave enough to stop guessing.


A Few Final Notes:

  • The views and opinions expressed in this article are entirely my own and do not necessarily represent the views of all blind people. We are not a monolith!

  • Furthermore, these views do not reflect the opinions or policies of my employer.

  • This article was crafted with the assistance of Google's Gemini to help with clarity, readability, and brevity.

05 September, 2025

Seeing with Sound: My Adventures in Echolocation and The vOICe

Life as a blind person is a bit like playing a perpetual game of Marco Polo, except the world rarely shouts "Polo!" back. In a landscape built for vision, you learn to navigate with a blend of careful cane work, educated guesswork, and—when you misjudge a doorway—a surprising amount of upper-body strength. But what if I told you there are ways to "see" that have nothing to do with eyes? Welcome to my world, where I mix the ancient human skill of echolocation with the cutting-edge tech of The vOICe app to paint a vivid, if unconventional, picture of my surroundings.

The Symphony of Echolocation: Passive vs. Active

Echolocation is nature’s original sonar, used by pros like bats and dolphins. They emit sounds and listen for the echoes. I do the same, though I’m probably less graceful than a dolphin (but I do have an uncanny ability to find the last Coke bottle in the fridge).

Passive echolocation is my background app, always running. It’s the subtle shift in ambient noise as I approach a building, the way the air itself feels different near an open doorway, or the faint echo of my footsteps off a wall. It’s a built-in sonar that provides a constant, low-res stream of data about the space around me. I often know I'm approaching a tree or a wall long before my cane gets a chance to introduce it to me. It’s like having a blurry, black-and-white sketch of the world that warns me, "Hey, big thing ahead!"

Active echolocation is me switching to high-def. By intentionally making a sound—a sharp tongue click, a tap of my cane, or even just talking—I send out a sonic ping. Listening to how that sound bounces back gives me incredibly detailed information about an object's shape, size, and even texture. It’s the difference between knowing something is there and knowing it's a metal pole you're about to walk into. This "HD" sense is so versatile I use it for everything from navigating hiking trails and riding a scooter to something as mundane as finding the entrance to a building across a vast parking lot.

The vOICe: Turning Pixels into a Symphony

If echolocation is my sketchpad, The vOICe app is my vibrant set of watercolors. This ingenious app uses my phone’s camera to translate visual information into soundscapes in real time. The rules are simple:

  • Pitch = Height: The higher an object, the higher the pitch.

  • Loudness = Brightness: Brighter objects are louder.

  • Stereo Pan = Left/Right: An object on the left is heard in my left ear, and vice versa.
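
To make those three rules concrete, here's a tiny Python sketch of the idea. To be clear, this is only an illustration of the mapping, not The vOICe's actual code; the sample image, the 500-5000 Hz range, and the log-spaced pitch scale are assumptions I've made for the example.

```python
# A toy illustration of the three mapping rules above -- not The vOICe's
# actual code. The image, frequency range, and pitch scale are assumed.

# Tiny 4x8 grayscale "image" (0.0 = black, 1.0 = white), rows top to bottom.
image = [
    [0.0, 0.0, 0.0, 0.0, 0.9, 0.9, 0.0, 0.0],  # a bright patch up high...
    [0.0, 0.0, 0.0, 0.0, 0.9, 0.9, 0.0, 0.0],
    [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2],
    [0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7],  # ...and a brighter "floor"
]

LOW_HZ, HIGH_HZ = 500.0, 5000.0  # assumed pitch range, bottom row to top row

def column_to_tones(image, col):
    """Rules 1 and 2: every row becomes a tone; higher rows get higher
    pitch, and brighter pixels play louder."""
    n_rows = len(image)
    tones = []
    for row, pixels in enumerate(image):
        height = 1.0 - row / (n_rows - 1)              # 1.0 = top of image
        pitch = LOW_HZ * (HIGH_HZ / LOW_HZ) ** height  # log-spaced frequency
        tones.append((pitch, pixels[col]))             # (pitch, loudness)
    return tones

def pan_for_column(n_cols, col):
    """Rule 3: left columns pan left (-1.0), right columns pan right (+1.0)."""
    return 2.0 * col / (n_cols - 1) - 1.0

# The image is scanned left to right, one column per slice of time.
for col in range(len(image[0])):
    pan = pan_for_column(len(image[0]), col)
    loud = [(p, a) for p, a in column_to_tones(image, col) if a > 0.5]
    print(f"column {col}: pan {pan:+.2f}, loud tones: "
          + ", ".join(f"{p:.0f} Hz" for p, _ in loud))
```

In this toy frame, every column plays the bright "floor" as a steady low tone, while the bright patch shows up as two extra high-pitched tones panned just right of center: height as pitch, brightness as loudness, position as stereo.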

Imagine standing at a bustling crosswalk. While not essential for crossing a road, The vOICe turns the chaos into a symphony of orientation. The parallel flow of cars creates a steady sonic "shoreline," while the app paints an audible picture of the crosswalk lines and the curb on the other side. This layering of information gives me a powerful confidence that I'm walking a straight line. It also allows me to perceive things my cane could never hope to reach: an overhanging branch, the height of a street sign, or the general shape of a building. It's a powerful tool that enhances my coordination and spatial awareness in countless daily tasks.

The Learning Curve: It’s a Mountain, Not a Molehill

I won’t sugarcoat it: mastering these skills is not a weekend project. The learning curve is a majestic Mount Everest of auditory processing. For a long time, there was no mainstream training; most of us were self-taught pioneers, learning through trial, error, and a few too many close encounters with inanimate objects. It’s like learning to ride a unicycle while juggling flaming torches—exhilarating, but with a high potential for bumps and bruises.

This is where recent updates to The vOICe truly shine. The inclusion of AI-powered descriptions acts like a seasoned guide on your Everest climb. It helps you connect the complex soundscapes with what they actually represent, drastically shortening the time it takes to go from "What is that noise?" to "Oh, that's a bicycle leaning against a tree."

Not for Everyone: The Beautiful Diversity of Choice

As much as I love my sonic toolkit, it's crucial to understand that these methods aren't for everyone. The cognitive load of constantly interpreting sound can be intense, and some people may find it more distracting than helpful. Others have phenomenal cane skills or guide dogs and simply don’t need it.

Accessibility isn't about finding one perfect solution; it's about having a rich variety of choices. What works for me is just that—what works for me. And let's be honest, sometimes it's just easier to ask a talking GPS, whether that be another person or an actual GPS app, for directions than to compose a sonic symphony of my surroundings.

Your Turn to See with Sound

Intrigued by the idea of painting pictures with sound? You can start your own journey of discovery. To learn more about the incredible technology behind The vOICe and to download the app, I highly recommend visiting the official website at seeingwithsound.com. There, you’ll find tutorials, training resources, and a community of fellow sonic explorers. Dive in and start listening—you might be amazed at what you see.


All views expressed in this article are my own and may not reflect those of my employer. This piece was written with the aid of Google's Gemini, which helped with clarity, readability, and brevity.

29 August, 2025

Four Years, Countless Connections, and the Unstoppable Drive for Independence

Can you believe it? Four years ago, on August 30, 2021, I started my journey at Google as a full-time employee, and what an incredible ride it's been. It feels like just yesterday I was navigating the sprawling campus for the first time as a vendor, fueled by excitement and a slightly terrifying amount of free snacks and, of course, Cokes. Today, I'm filled with immense gratitude – for the opportunity to impact millions of users at Google's incredible scale, and most importantly, for the amazing colleagues I've had the privilege to work alongside. Your dedication, your passion, and the connections we've built over these years are truly inspiring. Thank you, from the bottom of my heart.


"Do You Have an Assistant?" – My Independence, Not Your Concern (Mostly)


Now, onto this week's topic, which hits pretty close to home for me. It’s about independence, and specifically, the subtle (and sometimes not-so-subtle) ways it's eroded by an inaccessible world.

Let's talk about a common scenario: I'm out, living my life, and someone, often with the best intentions, asks me, "Do you have an assistant?" My internal monologue immediately launches into a full-scale Broadway musical of exasperation. An assistant? For what, exactly? To help me order a coffee? To point out the obvious "push" sign on a door that clearly needs to be pulled?

While I appreciate the thought behind it (usually), this question, and others like it, often stem from a fundamental misunderstanding: that people with disabilities are inherently less capable or constantly in need of supervision. Newsflash: I'm an adult. I have a job. I pay my taxes (mostly on time). I can generally navigate the world just fine, thank you very much. The frustration isn't about the question itself; it's about the underlying assumption that my independence is somehow less valid, less natural, or less deserved.


The Battle of the Lamps and the Oil Diffusers: A Tiny War for Accessibility


This brings me to why I write these blog posts, often dissecting the bewildering inaccessibility of even the simplest devices. You know the ones – the lamps with touch controls that are a mystery to everyone, including the person who designed them, or the oil diffusers that beep like a frantic smoke alarm just to change a setting.

I've ranted about smart devices that are anything but smart when it comes to inclusive design. I've mused about the tactile nightmare of modern washing machines. It might seem trivial – a lamp, an oil diffuser – but these small, everyday frustrations accumulate. They're tiny cuts that, over time, bleed away a sense of control and autonomy.

My goal in sharing these posts isn't just to vent (though, let's be honest, that's a nice bonus). It's to shine a light on these seemingly insignificant design flaws that, for many of us, create significant barriers. I want to highlight the insidious degradation of accessibility in a world that often prioritizes sleek aesthetics over fundamental usability. I want to tell the stories of when a simple task becomes an Olympic-level challenge, not because of my limitations, but because of designers' lack of foresight.

Think about it: independent travel, independent access to appliances, independent and private access to forms and medical equipment. These aren't luxuries; they're fundamental aspects of living a full and dignified life. When these are compromised, it chips away at something deeper: the feeling that I have a right to participate and live in this world, and that I am not part of a minority that can simply be pushed aside. We all have a right to navigate our lives with ease and privacy.


Creating Change, One Product at a Time


So, what do I hope to accomplish by sharing these posts on this little slice of the web? I hope to describe my experiences in a way that resonates. I hope to provide an opportunity for reflection, not just for those who identify with my struggles, but for everyone who might not have considered these perspectives before. And, most importantly, I hope to do my part to create change, one product at a time. Because truly, a more accessible world benefits everyone. Who doesn't want a lamp that reliably turns on or an oil diffuser that doesn't require a decoder ring?

I'm an optimist at heart, and I truly believe that by raising awareness, by sharing our stories, and by demanding better, we can push for a future where design is inherently inclusive. A future where "do you have an assistant?" is a question reserved for actual assistants, not for basic daily tasks.

So, I invite you to reflect, to comment, and to share if you find value in this content or have thoughts about what I post. Let's start a conversation. Feel free to reach out and ask questions – I'm always happy to chat!


Disclaimer: All content in this article represents my own views and may not represent the views of my employer. This article was written with the help of Google's Gemini, to aid with brevity, readability, and clarity; however, all thoughts are, nonetheless, my own.


22 August, 2025

Beyond the Verbal Descriptions: My Dreams for a Multisensory Future

As a blind blogger navigating a sighted world, I spend a lot of time thinking about technology. My current digital companions—the screen reader that whispers text into my ear, the GPS app that guides my footsteps (mostly!), and the AI camera that offers glimpses into the visual world—form a patchwork quilt of accessibility. Each piece is invaluable, but there are still significant gaps between the seams. This tech tells me what it's been programmed to see, but it doesn't tell me what I'm missing.

If I could wave a magic tech wand, I wouldn't just ask for a faster screen reader or more accurate GPS. My dreams stretch far beyond the limitations of today's accessibility tools. I'm dreaming of a truly multisensory future, one that doesn't just narrate the world but allows me to perceive it in its full, dynamic, and often chaotic glory.


The World Unveiled: Weaving a Real-Time Sensory Tapestry


Think about the sheer density of information a sighted person absorbs in a single glance while walking down the street. It’s a constant, effortless stream of data that builds a complete picture of the environment. For me, much of this information remains invisible, an entire layer of context that is simply absent.


The Problem: The Unseen and Unspoken World


The world is covered in text. Store names like "Luigi's Pizzeria" and "Corner Bookstore," sale signs screaming "50% Off!", handwritten opening hours taped to a door, official street signs, and crucial warnings like "Wet Floor" or "Watch Your Step." While Optical Character Recognition (OCR) can sometimes capture this, it's a clunky, stop-and-scan process. It can’t tell me in real time that I'm approaching a sign, where it is, or if it's even relevant to me.

But it’s so much more than just static text. It’s the dynamic, fleeting cues: the digital display on a bus stop counting down the minutes until the next arrival, the illuminated "Walk" signal across a six-lane intersection, the temporary poster for a local farmer's market this weekend. It’s the construction sign far down the block warning of a detour, information that would allow me to reroute long before my cane ever finds the barrier.

This information gap robs me of something sighted people take for granted: the joy of serendipity. They might be driving to the grocery store and spot a "Grand Opening" banner for a new taco shop that isn't on any map yet. They might notice a handwritten sign for a neighborhood garage sale or a flyer for a community concert. My journeys, by contrast, are almost entirely destination-focused. My GPS guides me from point A to point B, but it’s blind to everything in between. It misses the texture, the spontaneity, and the discoveries that make a neighborhood feel like a living, breathing community.

This extends even to recreational spaces. On a hike, a sighted person follows blazes painted on trees, reads interpretive signs about local flora and fauna, and takes in the view from a designated scenic overlook. These markers are essential for navigation and enrichment, yet they exist completely outside the digital realm of my current tools. Point-to-point apps miss the entire point—that the journey itself is the experience, and I'm missing all the context along the way.


Moving Beyond: A Symphony of the Senses


My dream isn't just to have an AI describe a chair in front of me. It's about creating a rich, intuitive, and real-time awareness of my surroundings. It’s about weaving a multisensory tapestry from the threads of information that are currently invisible to me.

Imagine a system that integrates several technologies seamlessly. Spatial audio, delivered through bone-conduction headphones, would create a 3D soundscape. The pizzeria on my left might be represented by a soft, pleasant chime emanating from that direction, while the library on my right has a distinct, quiet hum. The approaching bus could be a low-frequency rumble that grows louder and is perfectly placed in the soundscape, giving me an intuitive sense of its location and speed.

This would be paired with haptics. Imagine wearables that translate visual information into touch. The texture of a crosswalk could be mimicked by a subtle vibration in my shoes. The sharp angles of a "Stop" sign could be traced onto my palm by a series of targeted pulses from a smart glove. A low-hanging branch could trigger a gentle tap on my shoulder, warning me before I ever get close.

And for specific details, a small, refreshable tactile graphic or Braille display on my wrist, or embedded into a wearable vest, could provide discrete information. As the spatial audio chimes for "Luigi's Pizzeria," the display could flash "Pizza - Pasta - Open." The painted trail marker on a tree could be rendered as a simple, tactile arrow, confirming the path ahead.

This combination of sound, touch, and texture would solve the "missing context" problem. It would transform a sterile, point-to-point journey into an exploratory experience. It would restore the potential for serendipity—that audio cue for a new coffee shop, the haptic buzz indicating a sale sign in a window—and empower me to make spontaneous decisions. This future isn't about an AI telling me what it sees; it's about giving me the raw sensory data to build my own mental model of the world.


Entering the Virtual Realm: A Truly Immersive Experience


This need for multisensory input is just as critical in the virtual world. Attending an online presentation or a webinar can often feel like listening to a radio broadcast of a television show. I hear a presenter say, "As you can see from this chart..." and I'm immediately left behind, reliant on a brief, often inadequate description of complex visual data.

My dream for the virtual space is a truly multisensory experience. Imagine "feeling" the data on a bar graph, where each bar is represented by a different texture or level of vibration on a haptic surface. Imagine hearing different data points in a scatter plot as distinct musical notes in a spatial audio field, allowing me to perceive clusters and outliers intuitively. A complex organizational chart could be rendered as a simplified, tactile diagram on my display. Instead of just hearing "a pie chart showing a 60% increase in daily active users," I could feel the dominant wedge as a larger, rougher texture and hear its data point as a more prominent tone.
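
To make that idea a little less abstract, here's a toy Python sketch of one way a scatter plot might be sonified. The pentatonic scale and the pan mapping are choices I've invented for the example, not an existing standard or product.

```python
# A toy sonification of a scatter plot -- illustration only; the scale and
# the mappings are assumptions, not any real charting or audio API.
PENTATONIC = [261.6, 293.7, 329.6, 392.0, 440.0]  # C major pentatonic, in Hz

points = [(0.10, 0.20), (0.15, 0.25), (0.20, 0.22),  # a cluster, lower left
          (0.90, 0.95)]                               # an outlier, upper right

def sonify(x, y):
    """x (0..1) becomes stereo position; y (0..1) picks the note's pitch."""
    pan = 2.0 * x - 1.0                               # -1 = left, +1 = right
    note = PENTATONIC[min(int(y * len(PENTATONIC)), len(PENTATONIC) - 1)]
    return pan, note

for x, y in points:
    pan, note = sonify(x, y)
    print(f"point ({x:.2f}, {y:.2f}) -> pan {pan:+.2f}, note {note:.1f} Hz")
```

Played in sequence, the cluster lands as three similar low notes bunched on the left of the stereo field, while the outlier stands apart as a single high note on the right: clusters and outliers, perceived by ear.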

The Problem: Virtual experiences are overwhelmingly visual, with accessibility often limited to auditory descriptions and basic screen reader compatibility. This creates a significant barrier to full engagement and deep understanding.

Moving Beyond: By translating visual data into a rich tapestry of haptics, spatial audio, and dynamic tactile graphics, virtual environments could become truly immersive. This would level the playing field, transforming passive listening into active perception and unlocking the full collaborative and educational potential of the digital world.


My Interface, My Rules: The Power of Radical Personalization


One of the subtle but persistent frustrations of digital life is the one-size-fits-all approach to interfaces. While accessibility settings offer some customization, they rarely allow me to fundamentally reshape my interaction with a device or an application to fit my specific needs.

My dream is a future where I am the architect of my own digital experience. Imagine opening a complex banking app and, with a single command, activating a "Minimalist Mode" that I designed myself. Instead of tabbing through dozens of links and buttons, my custom interface would present only the four things I ever do: "Check Balance," "Transfer Funds," "Pay Bill," and "Deposit Check."
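
Under the hood, this needn't be exotic. Here's a toy Python sketch of the core idea: the user, not the designer, owns a profile that filters and reorders an app's actions. The banking actions and the profile format are invented purely for illustration.

```python
# A toy sketch of a user-designed "Minimalist Mode" -- the app's action
# names and the profile format are invented for illustration.
full_app = ["Check Balance", "Transfer Funds", "Pay Bill", "Deposit Check",
            "Open CD", "Order Checks", "Dispute Charge", "Find ATM",
            "View Rewards", "Loan Offers", "Settings", "Help"]

# The user decides what the interface contains and in what order.
minimalist_mode = ["Check Balance", "Transfer Funds", "Pay Bill",
                   "Deposit Check"]

def render_menu(available, profile=None):
    """Show only the actions in the active profile, in the user's order."""
    visible = [a for a in (profile or available) if a in available]
    for i, action in enumerate(visible, start=1):
        print(f"{i}. {action}")

render_menu(full_app, minimalist_mode)  # four focused items instead of twelve
```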

This extends to the very nature of interaction. Perhaps for my music app, I want a physical interface. I dream of a modular device where I can assign physical dials for volume and scrubbing, and tactile buttons for play, pause, and skip. The device would dynamically remap itself based on the application, providing tangible, muscle-memory-friendly controls.

Furthermore, I want the power to choose my level of autonomy. Sometimes, when exploring a new app, I want the full, detailed interface with granular control over every option. I want to learn its structure and make every decision myself. But other times, when I just need to order a pizza, I want an "Automation Mode." I want to be able to say, "Order my usual," and have an AI handle the entire point-to-point process of navigating the app, selecting the items, and checking out. The future I envision empowers me to fluidly move between these modes—from a hands-on, information-rich explorer to a hands-off, efficient director, all within the same product.

The Problem: Current accessibility forces users to adapt to pre-designed interfaces. We have limited control over the fundamental design and flow of our digital interactions.

Moving Beyond: The future of accessible technology lies in radical personalization. It's about providing tools that allow users to design their own interfaces, choose their preferred sensory modalities, and select their desired level of automation. This shift in design philosophy would foster greater efficiency, deeper engagement, and a more joyful, less frustrating user experience for everyone.


A Glimpse into Tomorrow


These aren't science-fiction fantasies; they are logical extensions of technologies that are already emerging. Multiline Braille displays are a reality today, though they remain quite limited. Devices that use LiDAR to provide an awareness of obstacles also exist, though they often can’t identify the type of obstacle. And a number of companies seem to be thinking about ways to augment or replace the cane, which is limited in the kind of feedback it can provide. The future I see would bring many of these existing solutions together, and perhaps create wholly new ones, to build an experience that lets me be more engaged in what is happening around me. It's a future where technology doesn't just accommodate blindness but actively works to bridge the sensory gap; where my senses are engaged in a rich and meaningful symphony; where I have the power to shape my digital world; and where I can explore my surroundings, both physical and virtual, with a newfound sense of freedom, context, and wonder.


Disclaimer: Please note that the views and opinions expressed in this blog post are solely those of the author and do not necessarily reflect the views or policies of their employer.

Editorial Note: This post has been edited by Gemini for clarity and brevity. All content and opinions remain those of the author.

15 August, 2025

One Size Fits... Nobody? Why Customization is King in User Experience 👑

Welcome back to the weekly ramble! Let's talk about the digital world. A vast, sprawling landscape of apps, websites, and gadgets, all designed to make our lives easier, more connected, and supposedly more streamlined. The holy grail in this digital colosseum is User Experience (UX). We're constantly sold the gospel of "intuitive design" and "seamless interaction." Flat screens, voice-only interfaces, getting rid of ports, limiting user interface customization options – these are all examples. But here's a spicy take: in the relentless pursuit of a single, "perfectly simple" experience, we've created a world where one size rarely fits all. In fact, it often fits absolutely nobody particularly well.

Think about it. We are a gloriously, wonderfully, and sometimes frustratingly diverse species. We don't all think alike, work alike, or perceive the world in the same way. Some of us are digital night-owls, and the searing white of a default light mode feels like staring into the sun. Others have meticulously organized their digital lives into a labyrinthine system of folders and tags that would make a librarian weep with joy. This diversity isn't just about preference; it's about need. According to the World Health Organization, about 1.3 billion people, or 16% of the global population, live with a significant disability. That's a market size nearly as large as the population of China or India, and it doesn't even account for temporary or situational limitations. Ever tried to use your phone one-handed while carrying groceries? That's a situational motor impairment. Ever tried to see your screen in bright sunlight? That's a situational visual impairment.

Forcing everyone into the same interaction mold is like insisting we all wear a size 9 shoe. It’s going to be uncomfortable for most, painful for some, and downright impossible for others. It’s time we moved past the myth of the "average user" and started designing for real, complex human beings.


The Perils of Presumption: Examples from the UX Trenches 🚧


Product teams, often with the best of intentions, make decisions that bake assumptions into the very core of their products. They aim for "clean" or "simple," but in doing so, they can lay digital minefields for the very users they're trying to help.

  • The Overzealous Automator: We've all been ambushed by this well-meaning tyrant. The app update that decides for you that dark mode must now sync with your phone's system settings. Suddenly, your professional work app looks like it's ready for a rave at 2 PM. Or the text editor that "helpfully" turns your straight quotes into “smart” quotes, instantly breaking the code snippet you just pasted. The assumption is that one size fits all and automation is always a benefit. The reality is that it yanks control away from the user and can actively sabotage their work. Now, not all automation is evil. A great automation is like a well-trained butler, anticipating your needs without getting in your way. A bad one is like a rogue robot vacuum that has decided your cat would look better without a tail. The key is user control: let automation be a choice, not a mandate, and make it clear what it's doing.

  • The Attention-Grabbing Auto-Player: You navigate to a website, perhaps in a quiet office or on a crowded bus. Suddenly, a video erupts from your speakers, broadcasting your interest in "10 Weirdest Cat Videos" to the entire world. This design assumes the user is alone, wants to watch the video immediately, and has unlimited data. For a user with a vestibular disorder, the sudden unwanted motion on screen can be disorienting or even nauseating. For someone using a screen reader, it's an auditory nightmare—an extra layer of noise they now have to have the screen reader shout over just to navigate the page. Let's call this what it is: it’s not a feature; it’s an ambush.

  • The Curse of the "Sleek" Kiosk: This one is a masterclass in prioritizing aesthetics over people. Think of modern ATMs, airport check-ins, or fast-food ordering screens that are now just giant, glossy touchscreens. On the surface, they look futuristic. But the underlying assumption is that all users can see and physically interact with a touchscreen, that they can magically discover any hidden accessibility features, and that they are thrilled to learn a brand-new interface when all they really want to do is order a burger before they die of hunger. For a user who is blind, this sleek glass rectangle is often a silent, unusable barrier. Some kiosks might include a screen reader or physical keypad, but then the next set of assumptions kicks in: the user must have a pair of 3.5mm wired headphones on them (not Bluetooth, not USB-C), and they must be willing to hold up the line while they learn the unique interaction paradigm of this specific machine. The frustrating result? Many resort to the alternative: asking an employee or a stranger for help, completely sacrificing their privacy in the process. This isn't inclusive design; it's shifting the burden. It turns a fundamental need into an afterthought that the user, not the designer, is forced to solve.


The Case for Customization: Benefits and (Reformed) Pitfalls ✨


So, what's the antidote to this prescriptive design philosophy? The glorious, empowering, and profoundly necessary embrace of customization. It's about giving users the keys to their own experience.

The Undeniable Benefits:

  • Dramatically Increased Accessibility: This is the most critical benefit. Allowing users to adjust text size, change color contrast, remap controls, use an interface with different modalities, or enable captions isn't a "nice-to-have"; it's a lifeline that allows people with diverse abilities to participate fully in the digital world. This aligns with established best practices like the Web Content Accessibility Guidelines (WCAG).

  • Sky-High User Satisfaction: Users who can tailor a product to their exact needs and tastes feel a sense of ownership and control. The product ceases to be a rigid tool and becomes a personal assistant. That feeling of "this just works for me" is what builds passionate, long-term brand loyalty. I don’t have stats to back this up, but I have no doubt that research would bear this out.

  • A Massive Boost in Productivity: Power users, in particular, thrive on customization. The ability to create custom shortcuts, rearrange toolbars, and set up specific workflows can transform a clunky process into a lightning-fast one, saving time and reducing frustration.

  • Expanded Audience and Market Reach: By building a flexible product that caters to many different needs, you are, by definition, creating a product that more people can use. You're not just serving the mythical "average user"; you're welcoming the power user, the accessibility user, and the user who just has a particular preference. That's good for people, and it's good for business.

Now, I can hear the counter-arguments brewing. Let's tackle the common "pitfalls" of customization and see if we can't reframe them as opportunities.

  • The "Complexity" Bogeyman (Pitfall #1)

    • The Fear: "If we add too many options, the settings menu will become a labyrinth, and users will be overwhelmed!"

    • The Opportunity: This is a design challenge, not a dead end. Use progressive disclosure. Keep the main interface clean and simple, but have a clearly labeled "Settings" or "Preferences" area. Within that, you can have 'Basic' and 'Advanced' tabs. Think of it like a car: the dashboard gives you the essentials (speed, fuel), but a mechanic can pop the hood for fine-tuning. You provide sensible, well-researched defaults, so the product works great out of the box, but you empower those who need or want to dig deeper.

  • The "Cost" Conundrum (Pitfall #2)

    • The Fear: "Building all these customization features will take too much time and engineering resources."

    • The Opportunity: This is a classic case of short-term thinking. Investing in a flexible design up front is vastly cheaper than the painful process of retrofitting it later. Think of it like building a house. You can build it with standard, narrow doorways and steep front steps. A year later, when you need to accommodate a wheelchair or a baby stroller, you're faced with an expensive, messy renovation project—tearing out frames, pouring new concrete, and trying to make it look like it wasn't an afterthought.
      The alternative is to design the house with wider doorways and a gently sloping, integrated walkway from the very beginning. The initial cost is marginally different, but it's fundamentally more useful to everyone from day one, and you completely avoid the massive future expense of a renovation.
      Software works exactly the same way (see the sketch just after this list). Retrofitting a feature like adjustable text size into an app with hard-coded font values means a developer has to painstakingly hunt through hundreds of files and change each value by hand. It's tedious, expensive, and a great way to introduce new bugs. Building it in from the start means designing your components to pull their styling from a central theme file. Want to change the font size? You change one line of code. This approach doesn't just make your product more accessible; it makes your entire codebase more robust, maintainable, and cheaper to update in the long run. A simple rebrand or adding a new "extra large" text option becomes a simple tweak instead of a multi-week project.

  • The "User Confusion" Catastrophe (Pitfall #3)

    • The Fear: "Users won't know what the options do, and they'll mess up their experience."

    • The Opportunity: Guide them! Use clear, plain language (not technical jargon) to explain what each setting does. A simple tooltip or an "i" icon can provide context. Better yet, create a simple setup wizard on first launch. "Welcome! Let's get things set up. Do you prefer a light or dark theme? Would you like to use a screen reader? Do you want to connect your Bluetooth headphones for greater privacy? Would you like to connect a physical keyboard or a Braille display? Would you like to increase the default text size?" This educates and empowers users from their very first interaction, making them feel catered to, not confused.
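
And to make the "one line of code" claim from Pitfall #2 concrete, here's a minimal Python sketch of the central-theme idea. The Theme and Label classes are stand-ins I've invented for illustration, not any real UI toolkit.

```python
from dataclasses import dataclass

# A minimal sketch of pulling styles from a central theme instead of
# hard-coding them -- Theme and Label are invented for illustration.

@dataclass
class Theme:
    base_font_pt: float = 12.0
    scale: float = 1.0  # the user's text-size preference: 2.0 = "extra large"

    def font_pt(self, role: str) -> float:
        """Components ask the theme for sizes instead of hard-coding them."""
        roles = {"body": 1.0, "heading": 1.5, "caption": 0.85}
        return self.base_font_pt * roles[role] * self.scale

class Label:
    def __init__(self, text: str, role: str, theme: Theme):
        self.text, self.role, self.theme = text, role, theme

    def render(self) -> str:
        # The hard-coded alternative -- `size = 12` repeated across hundreds
        # of files -- is exactly what makes retrofitting so painful.
        size = self.theme.font_pt(self.role)
        return f"[{self.text} @ {size:.1f}pt]"

theme = Theme()
screen = [Label("Account balance", "heading", theme),
          Label("$1,234.56", "body", theme)]
print("  ".join(widget.render() for widget in screen))

theme.scale = 2.0  # the user picks "extra large"; one line updates everything
print("  ".join(widget.render() for widget in screen))
```

Changing theme.scale is the one-line fix: every component that pulls its styling from the theme updates at once, no hunting through files required.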


Your Turn: Let's Build a Better Digital World! 🗣️


The conversation doesn't end here. We, the users, have the power to demand better, more flexible tools.

What's the one feature you desperately wish you could customize that no app seems to let you change? What are your biggest pet peeves with interfaces that make assumptions about you? Share your stories, your frustrations, and your brilliant ideas in the comments below. Let's make some noise and push for a future where our technology adapts to us, not the other way around.


Disclaimer: Please note that all views expressed in this blog post are solely those of the author and do not necessarily represent the views or opinions of his employer.

Editorial Note: This blog post was edited with the assistance of Gemini for clarity and readability; however, all ideas, opinions, and witty asides expressed herein remain those of the author.

 
