Eroding Personal Voice in Generated Content

[Image: AI-generated illustration]

This August, I marked my two-year milestone at Microsoft. As the day approached, I received notifications from my LinkedIn network—friends and colleagues celebrating with me through emojis and personal messages. But when I saw the post that triggered these responses, I was surprised. It wasn’t something I had posted myself. The system had generated a notification on my behalf, stating “Celebrating 2 years at Microsoft” as if it were my own words.

[Image: the system-generated content]

To be clear, a third-person notice that reads “Flora is celebrating 2 years at Microsoft” is very different from the same message posted directly from my account as “Celebrating 2 years at Microsoft.” The latter feels as though my voice has been used without my consent, creating a disconnect between what I actually expressed and what the system automatically generated on my behalf.

Even though I really am celebrating a milestone, there is a distinct difference in intent between allowing the system to notify my network and choosing to personally announce or write a post about it. A notification is a gentle reminder; a personal post is a deliberate act of communication, often intended to draw attention, whether for networking, marketing, or something else entirely. This distinction matters: context and intent shape how a message is received, and automated systems often fail to capture either.

While I had opted in to such notifications, the experience still left me uneasy, as though my voice had been borrowed without permission.

Now, I’m aware that this message wasn’t generated by AI. But it made me pause and consider a scenario that isn’t too far off: What if AI had crafted that message? What if AI systems began to take even more liberties in speaking for us, crafting messages, and making decisions on our behalf without our input? This thought opens a Pandora’s box of ethical concerns and design challenges.

As designers, we find ourselves navigating a delicate balance between advancing technology and maintaining the human touch in our work. The decisions we make often involve choosing between what technology can automate and what should remain personal and human-centered. This balance is crucial, especially in situations where automated systems risk overshadowing the individual's unique voice and experience.

The Potential Danger

If a simple LinkedIn notification can feel off, imagine the implications of AI systems that could impersonate us more regularly. This isn’t just about discomfort—it’s about crossing ethical lines. AI, when not thoughtfully designed, risks stripping away the authenticity of our interactions and eroding trust in the very systems we create.

Designing with Ethics in Mind

As AI evolves, we must anchor our design principles in ethical considerations. Here are some recommendations for fellow designers:

  1. Prioritize Transparency

    Users should always know when they’re interacting with AI or receiving automated content. Make it clear that a system-generated message is just that—system-generated. This transparency builds trust and allows users to decide how much they want to engage with automated interactions.

  2. Empower User Control

    Provide users with options to personalize and review AI-generated content before it goes live. This not only respects the user’s voice but also enhances their sense of ownership and agency in the interaction. (A sketch of what this review step might look like in code follows this list.)

  3. Humanize the Experience

    AI should augment the human experience, not replace it. Focus on designing systems that assist users while preserving the personal touch. For instance, AI can suggest content or provide templates, but the final message should always be crafted—or at least approved—by the user.

  4. Challenge the Default

    Default settings are powerful; they often go unchanged by users. As designers, we should be critical of what we choose as defaults, especially in AI interactions. The default should be the option that most respects user autonomy and promotes ethical use.

  5. Establish Ethical Audits and Feedback Loops

    Regularly audit your AI systems for ethical concerns. Engage with users to gather feedback on how these systems impact their experience. This ongoing dialogue can help catch issues early and ensure the technology remains aligned with human values.

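To make these principles concrete, here is a minimal TypeScript sketch of how a milestone-announcement flow might encode a few of them. Everything in it is hypothetical: the type and function names (MilestonePost, approveAndPublish, autoNotify) are illustrative, not drawn from LinkedIn or any real platform API. The idea is simply that automated notices stay opt-in and clearly labeled, while anything published in the user’s own voice requires an explicit approval step.

```typescript
// Hypothetical sketch: these types and functions are illustrative only,
// not part of any real platform API.

type PostStatus = "draft" | "published";

interface MilestonePost {
  text: string;
  status: PostStatus;
  systemGenerated: boolean; // surfaced to viewers ("Prioritize Transparency")
}

interface UserSettings {
  // "Challenge the Default": automated notices are opt-in, off by default.
  allowAutomatedNotices: boolean;
}

const DEFAULT_SETTINGS: UserSettings = { allowAutomatedNotices: false };

// An AI or template system may propose text, but only ever as a draft.
function draftMilestonePost(suggestion: string): MilestonePost {
  return { text: suggestion, status: "draft", systemGenerated: true };
}

// "Empower User Control": publishing in the user's own voice requires an
// explicit approval step, with the option to edit the suggested text first.
function approveAndPublish(
  post: MilestonePost,
  editedText?: string
): MilestonePost {
  return {
    text: editedText ?? post.text,
    status: "published",
    systemGenerated: false, // the user has taken ownership of the message
  };
}

// The automated path never speaks as the user: it emits a clearly labeled
// third-person notice, and only if the user opted in.
function autoNotify(
  post: MilestonePost,
  author: string,
  settings: UserSettings
): string | null {
  if (!settings.allowAutomatedNotices) return null;
  return `${author}'s milestone: "${post.text}" (automated notice)`;
}

// Usage: the draft stays a draft until the user acts on it.
const draft = draftMilestonePost("Celebrating 2 years at Microsoft");
console.log(autoNotify(draft, "Flora", DEFAULT_SETTINGS)); // null: default is off
const post = approveAndPublish(
  draft,
  "Two years at Microsoft, and grateful for every one of them."
);
console.log(post.text);
```

The design choice doing the work here is the separation of the two paths: the system can inform, but only the user can speak.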
I understand that as designers, particularly within large tech companies, it’s not always easy to uphold these principles. Business objectives, organizational pressures, and the demand for efficiency can sometimes push us in directions that feel counter to our values. It can be frustrating to keep advocating for the user when the business seems to prioritize other goals.

But don’t lose hope. Our role is to keep the user—the person—at the center of the experience. It’s our responsibility to ensure that, no matter the pressures, we continue to create technology that serves humanity, not the other way around.

Case in point: I wrote this article with the help of AI. The thinking, structure, and voice are my own, because I shaped them that way. It would have been wrong for AI to create this piece without giving me the chance to review it, edit it, and make it truly mine.

So here’s a question: As AI continues to evolve, how do we ensure that it remains a tool that enhances our humanity rather than erases it? How do we, as designers, strike the right balance between innovation and ethical integrity?

Let’s reflect on the systems we build and the voices they assume. Technology should empower us, not replace us. And it’s up to us, the designers, to make sure it does just that.
