Kylee’s Take: “People trust certain products.
But if you introduce aspects that feel a little squidgy, that trust erodes.
When we launched Firefly, we always highlighted what it was trained on. That helped creators feel confident using it.”
When it comes to AI, people aren’t just evaluating features.
They’re evaluating risk.
If your messaging feels vague, performative, or too slick, trust drops.
Especially in industries like design, editing, or journalism, where ownership and originality matter.
Kylee and the Adobe team made transparency part of the product story.
They didn’t just say “safe.” They showed the receipts.
Takeaway: Trust is earned with details.
Don’t just say your AI is ethical or safe to use.
Say what it’s trained on.
Say what rights users have.
Say how you’re protecting their work.
If your users can’t explain it to their legal team, your messaging isn’t ready.