As AI picked up momentum, we needed to figure out where it actually made sense in HR and payroll workflows. I led early UX strategy for assistive AI, focusing on trust, clarity, and value, and partnered with product, research, and design to define realistic use cases and explore how the assistant could fit into existing patterns.
At the time, we didn’t know how our users felt about AI or what they wanted from it. I worked with our lead researcher to test early concepts, gather sentiment, and unpack trust blockers before anything went into development.
I started with low-fi sketches to explore how AI could show up naturally in the product. I intentionally pushed the direction further than we thought we’d ultimately land to test boundaries and provoke honest user reactions. Two directions stood out:
Once I had a general direction, I moved into mid-fi concepts and worked more closely with product and research. Together we refined the ideas into concepts designed to provoke the right kinds of reactions and reveal what resonated with users.
Even at this early stage, exploring different layouts, features, and interaction patterns was key. These quick iterations gave the team something concrete to react to and helped us narrow our direction by comparing real options.
This early work set the tone for AI across the suite. It gave teams shared language, reusable components, and realistic expectations. The approach influenced design decisions across Home, Search, and eventually our assistant UI.
I'm a dad, husband, and musician based just outside the Twin Cities. I've spent 15+ years designing thoughtful, impactful digital experiences.