Hiring well is one of the highest-leverage things a design manager can do. Done poorly, a single mis-hire can cost months of team momentum. Done well, a consistent process builds a team that compounds over time.
This case study draws on a decade of hiring across multiple companies and roles — co-ops, product designers, senior designers, UX researchers, design ops managers, and design directors. Twenty-plus hires in total, across organizations of different sizes, structures, and hiring cultures. The most structured version of this process was built during my time as Senior Design Manager at Copperleaf, where I inherited an existing interview structure and systematically tightened it — sharpening the design challenge, building role-specific question banks, and creating templates that let any co-interviewer walk in aligned and walk out with a clear signal.
The resources, frameworks, and templates here reflect the accumulated iteration of all of that experience, with Copperleaf as the primary reference point. They're designed to be adapted — not every team runs five interview stages, and not every role needs a design challenge.
Role
Senior Design Manager
Primary context
Copperleaf Technologies
Scope
20+ hires — co-op through director
Faster
Hiring process with less back-and-forth — predefined interview stages, durations, and attendees removed repeated coordination overhead
Clearer
Post-interview calibration — explicit pass/fail signals replaced subjective gut-feel with shared, discussable criteria
Aligned
Co-interviewers and note-takers before the first interview — no onboarding conversation required on the day
Scalable
Across role types — co-op, intermediate, senior, and manager hiring each had role-appropriate criteria and questions
The team had an interview process when I arrived. Meetings were scheduled, a design challenge existed, and there was a general sense of what we were looking for. What was missing was precision — clear success criteria for the challenge, a consistent question structure across roles, and a way to bring co-interviewers into alignment before the process began.
The design challenge in particular was more complex than it needed to be. Complex challenges take longer to run, are harder to evaluate consistently, and can disadvantage strong candidates who get tripped up by the complexity rather than the thinking it was meant to test.
My goal wasn't to replace what existed — it was to tighten it. Make it repeatable. Make the signals legible to anyone participating, not just the hiring manager who knew what to listen for.
A structured guide defining the number of interviews, their duration, who should be present at each stage, and the purpose of each conversation. This eliminated repeated coordination and ensured every candidate experienced a consistent process regardless of which team members were available.
Questions tailored to the role being hired. A co-op conversation looks different from a senior designer conversation, which looks different from a design manager conversation. Conflating these leads to unfair evaluation — asking a co-op about systems leadership, or a manager only about craft. Each role had its own bank of questions mapped to what mattered at that level.
Co-ops did not complete a design challenge. The challenge was reserved for intermediate and above, where it was the most informative signal available.
A scoring spreadsheet with a dedicated tab for each role — co-op through director — populated with questions drawn from real hiring rounds, organized by interview stage. Each interviewer scores responses on a 1–7 scale, and all candidates sit side by side in the same view. The matrix replaced post-interview gut-feel with a structured debrief: scores first, discussion second, which meaningfully reduced anchoring on the first strong opinion in the room.
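To make the mechanics concrete, here is a minimal sketch of the roll-up the matrix performs. The real tool is a spreadsheet, not code, and the candidate names, question keys, and scores below are hypothetical:

# Illustrative sketch of the comparison-matrix roll-up (not the actual tool).
from statistics import mean

# One entry per candidate; interviewers record a 1-7 score per question.
scores = {
    "Candidate A": {"process": 6, "collaboration": 5, "challenge": 7},
    "Candidate B": {"process": 4, "collaboration": 6, "challenge": 3},
}

# Side-by-side view, strongest average first. In the debrief, these
# numbers are shared before open discussion begins to limit anchoring.
for name, answers in sorted(scores.items(), key=lambda kv: mean(kv[1].values()), reverse=True):
    print(f"{name}: {mean(answers.values()):.1f} average over {len(answers)} questions")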
A brief, a facilitation guide, and a clear set of signals for what strong and weak responses looked like. The rubric was written so that a product manager or developer co-interviewing could follow along and mark observations in real time — not just watch and share vague impressions afterward.
The structure below reflects how the process was run at Copperleaf. It's a reference point — in practice, it's often adjusted based on team size, hiring velocity, and how many cross-functional interviewers are available. A smaller team might collapse stages; a larger one might add a dedicated portfolio review. The purpose of each stage matters more than the exact sequence.
Searching
Job posting, recruiter outreach, and resume screening. No formal interview yet — this is sourcing and first-pass triage. Internal referrals are flagged here.
Stage 1
30–45 min
Design Leader Screening
Attendees: Recruiter or HR, Design Manager
Portfolio walkthrough and culture/experience fit. Not a detailed design critique — the goal is to confirm there's a signal worth pursuing. Strong portfolios with weak communication often surface here. Pass/hold decisions are made before scheduling the full panel.
Stage 2
60–90 min
Design Interview & Challenge
Attendees: Design Manager, 1 co-interviewer (product or development)
Role-specific questions run for approximately 30 minutes, followed by the live design challenge in Miro or Mural. The challenge is 15 minutes by design — when candidates struggle, a longer session benefits no one. Discussion and candidate questions close the session.
Co-ops skip the design challenge entirely. The challenge is reserved for intermediate and above, where it provides the most informative signal.
Stage 3
45–60 min
Design Team Interview
Attendees: 2–3 design team members
Culture fit, collaboration style, and ways of working. Co-interviewers use the shared question bank and candidate comparison matrix — no pre-interview briefing call required. The goal is to understand how the candidate thinks about working with peers, not to re-evaluate their craft.
Stage 4
30–45 min
Technical Team Interview
Attendees: Cross-functional partner — PM, developer, or researcher
How the candidate works with non-design partners. Focus is on communication, collaboration, and scope judgment — not craft evaluation. Cross-functional interviewers assess whether the candidate can participate effectively in a product team context.
Stage 5
Decision Debrief
Attendees: All interviewers
Scoring review using the candidate comparison matrix. Each interviewer shares pass/hold/fail signals from their stage before open discussion begins — this prevents anchoring on the first strong opinion. The hiring manager makes the final call.
The challenge runs live over a video call with a shared digital whiteboard. The candidate is given a brief and works through the problem in real time with the panel observing. We're not evaluating polish — we're evaluating how they think.
The brief is deliberately simple: "We'd like you to design a parking lot for us."
That's it. No additional context is offered upfront. What the candidate does next is the entire signal.
Two distinct scenarios are available — a highway rest stop and a carnival grounds — so the brief stays fresh across multiple hiring rounds. Both use the same evaluation rubric and produce equivalent signals.
Both map to real, constrained situations: a rest stop off a highway serving everything from motorcycles to semi trucks, and a converted farmer's field during a carnival. Each has safety considerations, wayfinding needs, surface requirements, and spatial constraints that only emerge through questions.
Strong candidates don't start drawing. They start asking. Who is this for? What's the context — is this a commercial lot, a rest stop, something else? What are the constraints? What does success look like? Are there accessibility requirements? Safety concerns?
These questions reveal a candidate who understands that design decisions downstream are only as good as the context established upstream. In enterprise product design, starting from the wrong assumptions — regardless of execution quality — produces the wrong solution.
Candidates who assume without asking will default to the most familiar mental model — usually a shopping mall parking lot. They'll start laying out rows of spaces, thinking about traffic flow for passenger cars, maybe sketching a central entrance. This has happened. They've designed the wrong thing with complete confidence, and no amount of craft or fluency with tools will rescue it.
The challenge also gives us a window into how candidates use remote tools in real time — another practical signal for a fully distributed team.
There's a direct connection between what makes a strong design challenge response and what makes a strong design requirements process. The questions a good designer asks before starting work — about personas, benefits, assumptions, limitations, and use cases — are exactly the questions that reveal the right solution in the challenge.
A candidate who asks "who will be using this space?" is thinking about persona. One who asks "what are the constraints on the site?" is thinking about limitations. One who asks "what does a successful outcome look like?" is thinking about benefits. These aren't abstract design principles — they're practical thinking skills that determine whether a designer spends three weeks building the right thing or the wrong one.
The challenge isn't a trick. It's a compressed version of the first conversation that should happen on every real project. Candidates who know how to start a project well will naturally pass it. Those who have developed a habit of assuming — of jumping to solutions before establishing context — will struggle regardless of their portfolio.
Related resource
The Design Requirements Template maps directly to this thinking. If you're preparing for a design challenge or running one, the same framework applies.
The templates and tools from this process are available as a public resource. Whether you're running the process or going through it, there's something here for you.
For hiring managers
Candidate comparison matrix
Seven role-specific tabs covering co-op through director, each with questions drawn from real hiring rounds, organized by interview stage. Score candidates 1–7 per question and compare all responses side by side — useful for structured debriefs and preventing groupthink.
↓ Download .xlsx
Design challenge briefs
Two ready-to-run whiteboard scenarios with site plans, facilitator prompts, context to disclose only if asked, and strong/weak response signals. Includes timing guidance by role level.
↓ Download .docx
Challenge facilitator guide
How to run, observe, and evaluate a live design challenge. Covers the six evaluation signals, the five-step approach strong candidates follow, and six specific pitfalls to watch for.
↓ Download .docx
For applicants
How to approach a design challenge
What to expect in a design challenge, what the panel is actually evaluating, a five-step approach that works, and the most common mistakes — with a do/don't breakdown for each.
↓ Download .docx
Whiteboard challenge presentation
A slide deck originally created for an Emily Carr University guest lecture. Covers the whiteboard challenge format, how to succeed, and what separates candidates who get the job from those who don't.
↓ Download .pdf

A good design challenge is precisely constrained, not complex. Complexity doesn't reveal thinking — it creates noise. The parking lot prompt is disarmingly simple. That simplicity is the point. It gives candidates room to demonstrate depth if they have it, without penalizing them for navigating unnecessary difficulty.
Alignment before the interview is as important as the interview itself. When co-interviewers don't know what they're looking for, post-interview calibration becomes a conversation about competing impressions rather than observed behaviors. The templates ensure everyone arrives knowing what a strong response looks like — which makes the debrief faster and the decision more defensible.
Role-appropriate evaluation matters more than role-consistent evaluation. A co-op and a senior designer should not be evaluated against the same criteria. Being fair doesn't mean asking everyone the same questions — it means asking questions that are appropriate to where someone is in their career and what the role actually requires.