Known as “the man behind Google Docs,” Sam Schillace has quite literally changed the way teams work together. Now as SVP of Engineering at Box, he’s applying the same systematic, collaborative thinking to build a culture of extremely high performance — all while doubling the size of the company's engineering organization.
By introducing a new system that keeps performance top of mind, he's fundamentally changed the way people work day-to-day for the better. Schillace brings two decades of engineering leadership to the table, spent observing and learning how and why companies promote some people and fire others. In that time, he's seen the surprising prevalence and impact of opaque favoritism and arbitrary evaluations — all of it pointing to a new type of system defined by consistent expectations, clear values, and a firm but fair approach to advancement.
At First Round Capital’s recent CTO Summit, Schillace explained why and how to deploy a performance rubric designed to lighten the burden for employees and managers, while turbo-boosting the recruitment, retention and mentorship that keeps companies healthy and constantly improving.
For small and even medium-size organizations, it’s easy to cruise along without thinking much about how to measure and manage performance. Once you grow past this phase, however, problems like attrition, low quality, politics and poor communication are a lot harder to fix. It's never too early to think about how you want to manage performance.
Once you get going, culture gets baked in, and it's super hard to change. It's really worth thinking about it now.
Not convinced? Schillace has seen the same failures repeated in this area over and over again — and they’re enough to strike fear into the heart of any manager.
“The first is the Prima Donna Death Spiral,” he says. “When you're early stage, it’s easy to get a prima donna, somebody who's a really strong performer but doesn't necessarily trust the people around them very much.” As a result, they won’t delegate effectively or learn to build rapport. Unless you step in and set clear expectations, that team will fail to scale. “You get this death spiral. You start adding engineers around the prima donna, and they aren't being treated with trust and aren't being delegated to. You wind up overloading the person in the middle, and then the whole thing falls apart.”
For later-stage companies, Schillace has another cautionary tale: Title Confusion. “It's easy early on to give somebody a title just to get them in the door,” he says. It doesn’t seem like that big of a deal to, say, make someone a principal to sweeten their offer. Later, though, as you get serious about hiring at a larger scale, you do have to think about the logic of your title structure. “It's really hard to take stuff away from people once they've been given it,” he says.
His advice, simply put, is to get ahead of this kind of trouble, and to implement five key principles of effective performance management:
Have clear standards for performance. This is the foundation for everything that follows, and what Schillace calls “the rubric” — a clear, concise statement of what you expect from employees and how you’ll measure it.
Have an opinion on the role of managers early. At first you won’t need them. Even when your company reaches mid-size, your first managers will most often emerge from the existing team. It's easy for these people to find a way to be valuable. But as you begin hiring managers from the outside, you’ll need to understand what you want them to do. It's very common for companies at this stage to struggle with what they want from managers — more people focused or more technical — and how to evaluate their performance. As a result, some of the most important people in your company will be very unhappy because they won't know how (or whether) they're valued. Write a rubric for managers too.
Have a process for evaluation and reward. Because they often grow around an inner circle of early staff, startups are prone to one particularly toxic structure: A core group of long-timers with a lot of political power surrounded by newer hires trying to suck up to them. “You don't get great performance out of people if everybody is trying to figure out how not to say the wrong thing to somebody who was there first,” Schillace says. Combat even the perception of this dynamic by defining a process for advancement.
Don’t be soft on low performance. You need to have a process for handling people who aren’t doing well, too. Some managers are better at it than others, and formalizing a process for spotting and letting go of low performers will smooth out that imbalance.
Be consistent without losing speed. Schillace has seen performance management systems slow companies down, but it doesn't have to be that way. “This stuff starts to sound a little bit bureaucratic, but it doesn't have to be. We do a fair amount of performance management at Box, but it doesn't really slow us down very much. We try to be as lightweight about it as possible.”
At the end of the day, it really boils down to two goals: transparency and fairness. The first means that you have a system for performance management and everyone knows how it works. “It's written down. It's clear. It's easy to find. You talk about it all the time,” Schillace says. Reviews, promotions, expectations — there shouldn't be mystery to any of it. The second key value, fairness, means just that. Everyone gets the same reward for the same performance.
If you achieve transparency and fairness, a lot of the issues around performance, culture and politics go away.
Very few companies have a rubric for whether someone is performing well or not and how to handle it. In Schillace's experience, it can give you a serious edge — but only if it's documented and known by everyone. He proposes a few key steps to start drafting your own rubric:
1. Establish the levels of your organization.
You should create a version of your rubric for each tier of employees. “Software engineer, senior software engineer, staff and principal are the levels we use on the engineering side at Box,” he says. Some companies like to have a lot of levels, others just a few. Whichever you choose, Schillace encourages leaders to recognize the experience of employees moving along this ladder.
“If the levels are too far apart, then people get frustrated because they don't feel like they're moving forward. If there are too many levels and they're too close together, then the levels stop feeling meaningful.” Looking at other companies for examples is a good place to start understanding this choice — most companies are happy to discuss it. Don't stray too far from industry standards, or your titles won't translate on employees' resumes and it can be harder to recruit for the same reason.
2. Start with your core values.
The specific content in your rubric will change for each tier, but your core values stay the same from entry-level through upper management. “We have core values that are always the same, so you can see the progression as you go up the levels in the rubric,” Schillace says. The target competencies become more sophisticated — perhaps the size of a project an employee is expected to manage grows larger, for example. But the structure is repeated, giving everyone in the company a clear snapshot of the progression you expect to see. Core values can be about code quality, design practices, scale, teamwork, communication. Schillace won't show people the whole Box rubric because this part is unique to each company and is something you should put your own stamp on.
3. Use the rubric rigorously.
Performance culture, by definition, is not about who the person is. “They're not an A player. They're not a staff engineer. They're not a favorite. It's about what they do, and what they do is explicitly measured against the rubric,” Schillace says. When you generate reviews or make advancement decisions against this rubric, you should be looking for data and examples, not a gut feeling.
You don't promote based on potential. You don't promote based on favoritism. You don't promote based on a feeling. You promote based on actual performance.
As useful as a rubric can be, Schillace will be the first to acknowledge that they’re hard to write. There’s no shortcut or downloadable template. It’s a fundamental statement about your company’s culture, so you’ll have to write your own. Still, he has some guidelines:
Don’t make a checklist. It might be tempting to make these rubrics hyper-detailed. After all, if transparency and fairness are the goals, shouldn’t you drill down as deep as possible? Not quite, Schillace cautions. “If you write the rubric really specifically — like 'You must have three projects like this, you must have four things like that' — people will check off the minimum number of boxes and not do anything else.”
Don’t be too vague. At the same time, you don’t want to open the door to disagreement about whether someone has performed against the rubric or not. A quick test is to ask whether a given bullet point will generate the kind of specific, work-based examples you’re looking for as proof. If you’re doubtful, keep working on it.
Iterate. Invariably, all of this careful consideration will lead to iteration. Let it, but only to a point. Schillace recommends reviewing your rubric every six to twelve months; that’s a good interval for smoothing out rough spots and addressing anything that confused the team. “If you iterate too much, then nobody trusts the rubric and you're back to square one. You don't have a culture that's stable, and you've lost some of the transparency part of it,” he says.
Be aware of side effects. “You have to think really carefully as you build this stuff about what the side effects are going to be, and the ways to game the system,” Schillace says. He cites the example of Microsoft, which famously employed a stack-ranking review system. “Every so often, managers had to fire the bottom couple people in their organization. That had the side effect of ensuring that no smart people wanted to work together on the same team. Because if you had a team of high performers, two of them would always get fired every review period.” Consider the consequences of your choices. Your strategy should work for you, not against you.
What do these rubrics look like when you put them all together? Reiterating that one big caveat — that there is no standard rubric, just the company-specific one you’ll have to craft yourself — Schillace cites a couple examples from Box:
SAMPLE RUBRIC: Software Engineer
Technical Skills: Good programmer – able to write modular, maintainable code with guidance. Strong technical skills.
Design: Contributes to design.
Code Quality: Leaves code in better shape than before.
Impact: Capable of working on small projects independently, and medium to large projects with supervision.
Scope: Primarily works within scrum team.
Drive for Improvement: Strong desire to learn and grow. Should be rapidly improving.
Culture: A good cultural fit at Box.
Box’s rubric includes these seven core values for every level of employee. “We expect everybody to manifest these skills all the time,” Schillace says. Some bullets are vaguer than others, but they're all suited to specific examples and directed conversations. “If you're on a promotion committee, or if somebody is not performing well, you have a place to start. Maybe you have engineers that get a lot of stuff done, but they make messes for people. They leave the code in crappy shape.” Now you have a specific item on your rubric to point to, which could become the foundation of a performance improvement plan.
Moving up to the next level, in this case senior software engineer, the key values stay the same. It’s just the expectations that get elevated. The first few items, for example, look like this:
SAMPLE RUBRIC: Senior Software Engineer
Great programmer – able to write modular, maintainable code.
Able to communicate clearly on technical topics.
Begins to show architectural perspective.
Leads the design for medium to large projects with feedback from other engineers.
Leaves code in substantially better shape than before.
Rarely introduces production bugs.
Provides thorough and timely code feedback for peers.
Demonstrates effective use of testing.
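The structure Schillace describes, the same core values repeated at every level with escalating expectations, can be sketched as plain data. This is a minimal illustration in Python: the level names and the seven value categories come from the samples above, but the exact wording of each expectation is illustrative, since Schillace doesn't share Box's full rubric.

```python
# Illustrative sketch of a leveled rubric: the same core values appear at
# every level, and only the expectations escalate. The wording below is
# paraphrased from the article's samples, not Box's actual rubric.
CORE_VALUES = [
    "Technical Skills", "Design", "Code Quality", "Impact",
    "Scope", "Drive for Improvement", "Culture",
]

RUBRIC = {
    "Software Engineer": {
        "Technical Skills": "Writes modular, maintainable code with guidance.",
        "Design": "Contributes to design.",
        "Code Quality": "Leaves code in better shape than before.",
        "Impact": "Works on small projects independently.",
        "Scope": "Primarily works within scrum team.",
        "Drive for Improvement": "Strong desire to learn; rapidly improving.",
        "Culture": "A good cultural fit.",
    },
    "Senior Software Engineer": {
        "Technical Skills": "Writes modular, maintainable code unaided.",
        "Design": "Leads design for medium to large projects.",
        "Code Quality": "Leaves code in substantially better shape.",
        "Impact": "Rarely introduces production bugs.",
        "Scope": "Provides thorough, timely code feedback for peers.",
        "Drive for Improvement": "Begins to show architectural perspective.",
        "Culture": "Communicates clearly on technical topics.",
    },
}

def validate(rubric: dict) -> None:
    """Every level must address every core value: no gaps, no extras."""
    for level, expectations in rubric.items():
        assert set(expectations) == set(CORE_VALUES), level

validate(RUBRIC)
```

Keeping the value keys identical across levels is what lets an employee (or a promotion committee) read straight down one column and see exactly how expectations grow.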
Don’t worry too much if the distinctions between levels aren’t always crystal clear. Schillace has been at this for years, and he still finds that they can get fuzzy. The most important thing is to get your values down on paper, values that clearly and consistently demonstrate how employees should grow in their roles.
Take these examples as inspiration, and then go and create your own. It’s well worth the effort. “These values will become the culture of your company. People will look at this and try to decide how they should behave on a daily basis,” Schillace says. Early on, it might seem hard to find the time to write rubrics for every position. After all, you’re a small group — you all know what’s important and need to sprint. But as you grow to a 100- or 200-person team, anonymity increases and firm standards make all the difference.
If you can't write down your expectations of an engineering team, then you don't understand performance very well — and you're not actually going to get a good performance culture over time.
Once you’ve invested time in building your rubric, use it. As the foundation of your performance culture, it should be at the heart of every review or promotion decision your team makes. “Obviously you don't want to treat the rubric as a straitjacket. If there are things you want to talk about with somebody that are outside of it, that's fine,” Schillace notes. Wherever possible, though, leverage the rubric’s clarity at each management milestone.
As you’re setting up your performance review processes, there are some practical decisions you’ll need to make. How and how often will you conduct reviews, for example?
“I like continuous reviews better than quarterly ones,” says Schillace, who makes sure that the engineers at Box get some feedback on how they’re doing at least once a week. You’ll also need to decide whether you want to work with ratings or not (i.e. numbers or values indicating things like “exceeds expectations”). Schillace has a lot of experience on both sides of that coin, and is currently moving away from ratings. “People seem to get really focused on the rating itself and less focused on the actual aspects of the performance that are good or need development,” he says.
To make ratings more effective, make sure that they're based on average performance at specific levels. You can’t compare people across levels. Most importantly, ratings should be based on both the results people produce and how they get things done — i.e. how they influence their team and the broader company culture. Schillace works with the following rating levels at Box:
Far exceeds expectations: This should be very rare, indicating truly extraordinary performance. Maybe you’re doing what someone two levels above you would be doing. Or you're making an unusually high impact.
Exceeds expectations: This means you’re consistently knocking it out of the park. You’re starting to do the job above you.
Achieves expectations: This is where 50 to 60% of people are expected to land. Achieving expectations means you’re hitting an already very high bar.
Does not achieve expectations: Getting this rating means you're struggling and need help, a plan to improve, or both. Your manager and People Ops are responsible for helping you turn this around.
With those decisions made, your rubric gives you concrete things to report on and discuss during your reviews. As a manager, be sure to reference it in both formal and informal check-ins, citing specific values that the employee needs to work on. With consistent application, the rubric should eliminate any unpleasant surprises in these conversations because people will be made aware as soon as a problem becomes apparent.
At Box, the team holds performance reviews every quarter so that employees always know where they stand. Q1 and Q3 are “light” quarterly reviews just to make sure people are on track and lighten the load for everyone. Then the team goes through a full performance review cycle in Q2 and Q4, including a deeper dive into performance. Q4 is the main promotion cycle, allowing people to start the new year off fresh.
When it comes to promotions, Schillace has one key piece of advice: Resist the urge to do this ad hoc. Really. “It's really easy to think, 'This person has been here a while. They’re valuable to the team.' So they get promoted for the wrong reasons, which then causes tensions in other places in the organization.”
Instead, base advancement decisions on a standardized process — a process that all team members are aware of and that you can point to when promotion requests come up off-cycle. Here, too, Schillace encourages evidence-based decisions.
Don't promote on potential, because then you get people that are over-promoted. Advance people when they’re actually demonstrating skill at the new level or going out of their way to take on those responsibilities.
Schillace is partial to peer promotion committees. Box’s are composed of a mix of engineers and managers, all of whom sit two levels above the employees applying for promotion, to avoid any bias. These committees meet one day every six months. “People write promotion cases for themselves based on the rubric. So if you want to be a senior software engineer, that's awesome. Go look at the rubric and write down examples of ways in which you behaved as a senior software engineer for all seven skills in the last six months. Then the committee will review.”
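The committee's core check, that a case gives work-based examples for all seven skills, can be sketched as a small helper. The function and field names here are hypothetical illustrations, not Box's actual process or tooling.

```python
# Hypothetical sketch of the committee's coverage check: a promotion case
# maps each core value to concrete examples from the last six months, and
# the committee flags any value left without evidence.
CORE_VALUES = {
    "Technical Skills", "Design", "Code Quality", "Impact",
    "Scope", "Drive for Improvement", "Culture",
}

def missing_evidence(case: dict) -> set:
    """Return the core values the case gives no examples for."""
    covered = {value for value, examples in case.items() if examples}
    return CORE_VALUES - covered

# An incomplete case (invented examples): only two of seven values covered,
# so missing_evidence() would flag the other five.
case = {
    "Technical Skills": ["Refactored the billing module without regressions"],
    "Design": ["Led the design review for the sync service"],
}
```

The point of the structure is that a “no” from the committee comes back as a concrete list of uncovered values, which is exactly the actionable feedback Schillace asks committees to give.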
It’s a highly transparent, clearly outlined process that discourages appeals. “If you let people appeal a promotion decision, and they get ‘no’ the second time, they're going to be even unhappier,” Schillace says. “Typically what I've done is allow appeals only if the committee made a mistake on a matter of fact.” If a promotion requires managing large projects, for example, and the employee simply forgot to include that proof in their case, that would be grounds for a second look.
Even when a promotion is declined, there's value to this process when done right. “We ask our committees to give actual feedback in the case of a no,” Schillace says. “If you're saying, 'No, you're not ready yet,' then you have to say, 'These are the things you need to work on that you haven't done yet.’”
All promotions at Box are announced at the Engineering All-Hands meeting, where they're used as an opportunity to share the kinds of behaviors and accomplishments that are rewarded with career advancement. Beyond that, all employees who are considered for promotion are given actionable feedback so they know what their areas of development are going forward.
Performance Improvement Plans
Same thing goes for performance improvement plans. Often regarded as a formality, the dreariest of paperwork, they can be a valuable way to reinforce your organization’s values and, yes, improve performance. Here, too, grounding your feedback in the clarity of the rubric gives you a leg up. “It's not a nasty conversation like, ‘Oh, you don't fit here. You're a bad person.’ It's more like, ‘Look, you're not performing in the following ways. We can build a plan. We can work together for 45 days and see if we can fix this,’" Schillace says.
Performance plans should be communicated immediately, so that nothing comes as a surprise, and privately, to build trust around the process. Everyone in this situation should feel like they have a reasonable opportunity to improve.
A good performance improvement plan has the following characteristics:
It’s written down, with specific and attainable action items.
The person’s manager has committed to providing weekly feedback on progress.
The goal should always be for the person to stay on staff.
The plan should clarify the expectations of the role they are in.
It should describe all of the ways they can improve their performance.
It should reaffirm key measurements of success during a set period of time.
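The checklist above could be captured as a simple record a manager fills out before the plan starts. This is a sketch under assumptions: only the 45-day window comes from Schillace's quote, and the field names are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical record for the PIP checklist in the article. Field names
# are invented; the 45-day default comes from Schillace's "work together
# for 45 days" quote.
@dataclass
class ImprovementPlan:
    employee: str
    action_items: list            # written down, specific and attainable
    role_expectations: list       # clarifies the expectations of the role
    success_measures: list        # key measurements during the set period
    start: date = field(default_factory=date.today)
    duration_days: int = 45       # the set period of time

    def end_date(self) -> date:
        """When the plan's review period closes."""
        return self.start + timedelta(days=self.duration_days)

    def is_complete(self) -> bool:
        """A plan isn't ready to deliver until every section is filled in."""
        return all([self.action_items, self.role_expectations,
                    self.success_measures])
```

Writing the plan down in a fixed shape like this enforces the weekly-feedback commitment too: the manager has a concrete list of action items to check progress against every week.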
Half the time things won’t work out. In those cases, don't let a bad situation drag on. “It's very easy to feel like you've got somebody who's kind of okay and tolerate it. Usually what you'll find is that the team is struggling to support that person, and they're actually happier once you let them go,” Schillace says.
It's really hard to fire somebody. But it's much easier if you've got a transparent, fair system around it.
If you employ your rubric now and make it visible to everyone in the company, it will make even these tough conversations humane. There’s no need for vague platitudes or hemming and hawing when you can plainly show someone the areas where they're falling short.
The value of a good performance rubric extends beyond existing employees. It’s also an effective tool for hiring the right engineers (and giving them the right titles).
“Look at the rubric when you’re hiring. Look at what the candidate has done and be very thoughtful about how they compare with people at the level you're bringing them in at,” Schillace says. No doubt, bumping up a title is a tempting way to get a great hire in the door. The resulting title inflation or employee frustration, though, simply isn’t worth it.
“I try to stay low, if possible, with the title. It's easier to push people up than push them down,” Schillace says.
Imagine two scenarios: In one, you’ve over-titled a new hire who isn’t able to execute the skills in the corresponding rubric. Now you’re faced with telling them in review after review that they’re not performing at their level. In the other scenario, you’ve under-titled a new hire, and you get to deliver the news that they're getting a quick bump.
Ultimately, performance management boils down to these few conversations — conversations that can be focused or vague, productive or tense. Clear rubrics and transparent processes may not eliminate politics and anxiety entirely, but they will save your organization from the time-drain of uncertainty.
When your staff isn’t angling for promotions or wondering what you want from them — and when your managers aren’t running interference on squabbles or imbalanced teams — everyone can get down to the business of real, game-changing collaboration at startup speed.