Screenshot of Screendoor's activity feed.

Shortcuts in Screendoor

Designing transparent, reversible workflow automation.

Role
Research, product design, front-end development
Timeline
2 months
Team size
4–5

From 2014 to 2017, I was the sole designer at The Department of Better Technology, a small remote startup serving governments and nonprofits. I mainly worked on Screendoor, a shared inbox for form submissions.

Shortcuts are one of the Screendoor features I’m proudest of. They’re a workflow automation tool, saving users time on repetitive tasks.

Shortcuts doesn’t represent the best design work of my career. But it’s the best example of how I work: highly collaborative, outcomes-focused, and equally passionate about the organization’s ability to produce great work and the work itself.

This case study walks through my design process for this project and the infrastructure I worked on to help future projects succeed.

1.

Background

Screendoor's response dashboard.

Screendoor lets you build online forms, but it’s most valuable when processing the submissions you receive. You can sort submissions with statuses and labels, rate them with custom rubrics, and follow up on them by emailing respondents in bulk.

We built Screendoor for governments and non-profits, but we also had a healthy customer base of newsrooms who used us for audience engagement.

As DOBT’s first and only designer, I was responsible for everything from user research, prototyping, and visual design to project management, HTML / CSS development, and design QA. Our CTO handled Rails and Backbone development, while our CEO and customer success lead helped me sift through insights from active users and prospects.

We approached every project with three groups in mind.

Users

Users. Depending on the market, our users might be civil servants, journalists, or non-profit employees.

We split users into two groups: admins, who set up Screendoor projects, managed them, and invited reviewers; and reviewers, who rated and processed responses.

Buyers

Buyers. As with most enterprise software, our buyers were not our users. Most of the time, government and enterprise purchasing authorities neither used the product nor spent time with those who would.

We had to consider what motivated our buyers’ purchasing decisions on top of user needs.

Respondents

Respondents. Governments exist to serve their citizens, non-profits exist to serve their constituencies, and newsrooms exist to serve their readers. When our users created a Screendoor form, they’d ask these groups to fill it out.

If respondents had a bad Screendoor experience, we’d violate DOBT’s mission of social impact. Respondents might also complain to our customers, damaging our customer retention and brand equity.

2.

Impetus

Our customers’ workflows were composed of predictable, repetitive tasks.

For example, a newsroom might process reader submissions like so:

  • When we receive a submission… Assign it to the intern
  • If they label it Follow up… Assign it to the editor
  • If not… Move it to the trash

Instead of always doing the same two tasks manually, they wanted a way to automatically trigger the second task after performing the first.

Users started asking for automation long before I joined. After a few months on the job, it was among our most requested features. But I still wasn’t sure it was the right thing to focus on.

First, I wasn’t sure its impact was worth the complexity it would add to the product. From our customer support sessions, we knew our customers received 200 submissions per form at most, and sometimes as few as 10. By generous estimates, automation would save users 1–2 minutes per session. Was that really worth it?

I also worried these requests might indicate deeper usability issues we couldn’t see. We had a paucity of product instrumentation, and customers were complaining about how long it took them to perform common tasks. If users couldn’t master Screendoor’s basic features, automation wouldn’t solve the core problem.

Over the next few months, we kept talking to customers and I started to understand their concerns better. Automation wasn’t just about saving time, but reducing cognitive overhead. It didn’t matter whether assigning a qualified lead to the editor only took 2 seconds. They wanted to stop having to remind themselves to do it.

Meanwhile, our co-founders decided to take Screendoor upmarket. Instead of trying to get our foot in the door with municipal agencies and small pilot projects, we would pursue agency-wide licenses for big state and federal departments. These agencies had extensive process flows already in place, and their buyers saw Screendoor’s inability to support them as a sales blocker.

We committed to the project assuming a single solution could satisfy current and aspirational customers alike.

Our design process

One of the first things I did at DOBT was to socialize a framework based on Paul Adams’ four layers of design. We used it as our official design process, a shared language for discussing everything we worked on.

  • Outcome: How will this project improve our users’ lives?
  • Structure: What’s our high-level approach to achieving the outcome?
  • Interaction: How will people use it?
  • Visual: What will it look like?

This was partially inspired by stakeholder coordination issues I’d observed at other companies. For example, I’d been in design reviews where teams would reject work because an executive didn’t like the color of a button. Other times, the team would learn too late that our original goals were based on outdated information. It was frustrating for everyone involved and introduced a lot of churn.

The four layers of design describe benchmarks at which it makes sense for every stakeholder to gain consensus before proceeding: the project’s goals, the high-level components of the solution, interaction details, and aesthetics. They’re also sequential: when the team reaches consensus on a layer, they can safely move to the next one. If someone discovers a problem with the work in one layer, they may need to redo all of the work in the layers below it.

This framework helped the team appreciate the value of design while keeping all of us on the same page. If someone didn’t like a visual detail, we could debate it without questioning the overall merit of the project. By the same token, everyone on our team was enthusiastic about reviewing design goals, because they now understood the risk of doing so too late.

When it came time to work on shortcuts, I followed these four layers as always, starting with outcomes.

3.

Outcomes

I worked with our CEO and customer success lead to synthesize feedback from users and sales prospects.

Based on those insights, I drafted hypothetical outcomes for each audience.

Buyers

Buyers at large agencies should feel confident that Screendoor can automate their current processes.

Users

Users should be able to automate repetitive tasks without negative side effects.

Respondents

Respondents should see indirect benefits from automation, like faster processing times, or none at all. Automation should never inadvertently harm respondents.

The potential negative side effects for users and respondents were the biggest unknown on this project. More than anything we’d previously worked on, automation had the potential to increase Screendoor’s complexity by an order of magnitude.

How would we avoid passing that complexity on to users?

With this in mind, I tried to answer one big research question:

What types of negative externalities might automation introduce to Screendoor?

To answer this, we recruited sales prospects we’d lost to competitors that already offered automation. I asked them about their experience with those features: what they appreciated about them, and which pitfalls they’d run into.

From these interviews, a pattern emerged. Here’s how I’d paraphrase it:

I automatically deleted hundreds of submissions by mistake, and it took me two days to understand why.

Let's unpack the root problems implicit in that statement.

User error carries harsh penalties.

Automation can give a single click outsized effects.

If a reviewer makes a mistake processing a response, or an admin sets up automation incorrectly, they could create major problems for their respondents and team.

The system behind automation is too opaque.

Interviewees had no idea they'd triggered automation. In the example above, the team only discovered their error after respondents contacted them.

When people aren’t aware of the consequences of their actions, they can’t learn from or fix their mistakes.

Problems are hard to diagnose and fix.

Once interviewees discovered their mistake, there was no easy way for them to find the root cause. They had to fix the problem through trial and error.

I came to believe a minimum desirable product for users (and, indirectly, for respondents) should address these problems. But since this was an oft-requested feature, some colleagues were already attached to design ideas they had previously sketched out. I had to convince them these concepts might introduce more problems than they solved.

Because everyone was comfortable with the four layers, this turned out to be a low-key discussion. Everyone understood that new goals could invalidate previous solutions, so it was simple to make the case for a different approach.

Once we had a shared understanding that this feature wasn’t a quick fix, we decided these new outcomes were important enough to budget an extra month for.

4.

Structure

Once we agreed on our outcomes, it was time to start brainstorming solutions. Thankfully, we weren’t starting from scratch.

jrubenoff commented

Idea: what if we showed the thresholds in the response detail page? So, if 7 people had to rate a response for an action to be taken, and only 5 people had rated the response, the response page would say something like “2 more people need to rate for X to happen.”

DOBTcolleague commented

jrubenoff commented

Hm, let me broaden the discussion a bit so that we're not focusing on UI yet (my mistake). What I was trying to say:

If people use this feature, Screendoor will know the blueprint of how a customer's selection process works. How might we use that blueprint to empower the user beyond just automation?

DOBTcolleague commented

jrubenoff commented

That first idea makes me think of analytics that help your organization be more efficient.

Like, “It takes an average of 3 hours for Max, Caitlin and Bruno to review a response and trigger the next action in the workflow. But it takes Gary an average of three days. You should probably talk to Gary and see what's up.”

DOBTcolleague commented

Months before we kicked off this project, I tried facilitating an informal brainstorming session inside of its GitHub issue. (We used GitHub for everything, including project management.)

DOBT was a fully distributed team, so we rarely saw colleagues in person. I was always experimenting with ways to foster a strong design culture within that environment. We’d attempted a few brainstorming sessions over video chat in the past, but they were awkward and stilted. I hypothesized an asynchronous medium might encourage more introverted colleagues to participate.

This didn’t work as well as I’d hoped. I realized text-based discussions aren’t great for brainstorming visual ideas. I also could have introduced the activity better, and didn’t properly communicate the goal of the session to others. But the exercise helped us start thinking about the project in a low-stress environment: we generated a few ideas we’d return to later.

To start thinking about solutions, I began to research existing design patterns for creating rules and logic. I looked into tools ranging from MacOS Automator to programming languages for kids.

Rather than seeking UI inspiration, I wanted a more expansive understanding of what an automation tool could be.

Screenshot of the Scratch programming language.
Screenshot of the Gulp Fiction front-end automation GUI.
Screenshot of MacOS Automator.
Screenshot of Hopscotch for iPad.

From this exercise and the GitHub issue brainstorming session, I compiled a list of adjectives for 2×2 matrices.

Example 2×2 matrix.
I’ve lost my original sketches, so these drawings are recreated from my notes.

I like to brainstorm with 2×2’s because they force you to come up with many unique ideas. By contrast, with an exercise like Crazy 8s, it’s easy to find yourself drawing slight variations of the same idea in your final sketches.

After presenting my most fleshed-out ideas to the team, I was told we needed to demo the feature to a promising sales prospect, and that I should only consider concepts that we could implement in a few weeks or less. This narrowed down the range of possibilities quickly.

Some concepts, like a graphical editor UI that would mimic flowchart software, were too complex to design in that timeframe. Others, like predictive algorithms that would suggest automation for you, were rejected as too difficult to implement.

Ultimately, we decided to restrict all of our design work to our standard UI patterns to save time. This wasn’t a huge setback: I had launched our design system a few weeks earlier, so we had a solid foundation to build upon.

Before designing the rule editor, I wanted to finalize a list of our supported triggers and actions to understand our constraints.

To do so, I sifted through customer feedback, read support requests, and analyzed process flows from large government agencies.

When someone…

  • Submits a response
  • Edits the response
  • Changes a status
  • Adds or removes labels
  • Gives a specific answer to a form field
  • Rates the response a certain score

Then…

  • Reassign the response
  • Move it to the trash
  • Change its status
  • Add or remove its labels

This also gave our CTO enough information to start working on the backend: helping us prove the project’s feasibility, reduce unknowns, and gain confidence we’d ship on time.
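To make that constraint concrete, here’s a minimal sketch of how a single rule might be modeled, pairing one trigger with one action. The names and structure are illustrative assumptions, not Screendoor’s actual schema:

```ruby
# A minimal sketch of a shortcut rule — names are illustrative, not
# Screendoor's actual schema. Each rule pairs one trigger with one action.
TRIGGERS = %i[
  response_submitted response_edited status_changed
  labels_changed field_answered response_rated
].freeze

ACTIONS = %i[reassign_response trash_response change_status change_labels].freeze

Rule = Struct.new(:trigger, :trigger_value, :action, :action_value, keyword_init: true) do
  def valid?
    TRIGGERS.include?(trigger) && ACTIONS.include?(action)
  end
end

# "If they label it Follow up… assign it to the editor"
rule = Rule.new(trigger: :labels_changed, trigger_value: "Follow up",
                action: :reassign_response, action_value: "the editor")
puts "Valid rule: #{rule.to_h}" if rule.valid?
```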

5.

Interaction & Visual

I enjoyed a highly collaborative working style with our CTO. To kick things off, I handed off the fewest design artifacts he needed to start working in code. From there, we’d toss the PR back and forth, iterating and refining until we both felt comfortable signing off.

This organic working process helped us make major course corrections early in development without much stress. Here’s an example.

The editor

My first mockup of the rule editor let users edit every rule simultaneously. Rules were also sorted by the date the user added them. For simplicity’s sake, there wasn’t a Save button: instead, we’d save the user’s changes in real time.

First version of the shortcut editor.
I created this mockup before shipping our design system, so it uses our legacy styles.

I handed off this mockup to our CTO, and a few days later, he sent back a rough PR. Immediately, I noticed a few usability issues we hadn’t anticipated.

First, it wasn’t intuitive to order rules by creation date: you couldn’t easily scan the page to find the rule you wanted to edit. On a similar note, the borders of our dropdowns overwhelmed the separators between each rule, making the page harder to visually parse.

But there was another, more concerning problem: our CTO’s preferred technical architecture required us to place a “Save” button below every rule. For example, if there were ten rules on the page, the editor would display ten Save buttons, which would get confusing quickly.

I proposed a few alternative interactions to our CTO, but it became clear that any other UI for saving would add weeks of development time. I had to quickly figure out a way to make the existing method user-friendly.

After brainstorming with some quick 2×2 matrices, I decided to add a new default read-only state to the editor. We’d display a brief prose summary of the rule’s logic, and the user could press a button to toggle the edit state.

A new read-only state.
This mockup uses a new icon set I was planning to introduce alongside our design system. We ended up scrapping it before we shipped.

The edit state would contain the Save button, hiding it by default and making the page easier to scan.

The default state also reduced the height of each rule, making it practical to turn the UI into a sortable list. I hypothesized users would find it intuitive to arrange rules in the chronological order of their workflow, especially if they were large agencies trying to translate their own process documentation.

I gave our CTO the above Sketch mockup containing every variant of UI copy we might display, alongside a rough Principle prototype of the transition between states, and a detailed summary of my changes.

Over the following weeks, we revised the UI copy for clarity and brevity.

System feedback

As our research showed, we also needed to give users feedback when they’d triggered automation. If they’d done so by mistake, we needed to show them how to fix it.

At first, I explored showing contextual notifications under the UI component that triggered a rule. The idea was that this treatment would make the user more likely to notice the message and understand the shortcut’s origin. But after realizing how many bespoke notifications we’d need to build for each component, I scrapped this quickly.

Feedback inline with label changes.
Feedback inline with star rating changes.

Instead, we used our standard notification component, which always appeared in the bottom right corner of the screen. I wasn’t sure that new users would easily notice them, but reasoned that we could always test this and change the design if necessary.

To illustrate the flow, I altered live UI components in Chrome’s developer tools, captured them with screenshots, and composited them into a mock screencast with Final Cut Pro.

Each notification had one or two contextual actions, depending on the user’s permissions. “Why?” linked to the shortcut editor, highlighting the rule the user had just triggered; from there, users with the appropriate permissions could edit or delete the rule.

If a user immediately realized they’d triggered a rule by mistake, and they were permitted to take the same action manually, “Undo” let them reverse the trigger’s effects.

In response to critique feedback, I increased the notification’s contrast and renamed the “Why?” link to read ”View Shortcut.”
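A minimal sketch of that permission logic, with hypothetical names: “View Shortcut” is always offered, while “Undo” appears only when the viewer could have taken the same action manually.

```ruby
# Hypothetical sketch of which actions a shortcut notification offers.
def notification_actions(can_act_manually:)
  actions = ["View Shortcut"]      # always shown; links to the rule in the editor
  actions << "Undo" if can_act_manually
  actions
end

puts notification_actions(can_act_manually: true).inspect   # ["View Shortcut", "Undo"]
puts notification_actions(can_act_manually: false).inspect  # ["View Shortcut"]
```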

You might have noticed I wasn’t as generative when brainstorming our feedback and logging features. Instead of listing every goal we were solving for, I made the mistake of framing the whole project as one big design problem. This led me to focus on the editor at the expense of the rest of the project.

I’ve since learned to take the time to break every project down into a series of “How might we?” questions. This makes it more likely the team can thoughtfully explore solutions to each one.

  • How might we help people automate their existing processes?
  • How might we make automation obvious and transparent to the user?
  • How might we help people minimize the penalty of innocent mistakes?
  • How might we help people diagnose and fix mistakes after the fact?

Details and polish

To reduce scope for our MVP, we only allowed users to add one trigger and action per rule. This meant you could easily create an accidental paradox: two rules with identical triggers but conflicting actions.

An example of two rules with identical triggers but conflicting actions.

To prevent this, we had to anticipate every circumstance in which a paradox might occur, and validate each input field accordingly.

An example of error validation that warns you when creating a paradox.
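Under the hood, the validation amounts to checking each new rule against the existing ones. Here’s a simplified sketch, using plain hashes and one reasonable definition of a conflict (same trigger, same action type, different action value); it’s an illustration, not Screendoor’s actual validation logic.

```ruby
# Simplified sketch of the paradox check — rule shape and the definition of
# "conflict" are assumptions, not the shipped implementation.
def conflicting?(a, b)
  a[:trigger] == b[:trigger] && a[:trigger_value] == b[:trigger_value] &&
    a[:action] == b[:action] && a[:action_value] != b[:action_value]
end

def conflicts_for(new_rule, existing_rules)
  existing_rules.select { |rule| conflicting?(new_rule, rule) }
end

existing = [{ trigger: :status_changed, trigger_value: "Interviewing",
              action: :reassign_response, action_value: "Claire Denis" }]
proposed =  { trigger: :status_changed, trigger_value: "Interviewing",
              action: :reassign_response, action_value: "Someone Else" }

# Any matches would surface as inline validation errors in the editor.
puts "Conflicts with #{conflicts_for(proposed, existing).size} existing rule(s)."
```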

To help users diagnose issues with shortcuts, I modified the activity feed that appears below each submission, using connected circles to chain triggers and actions together.

An activity feed with connected triggers and actions.

This created an interesting dilemma. Since automated actions could tax our servers, we planned to execute them in the background. So a user could potentially change the response before a shortcut could take effect, thus “breaking the chain” in the feed.

We ended up reordering the events in the feed to keep the chain intact, deciding to prioritize a coherent narrative over the canonical timeline. If users needed to see an event’s timestamp for any reason, they could do so by hovering over it.

Activity feed with original chronology.
Activity feed with revised chronology.
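Conceptually, the revised chronology groups each automated action directly after the event that triggered it, even when other manual events happened in between. A rough sketch of the idea, with a hypothetical event shape:

```ruby
# Hypothetical sketch of the feed reordering: keep manual events in their
# original order, but pull each shortcut-triggered event up so it sits
# immediately after the event that triggered it.
def chain_ordered(events)
  triggered = events.select { |e| e[:triggered_by] }
  manual    = events - triggered

  manual.flat_map do |event|
    [event, *triggered.select { |t| t[:triggered_by] == event[:id] }]
  end
end

feed = [
  { id: 1, text: "Status changed to Interviewing" },
  { id: 2, text: "Label added: Priority" },
  { id: 3, text: "Assigned to Claire Denis", triggered_by: 1 },
]

chain_ordered(feed).each { |e| puts e[:text] }
# Status changed to Interviewing
# Assigned to Claire Denis
# Label added: Priority
```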

After nailing down the broad strokes, we put our heads down for a few weeks and brought the project to a suitable level of polish for an MVP. We logged bugs, gaps, and enhancements for each other inside a GitHub milestone up until the deadline.

Because we set a fixed development timeline, refrained from granular upfront estimates, and achieved consensus around outcomes before starting development, we could re-prioritize tasks as we saw fit without constantly readjusting the team’s expectations.

It took a few projects for us to learn how to do this efficiently inside a remote team. When you’re working across timezones, any ambiguity in a design requires multiple rounds of back-and-forth to resolve, wasting precious time.

Thankfully, by the time we started work on shortcuts, I’d already learned to balance speed and precision: handing off low-fidelity artifacts I could produce quickly alongside concise prose to describe state changes, edge cases, and visual detail.

As the team’s sole designer, I had the freedom to choose whatever design tool seemed best equipped to communicate a given design decision. I definitely recognize the value of using a standardized design tool on larger teams, and do regret not making it easier to onboard my successor! Despite this, I felt at the time this wasn’t a huge concern for a company of our size.

The Result

Streamline your process.

Admins can automatically sort, tag, and route submissions for their team.

Illustration of our standard notifications component.

Prevent automated mistakes.

Screendoor keeps reviewers aware when they trigger a shortcut. If a reviewer thinks a shortcut's not working as intended, they can fix the rule or undo the action if the admin permits it.

An activity feed with connected triggers and actions.

Understand workflows in hindsight.

Concise, clear activity feeds show users how and when automation affected their submissions.

6.

Launch

When it came time to name this feature, “automation” seemed like the obvious choice. But the meaning of that word didn’t fully describe what we’d built.

While our competitors associated automation with freedom from human labor, our feature emphasized human oversight and the ease of fixing the machine’s mistakes. We didn’t want users to delegate responsibility to a machine, but rather to save some time and cognitive overhead.

After brainstorming how to best communicate this mental model, the word “shortcuts” seemed to encapsulate what we were looking for. To minimize customer confusion around a potentially unfamiliar term, we always mentioned shortcuts alongside the words “workflow” and “automation” in product marketing.

The blog post announcing shortcuts.
When it was time to launch, I helped our customer success lead edit an announcement on our blog.

Months after we shipped, I realized Screendoor already had a feature with a very similar name: keyboard shortcuts for power users.

Keyboard shortcuts in Screendoor.

To be fair, the team nearly forgot keyboard shortcuts existed. They weren’t very discoverable, and customers rarely asked about or used them. Over multiple rounds of critique and marketing prep for this project, we never flagged this as an issue, and our customers never brought it up during my tenure.

Nevertheless, it’s still a dumb mistake! We certainly could have done more due diligence. In the future, I’ll be more intentional in thinking through the associations a potential brand name might carry.

A note on instrumentation

To learn how our customers used shortcuts, we decided to track the following metrics.

  • Shortcuts added per customer
  • Shortcuts triggered per customer
  • Rate of shortcut undos per customer
  • Distribution of trigger and action types
  • Distribution of shortcuts per project

Unfortunately, we failed to log these metrics over time, and could only see their recent history. When we revisited them after a few months, they were neither insightful nor actionable. We couldn’t learn how a given metric compared to the previous week or month.

And that’s why I don’t have concrete metrics to include in this case study! Definitely a mistake I won’t make twice.
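In hindsight, even a crude periodic snapshot would have preserved that history. A minimal sketch of the idea — the metric names, values, and CSV storage are illustrative, not what we actually had:

```ruby
# Illustrative sketch: append a dated snapshot of each metric to a CSV so we
# can compare week over week later, instead of only seeing current values.
require "csv"
require "date"

def record_weekly_snapshot(metrics, path: "shortcut_metrics.csv")
  CSV.open(path, "a") do |csv|
    metrics.each { |name, value| csv << [Date.today.iso8601, name, value] }
  end
end

# Example values only.
record_weekly_snapshot({ "shortcuts_added"     => 42,
                         "shortcuts_triggered" => 1_200,
                         "undo_rate"           => 0.03 })
```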

Post-launch refinements

Because shortcuts could affect so many of our features, a seemingly innocuous action could cause potentially confusing effects.

For example, let’s say you were at a small nonprofit that had an annual fellowship, and your colleague Claire was the dedicated interviewer for the position. When setting up the application form, you might create a shortcut like this:

When a response has its status changed to Interviewing, assign Claire Denis.

But what if you invited your boss, and they deleted the “Interviewing” status by mistake? The shortcut you created would no longer be effective, potentially breaking your workflow.

We decided to fix this after we shipped, updating our delete confirmation modals to help people understand the effects of their actions.

Dropdown confirmation.
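The confirmation needed to know which shortcuts a deletion would break. A simplified sketch of that lookup, again with a hypothetical rule shape:

```ruby
# Hypothetical sketch: before a status is deleted, find every shortcut whose
# trigger or action references it, so the modal can warn about the impact.
def shortcuts_referencing_status(rules, status)
  rules.select do |rule|
    (rule[:trigger] == :status_changed && rule[:trigger_value] == status) ||
      (rule[:action] == :change_status && rule[:action_value] == status)
  end
end

rules = [{ trigger: :status_changed, trigger_value: "Interviewing",
           action: :reassign_response, action_value: "Claire Denis" }]

affected = shortcuts_referencing_status(rules, "Interviewing")
puts "Deleting this status affects #{affected.size} shortcut(s)." if affected.any?
```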
7.

Impact

On a few metrics DOBT valued highly, shortcuts exceeded expectations.

Positive

10K+ shortcuts triggered weekly

Given the size of our customer base and the average number of submissions per form, this was a very respectable number for us.

Reduced support volume

Feature requests for automation essentially vanished overnight, and we received few support questions about shortcuts themselves. These were two indirect indicators that people who needed shortcuts could use them without help.

Revenue

We won a sweeping deal with a major state government agency, largely based upon a demo showcasing shortcuts. This had a huge impact on our sustainability as an early-stage bootstrapped company.

Unfortunately, we failed to achieve one of our main goals: making Screendoor palatable to large government agencies.

While training our new client’s staff and helping them manage organizational change, we discovered a few large issues we hadn’t yet addressed.

Negative

A flawed GTM strategy

We falsely assumed these agencies had strong incentives to migrate from paper to digital forms. But the implementation costs, from manually digitizing forms to migrating old submissions, were much higher than we thought.

No workforce management tools

All of our previous customers had 50 reviewers at most. Large government agencies needed many more. We lacked features that would let IT staff easily administer hundreds of accounts.

Scaling bottlenecks

Federal and state government forms were orders of magnitude larger and more complex than anything our existing customers had tried to build. These larger clients encountered usability and performance issues that made their forms hard to build and maintain.

8.

Validation

We spent the following months trying to overcome our go-to-market issues, so I never found time to test the assumptions we’d made during the design process.

Before I dive into how I could have avoided that outcome, here’s what I would have tested if I’d had the time.

Assumptions, and how I would have validated each:

  • New users can discover shortcuts on their own. Validate by running task completion tests.
  • Customers will have under 10–15 triggers per project, making the editor’s sortable list UI practical. Validate by monitoring feature instrumentation over time.
  • All of our target markets, from small newsrooms to huge federal agencies, will use shortcuts in roughly the same way. Validate by segmenting metrics by customer type, and monitoring those metrics over time.
  • Users will notice the notifications that tell them when they’ve triggered a shortcut. Validate by running usability tests.
  • Users will intuitively sort shortcuts by the order of their workflow. Validate by providing select customers with concierge onboarding, asking them to try it out for themselves, and observing their behavior.
  • Our logging and auditing features will reduce negative externalities for users and respondents. Validate by monitoring support tickets and social media sentiment from respondents.

Making time for validation wasn’t just an issue with shortcuts, but a consistent struggle across projects at DOBT. Here are a few things I could have done better.

Pushing back against cultural resistance

Our co-founders valued customer-driven development in the form of quick wins: fulfilling their requests as quickly as possible and thus delighting them with our level of customer service. This created cultural pressure to minimize development time before we announced a feature to the public. Colleagues often saw user testing as unnecessary, especially if those requesting the feature thanked us upon shipping it.

There are a few ways I’ve learned to combat this in subsequent projects.

First, instead of trying to gain buy-in for testing after a project’s done, I document the team’s assumptions as early as possible and develop a test plan in parallel with the design. This gives everyone a shared understanding of what we still need to validate.

Second, I try to describe checkpoints in our test plan at which we can validate individual assumptions before we ship. Testing even small changes can shape the team’s perception of the project’s success.

Making room in our process for iteration

The four layers of design were super helpful as a shared language to discuss and build consensus around work. But I made the mistake of describing it as our entire design process, a complete description of a software project’s lifecycle.

Because the four layers don’t mention iteration, the team believed the design process was over once the aesthetics were finalized. Consequently, it was hard to make the case for validation and iteration before moving on to the next project.

Now I know to introduce the four layers in the context of a larger design process. A shared language is helpful only until it starts to limit you.

Finding ways to test work in progress

At DOBT, I treated my mockups and prototypes as quick-and-dirty communication tools. That’s great for working efficiently, but it often meant my design artifacts weren’t comprehensible to users. Even if I had wanted to conduct some guerrilla usability testing before starting development, there wasn’t much for us to test.

I’ve learned to take the time to construct my interaction design work with both internal communication and external testing in mind. If I’m working in Sketch or Figma, I usually create two artifacts from a reusable symbol or component: a lightweight one for internal handoff, and a more complete one I can put in front of users.

When a colleague is knee-deep in a project, they often benefit from much less visual information than a user encountering the work for the first time. I find it saves time in the end to tailor the artifact to its audience.

9.

Process Changes

DOBT’s culture influenced how we designed and launched shortcuts. Sometimes that was for the better, other times for the worse.

After shipping shortcuts, I spent some time working on the cultural issues impacting our success.

Prioritizing work

DOBT was an early-stage bootstrapped startup without strong product leadership. Because of this, short-term revenue potential was our biggest motivator as a team. It was hard to make the case for important product work unless a valuable RFP happened to request it.

This affected how we prioritized work, but also the quality of the work itself. For example, while designing shortcuts, we found a valuable RFP that had automation as a requirement. My colleagues started evaluating possible solutions by their compliance with the RFP, rather than user interviews.

To be fair, one reason we cared so much about prospects was that it was hard to find other feedback sources. We struggled to schedule user research with government workers. They were usually too busy to participate for free, and anti-corruption laws kept us from paying them fair incentives. Ironically, the best way for us to talk to potential users was by entering into a large contract with them.

However, my colleagues also thought sales requests were good for more than revenue. They believed the request represented a broader demand from our target market. If we just fulfilled the request, everyone would be happy… including our current customers.

After we shipped shortcuts, I knew we could disprove this with data. I worked with our support lead to make a Google Sheet where we could track every feature request we received.

Replica of DOBT's feature request spreadsheet.
  1. Customers. One per column, alongside their name and deal value. No sales prospects, only real users.
  2. Features. One per row. We listed their GitHub issue number, so colleagues could find resources and discussion around that feature if they were curious.
  3. Requests. Alongside the number of requests the customer had made, we linked to the original conversation in Front, our shared inbox. This gave us an organized database of primary sources to inform new design work.

After a few months, we accumulated a healthy backlog. Colleagues started to notice that sales requests tried to solve problems our customers didn’t think were important. They began to realize conversations with buyers didn’t represent user needs.

The spreadsheet also got us to communicate more across disciplines. We started tagging prospects in our CRM with their feature requests. Now we could objectively compare what large agencies wanted with what our users needed.

At first, I triaged each request myself and tagged the appropriate issues. But going forward, I wanted to help colleagues update the spreadsheet on their own. So I taught them how to clarify the root cause behind customer requests, by asking diligent follow-up questions and empathizing with the user’s core problem.

The company was small enough that we never scaled up this process during my tenure. But I’d still love the opportunity to do so! Maintaining a repository of customer knowledge helped the product team be more effective. Teaching skills like active listening helped the whole company be more humble.

Product leadership

DOBT was a flat organization with no formal managers. Instead, we organized ourselves around areas of responsibility, making group decisions through “clear, explicit consensus.”

But since we didn’t have a product manager, all prioritization decisions needed unanimous approval. This biased us towards inaction. If an employee didn’t want to work on something, they could simply say as much, and it wouldn’t happen.

This made it hard to predict project scope and timelines: they could change at any time depending on any colleague’s preferences at that moment. While brainstorming ideas for shortcuts, I wasted time presenting concepts that weren’t feasible under the timeline we ended up with.

This project helped me realize how badly we needed a single person held accountable for Screendoor’s success. I advocated to our CEO that we should hire or assign someone the role of product manager. A few weeks later, he offered me the role, and I accepted.

As a first-time PM with other responsibilities on my plate, this new role by no means solved all our problems. But it definitely reduced our tendency to re-litigate past decisions.

Participatory design

I was always looking for opportunities to get my colleagues more involved in the design process beyond async critique. My work can only improve when exposed to diverse perspectives. But I wasn’t as successful at making this happen as I would have liked.

It was challenging to establish shared norms on a remote team that didn’t share routines, working styles, or an office. Any attempt to encourage group activities, like team brainstorming sessions, felt like I was going against the grain of DOBT’s organizational design.

Our policy around internal communication tools was somewhat to blame here. Most of our tools were built for async work, and we were discouraged from trying new ones for the sake of maintaining a centralized repository of company knowledge. On the one hand, it was great to have that level of organizational discipline. On the other, it was hard to find better ways to collaborate without fighting the way those tools were designed.

I tried to compensate for this lack of team participation by adopting an ethic of rigorous self-critique in my own work, doing my best to overcome mental blocks and stay aware of my own limitations. But, as the keyboard shortcuts issue demonstrates, one person can’t think of everything.

Since I finished this project, new tools like Figma and Mural have made remote collaboration much easier. But I’ll keep thinking about how to encourage shared practices on teams that embrace a wild diversity of working styles.