Parsons ELab: A Process Review for an Incubator

The Thicket team had the chance to work with the Parsons ELab to review the recent selection process for year two of the incubator, housed at Parsons School of Design. Our evaluation examined how the ELab's expert network influenced the selection process, and produced recommendations for supporting the accepted teams and for further refining the selection process for year three.

Going deeper with a process analysis can help incubators and other innovation-focused communities learn more from their existing data and build a stronger understanding of how the structural elements of a program are influenced by the people who take part in it, including the experts and advisors who are key in making recommendations. At the beginning of an application cycle, the central challenge is to create a selection process that accurately predicts which applicants have the right mix of team, assets, and business model to effectively leverage their resources over the incubation cycle for future success.

Our process review for incubator selection focuses on:

  • Understanding how your selection process could impact your outcomes
  • Learning more about your expert network and how they’re contributing to your model
  • Identifying specific areas for supporting ELab fellows to give them their best chance of success

The ELab Selection Process

To evaluate the ELab's selection process, we started with the quantitative rubric filled out by the program's 14 judges and went deeper in three areas: the criteria used for selection, how the applicant companies measured up, and the panel of expert judges, looking in each case for targeted areas to improve.

Criteria: Key predictors of selection

The criteria for selection included 22 indicators across three categories: personality, skills, and viability. First, we assessed which criteria were the most influential in whether a company was selected for the program. We discovered that three viability indicators and one skills indicator were the top deciding factors in company selection (see the sketch after this list):

  • A clear and effective solution 
  • Financial prospect and potential
  • Market analysis: competition and the industry
  • Team management and clarity of roles
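
To make this concrete, here's a minimal sketch of one way to surface these predictors from rubric data. The file name and column layout (company, criterion, score, and a 0/1 selected flag) are assumptions for illustration rather than the actual ELab export; the idea is simply to correlate each criterion's scores with the accept/reject outcome.

```python
import pandas as pd

# Hypothetical export of the rubric: one row per judge/company/criterion
# score, plus a 0/1 "selected" flag for each company.
scores = pd.read_csv("elab_rubric.csv")

# Average each company's score on each criterion across all judges.
per_company = (
    scores.groupby(["company", "criterion"], as_index=False)
    .agg(mean_score=("score", "mean"), selected=("selected", "first"))
)

# Correlate each criterion's mean score with the accept/reject outcome
# (a point-biserial correlation, since "selected" is binary). The criteria
# with the highest values best separate accepted from rejected companies.
influence = (
    per_company.groupby("criterion")
    .apply(lambda g: g["mean_score"].corr(g["selected"]))
    .sort_values(ascending=False)
)
print(influence.head(4))  # the top deciding factors
```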

Additionally, we discovered that viability scores were generally lower across all of the companies than skills and personality scores. While this could mean that the applicant pool genuinely had less viable business models, it could also indicate that judges hold viability to a different standard or place more emphasis on it. It's also worth considering that viability criteria may be easier to weigh in on critically, while judges might be reluctant to give low scores on personality or skills.

Companies: How applicants measure up

Across all applicants, including the accepted companies, viability was the weakest-scoring area. Most companies received their most favorable scores on personality, followed by skills. This suggests that while the applying teams are strong, the business ideas need work. A business model workshop might be a valuable offering in the run-up to next year's application process.

The standout companies were generally more robust, with consistently high scores across all three areas; we can expect these companies to be more likely to grow holistically. The weaker companies are less well rounded: they might perform well in some areas and poorly in others, suggesting that targeted intervention services could help them improve and give them the best chance of success.
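
Both company-level views above can be checked with a short sketch, again under the same hypothetical rubric layout (with a category column mapping each criterion to personality, skills, or viability); this isn't our exact method, just an illustration of the two measures.

```python
import pandas as pd

# Same hypothetical export as before, with a "category" column mapping
# each criterion to personality, skills, or viability.
scores = pd.read_csv("elab_rubric.csv")

# View 1: the average score per category across all applicants. A lower
# mean for viability would reproduce the pattern described above.
print(scores.groupby("category")["score"].mean().sort_values())

# View 2: how well rounded each company is. Average within each category,
# then take the spread across the three category means: a low spread means
# a consistent profile, while a high spread means strong in some areas and
# weak in others.
category_means = (
    scores.groupby(["company", "category"])["score"].mean().unstack()
)
spread = category_means.std(axis=1).sort_values()
print(spread.tail(5))  # the least well-rounded companies
```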

Judges: The experts influencing the process

Experts are a key component of the ELab application selection process. We analyzed the judging feedback to identify individuals with high reliability, while weeding out those with low response rates. To gauge reliability, we looked at how close a judge's feedback came to the panel average. You might be wondering: does “closest to average” mean that a judge's feedback is more accurate? No, it doesn't. It means that with fewer judges you can reach the same selection results, paving the way for a smaller, more efficient panel. But before trusting these judges more, the program will need to evaluate outcomes to gauge the value of their input in selecting for success.
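
Here's a sketch of how these judge-level diagnostics can be computed, under the same hypothetical layout (a judge column, with NaN wherever a judge left a cell blank). The exact reliability measure we used may differ; the response rate, deviation-from-consensus, and signed-skew columns simply mirror the quantities discussed here.

```python
import pandas as pd

# Same hypothetical export, with a "judge" column and NaN where a judge
# left a score blank.
scores = pd.read_csv("elab_rubric.csv")

# Panel consensus: the average score for each company/criterion pair.
consensus = scores.groupby(["company", "criterion"])["score"].transform("mean")
scores["dev"] = scores["score"] - consensus

summary = scores.groupby("judge").agg(
    # Share of cells the judge actually filled in.
    response_rate=("score", lambda s: s.notna().mean()),
    # Mean absolute distance from consensus: low = "closest to average".
    deviation=("dev", lambda d: d.abs().mean()),
    # Signed mean: negative = harsher than the panel, positive = softer.
    skew=("dev", "mean"),
)
print(summary.sort_values("deviation"))
```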

We found that Judges 4, 5, 12, and 13 stood out for having the closest-to-average votes and good response rates. Judges 1 and 2 had poor response rates, but for different reasons. Judge 1 consistently didn't score specific criteria, suggesting that they didn't feel comfortable giving feedback in those areas; this would be a good question for follow-up. Judge 2's gaps showed no pattern by criterion: they simply didn't score every criterion for certain companies. Judges 2 and 3 also skewed negative in their responses. Because Judge 2 both had a poor response rate and skewed negative, a judging role for the ELab might not be a good fit. Finally, Judge 9 skewed decidedly positive.

Moving Forward

Incubators need evaluations that can improve outcomes, not just measure them, spurring continuous improvement. Tuning selection criteria against actual startup outcomes, combined with refining the expert feedback process, can lead to a more efficient and effective selection process. We're looking forward to continuing our analysis with the ELab team next year!