Now is the Time to Take Action

Since election day, we know that: 

  • Social fragmentation is a clear and present danger to ourselves and our communities. Hate crimes have risen by 6% since election day, a sharper rise than we saw after 9/11. 
  • People who are sick and dying are desperate for change. Communities facing poor public health outcomes, including decreased life expectancy and increased prevalence of obesity, diabetes, heavy drinking, and physical inactivity, formed one of the core voting blocs that brought Trump to power. 
  • People are mobilizing to take action. Thousands of groups have been organizing every day since the election, mobilizing to fight hate, protect communities, share resources, and take local action.

The Thicket team is ready and willing to help. We’re launching Rebuild Together to help us connect with people who want to take action. 

Go to http://rebuildtogether.co and share your commitment to local action. By sharing your action, you’ll also help us gather useful data on what people are doing on the ground in different communities. We’ll be releasing this data publicly to help people, organizations, and networks identify issues and actions that are most relevant to their communities and help them plan and rebuild. 

Here’s how we’ll use any data that you share with us through this website:

  • We will create a shareable commitment that includes your name that will be publicly displayed on the website and that you can post to social media channels. 

  • We will summarize and report on data publicly in aggregate for communities to learn what issues and actions are emerging as high priorities. 

  • If you join the campaign by sharing your email address, our team will send you updates about the campaign and future opportunities to take action on the issues you care about. 

  • In January, we will offer campaign subscribers a free account on the Possibility Engine, our social networking platform that helps people take action together. 

Please share your action with us at http://rebuildtogether.co and help us gather this important data. 

Flourishing Through Feedback

Lifehack Fellows brainstorm barriers and opportunities around wellbeing. 

Fellows create journey maps for specific impact goals. 

Lifehack experiments with new approaches to enabling youth wellbeing at the intersection of health and wellbeing science, design, technology, and social entrepreneurship. As a youth wellbeing innovation platform where people from different parts of the system come together to learn, share, and develop new approaches, projects, and ventures, Lifehack keeps young people's wellbeing at the centre of the process. Their team works alongside young people, whānau (Māori: wider family), communities, individuals, and organizations that work with young people, supporting them to develop solutions to the challenges they see around them.

Supported by New Zealand's Ministry of Social Development, the Lifehack team has developed its programming over the last three years using a social innovation labs model. Its signature program, the Flourishing Fellowship, is in its third year of building people's capability to create positive change while growing a network and movement of changemakers with the geographic and sectoral reach to radically improve youth wellbeing. 

For its 2016 cohort, Lifehack worked with Thicket Labs to use data-driven design thinking activities to design a framework for quantitative feedback-based evaluation. As we enter the third and final month of the Fellowship program, we're seeing how the process has unearthed new strategic insights for program improvement and led to a deeper understanding of how network effects are influencing the Fellows' collective impact.

Stay tuned: we'll be releasing the results in February 2017, along with new interactive visualizations for distilling insights from design thinking activities. We'll also discuss how to use design thinking to gather feedback, how to connect those qualitative insights to quantitative evaluations, the sensitivity of language in multicultural contexts, and why traditional evaluation may need to adapt to apply to social innovation lab programs. 

Research Partnership with Lakehead University

Lakehead University in Thunder Bay, Ontario

Since January of this year, Lakehead University students have been assisting Thicket Labs, thanks to our CTO, Dr. Vijay Mago, who teaches and conducts research at Lakehead in areas including big data, health informatics, and complex social systems. 

Our team has benefited from collaborating with graduate students at the forefront of computational research and data science. Through this collaboration, we've also found new permanent hires who have made significant contributions to our codebase.

To date, we've worked with three students: 

  • Patrick Joanis, a graduate student whose work with us focused on developing graph similarity and community detection algorithms for social network analysis (an illustrative sketch of this kind of analysis follows this list).
  • Manpreet Singh, a graduate student who developed Java code to link front-end interfaces to MySQL databases and Thicket's APIs on several client-facing products.
  • Erik Tillberg, an undergraduate student who worked on optimizing a significant portion of our APIs after considerable study. We're pleased to have Erik continuing to work with us this Fall and Winter. 
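To give a flavor of the kind of analysis described above, here is a minimal, hypothetical sketch of community detection and a simple graph-similarity measure in Python using NetworkX. The graph data, the modularity-based algorithm, and the Jaccard edge-similarity measure are illustrative assumptions on our part, not the students' actual implementations.

```python
# Illustrative sketch only: community detection and a simple graph-similarity
# measure on a tiny, hypothetical collaboration graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical collaboration edges between people or organizations
edges = [
    ("A", "B"), ("B", "C"), ("A", "C"),   # one tightly knit cluster
    ("D", "E"), ("E", "F"), ("D", "F"),   # a second cluster
    ("C", "D"),                           # a bridge between the two
]
G = nx.Graph(edges)

# Community detection: greedily maximize modularity
for i, community in enumerate(greedy_modularity_communities(G), start=1):
    print(f"Community {i}: {sorted(community)}")

# One simple graph-similarity measure: Jaccard similarity of edge sets
H = nx.Graph([("A", "B"), ("B", "C"), ("C", "D")])
g_edges = {frozenset(e) for e in G.edges()}
h_edges = {frozenset(e) for e in H.edges()}
print("Edge Jaccard similarity:", len(g_edges & h_edges) / len(g_edges | h_edges))
```

The same pattern scales to much larger networks and to richer similarity measures; the point here is only to show the shape of the workflow.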

We look forward to continuing our relationship with Lakehead University as the new semester kicks off!

Take a minute to VOTE – SXSW 2017

Last year, Thicket made it out to Austin to talk about tech and disconnected youth. This year, we're very excited to share that Thicket has three proposed talks at the 2017 South by Southwest Interactive Festival in March.

Please help us go to SXSW by voting for our proposed sessions!

Future of Film Experience: Seeing Viewer Emotions

Thicket is working with Dance Films Association to use data visualization to engage audiences. Hear about La Medea, a simultaneous dance-theater performance, made-for-camera Latin variety TV show, and live streamed feature film inspired by Euripides’ Greek tragedy. Find out where technology & audience interaction are taking the film festival experience.

VOTE HERE—The Future of Film Experience

SXSWedu: Youth-Led Problem Solving In Action

Too many education programs aren't directly designed for young people's specific needs. Hear from the TechTank Fellows who helped design the program, and join a strategic problem-solving workshop led by the Fellows to brainstorm ways to involve youth in the program design process.

VOTE HERE—Youth-Led Problem Solving In Action

Hacking the Traditional Tech Talent Pipeline

How do we diversify the tech talent pipeline? Hear about new efforts to bring diversity and inclusion through neighborhood-based pipeline initiatives, tech-enabled mentorship, and employer/trainer partnerships.

VOTE HERE—Hacking the Traditional Tech Talent Pipeline

We'd love your vote — and your help spreading the word! If you haven't yet, please take a minute to register and give us your vote.

What Drives Collaborative Action?

The OpenGov Hub in Washington DC has a community that extends far beyond the 35 member organizations that work at the Hub. As a center for collaboration, learning, and innovation on open government issues, the Hub sees a broad visitor base for its weekly programming and events. The ideas being shared and projects being worked on at the Hub represent an opportunity for shared action that extends well beyond its walls.

Thicket Labs was pleased to drop by the OpenGov Hub recently to offer a collaborative mapping demo to explore ways to elevate collaboration within the OpenGov Hub network. Our demo brought together a group of OpenGov Hub members, friends of the Hub and even a few folks who had never been to the Hub before.

Demo participants took a 15-minute survey before we dove into exploring real-time mapping. Participants received personalized insights about how their organization's work relates to the work of others in the room and beyond. Our aim was to help participants contextualize their work in their broader field and identify opportunities for deeper collaboration. Check out some of the results here.

Although most of the survey participants were not formally affiliated with the Hub, the mapping exercise identified connections and relationships across the group. We found that 50% of all survey participants were connected to at least one other group as a collaborator or learning partner. Across participants from both inside and outside the Hub, Development Gateway, Global Integrity, and Accountability Lab stood out as information providers to other organizations in the network.
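As a hypothetical illustration of how findings like these can be pulled out of survey responses (this is not our actual mapping pipeline), the sketch below loads reported "provides information to" links into a directed graph and uses out-degree to surface information providers. All organization names and links are invented.

```python
# Illustrative sketch: surfacing "information providers" and overall
# connectedness from hypothetical survey-reported links.
import networkx as nx

# Hypothetical directed edges: (provider, receiver) of information
reported_links = [
    ("Org A", "Org D"), ("Org A", "Org E"),
    ("Org B", "Org D"), ("Org B", "Org F"), ("Org B", "Org G"),
    ("Org C", "Org E"),
]
G = nx.DiGraph(reported_links)
G.add_node("Org H")  # a participant who reported no connections

# Out-degree = number of groups an organization provides information to
for org, out_degree in sorted(G.out_degree(), key=lambda pair: pair[1], reverse=True):
    print(f"{org}: provides information to {out_degree} group(s)")

# Share of participants connected to at least one other group
connected = sum(1 for node in G if G.degree(node) > 0)
print(f"{connected}/{G.number_of_nodes()} participants reported at least one connection")
```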

Most of the survey participants identified themselves as working in sectors related to international governance and cities & services. Across participants, use of collaboration and innovation tools and strategies is widespread. Group brainstorming is the norm for most, while tools like rapid prototyping and user testing are less common. Collaboration spaces stood out as a particularly essential tool for most, followed closely by open data.

Our mission at Thicket is to help teams excel in high collaboration environments. We build collaborative intelligence technology solutions that bring people together to align on strategy and performance goals and implement results-based management programs. Exploring collaboration processes with the OpenGov Hub community is one of the ways we're moving our own work forward to develop action-oriented collaboration tools.

The Thicket team is looking forward to working with the Hub network to map collaboration workflows and develop solutions to support existing collaborations and spark new ones. We hope to help the OpenGov Hub community work together to accelerate government openness and citizen participation around the world.

A new way to make Hollywood more diverse

At Thicket, we align communities through our collaborative intelligence software products. We specialize in unraveling complex social environments to understand the human elements: relationships, values, emotions, agendas, concerns, fears, hopes.

Diversity in media is certainly one of those complex environments. It only takes five minutes on your social platform of choice to realize that conversations generated by hashtags like #OscarsSoWhite, #DiversityInEntertainment & #WomenInMedia can be as divisive as they are important.

It isn’t just a moral imperative that is pushing us towards a more diverse media landscape. As Wesley Morris and James Poniewozik write in the NY Times, “TV audiences for everything are smaller now, which means networks aren’t programming each show for an imagined audience of tens of millions of white people. On top of that, there are younger viewers for whom diversity — racial, religious, sexual — is their world. That audience wants authenticity; advertisers want that audience.”

But current methods for measuring audience preferences are not up to the task of helping media professionals create more original, diverse content. Netflix had a massive hit with the US remake of the British TV show House of Cards, which debuted to rave reviews and ratings in 2013. In commissioning the show, Netflix used data about people’s past viewing choices to cast Kevin Spacey and bring on David Fincher to direct. This was a major win for the industry in using data to validate creative decisions.

But what if it were possible to get past the information gap that limits original programming options to “tried and true” story formats, characters, and casting choices? To that end, Thicket has created an online quiz to track how people relate to their favorite fictional characters from film and television. Not only will the quiz help you understand why you connect to the characters you love; in aggregate, the results will help content creators craft characters that truly resonate with fans, rather than relying on generalized stereotypes and other assumptions.

A very interesting quiz, I realized that my favorite characters are very similar, even though they are from different eras, genres, and genders.
— Quiz Taker

With our quiz, creatives have the option to test out their decisions against a richer dataset that unlocks areas of opportunity hidden behind complex behaviors and trends. On May 4th, we’ll be demoing the quiz at the Made in NY Media Center Demo Day Marketplace. We want your input to be part of the results we share with media industry professionals at Demo Day.

Please take a few minutes to think about your favorite characters, and take the quiz.

Then, pass it on. The more information there is in total, the more the results of this quiz can be used to help storytellers craft the characters you can relate to. This quiz protects your privacy — while the results will be looked at in aggregate, no personal information of any kind is tracked (or even asked for).

As one quiz taker put it, “Give it a try; the worst that can happen is that you might influence someone to create your favorite character of all time.”

Parsons ELab: A Process Review for an Incubator

The Thicket team had the chance to work with the Parsons ELab to review the recent selection process for year two of the incubator, housed at Parsons School of Design. Our evaluation reviewed how the ELab network influenced the selection process and offered recommendations for supporting the accepted teams and for further refining the selection process for year three.

Going deeper with a process analysis helps incubators and other innovation-focused communities learn more from their existing data and build a stronger understanding of how the structural elements of a program are influenced by the people who take part in it, including the experts and advisors whose recommendations carry weight. At the beginning of an application cycle, the key challenge is to create a selection process that can accurately predict which applicants have the right mix of team, assets, and business model to effectively leverage their resources over the incubation cycle for future success.

Our process review for incubator selection focuses on:

  • Understanding how your selection process could impact your outcomes
  • Learning more about your expert network and how they’re contributing to your model
  • Identifying specific areas for supporting ELab fellows to give them their best chance of success

The ELab Selection Process

To evaluate the ELab's selection process, we started with the quantitative rubric that their 14 judges filled out and went deeper in three areas: we analyzed the panel of expert judges, we looked at the criteria used for selection, and we did a deeper dive on the startups under consideration to find targeted areas for improvement.

Criteria: Key predictors of selection

The criteria for selection included 22 indicators across three categories: personality, skills, and viability. First, we assessed which criteria were the most influential in whether a company was selected for the program. We discovered that three viability indicators and one skill indicator were the top deciding factors in company selection: 

  • A clear and effective solution 
  • Financial prospect and potential
  • Market analysis: competition and the industry
  • Team management and clarity of roles

Additionally, we discovered that viability scores were generally lower across all the companies compared to skills and personality. While this could be the result of companies generally having less viable business models, it could also indicate that judges have a different standard for or emphasis on viability versus skills and personality. It’s also important to consider that viability criteria may be easier to weigh in on, while judges might be reluctant to give lower scores on personality or skills. 
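As a rough illustration of how one might quantify which criteria most influence selection, the sketch below correlates each criterion's average score with the accept/reject outcome. The data, column names, and the point-biserial approach are hypothetical; they stand in for, rather than reproduce, the analysis we ran for the ELab.

```python
# Illustrative sketch: estimate which rubric criteria track most closely with
# the selection decision by correlating scores with the accept/reject outcome.
import pandas as pd

# Hypothetical judge scores averaged per company (1-5 scale) plus the outcome
scores = pd.DataFrame({
    "clear_solution":      [4.5, 3.0, 4.8, 2.5, 4.2, 3.1],
    "financial_potential": [4.0, 2.8, 4.6, 2.2, 3.9, 3.0],
    "market_analysis":     [4.2, 3.1, 4.4, 2.6, 4.0, 2.9],
    "team_management":     [4.6, 3.5, 4.7, 3.0, 4.1, 3.4],
    "selected":            [1,   0,   1,   0,   1,   0],   # 1 = accepted
})

# Point-biserial correlation: criteria most aligned with the selection outcome
influence = (scores.drop(columns="selected")
                   .corrwith(scores["selected"])
                   .sort_values(ascending=False))
print(influence)
```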

Companies: How applicants measure up

Across all applicants as well as accepted companies, viability received the weakest assessments. Most companies scored most favorably on personality, followed by skills. This suggests that while the applying teams are strong, the business ideas need work. A business model workshop might be a valuable offering in the run-up to next year’s application process. 

The standout companies were generally considered more robust with more consistently high scores across all three areas. We can expect these companies to be more likely to grow holistically. The weaker companies are not as well rounded; they might perform well in some areas and poorly in others, suggesting that intervention services may be useful to help them improve in targeted ways and give them the best chance for success. 

Judges: The experts influencing the process

Experts are a key component of the ELab application selection process. We analyzed judging feedback to identify individuals who had high reliability while weeding out those with a low response rate. To gauge reliability, we looked at how close a judge’s feedback came to the average. You might be wondering: does “closest to average” mean a judge’s feedback is more accurate? No, it doesn’t. It means that with fewer judges, you can reach the same selection results, paving the way for a smaller, more efficient panel. But before trusting these judges more, it will be necessary to evaluate program outcomes to gauge the value of their input in selecting for success. 

We found that judges 4, 5, 12, and 13 stood out for having votes closest to the average and good reply rates. Judges 1 and 2 had poor reply rates, but for different reasons. Judge 1 consistently skipped specific criteria, suggesting they didn’t feel comfortable giving feedback on them; this would be a good question for follow-up. Judge 2’s missing input showed no pattern by criterion; instead, they skipped all criteria for specific companies. Judges 2 and 3 also skewed negative in their responses. Because Judge 2 had a poor response rate and skewed negative, a judging role for the ELab might not be a good fit. Finally, Judge 9 skewed decidedly positive. 
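For readers who want to see the mechanics, here is a small, hypothetical sketch of the two measures discussed above: closeness to the panel average as a reliability proxy, and response rate. The data, judge labels, and the use of the within-panel average (rather than a leave-one-out average) are simplifying assumptions for illustration.

```python
# Illustrative sketch: judge "reliability" as mean absolute deviation from the
# panel average, plus response rate to spot judges who skip criteria.
import numpy as np
import pandas as pd

# Hypothetical long-format scores: one row per judge x company x criterion
rows = [
    # (judge, company, criterion, score) -- NaN means the criterion was skipped
    ("J1", "Co1", "viability", 3.0), ("J1", "Co1", "skills", np.nan),
    ("J2", "Co1", "viability", 2.0), ("J2", "Co1", "skills", 2.5),
    ("J3", "Co1", "viability", 4.0), ("J3", "Co1", "skills", 4.5),
    ("J1", "Co2", "viability", 4.0), ("J1", "Co2", "skills", np.nan),
    ("J2", "Co2", "viability", np.nan), ("J2", "Co2", "skills", 2.0),
    ("J3", "Co2", "viability", 3.5), ("J3", "Co2", "skills", 4.0),
]
df = pd.DataFrame(rows, columns=["judge", "company", "criterion", "score"])

# Panel average for each company x criterion (includes the judge's own score,
# a simplification; a stricter version would use a leave-one-out average)
panel_avg = df.groupby(["company", "criterion"])["score"].transform("mean")

# Reliability proxy: mean absolute deviation from the panel average (lower = closer)
df["deviation"] = (df["score"] - panel_avg).abs()
closeness = df.groupby("judge")["deviation"].mean().sort_values()

# Response rate: share of criteria a judge actually scored
response_rate = df.groupby("judge")["score"].apply(lambda s: s.notna().mean())

print(pd.DataFrame({"mean_abs_deviation": closeness, "response_rate": response_rate}))
```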

Moving Forward

Incubators need evaluations that can improve outcomes, not just measure them, to spur continuous improvement. Optimizing selection criteria for successful startup outcomes, combined with improving the expert feedback process, can lead to more efficient and effective selection. We're looking forward to continuing our analysis with the ELab team next year!

Data-Driven Workshops at Thicket Labs

Designing for Engagement

We recently ran a data-driven workshop exploring meaningful employment with young professionals living and working in New York. Since we focus on designing workshops that engage all types of learners, we built our agenda around design thinking activities that offer visual thinking, tactile interactions, and storytelling opportunities for participants to develop their ideas. Our outputs show the richness of the process: a prioritized map of opportunities and challenges.

Design thinking creates opportunities for deep collaboration, which is why it's such a workflow essential for so many teams, in the boardroom or out in the field. But when it comes to actionable outputs and measurable insights, the lo-fi format of the data captured from design thinking activities often falls short. The results can be tough to translate into charts and figures for further analysis and action.

Translating Insights into Data

Thicket extracts data from the design thinking experience for stronger workshop outcomes. We use our collaborative intelligence platform, the Possibility Engine, to transform qualitative workshop insights into quantitative datasets for more robust analysis and action. These datasets can be analyzed in a variety of ways using the Possibility Engine and can be integrated with datasets from other workshops held on another day or in another location. The Possibility Engine includes tools like network analysis and predictive analysis that help shed light on more dynamic or nuanced contexts, like the job search experience.
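As a simplified illustration of the general idea, rather than the Possibility Engine itself, the sketch below captures design-thinking outputs such as themed sticky notes and dot votes as a structured dataset that can be aggregated and later combined across workshops. All themes, labels, and numbers are hypothetical.

```python
# Illustrative sketch: turning lo-fi workshop artifacts (sticky notes + dot
# votes) into a structured, analyzable dataset.
import pandas as pd

notes = pd.DataFrame([
    # (workshop, theme, type, dot_votes)
    ("NYC workshop", "networking",        "opportunity", 12),
    ("NYC workshop", "unclear job paths", "challenge",    7),
    ("NYC workshop", "mentorship",        "opportunity",  9),
    ("NYC workshop", "cost of living",    "challenge",    5),
], columns=["workshop", "theme", "type", "dot_votes"])

# Aggregate: which themes and types drew the most votes in this workshop?
priorities = (notes.groupby(["type", "theme"])["dot_votes"]
                   .sum()
                   .sort_values(ascending=False))
print(priorities)

# Because the data is structured, another workshop's notes can simply be
# concatenated onto this frame and re-aggregated for cross-workshop comparison.
```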

What We Learned

So what did we learn about meaningful employment from young professionals living and working in New York?

  1. Out of the 9 overall opportunities and challenges identified by our group, networking was the number one opportunity for meaningful employment.
  2. While the number of challenges around meaningful employment outweighed the number of opportunities 5 to 4, the voting process revealed that challenges were deemed less important than opportunities overall.
  3. The practical and the emotional sides of the networking journey are interrelated and messy.
  4. For this group, the journey to becoming a champion networker involves 5 steps: identify barriers, research skills to overcome them, practice skills with trusted friends, apply them in the real world, get results.
  5. Steps 2, 3, and 4 are interconnected and show the presence of feedback loops (see the sketch after this list). Some of these feedback loops could be further unraveled in a future workshop.
  6. The middle of the emotional journey is associated with thinking and learning states of mind, like self-reflection and observation, while the beginning and end of the journey are characterized by highly charged emotions, like fear and confidence.
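To make the feedback loops concrete, here is a small, hypothetical sketch that models the journey above as a directed graph and detects its cycles. The two feedback edges are assumptions added for illustration, and the step names are paraphrased from the list.

```python
# Illustrative sketch: the networking journey as a directed graph, with
# feedback loops surfaced as cycles.
import networkx as nx

steps = ["identify barriers", "research skills", "practice with friends",
         "apply in real world", "get results"]
journey = nx.DiGraph()
journey.add_edges_from(zip(steps, steps[1:]))                  # the forward path
journey.add_edge("apply in real world", "research skills")     # assumed feedback loop
journey.add_edge("practice with friends", "research skills")   # assumed feedback loop

# Each cycle corresponds to one of the feedback loops among steps 2-4
for cycle in nx.simple_cycles(journey):
    print("feedback loop:", " -> ".join(cycle))
```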

Taking It Further

Conducting this same workshop with more and varied groups of participants will enrich our dataset and reveal new perspectives on what meaningful employment looks like, both in New York and potentially around the world. If you're interested in bringing this data-driven workshop to your team, you can learn more about our workshops.

Healthcare Innovation Starts with Community

Last year, I had the opportunity to take part in “Designing Evaluations For What Communities Value,” a meeting organized by the Institute of Medicine’s Collaborative on Global Chronic Disease along with Thicket colleague Pritpal S Tamber. The meeting brought together people trying to think differently about community-driven health programs. Our key questions were: How do we improve programs in ways that are aligned with what a unique community strives for? How do we understand how communities view health and wellness in the first place?

As Editorial Director for Medicine for the BMJ and as Physician Editor on the editorial team for TEDMED 2013, Pritpal has closely tracked clinical evidence and healthcare innovation for many years. Based on that experience, he founded the Creating Health Collaborative to understand why, despite their potential, broader definitions of health remain on the fringe of health innovation. Today, the Creating Health Collaborative is proposing a new way to define health: as more than just the absence of disease, and as a way for people to take a growth-mindset approach to their health.

This year, Pritpal, along with Leigh Carroll and Bridget Kelly, has once again brought together a cross-section of community health professionals and evaluation practitioners, this time in a public forum. The Communities Creating Health series is being featured in the Stanford Social Innovation Review over the next few weeks, and I'm pleased to share Thicket's contribution: "Emerging Tools for Community-Driven Evaluations." The piece features eight tools that can prove useful in this emerging space, covering a range of approaches derived from design thinking, community organizing, and systems analysis.

New articles in the series are published twice a week, and you can access the full Communities Creating Health series here. Read more of Pritpal's thinking in Community, Movements, and Spread: It’s the Process That Matters.