You've narrowed 3,000 applications down to 200 shortlisted startups. Now comes the hard part: deciding which 15 actually get into your cohort. The difference between a good accelerator and a great one isn't deal flow — it's due diligence. Here's the checklist our team uses to evaluate every shortlisted startup systematically.

Why Checklists Beat Intuition

Experienced program directors develop strong pattern recognition over time. But intuition alone introduces bias. The charismatic founder gets the benefit of the doubt. The quiet technical team with better metrics gets overlooked. A structured startup screening checklist doesn't replace judgment — it ensures judgment is applied consistently across every application.

The checklist below covers four areas: team, market, traction, and red flags. Each item is something you can evaluate with the information in a standard application plus 30 minutes of research. For a deeper look at how AI can handle the initial scoring pass before you even get to this stage, see our guide on how AI screens 5,000+ accelerator applications.

Team Assessment (Items 1–3)

1. Founder-Problem Fit

Why is this team uniquely positioned to solve this problem? The best accelerator companies aren't built by generalists who stumbled onto a market — they're built by people with direct, lived experience of the pain they're addressing. A healthcare logistics startup founded by a former hospital operations director signals deep domain insight. One founded by two fresh graduates with no healthcare experience signals a thesis that needs proving.

What to verify: LinkedIn history, prior roles in the problem domain, published thinking on the topic, previous attempts to solve adjacent problems.

2. Team Completeness

Can this team build and sell the product without critical hires in the next 6 months? A two-person founding team with a technical CTO and a sales-oriented CEO covers more ground than three engineers who've never spoken to a customer. Look for complementary skill sets: builder + seller, domain expert + operator, technical + commercial.

What to verify: Roles and responsibilities, gaps acknowledged in the application, hiring plan if applicable.

3. Commitment Level

Are the founders working on this full-time? Part-time founding teams rarely survive an accelerator's intensity. If both founders still have day jobs, the program will be a side project for them — and your cohort's value gets diluted. This doesn't mean founders can't be pre-launch, but they should be all-in on the problem.

What to verify: Current employment status, equity split (a 50/25/25 split with 3 co-founders suggests misalignment), time commitment stated in application.

Market Validation (Items 4–6)

4. Market Size Is Plausible

Every startup claims a $10B TAM. The question isn't the top-down number; it's the bottom-up math. How many potential customers exist? What's the realistic contract value? Does the math produce a market big enough to matter for your program's thesis?

What to verify: Bottom-up calculation in the application, number of potential customers they've identified, average contract value assumption, comparison to known benchmarks in the sector.
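
To make that concrete, here's a minimal sketch of the bottom-up calculation in Python. Every figure is an illustrative assumption for a hypothetical hospital-logistics startup, not a sector benchmark:

```python
# Bottom-up market sizing: customers the founders can actually name,
# times what those customers would realistically pay. All numbers
# below are illustrative assumptions, not benchmarks.

def bottom_up_market_size(num_customers: int,
                          avg_contract_value: float,
                          reachable_share: float) -> float:
    """Annual revenue opportunity the startup can plausibly address."""
    return num_customers * avg_contract_value * reachable_share

# Assume ~6,000 target hospitals, a $40K/year contract, and that only
# 10% of the market is realistically reachable within five years.
addressable = bottom_up_market_size(6_000, 40_000, 0.10)
print(f"${addressable:,.0f}")  # $24,000,000
```

If the bottom-up figure lands two orders of magnitude below the claimed TAM, that gap is your first interview question.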

5. Evidence of Customer Demand

Has anyone outside the founding team expressed willingness to pay? The strongest signal is revenue. The next strongest is a signed LOI or paid pilot. After that: waitlist signups, active conversations with potential buyers, or a crowdfunding campaign with real backers. Social media followers don't count as demand.

What to verify: Revenue (even $500 MRR matters at pre-seed), LOIs or pilot agreements, customer interviews conducted, waitlist size with conversion context.

6. Timing and Structural Tailwinds

Why now? The best accelerator companies ride structural waves — regulatory changes, technology shifts, demographic trends, or market dislocations that create new opportunities. AI is the obvious current tailwind, but be specific: "AI" isn't a tailwind. "LLMs reducing the cost of document review from $50/document to $0.50/document" is a tailwind.

What to verify: Specific catalyst they've identified, evidence that the timing is genuinely new (not a problem that's existed for decades with no change), competitive landscape shifts.

Traction and Product Signals (Items 7–8)

7. Product Exists (Even If Minimal)

Is there something users can touch? An MVP, a prototype, a landing page with a functioning signup flow — anything that demonstrates the team can ship. Ideas are cheap. Execution is the bottleneck. For accelerator selection criteria, shipping velocity matters more than polish.

What to verify: Live URL, demo video, GitHub activity (for technical products), screenshots with real data (not mockups).

8. Growth Trajectory Over Absolute Numbers

$2K MRR growing 30% month-over-month is more interesting than $15K MRR flat for 6 months. Early-stage traction is about trajectory, not scale. The absolute numbers are almost always small — that's why they're applying to an accelerator. What matters is the slope of the line and whether the founders understand what's driving it.

What to verify: Month-over-month growth rate, cohort retention (if available), understanding of what's working and why, customer acquisition cost relative to lifetime value (even rough estimates).
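
To see why the slope beats the intercept, compound the smaller number forward. A minimal sketch using the figures above ($2K MRR at 30% month-over-month versus $15K flat):

```python
# When does $2K MRR growing 30% MoM overtake $15K MRR that stays flat?
flat_mrr = 15_000
growing_mrr = 2_000
monthly_growth = 0.30

for month in range(1, 13):
    growing_mrr *= 1 + monthly_growth
    if growing_mrr > flat_mrr:
        print(f"Month {month}: ${growing_mrr:,.0f} MRR")  # Month 8: $16,315 MRR
        break
```

The "smaller" company overtakes the flat one in under a year, which is exactly the signal the absolute numbers hide.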

Red Flags (Items 9–10)

9. Misaligned Expectations

What does the founder think the accelerator will do for them? If the answer is "give us money," that's a red flag. Your program's value is mentorship, network, and structured growth — not a seed check. Founders who view the program as a networking opportunity and accountability structure tend to extract far more value than those treating it as a funding round.

What to verify: "Why this program?" answer in the application, specific mentors or resources they've identified, understanding of program structure, whether they've spoken to alumni.

10. Coachability Signals

Has the founder demonstrated the ability to incorporate feedback and change direction? The most successful accelerator companies pivot at least once during the program. Founders who are rigid about their original vision — who treat every suggestion as a challenge to their authority — rarely benefit from the accelerator model. Look for evidence of iteration: a product that's evolved based on customer feedback, a pivot from an earlier idea, or explicit acknowledgment of what they don't know.

What to verify: Product iteration history, response to tough questions in the application, references from previous investors or advisors (if available), history of incorporating feedback.

Putting the Checklist to Work

Print this. Tape it to the wall of your screening room. Or better yet, build it into your evaluation workflow so every reviewer scores against the same criteria.

The checklist works best when it's applied consistently. If Reviewer A skips the market validation items and Reviewer B focuses exclusively on team, their scores aren't comparable — and your cohort selection will reflect noise rather than signal. For more on why this consistency problem gets worse at scale, read 5 signs your accelerator has outgrown spreadsheets.

Here's the scoring framework we recommend:

  • Strong pass (8–10): Meets 8+ checklist items with clear evidence. Move to interview stage.
  • Conditional pass (5–7): Meets core team and market criteria but has gaps in traction or raises a concern. Worth a follow-up call.
  • Decline (1–4): Fewer than 5 items met, or multiple red flags present. Document reasoning and decline with feedback.
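
If you build the rubric into software rather than taping it to a wall, the band logic is simple to encode. A minimal sketch, assuming the score is the count of checklist items met; the function name and thresholds below just mirror the bands above:

```python
# Map a reviewer's checklist tally to one of the three bands above.
# `items_met` counts the 10 items with clear evidence; `red_flags`
# counts concerns raised under items 9 and 10.

def score_band(items_met: int, red_flags: int = 0) -> str:
    if items_met >= 8 and red_flags == 0:
        return "strong pass: move to interview"
    if items_met >= 5 and red_flags <= 1:
        return "conditional pass: schedule a follow-up call"
    return "decline: document reasoning, send feedback"

print(score_band(items_met=9))               # strong pass
print(score_band(items_met=6, red_flags=1))  # conditional pass
print(score_band(items_met=4, red_flags=2))  # decline
```

Encoding the bands guarantees that the same tally always produces the same recommendation, whichever reviewer filed it.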

The goal isn't to reject more companies — it's to make your accepts more defensible. When your LPs or board ask why you selected these 15 out of 3,000, the checklist gives you a data-backed answer instead of "gut feeling."

Automate the Checklist With AI

Manually evaluating 200 shortlisted startups against a 10-item checklist still takes 50+ hours of focused reviewer time. That's where AI-powered accelerator due diligence tools change the math. An AI system can score every application against your checklist criteria automatically — pulling in external data to verify claims, flagging inconsistencies, and surfacing the 40 companies that deserve your deepest attention.
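
At the skeleton level, that pipeline is a loop: score each application against each checklist item, tally the results, and sort so reviewers start with the strongest candidates. A minimal sketch; `llm_score_item` is a hypothetical stand-in for whatever model call your stack uses, not a real API:

```python
# Skeleton of an automated checklist pass. Only the bookkeeping is
# real; `llm_score_item` is a hypothetical placeholder.
from dataclasses import dataclass, field

CHECKLIST = [
    "founder-problem fit", "team completeness", "commitment level",
    "plausible market size", "evidence of demand", "timing tailwinds",
    "product exists", "growth trajectory",
    "aligned expectations", "coachability",
]

@dataclass
class ScreenResult:
    startup: str
    items_met: int
    gaps: list[str] = field(default_factory=list)

def llm_score_item(application_text: str, item: str) -> tuple[bool, str]:
    """Hypothetical: ask a model whether the application shows clear
    evidence for this item, returning (met, one-line reason)."""
    raise NotImplementedError

def screen(applications: dict[str, str]) -> list[ScreenResult]:
    results = []
    for name, text in applications.items():
        verdicts = [llm_score_item(text, item) for item in CHECKLIST]
        results.append(ScreenResult(
            startup=name,
            items_met=sum(met for met, _ in verdicts),
            gaps=[reason for met, reason in verdicts if not met],
        ))
    # Strongest candidates first; reviewers work from the top of the list.
    return sorted(results, key=lambda r: r.items_met, reverse=True)
```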

The checklist stays yours. The criteria stay yours. The final call stays yours. The AI just handles the 80% of the work that's data gathering and consistency enforcement, so your team can focus on the 20% that requires genuine human judgment.