
Research Design: Structure of the Study


This section outlines the research design of the dissertation titled “Are corporate accelerators springboards for startups: a performance analysis of Microsoft’s and Google’s accelerated startups.” The goal, as the title suggests, is to structure an empirical investigation that assesses whether corporate accelerators deliver measurable advantages in startup performance.

The study applies a quantitative, cross-sectional, and comparative research design. It analyses real-world performance data of startups that participated in corporate accelerator programs sponsored by Google and Microsoft, using statistical methods such as ANOVA (Analysis of Variance) to test differences in performance outcomes.

This post is part of a series of articles related to the dissertation “Are corporate accelerators springboards for startups: a performance analysis of Microsoft’s and Google’s accelerated startups.”

Purpose and Approach

The purpose of this study is twofold:

  1. To assess whether each of the two corporate accelerator programs (Google and Microsoft) provides measurable performance advantages to the startups they accelerate.
  2. To compare both programs’ outcomes against each other.

This design enables the dissertation to first evaluate each program individually, and then compare the two, aiming to determine if they serve as “springboards” for startup growth or merely create short-lived boosts with limited long-term impact — the so-called “sand traps.”

Research Type

The research is empirical, relying on structured numerical data and statistical models. It follows a quantitative, cross-sectional, and comparative research design:

  • Quantitative: the study is based on measurable data and numerical indicators (objective), rather than interviews, narratives, or qualitative insights (subjective). It focuses on metrics such as funding raised, survival rates, and IPO outcomes.
  • Cross-sectional: the data represents a single point in time (a defined snapshot of each startup’s first three years), rather than tracking changes over time, as in longitudinal studies.
  • Comparative: the study contrasts two distinct groups (startups accelerated by Google and by Microsoft) to determine which performs better across specific KPIs.

The research uses secondary data obtained entirely from Crunchbase, a platform that aggregates startup profiles, funding activity, and lifecycle events (e.g., IPOs, closures, patents). The data was curated manually and compiled into a spreadsheet; no other data source was used.

Units of Analysis

Each startup in the database is a single unit of analysis, described by the following data properties (a minimal record sketch follows the list):

  • Affiliation: Google or Microsoft.
  • Funding across five established periods of time.
  • Indicators of events such as acquisitions, IPOs, or closures.
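To make the unit of analysis concrete, here is a minimal sketch of such a record in Python; every field name is an illustrative assumption, not the actual column naming of the curated spreadsheet:

```python
from dataclasses import dataclass

@dataclass
class StartupRecord:
    """One unit of analysis: a single accelerated startup."""
    name: str
    program: str                       # affiliation: "Google" or "Microsoft"
    funding_by_period: list            # USD millions across the five time periods
    ipo_within_3y: bool                # went public within 3 years of acceleration
    acquired_within_3y: bool           # was acquired within 3 years
    made_acquisition_within_3y: bool   # itself acquired another company
    survived_3y: bool                  # still active 3 years after acceleration
    patents_pre_acceleration: int      # patents held before entering the program
```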

Key Performance Indicators

The goal is to observe how startups perform depending on which program they participated in, measuring statistical differences without claiming causality. Quantitative methods are particularly suited to measuring KPIs such as:

Amount of funding raised

This measures the total capital (in millions of USD) that a startup has secured across all funding rounds up to a given point in time. It reflects financial traction and the market’s confidence in the startup’s scalability.

IPO timing

IPO timing indicates whether a startup went public within 1, 2, or 3 years after participating in a corporate accelerator program. This variable highlights the speed and perceived market readiness of a company to transition to public ownership.

Survival past three years or closure rates

A binary variable shows whether the startup remained active for at least three years following its acceleration or formally ceased operations. It acts as a benchmark for short-term operational success and resilience. High closure rates may signal post-program fragility or mismatched accelerator selection criteria.

Acquired (Post-Acceleration)

This binary variable identifies whether the startup was acquired by another company within three years of completing the accelerator program. An acquisition may reflect the startup’s strategic value, technological assets, or successful market positioning.

Acquisition Made (Post-Acceleration)

This variable captures whether the startup itself acquired another company within the three-year period following acceleration. Such acquisitions may indicate business maturity, scaling efforts, or expansion into new markets.

Patents (Pre-Acceleration)

This variable tracks whether the startup held any patents before entering the accelerator program, serving as a proxy for technological innovation or intellectual property strength. It also includes the number of patents owned, helping to quantify the startup’s innovation intensity prior to acceleration.
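As an illustration of how the binary KPIs above could be derived from raw lifecycle dates, consider the following pandas sketch. The file name and the columns `acceleration_date`, `ipo_date`, `acquired_date`, and `closed_date` are hypothetical assumptions:

```python
import pandas as pd

# Hypothetical curated spreadsheet exported from Crunchbase.
df = pd.read_csv(
    "crunchbase_curated.csv",
    parse_dates=["acceleration_date", "ipo_date", "acquired_date", "closed_date"],
)

def within_3y(event: pd.Series, accel: pd.Series) -> pd.Series:
    """True if the event happened within 36 months of acceleration."""
    days = (event - accel).dt.days
    return days.between(0, 3 * 365)  # NaN (no event) compares as False

df["ipo_within_3y"] = within_3y(df["ipo_date"], df["acceleration_date"])
df["acquired_within_3y"] = within_3y(df["acquired_date"], df["acceleration_date"])

# Survived: never closed, or closed more than 3 years after acceleration.
days_to_close = (df["closed_date"] - df["acceleration_date"]).dt.days
df["survived_3y"] = df["closed_date"].isna() | (days_to_close > 3 * 365)
```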

Time Segmentation Structure

The database tracks startup performance, funds raised, and outcomes across five chronological categories:

  1. Before Acceleration.
  2. At Acceleration Date.
  3. Year 1: first 12 months after acceleration.
  4. Year 2: between 12 and 24 months after acceleration.
  5. Year 3: between 24 and 36 months after acceleration.

This segmentation allows the study to measure both the initial conditions of startups and their evolution over time.
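A sketch of how a funding event might be mapped to these five categories, given the months elapsed since the acceleration date (the helper name and exact boundary handling are illustrative assumptions):

```python
def time_segment(months_since_acceleration: float) -> str:
    """Map months elapsed since acceleration to the study's five periods."""
    if months_since_acceleration < 0:
        return "Before Acceleration"
    if months_since_acceleration == 0:
        return "At Acceleration Date"
    if months_since_acceleration <= 12:
        return "Year 1"
    if months_since_acceleration <= 24:
        return "Year 2"
    if months_since_acceleration <= 36:
        return "Year 3"
    return "Outside study window"  # events beyond 36 months are not segmented
```

For example, `time_segment(18)` returns `"Year 2"`, placing a funding round raised 18 months after acceleration in the second post-acceleration period.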

Structure of the Database and Exclusions

The database was manually compiled using Crunchbase, a globally trusted platform for startup information. It contains 855 startups in total:

  • 617 startups from Google.
  • 238 startups from Microsoft.

The following startups have been excluded from the study:

  • Startups that participated in both programs.
  • Startups without a clearly identified acceleration date.
  • Startups from non-corporate accelerator programs.

These exclusions eliminate ambiguities and ensure the analysis rests on clean, controlled data aligned with the scope of the research.
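Expressed as a data-cleaning step, the exclusions could look like this sketch; the flag columns `in_both_programs` and `is_corporate_accelerator` are hypothetical assumptions about how the manual curation was encoded:

```python
import pandas as pd

df = pd.read_csv("crunchbase_curated.csv", parse_dates=["acceleration_date"])

clean = df[
    ~df["in_both_programs"]            # exclude startups in both programs
    & df["acceleration_date"].notna()  # exclude unclear acceleration dates
    & df["is_corporate_accelerator"]   # exclude non-corporate accelerators
]
# After cleaning, only the two study groups should remain.
assert set(clean["program"]) <= {"Google", "Microsoft"}
```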

Analytical Approach

The main analytical tool is ANOVA, used to test whether continuous performance indicators (such as funding) differ significantly between the Google and Microsoft accelerator programs, and within each program across the five time periods. Results from ANOVA help determine whether observed group differences are statistically significant, as measured by the p-value against a threshold of p < 0.05.
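For illustration, a one-way ANOVA comparing total funding between the two programs could be run with SciPy as follows (the file and column names are hypothetical):

```python
import pandas as pd
from scipy.stats import f_oneway

df = pd.read_csv("crunchbase_curated.csv")  # hypothetical curated spreadsheet

google = df.loc[df["program"] == "Google", "total_funding_musd"].dropna()
microsoft = df.loc[df["program"] == "Microsoft", "total_funding_musd"].dropna()

f_stat, p_value = f_oneway(google, microsoft)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # the study's significance threshold
    print("Mean funding differs significantly between the programs.")
```

With only two groups, a one-way ANOVA is mathematically equivalent to an independent-samples t-test; ANOVA remains a natural choice here because it extends directly to the within-program comparison across the five time periods.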

Justification

This structured design is appropriate for the following reasons:

  • It allows for systematic and empirical comparison of measurable performance metrics across two groups.
  • It avoids assumptions of causality, focusing instead on statistical significance of mean (average) differences.
  • It fits the structure of the data collected from Crunchbase and mirrors methods used in similar academic studies (e.g., Seitz et al., 2023; Canovas-Saiz et al., 2021).

Conclusion

The research design offers a clear, replicable approach to evaluating whether Google and Microsoft’s accelerator programs act as effective springboards or sand traps. By applying ANOVA within a cross-sectional and comparative framework, the dissertation moves toward answering its core research question using statistically robust methods.

References

  1. Canovas-Saiz, D., Martínez-Sánchez, Á., & Andreu-Andrés, M.Á. (2021). Incubators vs. Accelerators: A survival analysis of new ventures. Journal of Business Research, 125, 371–379. https://doi.org/10.1016/j.jbusres.2020.12.039
  2. Seitz, N., Krieger, B., Mauer, R., & Brettel, M. (2023). Corporate accelerators: Design and startup performance. Small Business Economics. https://doi.org/10.1007/s11187-023-00732-y
  3. Crunchbase. (n.d.). Crunchbase: Discover innovative companies and the people behind them. Retrieved from https://www.crunchbase.com/
  4. McLeod, S. (2022). Longitudinal study. Simply Psychology. https://www.simplypsychology.org/longitudinal-study.html
  5. McLeod, S. (2019). Qualitative vs quantitative research. Simply Psychology. https://www.simplypsychology.org/qualitative-quantitative.html
  6. Surbhi, S. (2016, October 3). Difference between primary and secondary data. Key Differences. https://keydifferences.com/difference-between-primary-and-secondary-data.html
