
Overview of Managing Live Projects
Managing your project effectively after launch is one of the most important predictors of whether your study will run smoothly, fill on time, and deliver high-quality insights. Once recruitment begins, you’re no longer just designing a study—you’re actively steering it.
In this first lesson of our Managing Live Projects course, you’ll learn:
- Why post-launch management matters
- What to expect during the first 24–48 hours
- What can and can’t be changed after launch
At the end of this course, you’ll have the option to download the lesson slides to adapt for your own internal use, and/or take the course quiz to review what you learned.
📹 Prefer watching to reading? This content is available as both an article and a video. Watch our Customer Success Manager, Kaylynn Knollmaier, take you through the content in the video below or keep reading to dive in!
Why managing live projects matters
Once a project goes live, every decision you make influences the fill rate, data quality, and participant experience. Catching feasibility issues, screener gaps, or scheduling conflicts early prevents them from magnifying later in the process. For example, if your screener logic accidentally qualifies a wider audience than intended, identifying this early allows you to adjust the logic before dozens of low-fit participants apply.
Effectively running your projects post-launch will help you:
- Ensure smooth study operations and prevent small issues from snowballing
- Improve recruitment outcomes by promptly reviewing applicants and preventing drop-off
- Create a better participant experience, ultimately leading to fewer cancellations and no-shows
The post-launch process: What happens when you go live
When your project launches, three workflows happen concurrently: the Project Coordinator (PC) review, the recruitment kickoff, and your ongoing review and scheduling process. Understanding these post-launch workflows will help you stay ahead of issues and maintain a predictable recruitment pace.
1. PC project review (for Recruit projects)
After launching your project, a Project Coordinator performs a quality and feasibility check early in the recruitment window. The PC will look at:
- Targeting feasibility based on the characteristics you selected
- Screener logic to flag anything that may result in too few or too many qualified applicants (note: if you’re regularly struggling with setting up effective screeners, our Screener Surveys Deep-Dive course can help!)
- Incentives to confirm they're appropriate for the audience and activity type
- Availability to ensure participants have enough session options to choose from
This PC review is designed to catch issues before they impact your recruitment timeline—things like overly narrow targeting, unclear screener questions, or availability that doesn’t align with participant time zones. Your PC may reach out with suggestions or clarification requests; responding quickly keeps the project moving and ensures recruitment can begin at full speed.
2. Recruitment kickoff
Once the project passes review, the recruitment engine begins:
- Participants who match your characteristics will begin to receive invites.
- New applicants will appear in the Participant Management view of your project builder in real time.
- You can begin reviewing, rating, and approving candidates immediately.
Most researchers see their first qualified candidate within the first 12 hours, and often much sooner!
3. Review, approve, and schedule participants
As participants apply, they are either approved automatically and invited to schedule a session (if you have automated review turned on), or you can review them manually from the Participant Management view in your project builder. To fully vet participants, you may want to review both their screener responses and their participant profiles, which provide supplemental data like demographics, past session performance, and quality indicators.
It’s best to approve participants promptly, ideally multiple times per day while recruitment is active. Approving roughly 2× the number of seats you need filled helps prevent under-recruiting due to no-shows or scheduling conflicts. For studies with quotas, some researchers approve in batches—inviting only the participants that match current needs and tightening the screener once a segment is full.
We’ll discuss the ins and outs of reviewing and scheduling participants in the next lesson.
What can and can’t be edited post-launch
Projects are flexible after launch, but certain foundational settings become locked to preserve the integrity of your study. Bookmark this list and make sure all of the non-editable settings are correct before hitting the launch button.
What can be edited post-launch:
- Project title and listing description
- Incentive amounts
- Preparation instructions
- Schedule availability and calendar
- Requested participant number
- Screener survey questions and format
- Default moderators and collaborators
- Online meeting location links
- Participant approval status
What cannot be edited:
- Project type
- Participant attendance type
- Interview format
- Incentive payment method
- Targeting criteria
- Project close request status
When to escalate changes to your support team
If something major needs to shift, or you’ve hit a limitation you can’t work around, contact the Projects Team at projects@userinterviews.com.
Contact the support team when:
- You need to edit your targeting criteria (e.g., add/remove a characteristic).
- You want to switch attendance type or session format.
- You discover a significant feasibility issue and need guidance.
- You’re unsure whether a mid-recruitment screener change will cause disruption.
- You want support managing quotas or adjusting your recruitment strategy.
Reviewing and Approving Applicants
Reviewing applicants effectively is one of the most important steps in running a successful study—it shapes your sample quality, affects your fill rate, and has a direct impact on participant experience.
In this second lesson of Managing Live Projects, you’ll learn:
- How to view and evaluate participant profiles
- How to make approval decisions confidently and efficiently
- How to update participant statuses
How to view and evaluate participant profiles
Depending on how you invite participants, they’ll appear in different places in the Participant Management section of your project builder.
- Not yet invited: When you add participants manually (via Hub or CSV), they first appear in Not Yet Invited.
- Invited: Once you send invitations, participants from the Not Yet Invited tab move to Invited, where they can apply by completing your screener.
- Applied: If you share a project link publicly—such as in an email newsletter—participants will appear directly in your Applied tab when they apply. Any participants from the Invited tab who submit the screener will move to this view as well.
- Marked potential: If you want to make note of a potential participant, but would rather prioritize other applicants, you can mark them as potential and easily find them again in this tab.
From the Applied tab, you’ll see match percentage, status, ratings, and tags at a glance. It’s common to sort by match percentage, but you may also filter by status or segment tags to manage quotas more efficiently.
Clicking on any participant name opens their slide-out profile, which gives you everything you need to make a decision without losing your place in the table.
Tips for evaluating applicants
When evaluating applicants, go beyond match percentage. Consider things like:
- Screener responses: Do their answers reflect real experience, thoughtful engagement, and alignment with your research goals?
- Fraud signals: Watch for signals of potential fraud or low-fit participation—such as AI-like generic responses, contradictions across questions, implausibly short screener completion times, or sparse demographic data. (Worried about fraud? Take our Preventing & Recognizing Fraud course to learn how to prevent fraudulent participants from affecting your research.)
- Past performance: Someone with multiple no-shows or repeated cancellations is generally not a good fit.
When and why to approve 2x your desired participant number
Because availability varies and some participants will likely drop out, we recommend approving approximately twice as many participants as you ultimately want to speak with.
This buffer ensures you maintain momentum even if some participants can’t schedule or cancel late. For example, if your goal is to conduct eight interviews, approving 16–20 applicants ensures a full schedule with room for inevitable drop-offs.
In the unlikely event that all of your approved participants are willing and able to participate, the excess participants will be added to a participant waitlist. When the study is full, we automatically add new applicants to the waitlist for you—and if a time slot opens up due to another participant dropping out, participants on the waitlist are automatically notified.