This article explains what a Beta tag on an assessment indicates, how Beta assessments work, and what to expect after a Beta period ends.
- Learn more about assessments and how to access them.
- Assessments are only available to Udemy Business Pro users.
What is a “Beta” tag and what does it look like?
A ‘Beta’ tag on an assessment indicates that the assessment is new and that we are gathering data to make improvements.
What kind of changes can be expected during and after Beta?
Beta status does not mean the assessment is experimental or in a trial period; it simply indicates that we are still collecting the user data that lets us fully model the balancing and cohort mechanics for score benchmarking. Beta assessments are still a great way to get meaningful directional feedback on skill proficiency and guidance toward specific learning areas based on the questions missed.
During Beta we may deactivate or change individual questions that create a poor user experience or raise concerns about whether they validly assess the skill. (In fact, we continue to monitor quality and make improvements even after Beta.)
How are questions selected before entering Beta?
Questions go through a significant amount of quality control before their first exposure to learners, so most problematic items have already been eliminated or fixed well before entering Beta. Since Beta assessments are brand new, it’s not uncommon for our team to tweak a few items in response to data gathering and in-product feedback, but in most cases only a small percentage of questions change in the transition out of Beta.
How is the Beta period used?
The improvements we make during the Beta period focus primarily on score accuracy for each particular assessment.
When we first release an assessment, we make some basic scoring assumptions about how the assessment questions will function and how users will perform in relation to the content. Over time, with sufficient data, we can go back and refine those original assumptions into a more accurate scoring model.
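To make that concrete, here is a minimal, purely illustrative sketch of the shift; the article doesn't disclose Udemy's actual scoring model, so the uniform weighting, the difficulty weights, and every function name below are hypothetical:

```python
def initial_score(answers):
    """Launch-time scoring: with no response data yet, treat every
    question as carrying equal weight (roughly equal difficulty)."""
    return 100.0 * sum(answers) / len(answers)

def calibrated_score(answers, difficulty_weights):
    """Post-calibration scoring: weight each question by the difficulty
    estimated from accumulated response data, so harder questions
    contribute more credit when answered correctly."""
    earned = sum(w for x, w in zip(answers, difficulty_weights) if x)
    return 100.0 * earned / sum(difficulty_weights)

answers = [1, 1, 0, 1, 0]               # 1 = correct, 0 = incorrect
weights = [0.6, 0.8, 1.4, 1.0, 1.2]     # hypothetical relative difficulty
print(initial_score(answers))               # 60.0
print(calibrated_score(answers, weights))   # 48.0
```

In this toy example the learner's raw percent correct is unchanged, but the calibrated score drops because the questions they missed happened to be the harder ones.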
Since our major refinements to scoring are made during Beta, once an assessment is calibrated the ongoing updates and improvements we make will be more subtle and happen on a slower cadence. This makes calibrated assessments a much more reliable tool for tracking skill growth through retakes.
What score modeling assumptions are made during Beta?
For Beta assessments, we make the initial modeling assumptions that all questions are roughly the same difficulty, and that average user knowledge of the assessed skill is normally distributed and well aligned with question difficulty. Over time, with sufficient data, we revise those assumptions and remove the ‘Beta’ flag to indicate that the initial calibration process is complete and that learner scores now reflect the unique difficulty of the specific items they answered. The percentile scores reported to learners are also updated to reflect the most recent performance data from assessment responses on the platform.
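As a hedged sketch of both revisions (again hypothetical, not Udemy's implementation): per-item difficulty can be estimated from accumulated responses, here using a simple log-odds proxy in the style of a Rasch model, and percentiles can be recomputed against the latest cohort of scores:

```python
import math
from bisect import bisect_left

def estimate_item_difficulties(response_matrix):
    """Replace the 'all items equally difficult' assumption with a
    per-item estimate once real response data exists, using the
    log-odds of an incorrect answer as a simple difficulty proxy.

    response_matrix[u][i] is 1 if user u answered item i correctly.
    """
    n_users = len(response_matrix)
    difficulties = []
    for i in range(len(response_matrix[0])):
        p_correct = sum(row[i] for row in response_matrix) / n_users
        # Clamp so an all-correct or all-wrong item keeps a finite value.
        p_correct = min(max(p_correct, 0.01), 0.99)
        difficulties.append(math.log((1 - p_correct) / p_correct))
    return difficulties

def percentile_rank(score, cohort_scores):
    """Percent of recent cohort scores strictly below the given score,
    so reported percentiles track the latest platform data."""
    ordered = sorted(cohort_scores)
    return 100.0 * bisect_left(ordered, score) / len(ordered)

# Five users x four items: the last item is clearly the hardest.
matrix = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
]
print(estimate_item_difficulties(matrix))                 # last value is largest
print(percentile_rank(72, [55, 60, 68, 72, 75, 80, 90]))  # ~42.9
```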
How are scores impacted by Beta?
If a learner took a Beta assessment and that assessment later becomes calibrated, there is no retroactive change to their results page. The original results remain intact as a snapshot of the scoring under Beta.
If the user retakes the assessment post-Beta, they may see some shift in score, percentile, and/or proficiency level based on changes in item calibration and cohort (aside from their own possible shifts in performance). Since all of these factors are conflated, it's important not to draw definitive progress or growth conclusions from a Beta vs. calibrated (non-Beta) comparison.
Therefore, when comparisons between multiple assessment takes are being used to draw skill-growth conclusions, note that the most accurate readings come from comparing multiple takes of a fully calibrated (non-Beta) assessment.