Peas in a Pod Discussions are informal, face-to-face conversations that allow for direct exchange and exploration of ideas between conference goers. Join these discussions to meet people with common interests AND to discuss hot industry topics.
Throughout the lifecycle of exam development, regardless of whether the exam will be used to support a credential, license, or employment or workforce decision, we are faced with challenges that require innovative solutions. Some challenges are always present, such as sampling and subject matter expert (SME) recruitment issues, decision making that must balance psychometric needs with business realities, and technical considerations related to item types, delivery, proctoring, etc. In this session, the facilitator will ask the panel of experts questions related to the various challenges that are encountered at each phase of exam development, leaving sufficient time for audience questions and discussion.
Facilitators: Manny Straehle, Assessment, Education and Research Experts; Liberty Munson, Microsoft
Creating secure and fair exams is much like baking the perfect cake: you need to use quality ingredients and follow the steps in the right order or you will have less than reliable results. This session will describe proactive, innovative, and easy-to-implement test security strategies and practices that can help protect testing programs and allow them to create quality assessments that yield trustworthy test results. Using an innovative IT certification program as a case study, this session will discuss ways to limit item exposure, increase score validity, and measure test security effectiveness. Examples and demonstrations will showcase techniques such as digital watermarking, code-based item pool expansion, and secure exam design and item writing strategies. The recipe for protective test security does not start when you deliver an exam; protective test security is a strategic plan that should be baked into the testing process from the start.
Facilitators: Saundra Foderick, Caveon; Benjamin Hunter, Caveon
Have you ever wondered what goes into producing an investigative security report? Have you ever been curious about what investigators do out in the field, how cyber investigations work, or what exactly the "dark web" is? Fieldwork: Investigator Stories is an open discussion for investigators to share their experiences, struggles, and laughs when conducting work out in the field. Those who want to know more can also ask questions such as: What's it like knocking on doors or driving to unknown neighborhoods? How difficult is it to interview a suspect or a witness? Do you have what it takes to be an investigator? Drop by and find out!
Facilitators: Cody Shultz, Guidepost Solutions; Mikel Trevitt, ACT
This session will explore the advantages and disadvantages of developing performance-based exams that are delivered via remote proctoring. We will discuss the decision-making process for selecting performance-based certification exams versus multiple-choice exams. We will also outline the rationale for using remote proctoring for these exams versus traditional brick-and-mortar testing centers. The presenters represent different phases of the performance-based remote proctoring journey: those that are still considering implementing, those that have recently implemented and may use a different delivery mode, and those that are mature in their process and are continuing to refine their program. Presenters represent programs of different sizes, with different numbers and types of exams, but all have a global reach. These different perspectives will start a dialogue with other organizations considering remotely proctored performance-based exams.
Facilitators: Beth Kalinowski, PSI Services; Nathanael Letchford, National Instruments
Candidates who see an overabundance of similar items may complain, raise alarm via social media, and perhaps erode confidence in the exam, which can become a real headache for certification organizations. When a test does not adequately sample across a construct, the validity of inferences made from that test is questionable. Large item banks also present logistical problems for the item development process that are difficult to solve. This presentation will include current research on the evaluation of item pools when large numbers of enemy relationships are present. Natural language processing (NLP) provides automated methods that complement human efforts for flagging item pairs that should be considered enemies (whether for content overlap, cueing, or similar subject matter). Practical details on the use of NLP for enemy item detection for multiple programs over the past decade will be provided.
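The flagging idea above can be illustrated with a minimal sketch. This is not the presenters' actual method; it uses a simple token-overlap (Jaccard) similarity with a hypothetical threshold, where a production system would likely use richer NLP features (stemming, embeddings, cueing detection) and route flagged pairs to SMEs for review.

```python
from itertools import combinations

def tokens(text):
    # Lowercase word tokens; a real system would use stemming or embeddings.
    return set(text.lower().split())

def jaccard(a, b):
    # Jaccard similarity between two token sets (0.0 when both are empty).
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_enemy_candidates(items, threshold=0.5):
    # items: dict mapping item ID -> item stem text.
    # Returns (id_a, id_b, similarity) for pairs whose lexical overlap
    # meets the threshold, as candidates for human enemy review.
    toks = {i: tokens(t) for i, t in items.items()}
    flagged = []
    for a, b in combinations(sorted(items), 2):
        s = jaccard(toks[a], toks[b])
        if s >= threshold:
            flagged.append((a, b, round(s, 2)))
    return flagged

# Toy pool: the first two stems are near-duplicates and should be flagged.
pool = {
    "ITEM01": "Which vitamin deficiency causes scurvy in adult patients?",
    "ITEM02": "A deficiency of which vitamin causes scurvy in adult patients?",
    "ITEM03": "What is the capital city of Australia?",
}
print(flag_enemy_candidates(pool))
```

The threshold trades recall for reviewer workload: lowering it surfaces more candidate pairs for SMEs, at the cost of more false positives.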
Facilitators: Kirk Becker, Pearson VUE; Timothy Muckle, National Board of Certification and Recertification of Nurse Anesthetists (NBCRNA); Joshua Goodman, National Commission on Certification of Physician Assistants
Selecting and implementing a test delivery model is an important decision that requires much consideration and careful planning for any large-scale testing program. Various factors come into play in the decision-making process including, but not limited to: (a) frequency of test offerings, (b) flexibility in test scheduling, (c) test security, (d) size of item bank, (e) content exposure control, (f) pilot testing plan, (g) psychometric models, (h) test form assembly, (i) global or domestic test delivery, (j) choice of a vendor for test delivery, etc. In this “Peas in a Pod” session, we will share our organization’s experiences in using various computerized test delivery models and invite participants to share their insights and experiences.
Facilitators: Fang Tian, Medical Council of Canada; Maxim Morin, Medical Council of Canada; Andre de Champlain, Medical Council of Canada; Andrea Gotzmann, Medical Council of Canada
The skills gap is a persistent problem domestically and internationally. An associated dynamic is skill obsolescence. The World Economic Forum (2015) indicated that between 40% and 50% of today’s jobs will be obsolete within a decade. This dynamic has powerful implications for both individuals and employers. Anders Ericsson and others have indicated that successful skill development is dependent upon the base on which the skills are stacked. The greater the foundation (suggesting the criticality of foundational skills), the more readily new skills are learned.
We present a foundational employability skills credential system, ACT WorkKeys, and our experience in communicating the value of a skills credential (earned via assessment) to employers, individuals, and workforce development communities in the U.S. We will then introduce a use case in India, discussing the relevance of similar issues in an international context.
Facilitators: Helen Palmer, ACT; Bill Raudulovich, BellCurve Labs
As measurement science continues to evolve, a wide variety of assessment models and formats are emerging, especially within the context of recertification. These new and innovative types of recertification assessments deviate from more traditional high stakes assessments in many critical ways (e.g., purpose, stakes, frequency, testing environment, availability of resources), but they still require the setting of performance standards. And these performance standards have important implications for the certifying organizations, their certificants, and the public. In this session, we will outline several new recertification assessment models, discuss the unique standard setting challenges presented by each model, and describe the ways in which those challenges were addressed.
Facilitators: Robert Furter, American Board of Pediatrics; Brett Foley, Alpine Testing; Josh Goodman, National Commission for Certification of Physician Assistants