Action Agenda

Development of Tests and Test Items

NTA will develop tests to improve the quality of education by administering research-based assessments that are valid, reliable, efficient, transparent, fair and of international standard. The best subject-matter experts, psychometricians, and IT delivery and security professionals will ensure that the gaps in existing assessment systems are properly identified and bridged.
NTA will involve professionals, including test specialists, test reviewers, editors, teachers and specialists in the subject or skill being tested, in developing test items. All questions (or "items") will therefore be put through multiple, rigorous reviews so that they meet the highest standards of quality and fairness in testing.

The tests developed by NTA will have the following special features:


The tests will be jointly developed by subject-matter experts and psychometricians. Response data for every question of every test will be statistically analysed and subjected to psychometric analysis. These steps are critical to ensuring that test scores are well distributed, so that the present situation of a very low mean and a skewed score distribution does not recur. Students will also be provided with simulated mock tests so that they have an idea of how to prepare for the computer-based test (CBT).
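The statistical analysis of question data mentioned above can be illustrated with a minimal sketch of classical item analysis: for each item, compute its difficulty (the proportion of test takers answering correctly) and its discrimination (a point-biserial correlation with the rest-of-test score). The function name and the sample data are illustrative assumptions, not NTA's actual procedure.

```python
# Illustrative classical item analysis: difficulty and discrimination
# for each item, given a matrix of 0/1 scored responses.
from statistics import mean, pstdev

def item_statistics(responses):
    """responses: list of per-student lists of 0/1 item scores."""
    n_items = len(responses[0])
    totals = [sum(student) for student in responses]
    stats = []
    for i in range(n_items):
        item = [student[i] for student in responses]
        difficulty = mean(item)  # p-value: proportion answering correctly
        # Rest score: total excluding this item, to avoid self-correlation.
        rest = [t - x for t, x in zip(totals, item)]
        m1 = mean(r for r, x in zip(rest, item) if x == 1) if any(item) else 0.0
        m0 = mean(r for r, x in zip(rest, item) if x == 0) if not all(item) else 0.0
        sd = pstdev(rest)
        p = difficulty
        # Point-biserial correlation between item score and rest score.
        disc = (m1 - m0) * ((p * (1 - p)) ** 0.5) / sd if sd > 0 else 0.0
        stats.append({"difficulty": round(difficulty, 2),
                      "discrimination": round(disc, 3)})
    return stats
```

In such an analysis, items with very low difficulty values drag the test mean down, and items with near-zero discrimination fail to separate stronger from weaker candidates.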


Furthermore, as a next step, NTA will create a large, properly indexed question bank for every subject. Question papers will be prepared by selecting items at random from these question banks, guided by a scientifically designed blueprint and paper-assembly algorithm. The question banks will be created using modern Artificial Intelligence technology and computer software. The main objective of this approach is to generate a large number of tests in a short period of time, so that NTA-delivered tests can be held multiple times in a single calendar year without compromising the rigour of item development or the sanctity of the test.
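Blueprint-driven random selection from a question bank can be sketched as follows. The tags, the blueprint shape and the function name are assumptions for illustration; NTA's actual algorithm is not specified here.

```python
# Illustrative paper assembly: draw the required number of items for each
# (topic, difficulty) cell of a blueprint, at random, from an indexed bank.
import random

def assemble_paper(bank, blueprint, seed=None):
    """bank: list of item dicts tagged with 'topic' and 'difficulty'.
    blueprint: dict mapping (topic, difficulty) -> items required."""
    rng = random.Random(seed)  # seedable for reproducible forms
    paper = []
    for (topic, difficulty), count in blueprint.items():
        pool = [q for q in bank
                if q["topic"] == topic and q["difficulty"] == difficulty]
        if len(pool) < count:
            raise ValueError(f"bank too small for {topic}/{difficulty}")
        paper.extend(rng.sample(pool, count))
    rng.shuffle(paper)  # mix topics within the assembled paper
    return paper
```

Because every form is drawn against the same blueprint, repeated administrations in a year can use different items while keeping the intended topic and difficulty mix constant.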
Overview of the key steps NTA will take when developing a new test:

Step 1: Determining Test Objectives to match with needs

Commissioning organisations, or directives of the government, will identify a need to measure certain skills or knowledge. Once a decision is made to develop a test to meet this need, test developers will address the following:
  • Who will take the test and for what purpose?
  • What skills and/or areas of knowledge will comprise the construct to be measured?
  • How will test takers be able to use their knowledge?
  • What kind of blueprint / table of specifications / algorithm will be needed to measure the intended construct?
  • How long should the test be?
  • How difficult should the test be?

Step 2: Test Development Committees

The questions in Step 1 are usually answered with the help of test development committees, which will typically consist of educators and/or other professionals appointed by NTA under the guidance of the stakeholder organisation. Responsibilities of these test development committees (one for each of the tests conducted by NTA) may include:
  • defining test objectives and specifications
  • helping ensure test questions are unbiased
  • determining test format (e.g., multiple-choice, essay, constructed-response, etc.)
  • considering supplemental test materials

Step 3: Item Writing Committees

For each subject area of a test, there will be an item writing committee of not fewer than 30 item writers, who will work on item writing with the help of psychometricians and statisticians. Their responsibilities will include:
  • reviewing test questions (test items) provided by NTA staff
  • rewriting, moderating and refining test questions

Step 4: Moderating and Writing Questions

Each test question, whether written by NTA staff or by item writing committees, undergoes numerous reviews and revisions to ensure that it is as clear as possible, that it has only one correct answer among the options provided, and that it conforms to the style rules used throughout the test. Marking schemes and scoring rubrics for open-ended responses (constructed-response items), such as short written answers and essays, go through similar reviews.

Step 5: Detecting and Removing Unfair Questions

To meet the Standards for Quality and Fairness guidelines, trained reviewers will critically examine and evaluate each individual test question, and the test as a whole, to ensure that language, symbols, words, phrases and content generally regarded as sexist, racist, or otherwise inappropriate or offensive to any subgroup of the test-taking population are eliminated. NTA will co-develop a rigorous checklist for this purpose.
NTA statisticians will also identify questions on which two groups of test takers who have demonstrated similar knowledge or skills perform differently, through a process called Differential Item Functioning (DIF) analysis based on data from previous years' administrations. If one group performs consistently better than another on a particular question, that question may be deemed biased or unsatisfactory.
Note: If people in different groups actually differ in their average levels of relevant knowledge or skills, a fair test question will reflect those differences.
Insights from every test conducted henceforth by NTA will be used to improve the functioning of items and tests in subsequent administrations.
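The DIF screen described above can be sketched in simplified form: test takers from two groups are matched on total score, and an item is flagged if one group consistently outperforms the other among equally able takers. Operational DIF procedures (such as Mantel-Haenszel) are more involved; the function name, threshold and data here are illustrative assumptions.

```python
# Simplified DIF screen: compare an item's proportion-correct between two
# groups within each total-score level, then average the gaps.
from collections import defaultdict

def dif_flag(item_scores, totals, groups, threshold=0.1):
    """item_scores: 0/1 per taker; totals: matching total score;
    groups: 'A' or 'B' per taker. Returns (flagged, average gap)."""
    by_level = defaultdict(lambda: {"A": [], "B": []})
    for x, t, g in zip(item_scores, totals, groups):
        by_level[t][g].append(x)
    gaps = []
    for cells in by_level.values():
        a, b = cells["A"], cells["B"]
        if a and b:  # compare only where both groups are represented
            gaps.append(sum(a) / len(a) - sum(b) / len(b))
    avg_gap = sum(gaps) / len(gaps) if gaps else 0.0
    return abs(avg_gap) > threshold, avg_gap
```

Matching on total score is what distinguishes DIF from a raw group comparison: a gap that persists among takers of equal overall ability suggests the item, not the ability, is driving the difference.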

Step 6: Assembling the Test

After the test is assembled, it is reviewed by other specialists, committee members and sometimes other outside experts. Each reviewer answers all questions independently and submits a list of correct answers to the test developers. The lists are compared with the NTA answer key to verify that the intended answer is, indeed, the correct answer. Any discrepancies are resolved before the test is published.
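The key-verification step above amounts to comparing each reviewer's independent answer list against the draft key and surfacing disagreements for resolution. A minimal sketch, with hypothetical names:

```python
# Compare independent reviewer answer lists against the draft answer key
# and report every item where at least one reviewer disagrees.
def key_discrepancies(answer_key, reviewer_lists):
    """answer_key: keyed option per item; reviewer_lists: one list per reviewer.
    Returns [(item index, keyed option, dissenting answers), ...]."""
    flagged = []
    for i, keyed in enumerate(answer_key):
        dissent = [ans[i] for ans in reviewer_lists if ans[i] != keyed]
        if dissent:
            flagged.append((i, keyed, dissent))
    return flagged
```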

Step 7: Making Sure — Even After the Test is Administered — that the Test Questions are Functioning Properly

Even after the test has been administered, statisticians and test developers will conduct a preliminary key-check analysis and review to make sure that test questions are working as intended. Before final marking takes place, each question will undergo preliminary statistical analysis, and the results will be reviewed question by question. If a problem is detected, such as a misleading answer option, corrective action, such as not marking the question, will be taken before final marking and marks reporting.
Tests will also be reviewed for comparability across test forms. Performance on one form of the test should reasonably predict performance on any other form. If comparability is high, results will be similar no matter which form a test taker completes.
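One simple comparability check implied above is to correlate the scores of a sample of takers who attempt two forms: a high correlation suggests performance on one form predicts performance on the other. Full operational equating is more elaborate; this sketch and its sample data are illustrative assumptions.

```python
# Pearson correlation between scores on two test forms for the same takers.
def pearson(xs, ys):
    """xs, ys: equal-length score lists for forms X and Y."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

A correlation near 1 indicates the two forms rank candidates almost identically, which is the practical meaning of "results will be similar no matter which form a test taker completes".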