A reporter graduates to become a writer
By Prof Dr Sohail Ansari

An editor is one who separates the wheat from the chaff and prints the chaff. ~ Adlai Stevenson
It's hard to lead a cavalry charge if you think you look funny on a horse. ~ Adlai Stevenson
Man is a social being. Life is based on interaction and communication between people; they share many things, and together they form the family and society. Man is born alone, yet lives with others in harmony. Allah, All-Mighty, says in the Qur'an: "O mankind! Lo! We have created you from a male and a female, and have made you nations and tribes that ye may know one another. Lo! the noblest of you, in the sight of Allah, is the best in conduct. Lo! Allah is Knower, Aware." (49:13)
A writer needs judgment
A reporter must know what things exist; he graduates to become a writer when he knows what they mean.

True scholarship consists in knowing not what things exist, but what they mean; it is not memory but judgment. James Russell Lowell



Concurrent validity is a measure of how well a particular test correlates with a previously validated measure. It is commonly used in social science, psychology and education.
The tests are for the same, or very closely related, constructs and allow a researcher to validate new methods against a tried and tested stalwart.
IQ tests, emotional quotient (EQ) measures, and most school grading systems are good examples of established tests regarded as having high validity. One common way of looking at concurrent validity is as measuring a new test or procedure against a gold-standard benchmark.

Concurrent Validity - A Question of Timing

As the name suggests, concurrent validity relies upon tests that took place at the same time. Ideally, this means testing the subjects at exactly the same moment, but some approximation is acceptable.
For example, testing a group of students for intelligence, with an IQ test, and then performing the new intelligence test a couple of days later would be perfectly acceptable.
If the test takes place a considerable amount of time after the initial test, then it is regarded as predictive validity. Both concurrent and predictive validity are subdivisions of criterion validity and the timescale is the only real difference.
Instructor: Yolanda Williams
Yolanda has taught college psychology and ethics, and holds a doctorate in counselor education and supervision.
This lesson covers concurrent validity and illustrates the difference between concurrent and predictive validity.

What Is Concurrent Validity?

Concurrent validity is a concept commonly used in psychology, education, and social science. It refers to the extent to which the results of a particular test, or measurement, correspond to those of a previously established measurement for the same construct. So, how does this work?

Examples of Concurrent Validity

Imagine that you are a psychologist developing a new psychological test designed to measure depression, called the Rice Depression Scale. Once your test is fully developed, you decide that you want to make sure that it is valid; in other words, you want to make sure that the test accurately measures what it is supposed to measure. One way to do this is to look for other tests that have already been found to be valid measures of your construct, administer both tests, and compare the results of the tests to each other.
Since the construct, or psychological concept, that you want to measure is depression, you search for psychological tests that measure depression. In your search, you come across the Beck Depression Inventory, which researchers have determined through several studies is a valid measure of depression. You recruit a sample of individuals to take both the Rice Depression Scale and the Beck Depression Inventory at the same time.
You analyze the results and find the scores on the Rice Depression Scale have a high positive correlation with the scores on the Beck Depression Inventory. That is, the higher the individual scores on the Rice Depression Scale, the higher their score on the Beck Depression Inventory. Likewise, the lower the score on the Rice Depression Scale, the lower the score on the Beck Depression Inventory. You conclude that the scores on the Rice Depression Scale correspond to the scores on the Beck Depression Inventory. You have just established concurrent validity.
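The analysis described above boils down to computing a correlation between the two sets of scores. The sketch below is a minimal illustration of that step, assuming invented scores for the hypothetical Rice Depression Scale and a small made-up sample; SciPy's Pearson correlation is used, but any standard statistics package would do.

```python
# Minimal sketch of checking concurrent validity with made-up data.
# The Rice Depression Scale is the hypothetical new test from the lesson;
# the Beck Depression Inventory plays the role of the established benchmark.
from scipy.stats import pearsonr

rice_scores = [12, 25, 33, 8, 41, 19, 28, 15, 37, 22]   # hypothetical new test
beck_scores = [10, 27, 35, 9, 44, 17, 30, 14, 39, 24]   # established benchmark

r, p_value = pearsonr(rice_scores, beck_scores)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")

# A high positive r with a small p-value, from tests administered to the same
# people at the same time, is the kind of result taken as evidence of
# concurrent validity.
```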

An Example of Concurrent Validity

Researchers give a group of students a new test, designed to measure mathematical aptitude.
They then compare this with the test scores already held by the school, a recognized and reliable judge of mathematical ability.
Cross referencing the scores for each student allows the researchers to check if there is a correlation, evaluate the accuracy of their test, and decide whether it measures what it is supposed to. The key element is that the two methods were compared at about the same time.
If the researchers had measured the mathematical aptitude, implemented a new educational program, and then retested the students after six months, this would be predictive validity.
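To make the timing distinction concrete, here is a small sketch with invented numbers: the correlation computation is identical in both cases, and it counts as concurrent validity when the criterion already exists (the school's records) and as predictive validity when the criterion is collected later (the retest after six months).

```python
import numpy as np

new_test        = np.array([55, 72, 64, 80, 43, 90, 67, 58])  # new aptitude test
school_scores   = np.array([52, 75, 60, 82, 40, 88, 70, 61])  # existing records, same time
retest_6_months = np.array([58, 78, 66, 85, 45, 93, 71, 60])  # collected six months later

# Only the timing of the criterion differs; the arithmetic is the same.
concurrent_r = np.corrcoef(new_test, school_scores)[0, 1]
predictive_r = np.corrcoef(new_test, retest_6_months)[0, 1]

print(f"concurrent validity evidence: r = {concurrent_r:.2f}")
print(f"predictive validity evidence: r = {predictive_r:.2f}")
```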

The Weaknesses of Concurrent Validity

Concurrent validity is regarded as a fairly weak type of validity and is rarely accepted on its own. The problem is that the benchmark test may have some inaccuracies and, if the new test shows a correlation, it merely shows that the new test contains the same problems.
For example, IQ tests are often criticized because they are frequently used beyond the scope of their original intention and are not the strongest indicator of all-round intelligence. Any new intelligence test that showed strong concurrent validity with IQ tests would, presumably, contain the same inherent weaknesses.
Despite this weakness, concurrent validity is a stalwart of education and employment testing, where it can be a good guide for new testing procedures. Ideally, researchers initially test concurrent validity and then follow up with a predictive-validity experiment, to give a strong foundation to their findings.
Criterion validity assesses whether a test reflects a certain set of abilities.
To measure the criterion validity of a test, researchers must calibrate it against a known standard or against itself.
Comparing the test with an established measure is known as concurrent validity; testing it over a period of time is known as predictive validity.
It is not necessary to use both of these methods, and one is regarded as sufficient if the experimental design is strong.
One of the simplest ways to assess criterion-related validity is to compare the test to a known standard.
A new intelligence test, for example, could be statistically analyzed against a standard IQ test; if there is a high correlation between the two data sets, then the criterion validity is high. This is a good example of concurrent validity, but this type of analysis can be much more subtle.

 

An Example of Criterion Validity in Action

A poll company devises a test that they believe locates people on the political scale, based upon a set of questions that establishes whether people are left wing or right wing.
With this test, they hope to predict how people are likely to vote. To assess the criterion validity of the test, they do a pilot study, selecting only members of left wing and right wing political parties.
If the test has high concurrent validity, the members of the leftist party should receive scores that reflect their left leaning ideology. Likewise, members of the right wing party should receive scores indicating that they lie to the right.
If this does not happen, then the test is flawed and needs a redesign. If it does work, then the researchers can assume that their test has a firm basis, and the criterion related validity is high.
Most pollsters would not leave it there; in a few months, when the votes from the election were counted, they would ask the subjects how they actually voted.
This predictive validity allows them to double check their test, with a high correlation again indicating that they have developed a solid test of political ideology.
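A hedged sketch of how such a pilot could be analyzed: compare the ideology scores of the two known groups (a simple t-test, or just the group means), and later correlate the scores with the actual votes. The scores, group labels, and vote outcomes below are invented for illustration.

```python
from scipy.stats import ttest_ind, pointbiserialr

# Hypothetical ideology scores: negative = left-leaning, positive = right-leaning.
left_party_scores  = [-3.1, -2.4, -1.8, -2.9, -3.5, -2.2]
right_party_scores = [ 2.7,  3.3,  1.9,  2.5,  3.0,  2.1]

# Pilot check (concurrent, known-groups): do the two parties separate as expected?
t_stat, p_value = ttest_ind(left_party_scores, right_party_scores)
print(f"group separation: t = {t_stat:.2f}, p = {p_value:.4f}")

# Predictive follow-up, months later: correlate scores with the actual vote
# (0 = voted left, 1 = voted right). A point-biserial correlation handles a
# continuous score against a binary outcome.
all_scores = left_party_scores + right_party_scores
votes      = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
r, p = pointbiserialr(votes, all_scores)
print(f"score vs actual vote: r = {r:.2f}, p = {p:.4f}")
```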
Criterion Validity in Real Life - The Million Dollar Question
This political test is a fairly simple linear relationship, and the criterion validity is easy to judge. For complex constructs, with many inter-related elements, evaluating the criterion related validity can be a much more difficult process.
Insurance companies have to measure a construct called 'overall health,' made up of lifestyle factors, socio-economic background, age, genetic predispositions and a whole range of other factors.
Maintaining high criterion related validity is difficult, with all of these factors, but getting it wrong can bankrupt the business.

Coca-Cola - The Cost of Neglecting Criterion Validity

For market researchers, criterion validity is crucial, and can make or break a product. One famous example is when Coca-Cola decided to change the flavor of their trademark drink.
Diligently, they researched whether people liked the new flavor, performing taste tests and giving out questionnaires. People loved the new flavor, so Coca-Cola rushed New Coke into production, where it was a titanic flop.
The mistake that Coke made was that they forgot about criterion validity, and omitted one important question from the survey.
People were not asked if they preferred the new flavor to the old, a failure to establish concurrent validity.
The Old Coke, known to be popular, was the perfect benchmark, but it was never used. A simple blind taste test, asking people which flavor they preferred out of the two, would have saved Coca-Cola millions of dollars.
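A blind paired-preference check of that kind is straightforward to analyze. The sketch below assumes invented counts of tasters preferring each flavor and uses a simple binomial (sign) test against the 50/50 split expected if there were no real preference; it relies on scipy.stats.binomtest, available in recent versions of SciPy.

```python
from scipy.stats import binomtest

# Hypothetical blind taste test: each taster tries both flavors and states a preference.
n_tasters  = 200
prefer_new = 87          # invented figure; the rest preferred the old flavor

# Under "no preference" we would expect roughly half to pick the new flavor.
result = binomtest(prefer_new, n=n_tasters, p=0.5)
print(f"share preferring new flavor: {prefer_new / n_tasters:.0%}")
print(f"p-value vs a 50/50 split: {result.pvalue:.4f}")

# If the new flavor does not clearly beat the old benchmark here, the survey
# supplies exactly the concurrent-validity evidence the New Coke research lacked.
```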

Ultimately, the predictive validity was also poor, because their good results did not correlate with the poor sales. By then, it was too late!
