A Psychometric Instrument Better than Myers Briggs.

 
 
ebrown p
 
  0  
Reply Tue 1 Jun, 2010 01:39 pm
@DrewDad,
Quote:
It's ironic that you still don't understand "begging the question". You're still simply assuming the validity of your argument.


Garbage DrewDad. It is you who don't understand "begging the question".

These claims about the effectiveness of the Myers-Briggs assessment for helping people with career counseling should be tested, with an objective scientific test, before they are assumed to be true.

Before they sell their product as an effective way to give people career advice-- they should make sure there is objective scientific evidence that it is any better than picking random assessments out of a hat. Neither you nor anyone else has provided this evidence.

There is no circular argument there. In fact, I get just as upset with the people selling homeopathy, or copper bracelets or magnetic beds. I don't get upset by products that can support their claims of effectiveness with objective scientific testing.

Let me offer an important difference between my position and DrewDad's position.

My mind can be changed. If you offer me an independent scientifically valid study that showed that career counseling or selection using the MBTI had a significant effect compared to a control group, I would say "hmmmm... well it seems like they have something". In fact, I would love it if DrewDad (or anyone else) could offer this type of surprise.

DrewDad's position on the topic is based on assumptions. He has not suggested he may be wrong, nor does he open the possibility that anyone or anything could change his mind on the subject.






DrewDad
 
  1  
Reply Tue 1 Jun, 2010 01:59 pm
@ebrown p,
Sounds to me like your reading comprehension is below average.

I've said several times that I have reservations about using the MBTI for hiring, and that I have no experience or data to suggest that it is useful in career counseling.
ebrown p
 
  1  
Reply Tue 1 Jun, 2010 02:26 pm
@DrewDad,
Quote:
I've said several times that I have reservations about using the MBTI for hiring, and that I have no experience or data to suggest that it is useful in career counseling.


We agree on this.

What data do you have that it is more effective than random classifications picked out of a hat for either facilitating communication, or for "predicting human behavior"?

((If there is any other practical use you have for the thing that I missed, please include it.))
DrewDad
 
  1  
Reply Tue 1 Jun, 2010 02:32 pm
@ebrown p,
ebrown p wrote:
What data do you have that it is more effective than random classifications picked out of a hat for either facilitating communication, or for "predicting human behavior"?

I see no need to do your research for you. You're the one making the claim that it is not more effective than random classifications.

I suggest that you get busy on proving your claim.
ebrown p
 
  1  
Reply Tue 1 Jun, 2010 02:48 pm
@DrewDad,
Quote:
You're the one making the claim that it is not more effective than random classifications.


The claims you are making about Myers-Briggs (including the claim that it can "predict human behavior") have no objective scientific support. Yet you are asserting them as fact.

I am asserting as fact that in all of these posts in two different threads you are still unable to provide any objective scientific evidence that they are more effective, at any of the specific tasks that you have suggested, than random classifications. Sure, it is possible that some future study will provide this scientific evidence... and I will be fine with this. When this happens I will gladly drop my objections and agree with you. The problem is this product is being strongly marketed, based solely on untested assertions, right now (before there is any evidence).

In an ideal world, the people selling a product would have the responsibility to show their product is effective for its stated purpose. (I suppose in an ideal world people buying a product would also make sure a product was effective before buying).


DrewDad
 
  2  
Reply Tue 1 Jun, 2010 03:20 pm
@ebrown p,
As I've stated before, I'm just on this thread for the entertainment value of watching you swing in the wind.

You've made a fool of yourself in this "scientific inquiry" of yours, with false claims, logical fallacies, gross oversimplifications, repeating your logical fallacy, and proposing unethical conduct.

Why not kick the family dog while you're at it?
firefly
 
  3  
Reply Tue 1 Jun, 2010 04:13 pm
@ebrown p,
Quote:
You take 3,000 people or so and you randomly divide them into three groups. You give the first two groups the MBTI assessment. One group you give the "true" results, the second group you give results picked randomly out of a hat. The third group doesn't take the MBTI assessment or receive any results.

Then based on these results, you give the first two groups career counseling based on their MBTI results (whether real or random). Of course neither the subjects nor the people giving the career counseling will know whether the MBTI results they are working with are real or random. The third group receives career counseling without the MBTI.

Then you go back to the participants in 1, 5 and 10 years to see if they are happy and successful (again using evaluators who don't know whether the results used in the counseling were real or not).


Again, you are oversimplifying the methodology and the difficulty of evaluating what you propose to measure.

You are assuming that the career counseling will be uniform for all subjects that receive it. If it is not uniform, the counseling itself becomes a variable in addition to the MBTI test results.

You are assuming that people actually make career choices based on either MBTI results or career counseling. This may or may not be true. People make career choices for all sorts of reasons, and they change their choices for all sorts of reasons. How would you evaluate the extent to which the test scores (or the career counseling) influenced their career choice? This is an additional variable.

You would have to ensure that all 1000 people in each group represented not only a random sample from the general population, but also a representative sample of the total general population. Where would you find such a population to participate in your study? Why would they want to participate?

You would have to ensure that the 1000 people in each group differed in no significant way (age, income level, level of education, ethnic background, etc.) from the 1000 people in the other two groups, other than on the single factor of whether they received accurate MBTI results or not. And, if you give these groups career counseling, you are adding still another variable. You could not ensure that the 3000 people differed only in whether they received accurate MBTI test results or any test results. There are simply too many other individual differences you could not control for that might play a significant part in career success. Simply using large groups is not an adequate control when looking at complex factors like "happiness" or "success". What about the differences in personality traits that the MBTI measures? Couldn't some personality types be more successful than others, apart from whether these people were given accurate test results?

You expect to be able to track down 3,000 people a year later? Or 5 or 10 years later? How?

How are you going to define and measure "happiness"? How are you going to define and measure "success"? How will you know that either of those factors was related only to the truthfulness of MBTI test results these people received years earlier? Will nothing else have influenced these people in the interim besides those MBTI test scores?

The study you glibly propose is not methodologically possible. Frankly, it is absurd because it ignores the complexity of what you would be trying to measure.



ebrown p
 
  1  
Reply Tue 1 Jun, 2010 04:44 pm
@firefly,
I am not oversimplifying that much. Studies like these are done, and repeated, in many fields.

Quote:
You are assuming that the career counseling will be uniform for all subjects that receive it. If it is not uniform, the counseling itself becomes a variable in addition to the MBTI test results.


This is simple to address. You randomly assign career counselors-- and make sure the career counselors (and the people assigning them) don't know who is in which group. If there is a career counselor who is particularly good, or particularly rotten, it is almost certain that similar numbers of each large group will get them (the odds that the randomly assigned groups will coincidentally be sorted into different counseling experiences are extremely low). The effect of these differences will cancel out over a large group.

There is a great deal of variability when you compare the counseling one person receives with the counseling another person receives.

But when you compare the counseling that one group of 1000 people receives with the counseling another group of 1000 people receives, there is very little variability when the groups are randomly assigned.

Quote:
You are assuming that people actually make career choices based on either MBTI results or career counseling. This may or may not be true. People make career choices for all sorts of reasons, and they change their choices for all sorts of reasons. How would you evaluate the extent to which the test scores (or the career counseling) influenced their career choice? This is an additional variable.


Again we are not comparing individuals. We are comparing groups of 1000 people that were randomly assigned in a blind test (meaning no one knows who is in what group).

If an individual does much better than another individual, that isn't statistically significant.

If a large group does better than another large group, and the only difference between the randomly selected groups is that one got real results and the other got random results... there is only one significant variable.
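The randomization logic ebrown p is describing can be sketched in a few lines of Python. This is a hypothetical simulation (the "hidden aptitude" score stands in for any uncontrolled individual difference; none of the numbers come from a real study): even when individuals vary wildly, random assignment splits that variation evenly between two groups of 1000.

```python
import random
import statistics

random.seed(42)  # fixed seed for a reproducible illustration

# A hypothetical "hidden aptitude" score for 2,000 participants --
# a stand-in for all the individual differences we can't control.
population = [random.gauss(50, 10) for _ in range(2000)]

# Random assignment splits that hidden variable evenly between groups.
random.shuffle(population)
group_a, group_b = population[:1000], population[1000:]

diff = abs(statistics.mean(group_a) - statistics.mean(group_b))
print(f"difference in group means: {diff:.2f}")  # typically well under 1
```

Individual scores range over tens of points, but the two group means land within a fraction of a point of each other. That is the sense in which, after randomization, the treatment is "the only significant variable" between the groups.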

Quote:

You would have to insure that all 1000 people in each group represented a random sample from the general population, but also a representative sample of the total general population. Where would you find such a population to participate in your study? Why would they want to participate?


This is the most difficult part in these studies, and a valid concern. There are scientifically accepted ways to get a representative sample.

This doesn't invalidate the results of the experiment. Real scientific studies explain the population in the study carefully as they describe the research methods.

Quote:
You would have to insure that the 1000 people in each group differed in no significant way (age, income level, level of education, ethnic background, etc.) from the 1000 people in the other two groups, other than on the single factor of whether they received accurate MBTI results or not. And, if you give these groups career counseling, you are adding still another variable. You could not insure that the 3000 people differed only in whether they received accurate MBTI test results or any test results. There are simply too many other individual differences you could not control for.


This is not that difficult. Large groups that are randomly selected are a perfectly reasonable control for many variables-- including unknown variables. The random selection is important, but this is a well-understood statistical problem.

Quote:

You expect to be able to track down 3.000 people a year later? Or 5 or 10 years later? How?


It takes planning. The participants understand that they will be contacted. You anticipate some attrition and you account for it in any papers you write.

Quote:
How are you going to define and measure "happiness"? How are you going to define and measure "success"? How will you know that either of those factors was related only to the truthfulness of MBTI test results these people received years earlier? Will nothing else have influenced these people in the interim besides those MBTI test scores?


This is not difficult. The purpose of the experiment is to test the claims about the "effectiveness" of the MBTI instrument. So you define what "effectiveness" means and design your study around this.

Quote:

The study you glibly propose is not methodologically possible. Frankly, it is absurd because it ignores the complexity of what you would be trying to measure.


Studies like this are done all the time. The complexities you cite are the same complexities in all social science studies. You resolve them with statistics... comparing large random groups eliminates the variability that you would have if you compared individuals.

And, what is the alternative?

DrewDad wants to make bold claims that are backed by untested assumptions. I find this quite problematic.


DrewDad
 
  2  
Reply Tue 1 Jun, 2010 04:59 pm
@ebrown p,
ebrown p wrote:
DrewDad wants to make bold claims that are backed by untested assumptions.

"Don't use it if you don't want to."

Bold claims, indeed.
0 Replies
 
ebrown p
 
  1  
Reply Tue 1 Jun, 2010 05:52 pm
@DrewDad,
Geez DrewDad, you seem to really take this stuff personally. The tag spam is a bit excessive don't you think? (especially considering that you obviously asked for help since it would take multiple votes to get "ESL" in front of the original "psychology" tag.)

The amount of time and effort you are spending on this is a bit flattering (if not sad).
firefly
 
  3  
Reply Tue 1 Jun, 2010 07:09 pm
@ebrown p,
Quote:
Studies like this are done all the time.


Not in psychology they aren't.

Studying human behavior is considerably different than testing the effectiveness of a drug vs a placebo (something a double-blind study might be appropriate for). That's one reason it is difficult to construct valid and reliable tests of personality.

Just the basic idea of taking 3000 people (a sample you just can't get) and giving 1000 of them accurate test results, 1000 of them inaccurate results, and 1000 of them no test results, and then looking at them 5 years later to see if they are "happy" or "successful" in their careers, and then attributing any differences between these groups solely to the test results they were or weren't given, is downright absurd. How happy or successful those people reported they were 5 years later would likely have been determined by a multitude of factors, none of which would likely be related to those test results. How would you eliminate, or control for, all those other factors, besides those test scores, that would have affected happiness or success?

You really don't understand scientific methodology as it is used in psychology. When you study the influence of one variable (i.e. test results) on behavior (i.e. career choice), you must control for the influence of all other variables on that same behavior, and you cannot magically do that with "statistics"--you must control those other variables by eliminating them in some way, or by holding them constant for all subjects. Reality dictates that all those other variables cannot be controlled in the sort of study you propose. You are dealing in fantasy, not reality, and you are not fully acquainted with scientific methodology in this area of study.

Do you realize it would be unethical to give subjects inaccurate test scores, about their own personality traits, and then further compound that unethical practice by giving them fraudulent career counseling?

And then you want to measure whether these subjects are happy or successful, years later, based on the false information you have given them?

You cannot deliberately mislead research subjects, in psychology, in any way which can potentially harm them--that is unethical. Giving someone false information about their own personality traits, based on false test scores, would definitely be construed as harmful.
At some point, probably before that person even left the room after being given those false test results, the person conducting this "experiment" would have to tell the subject the truth--that the results were not accurate. You would need to de-brief the subject and make sure that they were not unduly distressed by the false information they were given.

Test construction is a challenging enterprise in psychology. Test validity and reliability are accomplished in various ways, but the use of the "experimental method" really would not be among them.

What is the big deal if people want to take or use the Myers-Briggs test? No one claims the test is 100% accurate, or even 60% accurate. It is not intended to predict anything. The test can yield some information about an individual's personality traits. People might find such information useful. If some people didn't find this test useful, in some way, why is there even a market for it?

I honestly don't understand why you are so concerned about this particular test. No one is forcing you to either buy it or take it.





ebrown p
 
  1  
Reply Tue 1 Jun, 2010 08:18 pm
@firefly,
Quote:

You really don't understand scientific methodology as it used in psychology. When you study the influence of one variable (i.e. test results) on behavior (i.e. career choice), you must control for the influence of all other variables on that same behavior, and you cannot magically do that with "statistics"--you must control those other variables by eliminating them in some way, or by holding them constant for all subjects.


I can kind of accept that there is a problem with ethics. I would personally be willing to participate in a study where I understood I might be given random results-- and this seems similar to drug trials where, for example, half of the participants in an AIDS treatment trial were given a placebo instead of the drug being studied. But I understand that there may be rules about playing with advice that may impact people's life decisions. On the other hand, giving people untested advice that may impact their life decisions seems equally problematic.

However, I don't accept your explanation about variables. In drug tests, studies are run with significantly large sample sizes. These are scientifically valid even though in medical tests you can't control for the myriad variables (from genes to family news to exposure to random bacteria). Sometimes diet is controlled, but often, for long-term medical studies, it is not.

The principle is fairly simple. If you randomly divide your study sample into two or more groups (including a control group), a big enough sample will smooth out variance in all of these variables.

This principle is certainly valid in hard science as well (my education is in Physics) and is used in medical research. There is plenty of variability and complexity (including variables you can't possibly understand) in many of these cases. Randomly selected large samples reduce this variability to the point it is insignificant. I don't understand why you think that the variability and complexity is different in social research.
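The "big enough sample smooths out variance" principle ebrown p invokes is the law of large numbers, and it is easy to demonstrate with a short sketch (hypothetical numbers, standard library only): the group-to-group spread of a random group's mean shrinks roughly as 1/sqrt(n).

```python
import random
import statistics

random.seed(0)  # fixed seed for a reproducible illustration

def spread_of_group_means(n, trials=200):
    """Standard deviation of the mean of a randomly drawn group of size n."""
    means = [statistics.mean(random.gauss(0, 1) for _ in range(n))
             for _ in range(trials)]
    return statistics.pstdev(means)

# Tenfold more subjects -> roughly 1/sqrt(10) the group-to-group noise.
for n in (10, 100, 1000):
    print(f"n={n:4d}  spread of group mean ~ {spread_of_group_means(n):.3f}")
```

With groups of 1000, the mean of any uncontrolled variable barely moves from one random group to the next, which is why a difference in group outcomes can be attributed to the one factor that was deliberately varied.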



0 Replies
 
ebrown p
 
  0  
Reply Tue 1 Jun, 2010 08:21 pm
@firefly,
Quote:

I honestly don't understand why you are so concerned about this particular test. No one is forcing you to either buy it or take it.


The principle bothers me-- that assertions about the effectiveness of this product for specific uses are stated as fact when in truth they are untested (and really the issue of whether they are testable is irrelevant).
0 Replies
 
DrewDad
 
  2  
Reply Tue 1 Jun, 2010 09:34 pm
@ebrown p,
ebrown p wrote:
Geez DrewDad, you seem to really take this stuff personally. The tag spam is a bit excessive don't you think? (especially considering that you obviously asked for help since it would take multiple votes to get "ESL" in front of the original "psychology" tag.)

You do love to jump to conclusions, don't you?

(hint: I may be an arrogant asshole, but I'm neither vindictive nor racist. A little reflection on the fact that I started the "May I see your papers, citizen" thread might've tipped you off.)

You don't really deserve an explanation, but I haven't changed any of my tags since the first page of the discussion.

* My Tags:
* [X] Logical Fallacy,
* [X] Logic Fail,
* [X] Begging The Question

I'm not going to hold my breath waiting for an apology, since you seem incapable of admitting error.
ebrown p
 
  1  
Reply Tue 1 Jun, 2010 09:48 pm
@DrewDad,
I admit my error, and I am deeply, deeply sorry.
DrewDad
 
  1  
Reply Tue 1 Jun, 2010 09:56 pm
@ebrown p,
No worries.

FWIW, I regret my comment about kicking the family dog.
0 Replies
 
fishgradguy
 
  1  
Reply Sat 5 Jun, 2010 04:48 pm
All this jazz about complex studies testing its effect on careers is hilarious. Before you do that, you need to prove the test even works. One facet of that is that the interpretations may be self-affirming, in much the same way as a horoscope (or people answer the yes/no questions according to what is most socially acceptable... something that is controlled for in other, more reputable personality indices). This is one major issue with the test; there are plenty of others. Here is how you can do it:

Give 500 people the same test. Score the test, but don't return their scores and interpretations. Instead, anonymize the scores, randomly shuffle them and give one to each participant. Then, after they have read it, give them a questionnaire asking whether they think the result correctly identified their personality type (best to use a 1-5 scale probably).

I would bet my left leg that the majority of people would think the score and interpretation fit their personality, despite the fact that the chance it actually does is worse than a coin flip (the probability of a random match is 1/16, since there are 16 possible 4-letter types and only one is correct)
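fishgradguy's shuffled-results design is easy to simulate. The sketch below is hypothetical and assumes, for simplicity, that true types are uniformly distributed (the real population is skewed, as noted later in the thread):

```python
import itertools
import random

random.seed(1)  # fixed seed for a reproducible illustration

# The 16 possible 4-letter MBTI types.
types = ["".join(t) for t in itertools.product("EI", "SN", "TF", "JP")]

# 500 hypothetical participants: each has a "true" type, and each is
# handed someone else's result after an anonymized random shuffle.
n = 500
true_types = [random.choice(types) for _ in range(n)]
handed_out = true_types[:]
random.shuffle(handed_out)

matches = sum(t == h for t, h in zip(true_types, handed_out))
print(f"exact matches: {matches} of {n}")  # expect around 500/16 ~ 31
```

So only about 6% of participants should receive a result that actually matches their type. If far more than that rate the shuffled result as a good fit on the questionnaire, the perceived fit is coming from self-affirmation, not accuracy.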

This is the same experimental design used to disprove astrology in a peer-reviewed journal article. In this case the scientists were actually helped in designing the experiment by the astrologers, who wanted an objective test of their "powers" to determine personality based on 'natal charts' of the planets' orientations. Boy did they get P'wned.

Here is the link to the Nature article:

http://www.nature.com/nature/journal/v318/n6045/pdf/318419a0.pdf
firefly
 
  4  
Reply Sat 5 Jun, 2010 07:57 pm
@fishgradguy,
Quote:
All this jazz about complex studies testing it's effect on careers is hilarious. Before you do that, you need to prove the test even works


A test "works" if it measures what it purports to measure--if it has validity. The MBTI does have some statistical reliability and validity--it is not totally lacking in either.

All this arguing about the Myers-Briggs Type Indicator is just nonsense. The questionnaire has been around since about 1942. It was developed as a self-help tool to enable people to understand themselves better, based on an understanding of their personality type, and preferences, according to Jungian personality theory. It was meant to indicate preferences in perceiving, interacting, etc. It was felt that a better understanding of one's personality type, and preferences, would enable people to make better decisions about the sorts of situations they would be most comfortable in.

There are no single scores, or interpretations on the MBTI. Each person would have preferences on each of the four pairs of dichotomies--lifestyle preferences and attitude preferences as well as perceiving preferences. The test sorts by type, not by traits. And individuals are not generally expected to just accept the test results--they are encouraged to form their own conclusions.

Quote:

Individuals are considered the best judge of their own type. While the MBTI questionnaire provides a Reported Type, this is considered only an indication of their probable overall Type. A Best Fit Process is usually used to allow respondents to develop their understanding of the four dichotomies, to form their own hypothesis as to their overall Type, and to compare this against the Reported Type. In more than 20% of cases, the hypothesis and the Reported Type differ in one or more dichotomies. Using the clarity of each preference, any potential for bias in the report, and often, a comparison of two or more whole Types may then help respondents determine their own Best Fit.
http://en.wikipedia.org/wiki/Myers-Briggs_Type_Indicator#Type_dynamics_and_development


So, I think it must be kept in mind that the MBTI was originally developed, by two non-psychologists, about 68 years ago, essentially as a self-help tool. This is quite different than more current personality tests developed by psychologists, who are experts in test construction, specifically for the more rigorous assessment of personality.

One does not have to be a psychologist to administer, score, or interpret the MBTI, it simply requires brief training in this particular test. The MBTI is not meant to predict anything. Its "claims" really do not go much beyond purporting to tell a person something about their Type and their preferences based on their test responses. It is up to the individual to decide whether the information is accurate or useful in any way. Many people apparently use such information when trying to make decisions about career choices, and some information about their preferences may enable them to make better choices. One can gain self knowledge in many different ways, and the MBTI is simply one way to possibly add to that knowledge.

It is ridiculous to demand large scale research studies to try to prove or disprove anything about the MBTI. What it offers to an individual is really in a class with a self-help book. The true "validity" of the MBTI lies in whether people find the results helpful, whether this knowledge helps them to understand themselves better, and to make better choices based on that knowledge. This is a very subjective matter. No one's self knowledge would rest on these test results alone, the MBTI would simply add something extra, or a different way of understanding oneself. The fact that the MBTI has been around, in one form or another, for 68 years, suggests that some people are attracted to it and find it helpful. Beyond that, there really isn't anything to "prove" about it. It is a "Type Indicator" and it can indicate personality type, if one accepts the basic Jungian premises of the test.





0 Replies
 
DrewDad
 
  1  
Reply Sat 5 Jun, 2010 08:51 pm
@fishgradguy,
fishgradguy wrote:
I would bet my left leg that the majority of people think that the score and interpretation fit their personality despite the fact that the chance that it does is worse than a coin flip (the probability of guessing correctly is 1/16, since there are 16 possible permutations of the 4 letter score and only one is correct)

But since there are four items being measured, there are four other MBTI types that share three of the four personality traits. There are six MBTI types that share two personality traits, and four that share just one. There's only one MBTI type that's diametrically opposed (shares no traits).

So if the results are randomly assigned, there's still a damned good chance that some of the traits will be the same.

Also, the 16 MBTI types are not evenly distributed. About 14% of people test as ISFJ, while only about 2% test as ENTJ.
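DrewDad's trait-overlap point can be made exact. Under the simplifying assumption that each of the four letters matches independently with probability 1/2 (again, the real type distribution is skewed, so this is only an approximation), the number of letters a randomly handed-out result shares with your true type follows a Binomial(4, 1/2) distribution:

```python
from math import comb

# P(exactly k of the 4 letters match), each matching with probability 1/2.
dist = {k: comb(4, k) / 16 for k in range(5)}
for k, p in dist.items():
    print(f"{k} shared letters: {p:.4f}")

expected = sum(k * p for k, p in dist.items())
print(f"expected shared letters: {expected}")  # 2.0
```

So a random result shares two or more of your four letters 11 times out of 16, and on average shares exactly half of them, which is why randomly assigned results can still feel substantially "right".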
Thomas
 
  1  
Reply Sat 5 Jun, 2010 09:30 pm
@DrewDad,
DrewDad wrote:
So if the results are randomly assigned, there's still a damned good chance that some of the traits will be the same.

... which is an argument against taking the test, and for flipping a coin four times instead.

And fishgradguy's statistical test wouldn't even catch what I consider the greatest obstacle to accepting the test as a scientific device: It's basically an echo. To illustrate what I mean: In the course of participating in this thread, I took one of the longer online versions of the test. It told me my type is INTP, quite pronounced in each of the four dimensions, and offered some specific characteristics that impressed me at first. For example, it said that as an INTP, I probably struggle with routine maintenance tasks such as paying my bills on time, which I do. But then I remembered that the test had already asked me whether I usually show up for appointments on time (I don't), and whether my apartment is usually a mess (it is). So basically the test asked me if I'm a slob, I answered yes, and it concluded that---drumroll---I'm a slob.

This conclusion would be correct of course, and fishgradguy's statistical test would prove similar diagnoses correct for other test-takers too. Moreover, the test would "work" in this sense across cultures, and would be reproducible from one run to the next. (Both are points that the Myers-Briggs foundation mentions as evidence for their test's credibility.) Nevertheless, we shouldn't be impressed, because that's still not science. It's a glorified variant of cold reading. In order to qualify as science, the test should be able to pull a rabbit out of the hat without asking me to put it in there first. It should be able to tell me something I hadn't told it before.
 
