Deductive reasoning screening tests discussion

Discuss applications to the clearing house (and to courses that are not in the clearing house system), screening assessments, interviews, reserve lists, places, etc. here
NoodleNew
Posts: 8
Joined: Mon Jan 06, 2020 1:13 am

Deductive reasoning screening tests discussion

Post by NoodleNew » Tue Feb 25, 2020 2:09 pm

Hi All

First-time applicant here. I'm fascinated and overwhelmed by this (incredibly long and complex) application process... let's all hang on in there

Edinburgh (test + questions done)
Glasgow

Lancaster (tests done) Absolutely no way the deductive test went well! - Realised too late that the main thing seemingly being tested here was how to take tests – the key was to get an answer WRONG once you were doing well… Instead, I tried to answer the questions, which, when you're up against a clock and hard questions clearly take longer than easy ones, means you don't perform 'well' on that test (no bearing on how good you are at deductive reasoning).

Trent (invited to written test day)

RJParker
Posts: 256
Joined: Thu Feb 13, 2014 3:44 pm

Re: 2020 Clinical Doctorate Application Progress Thread

Post by RJParker » Tue Feb 25, 2020 2:22 pm

NoodleNew wrote:
Tue Feb 25, 2020 2:09 pm
Lancaster (tests done) Absolutely no way the deductive test went well! - Realised too late that the main thing seemingly being tested here was how to take tests – the key was to get an answer WRONG once you were doing well… Instead, I tried to answer the questions, which, when you're up against a clock and hard questions clearly take longer than easy ones, means you don't perform 'well' on that test (no bearing on how good you are at deductive reasoning).
This is simply incorrect, I'm afraid. I would warn any potential applicants not to pay heed to this kind of suggestion - it will only harm your score.

hanjenjag
Posts: 7
Joined: Fri Aug 17, 2018 12:07 am

Re: 2020 Clinical Doctorate Application Progress Thread

Post by hanjenjag » Tue Feb 25, 2020 3:35 pm

Have you heard back from Lancaster already?

RJParker
Posts: 256
Joined: Thu Feb 13, 2014 3:44 pm

Re: 2020 Clinical Doctorate Application Progress Thread

Post by RJParker » Tue Feb 25, 2020 3:49 pm

They haven't. I've not sent anything out except to people who didn't actually finish the tests.

NoodleNew
Posts: 8
Joined: Mon Jan 06, 2020 1:13 am

Re: 2020 Clinical Doctorate Application Progress Thread

Post by NoodleNew » Tue Feb 25, 2020 4:05 pm

hanjenjag wrote:
Tue Feb 25, 2020 3:35 pm
Have you heard back from Lancaster already?
Nope, not heard back - sorry if this misled you. This is just my belief: based on how poorly I did, I am confident that I will not advance to the next stage.

RJParker
Posts: 256
Joined: Thu Feb 13, 2014 3:44 pm

Re: 2020 Clinical Doctorate Application Progress Thread

Post by RJParker » Tue Feb 25, 2020 4:14 pm

You may be correct about not having done well. However your assertion about the test is factually incorrect and has the potential to harm the performance of others.

NoodleNew
Posts: 8
Joined: Mon Jan 06, 2020 1:13 am

Re: 2020 Clinical Doctorate Application Progress Thread

Post by NoodleNew » Tue Feb 25, 2020 5:53 pm

RJParker wrote:
Tue Feb 25, 2020 4:14 pm
You may be correct about not having done well. However your assertion about the test is factually incorrect and has the potential to harm the performance of others.
I do apologise if this is how my comment came off. However, I intentionally held off posting my thoughts until well after the deadline for taking the test had passed, so as to ensure nobody acted on my comments. I was interested in hearing other forum users' thoughts on the test and what strategies they used, as the internet and SHL itself have various titbits of info on how to approach such tests.

I understand SHL has a complex algorithm for marking its tests, but my comments were derived from having spoken with people who work in the field of creating such tests. They were based on the potential issues surrounding the use of timed, adaptive and negative marking all in one test. Sorry that I did not make this clear when posting.

Spatch
Posts: 1427
Joined: Sun Mar 25, 2007 4:18 pm
Location: The other side of paradise

Re: 2020 Clinical Doctorate Application Progress Thread

Post by Spatch » Tue Feb 25, 2020 6:51 pm

Realised too late that the main thing seemingly being tested here was how to take tests – the key was to get an answer WRONG once you were doing well… Instead, I tried to answer the questions, which, when you're up against a clock and hard questions clearly take longer than easy ones, means you don't perform 'well' on that test (no bearing on how good you are at deductive reasoning).

I understand SHL has a complex algorithm for marking its tests, but my comments were derived from having spoken with people who work in the field of creating such tests. They were based on the potential issues surrounding the use of timed, adaptive and negative marking all in one test. Sorry that I did not make this clear when posting.
As someone who has taken a range of computer adaptive tests since 2000 and has some interest in Item Response Theory and psychometric development, I am with RJParker on this.

Even from a theoretical viewpoint, while easier questions are quicker to answer than harder ones, the harder ones are also going to be weighted more highly (especially towards the end of a testing session, when the differentiation happens). If you were to adopt a strategy of deliberately getting questions wrong in order to obtain easier questions earlier on, you are going to introduce a ceiling to your grade far quicker. To use a blunt analogy, getting a lot of the earlier questions correct puts you in the "A" category, with the later questions zeroing in on whether that is an A+, A or A-. Deliberately getting the earlier ones wrong would establish you in the "C" category, with the later questions discerning whether you are a C+, C or C-. For these sorts of tests to go well you really need to be getting harder items sooner, and for the difficulty to keep spiking - that suggests you are doing well and the test is still trying to find your upper threshold. (Correct me if I am wrong or if I am failing deductive reasoning myself.)
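To make the dynamic concrete, here's a toy sketch in Python. The parameters and update rule are entirely my own invention for illustration - SHL's actual scoring algorithm is proprietary and certainly more sophisticated - but the ceiling effect it demonstrates is the general point. Each item's difficulty tracks the current ability estimate, and the estimate moves by a shrinking step, so deliberately missing early items caps how far the estimate can climb back:

```python
def adaptive_estimate(true_ability, n_items=20, throw_first=0):
    """Toy adaptive test: item k's difficulty equals the current ability
    estimate, and the estimate moves up or down by a shrinking step (1/k),
    loosely mimicking how adaptive tests converge on a candidate's level."""
    theta = 0.0  # running ability estimate
    for k in range(1, n_items + 1):
        difficulty = theta
        if k <= throw_first:
            correct = False  # deliberately answer wrong
        else:
            correct = true_ability >= difficulty  # answer to the best of ability
        theta += (1.0 / k) if correct else -(1.0 / k)
    return theta

honest = adaptive_estimate(2.0)                 # answer honestly throughout
tanked = adaptive_estimate(2.0, throw_first=3)  # deliberately miss the first 3
print(f"honest: {honest:.2f}, tanked: {tanked:.2f}")
```

With these toy numbers the honest strategy's estimate settles near the true ability of 2.0, while deliberately missing the first three items leaves the final estimate far below it - the shrinking steps mean the lost ground is never recovered. The exact figures are an artefact of my made-up update rule, but they show why tanking early items is self-defeating.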

If you did want to enhance your 'test metagame', there is probably greater mileage in anxiety management, controlled breathing, working under time pressure, not being intimidated by other people, and making sure you fully understand the instructions before starting the test. Incidentally, all of these skills are very relevant to the day-to-day practice of actually being a clinical psychologist, so none of it will be wasted.
Last edited by Spatch on Tue Feb 25, 2020 7:54 pm, edited 1 time in total.
Shameless plug alert:

Irrelevant Experience: The Secret Diary of an Assistant Psychologist is available at Amazon
http://www.amazon.co.uk/Irrelevant-Expe ... 00EQFE5JW/

NoodleNew
Posts: 8
Joined: Mon Jan 06, 2020 1:13 am

Re: 2020 Clinical Doctorate Application Progress Thread

Post by NoodleNew » Tue Feb 25, 2020 7:08 pm

Hi there.
Thanks for this, very helpful indeed. Yeah, it was a weird situation: finding a question hard and having to decide whether to spend the time required to work out the answer - and thus blow your time - or put down anything just to get onto another question and assuage the stress of your brain reminding you that 'not completing the test incurs a penalty' was one of the last things you read! The anxiety is most certainly an issue here. I can totally relate to how the working-under-pressure element relates to the job and how a tough test is a good selection tool. I suppose my 'meta-concern' was about simultaneously managing working out the answers and being mindful of the strategy to use. Point to self: learn to compartmentalise!

RJParker
Posts: 256
Joined: Thu Feb 13, 2014 3:44 pm

Re: 2020 Clinical Doctorate Application Progress Thread

Post by RJParker » Wed Feb 26, 2020 8:40 am

Nice explanation Spatch, thanks.

Sorry to jump on this, but these posts don't go away - they'll still be here next year when another group is doing the tests, and I want our applicants to perform to the best of their abilities.

ell
Moderator
Posts: 2372
Joined: Mon Mar 01, 2010 12:45 pm

Re: Deductive reasoning screening tests discussion

Post by ell » Wed Feb 26, 2020 9:34 am

This is an interesting and important discussion, but I have moved it to its own thread to ensure it doesn't derail the progress thread, and so any potentially inaccurate info isn't the first thing people see when coming to the forum (as we know a lot of new users start with the progress thread).

PinkFreud19
Posts: 44
Joined: Sat May 18, 2019 3:08 pm

Re: 2020 Clinical Doctorate Application Progress Thread

Post by PinkFreud19 » Wed Feb 26, 2020 7:23 pm

Spatch wrote:
Tue Feb 25, 2020 6:51 pm
If you did want to go about enhancing your 'test metagame', there is probably greater mileage in the route of anxiety manangement...
This is precisely why I am sceptical of the use of such tests in DClin selection. State anxiety is going to be at its absolute peak during the application process, and the effects of anxiety on performance on these tests are difficult to deny. I'm sure each and every one of us has had moments when we need to re-read a sentence five times because our current emotional and/or attentional state is preventing the information from "going in". What happens when this starts occurring on, arguably, one of the biggest tests of one's career? The panic is only going to increase.

Also, mistakes compound on this test. A simple misread of the question could lead to having to restart multi-step calculations, which could delay the answer by minutes. This leaves even less time for the remaining questions, increasing the chance of more errors (and anxiety).

I agree that anxiety management and working under pressure are important for a clinical psychologist. However, I can't quite bring myself to agree that work-related pressure is anywhere equivalent to the unique pressures involved with completing shortlisting tests.

Brio52
Posts: 15
Joined: Tue Mar 08, 2016 10:36 am

Re: Deductive reasoning screening tests discussion

Post by Brio52 » Wed Feb 26, 2020 8:16 pm

I think it varies from person to person. For instance, I'd argue (and I believe there is evidence suggesting) that many interview formats are also heavily pressured situations that do a poor job of capturing whether a person is actually suitable for a position.

I think that in the situation universities are in - an overwhelming number of applications from people with no apparent shortage of the desired experience, qualifications, or aptitude for the role - there need to be methods of selecting who proceeds to the next stage. There doesn't seem to be a systemic bias in the way that universities do this, so if people feel that aptitude tests would unfairly select against them, it is possible to only apply to universities that do not use them. From what I've heard from Lancaster, they don't have concerns about the quality of the applicants they end up offering places to, and I've never heard any rumours of the "x course's trainees are no good" type. So I think that, at the end of the day, the chief complaint about the difficult selection process is that it feels pretty bad for the applicant to be discounted by something that feels unrelated to the course, rather than being an actual problem with the process itself. Just my two cents!

PinkFreud19
Posts: 44
Joined: Sat May 18, 2019 3:08 pm

Re: Deductive reasoning screening tests discussion

Post by PinkFreud19 » Wed Feb 26, 2020 9:34 pm

Brio52 wrote:
Wed Feb 26, 2020 8:16 pm
I think it varies from person to person. For instance, I'd argue (and I believe there is evidence suggesting) that many interview formats are also heavily pressured situations that do a poor job of capturing whether a person is actually suitable for a position.

I think that in the situation universities are in - an overwhelming number of applications from people with no apparent shortage of the desired experience, qualifications, or aptitude for the role - there need to be methods of selecting who proceeds to the next stage. There doesn't seem to be a systemic bias in the way that universities do this, so if people feel that aptitude tests would unfairly select against them, it is possible to only apply to universities that do not use them. From what I've heard from Lancaster, they don't have concerns about the quality of the applicants they end up offering places to, and I've never heard any rumours of the "x course's trainees are no good" type. So I think that, at the end of the day, the chief complaint about the difficult selection process is that it feels pretty bad for the applicant to be discounted by something that feels unrelated to the course, rather than being an actual problem with the process itself. Just my two cents!
I'd agree with that. I'm satisfied with their existence as long as there are plenty of universities that do not utilise selection tests, or that utilise tests of a different sort which are, perhaps, less prone to the effects I described (I had less of an issue with Surrey when I applied, because the time limit was not quite so imposing). I think it would be an awful shame if all universities used GMA tests, as many potentially excellent psychologists would be barred from the profession on the basis that they do not do particularly well in the specific context in which a GMA is administered (and I do not think difficulty with GMAs is necessarily an indicator of a person's intelligence). I acknowledge that, for all my criticisms, many people benefit from this format who would be similarly held back by the other selection systems that I prefer.

My concern is that they are, perhaps, seen as the "solution" to the diversity problem because, supposedly, Lancaster's system compensates for systemic inequalities of opportunity by attempting to circumvent prior experience. However, given that there is evidence to indicate that GMA scores are enormously moderated by practice, and that people from disadvantaged backgrounds may have had less opportunity to practise numerical and verbal skills for the same reasons that they have had less opportunity to gain relevant experience, I'm not sure GMAs would be any better. I welcome evidence that contradicts this position, but I would want to see evidence specifically demonstrating that the GMA component of selection benefits minorities and the disadvantaged, at least if we wish to use the diversity narrative as the justification for the use of GMAs.

Just my two cents too!

Spatch
Posts: 1427
Joined: Sun Mar 25, 2007 4:18 pm
Location: The other side of paradise

Re: Deductive reasoning screening tests discussion

Post by Spatch » Wed Feb 26, 2020 10:38 pm

I don't think that anyone is under the impression that GMAs are a perfect method of selecting future psychologists, and they are not pretending to be. They just seem to be a transparent, scalable and measurable way to thin a massive herd of applicants. Unlike the traditional method of selection via a panel's subjective opinion of a form (which is equally if not more problematic), at least these tests are unbiased in the sense that the same test is given under similar conditions.
I think it would be an awful shame that, if all universities used GMA tests, many potentially excellent psychologists would be barred from the profession on the basis that they do not do particularly well in the specific context that a GMA is administered
The reality is that potentially excellent psychologists are always going to be excluded under any system, due to the small number of places and the massive demand. The existing system does that, any future system will do it too, and I don't think there is any way around this. I guess for me the bigger question is: if these excluded potentially excellent psychologists were somehow to displace the actually successful cohort, would there be any major difference to the advances and day-to-day working of the profession? I have no way of knowing, but I suspect it would probably look fairly similar to the current situation.

The changes that would actually impact on diversity would probably be unpalatable to everyone, even though they might be incredibly helpful for the advancement of the profession. For example, fluency in a minimum of two languages would really bump up BME numbers and allow us to reach more client groups. Mandatory pre-training PhDs would ensure that all trainees had advanced research skills and a high degree of self-management ability. Weighting primary school postcodes would increase the likelihood of having candidates from deprived areas of the country. I can't imagine anyone wanting such a system.

In any discussion of any potential selection change I always ask the question "If the system changes to one of your choosing, but implementing this change would automatically disqualify you from the process, would you still be happy to do it?" Ultimately, how much of this conversation is about building a system that picks the best clinical psychologists, or just building a system that picks us?
