This post expands on the points made in what I believe is a must-read review for all Sport Scientists and in fact any practitioner conducting tests in sport and exercise. As scientists we try to control for confounding variables in our testing environments, such as time of day, nutrient and caffeine intake beforehand, prior training, ambient temperature, test location, and the age and gender of subjects. However, earlier this year a group from Australia published a review addressing exactly this issue (Halperin et al, 2015).
They highlighted six additional variables they believed are often ‘overlooked’ and can therefore threaten the internal validity of tests. I will consider each of these in relation to my applied sport science environment. Please feel free to add your comments on what you think of these variables, what they might mean in your own environment or any additional variables you think are overlooked.
1 Attentional Focus
The authors present findings that demonstrate drawing attention to different aspects of a physical test can affect the outcome measure. More specifically, drawing attention to an external focus in which the individual focuses on the movement in relation to the environment rather than internally to their body can improve performance (Wulf, 2013).
Let’s relate this variable to a simple sit and reach test. Do you ask your athletes to focus on lengthening their hamstrings and/or flexing at the hips (internal focus), or to focus on the lines on top of the box (external focus)? Is any instruction or particular focus even given? If your athletes carry this out themselves daily as part of a monitoring protocol, how can we know, let alone control, their attentional focus? I’m sure they would get sick of hearing the same standardised instructions every morning, as much as it would be tedious for us to repeat them to every player.
2 Knowledge of Exercise Endpoint
Halperin and colleagues present research in which providing athletes with a test/exercise endpoint affects performance, whether it is in terms of distance, time or repetitions. The authors include the study by Faulkner et al (2011) that found trained males completed a 6km self-paced running trial 6% faster when they received feedback at each kilometre compared with no feedback.
The obvious example here is a time trial, but let’s also apply it to a test perhaps more commonly used in team sports: the multistage fitness test, e.g. the YoYo. The YoYo test announces each level anyway, so this is consistent throughout, but could additional feedback confound the test performance? Do you sometimes announce the number of runs until the next level, and if so, is this announced consistently? Do players know their previous score immediately before carrying out the test? They may use this as an exercise endpoint (“as long as I beat my last score!”). If you are incorporating a submaximal test, do you feed back the time until the endpoint? The important thing here is consistency – if this feedback is given, is it always given and/or at the same point?
3 Encouragement and Feedback
This is potentially a huge factor. The effect encouragement from the coach can have on intensity on the pitch (‘banging the drum’) is widely recognised, but do we consider it in our own testing environment? The review highlights that both the frequency and pitch of spoken feedback can influence test performance. In fact, the effect of the frequency of encouragement on performance during maximal exercise was documented many years ago (Andreacci et al, 2002). This suggests we should make an effort to control the frequency of encouragement, and the pitch and volume of the voice, during all physical tests.
Furthermore, this review highlighted research suggesting certain personality types are affected to different extents. The authors suggest it may be of use to assess personality types, but does this also come down to ‘knowing your players’ and the softer side of the job – knowing which players will need your encouragement and which will achieve a maximal effort on their own?
4 The Number and Gender of Observers
Quite often nowadays in the applied environment we are sharing facilities with a number of teams, whether it is Senior teams, Academy teams, Ladies teams, or perhaps other sports or even members of the general public. Are we able to control our tests for the number of observers, or is this confounding our results? This review provided evidence of a potential dose-response relationship with the number of observers, so should we go to greater lengths to minimise the observers within the environment? Another important consideration, and one we are probably more aware of, is the influence of the Manager, Coach or other senior staff on test motivation (linking to the previous point on encouragement and feedback too).
As a female Sport Scientist, the effect of the gender of observers on test results is obviously a particularly interesting one to me. Although I daresay I am just viewed as ‘one of the boys’ by many of the players I have worked with! But would we consider whether an observing team of the other gender could influence the test results?
5 Mental Fatigue
Mental fatigue has been defined as a psychological state caused by prolonged periods of demanding cognitive activity (Marcora et al, 2009). This review presents a number of studies demonstrating that subjects who were mentally fatigued underperformed in subsequent physical tasks, with endurance-based activities more affected than maximum voluntary contraction (MVC) tests. You may be familiar with the confusing Stroop test that is often used to mentally fatigue subjects by mismatching the ink colour and the colour word, e.g. the word ‘red’ printed in black ink.
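For readers who have not seen it, the core of an incongruent Stroop task is simply pairing each colour word with an ink colour that does not match it. A minimal sketch of generating such stimuli (the colour set and trial count are illustrative choices, not taken from the review):

```python
import random

# Illustrative colour set; studies vary in which colours they use
COLOURS = ["red", "green", "blue", "yellow"]

def stroop_trial(congruent, rng):
    """Return one (word, ink_colour) pair for a Stroop trial."""
    word = rng.choice(COLOURS)
    if congruent:
        return word, word
    # Incongruent trial: ink colour deliberately mismatches the word
    ink = rng.choice([c for c in COLOURS if c != word])
    return word, ink

rng = random.Random(42)
trials = [stroop_trial(congruent=False, rng=rng) for _ in range(20)]
# Every incongruent trial has word != ink colour
assert all(word != ink for word, ink in trials)
```

Naming the ink colour while ignoring the word is what makes the incongruent condition cognitively demanding, which is why sustained blocks of such trials are used to induce mental fatigue.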
My first thought in relation to this variable was in relation to adolescent/Academy athletes. Do we consider the impact of their education on mental fatigue and their subsequent physical performance when scheduling tests? Do they carry out tests at evening training after a whole day of school? Perhaps the athletes arrive for testing directly after lessons during day release or onsite schooling?
Mental fatigue has also been discussed in relation to the demands of a professional senior level season. According to this UEFA funded study, Wenger, Klopp and Mancini have all been quoted discussing the effect of mental fatigue on their players (http://www.uefa.org/MultimediaFiles/Download/uefaorg/Medical/02/20/41/86/2204186_DOWNLOAD.pdf). And more recently a study demonstrated that mental fatigue impairs intermittent running performance (Smith et al, 2015), further highlighting the potential effect in team sports.
So when we conduct tests at the end of the season, or during a particularly demanding period of fixtures, are we able to distinguish between the effects of mental and physical fatigue if players show a decrement? Do we even need to – is fatigue just fatigue, regardless of whether it is induced physically or mentally, since either way it affects performance?
6 Music
Now we all know a gym is not a gym without music and I’m sure we have all experienced how important a good playlist is for our own workouts! But is this controlled, or even considered, during a test? Halperin and colleagues report that listening to music prior to or during a physical test can have positive effects on performance. Furthermore, the tempo of the music can affect physiological and psychological metrics during exercise. Interested readers are directed to the review of music in the exercise domain by Karageorghis and Priest (2012).
Based on these findings, music is an important consideration that could be confounding many everyday tests. Perhaps the easiest way to control for this variable is simply to eliminate it? I’m sure that would be met with some disagreement in the gym! But it would still be a better option than controlling it by playing the same song every time you conduct a test! Is it realistic to expect to be able to control the music when tests are carried out in these environments?
If we are trying to establish ‘real’ changes in test performance, and ideally the ‘smallest worthwhile change’, then we must try to account for all confounding variables. Some we already control very well; some may be particularly difficult to control in the reality of the applied environment. But this review has helped to highlight some very important factors that may or may not have been considered previously. We owe it to our athletes to make their test results as true to their individual performance as possible.
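To make the ‘smallest worthwhile change’ idea concrete: one common approach (following Hopkins) estimates the SWC as a fraction, typically 0.2, of the between-athlete standard deviation, and flags a retest change as meaningful only if it exceeds that threshold. A minimal sketch, using hypothetical squad YoYo distances rather than real data:

```python
import statistics

def smallest_worthwhile_change(scores, factor=0.2):
    """SWC estimated as a fraction (commonly 0.2) of the
    between-athlete standard deviation of a baseline test."""
    return factor * statistics.stdev(scores)

# Hypothetical baseline YoYo distances (m) for an eight-player squad
baseline = [1800, 2040, 1960, 2200, 1880, 2120, 2000, 1920]

swc = smallest_worthwhile_change(baseline)       # ~26 m here
change = 2100 - 2000                             # one athlete's retest minus baseline
meaningful = abs(change) > swc                   # True: 100 m exceeds the SWC
```

Note that this only tells us a change is larger than trivial; unless the confounding variables above are held consistent between tests, we still cannot say the change reflects the athlete rather than the testing environment.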
The common theme throughout all of this is of course CONSISTENCY. As the authors recommend: ‘employing the same specific instructions, type, content and format of feedback, number and gender of observers, the presence or absence of music, and an acceptable mental state of the subjects across testing conditions in a consistent manner is of great importance’ (Halperin et al, 2015). They present a summary table of recommendations that I urge you to look up, which also includes suggestions of how specific methodological approaches can be reported in the literature to demonstrate how the risk of these confounding variables has been reduced.
References
Andreacci JL, LeMura LM, Cohen SL, Urbansky EA, Chelland SA, Von Duvillard SP. The effects of frequency of encouragement on performance during maximal exercise testing. J Sports Sci. 2002; 20(4): 345-52.
Halperin I, Pyne DB, Martin DT. Threats to internal validity in exercise science: A review of overlooked confounding variables. Int J Sports Physiol Perform. 2015 Oct;10(7): 823-9.
Karageorghis CI, Priest DL. Music in the exercise domain: a review and synthesis (Part I). Int Rev Sport Exerc Psychol. 2012; 5(1): 44-66.
Marcora SM. The effects of mental fatigue on repeated sprint ability and cognitive performance in football players. Available at: http://www.uefa.org/MultimediaFiles/Download/uefaorg/Medical/02/20/41/86/2204186_DOWNLOAD.pdf
Marcora SM, Staiano W, Manning V. Mental fatigue impairs physical performance in humans. J Appl Physiol. 2009; 106(3): 857-64.
Smith MR, Marcora SM, Coutts AJ. Mental Fatigue Impairs Intermittent Running Performance. Med Sci Sports Exerc. 2015; 47(8): 1682-90.