Professor Mark Horswill; School of Psychology; The University of Queensland

PSYC3020 ASSIGNMENT BRIEFING

• See the ECP or Course Overview for the assignment deadline.
• Please ensure that you read this entire document carefully, as soon as possible. You need to understand all of its contents to maximise your marks.
• If you need help with this assignment (i.e., something not covered by this briefing), please contact YOUR TUTOR and not me (Mark Horswill). Remember that it is the tutors who will be marking your assignment.
• See Lectures 1 to 5 for relevant information on this assignment, as well as the Assignment Primer activity in the second tutorial (Week 3).

Contents

SECTION 1 – What this assignment is about
  Preamble
  What to do first
  Your assignment topics
    TOPIC 1 (hardest option; but easier to gain marks for independent thinking and more likely to be engaging from your perspective): Design and validate a new psychological behavioural test or new battery of tests for differentiating people in the domain of your choice.
    TOPIC 2 (hardest of the prescribed topics; least support): Design and validate a new test or new battery of tests to select the best applicants for entry into a postgraduate training program for clinical psychology.
    TOPIC 3: Design and validate a new test or new battery of tests to assess the competence of radiologists when screening (i.e., interpreting) mammograms for tumours.
    TOPIC 4 (easiest; most support): Design and validate a new test or new battery of tests to assess whether older drivers are safe to continue to drive. (You may want to narrow the scope of this topic to a specific context e.g., a new test that is suitable for a busy doctor's surgery or for a Queensland Transport office.)
SECTION 2 – Choosing a topic and searching the literature
SECTION 3 – How to write the research proposal
  Warning against plagiarism and collusion
  General requirements
    Your target audience
    Topic vs. title
    Length
  Writing style and tips
  Sections of the proposal
    Title page
    Executive summary
    Aims and significance
    Background
    Proposed test/ test battery and rationale
    Study design
    Test evaluation: Assessment of reliability and validity
    Conclusions
    References
Grade descriptions
Marking criteria
Frequently asked questions
  How many references should I read?
  How many marks do I lose if I hand in my work late without an approved extension?
  How do I apply for an extension?
  What do I do if I think my assignment has been marked unfairly?
Appendix: Considering Topic 1?

SECTION 1 – What this assignment is about

Preamble

Imagine that you are an academic or a consultant with expertise in human measurement. You have been invited to write a research proposal describing how you will create and evaluate a novel test or battery of tests to measure people's skill, ability, aptitude, personality trait, or mental state, in a specific real-world domain (note that a battery of tests is a combination of two or more measures, which could be existing tests). The aim is to propose something that would be of genuine benefit to society/ the real world in some way. The design of the test must be based on empirical research, and the research proposal must demonstrate your understanding and application of psychometric principles (i.e., central to your proposal must be a description of how you will establish your new test's reliability and validity).

For example, empirical research indicates that individuals with good hazard perception skills have fewer car crashes. In this case, you might aim to create a new hazard perception test that can differentiate safe vs. high-risk drivers. You would also need to detail how your new test offers a distinct advantage over existing hazard perception tests.

What to do first

To help illustrate what we're after in this assignment, we have made a number of example assignments available for you to read, all of which received a Grade 7. You are not allowed to use the topics covered by these examples for your own assignment. Before you read any further through this briefing, go and download these example assignments and read them (available in the Assessment → Assignment Materials section on the PSYC3020 Blackboard website). This should give you a clearer idea of what this assignment is about and the sort of document we'd like you to produce.

Your assignment topics

In choosing your topic for this assignment, you have two options: (1) if there is an area in which you are particularly interested, construct your own topic (Topic 1 below); or (2) select one of the suggested assignment topics listed (Topics 2 to 4 below).

The suggested topics are listed below in order of difficulty, with Topic 1 being the most difficult for which to write a research proposal (you have to think up your own topic; see the Appendix) and Topic 4 being the easiest (with the most support provided). Bear in mind that if you select an easier topic, your proposal will have to be proportionally more interesting and engaging (and demonstrate extensive evidence of originality, independent thinking, and critical analysis of the empirical literature) to achieve a high grade. If you pick a more challenging topic, there will be less support provided in this briefing and fewer previous research studies on which you can base your research proposal, but there may be more opportunities for innovative thinking. Again, you will need to write an interesting and engaging proposal which also demonstrates extensive evidence of originality, independent thinking, and critical analysis of the (limited) empirical literature to achieve a high grade.

You have been provided with starting references for Topics 2 to 4 (note that the references for this assignment are not on the PSYC3020 library site – this is deliberate, to help develop your literature search skills).
You should read other literature beyond these starting references in order to maximise your mark.

TOPIC 1 (hardest option; but easier to gain marks for independent thinking and more likely to be engaging from your perspective): Design and validate a new psychological behavioural test or new battery of tests for differentiating people in the domain of your choice.

If you construct your own assignment topic, you must obtain written approval for it from your tutor. Although you are welcome to discuss your ideas with your tutor in person, the final topic must be requested and approved via email (to ensure it is documented). If you choose this option, it is your responsibility to ensure that your suggested topic is of an appropriate scope for the required word length. You cannot choose a topic, or use more than one measure, from any of the example assignments. You also cannot write an assignment on testing the effects of concussion because this topic is being used in a tutorial activity.

Narrowing a suggested topic

Even if you choose a suggested topic, you may find that you wish to narrow it to make the scope of your assignment more manageable, given the word limit. If you simply wish to narrow a topic, you do not need your tutor's approval. However, it is your responsibility to ensure that the narrowed version of the topic is of an appropriate scope for the required word length.

Special considerations for Topic 1

If you choose this topic, your research proposal can test any skill/ ability/ trait in an applied setting of your choice. However, keep in mind that your new test/ battery of tests needs to constitute a psychological behavioural measure. You are required to seek your tutor's approval for a Topic 1 research proposal. You are strongly advised to contact your tutor early because approval can be a drawn-out process (note: check whether your tutor has a deadline for Topic 1 approvals). The process of seeking topic approval may be conducted via email, verbal communication, or a combination of both, depending on your tutor. However, final topic approval from your tutor must be given in writing via email. It is advisable to chat with your tutor about your proposal idea before seeking approval.

You will need to address the following questions/ points to obtain approval for your proposal idea:
1. Define/ operationalise what skill/ ability/ trait the proposed test measures (e.g., post-concussion cognitive ability; clinical skills; driving ability). Identify up to 3-4 tests/ underlying variables that are used to measure this skill/ ability/ trait. The word count limits the number of tests/ underlying variables you can include in the proposal.
2. State how the proposed test is novel. Does it address problems of existing tests? Does it apply an existing test to a new population? Is it brand new? Why should the proposed test be funded?
3. Provide a rough description of the form of the proposed test.
4. How will two reliability and two validity strategies be used to evaluate the proposed test?

Depending on your proposal idea, your tutor may also follow up with additional questions that are related to the points provided below [refer to Section 3 and the Appendix of the briefing for more details]:
5. Why is it important to measure your chosen variable(s)? What is the rationale? Remember, it needs to be of real benefit to society.
6. Who is your population of interest? Industry? Age?
7. How will your variable(s) be measured? Self-report? Behavioural observation? How will the test be administered?
How is it scored? HOW does WHO do WHAT, WHEN and WHY? Refer to the Assignment Primer 2 – Concussion Activity tutorial for inspiration.
8. What practical and/ or ethical considerations of your proposed test are important? How are they addressed?
9. Is a dependent/ outcome variable required? What about a contrast group?

Ideally, your answers to these questions should be concrete. Your tutor should have a clear idea of how/ why the research proposal is important, what the test(s) will measure and how, and how the test(s) will be validated. Past experience has shown a positive relationship between detail and performance: clearer/ more concrete approvals are associated with research proposals that have scored higher marks, probably because they are better reasoned through. Good luck!

If you want to consider Topic 1 further, then read the Topic 1 Appendix at the end of this briefing.

TOPIC 2 (hardest of the prescribed topics; least support): Design and validate a new test or new battery of tests to select the best applicants for entry into a postgraduate training program for clinical psychology.

There is little previous research into entry selection for postgraduate clinical psychology courses. However, one approach might be to review the literature on selection into other postgraduate training programs in related areas (e.g., see Carr, 2009, below) to provide insight into what could be done.

Background information references:
Fauber, R. L. (2006). Graduate admissions in clinical psychology: Observations on the present and thoughts on the future. Clinical Psychology: Science and Practice, 13(3), 227-234. https://doi.org/10.1111/j.1468-2850.2006.00029.x
Johnson, W. B., & Campbell, C. D. (2002). Character and fitness requirements for professional psychologists: Are there any? Professional Psychology: Research and Practice, 33(1), 46-53. https://doi.org/10.1037/0735-7028.33.1.46
Johnson, W. B., & Campbell, C. D. (2004). Character and fitness requirements for professional psychologists: Training directors' perspectives. Professional Psychology: Research and Practice, 35(4), 405-411. https://doi.org/10.1037/0735-7028.35.4.405

Other potentially interesting references:
Carr, S. E. (2009). Emotional intelligence in medical students: Does it correlate with selection measures? Medical Education, 43(11), 1069-1077. https://doi.org/10.1111/j.1365-2923.2009.03496.x
Kelly, E. L., Goldberg, L. R., Fiske, D. W., & Kilkowski, J. M. (1978). Twenty-five years later: A follow-up of the graduate students in clinical psychology assessed in the VA Selection Research Project. American Psychologist, 33(8), 745-755. https://doi.org/10.1037/0003-066X.33.8.746
Pope-Davis, D. B., Reynolds, A. L., Dings, J. G., & Nielson, D. (1995). Examining multicultural counseling competencies of graduate students in psychology. Professional Psychology: Research and Practice, 26(3), 322-329. https://doi.org/10.1037/0735-7028.26.3.322
Rem, R. J., Oren, E. M., & Childrey, G. (1987). Selection of graduate students in clinical psychology: Use of cutoff scores and interviews. Professional Psychology: Research and Practice, 18(5), 485-488. https://doi.org/10.1037/0735-7028.18.5.485

TOPIC 3: Design and validate a new test or new battery of tests to assess the competence of radiologists when screening (i.e., interpreting) mammograms for tumours.

Download and read the background information references below to gain an understanding of what is known about the expertise required to detect tumours in mammograms.
Consider (1) what measures previous researchers have found to predict performance in this skill, and (2) what methods/ tests researchers have used to measure performance in the past.

One option might be to develop a new simulation test where participants view lots of example mammograms and have to indicate whether a tumour is present or not. One way to assess the validity of such a measure might be to compare novice radiologists with expert radiologists (i.e., a group you would expect to be bad at detecting tumours vs. a group you would expect to be good at detecting tumours). If your new test could tell these groups apart (i.e., experts score significantly higher than novices), then this could be considered evidence for its validity.

In some of the suggested references below, you may be unfamiliar with some of the performance analyses and terms. For example, many of the articles talk about 'sensitivity', 'specificity', and Receiver Operating Characteristic (ROC) curves. Don't worry about these. We do not expect you to understand these concepts, nor use them in your test design (these issues will be covered toward the end of this course). The articles provided should still give you a start with the background literature, as well as ideas for methodology. If you wish to discuss or measure performance in detecting objects in images, a good alternative to unfamiliar terms/ methods such as sensitivity, specificity, and ROC curves is to talk about false alarm and correct detection rates. These rates are essentially the data from which sensitivity, specificity, and ROC curves are derived anyway. Using this alternative approach is perfectly acceptable and can still achieve a Grade 7.

Background information references:
Barlow, W. E., Chi, C., Carney, P. A., Taplin, S. H., D'Orsi, C., Cutter, G., Hendrick, R. E., & Elmore, J. G. (2004). Accuracy of screening mammography interpretation by characteristics of radiologists. Journal of the National Cancer Institute, 96(24), 1840-1850. https://doi.org/10.1093/jnci/djh333
Beam, C. A., Layde, P. M., & Sullivan, D. C. (1996). Variability in the interpretation of screening mammograms by US radiologists: Findings from a national sample. Archives of Internal Medicine, 156(2), 209-213. https://doi.org/10.1001/archinte.1996.00440020119016
Elmore, J. G., Jackson, S. L., Abraham, L., Miglioretti, D. L., Carney, P. A., Geller, B. M., Yankaskas, B. C., Kerlikowske, K., Onega, T., Rosenberg, R. D., Sickles, E. A., & Buist, D. S. M. (2009). Variability in interpretive performance at screening mammography and radiologists' characteristics associated with accuracy. Radiology, 253(3), 641-651. https://doi.org/10.1148/radiol.2533082308
Nodine, C. F., Kundel, H. L., Lauver, S. C., & Toto, L. C. (1996). Nature of expertise in searching mammograms for breast masses. Academic Radiology, 3(12), 1000-1006. https://doi.org/10.1016/S1076-6332(96)80032-8

Other potentially interesting references:
Carney, P. A., Sickles, E. A., Monsees, B. S., Bassett, L. W., Brenner, R. J., Feig, S. A., Smith, R. A., Rosenberg, R. D., Bogart, T. A., Browning, S., Barry, J. W., Kelly, M. M., Tran, K. A., & Miglioretti, D. L. (2010). Identifying minimally acceptable interpretive performance criteria for screening mammography. Radiology, 255(2), 354-361. https://doi.org/10.1148/radiol.10091636
Elmore, J. G., Wells, C. K., Lee, C. H., Howard, D. H., & Feinstein, A. R. (1994). Variability in radiologists' interpretations of mammograms.
New England Journal of Medicine, 331(22), 1493-1499. https://doi.org/10.1056/NEJM199412013312206
Goddard, C. C., Gilbert, R. J., Needham, G., & Deans, H. E. (1998). Routine receiver operating characteristic analysis in mammography as a measure of radiologists' performance. British Journal of Radiology, 71(850), 1012-1017. https://doi.org/10.1259/bjr.71.850.10211059
Miglioretti, D. L., Gard, C. C., Carney, P. A., Onega, T. L., Buist, D. S. M., Sickles, E. A., Kerlikowske, K., Rosenberg, R. D., Yankaskas, B. C., Geller, B. M., & Elmore, J. G. (2009). When radiologists perform best: The learning curve in screening mammogram interpretation. Radiology, 253(3), 632-640. https://doi.org/10.1148/radiol.2533090070

TOPIC 4 (easiest; most support): Design and validate a new test or new battery of tests to assess whether older drivers are safe to continue to drive. (You may want to narrow the scope of this topic to a specific context e.g., a new test that is suitable for a busy doctor's surgery or for a Queensland Transport office.)

The first thing you should do is download and read the example assignment written by Mark Horswill on hazard perception in driving, which is available on the PSYC3020 Blackboard website under Assessments → Assignment Materials ("Topic 4 – Hazard Perception Assignment written by Mark"). Note that this assignment does NOT meet all the current requirements for the PSYC3020 assignment (e.g., it doesn't use the Assignment Template) – so don't replicate these aspects of it or you'll fail! Instead, pay attention to the types of arguments posed and the rationale given as to why this new test should exist and thus be funded. In particular, look at how the new test seeks to address the key limitations of the existing measures.

Then search for and read the background information article cited below (i.e., Morgan & King, 1995). Consider all the measures that – according to the empirical research evidence – appear to be able to predict older driver performance. Perhaps you could propose a new test battery which includes some of the measures found to be most predictive of older driver performance? Or maybe you could propose some sort of driving simulator measure in which older drivers had to demonstrate that they could cope competently with relevant challenging situations, especially those driving situations/ scenarios known to be particularly problematic for older drivers?

Read the articles given below to gain ideas of the sort of reliability and validity studies that could be proposed and how to describe them (e.g., Horswill, 2016a, 2017; Wetton et al., 2011). To give one example, you could examine the correlation between your new test or new test battery and crash records in a sample of older drivers as one way of establishing validity. This could mean proposing a study where you tested a few hundred older drivers using your measure and then found out how many crashes they had been involved in over the previous few years (e.g., Horswill et al., 2015). Another option might be to examine whether test scores could predict risky on-road behaviour (see Hill et al., 2019).

Don't forget to consider the practicalities of your test. For example, a measure that requires using a computer mouse, relies on technology that calls for responses untypical of driving, or even requires reading small text might be a problem for this age group. These types of factors need to be kept in mind when developing your new test or new test battery.
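To make the logic of the crash-record validity study described above more concrete, here is a minimal sketch (written in Python purely for illustration; you are not expected to write any code for this assignment) of the kind of analysis such a proposal would be setting up. The file name and column names are hypothetical.

    # Minimal illustrative sketch only - not part of the assignment requirements.
    # Assumes a hypothetical dataset with one row per older driver: their score on
    # the new test ("test_score") and the number of crashes they reported over the
    # previous five years ("crashes_5yr").
    import pandas as pd
    from scipy.stats import spearmanr

    data = pd.read_csv("older_driver_validation_sample.csv")  # hypothetical file

    # Spearman's rank correlation is one reasonable choice here because crash
    # counts are typically skewed rather than normally distributed.
    rho, p_value = spearmanr(data["test_score"], data["crashes_5yr"])

    print(f"Test score vs. crash involvement: rho = {rho:.2f}, p = {p_value:.3f}")

    # A significant negative correlation (higher test scores associated with fewer
    # crashes) would be consistent with the criterion-related validity argument
    # described above; a null result would count against it.

The point of the sketch is simply that your proposal needs to specify, in advance, which two variables will be correlated, in which sample, and what direction of relationship would count as evidence of validity.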
Background information references:
Hill, A., Horswill, M. S., Whiting, J., & Watson, M. O. (2019). Computer-based hazard perception test scores are associated with the frequency of heavy braking in everyday driving. Accident Analysis & Prevention, 122, 207-214. https://doi.org/10.1016/j.aap.2018.08.030
Horswill, M. S. (2016a). Hazard perception in driving. Current Directions in Psychological Science, 25(6), 425-430. https://doi.org/10.1177/0963721416663186
Horswill, M. S. (2017). Hazard perception tests. In D. L. Fisher, J. K. Caird, W. J. Horrey & L. M. Trick (Eds.), Handbook of Teen and Novice Drivers: Research, Practice, Policy, and Directions (pp. 439-450). CRC Press. https://doi.org/10.1201/9781315374123
Horswill, M. S., Anstey, K. J., Hatherly, C. G., & Wood, J. M. (2010). The crash involvement of older drivers is associated with their hazard perception latencies. Journal of the International Neuropsychological Society, 16(5), 939-944. https://doi.org/10.1017/S135561771000055X
Horswill, M. S., Hill, A., & Wetton, M. (2015). Can a video-based hazard perception test used for driver licensing predict crash involvement? Accident Analysis & Prevention, 82, 213-219. https://doi.org/10.1016/j.aap.2015.05.019
Morgan, R., & King, D. (1995). The older driver – A review. Postgraduate Medical Journal, 71(839), 525-528. http://dx.doi.org/10.1136/pgmj.71.839.525
Wetton, M. A., Hill, A., & Horswill, M. S. (2011). The development and validation of a hazard perception test for use in driver licensing. Accident Analysis and Prevention, 43(5), 1759-1770. https://doi.org/10.1016/j.aap.2011.04.007
Wetton, M. A., Horswill, M. S., Hatherly, C., Wood, J. M., Pachana, N. A., & Anstey, K. J. (2010). The development and validation of two complementary measures of drivers' hazard perception ability. Accident Analysis and Prevention, 42(4), 1232-1239. https://doi.org/10.1016/j.aap.2010.01.017

Other potentially interesting references:
George, S., Clark, M., & Crotty, M. (2008). Validation of the Visual Recognition Slide Test with stroke: A component of the New South Wales occupational therapy off-road driver rehabilitation program. Australian Occupational Therapy Journal, 55(3), 172-179. https://doi.org/10.1111/j.1440-1630.2007.00699.x
Mallon, K., & Wood, J. M. (2004). Occupational therapy assessment of open-road driving performance: Validity of directed and self-directed navigational instructional components. American Journal of Occupational Therapy, 58(3), 279-286. https://doi.org/10.5014/ajot.58.3.279
O'Connor, M. G., Kapust, L. R., & Hollis, A. M. (2008). DriveWise: An interdisciplinary hospital-based driving assessment program. Gerontology and Geriatrics Education, 29(4), 351-362. https://doi.org/10.1080/02701960802497894
Unsworth, C. A., Pallant, J. F., Russell, K. J., Germano, C., & Odell, M. (2010). Validation of a test of road law and road craft knowledge with older or functionally impaired drivers. American Journal of Occupational Therapy, 64(2), 306-315. https://doi.org/10.5014/ajot.64.2.306

SECTION 2 – Choosing a topic and searching the literature

• Read broadly before choosing a topic.
This will allow you to establish some background knowledge on the topics before focussing on the area in which you wish to design and evaluate a new test/ test battery.
• You are strongly encouraged to go beyond the provided references to do well in this assignment. These references are given as broad, general background to provide you with a starting point to help generate ideas for your assignment.
o Try searching for keywords in databases such as Google Scholar and Web of Science.
o Another route to finding more articles is to look up the references cited in the recommended readings and use databases to see what they cite and who has cited them.
• Find an empirically-supported relationship between the skills or abilities that you want your test to measure and what this measure actually indicates in a real-world domain (i.e., what aspect of performance this skill or ability affects).
o Do a little background reading and general searches for all skills or abilities that might impact on performance in the domain you are investigating. From this, you can get ideas for specific abilities for which you would like to design a test or test battery.
o One skill or ability is not expected to predict overall performance in a domain. Therefore, you should qualify your argument (e.g., it would be incorrect to say that hazard perception skills alone indicate someone is a better driver; a better way to argue this would be to specify what aspect of performance this impacts, e.g., safer drivers generally have good hazard perception abilities).
o You must find empirical research to support the design of your test or at least lead the reader to understand why you have employed a particular method (e.g., if you were proposing a hazard perception test, then find other research that has used hazard perception tests and any evidence that hazard perception tests are reliable and valid indicators of actual real-world driving ability).
• Search for other tests that assess your selected skills/ abilities/ traits:
o Look at the current test(s) of the skill/ ability/ trait. What properties can you improve? Can you aggregate a few existing tests into one comprehensive battery, or just improve elements of existing tests?
• Within this, consider the way the test is currently administered (including format of presentation, response requirements, etc.). Are there any issues that should be addressed that may interfere with this test accurately gauging what it claims to measure?
o Do these tests display adequate validity and reliability?
o Can you develop a way of testing this skill/ ability/ trait that will address any psychometric limitations/ shortcomings of current tests in this area?
• Focus the research question/ aim of your research proposal in order to ensure you know exactly what you are looking at and why.
o For example, ask yourself questions such as these: What domain am I investigating? (E.g., driving.) What performance outcome measures am I interested in? (E.g., preventing car crashes.) What skills/ abilities/ traits are related to these performance measures? (E.g., fast and accurate anticipation of potential traffic conflicts.) What tests of these skills/ abilities/ traits already exist? How can I improve them or improve the evidence for their effectiveness with my knowledge of psychometric principles (i.e., reliability and validity)?
(E.g., more evidence for the predictive criterion-related validity of hazard perception tests but specific to older drivers; current hazard perception tests do not include scenes of night-time driving hazards; or more alternate forms of hazard perception tests are needed.) What are some risk factors that could potentially limit the effectiveness of my new measure? (E.g., if the hazard perception test has audio instructions, people with a hearing impairment will not be able to take the test unless they are provided with subtitles.)

SECTION 3 – How to write the research proposal

Warning against plagiarism and collusion

Your assignment must be your own work. Any plagiarism will be detected by the TurnItIn software and reported to the School of Psychology for further investigation. If any of the wording in your assignment is not your own, then you risk getting into very serious trouble. Copying other people's writing and/ or ideas and trying to pass them off as your own in order to gain a degree is effectively committing both theft and fraud.

Note that TurnItIn has a huge database of articles that are available both in the scientific literature and on the internet (including websites). It also contains previous and current assignments by other students, both in this course and in all other courses around the world that use this software. It also contains all the assignment readings and briefings, including the one you're reading right now, as well as the example assignments. In previous semesters, we've caught people trying to copy materials from obscure unpublished documents on (what they thought were) obscure websites, we've caught people trying to "mash up" a friend's assignment in order to beat the software, and we've caught people copying from assignments written by previous PSYC3020 students several years ago (where they thought the assignments were old enough not to be in the system any more). This also means that you should never lend your assignment to a fellow student, because if they copy your work, then you also risk being charged with academic misconduct.

In the past, some students caught plagiarising have had their assignment marks dramatically reduced (often to zero). If warranted, a permanent note of this misconduct appears on their academic record. Once plagiarism or other types of academic misconduct are suspected, the resulting process is incredibly stressful for those concerned and has potentially dire consequences – please please please don't do it to yourself! Do you really want to have to repeat this course if you fail as a result of some form of academic misconduct (this has happened multiple times in the past)? And did I mention the School of Psychology Punishment Dome? You really don't want to end up in there, believe me.

General requirements

Your target audience

Write as if your research proposal is for an intelligent layperson who has a general understanding of things like statistical significance, but no specific insight into your topic. Within this, you can assume your reader knows what reliability and validity are (hence definitions should not be supplied for these), but you will need to demonstrate your understanding of the various types of reliability and validity via how you establish your study design and phrase your study predictions.

Topic vs. title

Note that you are required to come up with a title for your research proposal – this is not the same as your assignment topic.
It goes without saying that the title should be appropriate for both the research proposal (encapsulating the topic in some way) and your audience. Just as importantly, it's also your first opportunity to engage the reader's interest – so don't squander that opportunity (note that 'interest' is one of the marking criteria).

Length

The length set for this assignment is 2000 words, and you must include the word count of your assignment on the title page. Note that this includes all words, except in-text references, the References list, and the title page. There will be 10% leeway on this length (i.e., if you really want to 'poke the bear' then you can get away with 2200 words). Assignments that are longer than this will be penalised (i.e., 2201-2399 words = 5% penalty; 2400-2599 words = 10% penalty; 2600 or more words = 15% penalty).

Writing style and tips

• This assignment is intended to assess how well you can construct an argument involving scientific evidence, as well as how well you can apply psychometric principles in constructing a new test or new battery of tests.
• Address the task directly. Don't write a generic proposal or you'll fail. Be specific – pretend this is a real research proposal with unlimited funds and resources. Approach it like a professional.
• Produce a well-written, well-argued piece of scientific writing based around empirical research. You must include a thorough review of empirical evidence regarding the design of your test to do well in this assignment.
• Try to replicate the sort of writing style (formal and concise) used in the example assignments provided on the PSYC3020 Blackboard page.
• You are encouraged to be provocative and critical in appraising the evidence that you present. Don't just rattle off a list of studies. Explain why they are relevant to your proposal. What are their strengths and how do these apply to your proposal? What are their limitations and how does your proposal overcome these limitations?
• Don't be afraid to be contentious and critical. We will not mark you down for disagreeing with conventional wisdom, and if you make a good argument to support your case, then you'll be rewarded even if we disagree with you. Obviously, this is not the same as being ignorant of conventional wisdom, for which we will mark you down.
• Don't ignore evidence that contradicts your viewpoint. You should mention the opposing evidence, but then illustrate how the evidence supporting your argument is more convincing.
• Include personal reflections on the evidence (though be clear when your claims are backed by evidence vs. speculation). If you can find no evidence to support a point, then try to suggest an experimental study that would provide this evidence (this makes you sound thoughtful and imaginative, and makes it clear that you're not just regurgitating the references).
• Highlight gaps in the literature. What unmet need(s) will you address with your new test or new battery of tests?
• Be sure to separate the description of scientific results from interpretations of scientific results and possible implications. Sometimes study results can mean different things depending on whether certain considerations are adopted (e.g., an alternative explanation may have been overlooked, results may only apply to certain populations, or some variables may not have been controlled/ included, which could change how the results are interpreted).
• Use original examples to illustrate your points.
• Be careful when using jargon.
If you use a technical word that you don't explain, then bear in mind that the marker might believe you're trying to fool them into thinking you're cleverer than you are.
• A paragraph should always be less than a double-spaced A4 page (this is a recommendation in the APA guide). However, a paragraph should also contain a minimum of 3 sentences.
• You are allowed to write from a first-person perspective (using "I" or "we") in this assignment. However, note that other courses may not permit this. Also be mindful that when speaking about other authors and their work, you should use gender-neutral pronouns such as "they" and "their" (in accordance with APA guidelines).
• You are allowed – and even encouraged – to use sub-headings if this will help you structure your assignment and make it more readable.
• Academic writing takes practice. You will need to be succinct with every sentence. Avoid waffly, flowery, or unnecessarily descriptive prose. Remember, this is a scientific research proposal, not an article in a tabloid magazine or literature by Shakespeare.
• When you've finished your assignment, put it to one side for a time, then read it back through (preferably aloud). Obvious mistakes should leap out at you and, hopefully, more subtle things like clumsy sentences and gaps in your logical argument should also become apparent.
• If you're having a problem with a particular paragraph, sit back and ask yourself: "What is the point of this paragraph? What purpose does it serve? What's the idea I'm trying to get across?" Try swapping around the phrases and sentences. Try out alternative wordings. Sometimes it might be enough just to change a key word or two. If you can't think of anything, then maybe the proposal would be more effective without that paragraph.
• When reading the articles you cite, see how professional writers describe things and try to emulate the sorts of phrases and language they use. Of course, this is easier said than done, but it does get easier with practice.
• Follow APA format (7th edition) for all formatting and referencing.

The importance of linking sentences and topic sentences:
• A good paper flows well – it tells a coherent story throughout.
• Flow is achieved by using linking sentences between paragraphs that lead the reader logically from one paragraph to the next. Linking sentences are also an opportunity to remind the reader about the argument you are making.
• A good paper is easy to read because it is always clear how the information being presented is relevant to the argument, and how each point relates to the next.
• Think of how you would summarise what you're planning to say in each paragraph in a single key sentence – and keep this summary in mind while you write. This should help keep your writing focussed.
• For policies on collusion, plagiarism, extensions, and late work, see the PSYC3020 Electronic Course Profile (ECP).

Sections of the proposal

Write your research proposal with the following sections, which have been chosen to reflect the type of format typically required for real research proposals:
• Title page
• Executive summary
• Aims and significance
• Background
• Proposed test/ test battery and rationale
• Study design
• Test evaluation: Assessment of reliability and validity
• Conclusions
• References

Unless otherwise stated, type each section of the research proposal into the relevant text boxes in the assignment template provided (see PSYC3020 Blackboard → Assessments).
Adhere to APA 7th edition formatting guidelines and use a black font. In the template, the grey-text dot points within each text box are brief reminders of what information should go in each section (delete these before submission). Make sure you also read the more detailed section briefings below.

Title page

Complete the title page (see assignment template) with the following information:
(1) Your name
(2) Your student number
(3) The date of submission
(4) The title of your assignment (which should also appear at the beginning of the proposal)
(5) Your tutor's name and tutorial group (e.g., Marc Chan, EX-T01)
(6) The total word count (e.g., "Total word count: 1,873". Note that this includes all words, except in-text references, the References list, and the title page). This is compulsory. Please note that presenting a misleading word count will be treated as an attempt to obtain an unfair advantage and is considered academic misconduct.
(7) If you are doing Topic 1 (i.e., your own topic), complete the declaration statement confirming that you have sought the requisite written permission to do your chosen topic (otherwise, delete it). E.g., "Topic 1 Declaration: I, [Insert Full Name], declare that I have received written permission to do the topic of [Insert Topic] on [Insert Date Email Permission was Granted], by my tutor [Insert Tutor's Name]."

Executive summary

• Enter this information into the text box labelled "Executive Summary".
• Identify the problem issue with the current measurement of your chosen skill/ ability/ trait (i.e., why a new measure is needed), including why it is problematic if this skill/ ability/ trait is not measured properly.
• State the novel contributions of your new proposed test/ test battery (i.e., how it will address the identified problem).
• Indicate what test components are to be included in the proposed test/ test battery, including clear identification of relevant established tests and/ or new components to be developed. The latter requires a brief outline as to how these will be created and a brief overview of the new test/ tests. Any modifications to existing tests also need to be made explicit.
• Indicate which psychometric principles will be used to assess the proposed test/ test battery, and briefly outline how evaluation of these aspects will be achieved.
• Generally, there is no need for references in an executive summary. In terms of APA 7th guidelines, treat it like an abstract.

Aims and significance

• Begin with 1-2 general sentences that introduce the reader to the key subject area of the proposal. It is important not to waffle – get straight to the point.
• Narrow down the topic and define the problem issue that your proposed test/ test battery aims to address. The definition of the problem issue needs to include a justification of the elements that will be included in the proposed test/ test battery.
• Explain how/ why the problem is significant, including the ramifications of not measuring the skill/ ability/ trait properly.
• In a single final sentence, state how your proposed test/ test battery will address the identified problem.

Background

• Briefly review current relevant measures of your skill/ ability/ trait.
o Provide a general overview of how these measures are tested in the literature, with relevant results.
• Reviewed measures should include an overview of key psychometric properties and how these were demonstrated in studies.
• Build your argument using empirical evidence, while acknowledging or rebutting any contradictory findings.
o When describing your empirical evidence, select a number of studies that most strongly support your argument, and give the reader some details on these, including: participants (if relevant), methodology (if relevant), and key relevant findings and what these indicate.
o Remember, only list evidence that is relevant to your proposal. We do not want the entire historical record of your topic.
• Critique key problem areas that your test/ test battery aims to overcome.
• If reviewed papers are in a different context from the one you want, present an argument as to why a transfer of context for these tests is appropriate.
o E.g., why are previous tests of concussion in sports suitable for use in a military context?
• Collectively, the literature review should make a case for the need for your new test/ test battery by identifying gaps/ flaws in past measures and – if relevant – highlighting feasible existing measures that have yet to be applied appropriately.
• For some topics, you may not always have direct empirical evidence. If there is no direct evidence in your chosen setting (e.g., you find that there are no studies on older drivers having their hazard perception assessed in a doctor's office), then you can use evidence from another setting or another population to argue for the soundness of these aspects of your proposal (e.g., there is evidence showing that young males with ADHD can benefit from an office-based hazard perception testing and training package; Poulsen et al., 2010). These findings imply that a hazard perception intervention strategy works for one population, and thus the established findings/ principles may transfer to your desired population of interest (though you would need to present an argument as to why these would transfer across the given contexts). However, always acknowledge that such findings would need replication in your given setting.

Proposed test/ test battery and rationale

• Provide an overview of the proposed test/ test battery design, i.e., briefly outline relevant test components and what specific constructs they aim to measure. NB: more technical details for this will go under the Study Design section below. Include a broad overview of any changes to existing measures, if relevant.
• Justify the design of the proposed test/ test battery as informed by the literature, i.e., how and why particular measures, adaptations of measures, and/ or the development of new measures are required to address the identified problem issues.
• Outline future uses for the test/ test battery once developed (i.e., "the big sell"), including tangible and/ or intangible outcomes from its use. Be sure to address all the relevant outcomes for all key stakeholders when doing this.

Study design

• Detail a specific study/ series of studies regarding how you plan to (a) develop the materials for the proposed test/ test battery, and (b) evaluate its reliability and validity. Lay this out like a Method section of a research report, i.e., provide participants, design, materials and measures (i.e., test components), and procedure subsections.
o Participants: Who will you sample (including approximate number and any special characteristics)? How will you recruit them?
o Design: Make clear the number of testing time points, which measures will be implemented at which time points, and which groups of participants will be completing each measure at each time point.
While doing this, ask yourself questions such as: Am I utilising pre- and post-test measures, comparing two or more groups, or is it a longitudinal study? If I am giving a measure twice, do I need to deal with practice effects (e.g., do I need alternate forms of the test), and what is a suitable time frame for subsequent administrations of the measure(s)?
o Materials and measures: What resources and/ or materials will you need to create and validate the new test/ test battery? Approach this as realistically and professionally as possible. We do not expect you to understand how to configure the settings of complex technical equipment. However, we do expect you to mention what equipment you would use if it is appropriate. Be very clear when highlighting any performance measures that are included. These are measures that do not form part of the test/ test battery itself, but rather have been included in the study design as a means to evaluate the proposed test/ test battery (e.g., to assess its concurrent validity).
o Procedure: Outline the order of tasks and how these should be carried out. Within this, always remember pragmatics. Are there any elements or considerations that you need to add to your procedure? For example, if you are testing multicultural populations, how will you standardise the instructions so that everyone will understand what to do?
o If appropriate, when describing existing measures or those you plan to modify, provide the full scale name, a reference, the construct it aims to measure, the number of items, an example item, response scale details (including scale anchors, if appropriate), internal reliability, the meaning of high scores, and how overall scores will be calculated (e.g., summed or averaged?). In addition, if you are modifying an existing measure, be very clear about how you plan to adapt the scale length/ items/ response scale/ scoring system, etc.
• Justify your design decisions with support from past studies or logical reasoning (e.g., sampling strategy, overall test length, mode of test presentation, what is to constitute a 'correct' score in the absence of a known true result/ answer, response options/ mode, etc.).

Test evaluation: Assessment of reliability and validity

• Outline how you will evaluate your new test/ test battery. You must describe at least two reliability (e.g., internal consistency, test-retest) and two construct validity (e.g., predictive validity, convergent validity, content validity) strategies. The construct validity principles must include at least one empirically-based form of validity. Note that while content validity is okay to assess (as your one non-empirical form of construct validity), face validity is not (because face validity does not count as construct validity). However, make sure you provide sufficient detail of how you will evaluate content validity.
o Formulate reliability and construct validity hypotheses. The latter should be based on findings in your earlier literature review. NB: a hypothesis should have a direction of effect where appropriate.
o Consider what kind of general data analysis you will require (e.g., comparison of test performance across different specified conditions over time, or comparison of test performance means between two different groups of participants at the same testing time point).
o Briefly state how you expect your test/ test battery to differentiate between people on your selected outcome measures.
Likewise, briefly state how you expect your test/ test battery to change or remain consistent over study conditions/ trials (if relevant).
o Ensure that each of these reliability and validity predictions can actually be tested given your proposed study design. I.e., all relevant samples of participants, testing time points, performance measures, etc., should be included in the study/ series of studies outlined earlier (in the Study Design section) to allow for appropriate evaluation of each prediction.
o Highlight any separate issues that may occur when measuring the reliability and validity of the proposed test/ test battery (i.e., practical and/ or ethical constraints that may prevent you from gauging the most accurate measure of reliability and/ or validity for the new test/ test battery).
• THE TEST EVALUATION PART OF YOUR PROPOSAL IS EXTREMELY IMPORTANT. This section is where you demonstrate to your marker that you understand and know how to evaluate the core psychometric concepts of reliability and validity. Writing this section well is crucial to obtaining a good mark in this assignment.

Conclusions

• Briefly reiterate the main points of the proposal without introducing new material/ information.
• Be sure to convey "the big sell" to your reader, so they understand why funding agencies/ investors should be throwing buckets of money at your project!

References

• Provide a list of all cited sources in the References section. This should be formatted according to appropriate APA 7th guidelines.
• Remember, to find additional references, use the Web of Science or PsycINFO databases, other literature search techniques (e.g., Google Scholar), or follow up references cited in other studies.

Grade descriptions

Grade 7: Overall, the assignment is considered to be of an exceptional standard for third-year university ("high distinction" by UQ criteria).
• The overall piece is exceptionally interesting.
• The proposal contains evidence that the author has consulted an exceptional range of relevant primary literature.
• The proposal incorporates an exceptional range of relevant empirical evidence discussed in appropriate depth.
• Exceptional critical analysis of the literature is demonstrated.
• Evidence of exceptional independent thinking is shown.
• An exceptional rationale is presented, with extremely compelling arguments.
• Fully comprehensive research hypotheses are provided (i.e., at least 2 reliability and 2 construct validity), consistent with empirical evidence.
• Study design allows for the full and complete evaluation of the given research questions (at least 2 forms of construct validity and 2 forms of reliability), with no methodological issues.
• Written expression is exceptional and of a publishable standard.
• No formatting issues and/ or other technical errors are present.

Grade 6: Overall, the assignment is considered to be of a very high standard for third-year university ("distinction" by UQ criteria).
• The overall piece is very interesting.
• The proposal contains evidence that the author has consulted a very wide range of relevant primary literature.
• The proposal incorporates a very wide range of relevant empirical evidence discussed in appropriate depth.
• Very good critical analysis of the literature is demonstrated.
• Evidence of very good independent thinking is shown.
• A very good rationale is presented, with compelling arguments.
• Almost fully comprehensive research hypotheses are provided (i.e., at least 2 reliability and 2 construct validity), consistent with empirical evidence.
• Study design allows the given research questions to be fully and completely addressed (at least 2 forms of construct validity and 2 forms of reliability), with few minor methodological issues.
• Written expression is of a very high standard.
• There are a few minor formatting issues (or one minor issue repeated a few times). Very few other technical errors are present.

Grade 5: Overall, the assignment is considered to be of a high standard for third-year university.
• The overall piece is generally interesting.
• The proposal contains evidence that the author has consulted a wide range of relevant primary literature.
• The proposal incorporates a wide range of relevant empirical evidence discussed in appropriate depth.
• Good critical analysis of the literature is demonstrated.
• Evidence of good independent thinking is shown.
• A good rationale is presented, with solid supporting arguments.
• Comprehensive research hypotheses are provided (i.e., at least 2 reliability and 2 construct validity). There may be some inconsistency with empirical evidence on minor aspects.
• Study design allows the given research questions to be addressed (at least 2 forms of construct validity and 2 forms of reliability), but contains some minor methodological issues.
• Written expression is of a high standard.
• There are several minor formatting issues (or one minor issue repeated several times). Few other technical errors are present.

Grade 4: Overall, the assignment is considered to be of an acceptable standard for third-year university.
• The overall piece is somewhat interesting.
• The proposal contains evidence that the author has consulted an acceptable range of relevant primary literature.
• The proposal incorporates an acceptable range of relevant empirical evidence discussed in appropriate depth.
• Acceptable critical analysis of the literature is demonstrated.
• Evidence of acceptable independent thinking is shown.
• An adequate rationale is presented, but there is room for improvement with regard to the supporting arguments.
• Acceptable research hypotheses are provided (i.e., at least 2 reliability and 2 construct validity). There may be some inconsistency with empirical evidence.
• Study design allows most of the given research questions to be addressed (i.e., allows assessment of most of the 2 forms of construct validity and 2 forms of reliability), but contains some methodological issues.
• Written expression is acceptable for a third-year standard, but there is room for improvement.
• There is one major formatting issue or many minor issues (or one minor issue repeated many times). Some technical errors are present.

Fail: Overall, the assignment is not considered to be of an acceptable standard for third-year university.
• The overall piece holds limited interest.
• The proposal contains evidence that the author has not consulted an appropriate range of relevant primary literature.
• The proposal does not incorporate an acceptable range of relevant empirical evidence discussed in appropriate depth.
• Inadequate critical analysis of the literature is demonstrated.
• Inadequate evidence of independent thinking is shown.
• An unclear, inappropriate, or no rationale is presented for test development.
• Research hypotheses are not provided (at least 2 reliability and 2 construct validity hypotheses were needed), are inappropriate, or are inconsistent with empirical evidence.
• Study design does not allow the given research questions to be addressed (i.e., does not allow assessment of at least 2 forms of construct validity and 2 forms of reliability), or contains some major methodological issues.
• Written expression is not acceptable for a third-year standard.
• There are several major formatting issues and/ or an unacceptable number of technical errors.

Marking criteria

Your assignment will be marked using the following rubric. Each of 24 criteria will be graded 1 to 7 (or 0 if that section is absent). This will be automatically converted into a % mark (the mid-point of the relevant grade range, based on the published PSYC3020 grade cut-offs). These will be averaged according to the weightings given below to give an overall %. This overall % mark will then be adjusted using the additional mark modifiers listed after the criteria. For each criterion below, the first percentage is its weighting within its section and the second (in brackets) is its weighting relative to the overall unadjusted assignment mark.

Executive summary (5% of the overall mark):
• Content – 40% (2%)
• Interest – 40% (2%)
• Written expression – 20% (1%)

Aims and significance (10%):
• Quality of justification – 50% (5%)
• Interest – 30% (3%)
• Written expression – 20% (2%)

Background (20%):
• Evidence of background reading – 25% (5%)
• Provision of relevant empirical evidence – 25% (5%)
• Critical analysis – 20% (4%)
• Independent thinking – 15% (3%)
• Written expression – 15% (3%)

Proposed test/ test battery and rationale (20%):
• Quality of rationale – 40% (8%)
• Independent thinking – 30% (6%)
• Interest – 15% (3%)
• Written expression – 15% (3%)

Study design (15%):
• Study design permits evaluation of all appropriate reliability and validity estimates – 35% (5.25%)
• Quality of the proposed methodology – 50% (7.50%)
• Written expression – 15% (2.25%)

Test evaluation: Assessment of reliability and validity (25%):
• Validity evaluation quality – 45% (11.25%)
• Reliability evaluation quality – 45% (11.25%)
• Written expression – 10% (2.50%)

Conclusions (5%):
• Content – 40% (2%)
• Interest – 40% (2%)
• Written expression – 20% (1%)

Mark modifiers (these are applied to your unadjusted assignment mark):
• APA errors: 0 (no errors) to -5% (substantial errors throughout).
• Spelling, grammar, and punctuation errors: 0 (no errors) to -5% (substantial errors throughout).
• Exceeding word count: 2201-2399 words = -5%; 2400-2599 words = -10%; 2600 or more words = -15%.
• Late penalty: -10% per day (or part thereof, including weekends and public holidays).
• Fine-tuning bonus: 0 to +10%. This is a discretionary bonus your tutor can award if, for example, you are graded 7 for all criteria (which would give you a mark at the mid-point of the Grade 7 range) but the tutor believes your assignment deserves more than this. If you receive a bonus which results in your adjusted mark being more than 100%, then your mark will be capped at 100%.

Frequently asked questions

How many references should I read?

This is like asking "How long is a piece of string?" The answer obviously depends on the length and quality of your references, as well as how thoroughly you read them. But if you really want a ball-park guideline, then read the equivalent of about 10 short-to-medium length relevant articles.

How many marks do I lose if I hand in my work late without an approved extension?

10% per day or part thereof (see Electronic Course Profile).
How do I apply for an extension?
Applications for an extension should be submitted before the due date; applications after the due date will only be accepted in exceptional cases (e.g., severe illness, hospitalisation, or for compassionate reasons). Please refer to the course policy and guidelines (section 6.1 of the Electronic Course Profile) for further information, as well as the following my.UQ page on extension eligibility: https://my.uq.edu.au/information-and-services/manage-my-program/exams-and-assessment/applying-extension

Go to the following link to request an extension: https://my.uq.edu.au/node/218/2#2

Do not contact Jo Brown or your tutor to apply for an extension (though it is a good idea to keep your tutor in the loop with regard to your situation). Also keep in mind that applying for an extension does not guarantee that one will be granted, or granted for the full length of time you have requested. Be sure to check whether your extension request has been granted and, if so, what your extended due date is.

What do I do if I think my assignment has been marked unfairly?
1. Arrange a meeting (which can be via Zoom) with your marker to discuss the assignment (this is a compulsory requirement). The aim of this meeting is for them to explain how your mark was derived based on the marking criteria. If there is an unambiguous error in the marking, then they may be able to correct it at this stage.
2. If you aren't satisfied with the outcome of this conversation, then you can formally request a re-mark as outlined in the Handbook of University Policies and Procedures (HUPP). You need to submit this request within four weeks of the release of the assignment marks. To do this, go to the following link: https://my.uq.edu.au/information-and-services/manage-my-program/exams-and-assessment/querying-result and go to "Request a re-mark", then click the "Request now" button. Note that the assessment re-mark form will ask whether you have sought feedback from the lecturer/ course co-ordinator/ head of school: if you have met with your marker, you can respond "yes" to this (they can be considered to be acting on the course co-ordinator's behalf in this instance). You also need to provide a written explanation of why you believe the mark awarded does not reflect your performance with respect to the published assessment criteria for the piece of assessment.
3. If a re-mark is approved, then the mark awarded by whoever is allocated as the second marker will be recorded instead (note that this final mark could be either higher or lower than the original mark).

Please see the Electronic Course Profile (ECP) for further details of the policies on collusion, plagiarism, and assessment extensions.

Appendix: Considering Topic 1?
Topic 1 is the open-ended option for this assignment: if you choose it, your research proposal can test any skill/ ability/ trait in an applied setting of your choice. However, keep in mind that your new test/ battery of tests needs to constitute a psychological behavioural measure. You are required to seek your tutor's approval for a Topic 1 research proposal, and you are strongly advised to contact your tutor early because approval can be a drawn-out process (note: check whether your tutor has a deadline for Topic 1 approvals). The process of seeking topic approval may be conducted via email, verbal communication, or a combination of both, depending on your tutor.
However, final topic approval from your tutor must be in writing via email. It is advisable to chat with your tutor about your proposal idea before seeking formal approval. You will need to address the following questions/ points to obtain approval for your proposal idea:
1. Define/ operationalise the skill/ ability/ trait the proposed test measures (e.g., post-concussion cognitive ability; clinical skills; driving ability). Identify up to 3-4 tests/ underlying variables that are used to measure this skill/ ability/ trait. The word count will limit how many tests/ underlying variables you can include in the proposal.
2. State how the proposed test is novel. Does it address problems with existing tests? Does it apply an existing test to a new population? Is it brand new? Why should the proposed test be funded?
3. Provide a rough description of the form of the proposed test.
4. How will two reliability and two validity strategies be used to evaluate the proposed test?

Depending on your proposal idea, your tutor may also follow up with additional questions related to the points below [refer to Section 3 of this briefing for more details]:
5. Why is it important to measure your chosen variable(s)? What is the rationale? Remember, it needs to be of real benefit to society.
6. Who is your population of interest? Industry? Age?
7. How will your variable(s) be measured? Self-report? Behavioural observation? How will the test be administered? How is it scored? HOW does WHO do WHAT, WHEN, and WHY? Refer to the Assignment Primer 2 – Concussion Activity tutorial for inspiration.
8. What practical and/ or ethical considerations of your proposed test are important? How are they addressed?
9. Is a dependent/ outcome variable required? What about a contrast group?

Ideally, your answers to these questions should be concrete. Your tutor should have a clear idea of how/ why the research proposal is important, how the test(s) will be measured, and how the test(s) will be validated. Past experience suggests a positive relationship between detail and performance: clearer, more concrete approval requests have been associated with research proposals that scored higher marks, probably because they were better reasoned through. Good luck!