

2018 Impact Factor: 3.804
2018 Ranking: 14/130 in Psychology, Clinical
Source: Journal Citation Reports (Web of Science Group, 2019)

Editor: Douglas B. Samuel, Purdue University, USA

eISSN: 1552-3489 | ISSN: 1073-1911 | Current volume: 26 | Current issue: 4 | Frequency: 8 Times/Year

Assessment (ASM) focuses on advancing clinical assessment science and practice, with an emphasis on information relevant to the use of assessment measures, including test development, validation, and interpretation practices. Articles cover the assessment of cognitive and neuropsychological functioning, personality, and psychopathology, as well as empirical assessment of clinically relevant phenomena, such as behaviors, personality characteristics, and diagnoses. This journal is a member of the Committee on Publication Ethics (COPE).

About the Title

Keep abreast of current research in assessment science and practice with Assessment, the journal that brings you important articles derived from psychometric research, clinical comparisons, theoretical formulations, and literature reviews that fall within the broad domain of clinical and applied psychological assessment. The journal presents information of direct relevance to the use of assessment measures, including the practical applications of measurement methods, test development and interpretation practices, and advances in the description and prediction of human behavior. In addition, the journal examines the role of psychological assessment in advancing major issues in clinical science and practice.

The scope of the journal encompasses the evaluation of individuals in clinical, counseling, health, and forensic settings.



Assessment publishes articles advancing clinical assessment science and practice. The emphasis of the journal is on information relevant to the use of assessment measures, including test development, validation, and interpretation practices. The scope of the journal includes research that can inform assessment practices in mental health, forensic, medical, and other applied settings. Papers that focus on the assessment of cognitive and neuropsychological functioning, personality, and psychopathology are invited. Most papers published in Assessment report the results of original empirical research; however, integrative review articles and scholarly case studies will also be considered. Of particular interest are papers focusing on (a) new assessment methodologies and techniques for both researchers and practitioners, (b) how assessment methods and research inform understanding of major issues in clinical psychology, such as the structure, classification, and mechanisms of psychopathology, and (c) multi-method assessment research and the integration of assessment methods in research and practice. The journal also encourages submissions introducing useful, novel, and non-redundant instruments or demonstrating how existing instruments have applicability in new research or applied contexts. All submissions should provide strong rationales for their efforts and articulate important implications for assessment science and/or practice.
Articles are invited that target empirical assessment of clinically relevant phenomena such as behaviors, personality characteristics, and diagnoses. Research subjects may represent diverse age and socioeconomic categories and both clinical and nonclinical populations. Research reviews and methodologically-focused papers will be considered.

Associate Editors
A. Alexander Beaujean Baylor University, USA
Mike Chmielewski Southern Methodist University, USA
Nicholas Eaton Stony Brook University, USA
Frank G. Hillary Pennsylvania State University, USA
Kristian E. Markon University of Iowa, USA
Michelle M. Martel University of Kentucky, USA
Stephanie N. Mullins-Sweatt Oklahoma State University, USA
Kristin Naragon-Gainey University at Buffalo, USA
Thomas Olino Temple University, Department of Psychology, USA
Gina Rossi Vrije Universiteit Brussel (VUB), Belgium
Consulting Editors
Robert Ackerman University of Texas - Dallas, USA
James R. Allen University of Minnesota Medical School, USA
Jaime L. Anderson Sam Houston State University, USA
Emily Ansell Yale University School of Medicine, USA
Paul A. Arbisi Minneapolis Veterans Affairs Medical Center, USA
Randolph C. Arnau University of Southern Mississippi, USA
Michael C. Ashton Brock University, Canada
Bradley Axelrod Wayne State University, John D. Dingell VA Medical Center, USA
Lindsay E. Ayearst, PhD Multi-Health Systems Inc., USA
R. Michael Bagby University of Toronto, Canada
Matthew Baity Alliant International University, Sacramento, USA
William B. Barr NYU Comprehensive Epilepsy Center, USA
Yossef S. Ben-Porath Kent State University, Ohio, USA
Stephen D. Benning Vanderbilt University, USA
David T. R. Berry University of Kentucky, USA
Mark Blais Massachusetts General Hospital/Harvard University Medical School, USA
Robert F. Bornstein Adelphi University
Amy B. Brunell Ohio State University, USA
Danielle Burchett California State University, Monterey Bay, CA, USA
Nicole M. Cain Rutgers University, USA
Matthew Calamia Louisiana State University, USA
David S. Chester Virginia Commonwealth University, USA
Lee Anna Clark University of Notre Dame, USA
David E. Conroy The Pennsylvania State University, USA
Cristina Crego University of Kentucky, USA
Keith R. Cruise Fordham University, USA
Mark D. Cunningham Clinical & Forensic Psychologist
Barbara De Clercq Ghent University, Belgium
Julia M. DiFilippo Strongsville, OH, USA
Jacobus A. M. Donders Mary Free Bed Rehabilitation Hospital, USA
Brent Donnellan Texas A&M University, Department of Psychology
Lea Dougherty University of Maryland, USA
Kevin S. Douglas Simon Fraser University, Canada
John F. Edens Texas A&M University, USA
Joseph L. Etherton Texas State University, USA
Thomas A. Fergus Baylor University, USA
Jessica K. Flake McGill University, Canada
Johnathan D. Forbey Ball State University, USA
Andrea Fossati LUMSA University, Rome, Italy, and San Raffaele Hospital, Milano, Italy
Eiko Fried Leiden University, Netherlands
Tania Giovannetti Temple University, USA
Todd Girard Ryerson University, Canada
Catherine Glenn University of Rochester, New York, USA
Robin Green University of Toronto/Toronto Rehab Centre, Canada
Michael Gurtman University of Wisconsin Parkside, USA
Michael Hallquist University of Pittsburgh, USA
Richard W. Handel Eastern Virginia Medical School, USA
Robert K. Heaton University of California, San Diego, USA
Kirk S. Heilbrun Drexel University, USA
Brian Hicks University of Minnesota, USA
Joeri Hofmans Vrije Universiteit Brussel, Belgium
Christopher J. Hopwood University of California - Davis, USA
John Hunsley University of Ottawa, Canada
John A. Johnson Pennsylvania State University, DuBois, USA
Zornitsa Kalibatseva Stockton University, USA
Jan H. Kamphuis University of Amsterdam, Netherlands
Radhika Krishnamurthy Florida Institute of Technology, USA
Daryl G. Kroner Southern Illinois University Carbondale, USA
Robert Krueger University of Minnesota, USA
Kevin R. Krull Texas Children's Hospital, USA
Ian Kudel Cincinnati Children's Hospital Medical Center, USA
John E. Kurtz Villanova University, USA
Sean Lane Purdue University, USA
Tayla T.C. Lee Ball State University, USA
Freedom Y.K. Leung Shaw College, Chinese University of Hong Kong, Hong Kong, China
Libo Li UCLA Integrated Substance Abuse Programs, USA
Sara Lowmaster Boston VA, USA
Melissa Sue Magyar Texas A&M
Kenneth Mah Toronto General Hospital, Canada
E. Mark Mahone Johns Hopkins University School of Medicine, USA
Patrick A. Markey Villanova University, USA
Rob Meijer University of Groningen, Netherlands
Gregory J. Meyer University of Toledo, USA
Joshua D. Miller The University of Georgia, USA
Richard Miller Brigham Young University, USA
Leslie C. Morey Texas A&M University, USA
Jason Moser Michigan State University, USA
Daniel Murrie University of Virginia, USA
Tonia L. Nicholls British Columbia Mental Health & Addiction Services, Canada
Molly Nikolas The University of Iowa, USA
Thomas Oltmanns Washington University in St. Louis, Department of Psychological and Brain Sciences, USA
Augustine Osman University of Texas, San Antonio, USA
Randy K. Otto USA
Sam Parsons University of Oxford, UK
Thomas D. Parsons University of North Texas, USA
Christopher Patrick Florida State University, Department of Psychology, USA
Ralph L. Piedmont Loyola College in Maryland, USA
Aaron L. Pincus Pennsylvania State University, USA
James Prisciandaro Medical University of South Carolina, USA
Lena C. Quilty Centre for Addiction and Mental Health, Canada
Cecil R. Reynolds Texas A & M University, USA
Michael J. Roche Pennsylvania State University, Altoona
Richard Rogers University of North Texas, College of Liberal Arts and Social Sciences, USA
Barry Rosenfeld Fordham University, USA
Steve Rubenzer Diplomate in Forensic Psychology
Gentiana Sadikaj McGill University, Canada
Darcy A. Santor University of Ottawa, Canada
Shannon Sauer-Zavala Boston University, USA
Dan Segal University of Colorado at Colorado Springs, USA
Martin Sellbom University of Otago, Department of Psychology, New Zealand
Carla Sharp University of Houston, USA
Margaret Sibley Florida International University, USA
Leonard J. Simms University at Buffalo, USA
Susan C. South Purdue University, USA
Stephanie Stepp Western Psychiatric Institute and Clinic, USA
David Streiner McMaster University, Canada
Jennifer L. Tackett Northwestern University, Department of Psychology, USA
Antonio Terracciano Florida State University, USA
Sander Thomaes Utrecht University, Netherlands
Katherine M. Thomas Purdue University, USA
Timothy J. Trull University of Missouri, Department of Psychological Sciences, USA
Erik Turkheimer University of Virginia, Department of Psychology, USA
Carlo O.C. Veltri St. Olaf College, USA
Edelyn Verona University of South Florida, USA
David Watson University of Notre Dame, Department of Psychology, USA
Nathan C. Weed Central Michigan University, USA
Irving B. Weiner University of South Florida, USA
Thomas A. Widiger University of Kentucky, Department of Psychology, USA
Blair E. Wisco, Ph.D. University of North Carolina at Greensboro, USA
Frank C. Worrell University of California, Berkeley, USA
Aidan Wright University of Pittsburgh, USA
Dustin B. Wygant Eastern Kentucky University, USA
Andrew Zabel Kennedy Krieger Institute, USA
Patricia A. Zapf John Jay College of Criminal Justice, The City University of New York, USA
Assistants to the Editors
Meredith A. Bucher Purdue University, USA
Abstracting and Indexing
  • Abstract Journal of the Educational Resources Information Center (ERIC)
  • Clarivate Analytics: Current Contents - Physical, Chemical & Earth Sciences
  • ERIC Current Index to Journals in Education (CIJE)
  • Index Medicus
  • PsycINFO
  • Psychological Abstracts
  • SafetyLit
  • Scopus
  • Social SciSearch
  • Social Sciences Citation Index (Web of Science)

The editor invites high-quality manuscripts covering a broad range of topics and techniques in the area of psychological assessment. These may include empirical studies of the assessment of personality, psychopathology, cognitive functions, or behavior; articles dealing with general methodological or psychometric topics relevant to assessment; or comprehensive literature reviews in any of these areas. The journal encourages submissions evaluating (a) new assessment methodologies and techniques for both researchers and practitioners, (b) how assessment methods and research inform understanding of major issues in clinical psychology, such as the structure, classification, and mechanisms of psychopathology, and (c) multi-method assessment research and the integration of assessment methods in research and practice. Additionally, the journal encourages submissions introducing useful, novel, and non-redundant instruments or demonstrating how existing instruments have applicability in new research or applied contexts. All submissions should provide strong rationales for their efforts and articulate important implications for assessment science and/or practice.

Research participants may represent both clinical and nonclinical populations. Manuscripts should report how sample size was determined, all data exclusions, all manipulations, and all measures in the study.

In general, regular articles should not exceed 30 pages of text, excluding Title Page, Abstract, Tables, Figures, Footnotes and Reference list.

Authors submitting manuscripts to the journal should not simultaneously submit them to another journal, nor should manuscripts have been published elsewhere, including the World Wide Web, in substantially similar form or with substantially similar content.


Manuscript Submission:

Manuscripts must be submitted electronically in Microsoft Word or Rich Text Format (.rtf). Figures may be submitted using any of the formats listed below. If requesting a masked review, please ensure that both a manuscript file with no identifying author information and a separate title page with author details are included in your submission. Questions should be directed to the Assessment Editorial Office by email:

If you or your funder wish your article to be freely available online to nonsubscribers immediately upon publication (gold open access), you can opt for it to be included in SAGE Choice, subject to the payment of a publication fee. The manuscript submission and peer review procedure is unchanged. On acceptance of your article, you will be asked to let SAGE know directly if you are choosing SAGE Choice. To check journal eligibility and the publication fee, please visit SAGE Choice. For more information on open access options and compliance at SAGE, including self/author archiving deposits (green open access) visit SAGE Publishing Policies on our Journal Author Gateway.

Preparation of Manuscripts:

Authors should carefully prepare their manuscripts in accordance with the following instructions.

Authors should use the Publication Manual of the American Psychological Association (6th edition, 2009) as a guide for preparing manuscripts for submission. All manuscript pages, including reference lists and tables, must be typed double-spaced.

The first page of the paper (the title page) should contain the article title, the names and affiliations of all authors, authors' notes or acknowledgments, and the name and complete mailing address of the corresponding author. If requesting a masked review, the first page should contain only the article title, and the title page should be uploaded as a separate document.

The second page should contain an abstract of no more than 150 words and five to seven keywords that will be published following the abstract.

The following sections should be prepared as indicated:

Tables. Each table should be fully titled, double-spaced on a separate page, and placed at the end of the manuscript. Tables should be numbered consecutively with Arabic numerals. Footnotes to tables should be identified with superscript lowercase letters and placed at the bottom of the table. All tables should be referred to in the text.

Figures. Electronic copies of figures can be submitted in one of the following file formats: TIFF, EPS, JPEG, or PDF. All figures should be referred to in text. Each figure should appear on a separate page at the end of the manuscript but before the tables, and all titles should appear on a single, separate page.

Endnotes. Notes should appear on a separate page before the References section. Notes should be numbered consecutively and each endnote should be referred to in text with a corresponding superscript number.

References. Text citations and references should follow the style of the Publication Manual of the American Psychological Association (6th edition, 2009).

Authors who want to refine the use of English in their manuscripts might consider utilizing the services of SPi, a non-affiliated company that offers Professional Editing Services to authors of journal articles in the areas of science, technology, medicine or the social sciences. SPi specializes in editing and correcting English-language manuscripts written by authors with a primary language other than English. Visit for more information about SPi’s Professional Editing Services, pricing, and turn-around times, or to obtain a free quote or submit a manuscript for language polishing.

Please be aware that SAGE has no affiliation with SPi and makes no endorsement of the company. An author’s use of SPi’s services in no way guarantees that his or her submission will ultimately be accepted. Any arrangement an author enters into will be exclusively between the author and SPi, and any costs incurred are the sole responsibility of the author.

Supplemental Materials:

Authors are encouraged to consider submitting ancillary analyses and other relevant information as electronic supplements. Such supplements should be uploaded using the supplemental files tag in ScholarOne. Only .doc, .docx, and .pdf files are accepted for published electronic supplements. Electronic supplemental information for published manuscripts should take the form of tables and figures, formatted and annotated just as they would be for a manuscript, but numbered as Table S1, S2, S3, etc., and Figure S1, S2, S3, etc. Article text should refer to material in electronic supplements as appropriate, just as it would a table or figure in the published article.

Registered Reports:

Assessment now offers registered reports (RRs) as an alternative to the regular article format. The primary distinction between these two manuscript types is that regular articles are submitted to the journal after data collection and analyses are completed, whereas the RR format reverses this ordering: manuscripts are submitted prior to data analysis (and often prior to data collection). Prospective authors of an RR submit a Stage 1 manuscript, which reviews the literature that motivates the study and specifies the methods to address the question. This Stage 1 manuscript then goes through the peer-review process, which is similar to that for regular articles: reviewers and the editorial team will suggest revisions and recommend acceptance or rejection. Ultimately, if the journal editor determines that the Stage 1 manuscript is suitable for publication, it is offered In-Principle Acceptance (IPA). IPA is an agreement between the journal and the authors indicating that, assuming the authors carry out the research precisely as they have specified and draw conclusions based on the evidence, Assessment will publish the work. Following IPA, the authors carry out the research and then prepare a Stage 2 manuscript that is submitted to the journal. Of course, the authors retain the right to withdraw the manuscript at any point, before or after IPA; withdrawals will be recorded by the journal in a publicly available section called Withdrawn Registrations. After resubmission, the Stage 2 manuscript is appraised by the editor and either accepted formally or sent out to the original reviewers. In the latter case, the Stage 2 manuscript is evaluated only for the degree to which it faithfully followed the originally planned analyses and drew warranted conclusions, as well as for a review of any exploratory analyses that may have followed.

Guidelines for Registered Reports at Assessment:

As with all registered reports (RRs), those submitted to Assessment should include a complete introduction and methods section. In general, the approach to preparing these sections aims to be consistent with the guidelines offered by the Center for Open Science, but priority is given to these local guidelines where they differ. The authors should provide all the relevant information that will facilitate peer review and an editorial decision before the data have been collected (or analyzed, if that route is chosen). Given that the focus of this journal is on the development, validation, and interpretation of instruments, some additional recommendations are offered for preparing RRs for submission to Assessment. Here we distinguish among three broad classes of RRs that can be considered at Assessment.

  1. The first broad class of RRs are studies that seek to examine the psychometric properties, or construct validation, of existing instruments. This can include any number of examinations, such as temporal consistency, factor structure, measurement invariance, predictive validity, or diagnostic accuracy.

  2. The second class of RRs comprises evaluations that seek to modify an existing instrument, such as by creating an abbreviated or revised version of a scale.

  3. The third and final class of RRs comprises projects that aim to develop a novel instrument. This class presents a unique and novel extension of the RR process, so authors are encouraged to consult with the Editor-in-Chief in advance of submitting such a manuscript.


The introduction section for all RRs should include a complete review of the relevant literature. For studies examining the properties of an existing instrument, this review should describe the current empirical state of the literature for the focal instrument, as well as the degree to which these properties are known for other instruments of the same or similar constructs. It should pay particular attention to the relevant findings as well as the types of samples utilized to date. The authors should make a clear case for the limitations of this existing literature and what motivates the proposed study. It is worth noting here a series of recommendations offered by Dr. Samuel in his incoming editorial (see Samuel, 2019), including the point that simply demonstrating that something has not been done does not mean it should be done. This is particularly true for tests of measurement invariance across demographic variables. Recall that for RRs, the authors should be able to make the case that the proposed research will yield relevant conclusions regardless of the findings. In line with best practices for RRs, authors should note that the introduction section is "locked" following the IPA and can only be altered to correct factual or typographic errors or to incorporate meaningful updates to the literature that occurred in the interim.


Authors should provide a full description of the proposed sampling method, written in the future tense, as well as the expected characteristics of the acquired sample based on the proposed procedures. This should include inclusion and exclusion criteria, including any pre-screen testing, as well as how invalid or incomplete responders will be identified and excluded. It should also include the rationale for utilizing the sample/population and how likely the results are to generalize to the assessment question of interest. The choice of sample size should be informed, where possible, by an a priori power analysis based on the best available estimated effect size, which in many cases will be the lowest meaningful estimate. A priori power should be above 0.8 for all proposed hypothesis tests. If the goal of the study is to estimate an effect size rather than test a hypothesis, then the authors should provide the best estimate available for that effect. For more complex statistical models (e.g., factor analysis), a justification for the sample size should be provided. This could be a formal power analysis using simulation methods or a justification from the methodological literature for adequate power and estimation of the model (see MacCallum, Browne, & Sugawara, 1996). For Bayesian approaches, please consult the OSF guidelines as well as Schönbrodt and Wagenmakers (2016).
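For the simple case of powering a test of a nonzero Pearson correlation, the required sample size can be approximated in a few lines via the Fisher z transformation. The sketch below is illustrative only (the function name and the r = .30 example are ours, not journal requirements) and uses only the Python standard library:

```python
from math import atanh, ceil
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate N needed to detect a population correlation r
    with a two-sided test, via the Fisher z approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_beta = z.inv_cdf(power)            # quantile for desired power
    fisher_z = atanh(r)                  # effect size on the z scale
    return ceil(((z_alpha + z_beta) / fisher_z) ** 2 + 3)

# e.g., treating r = .30 as the lowest meaningful effect, at 80% power:
print(n_for_correlation(0.30))  # → 85
```

Note how quickly the required N grows as the lowest meaningful effect shrinks, which is why the guidelines ask for the smallest estimate rather than the most optimistic one.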

The method section should include a complete list of all instruments to be administered, their ordering, as well as the method of administering (e.g., computerized vs. paper and pencil) and scoring the scales (e.g., sum, mean, IRT). If any experimental manipulations are planned, those should be provided in sufficient detail to permit repetition. In short, the method section should provide all the details necessary for an independent researcher to repeat the study.

The proposed analysis pipeline can be included either in a statistical analyses subsection of the methods or as a prospective results section. If the latter is chosen, authors should write this section using placeholders for the statistical results (e.g., "the CFI for model A was .XXX"), such that actual values can be substituted following data analysis. The analysis plan should include all data-processing steps and the precise nature of the planned analyses, including any covariates to be used and the approach to correcting for multiple comparisons. Ideally, all analysis scripts should be written beforehand and submitted with the Stage 1 report. Further, the use of simulated data to prepare tables and figures is highly encouraged. For analytic steps that are contingent upon the results of initial analyses (this will be particularly true for instrument revision and development; please see below), the authors should specify the decision rules that will be used to determine how the results inform subsequent steps and support the rationale for those approaches. Of course, authors are free to conduct any number of analyses beyond those specified in the RR, but those should be included in a separate section of the results labeled "exploratory analyses." Finally, the method section should specify a timeline by which the study will be completed. Extensions to this deadline for submission of the Stage 2 report are negotiable with the Editor-in-Chief.
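One way to prepare such a script-first pipeline is to dry-run the full analysis on simulated data before any real data exist. The sketch below is our illustration, not a journal requirement: it simulates single-factor item responses and computes Cronbach's alpha with the same code that would later run on the collected sample.

```python
import random

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(col) for col in items) / var(totals))

# Simulate 200 respondents on 5 items loading on one latent factor,
# so the pre-registered script can be exercised end to end.
random.seed(1)
theta = [random.gauss(0, 1) for _ in range(200)]           # latent trait
items = [[t + random.gauss(0, 1) for t in theta] for _ in range(5)]
alpha = cronbach_alpha(items)
```

Running the planned analyses against simulated data of the expected shape both validates the code and produces the placeholder tables and figures the Stage 1 submission calls for.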

Special Considerations for Instrument Development and Revision:

Scale development is an inherently iterative process that includes a large number of decision points, recursive and dependent aspects, and typically multiple data collections that all serve the broader purpose of construct validation (Cronbach & Meehl, 1955). As such, the most appropriate approach to RRs may entail pre-registering a set of decision rules or standard operating procedures that outline aspirational properties of the measure, as well as plans for how competing properties will be prioritized. For example, scale developers may seek to select indicators that correlate highly, but not too highly, with each other, while also balancing the need to cover the breadth of the latent construct. Thus, RR authors may specify a range of inter-indicator correlations that will be prioritized (e.g., values below r = .60 but above r = .20), while also recognizing that the lower threshold might need to be relaxed to r = .15 if low base-rate indicators are necessary to maximize test information between theta of -4.0 and 4.0. The values mentioned here are not suggestions, but rather exemplars of the type of language and priorities that might be considered. Ultimately, the field may settle on a set of considerations to be specified in a registered report of a new instrument, as well as recommendations for the specific properties. Rather than attempting to specify those at this point in time, or waiting for them to be developed, the approach at Assessment is to encourage this work and learn from the process. Therefore, we encourage RR authors to consult seminal works on scale construction and construct validation (e.g., Clark & Watson, 1995) when considering the relevant steps that should be pre-specified. In general, we encourage authors to adopt the approach that is most sensible for their instrument.
In some cases, this might all be anticipated and specified in the Stage 1 manuscript, but Assessment is also open to the practice of incremental registrations that are resubmitted following preliminary data collection.
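Decision rules of this kind lend themselves to being written as explicit code in the Stage 1 materials. The sketch below is an illustration (the function name and the toy data are ours; the .20/.60 band simply echoes the example thresholds above, not journal policy) that flags item pairs whose correlations fall outside a pre-registered range:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def flag_item_pairs(items, lo=0.20, hi=0.60):
    """Return (i, j, r) for item pairs whose inter-item correlation
    falls outside the pre-registered [lo, hi] band."""
    flagged = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            r = pearson(items[i], items[j])
            if not (lo <= r <= hi):
                flagged.append((i, j, round(r, 2)))
    return flagged

# Toy data: items 0 and 1 are near-duplicates (r too high),
# item 2 is essentially unrelated to both (r too low).
item0 = [1, 2, 3, 4, 5, 6]
item1 = [1, 2, 3, 4, 5, 7]
item2 = [4, 2, 6, 1, 5, 3]
print(flag_item_pairs([item0, item1, item2]))
```

Pre-registering the rule as runnable code removes ambiguity about how the band will be applied when the real item pool is screened.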

A critical consideration for RRs that seek to build a novel instrument is that it is imperative to first demonstrate the need for a new instrument. This necessarily entails an introduction that reviews the most relevant existing instruments measuring the construct, as well as those that measure related or overlapping constructs. The key point here is to outline why an additional measure is needed and how the proposed instrument will fit into the existing literature. Importantly, this does not suggest that additional or complementary measures of existing constructs are unnecessary or unwelcome. Quite the contrary. Authors should, however, make clear that the other measures exist and make a compelling case that their conceptualization, operationalization, or measurement approach is a meaningful addition to the field.

Overarching Guidelines and Expectations:

  1. Ethics/Institutional Review Board approval for the proposed research is expected to be secured prior to the submission of an RR to Assessment. There may well be changes to the protocol suggested during the Stage 1 review that will need to be vetted with the IRB/ethics board; however, existing approval ensures that the research can be conducted as proposed. If there are extenuating circumstances that complicate this in a given situation, the prospective authors are encouraged to contact the editor.
  2. Similarly, the resources to complete the proposed research are expected to be secured before the Stage 1 submission. As above, this ensures that the resources are in place to carry out the research as it has been proposed, including both equipment/facilities and funds available for human-subjects payments. If suggestions made during the Stage 1 review conflict with funding stipulations, those conflicts can be arbitrated during the Stage 1 review process.
  3. As a general rule, authors of RRs are expected to make all data, code, and materials publicly available. There will be times when certain materials (e.g., copyrighted instruments) cannot be posted, as well as situations where datasets cannot be shared due to human-subjects protections or other considerations. In such cases, authors are encouraged to discuss these with the editor as soon as possible to work toward a resolution. The guiding principle is one of openness and transparency, so exceptions will require justification.
  4. At the point of Stage 1 in-principle acceptance, authors are required to formally register their protocol in a recognized repository, either publicly or under temporary embargo until submission of the Stage 2 manuscript. The Stage 2 manuscript, when submitted, must then include a link to the registered protocol. Stage 1 protocols can be quickly and easily registered at the dedicated Registered Reports registration portal at


References:

Clark, L. A., & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7, 309-319. doi:10.1037/1040-3590.7.3.309

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.

MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1, 130-149.

Schönbrodt, F. D., & Wagenmakers, E.-J. (2016, October 25). Bayes factor design analysis: Planning for compelling evidence.
