COULD STATED POLITICAL AFFILIATION INFLUENCE A CANDIDATE'S PERCEIVED APPROPRIATENESS TO ATTEND GRADUATE SCHOOL? AN AUDIT STUDY

Bias and stereotypes, even in professional realms, are ubiquitous and, unfortunately, an inescapable fact of life in society. Psychologists study bias and discrimination in order to understand more fully when they arise, as well as what can be done to confront them. Bias and discrimination researchers have demonstrated that women, racial/ethnic minorities, members of the LGBTQ community, and other marginalized groups continue to suffer from the effects of discrimination. However, recent investigations have indicated that discrimination based on an individual's stated political affiliation may also exist. Other researchers point out that political affiliation bias and discrimination may be particularly prevalent in the higher education community. Therefore, the aim of the present study was to use an audit-type quasi-experimental design to examine possible signs of bias and discrimination in a sample of undergraduate students and Amazon MTurk users. A structural equation model (SEM), specifically a path model, was used to investigate whether political affiliation contributed over and above a host of other variables to the subjective rating of a fictional applicant's candidacy for graduate school and employment. Contrary to some reports, stated political affiliation of a particular party did not seem to influence the candidate's rating. Further, the MTurk and undergraduate student samples showed remarkable consistency in their ratings. Future research may want to examine more salient cues of political affiliation as well as various operational definitions of discrimination and bias.


INTRODUCTION
Bias and stereotypes, even in professional realms (Kannan & Khan, 2014), are ubiquitous and almost an inescapable fact of life in society. It is widely believed that gender and racial discrimination contribute to the underrepresentation of women and minorities in top organizations (Ginther & Kahn, 2006). For example, as of 2008, women accounted for only 22% of the workforce within STEM (science, technology, engineering, and mathematics) fields that are traditionally associated with men (Fried & MacCleave, 2009). There has been substantial research on discrimination and how it has influenced job-market outcomes such as hiring and pay scale (Smith, 2002). In higher education, regardless of rank or institution, female faculty members earned about 80% of the salary of men (Okpara, 2005). These data are corroborated on a national scale: the median earnings of women in the United States in 2016 were 80.5% of men's earnings (Semega, Fontenot, & Kollar, 2017).
Furthermore, it is not only women who suffer workplace discrimination. For example, both men and women are often penalized when successful in areas that are not consistent with their stereotypic role, and those who exhibit counterstereotypical behavior are often subject to penalties or punishments (Cialdini & Trost, 1998). In a randomized audit-paradigm experiment, Gift and Gift (2015) sent 1,200 politically branded resumes in response to help-wanted ads in two U.S. counties, one highly conservative and the other highly liberal. Results indicated that job seekers with minority partisan affiliations were statistically less likely to obtain a callback than candidates without any partisan affiliation. Additionally, applicants sharing the majority partisan affiliation were not significantly more likely to receive a callback than nonpartisan candidates. These results suggest that individuals may sometimes place themselves at a disadvantage by including partisan cues on their resumes.
In academia, where substantial efforts have been made to promote diversity, egalitarianism, and multiculturalism (APA, 2002), fictional requests from prospective students seeking future mentoring revealed that faculty were significantly more responsive to white males than to all other categories of students (e.g., women and Black, Hispanic, and Chinese students) collectively, particularly in higher-paying disciplines and at private institutions (Milkman, Akinola, & Chugh, 2012).
Along with the established gender and race/ethnicity bias and discrimination, there have been claims suggesting a political affiliation bias in higher education.
Reports suggest that American professors are decidedly liberal in political self-identification, party affiliation, voting, and a range of social and political attitudes (Gross & Simmons, 2007; Rothman et al., 2005; Schuster & Finkelstein, 2006; Zipp & Fenwick, 2006). In a 2007 study, 62% of professors described themselves as some shade of liberal, 18% as middle of the road, and 20% as some shade of conservative, compared to national averages of 29% liberal, 32% moderate, and 39% conservative among Americans (Gross & Simmons, 2007). While political affiliation may not in and of itself be a problem, Gross and Fosse (2012) contend that it may become problematic when higher education institutions, serving as loci of knowledge production and dissemination, are influenced in important ways by professors' political views. In one investigation, an anonymous review from Inbar and Lammers (2012) found that "in decisions ranging from paper reviews to hiring, many social and personality psychologists admit that they would discriminate against openly conservative colleagues. The more liberal respondents are, the more willing they are to discriminate" (p. 496).
Based on the review of the literature presented in chapter 2 and the surrounding contention regarding the possible discrepancy in political viewpoints in academia, more research is warranted on the topic. However, the measurement and detection of bias can be, as Rom and Musgrave (2014, p. 150) point out, "both a hot topic and a hot potato." Therefore, the proposed study adapts some of the methods of Milkman, Akinola, and Chugh (2012) and Fosse, Gross, and Ma (2011) and integrates them with the resume audit paradigm of Gift and Gift (2015).
Given that (a) studies have investigated possible political affiliation bias in academia (e.g., Bullers, Reece, & Skinner, 2010; Gross & Fosse, 2012), (b) research has indicated potential political affiliation preference (e.g., Duarte et al., 2015), and (c) no experimentally designed research has investigated the presence of political affiliation preference specifically in academia, the proposed study is justified. The present study is significant in that it could identify any potential political affiliation preference perceived by potential graduate students.
Additionally, the results of this study could be useful in designing professional development curricula aimed at confronting perceived bias and discrimination in academia. Finally, the proposed study could shed light on differences and, more importantly, begin to integrate perspectives on bias and discrimination in hopes of understanding them more completely.
Research questions addressed in the present study include: (a) Do undergraduate students perceive a preference towards a potential graduate student with a stated political affiliation?

People may not be able to accurately and honestly self-report on possible biases and prejudices because these feelings may not always be consciously available to them (Greenwald & Banaji, 1995).
Adding to the confusion, words like bias, prejudice, discrimination, and stereotype are often misused, misconstrued, or simply used interchangeably to refer to the same construct. Therefore, proper operational definitions are warranted before any further discussion. Bias is an overarching term regarding preference towards a certain status or social group and can be universal or location specific (Fiske, 2010). Biased individuals believe the biases they exhibit are right, without regard for the truth. A stereotype is a fixed, overgeneralized belief about a particular group or class of people (Cardwell, 1996). Stereotyping, as defined by Fiske (2010), is the application of an individual's own thoughts, beliefs, feelings, and expectations to other individuals without first obtaining factual knowledge about them. Prejudice is an emotional reaction to another individual or group of individuals based on preconceived ideas about the individual or group (Fiske, 2010). Finally, discrimination is the application of preconceived beliefs about an individual or group: the denial of equal rights based on prejudices and stereotypes.
Researchers have been interested in studying stereotypes and discrimination since at least the 1920s. Journalist Walter Lippmann coined the term stereotype in 1922, referring to stereotypes as pictures in the head, or mental reproductions of reality (Lippmann, 1956). Katz and Braly (1933) conducted one of the first systematic studies of racial stereotyping. The two investigators at Princeton University distributed questionnaires asking students to describe different ethnic groups (e.g., Irish, German, African American) using a list of 84 personality traits. The students were asked to pick out four or five traits that they thought were typical of each group. Katz and Braly (1933) discovered considerable agreement among the traits selected based on ethnic or racial status: white Americans were rated as progressive, industrious, and ambitious, while African Americans were seen as lazy, ignorant, and musical. Remarkably, the mostly white participants demonstrated consistent ratings for groups, even groups with whom they had never had personal contact. While groundbreaking, the Katz and Braly (1933) study and subsequent early stereotype research lacked ecological validity because social desirability and demand characteristics were unavoidable. Further, early research relied solely on verbal self-reports of stereotypes and was therefore subject to social desirability and interpretation. Finally, there was a problem with cause and effect: just because people were aware of stereotypes did not necessarily mean that the stereotypes influenced their behavior.
In response to the challenge of measuring and defining these phenomena, psychologists devised measures of implicit bias and prejudice, that is, of stereotypes and bias that arise automatically without conscious awareness. These implicit attitudes, when unchecked, could lead to explicit bias and discrimination against certain groups (Greenwald & Banaji, 1995). The more covert measures include indirect forms of self-report, physiological measures such as EEG or EMG, reaction-time measures, and direct behavioral observation. Chief among implicit measures, and perhaps the most notable, is the Implicit Association Test (IAT; Greenwald, McGhee, & Schwartz, 1998). The test measures a participant's implicit attitudes towards a stimulus, defined as introspectively unidentified (or inaccurately identified) traces of past experience that mediate favorable or unfavorable thought, feeling, or action towards social objects (Greenwald & Banaji, 1995). These social objects may be represented by pictures of faces of differing skin tones, body types, genders, and ages, and by symbols of disability and sexual orientation. The IAT was the first attempt by psychologists to measure implicit attitudes by measuring participants' underlying automatic evaluations of a stimulus.
On one hand, approaches to understanding stereotyping and prejudice through cognitive appraisals that give rise to reactions which then shape action and behavior correspond nicely to theories linking beliefs, attitudes, and behavior in a logical manner (e.g., the theory of reasoned action; Fishbein & Ajzen, 1975). On the other hand, more recent investigations (Dovidio & Gaertner, 2004) have indicated that prejudice, stereotyping, and discrimination can sometimes be elicited in a manner that is quite different from reasoned thought. Researchers argue that while self-report measures of bias and discrimination may capture how individuals deliberately process information about social impressions, indirect measures like the IAT may have the benefit of capturing more spontaneous and automatic responses to other social groups (Bodenhausen & Richeson, 2010).
Several theories have attempted to explain how and why stereotypes are formed. Social identity theory (Tajfel & Turner, 1986) and accompanying research have demonstrated that people tend to categorize themselves as similar or different from others based on shared identity-relevant traits, such as race, gender, and political orientation. Moreover, group attachment suggests that individuals are motivated to select categorization processes that privilege certain groups over others. These shared identities draw individuals together, creating a perception of similarity, which leads to attraction and better treatment of demographic in-group than out-group members.
Showing greater affinity toward members of one's own demographic group can lead organizational members to provide preferential treatment to those who share their demographics. For example, a traditional business setting historically dominated by white males may show preference and greater affinity towards hiring white men (the in-group) and may discriminate against women and ethnic minorities (the out-group).
In addition to theories in social psychology, schema theory has been borrowed and adapted from cognitive psychology as a way to conceptualize stereotypes and to understand their derivation. Bartlett (1932) originally described schemas as organized conceptions of people, places, and events that individuals utilize when processing new information.
Further, Bartlett (1932) suggested that schemas provide a framework for remembering information inasmuch as stimuli that can be integrated into the framework are fit to it, and stimuli that cannot be integrated will be forgotten. Stereotypes, in this way, act as schemas by directing mental resources and by guiding encoding and retrieval of information from memory. Social categorization is primarily based on salient and identifiable features of a person such as age, gender, race, or social status. Moreover, stereotypes can be understood as social schemas, in that they are theory driven, stable in memory, have internal organizational properties, and are learned by individuals usually during their early years (Augoustinos & Walker, 1998).
One application of schema theory was proposed by Bem (1981) who explained how individuals become gendered in society from a young age, and how sex-linked characteristics are maintained and transmitted to other members of a culture. Having a strong gender schema filters and processes incoming stimuli from the environment, which in turn leads to an easier ability to assimilate information that is stereotype congruent (Bem, 1981). This process has the effect of further solidifying the existence of gender stereotyping. Furthermore, these gender schemas are used to organize and direct a person's behavior based on his or her society's gender norms. For example, a young girl may receive societal messages that to be successful she has to get married and raise children. As a result, she may be dissuaded from pursuing a career in technology, science, or health care. Bem (1981) argues that these schemas that are formed in children early on create a gender lens that influences how they think and behave.
Until recently, stereotype and bias research has been conducted in a disparate manner. That is, distinct models of bias have been proposed for different forms of prejudice and stereotypes (i.e., ageism, racism, anti-Semitism, sexism). As a result, stereotypes about religion may not function in the same way that stereotypes about race or ethnicity do. Without insight into the underlying causal mechanism of stereotypes, social psychologists will not fully be able to understand the nature of stereotypes. As Augoustinos and Walker (1998) point out, stereotypes are more than just pictures in the head. They have distinct social and political consequences which generate behavioral expectancies. What's more, stereotypes are inevitably linked to discrimination and prejudice. Therefore, the more current research (Fiske et al., 2002) in the field of bias and discrimination attempts to pursue an integrative framework that examines similarities, as opposed to differences, between different forms of bias and prejudice. Doing so could assist social psychologists in learning more about how and when bias and prejudice arises and what we can do to mitigate their impact on society.
Some researchers have started to examine similarities and differences between different forms of bias and discrimination. In other words, researchers are interested in differentiating the shared psychological components of different forms of prejudice and stereotyping from elements that may be unique to particular varieties of bias. The BIAS (behaviors from intergroup affect and stereotypes) map (Cuddy, Fiske, & Glick, 2008) is one example of researchers attempting to place prejudice toward different social groups within one common conceptual framework. The model systematically links discriminatory behavioral tendencies to the contents of group stereotypes and emotions, as rooted in structural components of intergroup relations (Cuddy, Fiske, & Glick, 2008). The investigators note that the BIAS map could theoretically link behavior to the two traits that most consistently emerge in social perception: competence and warmth. Further, the BIAS map attempts to shift the focus of study from personal stereotypes to stereotypes as culturally shared knowledge. Finally, the BIAS map attempts to chart how a group's location in the competence-warmth map of stereotypes predicts the bias climate that the group is likely to experience (Cuddy, Fiske, & Glick, 2007).
Turning the focus towards higher education, several notable studies have investigated bias and discrimination in the academic setting. In an experiment published in a series of studies, Milkman, Akinola, and Chugh (2012) sent emails to more than 6,500 professors at 259 universities across the country. The emails were from fictional students expressing interest in doctoral programs and were identical except for the sender's name, which varied by gender and ethnicity (e.g., Meredith Roberts, Lamar Washington, Juanita Martinez, Raj Singh). Twenty different names in 10 race/gender categories were used. One email was sent per professor, and responses served as the outcome variable. The crux of the study was the assumption that the average treatment of any particular student should not differ from that of any other; treatment would differ, however, if professors were implicitly or explicitly deciding which students to respond to on the basis of race and gender.
The results of the first of the two studies were published in 2012 (Milkman, Akinola, & Chugh, 2012). In this report, 67% of the professors responded to the emails, and 59% of them agreed to meet at the student's proposed time. The average response rates for each category (e.g., white male, Black female) were calculated in the second paper (Milkman, Akinola, & Chugh, 2015) and revealed that responses from professors did indeed depend on students' race and gender identity.
Faculty were more likely to respond to perceived white male names than to female, Black, Hispanic, and Chinese names. This bias held true in most disciplines and across a wide range of colleges and universities. The most pronounced biases were found in business schools and in private universities paying higher faculty salaries.
The researchers also noted that several of the supposed advantages that some people believe women and minorities have in academia are unfounded. For example, Asians' status as a model minority was not supported; in fact, Chinese students were the most discriminated-against group in the study. Additionally, the same levels of bias were observed in same-race and same-gender faculty-student interactions. Moreover, typically diverse disciplines (e.g., criminal justice) were no less likely to exhibit bias than traditionally less diverse disciplines (e.g., business). Finally, representation of women and minorities was uncorrelated with discrimination, suggesting that greater representation in a particular program may not imply reduced discrimination against prospective students.
Along with the established gender and race/ethnicity bias and discrimination, there may be evidence suggesting a political affiliation preference in higher education (e.g., Gross & Simmons, 2007; Rothman et al., 2005; Schuster & Finkelstein, 2006; Zipp & Fenwick, 2006). Inbar and Lammers (2012) argue that there is a growing recognition among sociologists and social scientists that professors' politics matter.
For example, social scientists' commitment to paradigms and approaches to research may be bound up with political identity (Gross & Fosse, 2012). In other words, political orientation may inform the research questions addressed by scientists and thus could impact scientific and scholarly creativity.
A study by Fosse, Gross, and Ma (2011) examined bias and discrimination in a sample of directors of graduate programs in sociology, history, English, political science, and economics at universities in the United States. Directors were sent two emails expressing prospective student interest in the program, with one of the emails serving as a control. The two were identical except for a line about extracurricular activities that mentioned working on either the Obama or the McCain campaign during the last election cycle. The outcome variable measured was the email response from the director to one, both, or neither of the two emails. The researchers found that the directors responded overall to more of the emails indicating that the student had worked on the Obama campaign. However, these findings did not reach statistical significance, and no effect sizes were reported. As a result, Fosse, Gross, and Ma (2011) concluded that more investigation into possible response bias was warranted.
Accounts of grading bias on the basis of a student's political beliefs have also been acknowledged (Rom & Musgrave, 2014). The authors note that some conservatives have argued that liberals dominate American campuses and use their classrooms to indoctrinate students or to discriminate against those with differing political beliefs. Liberals have responded that studies indicating bias are flawed and that their academic freedom is being attacked. While acknowledging that grading bias is hard to prove, the authors offer suggestions to mitigate potential bias in the classroom and implore professors to be aware of it. Further, the potential for political bias should be taken seriously, and the academy should treat it with the appropriate gravity. Rom and Musgrave (2014) conclude that regardless of the magnitude of campus political bias, it is ill advised for the scholarly community to argue that they are immune to bias simply because they are fair.
A study by Bullers, Reece, and Skinner (2010) surveyed 226 current faculty members to examine personal perceptions of political bias at a university. Although all groups reported higher rates of bias against conservatives than against liberals, almost 50% of conservatives reported a bias against their own ideology group. This trend was echoed in reports of having to conceal political views and in negative effects of views on career decisions. Conservatives were about 10% more likely than moderates or liberals to report the need to conceal their political beliefs, and to report that their beliefs had a negative effect on their career decisions.
A survey of 292 faculty members of the Society for Personality and Social Psychology (SPSP) found that 85% of professors identified as liberal whereas only 8% identified as conservative (Inbar & Lammers, 2012). Finally, an analysis of 846 social psychology abstracts published between 2003 and 2013 by Eitan and colleagues (2018) concluded that conservatives were described more negatively than liberals and that conservatism was more likely than liberalism to be treated as the phenomenon in need of explanation.
While there is nothing inherently wrong with polarized groups, research on group diversity and decision making suggests that diverse groups are better at overcoming biases, exhaustively searching the hypothesis space for good models of the world, and generating better-reasoned solutions to problems (Bang & Firth, 2017).
Additionally, diverse groups seem to be especially appropriate for tasks involving innovation and the exploration of choices and new opportunities (Sommers, 2006).
Specifically, the mechanisms behind this benefit may include the multiplicity of sources of information, heterogeneous skills, and divergent perspectives of the group.
While it should be noted that diverse perspectives sometimes create disagreement among group members and can reduce members' confidence, disagreement is often associated with improved judgmental accuracy (Sniezek, 1992).

In order to capture both self and other justice perceptions, the original 8-item belief in a just world measure (i.e., Lipkus et al., 1996) was expanded by Lucas and colleagues (2007) to include 16 items that explicitly refer to beliefs about justice for both the self and others. Procedural justice beliefs for others (PJ-others) encompass the deservedness of rules, processes, and treatment toward others (e.g., "Other people are generally subjected to processes that are fair"), whereas procedural justice beliefs for the self (PJ-self) refer to the deservedness of rules, processes, and treatment toward oneself (e.g., "I am generally subjected to processes that are fair"). Similarly, distributive justice beliefs for others (DJ-others) measure beliefs about the deservedness of outcomes or allocations for others (e.g., "Other people usually receive outcomes that they deserve"), whereas distributive justice beliefs for the self (DJ-self) measure beliefs about the deservedness of outcomes or allocations for the self (e.g., "I usually receive outcomes that I deserve"). All items are rated on a 7-point Likert-type scale ranging from 1 (strongly disagree) to 7 (strongly agree). Total subscale scores were created by averaging the four items on each subscale, with higher scores indicating a stronger belief in justice. Belief in a just world has been shown to be associated with political affiliation (Smith & Green, 1984): people who identify as conservative generally score higher on belief in a just world, while those identifying as liberal, on average, score lower.
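The scoring rule just described can be made concrete with a short sketch. This is an illustration only: the item-to-subscale groupings below are placeholders, not the published PDJWB scoring key.

```python
# Sketch of PDJWB subscale scoring: four subscales, four items each,
# every item rated 1 (strongly disagree) to 7 (strongly agree), with
# each subscale scored as the mean of its items. The item groupings
# below are illustrative placeholders, not the published key.
from statistics import mean

SUBSCALES = {
    "PJ-self":   [1, 2, 3, 4],
    "PJ-others": [5, 6, 7, 8],
    "DJ-self":   [9, 10, 11, 12],
    "DJ-others": [13, 14, 15, 16],
}

def score_pdjwb(responses):
    """responses maps item number (1-16) to a 1-7 Likert rating."""
    if any(not 1 <= r <= 7 for r in responses.values()):
        raise ValueError("Likert ratings must be between 1 and 7")
    return {scale: mean(responses[i] for i in items)
            for scale, items in SUBSCALES.items()}

# A respondent answering 4 everywhere except one strongly-agree PJ-self item:
answers = {i: 4 for i in range(1, 17)}
answers[1] = 7
scores = score_pdjwb(answers)  # PJ-self = 4.75, all other subscales = 4
```

Averaging rather than summing keeps every subscale on the original 1-7 metric, which makes subscale scores directly comparable.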

Power Analysis
Traditionally for structural equation modeling (SEM), Bentler (2008) suggests at least 5-10 participants per estimated parameter; however, as many as 20 to 50 participants per parameter may be necessary if statistical assumptions are violated. Therefore, for a proposed path analysis with as many as 20 estimated parameters, 500-1,000 participants were sought. Because almost all statistical assumptions were satisfied (see chapter 4), 5-10 participants per parameter was deemed acceptable. In sum, the collected sample of 803 participants was adequate for the analyses conducted.
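The sample-size reasoning above is a simple participants-per-parameter heuristic; the sketch below (a hedged illustration, not the study's actual planning code) shows the ranges it produces for a 20-parameter model.

```python
# Rule-of-thumb SEM sample sizes: multiply the number of estimated
# parameters by a per-parameter ratio (5-10 when statistical assumptions
# hold, 20-50 when they are violated, per the heuristic cited above).
def required_n(n_parameters, ratio_low, ratio_high):
    return n_parameters * ratio_low, n_parameters * ratio_high

PARAMS = 20
met = required_n(PARAMS, 5, 10)        # assumptions hold -> (100, 200)
violated = required_n(PARAMS, 20, 50)  # assumptions violated -> (400, 1000)

# The collected sample of 803 clears even the upper bound of the
# assumptions-met range and falls inside the stricter range.
adequate = 803 >= met[1]
```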

Procedures
An audit experiment methodology was employed for this study. Common in business research, this methodology relies on pairs of matched testers who differ only on race, gender, or some other dimension of interest and who attempt to obtain a desired outcome using identical techniques while differences in treatment are measured (Pager, 2007). Audit studies across a wide range of contexts offer evidence, with high external validity, that discrimination continues to disadvantage minorities and women relative to white males with the same credentials. For example, in one study white job candidates received a 50% higher callback rate for interviews than identical black job candidates (Bertrand & Mullainathan, 2004).
Other audit studies have shown that African Americans and Hispanics receive fewer opportunities to rent and purchase homes than Caucasians (Turner et al., 2002).
Further, in yet another audit investigation, obese job applicants received fewer job interviews than non-obese applicants, a difference attributable to hiring managers' implicit biases (Agerström & Rooth, 2011). Finally, women and minority prospective graduate students received less assistance than white males from prospective academic advisors when seeking meetings for a week in the future (Milkman, Akinola, & Chugh, 2012). Together with Gift and Gift (2015), these studies provide ample precedent that an audit-type methodology could elicit any political affiliation preference present in the proposed study.
Participants were asked to complete a set of online surveys via Qualtrics. The first screen showed a consent form informing participants that this study was approved by an Institutional Review Board and asked participants to provide their consent before continuing with the study. If participants chose to continue, they were asked to complete the Procedural and Distributive Just World Beliefs (PDJWB) scale (Lucas et al., 2007). Next, participants were randomly assigned to view one resume (neutral, Democrat, or Republican) and then asked to provide hypothetical ratings for the candidate listed on the resume: the likelihood of the candidate being accepted into a graduate program, being successful in a graduate program, being hired by a business, and being hired by a government organization.
Next, participants completed demographic information. After completion, participants were thanked for their participation and given contact information in case they had any questions about the study.

The Shapiro-Wilk tests for these variables were significant (p < .001), but inspection of the Q-Q plots suggested that the small deviations from normality were of little concern. A correlation matrix of all variables showed that no variable was correlated above |.70|, indicating no issues of multicollinearity (Harlow, 2014).

Next, exploratory factor analyses were run for the Belief in Just World scales. Based on the theoretical precedent set by Lucas, Zhdanova, and Alexander (2011), the four BJW subscales for the college student sample were entered into an EFA restricted to four factors, using principal axis factoring with promax rotation. The four-factor solution explained 72.45% of the variance. The identical analysis was then repeated for the MTurk sample, and the resulting four-factor solution explained 79.31% of the variance. See Table 5 for the factor loadings of the Belief in Just World subscales. It should be noted that the eigenvalues for the MTurk sample suggested only three factors (i.e., three eigenvalues greater than 1.0). However, the four-factor structure is consistent with Lucas, Zhdanova, and Alexander (2011), and thus the theoretical four-factor model was accepted.
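The multicollinearity screen used in the data checks (flagging any pair of variables whose correlation exceeds |.70|) can be sketched as follows; the data and column layout here are invented purely for illustration.

```python
import numpy as np

def collinearity_flags(data, threshold=0.70):
    """Return (i, j, r) for each column pair whose |Pearson r| exceeds threshold.

    data: 2-D array with observations in rows and variables in columns.
    """
    corr = np.corrcoef(data, rowvar=False)
    n = corr.shape[1]
    return [(i, j, corr[i, j])
            for i in range(n) for j in range(i + 1, n)
            if abs(corr[i, j]) > threshold]

# Illustrative check: column 2 is (nearly) a copy of column 0, so that
# pair should be flagged, while the independent column 1 is not.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = rng.normal(size=100)
data = np.column_stack([x, y, x + rng.normal(scale=0.01, size=100)])
flags = collinearity_flags(data)
```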
Based on the results of the EFA in the MTurk and college student samples, the next step was to conduct a path analysis via a structural equation model (SEM). A path analysis can be used to assess a pattern of predictive relationships among measured variables while identifying the weights connecting the variables (Harlow, 2014). In the present study, a path analysis was conducted for each sample (i.e., MTurk and college) in which several demographic variables, the four Belief in Just World subscales, and experimental condition (i.e., type of resume) were used to predict the subjective rating variables of success in a graduate program (SUC), acceptance to a graduate program (ACP), hired by a business (HBUS), and hired by a government entity (HGOV). See Table 5. A chi-square test, the comparative fit index (CFI), root mean square error of approximation (RMSEA), and root mean square residual (RMR) were used as fit indices for these models, where a CFI greater than .90/.95 indicates good/great fit, an RMSEA lower than .10/.08/.05 indicates acceptable/good/great fit, and an RMR of .08 or less indicates acceptable fit (Hu & Bentler, 1999). A nonsignificant chi-square test indicates good fit, but the chi-square test is extremely sensitive, and a significant result is not necessarily indicative of poor fit (Harlow, 2014; Kline, 2016). Additionally, R² values as well as the RMSEA were used as indicators of effect size at the macro level. At the micro level, parameter significance and effect size were determined by z-tests and coefficient loadings, respectively (Harlow, 2014). For both samples, the model was specified to estimate the parameters between the latent dependent rating factor, with four indicators, and the exogenous independent predictor variables, as well as the variances and covariances among all predictors.
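To make the cutoffs above concrete, the following sketch (illustrative code, not part of the analysis pipeline) classifies a model's fit under the Hu and Bentler (1999) thresholds as stated:

```python
# Classify SEM fit per the cutoffs described above:
#   CFI   > .90 good, > .95 great
#   RMSEA < .10 acceptable, < .08 good, < .05 great
#   RMR   <= .08 acceptable
def classify_fit(cfi, rmsea, rmr):
    fit = {}
    fit["CFI"] = "great" if cfi > .95 else "good" if cfi > .90 else "poor"
    fit["RMSEA"] = ("great" if rmsea < .05 else "good" if rmsea < .08
                    else "acceptable" if rmsea < .10 else "poor")
    fit["RMR"] = "acceptable" if rmr <= .08 else "poor"
    return fit

# Example with round numbers: a CFI of .97 is "great", an RMSEA of .06
# is "good", and an RMR of .05 is "acceptable".
verdict = classify_fit(0.97, 0.06, 0.05)
```

Because each index responds to different aspects of misfit, judging a model on all three together (plus the chi-square test) is more defensible than relying on any single index.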
For the undergraduate student sample, the model estimating paths from 9 of the predictors, but excluding paths from the two political affiliation conditions (i.e., Rep and Dem), showed good fit, χ²(37) = 47.3, p > .05, CFI = .97, RMSEA = .10, RMR = .06.

Table 5. Exploratory principal axis factor analysis for the Belief in Just World subscales
Adding the path from the Democrat predictor to the model with the other 9 predictors (still excluding the Republican predictor) also showed good fit, χ²(36).
In regards to research questions (c), (d), and (e), and in contrast to other notable audit studies (Gift & Gift, 2015; Milkman, Akinola, & Chugh, 2012), there were no statistically significant differences in the subjective rating of a candidate between experimental conditions (i.e., resume type). What is more, the most parsimonious path model, excluding political affiliation, showed slightly better fit than the models including party affiliation, indicating that one's political affiliation, regardless of political party, did not meaningfully contribute to explaining the job ratings. In other words, participants did not perceive any stated political affiliation to be advantageous to the hypothetical candidate's opportunity to attend graduate school or to be hired at an entry-level position. Similarly, there was no perceived advantage or disadvantage to omitting political party affiliation from a resume.
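The nested-model comparisons above, in which each added political affiliation path frees a single degree of freedom, can be checked with a one-df chi-square difference test. A minimal stdlib sketch follows; the chi-square values are hypothetical (the expanded model's statistics are not fully reported above), and the identity P(χ²₁ > x) = erfc(√(x/2)) holds because a 1-df chi-square variate is the square of a standard normal.

```python
import math

def chi_sq_diff_p(chi_restricted, chi_full):
    """p-value for a 1-df chi-square difference test between nested models.

    For df = 1, P(X > x) = erfc(sqrt(x / 2)), since a 1-df chi-square
    variate is the square of a standard normal deviate.
    """
    delta = chi_restricted - chi_full  # adding a path frees 1 df and lowers chi^2
    if delta <= 0:
        return 1.0
    return math.erfc(math.sqrt(delta / 2.0))

# Hypothetical values: adding the Democrat path barely improves chi-square
p = chi_sq_diff_p(47.3, 46.9)  # delta = 0.4
print(round(p, 3))             # well above .05 -> retain the simpler model
```

A nonsignificant difference means the extra path buys no real improvement in fit, which is the logic behind preferring the parsimonious model without political affiliation.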
Similar to the audit study conducted by Fosse, Gross, and Ma (2011), the present study found small, but not statistically significant, mean differences between potential candidates identifying as Republicans or Democrats.
However, in contrast to reports of political bias in higher education (e.g., Fosse, Gross, & Ma, 2011; Zipp & Fenwick, 2006), the present study offers evidence that possible bias toward (or discrimination against) a particular political affiliation on the part of undergraduate students or the general public (i.e., MTurk) may be unfounded.
There are several limitations to the current study that must be discussed. First, participants in audit studies like Gift and Gift (2015) and Milkman, Akinola, and Chugh (2012) were under the assumption that they were reviewing a potential candidate whom they might hire for their business or accept into their graduate program. Therefore, the participants in those studies may have had more incentive to scrutinize every detail of the candidate's resume/email. Scrutinizing every detail would certainly ensure that the participant would notice the candidate's stated political affiliation. However, participants in the current study may not have had appropriate motivation to closely review the candidate's resume, as ultimately the participant was not directly related to the business or graduate program in question.
In addition, it is possible that, because of the complexity and structure of the resume, the political affiliation manipulation was not salient enough to serve as a cue to the candidate's political affiliation. In other words, the manipulation (i.e., the stated political affiliation) may have been too subtle and as a result may not have been fully perceived by participants. Other audit studies (e.g., Fosse, Gross, & Ma, 2011; Gift & Gift, 2015; Milkman, Akinola, & Chugh, 2012) have used more than one indication of political affiliation or have used a very salient cue to the candidate's identity (e.g., in the signature or subject line of the email). The author of the present study would be remiss not to mention that an initial attempt at a similar audit-experimental technique to examine political affiliation bias perhaps made the political affiliation cues overly salient. As a result, several of the participants became upset about the nature of the study to a degree that data collection was abandoned and the study terminated. Perhaps the most challenging aspect of designing an audit-type experimental study is determining the appropriate level of salience of the test variable. In the future, pilot testing of the experimental manipulation will be considered.
As stated above, several studies (e.g., Fosse, Gross, & Ma, 2011; Gift & Gift, 2015; Milkman, Akinola, & Chugh, 2012) have attempted to study bias and discrimination from a top-down approach. In other words, bias and discrimination have been documented by observing faculty/department chairs or owners of businesses and their interactions with potential students or employees. However, the present study utilized a bottom-up approach, which queried students and community members about possible bias or discrimination that one might face when applying for graduate school or entering the workforce. Therefore, it is possible that bias and discrimination exist in these areas but are not perceptible to those attempting to gain access to academia or the workforce.
Finally, it must be noted that the present study utilized convenience samples.
While efforts were made to control for possible personal confounds via random assignment to experimental condition and by collecting data on race/ethnicity, gender, income, and age, it is not possible to rule out participant selection bias. Future studies would do well to consider a true experimental design with random sampling as well as complete random assignment to experimental conditions.
There are several directions for future research based on the results of this study. Future studies may want to investigate how participants would rate a potential applicant that was applying to a position or graduate program that is of high personal salience to the participant. For example, a graduate student participant could rate an applicant's chances of acceptance into their own graduate program. It is possible that the added salience to the participant would precipitate a closer and more thorough examination of the candidate's resume. Another example would be asking an employee to rate the resume of a potential applicant to that employee's place of work.
Again, that added salience to the participant may induce a more careful examination of the details of the resume.
Along these lines, future studies using the resume audit paradigm may want to consider pilot testing their experimental manipulations before beginning data collection. The researcher may benefit from this technique by fine-tuning the salience of the political affiliation cue. In other words, one would use pilot testing to determine whether political affiliation was noticed by participants when reviewing a resume, and to gauge how noticeable the manipulation was. Thus, when collecting data, the researcher would be less concerned about the cue being so overt as to 'tip the hand' of the study or so subtle as to go unnoticed by participants.
A variation of the current study may be conceptualized to increase salience of the political affiliation manipulation. For example, adding a section in the resume about working on presidential campaigns (e.g., 'worked on the 2016 Clinton/Trump for president campaign') could serve as an important indication of political affiliation.
Additionally, supplying political affiliation cues in other parts of the resume (e.g., volunteered for 'national association for progressive Americans') may bring political affiliation to the forefront of the participant's mind.
As email responses seem to be a popular choice for operationalizing bias in electronic research (e.g., Fosse, Gross, & Ma, 2011; Gift & Gift, 2015; Milkman, Akinola, & Chugh, 2012), future audit-type experiments and quasi-experiments may want to consider increasing external validity by designing studies that utilize more than one measure of bias or discrimination, such as email responses and hypothetical candidate ratings. Again, pilot studies attempting to measure bias and discrimination in novel ways may be extremely beneficial to the future of bias and discrimination research.
In conclusion, reports of bias favoring Democratic political leanings and discrimination against conservatives in academia were not supported by the results of the present study. However, it may be possible that a bias exists and that the measures used in this study were not sensitive enough to detect it. As is generally the case, future research is warranted to confront any possible bias and discrimination in hopes of continually striving toward a more equitable society.