CHAPTER 3
Research Methodology
3.1 Research Design
This research is a correlational study. According to Sekaran (2004, p. 126), a correlational study is one conducted to delineate the important variables associated with a problem. The correlational design in this research is used to examine the relationship between performance appraisal and job performance, and the relationship between employee development and job performance.
The unit of analysis of this research is the individual. Sekaran (2004, p. 132) defined the unit of analysis as the level of aggregation of the data collected during the subsequent data analysis stage. This research uses data gathered from each individual and treats each employee’s response as an individual data source. Because the data are gathered just once, over a period of days, in order to answer the research question, this research is a cross-sectional study (Sekaran, 2004, p. 135).
Table 3.1 Research Design

| Objectives | Type and Method of Research | Unit of Analysis | Time Horizon |
|------------|-----------------------------|------------------|-----------------|
| O1 | Correlational study | Individual | Cross-sectional |
| O2 | Correlational study | Individual | Cross-sectional |

Description:

O1: To determine the relationship between performance appraisal and job performance.

O2: To determine the role of employee development in moderating the relationship between performance appraisal and job performance.
3.2 Operational Variables
Operationalizing a variable means defining a concept so as to render it measurable, which is done by looking at the behavioral dimensions of the concept. These dimensions are then translated into observable and measurable elements so as to develop an index of measurement of the concept (Sekaran, 2004, p. 176).
The operational variables used in this research are divided into three groups:

Dependent Variable
The dependent variable is the variable of primary interest to the researcher. The researcher’s goal is to understand and describe the dependent variable, to explain its variability, or to predict it. In other words, it is the main variable that lends itself for investigation as a viable factor (Sekaran, 2004, p. 88). The dependent variable of this research is job performance.

Independent Variable
An independent variable is one that influences the dependent variable in either a positive or negative way. That is, when the independent variable is present, the dependent variable is also present, and with each unit of increase in the independent variable there is an increase or decrease in the dependent variable as well (Sekaran, 2004, p. 89). The independent variable of this research is performance appraisal.

Moderating Variable
The moderating variable is one that has a strong contingent effect on the independent variable-dependent variable relationship. That is, the presence of a third variable (the moderating variable) modifies the original relationship between the independent and dependent variables (Sekaran, 2004, p. 91). The moderating variable of this research is employee development.
Table 3.2 Operational Variables

Variable: Performance Appraisal (X)
Concept: The way for organizations to boost employees’ motivation and hone their competitive edge (Longenecker, 1997).
Dimensions and main indicators:
1. Process of the system: (a) provides information about what employees will achieve from their effort; (b) provides opportunities to determine employees’ training needs.
2. Informational factors: (a) feedback from the supervisor; (b) review of the job process.
3. Rater accuracy: (a) measured by work behavior; (b) measured by work activities.
4. Interpersonal factors: (a) the supervisor supports the employees’ development process; (b) the supervisor trusts the employees to do their job.
5. Employee attitude: (a) the opportunity to appeal performance ratings; (b) action plans to deal with any weaknesses.
Scale: Interval. Measurement scale: Likert scale.

Variable: Employee Development (Z)
Concept: The acquisition of knowledge, skills, and behaviors that improve an employee’s ability to meet changes in job requirements and in client and customer demands (Noe, Hollenbeck, Gerhart, & Wright, 2008).
Dimensions and main indicators:
1. Formal education: (a) off-site programs; (b) on-site programs.
2. Assessment: (a) information and feedback about behavior; (b) information and feedback about communication style; (c) information and feedback about skills.
3. Job experiences: (a) enlarging the current job; (b) job rotation; (c) transfers, promotions, and downward moves; (d) temporary assignments.
4. Interpersonal relationships: (a) opportunities for mentoring; (b) opportunities for coaching.
Scale: Interval. Measurement scale: Likert scale.

Variable: Job Performance (Y)
Concept: Performance is essentially what an employee does or does not do. Employee performance common to most jobs includes quantity of output, quality of output, timeliness of output, presence at work, and cooperativeness (Mathis & Jackson, 2003).
Dimensions and main indicators:
1. Effort: (a) motivation; (b) work ethic; (c) attendance; (d) job design.
2. Individual ability: (a) talents; (b) interests; (c) personality factors.
3. Organizational support: (a) training and development; (b) equipment and technology; (c) performance standards; (d) management and coworkers.
Scale: Interval. Measurement scale: Likert scale.

Source: Researcher’s operational variable analysis
3.3 Types and Sources of Data
Data can be obtained from primary or secondary sources. Primary data refer to information obtained firsthand by the researcher on the variables of interest for the specific purpose of the study, while secondary data refer to information gathered from sources that already exist (Sekaran, 2004, p. 219). The data used in this research are primary data, because they are collected directly from the employees who serve as respondents.
The type of data in this research is quantitative. According to Aczel and Sounderpandian (2008, p. 24), a quantitative variable can be described by a number for which arithmetic operations such as averaging make sense. Quantitative research is most helpful when “answering questions of who, where, how many, how much, and what is the relationship between specific variables” (Adler, 1996, p. 5, in Leech & Onwuegbuzie, 2007).
Table 3.3 Type and Source of Data

| Objectives | Data | Type of Data | Data Source |
|------------|------|--------------|-------------|
| O1 | Performance appraisal, job performance | Quantitative | Primary data |
| O2 | Employee development, job performance, and performance appraisal | Quantitative | Primary data |

Source: Researcher’s data
3.4 Data Collection Method
Data collection methods are an integral part of research design. There are several data collection methods, each with its advantages and disadvantages. Research data will be collected using the most appropriate methods, which can greatly enhance the value of the research. The data collection methods used in this research are questionnaires, face-to-face and telephone interviews, and a literature study.
3.4.1 Questionnaires
Sekaran (2004, p. 236) defines a questionnaire as a preformulated written set of questions to which respondents record their answers, usually within rather closely defined alternatives. Questionnaires are an efficient data collection mechanism when the researcher knows exactly what is required and how to measure the variables of interest. The scale used in this research is the Likert scale, which is designed to examine how strongly subjects agree or disagree with a statement (p. 197). For example:
Picture 3.1 Likert Scale

| Statement | Strongly Disagree | Disagree | Agree | Strongly Agree |
|-----------|-------------------|----------|-------|----------------|
|           | 1                 | 2        | 3     | 4              |

The responses over a number of items tapping a particular concept or variable are then summated for every respondent. This is an interval scale, and the differences in the responses between any two points on the scale remain the same.
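The summation described above can be sketched in a few lines of Python. The responses below are hypothetical illustrative data, not the study’s actual questionnaire results:

```python
import numpy as np

# Hypothetical responses: 4 respondents x 3 Likert items on the 1-4 scale above
responses = np.array([
    [4, 3, 4],
    [2, 2, 1],
    [3, 3, 3],
    [1, 2, 2],
])

# Summated score per respondent: add the item responses row-wise
scores = responses.sum(axis=1)
print(scores)  # [11  5  9  5]
```

Each respondent’s summated score is then treated as that respondent’s interval-scale measurement of the concept.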

Face-to-Face and Telephone Interviews
According to Sekaran (2004, p. 232), interviews can be conducted either face to face or over the telephone; they can also be computer-assisted. Although most unstructured interviews in organizational research are conducted face to face, structured interviews can be either face to face or conducted by telephone, depending on the level of complexity of the issues involved, the likely duration of the interview, the convenience of both parties, and the geographical area covered by the survey.
There are advantages and disadvantages to this method. The advantage of face-to-face or direct interviews is that the researcher can adapt questions as necessary, clarify doubts, and ensure that the responses are properly understood by repeating or rephrasing the questions. The researcher can also pick up nonverbal cues from the respondent: any discomfort, stress, or problems that the respondent experiences can be detected through frowns, nervous tapping, and other body language unconsciously exhibited. The main disadvantages of face-to-face interviews are the location and time limitations, which can result in an ineffective survey.
Telephone interviews also have advantages and disadvantages. The main advantage is that a number of different people can be reached in a relatively short period of time, and the method can eliminate any discomfort the respondent might feel in facing the interviewer. The main disadvantage of telephone interviewing is that the respondent can unilaterally terminate the interview without warning or explanation by hanging up the phone.

Literature Study
A literature study was conducted by studying and taking data from the relevant literature and other sources deemed to provide information about the study, such as articles in magazines and on the internet.
3.5 Sampling Technique
According to Sekaran (2004, p. 265), a population refers to the entire group of people, events, or things of interest that the researcher wishes to investigate, while sampling is the process of selecting a sufficient number of elements from the population so that a study of the sample, and an understanding of its properties or characteristics, makes it possible to generalize those properties or characteristics to the population elements. The reason for using a sample, rather than collecting data from the entire population, is self-evident: in research involving several hundreds or even thousands of elements, it would be practically impossible to collect data from, test, or examine every element.
According to Sarjono and Julianita (2011, p. 22), research does not have to use a sample if the elements of the population are heterogeneous and the population is relatively small (fewer than 100), because sampling from a population of fewer than 100 leads to a very small sample. A small sample raises the concern that the results of the research will be less accurate, so it is suggested to take all members of the population.
Considering the theory above, since the population used in this research numbers fewer than 100, all members of the population will be used as the sample for this research.
3.6 Analysis Method
Before continuing the analysis, the researcher first conducted a pretest of the questionnaire, which aimed to determine whether the questionnaire was feasible and applicable for obtaining the data needed. The data were taken from a sample of 61 respondents, and the results were processed in two stages using SPSS 16.0 software, namely validity testing and reliability testing.
If the question items are valid and reliable, the process of analyzing the data can continue. After the validity and reliability tests, the data collected will be analyzed with simple and multiple regression analysis using SPSS (Statistical Product and Service Solutions) version 17.0. The methods of data analysis used in this study are explained below.

Validity Test
Testing validity is important to do before analyzing the respondents’ answers obtained from data collection. A validity test is done to determine whether the measurement tools that have been developed can be used to measure what is to be measured accurately; that is, validity tests how well an instrument measures the particular concept it is designed to measure (Sekaran, 2006, p. 39). The validity of a measurement scale can be defined as the extent to which differences between observed scores reflect actual differences between objects/respondents on the characteristic being measured, rather than systematic or random error.
Validity is a measure of the degree to which a measuring instrument is valid. A valid measurement tool has high validity; conversely, a less valid measurement tool has a low level of validity. A measuring instrument is said to be valid if it is able to measure what it is intended to measure (Rangkuti, 2008, p. 77): the higher the validity of an instrument, the better the instrument hits its target, that is, indicates what should be measured.
The basis for decision making:

If significance < 0.05, the question item is valid.

If significance ≥ 0.05, the question item is not valid.
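One common form of this item-validity check is an item-total correlation. The following is a minimal sketch using SciPy’s Pearson correlation in place of SPSS, on hypothetical pretest responses (not the study’s actual data): each item is correlated with the summated total score, and items whose significance is below 0.05 are treated as valid.

```python
import numpy as np
from scipy import stats

# Hypothetical pretest data: 8 respondents x 4 questionnaire items (Likert 1-4)
items = np.array([
    [4, 3, 4, 2],
    [2, 2, 1, 1],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 3, 4],
    [2, 1, 2, 2],
    [3, 4, 4, 3],
    [1, 1, 1, 2],
])
total = items.sum(axis=1)  # summated score per respondent

# Correlate each item with the total score; significance < 0.05 -> item is valid
for i in range(items.shape[1]):
    r, p = stats.pearsonr(items[:, i], total)
    print(f"item {i + 1}: r = {r:.3f}, p = {p:.4f}, valid = {p < 0.05}")
```

SPSS applies the same decision rule through its bivariate correlation output; the p-value of each item-total correlation is compared with 0.05.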

Reliability Test
If a measurement tool has been declared valid, the next step is to measure the reliability of the data. According to Umar (2003, pp. 113-114), reliability is a value that indicates the consistency of a measuring device in measuring the same phenomenon; each measuring device should have the ability to provide consistent measurement results. After testing the validity of the questionnaire, its reliability can be tested. As noted earlier, reliability is a measure of stability: the questionnaire is consistent if it gives the same result when used to measure the same concept or construct from one occasion to another. One way of measuring reliability is the one-shot technique, in which measurement is performed only once and the responses are then compared across questions, or the correlation among the responses is measured. In SPSS, this is done using Cronbach’s alpha, where a questionnaire is said to be reliable if the Cronbach’s alpha value is greater than the r table value. The Cronbach’s alpha formula can be used to find the reliability of instruments whose items are scored on a range of values or a scale.
Reliability considerations are as follows:

If r alpha (Cronbach’s alpha) is positive and r alpha ≥ r table, the variable is reliable.

If r alpha (Cronbach’s alpha) is negative, or r alpha < r table, the variable is not reliable.
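Cronbach’s alpha can be computed directly from the item scores as alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch, using hypothetical pretest data rather than the study’s actual 61 responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of summated scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pretest data: 6 respondents x 3 items
items = np.array([
    [4, 4, 3],
    [2, 1, 2],
    [3, 3, 3],
    [1, 2, 1],
    [4, 3, 4],
    [2, 2, 2],
])
alpha = cronbach_alpha(items)
print(f"Cronbach's alpha = {alpha:.3f}")
```

The resulting alpha would then be compared against the r table value (or a conventional threshold such as 0.6 or 0.7) to decide reliability.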

Correlation Test
According to Riduwan and Kuncoro (2007, p. 61), to determine the relationship between the variables X1, X2, and Y, the technique used is correlation analysis. The correlation analysis used is the Pearson product-moment correlation.
The Pearson product-moment correlation coefficient is denoted r, with the provision that its value lies in the range -1 ≤ r ≤ +1: r = -1 means a perfect negative correlation, r = 0 means no correlation, and r = +1 means a perfect positive correlation.
The interpretation of the value of r is as follows:
Table 3.4 Interpretation of Correlation Values

| Interval Coefficient | Level of Correlation |
|----------------------|----------------------|
| 0.80 – 1.000 | Very Strong |
| 0.60 – 0.799 | Strong |
| 0.40 – 0.599 | Strong Enough |
| 0.20 – 0.399 | Low |
| 0.00 – 0.199 | Very Low |

Source: Riduwan & Kuncoro (2007)
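The Pearson correlation and its classification against the interval bands above can be sketched as follows, with SciPy standing in for SPSS and hypothetical summated scores for the two variables:

```python
import numpy as np
from scipy import stats

# Hypothetical summated scores: performance appraisal (X) and job performance (Y)
x = np.array([28, 22, 30, 18, 26, 24, 31, 20])
y = np.array([35, 27, 38, 25, 33, 30, 40, 26])

r, p = stats.pearsonr(x, y)

# Classify |r| against the interpretation bands of Table 3.4
bands = [(0.80, "Very Strong"), (0.60, "Strong"), (0.40, "Strong Enough"),
         (0.20, "Low"), (0.00, "Very Low")]
level = next(label for lo, label in bands if abs(r) >= lo)
print(f"r = {r:.3f} ({level}), p = {p:.4f}")
```

The sign of r indicates the direction of the relationship, while the band lookup gives the strength label used in the table.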

Classical Assumption Test
Classical assumptions are the statistical requirements that must be met in multiple linear regression analysis based on ordinary least squares (OLS); regression analyses not based on OLS, such as logistic or ordinal regression, do not require them. Likewise, not every classical assumption test applies to every linear regression analysis: the multicollinearity test cannot be used in simple linear regression, and the autocorrelation test should not be imposed on cross-sectional data. Three classical assumption tests will be used in this research, namely the multicollinearity test, the heteroscedasticity test, and the normality test.

Normality Test
The normality test examines whether the residual values are normally distributed. A good regression model has normally distributed residuals, so the normality test is performed not on each variable but on the residuals. Normality can be tested with histograms, the normal P-P plot, the chi-square test, skewness and kurtosis, or the Kolmogorov-Smirnov test; no one method is best or most appropriate.
In principle, normality can be detected by looking at the spread of the data (dots) along the diagonal axis of the graph, or by looking at the histogram of the residuals.
The basis for decision making:

If the data are spread around the diagonal line and follow its direction, or the histogram shows a normal distribution pattern, the regression model meets the assumption of normality.

If the data are spread far from the diagonal line and do not follow its direction, or the histogram does not show a normal distribution pattern, the regression model does not satisfy the assumption of normality (Ghozali, 2005).
A statistical test that can be used to test residual normality is the nonparametric Kolmogorov-Smirnov (K-S) test. If the Kolmogorov-Smirnov result shows a significance value above 0.05, the residuals are normally distributed; if the significance value is below 0.05, the residuals are not normally distributed (Ghozali, 2005).
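The Kolmogorov-Smirnov check can be sketched with SciPy as a stand-in for SPSS. The residuals below are simulated for illustration; in practice they would come from the fitted regression model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated regression residuals (in practice, taken from the fitted model)
residuals = rng.normal(loc=0.0, scale=1.0, size=100)

# One-sample Kolmogorov-Smirnov test against the normal distribution;
# standardize first so the reference distribution is N(0, 1)
standardized = (residuals - residuals.mean()) / residuals.std(ddof=1)
statistic, p_value = stats.kstest(standardized, "norm")

print(f"KS statistic = {statistic:.3f}, p = {p_value:.3f}")
print("residuals normal" if p_value > 0.05 else "residuals not normal")
```

The decision rule matches the text: a significance value above 0.05 means the normality assumption is satisfied.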

Multicollinearity Test
The multicollinearity test examines whether there is a high correlation between the independent variables in a multiple linear regression model. If there is a high correlation between independent variables, the relationship between the independent variables and the dependent variable is disturbed. Statistical tools often used to test for multicollinearity are the variance inflation factor (VIF), Pearson correlations between the independent variables, or the eigenvalues and condition index (CI). A regression model free of multicollinearity has a tolerance value above 0.1, or equivalently a VIF below 10 (Ghozali, 2005); a tolerance below 0.1, or a VIF above 10, means that multicollinearity is present.
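The VIF of predictor j is 1 / (1 - R²_j), where R²_j comes from regressing predictor j on the remaining predictors (tolerance is its reciprocal, 1 - R²_j). A minimal NumPy sketch on simulated predictors, as a stand-in for the SPSS collinearity diagnostics:

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor for each column of predictor matrix X."""
    n, k = X.shape
    factors = np.empty(k)
    for j in range(k):
        y = X[:, j]
        # Regress column j on an intercept plus the other columns
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        ss_res = ((y - others @ beta) ** 2).sum()
        ss_tot = ((y - y.mean()) ** 2).sum()
        r_squared = 1.0 - ss_res / ss_tot
        factors[j] = 1.0 / (1.0 - r_squared)  # VIF_j = 1 / (1 - R^2_j)
    return factors

rng = np.random.default_rng(1)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)   # independent of x1, so low VIF is expected
X = np.column_stack([x1, x2])
print(vif(X))  # values near 1 indicate no multicollinearity (< 10 passes the rule)
```

Each value would be compared with the threshold of 10 (or tolerance with 0.1) from the text.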

Heteroscedasticity Test
The heteroscedasticity test examines whether the variance of the residuals is unequal from one observation to another. A regression model meets the requirements when the variance of the residuals is constant across observations, which is called homoscedasticity; if it differs, this is called heteroscedasticity (Ghozali, 2005). Heteroscedasticity can be detected using a scatter plot of the ZPRED values (predicted values) against SRESID (residual values). If the significance probability is above the 5% level and, in the scatterplot, the dots spread above and below zero on the Y axis, the regression model can be concluded to contain no heteroscedasticity (Ghozali, 2005). A good model shows no particular pattern in the graph, such as points collecting in the middle, narrowing and then widening, or widening and then narrowing.
Heteroscedasticity can also be detected with the Glejser test, which regresses the absolute values of the residuals on the independent variables; if no independent variable significantly affects the absolute residuals, no heteroscedasticity is indicated.
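The Glejser test can be sketched in a few lines, again with SciPy standing in for SPSS and simulated data in place of the study’s actual predictor and residuals:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated predictor and homoscedastic residuals from a hypothetical fitted model
x = rng.normal(size=80)
residuals = rng.normal(size=80)

# Glejser test: regress |residuals| on the predictor; an insignificant slope
# (p > 0.05) suggests no heteroscedasticity
slope, intercept, r_value, p_value, stderr = stats.linregress(x, np.abs(residuals))
print(f"slope = {slope:.3f}, p = {p_value:.3f}")
print("no heteroscedasticity indicated" if p_value > 0.05
      else "heteroscedasticity indicated")
```

With several predictors, the same regression is run with all of them, and each coefficient’s significance is checked.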

Multiple Regression Analysis
Multiple linear regression divides the variables into independent variables (X) and a dependent variable (Y). This analysis shows how the dependent variable is influenced by more than one independent variable.
Regression analysis is conducted to examine how the dependent variable can be predicted from the independent variables, so that increases or decreases in the dependent variable can be explained by increases or decreases in the independent variables. The form of the moderated multiple regression used in this research is:
Y = a + b1X1 + b2X2 + b3X1X2
Description:
a : constant (intercept)
b1, b2, b3 : regression coefficients
Y : dependent variable, which is job performance
X1 : independent variable, which is performance appraisal
X2 : employee development (the moderating variable, entered as a predictor)
X1X2 : interaction between performance appraisal and employee development
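Fitting the equation above amounts to ordinary least squares on a design matrix containing a column of ones, X1, X2, and their product. A sketch with NumPy as a stand-in for SPSS, on simulated data whose true coefficients are known so the recovery can be checked:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100

# Simulated standardized predictor scores
x1 = rng.normal(size=n)   # performance appraisal (X1)
x2 = rng.normal(size=n)   # employee development (X2)

# Simulate Y = a + b1*X1 + b2*X2 + b3*X1*X2 + noise with known coefficients
y = 2.0 + 0.5 * x1 + 0.8 * x2 + 0.3 * x1 * x2 + rng.normal(scale=0.1, size=n)

# Design matrix for the moderated regression: intercept, X1, X2, and X1*X2
design = np.column_stack([np.ones(n), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

a, b1, b2, b3 = coef
print(f"a = {a:.2f}, b1 = {b1:.2f}, b2 = {b2:.2f}, b3 = {b3:.2f}")
```

A significant b3 is what indicates that employee development moderates the performance appraisal-job performance relationship.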

Hypothesis Test
According to Sugiyono (2006), formulating the research hypothesis is the third step in research, after the researcher puts forward the underlying theory and framework. A hypothesis is defined as a temporary answer to the formulation of the research problem; it is basically a proposition or assumption that may be true and is often used as a basis for decision making, problem solving, or further research. To be tested, a hypothesis must be expressed quantitatively. Statistical hypothesis testing is a procedure that allows a decision to be made: the decision to reject or not reject the hypothesis being tested.
In accepting or rejecting a hypothesis under test, one thing must be understood: rejecting a hypothesis means concluding that the hypothesis is wrong, while accepting a hypothesis merely implies that we lack evidence to believe otherwise. The hypothesis testing procedure determines H0 and Ha as follows.
Based on the research objectives, the hypothesis tests below use a confidence level of 95%, so the limit on the error rate is 5%, or 0.05.
The basis for decision making:
If Sig ≥ 0.05, then H0 is accepted.
If Sig < 0.05, then H0 is rejected.
The hypotheses to be tested are based on the research objectives and on the variables used in this research, where:
X: Performance Appraisal
Y: Job Performance
Z: Employee Development 