The Effect of Value Stream Mapping on Operational Performance in a Company

The purpose of this study was to determine the effect of value stream mapping on operational performance. Empirical data for the study were taken from a survey at PT Pertiwi Agung, a company engaged in the pharmaceutical industry. By its type of exploration, the research is causal-associative. The sampling technique used is probability sampling, which gives each member of the population an equal chance of being selected as a sample member. The data used are primary data derived from questionnaires distributed at PT Pertiwi Agung. Data were analyzed with the Structural Equation Model (SEM) using AMOS version 22; data testing covered validity and reliability tests, a normality test, a model suitability test, and hypothesis tests. The results of the study show that value stream mapping affects operational performance: every one-unit increase in value stream mapping increases operational performance by 0.841 units.

Figure 1. Framework. Sources: Gaspersz in Febiola (2011) and Gaspersz in Naibaho (2012) (Gaspersz, V., dan Avanti, 2011). Figure 1 shows that the value stream mapping variable is measured by the indicators Raw Material Marking (VSM1) and Raw Material Testing (VSM2). The operational performance variable is measured by the indicators production results on target (KO1), absence of product failure (KO2), training conducted for employees (KO3), good delivery performance (KO4), and a good production process (KO5). Based on previous research and the framework above, the hypothesis compiled is: H1: Value stream mapping (VSM) has a positive effect on operational performance.

Research Method
The research method used in this study is causal-associative research with a survey approach, for example by distributing questionnaires, conducting structured interviews, and so on. Causal-associative research aims to determine the relationship between two or more variables; in such research a model can be built that serves to explain, predict, and control a phenomenon (Sugiyono, 2015). Meanwhile, according to (Ferdinand, 2014), causality research seeks an explanation in the form of cause and effect between several concepts, several variables, or strategies developed by management.

Operational definitions of the variables (Source: data processed, 2018):
Value Stream Mapping (VSM) (Goriwondo et al., 2011):
1. Marking of raw materials (VSM1): a marking process is carried out for each available raw material.
2. Testing of raw materials (VSM2): test results show quality raw materials.
Operational Performance (KO): company performance is a complete view of the state of the company for a certain period of time, a result or achievement influenced by the company's operational activities in utilizing the resources it owns (Srimindarti, 2004):
1. Production results are on target (KO1): the company is able to reach the targeted sales level.
2. Absence of failed products (KO2): the company is able to produce products that do not fail (good products).
3. Training is conducted for employees (KO3): the company always provides training to existing employees.
4. Delivery performance is good (KO4): the company is able to meet the promised delivery time of the goods.
5. The production process is good (KO5).

The questionnaires for the two variables above use an interval scale (distance scale): an ordinal scale that has equal distances between the points of its ranking categories; interval-type data belong to the quantitative data group. (European Journal of Business and Management, www.iiste.org, ISSN 2222-1905 (Paper), ISSN 2222-2839 (Online), Vol.12, No.18, 2020)

Population and Sample
Population is a generalization area consisting of objects or subjects that have certain qualities and characteristics determined by the researcher to be studied, from which conclusions are then drawn. The population is thus not just people but includes all the characteristics or properties possessed by the subjects or objects (Sugiyono, 2015). The population in this study consisted of the 9 production departments, comprising the foreman and assistant foreman of each department, for a total of 151 people; they were included because they hold strategic positions, received training, and are members of the implementation team of the lean manufacturing system. The operator level was not involved in the survey. The standard error chosen for this study was 5 percent, which corresponds to a 95 percent confidence level.
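The paper does not state which sample-size formula was applied to the 151-person population; Slovin's formula is a common choice at a 5 percent margin of error, so the sketch below (an assumption, not the authors' stated method) shows how a minimum sample would be derived:

```python
import math

def slovin(population: int, error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up."""
    return math.ceil(population / (1 + population * error ** 2))

# Population of 151 foremen/assistant foremen, 5% margin of error.
n_min = slovin(151, 0.05)
```

At e = 0.05 this yields a minimum of about 110 respondents; the 105 questionnaires actually analyzed fall slightly below that figure, so the exact rule the authors applied may differ.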

Data and Data Collection Methods
The data used in this study are primary data: data obtained directly from the research subjects/respondents through the questionnaire, with the respondents being employees of the Pertiwi Agung production department. The data collection procedures and techniques used in this study are: 1) Questionnaire (field research/survey): a way to collect the main data directly at the research location in order to obtain data and other information; the instrument used was a questionnaire distributed to the employees of PT Pertiwi Agung, the object of research. 2) Interview: used to check and obtain more complete and comprehensive data related to the object. 3) Documentation: the researcher collects data related to the history or records needed in the study, for example data on the total number of employees at PT Pertiwi Agung.

Data Analysis Method
The Structural Equation Modeling (SEM) method is a multivariate analysis that can analyze relationships between variables in a complex way. The data analysis method used in this study consists of three stages: data testing, model suitability testing, and hypothesis testing. The use of SEM makes it possible to test relationships between complex variables and to obtain a comprehensive picture of the whole model (Ghozali, 2005). Hypothesis testing was done using the AMOS version 22 program to analyze causality relationships in the proposed structural model between the independent and dependent variables, while at the same time checking the validity and reliability of the overall research instrument.

Hypothesis testing
According to (Ferdinand, 2014), testing a model using SEM follows the steps below, which will be used for hypothesis testing.

Development of a Theory-Based Model
As theoretical model development, the research topic is explored in depth, and the hypothesized variables must be supported by strong theoretical justification, because SEM confirms the suitability of the data with the theory.

Development of Path Diagrams.
At this stage each latent construct is entered into the model together with the measured indicator variables of that construct. Although this identification can be expressed in equations, it is easier to represent the process with a diagram, so the theoretical model that has been built is illustrated in a path diagram to be estimated. In SEM modeling, researchers usually work with constructs or factors: concepts that have a sufficient theoretical footing to explain various forms of relationships. The constructs built in the path diagram can be divided into two groups, namely exogenous constructs and endogenous constructs. Exogenous constructs, also known as source variables or independent variables, are not predicted by other variables in the model. Following the research title, the influence of value stream mapping (VSM) on operational performance at PT Pertiwi Agung can be described in the path diagram shown in Figure 2.

Table 3. Notation used in the path diagram (Source: Sugiyono, 2011)
- λ (lambda): loading factor
- γ (gamma): coefficient of the influence of an exogenous variable on an endogenous variable
- ε (epsilon): measurement error in a manifest variable

Selection of Input Matrix and Model Estimation
The data input matrix used is the variance/covariance matrix or the correlation matrix. A suitable sample size for SEM is 100-200, while the minimum sample size is 5 observations for each estimated parameter. The model estimation methods available in the AMOS program are the Maximum Likelihood Estimation Method, the Generalized Least Squares Estimation Method, Unweighted Least Squares Estimation (ULS), Scale-Free Least Squares Estimation (SLS), and Asymptotically Distribution-Free Estimation (ADF).

Evaluating the Goodness-of-Fit Criteria
In this step the suitability of the model is evaluated through various goodness-of-fit criteria. 1) Evaluation of SEM assumptions: normality, using a critical value of ±2.58 at a significance level of 0.01; if the Z-value is greater than the critical value, the data distribution is suspected not to be normal. a) Outliers are observations or data with unique characteristics that look very different from the other observations, both for a single variable and for combinations of variables. b) Multicollinearity and singularity: in statistics, multicollinearity (also called collinearity) is a phenomenon where one predictor variable in a multiple regression model can be linearly predicted from the others with a substantial degree of accuracy. It is necessary to observe the determinant of the sample covariance matrix: a determinant that is small or near zero indicates the presence of multicollinearity or singularity, and this determines whether the data can be used for the research.
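The univariate normality check described above (a skewness critical ratio compared against ±2.58) can be sketched in plain Python. The dataset here is illustrative, not the study's data:

```python
import math

def skewness_cr(data):
    """Critical ratio of sample skewness: c.r. = skew / sqrt(6/N).
    |c.r.| > 2.58 suggests non-normality at the 0.01 level."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in data) / n  # third central moment
    skew = m3 / m2 ** 1.5
    return skew / math.sqrt(6 / n)

# Symmetric illustrative data: skewness (and hence its c.r.) is zero.
scores = [1, 2, 3, 4, 5] * 4
cr = skewness_cr(scores)
is_normal = abs(cr) <= 2.58
```

A strongly right-skewed sample such as `[1, 1, 1, 2, 10] * 4` produces a critical ratio above 2.58 and would be flagged as non-normal.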

2) Conformity Test and Statistical Test
To conduct a suitability test and a statistical test, several fit indices and their cut-off values are used in testing a model. a) The chi-square (X2) statistic: the smaller the value, the better the model; the model is accepted based on a probability with a cut-off value of p > 0.05 or p > 0.10. b) RMSEA (The Root Mean Square Error of Approximation) is an index used to compensate for chi-square in large samples; an RMSEA value smaller than or equal to 0.08 indicates acceptance of the model based on its degrees of freedom. c) GFI (Goodness of Fit Index) is a non-statistical measure with a range of values from 0 to 1; high values indicate a "better fit". d) AGFI (Adjusted Goodness of Fit Index) is a criterion that takes into account the weighted proportion of variance of the sample covariance matrix; the recommended level is an AGFI value equal to or greater than 0.90. e) CMIN/DF (The Minimum Sample Discrepancy Function Divided by Degrees of Freedom) is the chi-square statistic X2 divided by its degrees of freedom, also called relative X2; a relative X2 less than 2.0 or 3.0 indicates an acceptable fit between model and data. f) TLI (Tucker Lewis Index) is an incremental index comparing the tested model against a baseline model; the recommended reference value for accepting a model is ≥ 0.95, and a value close to 1 indicates a very good fit.
3) Construct Reliability. Construct reliability is computed as
Construct Reliability = (Σ Std. Loading)² / [(Σ Std. Loading)² + Σ εj]
where Std. Loading is the standardized loading of each indicator obtained from the computation results, and εj is the measurement error of each indicator, εj = 1 − (Std. Loading)².
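The chi-square-based indices above can be computed directly from a model's chi-square, degrees of freedom, and sample size. The numbers below are illustrative placeholders, not the study's actual AMOS output:

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA = sqrt(max((chi2 - df) / (df * (n - 1)), 0))."""
    return math.sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))

def cmin_df(chi2: float, df: int) -> float:
    """Relative chi-square (CMIN/DF)."""
    return chi2 / df

def tli(chi2_m: float, df_m: int, chi2_b: float, df_b: int) -> float:
    """Tucker-Lewis Index against a baseline (independence) model."""
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)

# Illustrative values: model chi2 = 20 on 13 df, N = 105,
# baseline chi2 = 500 on 21 df.
fit = {
    "RMSEA": rmsea(20, 13, 105),   # <= 0.08 -> acceptable
    "CMIN/DF": cmin_df(20, 13),    # < 2.0  -> acceptable
    "TLI": tli(20, 13, 500, 21),   # >= 0.95 -> acceptable
}
```

With these placeholder inputs all three indices fall inside the cut-offs quoted in the text.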

4) Validity Test. a) Convergent validity: the items or indicators of a latent construct should converge, that is, share a high proportion of variance; this is called convergent validity. Construct validity can be assessed from the loading factor values: when construct validity is high, high loadings on a factor (latent construct) indicate that the indicators converge at one point. Two requirements must be met: first, the loading factor must be significant; second, the standardized loading estimate must be 0.50 or greater, and ideally should be 0.70. b) Discriminant validity measures the extent to which a construct is truly different from other constructs. A high discriminant validity value provides evidence that a construct is unique and able to capture the phenomenon being measured. It is tested by comparing the square root of the AVE (√AVE) with the correlations between constructs.
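Construct reliability, AVE, and the √AVE discriminant-validity comparison can be sketched from a set of standardized loadings. The loadings and the inter-construct correlation below are made-up illustrations, not the study's estimates:

```python
import math

def construct_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of errors),
    where each indicator's error is 1 - loading^2."""
    s = sum(loadings)
    errors = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + errors)

def ave(loadings):
    """Average Variance Extracted: mean of the squared loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Illustrative standardized loadings for a two-indicator construct.
vsm_loadings = [0.8, 0.7]
cr_value = construct_reliability(vsm_loadings)
ave_value = ave(vsm_loadings)

# Discriminant validity (Fornell-Larcker): sqrt(AVE) should exceed the
# construct's correlation with other constructs (0.6 is illustrative).
discriminant_ok = math.sqrt(ave_value) > 0.6
```

With these loadings CR is about 0.72 and AVE about 0.57, so √AVE (≈0.75) exceeds the illustrative 0.6 correlation.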
Model Interpretation and Modification
When the model has been accepted, the researcher can consider modifying the model to improve the theoretical explanation or the goodness of fit. If the model is modified, it must be cross-validated (estimated with separate data) before the modified model is accepted. Model modification can be guided by the modification indices: the value of a modification index equals the decrease in chi-square that would occur if the corresponding coefficient were estimated.

Results and Discussion
One of the data collection techniques in this study was a questionnaire distributed to all respondents at PT Pertiwi Agung. The questionnaire consists of statements based on the 7 indicators studied. The questionnaire was distributed to 105 respondents, which fulfilled the requirements for data processing: based on (Hair Jr et al., 2014), the number of samples follows the formula of 5 times the number of indicator variables, so the minimum sample in this study was 5 x 7 = 35 people. Table 5 (length of work of the respondents) shows that 28 respondents (27%) have worked for more than 21 years, 12 respondents (11%) between 11 and 20 years, 30 respondents (29%) between 6 and 10 years, and 35 respondents (33%) for less than 5 years. Thus, most respondents are those who have worked for less than 5 years.
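The minimum-sample rule and the tenure shares above can be checked with a few lines of Python (the counts are taken from the text; the percentages are recomputed from those counts):

```python
# Hair et al.'s rule of thumb: at least 5 observations per indicator.
indicators = 7
min_sample = 5 * indicators  # 35

# Tenure counts reported for the respondents.
tenure = {"<5 yrs": 35, "6-10 yrs": 30, "11-20 yrs": 12, ">21 yrs": 28}
total = sum(tenure.values())  # 105 respondents in all

# Share of each group, rounded to a whole percent.
shares = {k: round(100 * v / total) for k, v in tenure.items()}
largest_group = max(tenure, key=tenure.get)  # the "<5 yrs" group
```

The counts sum to the 105 questionnaires analyzed, and the largest group is indeed the under-5-years cohort.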

Model Specifications and Test Validity and Reliability
The specification of the model is based on the theory that is the basis of this research. The latent variables in this study are divided into two, namely exogenous latent variables and endogenous latent variables. Exogenous latent variables are Value stream mapping (VSM), while endogenous latent variables are operational performance (KO).
- CFA of Value Stream Mapping (VSM). The validity test uses the CFA test, or construct (indicator) validity test, which measures whether the indicators are able to reflect their latent variable. The results meet the criteria of a Critical Ratio (CR) value > 1.96 with Probability (P) < 0.05; the symbol *** marks significance at < 0.001.
The CFA test, or construct validity test, is intended to verify that each indicator can explain its construct. Indicators used as measures of the research variables are those with a p-value < 0.05 and a loading factor > 0.5, while indicators that fail these criteria (p-value > 0.05 or loading factor < 0.5) are eliminated from the model (Ghozali, 2014).
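The retention rule above can be expressed as a small filter. The indicator names, p-values, and loadings below are illustrative, not the study's estimates:

```python
# Keep an indicator only if p < 0.05 and standardized loading > 0.5.
def keep(p_value: float, loading: float) -> bool:
    return p_value < 0.05 and loading > 0.5

# Illustrative CFA output: indicator -> (p-value, standardized loading).
cfa_results = {
    "VSM1": (0.001, 0.78),
    "VSM2": (0.001, 0.81),
    "KO3": (0.210, 0.42),  # fails both criteria -> dropped
}
retained = [name for name, (p, l) in cfa_results.items() if keep(p, l)]
```

Only the indicators passing both thresholds survive into the measurement model.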
Confirmatory factor analysis functions to describe the relationship between the measured variable (observed variable) with its latent variable. In this case, CFA shows the contribution of the measured variable to the latent variable expressed by the loading factor. The latent variables tested in this analysis are the Value stream mapping (VSM) and Company Operational Performance (KO) variables.
According to (Ghozali, 2014), the first thing to check is the significance value (P value): if it is greater than 0.05, the indicator is removed from the model. The second is the standardized loading factor (Estimate value): if it is below 0.50, the indicator is removed from the model because it is considered invalid. Value stream mapping (VSM) is one of the factors that determines operational performance (KO). The unidimensionality of its dimensions is tested through confirmatory factor analysis, with the results shown in Figure 4 (Source: processed data, 2018).
- CFA of Operational Performance (KO). The company's operational performance is an endogenous variable; the unidimensionality of its dimensions is tested through confirmatory factor analysis, with the results shown in Figure 6 (CFA test of the operational performance variable). Table 7 shows the regression weight output for the operational performance variable: all probability values are marked ***, meaning significant at the 0.001 level (less than 0.05), so the regression weights show that the operational performance variable is valid. In these computations, the standard loading is the standardized loading of each indicator obtained from the computation results, and the measurement error of each indicator is one minus its squared standardized loading.
- Normality Test. The normality test examines whether the residual (confounding) variables of the model have a normal distribution (Ghozali, 2005). A good regression model requires data that are normal or close to normal; data are stated to be normally distributed if the significance is greater than 5% or 0.05 (Priyatno, 2008). Data normality is one of the requirements in SEM, especially when the model is estimated with the Maximum Likelihood technique. To test normality, statistical tests such as observing the skewness of the data can be used. Multivariate normality is evaluated using the critical ratio (c.r.) of the multivariate kurtosis: if it lies in the range -2.58 to 2.58, the data are multivariate normally distributed (Haryono, 2017). Table 10 shows that the overall (multivariate) result is normal, because the multivariate value of 2.490 lies within the range -2.58 to 2.58. Had the data not been normal, outlier data would need to be deleted so that the remaining data meet the normality assumption. Outlier data in this study can be identified through the Mahalanobis d-squared output; this step is suitable for research that uses large data.
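Mahalanobis d-squared flags multivariate outliers by measuring each observation's distance from the centroid in covariance-scaled units. A minimal two-variable sketch in plain Python, with made-up data (the last point is an obvious outlier):

```python
def mahalanobis_d2(points):
    """Squared Mahalanobis distance of each 2-D point from the centroid,
    using the sample covariance matrix (n - 1 denominator)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    dev = [(x - mx, y - my) for x, y in points]
    sxx = sum(dx * dx for dx, _ in dev) / (n - 1)
    syy = sum(dy * dy for _, dy in dev) / (n - 1)
    sxy = sum(dx * dy for dx, dy in dev) / (n - 1)
    det = sxx * syy - sxy * sxy  # assumes covariance is non-singular
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det  # 2x2 inverse
    return [dx * (ixx * dx + ixy * dy) + dy * (ixy * dx + iyy * dy)
            for dx, dy in dev]

# Four near-aligned points plus one clear outlier.
pts = [(1, 2), (2, 3), (3, 4), (4, 5), (10, 1)]
d2 = mahalanobis_d2(pts)
outlier_index = max(range(len(d2)), key=d2.__getitem__)  # index 4
```

In practice each d-squared value is compared against a chi-square critical value for the number of variables; observations exceeding it are deleted, as described in the text.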
- Model Suitability Test. The purpose of the model fit (goodness-of-fit) test is to find out whether the manifest variables (indicator variables) can explain the latent variables. The goodness-of-fit test, or model feasibility test, measures the accuracy of the sample regression function in estimating the actual value. Statistically, the goodness-of-fit test can be done through the coefficient of determination, the F statistic, and the t statistic. According to (Ghozali, 2011), a statistical calculation is called statistically significant if its test statistic lies in the critical region (the region where H0 is rejected); conversely, it is called insignificant if the test statistic lies in the region where H0 is accepted. Modification of the model is done by connecting (covariating) variables in the model or removing indicator variables, as recommended by AMOS in the Modification Indices output.

Figure 7 Full SEM Model after Modification
According to (Ghozali, 2012), overall goodness of fit can be assessed based on a minimum of 5 criteria. In empirical research, a researcher is not required to meet all goodness-of-fit criteria; this depends on the judgment of each researcher. The recommendations for obtaining a fit model are shown in the appendix. After eliminating and connecting variables as recommended by the modification indices, the model is as shown in Figure 7. Table 11 shows the goodness of fit after the model modification: more than 5 indicators meet the expected criteria.
- Hypothesis Test (Analysis of the Influence of Variables). Once the structural model as a whole can be considered fit, the next step is to examine the effect of the independent variable on the dependent variable. The basis for the decision is: if the P value (Probability) > 0.05 then H0 is accepted and there is no influence; if the P value < 0.05 then H0 is rejected and there is an influence (Santoso, 2015). Based on Table 12, it can be concluded that: 1) There is a significant influence of value stream mapping on operational performance, because the probability value is less than 0.05 (*** < 0.05); the positive estimate value of 0.841 means that the effect is positive.
- Structural Model Analysis. Path analysis is an extension of linear regression analysis: the use of regression analysis to estimate causality relationships between variables that have been predetermined based on theory. 1) Structural equation. As explained earlier, the model used in this study is the modified model. Based on the model fit results, in which five indicators were used (Chi-Square, RMSEA, TLI, CFI, and IFI), the obtained structural equation is:
KO = 0.841 VSM + e
where KO = company operational performance, VSM = value stream mapping, and e = error. Based on the model, every one-unit increase in value stream mapping will increase operational performance by 0.841 units. 2) Direct influence. This analysis determines the magnitude of the direct effect coefficient, so that it can be known whether a mediating variable mediates the effect of the independent variable on the dependent variable or not. Table 13 shows the results of the influence of value stream mapping on the company's operational performance.
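The estimated effect of VSM on KO implies a constant marginal effect of 0.841 per unit. A trivial sketch (the coefficient is taken from the text; the error term is set to zero for illustration):

```python
VSM_COEFFICIENT = 0.841  # estimated effect of VSM on KO reported in the text

def predicted_ko(vsm: float, error: float = 0.0) -> float:
    """Predicted operational performance: KO = 0.841 * VSM + e."""
    return VSM_COEFFICIENT * vsm + error

# A one-unit increase in VSM raises predicted KO by 0.841 units.
delta = predicted_ko(2.0) - predicted_ko(1.0)
```

This makes explicit the paper's interpretation: the difference in predicted KO between any two VSM values one unit apart is 0.841.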

Conclusion
Based on the results of the SEM analysis, this section has discussed the calculations that were made. This study aimed to determine the effect of value stream mapping (VSM) on operational performance. The testing was carried out through the stated hypothesis, showing how the construct influences the other construct.