SPSS 13.0 Manual
SPSS for Instruction
SPSS Books and Manuals
Learn how to work with SPSS software more productively and efficiently. A number of SPSS books and manuals, for beginners and expert users alike, can show you how.
Below are descriptions of books and manuals of interest to SPSS users. Use these books to teach yourself SPSS or to supplement techniques you've learned in the classroom.
SPSS 13.0 Guide to Data Analysis by Marija Norusis SPSS 13.0 Guide to Data Analysis is a friendly introduction to both data analysis and SPSS software. Easy-to-understand explanations and in-depth content make this guide both an excellent supplement to other statistics texts and an ideal primary text for any introductory data analysis course. With this book, you will learn how to describe data, test simple hypotheses, and examine relationships using real datasets. Exercises at the end of each chapter let you test your skills. SPSS 13.0 Guide to Data Analysis is designed for use with SPSS 13.0 for Windows, including the SPSS Student Version. A data CD is included with this book. To view a description of this book, table of contents, and preface, or to purchase a copy, visit www.prenhall.com. ISBN: 0-13-186535-8
SPSS 13.0 Statistical Procedures Companion by Marija Norusis SPSS 13.0 Statistical Procedures Companion is an introduction to SPSS and its most frequently used statistical procedures. The book takes you from creating, cleaning, and describing data files to analyzing the data using sophisticated statistical procedures such as logistic regression; factor, cluster, and discriminant analysis; log linear models; and general linear models. You'll find discussion of the statistical backgrounds for each procedure and a review of the basics of hypothesis testing. The book offers practical suggestions, emphasizing topics that arise when analyzing real data for presentations, reports, and dissertations. A data CD is included with this book. To view a description of this book, table of contents, and preface, or to purchase a copy, visit www.prenhall.com. ISBN: 0-13-186539-0
SPSS 13.0 Advanced Statistical Procedures Companion by Marija Norusis SPSS 13.0 Advanced Statistical Procedures Companion covers the procedures in two popular SPSS add-on modules, SPSS Advanced Models and SPSS Regression Models. The book provides introductions to and examples for procedures such as loglinear and logit analysis for categorical data; ordinal, multinomial, two-stage, and weighted least squares regression; Kaplan-Meier and Cox regression models for survival analysis; and variance components analysis. A data CD is included with this book. To view a description of this book, table of contents, and preface, or to purchase a copy, visit www.prenhall.com. ISBN: 0-13-186540-4
SPSS 13.0 Brief Guide SPSS 13.0 Brief Guide uses a convenient tutorial system to acquaint beginning users with the components of the SPSS system. Learn how to use the Data Editor, import data into SPSS, work with statistics and output, create and edit charts, modify data values, manage syntax and data files, calculate new data values, and sort and select data. To view a description of this book and table of contents, or to purchase this book, visit www.spss.com/estore/softwaremenu/book.cfm. ISBN: 0-13-154242-7
SPSS Base 13.0 User's Guide SPSS Base 13.0 User's Guide provides a thorough explanation of SPSS features. Topics covered include the Text Wizard, Database Wizard, Data Editor, scripting, data definition and modification, file and output management (including the SPSS Viewer and report cubes), statistical and graphical procedures (including output examples), production mode operation, and utilities for getting information (including help) and controlling the environment. To view a description of this book and table of contents, or to purchase this book, visit www.spss.com/estore/softwaremenu/book.cfm. ISBN: 0-13-185723-1
SPSS Programming and Data Management: A Guide for SPSS and SAS Users, Second Edition by Raynald Levesque SPSS Programming and Data Management: A Guide for SPSS and SAS Users, Second Edition documents the wealth of functionality beneath the SPSS user interface. It includes detailed examples of command syntax, the macro facility, scripting, and the output management system. The accompanying CD-ROM includes the command and data files used in the book. The book also contains a chapter for SAS users, showing equivalent SPSS code for many common data management tasks. With knowledge gained from this book, you will be able to use the many tools available within SPSS to import data from almost any source, clean it, transform it, merge it with other data, and get it into the condition required to produce reliable models and informative results. To view the complete table of contents or to purchase this book, visit www.spss.com/estore/softwaremenu/book.cfm. ISBN: 0-56827-355-X
Manuals for add-on modules and stand-alone software User manuals are available for all SPSS add-on modules and many stand-alone products, such as Amos. Visit www.spss.com/estore/softwaremenu/book.cfm for a full list of available manuals, or to make a purchase.
To learn more, please visit www.spss.com. For SPSS office locations and telephone numbers, go to www.spss.com/worldwide.
SPSS is a registered trademark and the other SPSS products named are trademarks of SPSS Inc. All other names are trademarks of their respective owners. © 2005 SPSS Inc. SB13INS-0405
Additional copies of SPSS product manuals may be purchased directly from SPSS Inc. Visit the SPSS Web Store at http://www.spss.com/estore, or contact your local SPSS office, listed on the SPSS Web site at http://www.spss.com/worldwide. For telephone orders in the United States and Canada, call SPSS Inc. at 800-543-2185. For telephone orders outside of North America, contact your local office, listed on the SPSS Web site. The SPSS Statistical Procedures Companion, by Marija Norusis, has been published by Prentice Hall. A new version of this book, updated for SPSS 13.0, is planned. The SPSS Advanced Statistical Procedures Companion, also based on SPSS 13.0, is forthcoming. The SPSS Guide to Data Analysis for SPSS 13.0 is also in development. Announcements of publications available exclusively through Prentice Hall will be available on the SPSS Web site at http://www.spss.com/estore (select your home country, and then click Books).
Tell Us Your Thoughts
Your comments are important. Please let us know about your experiences with SPSS products. We especially like to hear about new and interesting applications using the SPSS system. Please send e-mail to firstname.lastname@example.org or write to SPSS Inc.,
Attn.: Director of Product Planning, 233 South Wacker Drive, 11th Floor, Chicago, IL 60606-6412.
About This Manual
This manual documents the graphical user interface for the procedures included in the Regression Models add-on module. Illustrations of dialog boxes are taken from SPSS for Windows. Dialog boxes in other operating systems are similar. Detailed information about the command syntax for features in this module is provided in the SPSS Command Syntax Reference, available from the Help menu.
If you would like to be on our mailing list, contact one of our offices, listed on our Web site at http://www.spss.com/worldwide.
Choosing a Procedure for Binary Logistic Regression Models

Logistic Regression
Logistic Regression Set Rule . . . 6
Logistic Regression Variable Selection Methods . . . 6
Logistic Regression Define Categorical Variables . . . 7
Logistic Regression Save New Variables . . . 9
Logistic Regression Options . . . 10
LOGISTIC REGRESSION Command Additional Features . . . 11
Logistic Regression provides the following unique features:
- Hosmer-Lemeshow test of goodness of fit for the model
- Stepwise analyses
- Contrasts to define model parameterization
- Alternative cut points for classification
- Classification plots
- Model fitted on one set of cases applied to a held-out set of cases
- Saves predictions, residuals, and influence statistics
Multinomial Logistic Regression provides the following unique features:
- Pearson and deviance chi-square tests for goodness of fit of the model
- Specification of subpopulations for grouping of data for goodness-of-fit tests
- Listing of counts, predicted counts, and residuals by subpopulations
- Correction of variance estimates for over-dispersion
- Covariance matrix of the parameter estimates
- Tests of linear combinations of parameters
- Explicit specification of nested models
- Fit 1-1 matched conditional logistic regression models using differenced variables
Logistic regression is useful for situations in which you want to be able to predict the presence or absence of a characteristic or outcome based on values of a set of predictor variables. It is similar to a linear regression model but is suited to models where the dependent variable is dichotomous. Logistic regression coefficients can be used to estimate odds ratios for each of the independent variables in the model. Logistic regression is applicable to a broader range of research situations than discriminant analysis.
Example. What lifestyle characteristics are risk factors for coronary heart disease
(CHD)? Given a sample of patients measured on smoking status, diet, exercise, alcohol use, and CHD status, you could build a model using the four lifestyle variables to predict the presence or absence of CHD in a sample of patients. The model can then be used to derive estimates of the odds ratios for each factor to tell you, for example, how much more likely smokers are to develop CHD than nonsmokers.
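The arithmetic behind that interpretation can be sketched in a few lines. The coefficient values below are hypothetical, chosen only to illustrate how exp(B) acts as an odds ratio:

```python
import math

# Minimal sketch of how fitted logistic regression coefficients are used.
# The coefficient values are hypothetical, not from a real CHD study.
b0 = -2.3          # constant term
b_smoking = 0.7    # coefficient (B) for smoking status (1 = smoker)

def predicted_probability(smoker):
    """P(CHD) from the logistic model: 1 / (1 + exp(-(b0 + B*x)))."""
    z = b0 + b_smoking * smoker
    return 1.0 / (1.0 + math.exp(-z))

# exp(B) is the estimated odds ratio: the factor by which the odds of CHD
# are multiplied when the predictor increases by one unit.
odds_ratio = math.exp(b_smoking)

p_smoker = predicted_probability(1)
p_nonsmoker = predicted_probability(0)
print(round(odds_ratio, 3))   # ≈ 2.014: smokers' odds of CHD roughly double
```

With these made-up values, a smoker's predicted probability of CHD is about 0.17 versus about 0.09 for a nonsmoker, while the odds ratio exp(0.7) ≈ 2.01.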
Statistics. For each analysis: total cases, selected cases, valid cases. For each
categorical variable: parameter coding. For each step: variable(s) entered or removed, iteration history, −2 log-likelihood, goodness of fit, Hosmer-Lemeshow goodness-of-fit statistic, model chi-square, improvement chi-square, classification table, correlations between variables, observed groups and predicted probabilities chart, residual chi-square. For each variable in the equation: coefficient (B), standard error of B, Wald statistic, estimated odds ratio (exp(B)), confidence interval for exp(B), log-likelihood if term removed from model. For each variable not in the equation: score statistic. For each case: observed group, predicted probability, predicted group, residual, standardized residual.
Statistics and Plots. Allows you to request statistics and plots. Available options
are Classification plots, Hosmer-Lemeshow goodness-of-fit, Casewise listing of residuals, Correlations of estimates, Iteration history, and CI for exp(B). Select one of the alternatives in the Display group to display statistics and plots either At each step or, only for the final model, At last step.
Hosmer-Lemeshow goodness-of-fit statistic. This goodness-of-fit statistic is more
robust than the traditional goodness-of-fit statistic used in logistic regression, particularly for models with continuous covariates and studies with small sample sizes. It is based on grouping cases into deciles of risk and comparing the observed probability with the expected probability within each decile.
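As a rough illustration of the decile-of-risk idea (a simplified sketch, not SPSS's exact grouping or tie-handling rules):

```python
import random

# Sketch of the Hosmer-Lemeshow grouping idea: sort cases by predicted
# probability, split into deciles of risk, and compare observed with
# expected events per group. Data below are simulated, purely illustrative.
random.seed(0)
cases = [(p, 1 if random.random() < p else 0)
         for p in (random.uniform(0.05, 0.95) for _ in range(200))]

cases.sort(key=lambda c: c[0])
g = 10                       # number of risk groups (deciles)
size = len(cases) // g

hl = 0.0
for i in range(g):
    group = cases[i * size:(i + 1) * size]
    observed = sum(y for _, y in group)
    expected = sum(p for p, _ in group)
    n = len(group)
    pbar = expected / n
    # chi-square contribution for this decile
    hl += (observed - expected) ** 2 / (n * pbar * (1 - pbar))

print(round(hl, 2))  # compared against a chi-square with g - 2 df
```

A small statistic relative to the chi-square distribution with g − 2 degrees of freedom indicates no evidence of lack of fit.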
Probability for Stepwise. Allows you to control the criteria by which variables are entered into and removed from the equation. A variable is entered into the model if the probability of its score statistic is less than the Entry value, and it is removed if the probability is greater than the Removal value. To override the default settings, enter positive values for Entry and Removal. Entry must be less than Removal.
Classification cutoff. Allows you to determine the cut point for classifying cases. Cases with predicted values that exceed the classification cutoff are classified as positive, while those with predicted values smaller than the cutoff are classified as negative. To change the default, enter a value between 0.01 and 0.99.
Maximum Iterations. Allows you to change the maximum number of times that the model iterates before terminating.
Include constant in model. Allows you to indicate whether the model should include a
constant term. If disabled, the constant term will equal 0.
LOGISTIC REGRESSION Command Additional Features
The SPSS command language also allows you to:
- Identify casewise output by the values or variable labels of a variable.
- Control the spacing of iteration reports. Rather than printing parameter estimates after every iteration, you can request parameter estimates after every nth iteration.
- Change the criteria for terminating iteration and checking for redundancy.
- Specify a variable list for casewise listings.
- Conserve memory by holding the data for each split-file group in an external scratch file during processing.
See the SPSS Command Syntax Reference for complete syntax information.
Multinomial Logistic Regression is useful for situations in which you want to be able to classify subjects based on values of a set of predictor variables. This type of regression is similar to logistic regression, but it is more general because the dependent variable is not restricted to two categories.
Example. In order to market films more effectively, movie studios want to predict what type of film a moviegoer is likely to see. By performing a multinomial logistic regression, the studio can determine the strength of influence that a person's age, gender, and dating status have upon the type of film they prefer. The studio can then slant the advertising campaign for a particular movie toward a group of people likely to go see it.
Statistics. Iteration history, parameter coefficients, asymptotic covariance and correlation matrices, likelihood-ratio tests for model and partial effects, −2 log-likelihood. Pearson and deviance chi-square goodness of fit. Cox and Snell, Nagelkerke, and McFadden R² statistics. Classification: observed versus predicted frequencies by response category. Crosstabulation: observed and predicted frequencies (with residuals) and proportions by covariate pattern and response category.
Methods. A multinomial logit model is fit for the full factorial model or a
user-specified model. Parameter estimation is performed through an iterative maximum-likelihood algorithm.
Data. The dependent variable should be categorical. Independent variables can be factors or covariates. In general, factors should be categorical variables and covariates should be continuous variables.
Assumptions. It is assumed that the odds ratio of any two categories is independent of all other response categories. For example, if a new product is introduced to a market, this assumption states that the market shares of all other products are affected in equal proportion. Also, given a covariate pattern, the responses are assumed to be independent multinomial variables.
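The way a multinomial logit model turns linear predictors into response-category probabilities can be sketched as follows; the coefficients here are hypothetical, with the last category taken as the reference (its linear predictor fixed at 0):

```python
import math

# Minimal sketch of the multinomial logit probability calculation
# (softmax over the per-category linear predictors).
def category_probabilities(linear_predictors):
    """Return one probability per response category, summing to 1."""
    exps = [math.exp(z) for z in linear_predictors]
    total = sum(exps)
    return [e / total for e in exps]

# e.g. three film genres, with z = 0 for the reference genre;
# the z values are hypothetical linear predictors for one moviegoer
z = [0.8, -0.4, 0.0]
probs = category_probabilities(z)
print([round(p, 3) for p in probs])
```

Each case is then classified into the category with the largest probability, which is how the observed-versus-predicted classification table is built.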
Obtaining a Multinomial Logistic Regression
► From the menus choose: Analyze > Regression > Multinomial Logistic...
Figure 3-1 Multinomial Logistic Regression dialog box
► Select one dependent variable.
► Factors are optional and can be either numeric or categorical.
► Covariates are optional but must be numeric if specified.
Multinomial Logistic Regression Models
Figure 3-2 Multinomial Logistic Regression Model dialog box
By default, the Multinomial Logistic Regression procedure produces a model with the factor and covariate main effects, but you can specify a custom model or request stepwise model selection with this dialog box.
Specify Model. A main-effects model contains the covariate and factor main effects but no interaction effects. A full factorial model contains all main effects and all factor-by-factor interactions. It does not contain covariate interactions. You can create a custom model to specify subsets of factor interactions or covariate interactions, or request stepwise selection of model terms.
Factors and Covariates. The factors and covariates are listed, with (F) for factor and (C) for covariate.
Forced Entry Terms. Terms added to the forced entry list are always included in the model.
Stepwise Terms. Terms added to the stepwise list are included in the model according to one of the following user-selected methods:
Forward entry. This method begins with no stepwise terms in the model. At each step, the most significant term is added to the model until none of the stepwise terms left out of the model would have a statistically significant contribution if added to the model.
Backward elimination. This method begins by entering all terms specified on the
stepwise list into the model. At each step, the least significant stepwise term is removed from the model until all of the remaining stepwise terms have a statistically significant contribution to the model.
Forward stepwise. This method begins with the model that would be selected by
the forward entry method. From there, the algorithm alternates between backward elimination on the stepwise terms in the model and forward entry on the terms left out of the model. This continues until no terms meet the entry or removal criteria.
Backward stepwise. This method begins with the model that would be selected by
the backward elimination method. From there, the algorithm alternates between forward entry on the terms left out of the model and backward elimination on the stepwise terms in the model. This continues until no terms meet the entry or removal criteria.
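The forward entry method above can be sketched as a loop. The candidate terms and their p-values here are hypothetical, and for simplicity they are held fixed, whereas a real implementation would recompute each term's significance after every entry:

```python
# Illustrative sketch of the forward entry loop. ENTRY_P and the
# p_value_if_added table are stand-ins: a real implementation would refit
# the model and compute a score or likelihood-ratio test at each step.
ENTRY_P = 0.05

# hypothetical p-values each candidate term would have if added
p_value_if_added = {"age": 0.001, "gender": 0.03,
                    "age*gender": 0.20, "income": 0.40}

model_terms = []
candidates = set(p_value_if_added)

while candidates:
    # pick the most significant remaining term
    best = min(candidates, key=lambda t: p_value_if_added[t])
    if p_value_if_added[best] >= ENTRY_P:
        break  # nothing left would make a significant contribution
    model_terms.append(best)
    candidates.remove(best)

print(model_terms)  # ['age', 'gender']
```

Backward elimination runs the mirror image of this loop, repeatedly removing the least significant term whose p-value exceeds the removal threshold.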
Include intercept in model. Allows you to include or exclude an intercept term for the model.
Information Criteria. Prints the Akaike information criterion (AIC) and Schwarz's Bayesian information criterion (BIC).
Cell probabilities. Prints a table of the observed and expected frequencies (with
residual) and proportions by covariate pattern and response category.
Classification table. Prints a table of the observed versus predicted responses.
Goodness-of-fit chi-square statistics. Prints Pearson and likelihood-ratio chi-square statistics. Statistics are computed for the covariate patterns determined by all factors and covariates or by a user-defined subset of the factors and covariates.
Parameters. Statistics related to the model parameters.
Estimates. Prints estimates of the model parameters, with a user-specified level of confidence.
Likelihood-ratio tests. Prints likelihood-ratio tests for the partial effects of the model. The test for the overall model is printed automatically.
Asymptotic correlations. Prints the matrix of parameter estimate correlations.
Asymptotic covariances. Prints the matrix of parameter estimate covariances.
Define Subpopulations. Allows you to select a subset of the factors and covariates in order to define the covariate patterns used by cell probabilities and the goodness-of-fit tests.
Multinomial Logistic Regression Criteria
Figure 3-5 Multinomial Logistic Regression Convergence Criteria dialog box
You can specify the following criteria for your Multinomial Logistic Regression:
Iterations. Allows you to specify the maximum number of times you want to cycle through the algorithm, the maximum number of steps in the step-halving, the convergence tolerances for changes in the log-likelihood and parameters, how often the progress of the iterative algorithm is printed, and at what iteration the procedure should begin checking for complete or quasi-complete separation of the data.
Log-likelihood convergence. Convergence is assumed if the absolute change in the log-likelihood function is less than the specified value. The criterion is not used if the value is 0. Specify a non-negative value.
Parameter convergence. Convergence is assumed if the absolute change in the
parameter estimates is less than this value. The criterion is not used if the value is 0.
Delta. Allows you to specify a non-negative value less than 1. This value is added to
each empty cell of the crosstabulation of response category by covariate pattern. This helps to stabilize the algorithm and prevent bias in the estimates.
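A minimal sketch of the delta adjustment, assuming a small crosstabulation of counts:

```python
# Sketch of the delta adjustment: a small non-negative constant less than 1
# is added to every empty cell of the crosstabulation of response category
# by covariate pattern. The counts and delta value below are hypothetical.
delta = 0.5

# observed counts: rows = covariate patterns, columns = response categories
crosstab = [[12, 0, 3],
            [0, 7, 5]]

adjusted = [[count + delta if count == 0 else count for count in row]
            for row in crosstab]
print(adjusted)  # [[12, 0.5, 3], [0.5, 7, 5]]
```

Non-empty cells are left untouched; only the zeros are padded, which keeps log-based quantities finite and stabilizes the iterative algorithm.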
Singularity tolerance. Allows you to specify the tolerance used in checking for singularities.
Minimum Stepped Effect in Model. When using the backward elimination or backward stepwise methods, this specifies the minimum number of terms to include in the model. The intercept is not counted as a model term.
Maximum Stepped Effect in Model. When using the forward entry or forward
stepwise methods, this specifies the maximum number of terms to include in the model. The intercept is not counted as a model term.
Hierarchically constrain entry and removal of terms. This option allows you to
choose whether to place restrictions on the inclusion of model terms. Hierarchy requires that for any term to be included, all lower order terms that are a part of the term to be included must be in the model first. For example, if the hierarchy requirement is in effect, the factors Marital status and Gender must both be in the model before the Marital Status*Gender interaction can be added. The three radio button options determine the role of covariates in determining hierarchy.
Multinomial Logistic Regression Save
Figure 3-7 Multinomial Logistic Regression Save dialog box
The Save dialog box allows you to save variables to the working file and export model information to an external file.
Saved variables:
Estimated response probabilities. These are the estimated probabilities of classifying a factor/covariate pattern into the response categories. There are as many estimated probabilities as there are categories of the response variable; up to 25 will be saved.
Predicted category. This is the response category with the largest expected
probability for a factor/covariate pattern.
Predicted category probabilities. This is the maximum of the estimated response probabilities.
Actual category probability. This is the estimated probability of classifying a
factor/covariate pattern into the observed category.
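The saved-variable definitions above amount to simple operations on the vector of estimated response probabilities. A sketch with hypothetical values:

```python
# Sketch of the saved-variable definitions for one covariate pattern.
# The probability values and observed category are hypothetical.
est_probs = [0.15, 0.60, 0.25]   # estimated response probabilities
observed_category = 2            # category actually observed for this case

# Predicted category: the response category with the largest probability
predicted_category = max(range(len(est_probs)), key=lambda k: est_probs[k])

# Predicted category probability: the maximum estimated probability
predicted_category_probability = max(est_probs)

# Actual category probability: the probability of the observed category
actual_category_probability = est_probs[observed_category]

print(predicted_category,
      predicted_category_probability,
      actual_category_probability)   # 1 0.6 0.25
```

A large gap between the predicted and actual category probabilities flags cases the model classifies poorly.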
Export model information to XML file. Parameter estimates and (optionally) their covariances are exported to the specified file in XML (PMML) format.
NOMREG Command Additional Features
The SPSS command language also allows you to:
- Specify the reference category of the dependent variable.
- Include cases with user-missing values.
- Customize hypothesis tests by specifying null hypotheses as linear combinations of parameters.
See the SPSS Command Syntax Reference for complete syntax information.
This procedure measures the relationship between the strength of a stimulus and the proportion of cases exhibiting a certain response to the stimulus. It is useful for situations where you have a dichotomous output that is thought to be influenced or caused by levels of some independent variable(s) and is particularly well suited to experimental data. This procedure will allow you to estimate the strength of a stimulus required to induce a certain proportion of responses, such as the median effective dose.
Criteria. Allows you to control parameters of the iterative parameter-estimation
algorithm. You can override the defaults for Maximum iterations, Step limit, and Optimality tolerance.
PROBIT Command Additional Features
The SPSS command language also allows you to:
- Request an analysis on both the probit and logit models.
- Control the treatment of missing values.
- Transform the covariates by bases other than base 10 or natural log.
See the SPSS Command Syntax Reference for complete syntax information.
Nonlinear regression is a method of finding a nonlinear model of the relationship between the dependent variable and a set of independent variables. Unlike traditional linear regression, which is restricted to estimating linear models, nonlinear regression can estimate models with arbitrary relationships between independent and dependent variables. This is accomplished using iterative estimation algorithms. Note that this procedure is not necessary for simple polynomial models of the form Y = A + BX**2. By defining W = X**2, we get a simple linear model, Y = A + BW, which can be estimated using traditional methods such as the Linear Regression procedure.
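The linearization above can be shown concretely. The data below are made up, generated roughly from Y = 1 + 2*X**2, and the slope and intercept come from the standard closed-form OLS formulas:

```python
# Sketch of the linearization: Y = A + B*X**2 becomes the linear model
# Y = A + B*W with W = X**2, estimable by ordinary least squares.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.1, 9.2, 19.0, 33.1, 51.2]   # made-up data, roughly Y = 1 + 2*X**2

ws = [x ** 2 for x in xs]            # the defined variable W

# two-parameter OLS via the usual closed-form slope/intercept formulas
n = len(ws)
w_mean = sum(ws) / n
y_mean = sum(ys) / n
B = (sum((w - w_mean) * (y - y_mean) for w, y in zip(ws, ys))
     / sum((w - w_mean) ** 2 for w in ws))
A = y_mean - B * w_mean
print(round(A, 2), round(B, 2))  # → 1.09 2.0
```

The recovered estimates sit close to the generating values A = 1, B = 2, confirming that no iterative algorithm is needed for models that are linear in their parameters.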
Example. Can population be predicted based on time? A scatterplot shows that there seems to be a strong relationship between population and time, but the relationship is nonlinear, so it requires the special estimation methods of the Nonlinear Regression procedure. By setting up an appropriate equation, such as a logistic population growth model, we can get a good estimate of the model, allowing us to make predictions about population for times that were not actually measured.
Statistics. For each iteration: parameter estimates and residual sum of squares. For each model: sum of squares for regression, residual, uncorrected total, and corrected total; parameter estimates; asymptotic standard errors; and asymptotic correlation matrix of parameter estimates.
Note: Constrained nonlinear regression uses the algorithms proposed and implemented in NPSOL by Gill, Murray, Saunders, and Wright to estimate the model parameters.
Data. The dependent and independent variables should be quantitative. Categorical variables, such as religion, major, or region of residence, need to be recoded to binary (dummy) variables or other types of contrast variables.
Nonlinear Regression Parameters
Figure 5-2 Nonlinear Regression Parameters dialog box
Parameters are the parts of your model that the Nonlinear Regression procedure estimates. Parameters can be additive constants, multiplicative coefficients, exponents, or values used in evaluating functions. All parameters that you have defined will appear (with their initial values) on the Parameters list in the main dialog box.
Name. You must specify a name for each parameter. This name must be a valid SPSS variable name and must be the name used in the model expression in the main dialog box. Starting Value. Allows you to specify a starting value for the parameter, preferably
as close as possible to the expected final solution. Poor starting values can result in failure to converge or in convergence on a solution that is local (rather than global) or is physically impossible.
Use starting values from previous analysis. If you have already run a nonlinear
regression from this dialog box, you can select this option to obtain the initial values of parameters from their values in the previous run. This permits you to continue searching when the algorithm is converging slowly. (The initial starting values will still appear on the Parameters list in the main dialog box.) Note: This selection persists in this dialog box for the rest of your session. If you change the model, be sure to deselect it.
Nonlinear Regression Common Models
The table below provides example model syntax for many published nonlinear regression models. A model selected at random is not likely to fit your data well. Appropriate starting values for the parameters are necessary, and some models require constraints in order to converge.
Table 5-1 Example model syntax
Name                                      Model expression
Asymptotic Regression                     b1 + b2 * exp(-b3 * x)
Asymptotic Regression                     b1 - (b2 * (b3 ** x))
Density                                   (b1 + b2 * x) ** (-1 / b3)
Gauss                                     b1 * (1 - b3 * exp(-b2 * x ** 2))
Gompertz                                  b1 * exp(-b2 * exp(-b3 * x))
Johnson-Schumacher                        b1 * exp(-b2 / (x + b3))
Log-Modified                              (b1 + b3 * x) ** b2
Log-Logistic                              b1 - ln(1 + b2 * exp(-b3 * x))
Metcherlich Law of Diminishing Returns    b1 + b2 * exp(-b3 * x)
Michaelis Menten                          b1 * x / (x + b2)
Morgan-Mercer-Florin                      (b1 * b2 + b3 * x ** b4) / (b2 + x ** b4)
Peal-Reed                                 b1 / (1 + b2 * exp(-(b3 * x + b4 * x ** 2 + b5 * x ** 3)))
Ratio of Cubics                           (b1 + b2 * x + b3 * x ** 2 + b4 * x ** 3) / (b5 * x ** 3)
Ratio of Quadratics                       (b1 + b2 * x + b3 * x ** 2) / (b4 * x ** 2)
Richards                                  b1 / ((1 + b3 * exp(-b2 * x)) ** (1 / b4))
Verhulst                                  b1 / (1 + b3 * exp(-b2 * x))
Von Bertalanffy                           (b1 ** (1 - b4) - b2 * exp(-b3 * x)) ** (1 / (1 - b4))
Weibull                                   b1 - b2 * exp(-b3 * x ** b4)
Yield Density                             (b1 + b2 * x + b3 * x ** 2) ** (-1)
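Before running the procedure, it can help to evaluate a candidate expression at trial starting values to see whether the curve behaves plausibly over the range of the data. A sketch using the Michaelis Menten model, with hypothetical starting values:

```python
# Sketch: evaluating the Michaelis Menten model expression,
# b1 * x / (x + b2), at candidate starting values. The parameter values
# and x grid are hypothetical, chosen only to inspect the curve's shape.
def michaelis_menten(x, b1, b2):
    return b1 * x / (x + b2)

b1, b2 = 10.0, 2.0   # hypothetical starting values
for x in (0.5, 2.0, 8.0, 50.0):
    print(x, round(michaelis_menten(x, b1, b2), 3))
# the curve rises toward its asymptote b1 as x grows
```

Here b1 is the asymptote and b2 the value of x at which the curve reaches half of it; if the trial curve is nowhere near the observed data, the algorithm is unlikely to converge from those starting values.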
Predicted Values. Saves predicted values with the variable name pred_.
Residuals. Saves residuals with the variable name resid.
Derivatives. One derivative is saved for each model parameter. Derivative names are created by prefixing 'd.' to the first six characters of the parameter names.
Loss Function Values. This option is available if you specify your own loss
function. The variable name loss_ is assigned to the values of the loss function.
Nonlinear Regression Options
Figure 5-6 Nonlinear Regression Options dialog box
Options allow you to control various aspects of your nonlinear regression analysis:
Bootstrap Estimates. A method of estimating the standard error of a statistic using repeated samples from the original data set. This is done by sampling (with replacement) to get many samples of the same size as the original data set. The nonlinear equation is estimated for each of these samples. The standard error of each parameter estimate is then calculated as the standard deviation of the bootstrapped estimates. Parameter values from the original data are used as starting values for each bootstrap sample. This method requires the sequential quadratic programming algorithm.
Estimation Method. Allows you to select an estimation method, if possible. (Certain choices in this or other dialog boxes require the sequential quadratic programming algorithm.) Available alternatives include Sequential quadratic programming and Levenberg-Marquardt.
Sequential Quadratic Programming. This method is available for constrained and
unconstrained models. Sequential quadratic programming is used automatically if you specify a constrained model, a user-defined loss function, or bootstrapping. You can enter new values for Maximum iterations and Step limit, and you can change the selection in the drop-down lists for Optimality tolerance, Function precision, and Infinite step size.
Levenberg-Marquardt. This is the default algorithm for unconstrained models.
The Levenberg-Marquardt method is not available if you specify a constrained model, a user-defined loss function, or bootstrapping. You can enter new values for Maximum iterations, and you can change the selection in the drop-down lists for Sum-of-squares convergence and Parameter convergence.
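The bootstrap procedure described under Bootstrap Estimates can be sketched with a simpler statistic. Here the sample mean stands in for a nonlinear-regression parameter estimate, since refitting a nonlinear model per resample would obscure the resampling idea:

```python
import random
import statistics

# Sketch of the bootstrap standard-error idea: resample with replacement,
# re-estimate, and take the standard deviation of the estimates.
# The data values are made up; the mean stands in for a model parameter.
random.seed(42)
data = [4.1, 5.0, 3.8, 6.2, 5.5, 4.9, 5.8, 4.4, 5.1, 6.0]

def bootstrap_se(sample, estimator, n_boot=1000):
    """Std. deviation of the estimator over resamples drawn with replacement."""
    estimates = []
    for _ in range(n_boot):
        resample = [random.choice(sample) for _ in sample]
        estimates.append(estimator(resample))
    return statistics.stdev(estimates)

se = bootstrap_se(data, statistics.mean)
print(round(se, 3))  # close to the analytic SE of the mean, stdev / sqrt(n)
```

In the nonlinear-regression setting, `estimator` would refit the model to each resample (starting from the original parameter values) and return one parameter estimate, which is why the sequential quadratic programming algorithm is required.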
Interpreting Nonlinear Regression Results
Nonlinear regression problems often present computational difficulties:
- The choice of initial values for the parameters influences convergence. Try to choose initial values that are reasonable and, if possible, close to the expected final solution.
- Sometimes one algorithm performs better than the other on a particular problem. In the Options dialog box, select the other algorithm if it is available. (If you specify a loss function or certain types of constraints, you cannot use the Levenberg-Marquardt algorithm.)
Weight Estimation Options
Figure 6-2 Weight Estimation Options dialog box
You can specify options for your weight estimation analysis:
Save best weight as new variable. Adds the weight variable to the active file. This variable is called WGT_n, where n is a number chosen to give the variable a unique name.
Display ANOVA and Estimates. Allows you to control how statistics are displayed in the output. Available alternatives are For best power and For each power value.
WLS Command Additional Features
The SPSS command language also allows you to:
- Provide a single value for the power.
- Specify a list of power values, or mix a range of values with a list of values for the power.
See the SPSS Command Syntax Reference for complete syntax information.
Two-Stage Least-Squares Regression
Standard linear regression models assume that errors in the dependent variable are uncorrelated with the independent variable(s). When this is not the case (for example, when relationships between variables are bidirectional), linear regression using ordinary least squares (OLS) no longer provides optimal model estimates. Two-stage least-squares regression uses instrumental variables that are uncorrelated with the error terms to compute estimated values of the problematic predictor(s) (the first stage), and then uses those computed values to estimate a linear regression model of the dependent variable (the second stage). Since the computed values are based on variables that are uncorrelated with the errors, the results of the two-stage model are optimal.
Example. Is the demand for a commodity related to its price and consumers' incomes? The difficulty in this model is that price and demand have a reciprocal effect on each other. That is, price can influence demand, and demand can also influence price. A two-stage least-squares regression model might use consumers' incomes and lagged price to calculate a proxy for price that is uncorrelated with the measurement errors in demand. This proxy is substituted for price itself in the originally specified model, which is then estimated.
Statistics. For each model: standardized and unstandardized regression coefficients, multiple R, R², adjusted R², standard error of the estimate, analysis-of-variance table, predicted values, and residuals. Also, 95% confidence intervals for each regression coefficient, and correlation and covariance matrices of parameter estimates.
Data. The dependent and independent variables should be quantitative. Categorical variables, such as religion, major, or region of residence, need to be recoded to binary (dummy) variables or other types of contrast variables. Endogenous explanatory variables should be quantitative (not categorical).
Assumptions. For each value of the independent variable, the distribution of the dependent variable must be normal. The variance of the distribution of the dependent variable should be constant for all values of the independent variable. The relationship between the dependent variable and each independent variable should be linear.
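The two stages described above can be sketched in NumPy (an illustrative example with synthetic data; the variable names and coefficients are invented, not taken from the manual's demand example). The endogenous predictor is deliberately constructed to share variation with the error term, so plain OLS is biased while the two-stage estimate is not:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
z = rng.normal(size=n)                 # instrument: related to price, not to e
e = rng.normal(size=n)                 # error term in the demand equation
price = 0.8 * z + 0.6 * e + rng.normal(size=n)  # endogenous: correlated with e
demand = 1.0 - 2.0 * price + e         # true price coefficient: -2.0

def ols(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

# Stage 1: regress the endogenous predictor on the instrument.
Z = np.column_stack([np.ones(n), z])
price_hat = Z @ ols(Z, price)

# Stage 2: regress the dependent variable on the stage-1 fitted values.
X2 = np.column_stack([np.ones(n), price_hat])
beta_2sls = ols(X2, demand)

# For contrast: OLS on the raw endogenous predictor is biased.
beta_ols = ols(np.column_stack([np.ones(n), price]), demand)
```

Because `price_hat` is built only from the instrument, it is uncorrelated with `e`, which is exactly the property the text says makes the two-stage estimates optimal.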
Categorical Variable Coding Schemes
which you specify by means of the following CONTRAST subcommand for MANOVA, LOGISTIC REGRESSION, and COXREG:
/CONTRAST(TREATMNT)=SPECIAL( 1  1  1  1
                             3 -1 -1 -1
                             0  2 -1 -1
                             0  0  1 -1 )
For LOGLINEAR, you need to specify:
/CONTRAST(TREATMNT)=BASIS SPECIAL( 1  1  1  1
                                   3 -1 -1 -1
                                   0  2 -1 -1
                                   0  0  1 -1 )
Each row except the means row sums to 0. Products of each pair of disjoint rows sum to 0 as well:
Rows 2 and 3: (3)(0) + (–1)(2) + (–1)(–1) + (–1)(–1) = 0
Rows 2 and 4: (3)(0) + (–1)(0) + (–1)(1) + (–1)(–1) = 0
Rows 3 and 4: (0)(0) + (2)(0) + (–1)(1) + (–1)(–1) = 0
The special contrasts need not be orthogonal. However, they must not be linear combinations of each other. If they are, the procedure reports the linear dependency and ceases processing. Helmert, difference, and polynomial contrasts are all orthogonal contrasts.
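As a quick numeric check (an illustrative NumPy sketch, not part of the SPSS procedure), the two conditions discussed in this section — contrast rows summing to 0, and pairwise orthogonality of the non-means rows — can be verified directly on the special contrast matrix:

```python
import numpy as np

contrast = np.array([
    [1,  1,  1,  1],   # means row
    [3, -1, -1, -1],
    [0,  2, -1, -1],
    [0,  0,  1, -1],
])

row_sums = contrast[1:].sum(axis=1)           # each contrast row should sum to 0
pair_dots = [int(contrast[i] @ contrast[j])   # rows 2-4 should be pairwise orthogonal
             for i in range(1, 4) for j in range(i + 1, 4)]
```

A linearly dependent set of rows would instead make the matrix singular, which is the condition under which the procedure reports a linear dependency and stops.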
Indicator variable coding. Also known as dummy coding, this is not available in LOGLINEAR or MANOVA. The number of new variables coded is k–1. Cases in the reference category are coded 0 for all k–1 variables. A case in the ith category is coded 0 for all indicator variables except the ith, which is coded 1.
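The indicator coding rule can be sketched in a few lines of NumPy (the category codes and the choice of reference level are hypothetical; this is not SPSS output). Each case gets k–1 columns, with the reference category coded all zeros:

```python
import numpy as np

codes = np.array([1, 2, 3, 1, 4, 2])      # hypothetical category codes, k = 4
levels = np.array([1, 2, 3, 4])
reference = levels[-1]                    # treat the last category as the reference
non_ref = levels[levels != reference]
# Compare each case against each non-reference level: k-1 indicator columns.
dummies = (codes[:, None] == non_ref[None, :]).astype(int)
```

A case in category 4 (the reference) produces the row [0, 0, 0], while a case in category 1 produces [1, 0, 0], matching the coding rule stated above.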
Index
asymptotic regression in Nonlinear Regression, 35
backward elimination in Logistic Regression, 6
binary logistic regression, 1
categorical covariates, 7
cell probabilities tables in Multinomial Logistic Regression, 18
cells with zero observations in Multinomial Logistic Regression, 20
classification in Multinomial Logistic Regression, 13
classification tables in Multinomial Logistic Regression, 18
confidence intervals in Multinomial Logistic Regression, 18
constant term in Linear Regression, 10
constrained regression in Nonlinear Regression, 37
contrasts in Logistic Regression, 7
convergence criterion in Multinomial Logistic Regression, 20
Cook's D in Logistic Regression, 9
correlation matrix in Multinomial Logistic Regression, 18
covariance matrix in Multinomial Logistic Regression, 18
covariates in Logistic Regression, 7
Cox and Snell R-square in Multinomial Logistic Regression, 18
custom models in Multinomial Logistic Regression, 15
delta as correction for cells with zero observations, 20
density model in Nonlinear Regression, 35
deviance function for estimating dispersion scaling value, 21
DfBeta in Logistic Regression, 9
dispersion scaling value in Multinomial Logistic Regression, 21
fiducial confidence intervals in Probit Analysis, 28
forward selection in Logistic Regression, 6
full factorial models in Multinomial Logistic Regression, 15
Gauss model in Nonlinear Regression, 35
Gompertz model in Nonlinear Regression, 35
goodness of fit in Multinomial Logistic Regression, 18