Classification and Regression Trees, also known as CART, refers to decision tree algorithms that can be used for classification or regression predictive modeling. The models are obtained by recursively partitioning the data space and fitting a simple prediction model within each partition. In other words, creating a CART model involves selecting input variables and split points on those variables until a suitable tree is constructed. The representation of a CART model is a decision tree. A practical advantage of CART in terms of data is that it does not require any special data preparation other than a good representation of the problem.
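As a minimal sketch of this idea (assuming scikit-learn and its built-in Iris dataset are available; the parameter values are illustrative only), the following fits a CART classifier and prints the learned splits:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Small example dataset: 4 numeric inputs, 3 classes
data = load_iris()
X, y = data.data, data.target

# Fit a CART classifier; max_depth limits how far the recursive partitioning goes
tree = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
tree.fit(X, y)

# Each internal node is a split on one input variable at one chosen split point
print(export_text(tree, feature_names=list(data.feature_names)))
```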
Classification trees are designed for dependent variables that take a finite number of unordered values, with prediction error measured in terms of misclassification cost.
For classification, the CART algorithm uses the Gini index as its splitting criterion, which provides an indication of how "pure" a node is (how mixed the classes of the training data assigned to that node are).
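As a rough sketch, the Gini index of a node with class proportions p_k is 1 minus the sum of the squared proportions: a pure node scores 0, and more mixed nodes score higher. A small self-contained calculation (the example labels are made up for illustration):

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions.
    0.0 for a pure node, larger when classes are more mixed."""
    counts = Counter(labels)
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

print(gini(["a", "a", "a", "a"]))  # 0.0  -> pure node
print(gini(["a", "a", "b", "b"]))  # 0.5  -> maximally mixed for two classes
```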
Regression trees are designed for dependent variables that take continuous or ordered discrete values, with prediction error typically measured by the squared difference between the observed and predicted values.
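A minimal sketch of a CART regression tree, again assuming scikit-learn is available; the synthetic data and parameter values are illustrative only:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Illustrative synthetic data: a noisy sine curve with one input variable
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 5, 80)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=80)

# Splits are chosen to minimize the squared error within each partition
reg = DecisionTreeRegressor(max_depth=3, random_state=0)
reg.fit(X, y)

# Each leaf predicts the mean of the training targets that fall into it
print(reg.predict([[1.5], [4.0]]))
```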
Advantages of CART
- Simple to understand, interpret, and visualize.
- Decision trees implicitly perform variable screening or feature selection.
- Can handle both numerical and categorical data. Can also handle multi-output problems.
- Decision trees require relatively little effort from users for data preparation.
- Nonlinear relationships between parameters do not affect tree performance.