Day 3
Let's start with Random Forest:
1. It combines the output of multiple decision trees to reach a single result.
2. It handles both regression and classification problems, so we won't run into the problems we encountered with Ordinary Least Squares.
3. It is made of many decision trees, but I am yet to learn decision trees. Let's step back and learn decision trees first.

Decision Trees:
1. Similar to Random Forest, a decision tree can handle both regression and classification.

Let's dive into some math before we start:
1. Entropy: Measures the impurity or disorder of a set of data. High entropy means the data is more mixed up (e.g., equal numbers of different classes), while low entropy means it is more pure (mostly one class).
2. Information Gain: The decrease in entropy achieved by splitting the data on a particular attribute. A decision tree splits on the attribute that gives the highest information gain, as this leads to the most informative splits.

Formula for Entropy: Entropy(S) = - Σ p_i log2(p_i), where p_i is the proportion of examples in S that belong to class i.
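To make the entropy and information-gain definitions concrete, here is a minimal sketch in plain Python (no libraries beyond the standard library; the function names `entropy` and `information_gain` are just illustrative, not from any particular package):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (base 2) of a list of class labels."""
    total = len(labels)
    counts = Counter(labels)
    # Sum of -p_i * log2(p_i) over each class i present in the data.
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

def information_gain(parent, splits):
    """Decrease in entropy from splitting `parent` into the subsets in `splits`."""
    total = len(parent)
    # Child entropies are weighted by the fraction of examples in each subset.
    weighted_child_entropy = sum(len(s) / total * entropy(s) for s in splits)
    return entropy(parent) - weighted_child_entropy

# A pure set has entropy 0; a 50/50 mix of two classes has entropy 1 bit.
print(entropy(["a", "a", "a", "a"]))  # 0.0
print(entropy(["a", "a", "b", "b"]))  # 1.0

# A split that perfectly separates the two classes recovers the full 1 bit.
parent = ["a", "a", "b", "b"]
print(information_gain(parent, [["a", "a"], ["b", "b"]]))  # 1.0
```

A decision tree learner evaluates candidate splits with exactly this kind of calculation and picks the attribute whose split yields the highest gain.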