I was not super sure about normalizing the data w.r.t. the L1 and L2 norm (`normalize = sklearn.preprocessing.Normalizer(norm='l1')`).

The output is x_i / \|x\| for each i \in \{1, ..., d\} (with the L1 norm being the sum of absolute values and the L2 norm being the Euclidean length), so normalizing is the same as scaling each sample by 1 / \|x\|. If this were a single global scalar, it would have the same effect as scaling: it would not change the statistical properties or the runtime, i.e. we keep it if it helps and remove it if it doesn't. But the problem I see here is that each data point is scaled by a different value; it is not the same scalar that multiplies every x \in \mathcal{X}. Hence, do the regularization effects apply as given below?
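A small sketch of the per-sample scaling in question (the data matrix here is made up for illustration): `Normalizer` divides each *row* by that row's own norm, so two samples pointing in the same direction but with different magnitudes collapse to the same point, which a single global scalar could never do.

```python
import numpy as np
from sklearn.preprocessing import Normalizer

# Two samples in the same direction but with different magnitudes.
X = np.array([[1.0, 3.0],
              [2.0, 6.0]])

# Each row is divided by its own L1 norm (sum of absolute values)...
X_l1 = Normalizer(norm='l1').fit_transform(X)
# ...or by its own L2 norm (Euclidean length).
X_l2 = Normalizer(norm='l2').fit_transform(X)

print(X_l1)  # both rows become [0.25, 0.75]
print(X_l2)  # both rows become the same unit vector

# The scaling factor differs per sample: 1/4 for the first row,
# 1/8 for the second (L1 case) -- not one scalar for all of X.
```

Since both rows map to the same output, the transformation discards magnitude information per sample, which is exactly why it is not equivalent to multiplying the whole dataset by one constant.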
My understanding is that the image above holds true; I just wanted to check whether it is correct or incorrect. Thank you!