Most behavioural surveys are scored "linearly", without taking the correlation of answers into account. However, given modern machine learning, we should be able to do much better.
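A minimal sketch of the point, using made-up responses and illustrative (not learned) weights: two respondents with very different answer patterns get identical linear sums, which is exactly the structure a model that accounts for how items relate could exploit.

```python
# Hypothetical survey responses on a 1-5 scale (illustrative data only).
responses_a = [5, 5, 1, 1]  # strong on items 1-2, weak on items 3-4
responses_b = [3, 3, 3, 3]  # uniformly middling

# "Linear" scoring: a plain sum of the item scores.
linear_a = sum(responses_a)
linear_b = sum(responses_b)
print(linear_a, linear_b)  # both 12: linear scoring cannot tell them apart

# One correlation-aware alternative: weight items before summing.
# These weights are illustrative; in practice a model would fit them
# from the data against some outcome of interest.
weights = [0.4, 0.4, 0.1, 0.1]
weighted_a = sum(w * r for w, r in zip(weights, responses_a))
weighted_b = sum(w * r for w, r in zip(weights, responses_b))
print(weighted_a, weighted_b)  # 4.2 vs 3.0: the patterns now separate
```

This is only the simplest departure from linear scoring; as the reply below notes, where the weights come from is its own problem.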
Excellent, Karthik. Put differently, this is also a problem with many "average" measures. (The average temperature of coffee drunk would come out as lukewarm to cool, for example, to parody the issue.) Agree that ML would be a better way; but these models tend to learn to weight the answers, don't they, to correlate with a desired outcome? Those come with their own issues. Also, how do you get an out-of-model "objective" measure of, say, empathy that you can train the model on?
Very recently I took part in a pitch for an IIM which used a weighted formula to pick winners (an L1 (costs) / T1 (technical ability) sort of thing, or analogous to it). We came second, narrowly: 79 to 82 out of 100. Now here's the thing: we were about 4x the cost (bad) and thus obviously far better on technical quality. What sort of decision-making would come into play then? The decision makers could plead allegiance to the rules, or some notion of objectivity or transparency given the score difference, however thin. But to dramatise the issue: what if we had tied at 80 out of 100 each...?
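For readers unfamiliar with this kind of procurement formula, here is a sketch of how an L1/T1-style composite score typically works. The weights, technical marks, and costs below are all invented for illustration (the actual formula and numbers from the pitch were not disclosed); the point is only to show how a much costlier but technically stronger bid can land within a whisker of a cheaper one.

```python
def composite_score(tech_score, cost, lowest_cost,
                    tech_weight=0.8, cost_weight=0.2):
    """Typical L1/T1-style formula: the lowest bidder gets full cost
    marks; every other bid's cost marks are scaled by lowest_cost / cost.
    Weights here are illustrative assumptions, not the IIM's actual ones."""
    cost_marks = 100 * lowest_cost / cost
    return tech_weight * tech_score + cost_weight * cost_marks

# Two hypothetical bidders: "ours" is 4x the cost but stronger technically.
ours = composite_score(tech_score=96, cost=4.0, lowest_cost=1.0)    # 81.8
theirs = composite_score(tech_score=77, cost=1.0, lowest_cost=1.0)  # 81.6

print(round(ours, 1), round(theirs, 1))
```

With these made-up numbers the bids land 0.2 points apart, which is exactly the near-tie scenario that makes "allegiance to the formula" uncomfortable as a decision rule.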
Reach out to AxisMyIndia, CSDS, and CVoter and offer to do some work for them for free that they can take credit for. Next time, insist they share the credit. Avoid going on TV yourself if you don't want to; you can do webinars/podcasts etc. It will be fantastic marketing for Babbage.
If and only if it is relevant to what we're building, and it doesn't seem so right now. Don't want to distract till end of 2026 at least.