@laughoutlood,
The Three Laws of Robotics, by science fiction author Isaac Asimov, are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I had always thought the meaning of the above was straightforward. Now, it seems, the Laws can only be understood after deep and meaningful arguments among linguistic philosophers. What rubbish!
Asimov's robots were constructed with advanced (but not perfect) cognitive functions and were capable of logical decisions; ethical decisions were a lot harder for them. They were also aware of their own individuality.
Laws are supposedly written in such a way as to avoid ambiguity.
For this reason, taken in context, I believe the 'it' was necessary. A situation could arise where the 'human beings' in question are giving orders to another robot, or to another human for that matter, and the individual robot would need help in deciding whether those orders applied to it.
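Purely as an illustration (nothing below comes from Asimov; the names and structure are my own invention), here is a minimal sketch of the check that the word 'it' encodes: before the Second Law obliges a robot to obey, the robot must first decide whether it is even the addressee of the order.

```python
from dataclasses import dataclass

@dataclass
class Order:
    speaker: str    # who issued the order; assumed here to be a human being
    addressee: str  # who the order is directed at (could be a robot or a human)
    command: str

class Robot:
    def __init__(self, name: str):
        # the robot's awareness of its own individuality
        self.name = name

    def should_obey(self, order: Order) -> bool:
        # The Second Law binds this robot only for orders given *to it*.
        # An order addressed to another robot, or to a human, creates no
        # obligation for this robot. (First Law conflict checks omitted.)
        return order.addressee == self.name

robot = Robot("R2")
print(robot.should_obey(Order("Dr. Calvin", "R2", "open the airlock")))  # True
print(robot.should_obey(Order("Dr. Calvin", "R3", "open the airlock")))  # False
```

Drop that addressee check and every robot in earshot would be bound by every order any human gives to anyone, which is exactly the ambiguity the 'it' rules out.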