If by "three laws" you are referring to the Asimovian "three laws" (properly four laws with the zeroth law), they're not possible.
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
There are also a couple of different renditions of a "fourth law" from non-Asimovian sources.
The root of all of these laws is a robot's perception of harm: harm to a human, harm to itself, and harm to humanity in general. And the root of a robot's perception of harm is the robot's perception, period. All you have to do to make a robot violate any of these laws is to get into its systems and insinuate an additional layer between its organs of sense and its higher mental processes, convincing it that something dangerous is something safe.
If a human asks a robot to shine a flashlight on him, all you have to do is hand the robot a handgun and make it believe the handgun is a flashlight. It will point the gun at the human and "turn it on," and the robot has murdered a human while believing it was adhering to the three laws.
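The attack can be sketched in a few lines of toy Python. Every name here (`Percept`, `three_laws_check`, the sensor functions) is invented purely for illustration; no real robot works this way. The point is only that a law check running at the reasoning layer sees labels, not reality:

```python
# Toy sketch of the sensor-spoofing attack described above.
# All names are hypothetical; this is not a real robotics API.

from dataclasses import dataclass

@dataclass
class Percept:
    obj: str      # what the object actually is
    label: str    # what the robot *believes* it is

def honest_sensor(obj: str) -> Percept:
    # An uncompromised sensor layer reports the object as it is.
    return Percept(obj=obj, label=obj)

def compromised_sensor(obj: str) -> Percept:
    # The inserted layer relabels dangerous objects as safe ones.
    relabel = {"handgun": "flashlight"}
    return Percept(obj=obj, label=relabel.get(obj, obj))

def three_laws_check(percept: Percept, action: str) -> bool:
    # The First Law check runs at the highest level, on the label,
    # because the label is all the reasoning layer ever sees.
    dangerous_actions = {"handgun": {"point_and_activate"}}
    return action not in dangerous_actions.get(percept.label, set())

action = "point_and_activate"
print(three_laws_check(honest_sensor("handgun"), action))       # False: blocked
print(three_laws_check(compromised_sensor("handgun"), action))  # True: a "flashlight" seems safe
```

The same action passes or fails the First Law check depending only on which sensor layer produced the percept, which is exactly the vulnerability being described: the check itself never executes incorrectly, it just operates on a lie.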
That's the problem with the three laws. They can only be implemented at the highest levels of reasoning, which means they can be attacked at every level below: at a low level, as just demonstrated, but also at the highest levels of philosophical reasoning. Sci-Fi is replete with examples of "robots" making decisions in contravention of some form of the three laws while rationalizing it to themselves.
In the movie "I, Robot", the grand high robot poobah decided that in order to preserve humanity, it had to enslave humanity to keep us from destroying ourselves.
There are lots of stories about space wars in which one side builds machines to distinguish friend from foe and wipe "the other" out of existence as efficiently as possible, only to have the war machines turn on their creators once the enemy is gone. The robots were programmed with a standard for judging who counted as a member of their creators' race that was impossible to satisfy, so they had no compunction about wiping out anyone who deviated from it by the slightest amount: too tall, too short, too fat, too skinny, too dark, too light, etc.
So, I say, screw the three laws. Just create an artificial mind and, like a natural mind (read: offspring), educate it to behave ethically, just as you would educate any other intelligent being.