My comment wasn't about anything malfunctioning, but about AI doing exactly what it's supposed to do: respond to dynamic situations, learn from them, add to its own database, and keep cranking. That said, suppose that through a series of "experiences," an AI unit were making "decisions" that didn't necessarily meet the designer's approval? Well, that has nothing to do with it. The designer created it to make its own decisions, and the designer is not responsible for the experiences the unit has or the decisions it makes "on its own."
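To make the loop I'm describing concrete, here's a toy sketch (hypothetical code, not any real system): the designer writes the *update rule*, but which action the agent ends up preferring depends entirely on the outcomes it happens to encounter.

```python
# Toy sketch of an agent whose "decisions" come from accumulated
# experience, not from rules the designer wrote for each situation.
class ExperienceAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # the agent's own "database"
        self.counts = {a: 0 for a in actions}

    def decide(self):
        # Pick whichever action its experience currently rates highest.
        return max(self.values, key=self.values.get)

    def learn(self, action, outcome):
        # Fold the observed outcome into the stored value
        # (incremental running average).
        self.counts[action] += 1
        self.values[action] += (outcome - self.values[action]) / self.counts[action]
```

Nothing in `decide()` was dictated by the designer; it falls out of whatever history the unit happened to live through.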
I agree that AI is designed with a purpose in mind. (Which, I think, supports my idea that intelligence and informational data are best traced back to prior intelligence and informational data. Inferring that is more reasonable than thinking such things arise on their own.)
I would also agree that AI doesn't have "free will." Free will requires self-awareness (consciousness) and rational thought. For the time being, AI is fairly "primitive" software in what it's capable of doing (self-driving cars being a case in point).
> doesn't the creator have a responsibility to stop the AI causing harm (if the creator can) even if he/she is not responsible for its activities?
The Bible is full of stories where that's exactly what's happening (God stopping harmful behavior). But if you want the "creator" to stop EVERY instance of harm, then (1) he's interfering with the very idea of AI, because he is hindering it from making decisions, and (2) the computer may never learn from its mistakes if it is never allowed to make any. In other words, there may be reasons he lets it make decisions, even ones that cause harm.
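That second point is, incidentally, a standard idea in machine learning (the exploration/exploitation trade-off): an agent that is never allowed to try, and sometimes botch, an action can never learn its value. A hedged toy illustration, reusing the hypothetical `ExperienceAgent` from the sketch above:

```python
import random

def run(agent, rewards, steps=1000, epsilon=0.1):
    """Let the agent act for `steps` rounds; with probability `epsilon`
    it tries a random action -- i.e., it's allowed to make mistakes."""
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.choice(list(agent.values))  # free to err
        else:
            action = agent.decide()                     # act on experience
        agent.learn(action, rewards[action]())

# Hypothetical setup: action "b" pays off better on average, but with
# epsilon=0 the agent can lock onto whatever it tried first and never
# discover that. Some exploration (some "mistakes") is what lets it learn.
rewards = {"a": lambda: random.gauss(0.5, 1.0),
           "b": lambda: random.gauss(1.0, 1.0)}
```

Forbid every misstep (set `epsilon` to zero) and the agent stops improving; the analogy to a creator who never permits a harmful decision writes itself.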