Tools automate activities, removing them from direct human control. This can be dangerous: if a tool makes an unsound decision, the automation abstracts away the human's ability to catch and react to that error.
Tools should be written so that every automated decision is guaranteed not to produce a false negative in smart contract code that will be deployed to a production environment. In short, all tools should err on the side of caution.
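The fail-safe principle above can be sketched in code. The following is a minimal illustration (the function, names, and three-state result are hypothetical, not any particular tool's API): when an analysis cannot prove a property, the tool reports a warning for human review rather than silently passing, so an incomplete automated decision never becomes a false negative.

```python
from enum import Enum

class Finding(Enum):
    SAFE = "safe"
    WARNING = "inconclusive - needs human review"
    VULNERABLE = "vulnerable"

def check_property(violation_found: bool, analysis_complete: bool) -> Finding:
    """Hypothetical checker illustrating erring on the side of caution.

    If the analysis did not run to completion, we cannot prove safety,
    so we report WARNING instead of SAFE. A false positive costs a human
    a review; a false negative could ship a bug to production.
    """
    if not analysis_complete:
        return Finding.WARNING  # inconclusive analysis is never reported as safe
    return Finding.VULNERABLE if violation_found else Finding.SAFE
```

Note that `SAFE` is only ever returned when the analysis both completed and found no violation; every other path surfaces the issue to a human.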
Copyright and related rights waived via CC0