This article is a great example of how humans misunderstand “robots”. The European Union proposes to treat robots as individual objects with certain rights and responsibilities, subject to controls like “kill switches”.
This completely misunderstands what a robot is. A robot is just one face of a huge network; it doesn’t act as an individual object. If you turn one off, the robot beside it simply picks up where it left off.
Computer networks do NOT think like humans. They work with detail, and with scale. They remember detail, they don’t forget it, and they share vast amounts of it. This sharing and processing happens at a scale I can only describe as “inhuman”. A human security guard can watch about six cameras attentively and learn a lot from them. A computer can recognize only a limited set of things, and understands few of them; but it can do that across a million cameras at the same time, and it can follow a million people as they move between those cameras. People tend to forget this is happening because the scale is so inhuman.

Applied at the micro scale, this means a computerized chess player often acts as a search engine, matching game positions against actual historical games or against games it played with itself in simulation. Applied at the company scale, it means computers will probably be terrible at managing small companies, but better than humans at managing large companies (or large software projects, as I have already demonstrated), with their masses of detail and interlinked work optimizations.
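The “chess player as search engine” idea can be made concrete with a minimal sketch: index past games by position, then answer “what do I play here?” by lookup rather than by reasoning. The positions and games below are made-up placeholders, not real chess data, and `suggest_move` is a hypothetical helper name; real engines combine such lookups (opening books, tablebases) with actual search.

```python
from collections import Counter, defaultdict

# Each "game" is a sequence of (position, move played from that position).
# Positions are stand-in strings here; a real system would hash full board states.
historical_games = [
    [("start", "e4"), ("pos_a", "e5"), ("pos_b", "Nf3")],
    [("start", "e4"), ("pos_a", "c5"), ("pos_c", "Nf3")],
    [("start", "d4"), ("pos_d", "d5")],
]

# Build the index: position -> list of moves seen from that position.
index = defaultdict(list)
for game in historical_games:
    for position, move in game:
        index[position].append(move)

def suggest_move(position):
    """Return the most frequently played historical move, or None if unseen."""
    moves = index.get(position)
    if not moves:
        return None  # no historical match: a real engine would fall back to search
    return Counter(moves).most_common(1)[0][0]

print(suggest_move("start"))    # "e4": played in 2 of the 3 games
print(suggest_move("unknown"))  # None: position never seen before
```

The point of the sketch is the ratio: the “intelligence” is mostly in the size of the index, not in the lookup logic, which matches the article’s claim that the machine’s advantage is inhuman scale rather than human-like thought.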
On the positive side, humans can’t be held accountable for system effects: I don’t run over a kid, but I turn into traffic and cause a bus crash; or I save a bank, but kill an economy. Since computers think in systems rather than as individuals, computers could actually be held responsible for the complete system.