Samsung's coolest gadget yet

And, of course, as we know from ED-209, an automated weapon system that can audibly warn people is entirely safe... ;)
 
Vocal warnings only work if the trespasser understands English and is not deaf. It will be a VERY short time before a deaf person or a foreigner gets shot because he or she did not hear or did not understand the warning.
 
If you saw one of those things dotting your country's border... would you be dumb enough to stick around and test its perimeter? The US should buy a few hundred of them and stick them on the border with Mexico...
 
That looks a lot like all those auto-turrets from games such as Half-Life, Metal Gear Solid and so on.
 
Strictly speaking, nothing can go wrong - an automated turret will always do precisely what it's programmed to do (well, unless its operating system goes screwy or something - but that's something for film directors to worry about). If it kills innocent people, that's not because something went wrong, but because it worked exactly as it was designed to. Automated turrets are no different to normal guns in this regard - the only difference is that a programmer decides, in advance, when the gun is fired, rather than a soldier.
 
I've seen software that can recognise human shapes, so it won't fire at animals - just people and possibly Yetis. This is an interesting idea, though; sentry drones have pretty much existed only in sci-fi so far.
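For the curious: human-shape recognition like that is off-the-shelf these days. Here's a minimal sketch using OpenCV's stock HOG person detector - my choice of library for illustration, since nothing here says what Samsung actually uses:

import cv2
import numpy as np

# OpenCV ships a HOG descriptor with a pre-trained person-detection SVM.
# It detects upright human shapes, which is why animals are mostly ignored.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("camera_frame.jpg")  # stand-in for a live camera feed

# Returns bounding boxes and confidence weights for each detected person.
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))

for (x, y, w, h), score in zip(boxes, np.ravel(weights)):
    print(f"possible person at ({x}, {y}), size {w}x{h}, score {score:.2f}")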
 
I don't think it's that simple. Guns don't kill people, but automated guns kill people. A programmer may decide that the turret should fire at armed individuals, or at individuals who don't surrender after a warning. He sets all sorts of nice parameters, high-grade pattern recognition software and lots of top-notch fail-safes. However, the software might fail to recognize such situations for whatever reason, as in RoboCop, and gun down someone it was not supposed to. It won't be a programmer making a conscious decision, but the AI failing to behave as expected. That's very troubling, assuming the turret is completely automated and can shoot people without any human interference. If, for example, there's a security team monitoring the turrets that has to give them a go - even if they do fire automatically from that point on - then there was still someone making the decision.

Otherwise, if an automated turret guns down an innocent person, who goes to jail? The manufacturer of the hardware? The maker of the software? The guy who forgot to upgrade the firmware? The reseller of water pistols that look too much like guns? The owner of the turret? Bob from accounting?

This might work fine in war or shoot-on-sight situations, but for everything else it's absurd to leave this kind of decision-making to an AI.
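To make the "security team gives them a go" idea concrete, here's a rough sketch of what such a control loop could look like - hypothetical names throughout, nothing from the actual product. The turret detects and warns on its own, but firing always requires a human sign-off:

import time
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    WARNING = auto()
    AWAITING_AUTHORIZATION = auto()
    WEAPONS_FREE = auto()

class SentryController:
    """Hypothetical control loop: detection and warnings are automatic,
    but the transition to firing always passes through a human."""

    GRACE_PERIOD = 10.0  # seconds the intruder gets to comply

    def __init__(self, console):
        self.state = State.IDLE
        self.console = console   # assumed link to a human operator team
        self.warned_at = None

    def on_person_detected(self):
        if self.state is State.IDLE:
            self.state = State.WARNING
            self.warned_at = time.monotonic()
            print("Playing audible warning")

    def tick(self):
        if self.state is State.WARNING:
            if time.monotonic() - self.warned_at > self.GRACE_PERIOD:
                # Escalate to a person instead of firing automatically.
                self.state = State.AWAITING_AUTHORIZATION
                self.console.request_authorization()
        elif self.state is State.AWAITING_AUTHORIZATION:
            if self.console.fire_authorized():
                self.state = State.WEAPONS_FREE  # a human made the call
            elif self.console.stood_down():
                self.state = State.IDLE

The point of the design is that the AI never owns the kill decision: the worst a recognition bug can do on its own is raise a false alarm on the operator's screen.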
 

Right. What happens when the "target" fails to understand the warning because they're deaf, don't know the language the warning was given in, or are severely mentally handicapped? The AI will generally assume that anyone who doesn't heed a warning is deliberately disregarding it.
 

Yeah. And if they try to make the AI recognize those situations, malicious people might fake them in order to get through.
 
Next, you'll be telling me about the dangers of leaving the automated gun in an open space where it can get struck by lightning, causing the AI to go berserk, like in Stealth :).

Notice that this automated turret has not been programmed to distinguish between armed and unarmed people, merely between people and trees (see the sketch at the end of this post). So there's no chance of a mistake - it will work as intended, shooting at exactly everyone who approaches. I doubt anybody would ever bother with a turret more sophisticated than that, as that would defeat the point - you set up such turrets in places where you don't want even your own people to be (notice this is the Korean border we're talking about). In places where there's a chance of a mistake, you'll put ordinary people instead. It's just like the Berlin Wall - East Germany didn't set up the wall with all of its defences in order to filter the bad refugees from the good ones, but to be able to kill them all.

Another thing to consider is that even if an automated turret were set up somewhere where it could kill somebody it's not supposed to, the law would most certainly allow us to blame a specific person.
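By way of contrast with the human-in-the-loop sketch earlier in the thread, the entire targeting policy I'm describing would amount to something like this - deliberately crude, and purely illustrative:

def should_fire(detection_label: str) -> bool:
    # The whole policy: anything classified as a person gets shot.
    # No armed/unarmed check, no warning-comprehension check,
    # no human sign-off - by design, not by oversight.
    return detection_label == "person"

# In the exclusion zone, every correct classification is an engagement:
for label in ["tree", "person", "deer", "person"]:
    print(label, "->", "FIRE" if should_fire(label) else "hold")

Any "mistake" such a turret makes is a classification error, not a policy error - which is exactly the earlier point about it working as designed.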
 
I blame Bob from accounting. After all, he's an easy target to go after. Besides, he warned us that producing gum that lasts too long might hurt sales! :D
 