New York City has deployed a robot to patrol its Times Square subway station. The city will retire it within weeks, if not sooner.

The gizmo, called “K5,” is shaped like a giant Weeble and is stuffed with cameras and other sensors. It has no arms, can’t navigate stairs, and has no capacity for communication (other than a mic connecting it to a human operator somewhere offsite).

But it’s covered in large NYPD stickers. The idea is that it will deter crime simply by its presence.

If that’s the goal, they should have hired a more threatening robot. Maybe something that looked like the fictional RoboCop, or the menacing ED-209 from the same film. Guns and claws are scary. Eggs, not so much.

Heck, even a version of a Dalek, which is little more than a metallic salt shaker with an indeterminate proboscis, would be more likely to frighten away would-be criminals.

A real robot called Spot, made by Boston Dynamics, looks like a dog and moves like a stiff horse, and it can’t do much more than the K5. But I’ve seen it live and it’s just scary, like it’s ready to pounce, or like a laser beam might shoot out of the space where its head should be (like I said, scary). The Los Angeles Police Department is testing it.

So, why put a goofy giant rolling white egg in Times Square Station? I can think of at least three reasons:

First, the bureaucrats behind the scheme don’t know a thing about technology.

It doesn’t take any special expertise to see that the K5’s appearance all but dares people to laugh out loud. Inside, it’s effectively a bucket of cameras on wheels, which makes it redundant with the cameras already peppered throughout the station.

As a robot, it brings no new tech to the job. Even its trick of returning to a charging station for another jolt was pioneered years ago by Roomba vacuum cleaners.

Second, the bureaucrats don’t know a thing about human behavior.

Privacy advocates are already worried that the surveillance eggs could be fitted with facial recognition or other, as-yet-uninvented tools for monitoring people. Most folks don’t like having machines watching and judging their behavior. And there are extremely difficult questions about future uses of AI not just to spot criminal behavior but to predict criminal intent.

The only way the K5 makes a difference is if people assume it’s already doing such nefarious things.

This is why I give it weeks, if not days, before somebody spray-paints over its sensors or pushes it down a flight of stairs. Maybe it gets hacked or gets whacked with a baseball bat. It will be interesting to see what the NYPD does with all of the footage of people giving K5 the middle finger.

Third, maybe the real story is about negotiating labor contracts.

It turns out that renting the K5 costs about $9 an hour, which must be significantly less than the average hourly wage for human officers. The city recently reached a long-overdue contract agreement with the majority of its police unions, but that deal is up for renegotiation in 2025.
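For a rough sense of that gap, here is a back-of-the-envelope sketch. The robot rate is the $9-an-hour figure above; the officer cost is a purely hypothetical placeholder I’ve chosen for illustration, not an NYPD number.

```python
# Back-of-the-envelope comparison (illustrative only).
# K5_RENTAL_PER_HOUR comes from the reported rental rate; the officer
# figure is an assumed, hypothetical fully loaded hourly cost, not NYPD data.

K5_RENTAL_PER_HOUR = 9.00          # reported K5 rental cost, $/hour
ASSUMED_OFFICER_PER_HOUR = 50.00   # hypothetical loaded cost of one officer, $/hour

HOURS_PER_YEAR = 24 * 365          # the robot can, in principle, patrol around the clock

robot_annual = K5_RENTAL_PER_HOUR * HOURS_PER_YEAR
officer_annual = ASSUMED_OFFICER_PER_HOUR * HOURS_PER_YEAR

print(f"K5, around the clock:     ${robot_annual:,.0f}/year")
print(f"Officer-equivalent cover: ${officer_annual:,.0f}/year")
print(f"Ratio: roughly {officer_annual / robot_annual:.1f}x")
```

Under those assumptions, round-the-clock robot coverage runs a few tens of thousands of dollars a year, several times cheaper than paying humans to stand in the same spot. The exact ratio depends entirely on what an officer actually costs the city, which I’m only guessing at here.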

I wonder whether the implicit threat of using robots for policing factored into the last negotiation, or whether it’s meant to temper the next one.

If so, the K5 test is already a success, even if it looks silly.

[This essay appeared originally at Spiritual Telegraph]