Recent crises at Uber and Facebook are reminders that we’re all test subjects for new tech, whether or not we know it.

An Uber self-driving car traveling 40 mph ran over a pedestrian walking her bike on the side of the road in Tempe last Saturday night. In 2016, 270,000 Facebook users were duped into sharing personal information, which enabled the collection of data on 50 million of their friends and family members; that data was then used to manipulate voting preferences in the US presidential election.

Though Uber has halted its tests in Pittsburgh, San Francisco, and Toronto, I wonder how many citizens knew those tests were underway in the first place.

There are 20 such experiments underway in American cities, with 14 more planned (the numbers are greater across the globe). They’re usually approved by city councils or by departments of civic government dedicated to technology or “innovation.” Standard procedure is to note the programs on a website, the way Austin, TX has done, and make available to the public a detailed report on the scope and promise of the projects.

This usually comes with some vague request for feedback, though in Austin’s case it requires registration, so people have to give away their data in order to ask questions about giving away their data. And where the words “risks” or “trade-offs” appear in such reports at all, there’s no objective assessment behind them.

We elect politicians to make governance decisions on our behalf, and they’ve been looking at new technologies ever since cars and skyscrapers were first invented. I’d never presume to question, say, a building inspector’s expert opinion, no matter how much Internet searching leads me to do otherwise with my doctor and accountant.

But, as the recent bridge collapse at FIU might reveal, governance questions are complicated and nuanced: they involve economics and the environment, create dependencies on other infrastructure planned or already in place, and can’t be wholly separated from politics. If citizens knew the route that project had followed to completion, would they have comfortably driven under the experiment?

That question is multiplied many times over when it comes to autonomous tech in our cities and factories, because in those cases the qualified inspectors mostly work for the companies selling the technologies, so there’s no reliable way to assess them objectively. Vendors can honestly “follow the rules” and still deliver a host of unintended consequences, to say nothing of what happens when they break them.

It’s also behind the recent Facebook revelation: what Cambridge Analytica did with user data is exactly what Facebook’s advertisers do…and pay Facebook billions for the privilege. The only rule Cambridge Analytica broke was selling politics instead of shoes.

Giving away personal information while online, whether handed over deliberately or revealed simply by one’s presence, is the ugly secret behind the profits of the companies represented broadly as FANG (Facebook, Amazon, Netflix, and Google).

People agree to give up their information via long, complex contracts drafted by lawyers (written to claim everything they can, while providing users the least possible explanation and little authority for recourse), and they’re told that the services they’re getting in exchange are free.

The only reason we let that slip is that we’ve been convinced technology solutions are inherently good, if not better than ones conceived by biased and imperfect humans.

Perhaps that’s why Chicago (where I live) is secretly feting the location scouts from Amazon this week, hoping to lure them to build a second headquarters here; our city is reportedly offering zillions in benefits, in hopes of generating “50,000 jobs, and billions in revenue.”

Nobody has confirmed those numbers, nor voted on the veracity of such promises. Worse, everyone involved has signed non-disclosure agreements, just to ensure that any decisions will be made in secret.

So we’ll all be dummies for that experiment, too.

The solution is to let consumers and citizens truly opt-in and participate in such projects through:

  • Objective Assessments — Municipalities and businesses should develop unbiased, unaffiliated analyses of the real and potential impacts and trade-offs for any major technology initiative. These should be available to anyone who might be touched by them.
  • Informed Consent — Individuals should be presented with insights into technology before it is imposed on them (and allowed to vote on it), and provided with enough plain-language explanation to understand the costs of opting in to online services.
  • Public Accountability — If constant innovation and iteration are the hallmark of successful technologies, then public accountability requires regularly updating and informing consumers and citizens about those changes (and allowing them to re-up their involvement).

Although the federal government passed legislation last fall that overturned regulations states wanted to impose on driverless cars, in a push to ease the rules, 90% of companies want legislation to guide smart city development, according to a poll last year. The General Data Protection Regulation coming out of the EU might seem like crushing socialism to some, but it is a legitimate effort to add consumers and citizens to the equation.

All of us benefit from new technologies, and I know that those benefits will increase over time. But at least we could get more clarity on the associated risks and costs and, even better, reclaim the authority to decline them.

Categories: Innovation, Essays