Seriously, *Who Decides?* Obviously such a system could simultaneously optimize for different goals. Some goals will be synergistic, some will be opposed, some will be unexpected.
For example, when breeding dogs for a behavioral trait, researchers found that the dogs' coats were changing in color, length, and texture. If Google wants to apply a Lamarckian, genetic model of information control to change opinions and actions, there will be unexpected, unpredictable changes both related and unrelated to the goals being optimized for and against.
So we have an opaque, invisible, automated system that guides behavior and thought (see *The Filter Bubble*) at the behest of individuals, of organizations with goals, or possibly of the gestalt desires of everyone in the data pool, as interpreted by the system itself. Even *IF* the system itself is trustworthy, all of those sources are suspect and can lead directly to harmful goals.

Organizations setting goals for this system is the scariest idea. Of course advertisers would love to control our information stream to point us toward their products, possibly exclusively. Governments would love to squash dissent, and political parties would love to target both their supporters and their opponents. Imagine gun control groups setting the goal of lower gun ownership to reduce gun deaths: gun shops stop showing up in your results, you see all of the gun-negative articles about accidents and murders, and examples of lawful use are downvoted. That's just one small goal that could be framed as a "social good."
Imagine how churches could use such a tool, making increased church attendance a social good. Imagine *chan-style assholes gaming the system to swell the crowd at some protest they arranged in order to troll or intimidate someone. These are small, short-term goals this system could be "rented" for, because remember: Google (Alphabet) is a business that exists to make money. It's not going to create an AI system just to make the world a better place; it's going to have to be profitable.
Any individual will have an incomplete, prejudiced, and simply incorrect view of the world as a whole. Even expert specialists can disagree about the right path to take to solve a problem they agree exists. We as a people cannot trust any individual with this power, for obvious reasons.
An automatically generated gestalt of the goals of humanity is what the video implies, but the problem with that is the problem of individuals writ large. We, as individuals, have prejudices and patterns that society has made invisible to us. A gestalt system will absorb and magnify those issues. Unless there is some hand on the controls, this system would accelerate emergent trends and latent patterns, and that hand would necessarily be an organization or an individual with the limitations I've already discussed.
Now, this is a "thought experiment," stated to be unrelated to any existing or planned product. But we've already seen Facebook experimenting with changing people's moods. We see China rolling out a social credit system designed to move the population away from dissent and toward the goals of the state. We know that Google and other companies are collating and categorizing data in as many databases as possible, cross-linked and indexed on as many variables as possible. The companies do that because they have the data and think there will be profit in it. And this video and the other similar programs show that this is a coming storm.