The 2016 Survey: Algorithm impacts by 2026

Will impacts of the networked, automated world be mostly positive?

Experts and highly engaged netizens participated in answering a five-question survey fielded by the Imagining the Internet Center and the Pew Internet, Science & Technology Project from July 1-August 12, 2016. This report, issued Feb. 8, 2017, is tied to a survey question asking respondents to share their answer to the following query:

Algorithms will continue to have increasing influence over the next decade, shaping people's work and personal lives and the ways they interact with information, institutions (banks, health care providers, retailers, governments, education, media and entertainment) and each other. The hope is that algorithms will help people quickly and fairly execute tasks and get the information, products, and services they want. The fear is that algorithms can purposely or inadvertently create discrimination, enable social engineering and have other harmful societal impacts. Will the net overall effect of algorithms be positive for individuals and society or negative for individuals and society? Select from 1) Positives outweigh negatives; 2) Negatives outweigh positives; 3) The overall impact will be about 50-50. Please elaborate on the reasons for your answer.

Among the key themes emerging from 1,302 respondents' answers in the report released Feb. 8, 2017 were:

– Algorithms will continue to spread everywhere.
– The benefits, visible and invisible, can lead to greater insight into the world.
– The many upsides of algorithms are accompanied by challenges.
– Code processes are being refined; ethics and issues are being worked out.
– Data-driven approaches achieved through thoughtful design are a plus.
– Algorithms don't have to be perfect; they just have to be better than people.
– In the future, the world may be governed by benevolent AI.
– Humanity and human agency are lost when data and predictive modeling become paramount.
– Programming primarily in pursuit of profits and efficiencies is a threat.
– Algorithms manipulate people and outcomes, and even read our minds.
– All of this will lead to a flawed yet inescapable logic-driven society.
– There will be a loss of complex decision-making capabilities and local intelligence.
– Suggested solutions include embedding respect for the individual.
– Algorithms reflect the biases of programmers and datasets.
– Algorithms depend upon data that is often limited, deficient, or incorrect.
– The disadvantaged are likely to become more so.
– Algorithms create filter bubbles and silos shaped by corporate data collectors.
– Algorithms limit people's exposure to a wider range of ideas and reliable information and eliminate serendipity.
– Unemployment numbers will rise as smarter, more-efficient algorithms take on many work activities.
– There is a need for a redefined global economic system to support humanity.
– Algorithmic literacy is crucial.
– There should be accountability processes, oversight, and transparency.
– There is pessimism about the prospects for policy rules and oversight.

This page holds the content of the 87-page survey report. It includes hundreds of respondents’ elaborations – their best estimations of what lies ahead – embedded in an analysis of all 1,302 responses.

Summary of Key Findings
Code-Dependent: Algorithm Age Pros and Cons

Algorithms are aimed at optimizing everything. They can save lives, make things easier and conquer chaos. Still, experts worry that they can also put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, curtail choices, creativity and serendipity, and could result in greater unemployment.

To illuminate current attitudes about the potential impacts of algorithms in the next decade, Pew Research Center and Elon University's Imagining the Internet Center conducted a large-scale canvassing of technology experts, scholars, corporate practitioners and government leaders. Respondents to the 2016 Future of the Internet canvassing anticipate that algorithms will expand their influence over daily life by 2026, with huge implications.

We call this a canvassing because it is not a representative, randomized survey. Its findings emerge from an "opt in" invitation to several thousand people identified through research as widely quoted technology builders and analysts, along with those who have made insightful predictions in response to our previous queries about the future of the internet.

Some 1,302 experts responded to the following question:

Algorithm Impacts – Algorithms will continue to have increasing influence over the next decade, shaping people's work and personal lives and the ways they interact with information, institutions (banks, health care providers, retailers, governments, education, media and entertainment) and each other. Will the net overall effect of algorithms be positive for individuals and society or negative for individuals and society? Select from 1) Positives outweigh negatives; 2) Negatives outweigh positives; 3) The overall impact will be about 50-50. Please elaborate on the reasons for your answer.

The non-scientific canvassing found that 38% of these particular respondents predicted that the positive impacts of algorithms will outweigh negatives for individuals and society in general, while 37% said negatives will outweigh positives; 25% said the overall impact of algorithms will be about 50-50, positive-negative. [See the section below titled "About this canvassing of experts" for further details about the limits of this sample.]

Participants were asked to explain their answers, and most wrote detailed elaborations that provide insights into hopeful and concerning trends. Respondents were allowed to answer anonymously, and anonymous responses constitute a slight majority of the written elaborations. These findings do not represent all possible points of view on a question like this, but they do reveal a wide range of valuable observations based on current trends.

Seven key themes found among the responses are illustrated in the following graphic and in the report that follows it.

Seven Major Themes of the Algorithm Age

In the next section we offer a brief outline of the seven key themes found among the written elaborations. Following that introductory section there is a more in-depth look at additional respondents' thoughts tied to each of the themes. Some responses are lightly edited for style.

Theme 1: Algorithms will continue to spread everywhere

There is fairly uniform agreement among these respondents that algorithms are generally invisible to the public and that there will be an exponential rise in their influence over the next decade. A representative statement of this view came from Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp. He replied:

"'If every algorithm suddenly stopped working, it would be the end of the world as we know it.' (Pedro Domingos). Fact: We have already turned our world over to machine learning and algorithms. The question now is, how to better understand and manage what we have done?

"Algorithms are a useful artifact to begin discussing the larger issue of the effects of technology-enabled assists in our lives. Namely, how can we see them at work? Consider and assess their assumptions? And most importantly for those who don't create algorithms for a living – how do we educate ourselves about the way they work, where they are in operation, what assumptions and biases are inherent in them, and how to keep them transparent? Like fish in a tank, we can see them swimming around and keep an eye on them.

"Algorithms are the new arbiters of human decision-making in almost any area we can imagine, from watching a movie to buying a house (Zillow.com) to self-driving cars (Google). Deloitte Global predicted that more than 80 of the world's 100 largest enterprise software companies will have cognitive technologies – mediated by algorithms – integrated into their products by the end of 2016. As Brian Christian and Tom Griffiths write in Algorithms to Live By, algorithms provide 'a better standard against which to compare human cognition itself.' They are also a goad to consider that same cognition: How are we thinking, and what does it mean to think through algorithms to mediate our world?

"The main positive result of this is a better understanding of how to make rational decisions, and in this measure a better understanding of ourselves. After all, algorithms are generated by trial and error, by testing, by observing, and by coming to certain mathematical formulae regarding choices that have been made again and again – and this can be used for difficult choices and problems, especially when intuitively we cannot readily see an answer or a way to resolve the problem. The 37% Rule and other algorithmic conclusions are evidence-based guides that enable us to use wisdom and mathematically verified steps to make better decisions.
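The 37% Rule that Chudakov cites comes from optimal-stopping theory (the "secretary problem"). A minimal sketch of the rule, with invented candidate scores, might look like this:

```python
import random

# The 37% Rule: examine the first ~37% of options without committing,
# then take the first option that beats everything seen so far.
# Candidate scores and trial counts here are invented for illustration.

def pick(candidates):
    n = len(candidates)
    cutoff = max(1, round(n * 0.37))
    best_seen = max(candidates[:cutoff])     # benchmark from the lookahead phase
    for value in candidates[cutoff:]:
        if value > best_seen:                # first option that beats the benchmark
            return value
    return candidates[-1]                    # ran out of options; take the last

# Over many random orderings, this rule finds the single best candidate
# roughly 37% of the time, far better than choosing at random.
random.seed(0)
trials = 10_000
wins = sum(
    pick(random.sample(range(100), 100)) == 99
    for _ in range(trials)
)
print(wins / trials)
```

The point is not the code but the shape of the guarantee: a fixed, mathematically derived stopping rule outperforms unaided intuition on this class of choices.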

"The secondary positive result is connectivity. In a technological recapitulation of what spiritual teachers have been saying for centuries, our things are demonstrating that everything is – or can be – connected to everything else. Algorithms with the persistence and ubiquity of insects will automate processes that used to require human manipulation and thinking. These can now manage basic processes of monitoring, measuring, counting or even seeing. Our car can tell us to slow down. Our televisions can suggest movies to watch. A grocery store can suggest a healthy combination of meats and vegetables for dinner. Siri reminds you it's your anniversary.

"The main negative changes come down to a simple but now quite difficult question: How can we see, and fully understand the implications of, the algorithms programmed into everyday actions and decisions? The rub is this: Whose intelligence is it, anyway? … Our systems do not have, and we need to build in, the ability to not only create technological solutions but also see and explore their consequences before we build business models, companies and markets on their strengths, and especially on their limitations."

Chudakov added that this is especially necessary because in the next decade and beyond, "By expanding collection and analysis of data and the resulting application of this information, a layer of intelligence or thinking manipulation is added to processes and objects that previously did not have that layer. So prediction possibilities follow us around like a pet. The result: As information tools and predictive dynamics are more widely adopted, our lives will be increasingly affected by their inherent conclusions and the narratives they spawn."

"The overall impact of ubiquitous algorithms is presently incalculable because the presence of algorithms in everyday processes and transactions is now so great, and is mostly hidden from public view. All of our extended thinking systems (algorithms fuel the software and connectivity that create extended thinking systems) demand more thinking – not less – and a more global perspective than we have previously managed. The expanding collection and analysis of data and the resulting application of this information can cure diseases, decrease poverty, bring timely solutions to people and places where need is greatest, and dispel millennia of prejudice, ill-founded conclusions, inhumane practice and ignorance of all kinds. Our algorithms are now redefining what we think, how we think and what we know. We need to ask them to think about their thinking – to look out for pitfalls and inherent biases before those are baked in and harder to remove.

"To create oversight that would assess the impact of algorithms, first we need to see and understand them in the context for which they were developed. That, by itself, is a tall order that requires impartial experts backtracking through the technology development process to find the models and formulae that originated the algorithms. Then, keeping all that learning at hand, the experts need to soberly assess the benefits and deficits or risks the algorithms create. Who is prepared to do this? Who has the time, the budget and the resources to investigate and recommend useful courses of action? This is a 21st-century job description – and market niche – in search of real people and companies. In order to make algorithms more transparent, products and product information circulars might include an outline of algorithmic assumptions, akin to the nutritional sidebar now found on many packaged food products, that would inform users of how algorithms drive intelligence in a given product and a reasonable outline of the implications inherent in those assumptions."
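Chudakov's "nutritional sidebar" proposal can be made concrete. One hypothetical shape such a label might take – every field name and value below is invented for illustration, not any real standard – is a simple structured record shipped alongside the product:

```python
# Hypothetical "algorithmic assumptions" label, by analogy with a
# nutrition facts panel. All fields and values are invented examples.
algorithm_facts = {
    "product": "ExampleCam home security camera",
    "purpose": "flag 'unusual' motion and alert the owner",
    "inputs": ["video frames", "time of day", "owner feedback"],
    "training_data": "vendor-collected clips; demographic mix undisclosed",
    "key_assumptions": [
        "motion unlike the training clips is 'unusual'",
        "owner feedback reflects desired behavior",
    ],
    "known_limitations": ["higher error rates in low light"],
    "human_review": "none; alerts are fully automated",
}

# Render the label the way a packaging sidebar might.
for field, value in algorithm_facts.items():
    print(f"{field}: {value}")
```

Even this toy version shows what disclosure would force into the open: what the system consumes, what it assumes, and where it is known to fail.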

Theme 2: Good things lie ahead

A number of respondents noted the many ways in which algorithms will help make sense of massive amounts of data, noting that this will spark breakthroughs in science, new conveniences and human capacities in everyday life, and an ever-better capacity to link people to the information that will help them. They perform seemingly miraculous tasks humans cannot, and they will continue to greatly augment human intelligence and assist in accomplishing great things. A representative proponent of this view is Stephen Downes, a researcher at the National Research Council of Canada, who listed the following as positive changes:

"Some examples:

Banks. Today banks provide loans based on very incomplete data. It is true that many people who today qualify for loans would not get them in the future. However, many people – and arguably many more people – will be able to obtain loans in the future, as banks turn away from using such factors as race, socio-economic background, postal code and the like to assess fit. Moreover, with more data (and with a more interactive relationship between bank and client) banks can reduce their risk, thus providing more loans, while at the same time providing a range of services individually directed to actually help a person's financial state.

"Health care providers. Health care is a significant and growing expense, not because people are becoming less healthy (in fact, society-wide, the opposite is true) but because of the significant overhead required to support increasingly complex systems, including prescriptions, insurance, facilities and more. New technologies will enable health providers to shift a significant percentage of that load to the individual, who will (with the aid of personal support systems) manage their health better, coordinate and manage their own care, and create less of a burden on the system. As the overall cost of health care declines, it becomes increasingly feasible to provide single-payer health insurance for the entire population, which has known beneficial health outcomes and efficiencies.

"Governments. A significant proportion of government is based on regulation and monitoring, which will no longer be required with the deployment of automated production and transportation systems, along with sensor networks. This includes many of the daily (and often unpleasant) interactions we have with government today, from traffic offenses and manifestations of civil discontent to unfair treatment in commercial and legal processes. A simple example: One of the most persistent political problems in the United States is the gerrymandering of political boundaries to benefit incumbents. Electoral divisions created by an algorithm would to a large degree eliminate gerrymandering (and when open and debatable, could be modified to improve on that result)."
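Downes's redistricting example can be sketched in miniature. The toy below, with invented block populations, cuts a one-dimensional strip of census blocks into contiguous districts of roughly equal population; real proposals (such as the shortest-splitline method) work on two-dimensional geography under legal constraints, but the idea is the same: a published rule leaves no room for partisan boundary-drawing.

```python
# Toy sketch of neutral, rule-based districting on a 1-D strip of
# census blocks. Data and method are invented for illustration only.

def districts(populations, k):
    """Greedily split a strip of blocks into k contiguous districts
    whose populations approach the equal-population target."""
    target = sum(populations) / k
    plan, current, acc, remaining = [], [], 0, k
    for i, pop in enumerate(populations):
        current.append(i)
        acc += pop
        blocks_left = len(populations) - i - 1
        # Cut when the district reaches the target, as long as enough
        # blocks remain to give every later district at least one.
        if acc >= target and remaining > 1 and blocks_left >= remaining - 1:
            plan.append(current)
            current, acc = [], 0
            remaining -= 1
    plan.append(current)
    return plan

plan = districts([120, 80, 100, 95, 105, 110, 90, 100], 4)
print(plan)  # → [[0, 1], [2, 3, 4], [5, 6], [7]]
```

Because the rule is deterministic and openly stated, anyone can re-run it and verify the boundaries, which is the property Downes is pointing at.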

A sampling of additional examples of positive expectations from anonymous respondents:

– "Algorithms find knowledge in an automated way much faster than traditionally feasible."
– "Algorithms can crunch databases quickly enough to alleviate some of the red tape and bureaucracy that currently slows progress down."
– "We will see less pollution, improved human health, less economic waste."
– "Algorithms have the potential to equalize access to information."
– "The efficiencies of algorithms will lead to more creativity and self-expression."
– "Algorithms can diminish transportation issues; they can identify congestion and alternative times and paths."
– "Self-driving cars could dramatically reduce the number of accidents we have per year, as well as improve quality of life for most people."
– "Better-targeted delivery of news, services and advertising."
– "More evidence-based social science using algorithms to collect data from social media and click trails."
– "Improved and more proactive police work, targeting areas where crime can be prevented."
– "Fewer underdeveloped areas and more international commercial exchanges."
– "Algorithms ease the friction in decision-making, purchasing, transportation and a large number of other behaviors."
– "Bots will follow orders to buy your stocks. Digital agents will find the materials you need."
– "Any errors could be corrected. This will mean the algorithms only become more efficient at serving humanity's desires as time progresses."

Themes illuminating concerns and challenges

Participants in this study were in substantial agreement that the abundant positives of accelerating code-dependency will continue to drive the spread of algorithms; however, as with all great technological revolutions, this trend has a dark side. Most respondents pointed out concerns, chief among them the final five overarching themes of this report; all have subthemes.

Theme 3: Humanity and human agency are lost
when data and predictive modeling become paramount

Advances in algorithms are allowing technology corporations and governments to gather, store, sort and analyze massive data sets. Experts in this canvassing noted that these algorithms are primarily written to optimize efficiency and profitability without much thought about the possible societal impacts of the data modeling and analysis. These respondents argued that humans are considered an "input" to the process rather than real, thinking, feeling, changing beings. They say this is creating a flawed, logic-driven society and that as the process evolves – that is, as algorithms begin to write the algorithms – humans may get left out of the loop, letting "the robots decide." Representative of this view:

Bart Knijnenburg, assistant professor in human-centered computing at Clemson University, replied, "Algorithms will capitalize on convenience and profit, thereby discriminating [against] certain populations but also eroding the experience of everyone else. The goal of algorithms is to fit some of our preferences, but not necessarily all of them: They essentially present a caricature of our tastes and preferences. My biggest fear is that, unless we tune our algorithms for self-actualization, it will be simply too convenient for people to follow the advice of an algorithm (or, too difficult to go beyond such advice), turning these algorithms into self-fulfilling prophecies and users into zombies who exclusively consume easy-to-consume items."

An anonymous futurist said, "This has been going on since the beginning of the industrial revolution. Every time you design a human system optimized for efficiency or profitability, you dehumanize the workforce. That dehumanization has now spread to our health care and social services. When you remove the humanity from a system where people are included, they become victims."

Another anonymous respondent wrote, "We simply can't capture every data element that represents the vastness of a person and that person's needs, wants, hopes, desires. Who is collecting what data points? Do the human beings the data points reflect even know, or did they just agree to the terms of service because they had no real choice? Who is making money from the data? How is anyone to know how his/her data is being massaged, and for what purposes, to justify what ends? There is no transparency, and oversight is a farce. It's all hidden from view. I will always remain convinced the data will be used to enrich and/or protect others and not the individual. It's the basic nature of the economic system in which we live."

A sampling of comments tied to this theme from other respondents (for more thorough details tied to this point, read the fuller versions in the full report below):

– "The potential for good is huge, but the potential for misuse and abuse – intentional and inadvertent – may be greater."
– "Companies seek to maximize profit, not maximize societal good. Worse, they repackage profit-seeking as a societal good. We are nearing the crest of a wave, the trough side of which is a new ethics of manipulation, marketing and nearly complete lack of privacy."
– "What we see already today is that, in practice, stuff like 'differential pricing' does not help the consumer; it helps the company that is selling things, etc."
– "Individual human beings will be herded around like cattle, with predictably destructive results on rule of law, social justice and economics."
– "There is an incentive only to further obfuscate the presence and operations of algorithmic shaping of communications processes."
– "Algorithms are … amplifying the negative impacts of data gaps and exclusions."
– "Algorithms have the capability to shape individuals' decisions without them even knowing it, giving those who have control of the algorithms an unfair position of power."
– "The fact the internet can, through algorithms, be used to almost read our minds means [that] those who have access to the algorithms and their databases have a vast opportunity to manipulate large population groups."
– "The lack of accountability and complete opacity is frightening."
– "By utilitarian metrics, algorithmic decision-making has no downside; the fact that it results in perpetual injustices toward the very minority classes it creates will be ignored. The Common Good has become a discredited, obsolete relic of The Past."
– "In an economy increasingly dominated by a tiny, very privileged and insulated portion of the population, it will largely reproduce inequality for their benefit. Criticism will be belittled and dismissed because of the veneer of digital 'logic' over the process."
– "Algorithms are the new gold, and it's hard to explain why the average 'good' is at odds with the individual 'good.'"
– "We will interpret the negative individual impact as the necessary collateral damage of 'progress.'"
– "This will kill local intelligence, local skills, minority languages and local entrepreneurship because most of the available resources will be drained out by the global competitors."
– "Algorithms in the past have been created by a programmer. In the future they will likely be evolved by intelligent/learning machines …. Humans will lose their agency in the world."
– "It will only get worse because there's no 'crisis' to respond to, and hence not only no motivation to change, but every reason to keep it going – especially by the powerful interests involved. We are heading for a nightmare."
– "Web 2.0 provides more convenience for citizens who need to get a ride home, but at the same time – and it's naive to think this is a coincidence – it's also a monetized, corporatized, disempowering, cannibalizing harbinger of the End Times. (I exaggerate for effect. But not by much.)"

Theme 4: Biases exist in algorithmically organized systems,
among programmers and inside datasets

Two strands of thinking tie together here. One is that the algorithm creators (code writers), even if they strive for inclusiveness, objectivity and neutrality, build into their creations their own perspectives and values. The other is that the datasets to which algorithms are applied have their own limits and deficiencies. Even datasets with billions of pieces of information do not capture the fullness of people鈥檚 lives and the diversity of their experiences. Moreover, the datasets themselves are imperfect because they do not contain inputs from everyone or a representative sample of everyone. The two themes are advanced in these answers:

Justin Reich, executive director at the MIT Teaching Systems Lab, observed, "The algorithms will be primarily designed by white and Asian men – with data selected by these same privileged actors – for the benefit of consumers like themselves. Most people in positions of privilege will find these new tools convenient, safe and useful. The harms of new technology will be most experienced by those already disadvantaged in society, where advertising algorithms offer bail bondsman ads that assume readers are criminals, loan applications that penalize people for proxies so correlated with race that they effectively penalize people based on race, and similar issues."

Dudley Irish, a software engineer, observed, "All – let me repeat that, all – of the training data contains biases. Much of it is either racial- or class-related, with a fair sprinkling of simply punishing people for not using a standard dialect of English. To paraphrase Immanuel Kant, out of the crooked timber of these datasets no straight thing was ever made."
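Irish's point about biased training data can be shown mechanically. The sketch below uses invented synthetic records: a rule "learned" from biased historical decisions reproduces the bias even though it never sees a protected attribute directly, because ZIP code acts as a proxy for group membership.

```python
# Minimal illustration of proxy bias, with invented synthetic data.
# Historical records as (zip_code, was_approved): past decisions
# approved zip "A" applicants far more often than zip "B" applicants.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def learned_rule(zip_code):
    """Caricature of a trained classifier: approve whenever the
    historical approval rate for this ZIP exceeds 50%."""
    outcomes = [approved for z, approved in history if z == zip_code]
    return sum(outcomes) / len(outcomes) > 0.5

print(learned_rule("A"))  # True  -- the old disparity carries forward
print(learned_rule("B"))  # False
```

No field in the data says "group," yet the learned rule perfectly reconstructs the historical disparity; real models with thousands of correlated features do the same thing less visibly.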

Following is a sampling of quote excerpts tied to this theme from other respondents (for details, read the fuller versions in the full report below):

– "Algorithms are, by definition, impersonal and based on gross data and generalized assumptions. The people writing algorithms, even those grounded in data, are a non-representative subset of the population."
– "If you start at a place of inequality and you use algorithms to decide what is a likely outcome for a person/system, you inevitably reinforce inequalities."
– "We will all be mistreated as more homogenous than we are."
– "The result could be the institutionalization of biased and damaging decisions with the excuse of, 'The computer made the decision, so we have to accept it.'"
– "The algorithms will reflect the biased thinking of people. Garbage in, garbage out. Many dimensions of life will be affected, but few will be helped. Oversight will be very difficult or impossible."
– "Algorithms value efficiency over correctness or fairness, and over time their evolution will continue the same priorities that initially formulated them."
– "One of the greatest challenges of the next era will be balancing protection of intellectual property in algorithms with protecting the subjects of those algorithms from unfair discrimination and social engineering."
– "Algorithms purport to be fair, rational and unbiased but just enforce prejudices with no recourse."
– "Unless the algorithms are essentially open source and as such can be modified by user feedback in some fair fashion, the power that likely algorithm-producers (corporations and governments) have to make choices favorable to themselves, whether in internet terms of service or adhesion contracts or political biases, will inject both conscious and unconscious bias into algorithms."

Theme 5: Algorithmic categorizations deepen divides

Two connected ideas about societal divisions were evident in many respondents' answers. First, they predicted that an algorithm-assisted future will widen the gap between the digitally savvy (predominantly the most well-off, who are the most desired demographic in the new information ecosystem) and those who are not nearly as connected or able to participate. Second, they said social and political divisions will be abetted by algorithms, as algorithm-driven categorizations and classifications steer people into echo chambers of repeated and reinforced media and political content. Two illustrative answers:

Ryan Hayes, owner of Fit to Tweet, commented, "Twenty years ago we talked about the 'digital divide' being people who had access to a computer at home vs. those that didn't, or those who had access to the internet vs. those who didn't …. Ten years from now, though, the life of someone whose capabilities and perception of the world is augmented by sensors and processed with powerful AI and connected to vast amounts of data is going to be vastly different from that of those who don't have access to those tools or knowledge of how to utilize them. And that divide will be self-perpetuating, where those with fewer capabilities will be more vulnerable in many ways to those with more."

Adam Gismondi, a visiting scholar at Boston College, wrote, "I am fearful that as users are quarantined into distinct ideological areas, human capacity for empathy may suffer. Brushing up against contrasting viewpoints challenges us, and if we are able to (actively or passively) avoid others with different perspectives, it will negatively impact our society. It will be telling to see what features our major social media companies add in coming years, as they will have tremendous power over the structure of information flow."
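The echo-chamber dynamic these respondents describe can be reproduced with a toy feedback loop. In the simulation below (all parameters invented), an epsilon-greedy recommender mostly shows whichever topic has earned the most clicks so far; since only shown topics can earn clicks, exposure collapses onto a single topic regardless of the user's mild, balanced tastes.

```python
import random

# Toy simulation of recommender lock-in. Topics, probabilities and
# round counts are invented for illustration.
random.seed(7)
topics = ["politics", "sports", "science", "arts"]
clicks = {t: 0 for t in topics}
EPSILON = 0.1      # how often the feed explores a random topic
CLICK_PROB = 0.5   # user's chance of clicking whatever is shown

shown = []
for _ in range(2000):
    if random.random() < EPSILON:
        topic = random.choice(topics)                 # rare exploration
    else:
        topic = max(topics, key=lambda t: clicks[t])  # show the leader
    shown.append(topic)
    if random.random() < CLICK_PROB:
        clicks[topic] += 1                            # only shown topics gain

top_share = max(shown.count(t) for t in topics) / len(shown)
print(round(top_share, 2))  # a single topic comes to dominate the feed
```

The user never asked for a narrow feed; the narrowing is an emergent property of optimizing for clicks, which is the structural worry behind Gismondi's comment.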

Following is a sampling of quote excerpts tied to this theme from other respondents (for details, read the fuller versions in the full report below):

– "If the current economic order remains in place, then I do not see the growth of data-driven algorithms providing much benefit to anyone outside of the richest in society."
– "Social inequalities will presumably become reified."
– "The major risk is that less-regular users, especially those who cluster on one or two sites or platforms, won't develop that navigational and selection facility and will be at a disadvantage."
– "Algorithms make discrimination more efficient and sanitized. Positive impact will be increased profits for organizations able to avoid risk and costs. Negative impacts will be carried by all deemed by algorithms to be risky or less profitable."
– "Society will be stratified by which trust/identity provider one can afford/qualify to go with. The level of privacy and protection will vary. Lois McMaster [Bujold]'s Jackson's Whole suddenly seems a little more chillingly realistic."
– "We have radically divergent sets of values, political and other, and algos are always rooted in the value systems of their creators. So the scenario is one of a vast opening of opportunity – economic and otherwise – under the control of either the likes of Zuckerberg or the grey-haired movers of global capital or …."
– "The overall effect will be positive for some individuals. It will be negative for the poor and the uneducated. As a result, the digital divide and wealth disparity will grow. It will be a net negative for society."
– "Racial exclusion in consumer targeting. Gendered exclusion in consumer targeting. Class exclusion in consumer targeting …. Nationalistic exclusion in consumer targeting."
– "If the algorithms directing news flow suppress contradictory information – information that challenges the assumptions and values of individuals – we may see increasing extremes of separation in worldviews among rapidly diverging subpopulations."
– "We may be heading for lowest-common-denominator information flows."
– "Efficiency and the pleasantness and serotonin that come from prescriptive order are highly overrated. Keeping some chaos in our lives is important."

A number of participants in this canvassing expressed concerns over the change in the public's information diets, the "atomization of media," an overemphasis on the extreme, ugly and weird in the news, and the favoring of "truthiness" over more factual material that may be vital to understanding how to be a responsible citizen of the world.

Theme 6: Unemployment numbers will rise

The spread of artificial intelligence (AI) has the potential to create major unemployment and all the fallout from that.

An anonymous CEO said, "If a task can be effectively represented by an algorithm, then it can be easily performed by a machine. The negative trend I see here is that – with the rise of the algorithm – humans will be replaced by machines/computers for many jobs/tasks. What will then be the fate of Man?"

A sampling of quote excerpts tied to this theme from other respondents (for details, read the fuller versions in the full report below):

• "AI and robots are likely to disrupt the workforce to a potential 100% human unemployment. They will be smarter, more efficient and productive, and cost less, so it makes sense for corporations and business to move in this direction."
• "The massive boosts in productivity due to automation will increase the disparity between workers and owners of capital."
• "Modern Western society is built on a societal model whereby Capital is exchanged for Labour to provide economic growth. If Labour is no longer part of that exchange, the ramifications will be immense."
• "No jobs, growing population and less need for the average person to function autonomously. Which part of this is warm and fuzzy?"
• "I foresee algorithms replacing almost all workers with no real options for the replaced humans."
• "In the long run, it could be a good thing for individuals by doing away with low-value repetitive tasks and motivating them to perform ones that create higher value."
• "Hopefully, countries will have responded by implementing forms of minimal guaranteed living wages and free education past K-12; otherwise the brightest will use online resources to rapidly surpass average individuals and the wealthiest will use their economic power to gain more political advantages."

Theme 7: The need grows for algorithmic literacy, transparency, and oversight

The respondents to this canvassing offered a variety of ideas about how individuals and the broader culture might respond to the algorithm-ization of life. They argued for public education to instill in the general public literacy about how algorithms function. They also noted that those who create and evolve algorithms are not held accountable to society and argued there should be some method by which they are. Representative comments:

Susan Etlinger, industry analyst at Altimeter Group, said, "Much like the way we increasingly wish to know the place and under what conditions our food and clothing are made, we should question how our data and decisions are made as well. What is the supply chain for that information? Is there clear stewardship and an audit trail? Were the assumptions based on partial information, flawed sources or irrelevant benchmarks? Did we train our data sufficiently? Were the right stakeholders involved, and did we learn from our mistakes? The upshot of all of this is that our entire way of managing organizations will be upended in the next decade. The power to create and change reality will reside in technology that only a few truly understand. So to ensure that we use algorithms successfully, whether for financial or human benefit or both, we need to have governance and accountability structures in place. Easier said than done, but if there were ever a time to bring the smartest minds in industry together with the smartest minds in academia to solve this problem, this is the time."

Chris Kutarna, author of Age of Discovery and fellow at the Oxford Martin School, wrote, "Algorithms are an explicit form of heuristic, a way of routinizing certain choices and decisions so that we are not constantly drinking from a fire hydrant of sensory inputs. That coping strategy has always been co-evolving with humanity, and with the complexity of our social systems and data environments. Becoming explicitly aware of our simplifying assumptions and heuristics is an important site at which our intellects and influence mature. What is different now is the increasing power to program these heuristics explicitly, to perform the simplification outside of the human mind and within the machines and platforms that deliver data to billions of individual lives. It will take us some time to develop the wisdom and the ethics to understand and direct this power. In the meantime, we honestly don't know how well or safely it is being applied. The first and most important step is to develop better social awareness of who, how, and where it is being applied."

A sampling of quote excerpts tied to this theme from other respondents (for details, read the fuller versions in the full report, below):

• "Who guards the guardians? And, in particular, which 'guardians' are doing what, to whom, using the vast collection of information?"
• "There are no incentives in capitalism to fight filter bubbles, profiling, and the negative effects, and governmental/international governance is virtually powerless."
• "Oversight mechanisms might include stricter access protocols; sign off on ethical codes for digital management and named stewards of information; online tracking of an individual's reuse of information; opt-out functions; setting timelines on access; no third-party sale without consent."
• "Unless there is an increased effort to make true information literacy a part of basic education, there will be a class of people who can use algorithms and a class used by algorithms."
• "Consumers have to be informed, educated, and, indeed, activist in their orientation toward something subtle. This is what computer literacy is about in the 21st century."
• "Finding a framework to allow for transparency and assess outcomes will be crucial. Also a need to have a broad understanding of the algorithmic 'value chain' and that data is the key driver and as valuable as the algorithm which it trains."
• "Algorithmic accountability is a big-tent project, requiring the skills of theorists and practitioners, lawyers, social scientists, journalists, and others. It's an urgent, global cause with committed and mobilized experts looking for support."
• "Eventually, software liability law will be recognized to be in need of reform, since right now, literally, coders can get away with murder."
• "The Law of Unintended Consequences indicates that the increasing layers of societal and technical complexity encoded in algorithms ensure that unforeseen catastrophic events will occur – probably not the ones we were worrying about."
• "Eventually we will evolve mechanisms to give consumers greater control that should result in greater understanding and trust …. The pushback will be inevitable but necessary and will, in the long run, result in balances that are more beneficial for all of us."
• "We need some kind of rainbow coalition to come up with rules to avoid allowing inbuilt bias and groupthink to affect the outcomes."
• "Algorithms are too complicated to ever be transparent or to ever be completely safe. These factors will continue to influence the direction of our culture."
• "I expect meta-algorithms will be developed to try to counter the negatives of algorithms."
Anonymous respondents shared these one-liners on the topic:
• "The golden rule: He who owns the gold makes the rules."
• "The bad guys appear to be way ahead of the good guys."
• "Resistance is futile."
• "Algorithms are defined by people who want to sell you something (goods, services, ideologies) and will twist the results to favor doing so."
• "Algorithms are surely helpful but likely insufficient unless combined with human knowledge and political will."

Finally, this prediction from an anonymous participant who sees the likely endpoint to be one of two extremes:
"The overall impact will be utopia or the end of the human race; there is no middle ground foreseeable. I suspect utopia given that we have survived at least one existential crisis (nuclear) in the past and that our track record toward peace, although slow, is solid."

Key experts’ thinking about the future impacts of algorithms

Following is a brief collection of comments by several of the many top analysts who participated in this canvassing:

'Steering people to useful information'

Vinton Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google: "Algorithms are mostly intended to steer people to useful information and I see this as a net positive."

Beware 'unverified, untracked, unrefined models'

Cory Doctorow, writer, computer science activist-in-residence at MIT Media Lab and co-owner of Boing Boing, responded, "The choices in this question are too limited. The right answer is, 'If we use machine learning models rigorously, they will make things better; if we use them to paper over injustice with the veneer of machine empiricism, it will be worse.' Amazon uses machine learning to optimize its sales strategies. When they make a change, they make a prediction about its likely outcome on sales, then they use sales data from that prediction to refine the model. Predictive sentencing scoring contractors to America's prison system use machine learning to optimize sentencing recommendation. Their model also makes predictions about likely outcomes (on reoffending), but there is no tracking of whether their model makes good predictions, and no refinement. This frees them to make terrible predictions without consequence. This characteristic of unverified, untracked, unrefined models is present in many places: terrorist watchlists; drone-killing profiling models; modern redlining/Jim Crow systems that limit credit; predictive policing algorithms; etc. If we mandate, or establish normative limits, on practices that correct this sleazy conduct, then we can use empiricism to correct for bias and improve the fairness and impartiality of firms and the state (and public/private partnerships). If, on the other hand, the practice continues as is, it terminates with a kind of Kafkaesque nightmare where we do things 'because the computer says so' and we call them fair 'because the computer says so.'"
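Doctorow's contrast between refined and unrefined predictive models can be pictured as a feedback loop. The following Python sketch is our own toy illustration of that loop, not code from the report or from any real system; the function names and the simple bias-correction rule are invented for the example:

```python
def refined_forecast(baselines, actuals, rate=0.5):
    """Rigorous use in Doctorow's sense: predict, check the outcome, refine.

    A running bias term stands in for a real model update; each observed
    error nudges future predictions toward reality.
    """
    bias = 0.0
    predictions = []
    for baseline, actual in zip(baselines, actuals):
        prediction = baseline + bias          # model output with learned correction
        predictions.append(prediction)
        bias += rate * (actual - prediction)  # refinement: learn from the miss
    return predictions


def unrefined_forecast(baselines):
    """The 'unverified, untracked, unrefined' pattern: outcomes never feed back."""
    return list(baselines)  # predictions never improve, however wrong they are


# A model that starts 4 units off: the refined loop converges on the truth,
# while the unrefined one repeats the same error forever.
refined = refined_forecast([10.0] * 10, [14.0] * 10)
unrefined = unrefined_forecast([10.0] * 10)
```

The point of the sketch is Doctorow's, not a claim about machine-learning technique: what separates the two cases is simply whether predictions are ever compared with outcomes.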

'A general trend toward positive outcomes will prevail'

Jonathan Grudin, principal researcher at Microsoft, said, "We are finally reaching a state of symbiosis or partnership with technology. The algorithms are not in control; people create and adjust them. However, positive effects for one person can be negative for another, and tracing causes and effects can be difficult, so we will have to continually work to understand and adjust the balance. Ultimately, most key decisions will be political, and I'm optimistic that a general trend toward positive outcomes will prevail, given the tremendous potential upside to technology use. I'm less worried about bad actors prevailing than I am about unintended and unnoticed negative consequences sneaking up on us."

'Faceless systems more interested in surveillance and advertising than actual service'

Doc Searls, journalist, speaker and director of Project VRM at Harvard University's Berkman Center, wrote, "The biggest issue with algorithms today is the black-box nature of some of the largest and most consequential ones. An example is the one used by Dun & Bradstreet to decide credit worthiness. The methods behind the decisions it makes are completely opaque, not only to those whose credit is judged, but to most of the people running the algorithm as well. Only the programmers are in a position to know for sure what the algorithm does, and even they might not be clear about what's going on. In some cases there is no way to tell exactly why or how a decision by an algorithm is reached. And even if the responsible parties do know exactly how the algorithm works, they will call it a trade secret and keep it hidden. There is already pushback against the opacity of algorithms, and the sometimes vast systems behind them. Many lawmakers and regulators also want to see, for example, Google's and Facebook's vast server farms more deeply known and understood. These things have the size, scale, and in some ways the importance of nuclear power plants and oil refineries, yet enjoy almost no regulatory oversight. This will change. At the same time, so will the size of the entities using algorithms. They will get smaller and more numerous, as more responsibility over individual lives moves away from faceless systems more interested in surveillance and advertising than actual service."

A call for #AlgorithmicTransparency

Marc Rotenberg, executive director of the Electronic Privacy Information Center, observed, "The core problem with algorithmic-based decision making is the lack of accountability. Machines have literally become black boxes – even the developers and operators do not fully understand how outputs are produced. The problem is further exacerbated by 'digital scientism' (my phrase) – an unwavering faith in the reliability of big data. 'Algorithmic transparency' should be established as a fundamental requirement for all AI-based decision-making. There is a larger problem with the increase of algorithm-based outcomes beyond the risk of error or discrimination – the increasing opacity of decision-making and the growing lack of human accountability. We need to confront the reality that power and authority are moving from people to machines. That is why #AlgorithmicTransparency is one of the great challenges of our era."

The data 'will be misused in various ways'

Richard Stallman, Internet Hall of Fame member and president of the Free Software Foundation, said, "People will be pressured to hand over all the personal data that the algorithms would judge. The data, once accumulated, will be misused in various ways – by the companies that collect them, by rogue employees, by crackers that steal the data from the company's site, and by the state via National Security Letters. I have heard that people who refuse to be used by Facebook are discriminated against in some ways. Perhaps soon they will be denied entry to the U.S., for instance. Even if the U.S. doesn't actually do that, people will fear that it will. Compare this with China's social obedience score for internet users."

People must live with outcomes of algorithms 'even though they are fearful of the risks'

David Clark, Internet Hall of Fame member and senior research scientist at MIT, replied, "I see the positive outcomes outweighing the negative, but the issue will be that certain people will suffer negative consequences, perhaps very serious, and society will have to decide how to deal with these outcomes. These outcomes will probably differ in character, and in our ability to understand why they happened, and this reality will make some people fearful. But as we see today that people feel that they must use the internet to be a part of society, even if they are fearful of the consequences, people will accept that they must live with the outcomes of these algorithms, even though they are fearful of the risks."

'EVERY area of life will be affected. Every. Single. One.'

Baratunde Thurston, Director's Fellow at MIT Media Lab, Fast Company columnist, and former digital director of The Onion, wrote: "Main positive changes: 1) The excuse of not knowing things will be reduced greatly as information becomes even more connected and complete. 2) Mistakes that result from errors in human judgment, 'knowledge,' or reaction time will be greatly reduced. Let's call this the 'robots drive better than people' principle. Today's drivers will whine, but in 50 years no one will want to drive when they can use that transportation time to experience a reality-indistinguishable immersive virtual environment filled with a bunch of Beyonce bots. 3) Corruption that exists today as a result of human deception will decline significantly – bribes, graft, nepotism. If the algorithms are built well and robustly, the opportunity to insert this inefficiency (e.g., hiring some idiot because he's your cousin) should go down. 4) In general, we should achieve a much more efficient distribution of resources, including expensive (in dollars or environmental cost) resources like fossil fuels. Basically, algorithmic insight will start to affect the design of our homes, cities, transportation networks, manufacturing levels, waste management processing, and more. There's a lot of redundancy in a world where every American has a car she never uses. We should become far more energy efficient once we reduce the redundancy of human-drafted processes.

"But there will be negative changes," he continued. "1) There will be an increased speed of interactions and volume of information processed – everything will get faster. None of the efficiency gains brought about by technology has ever led to more leisure or rest or happiness. We will simply shop more, work more, decide more things because our capacity to do all those will have increased. It's like adding lanes to the highway as a traffic management solution. When you do that, you just encourage more people to drive. The real trick is to not add more car lanes but build a world in which fewer people need or want to drive.

"2) There will be algorithmic and data-centric oppression. Given that these systems will be designed by demonstrably imperfect and biased human beings, we are likely to create new and far less visible forms of discrimination and oppression. The makers of these algorithms and the collectors of the data used to test and prime them have nowhere near a comprehensive understanding of culture, values, and diversity. They will forget to test their image recognition on dark skin or their medical diagnostic tools on Asian women or their transport models during major sporting events under heavy fog. We will assume the machines are smarter, but we will realize they are just as dumb as we are but better at hiding it.

"3) Entire groups of people will be excluded and they most likely won't know about the parallel reality they don't experience.

"Every area of life will be affected. Every. Single. One."

A call for 'industry reform' and 'more savvy regulatory regimes'

Technologist Anil Dash said, "The best parts of algorithmic influence will make life better for many people, but the worst excesses will truly harm the most marginalized in unpredictable ways. We'll need both industry reform within the technology companies creating these systems and far more savvy regulatory regimes to handle the complex challenges that arise."

'We are a society that takes its life direction from the palm of our hands'

John Markoff, author of Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots and senior writer at The New York Times, observed, "I am most concerned about the lack of algorithmic transparency. Increasingly we are a society that takes its life direction from the palm of our hands – our smartphones. Guidance on everything from what is the best Korean BBQ to who to pick for a spouse is algorithmically generated. There is little insight, however, into the values and motives of the designers of these systems."

Fix the 'organizational, societal and political climate we've constructed'

danah boyd, founder of Data & Society, commented, "An algorithm means nothing by itself. What's at stake is how a 'model' is created and used. A model is comprised of a set of data (e.g., training data in a machine learning system) alongside an algorithm. The algorithm is nothing without the data. But the model is also nothing without the use case. The same technology can be used to empower people (e.g., identify people at risk) as harm them. It all depends on who is using the information to what ends (e.g., social services vs. police). Because of unhealthy power dynamics in our society, I sadly suspect that the outcomes will be far more problematic – mechanisms to limit people's opportunities, segment and segregate people into unequal buckets, and leverage surveillance to force people into more oppressive situations. But it doesn't have to be that way. What's at stake has little to do with the technology; it has everything to do with the organizational, societal and political climate we've constructed."

We have an algorithmic problem already: Credit scores

Henning Schulzrinne, Internet Hall of Fame member and professor at Columbia University, noted, "We already have had early indicators of the difficulties with algorithmic decision-making, namely credit scores. Their computation is opaque and they were then used for all kinds of purposes far removed from making loans, such as employment decisions or segmenting customers for different treatment. They leak lots of private information and are disclosed, by intent or negligence, to entities that do not act in the best interest of the consumer. Correcting data is difficult and time-consuming, and thus unlikely to be available to individuals with limited resources. It is unclear how the proposed algorithms address these well-known problems, given that they are often subject to no regulations whatsoever. In many areas, the input variables are either crude (and often proxies for race), such as home ZIP code, or extremely invasive, such as monitoring driving behavior minute-by-minute. Given the absence of privacy laws, in general, there is every incentive for entities that can observe our behavior, such as advertising brokers, to monetize behavioral information. At minimum, institutions that have broad societal impact would need to disclose the input variables used, how they influence the outcome and be subject to review, not just individual record corrections. An honest, verifiable cost-benefit analysis, measuring improved efficiency or better outcomes against the loss of privacy or inadvertent discrimination, would avoid the 'trust us, it will be wonderful and it's AI!' decision-making."

Algorithms 'create value and cut costs' and will be improved

Robert Atkinson, president of the Information Technology and Innovation Foundation, said, "Like virtually all past technologies, algorithms will create value and cut costs, far in excess of any costs. Moreover, as organizations and society get more experience with use of algorithms there will be natural forces toward improvement and limiting any potential problems."

'The goal should be to help people question authority'

Judith Donath of the Harvard Berkman Klein Center for Internet & Society replied, "Data can be incomplete, or wrong, and algorithms can embed false assumptions. The danger in increased reliance on algorithms is that the decision-making process becomes oracular: opaque yet unarguable. The solution is design. The process should not be a black box into which we feed data and out comes an answer, but a transparent process designed not just to produce a result, but to explain how it came up with that result. The systems should be able to produce clear, legible text and graphics that help the users – readers, editors, doctors, patients, loan applicants, voters, etc. – understand how the decision was made. The systems should be interactive, so that people can examine how changing data, assumptions, rules would change outcomes. The algorithm should not be the new authority; the goal should be to help people question authority."

Do more to train coders with diverse world views

Amy Webb, futurist and CEO at the Future Today Institute, wrote, "In order to make our machines think, we humans need to help them learn. Along with other pre-programmed training datasets, our personal data is being used to help machines make decisions. However, there are no standard ethical requirements or mandate for diversity, and as a result we're already starting to see a more dystopian future unfold in the present. There are too many examples to cite, but I'll list a few: would-be borrowers turned away from banks, individuals with black-identifying names seeing themselves in advertisements for criminal background searches, people being denied insurance and health care. Most of the time, these problems arise from a limited worldview, not because coders are inherently racist. Algorithms have a nasty habit of doing exactly what we tell them to do. Now, what happens when we've instructed our machines to learn from us? And to begin making decisions on their own? The only way to address algorithmic discrimination in the future is to invest in the present. The overwhelming majority of coders are white and male. Corporations must do more than publish transparency reports about their staff – they must actively invest in women and people of color, who will soon be the next generation of workers. And when the day comes, they must choose new hires both for their skills and their worldview. Universities must redouble their efforts not only to recruit a diverse body of students – administrators and faculty must support them through to graduation. And not just students. Universities must diversify their faculties, to ensure that students see themselves reflected in their teachers."

The impact in the short term will be negative; in the longer term it will be positive

Jamais Cascio, distinguished fellow at the Institute for the Future, observed, "The impact of algorithms in the early transition era will be overall negative, as we (humans, human society and economy) attempt to learn how to integrate these technologies. Bias, error, corruption and more will make the implementation of algorithmic systems brittle, and make exploiting those failures for malice, political power or lulz comparatively easy. By the time the transition takes hold – probably a good 20 years, maybe a bit less – many of those problems will be overcome, and the ancillary adaptations (e.g., potential rise of universal basic income) will start to have an overall benefit. In other words, shorter term (this decade) negative, longer term (next decade) positive."

The story will keep shifting

Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future, commented, "The future effects of algorithms in our lives will shift over time as we master new competencies. The rates of adoption and diffusion will be highly uneven, based on natural variables of geographies, the environment, economies, infrastructure, policies, sociologies, psychology, and – most importantly – education. The growth of human benefits of machine intelligence will be most constrained by our collective competencies to design and interact effectively with machines. At an absolute minimum, we need to learn to form effective questions and tasks for machines, how to interpret responses and how to simply detect and repair a machine mistake."

Make algorithms 'comprehensible, predictable and controllable'

Ben Shneiderman, professor of computer science at the University of Maryland, wrote, "When well-designed, algorithms amplify human abilities, but they must be comprehensible, predictable and controllable. This means they must be designed to be transparent so that users can understand the impacts of their use and they must be subject to continuing evaluation so that critics can assess bias and errors. Every system needs a responsible contact person/organization that maintains/updates the algorithm and a social structure so that the community of users can discuss their experiences."

In key cases, give the user control

David Weinberger, senior researcher at the Harvard Berkman Klein Center for Internet & Society, said, "Algorithmic analysis at scale can turn up relationships that are predictive and helpful even if they are beyond the human capacity to understand them. This is fine where the stakes are low, such as a book recommendation. Where the stakes are high, such as algorithmically filtering a news feed, we need to be far more careful, especially when the incentives for the creators are not aligned with the interests of the individuals or of the broader social goods. In those latter cases, giving more control to the user seems highly advisable."

Fuller Details:
Predictions for Algorithm Impacts by 2026

Algorithms are instructions for solving a problem or completing a task. Recipes are algorithms. Computer code is algorithmic. The internet runs on algorithms and all online searching is accomplished through them. Email knows where to go thanks to algorithms. Smartphone apps are nothing but algorithms. Computer and video games are algorithmic storytelling. Online dating and book-recommendation and travel websites would not function without algorithms. GPS mapping systems get people from point A to point B via algorithms. Artificial intelligence (AI) is naught but algorithms. The material people see on social media is brought to them by algorithms. In fact, everything people see and do on the web is a product of algorithms. Every time someone sorts a column in a spreadsheet, algorithms are at play, and most financial transactions today are accomplished by algorithms. Algorithms help gadgets respond to voice commands, recognize faces, sort photos and build and drive cars. Hacking, cyberattacks and cryptographic code-breaking exploit algorithms. Self-learning and self-programming algorithms are now emerging, so it is possible that in the future algorithms will write many if not most algorithms.
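To make the definition concrete, here is a toy example of our own (not drawn from the report): a classic sorting routine, which is nothing more than explicit step-by-step instructions for completing a task.

```python
def insertion_sort(items):
    """A textbook algorithm: a fixed recipe of steps that orders a list."""
    result = list(items)  # work on a copy; leave the input untouched
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        # Shift larger elements right until the slot for `current` opens up.
        while j >= 0 and result[j] > current:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = current
    return result


print(insertion_sort([3, 1, 4, 1, 5]))  # prints [1, 1, 3, 4, 5]
```

Every spreadsheet sort, search ranking, or recommendation engine mentioned above is, at bottom, a recipe of this kind, only vastly larger.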

Algorithms are often elegant and incredibly useful tools used to accomplish tasks. They are mostly invisible aids, augmenting human lives in increasingly incredible ways. However, sometimes the application of algorithms created with good intentions leads to unintended consequences. Recent news items tie to these concerns:

• The British pound fell sharply in value within seconds in an October 2016 "flash crash," partly because of currency trades triggered by algorithms.
• Microsoft engineers created a Twitter bot named "Tay" in the spring of 2016 in an attempt to chat with Millennials by responding to their prompts, but within hours it was spouting racist and offensive tweets, based on algorithms that had it "learning" how to respond to others.
• Facebook tried to create a feature to highlight Trending Topics from around the site in people's feeds. First, it had a team of human editors curating the list, but controversy erupted when some accused the platform of suppressing conservative news. So, Facebook then turned the job over to algorithms, only to find that they surfaced false and misleading stories.
• Cathy O'Neil, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, has described how poorly designed algorithms can entrench inequality, using algorithmic hiring practices as an example.
• Well-intentioned algorithms can be sabotaged by bad actors. An internet slowdown swept the East Coast of the U.S. on Oct. 21, 2016, after hackers bombarded Dyn DNS, an internet traffic handler, with information that overloaded its circuits, ushering in a new era of internet attacks powered by internet-connected devices. This after internet security expert Bruce Schneier warned in September that someone appeared to be learning how to take down the internet. And the abuse of Facebook's News Feed algorithm and the general promulgation of fake news became controversial as the 2016 U.S. presidential election proceeded.
• Researcher Andrew Tutt called for "an FDA for algorithms," noting, "The rise of increasingly complex algorithms calls for critical thought about how to best prevent, deter and compensate for the harms that they cause …. Algorithmic regulation will require federal uniformity, expert judgment, political independence and pre-market review to prevent – without stifling innovation – the introduction of unacceptably dangerous algorithms into the market."
• The White House released two reports in October 2016, one detailing the future of artificial intelligence and another laying out a national strategic plan for AI research and development, and it issued a December report outlining some of the potential effects of AI-driven automation on the U.S. economy.
• On January 17, 2017, the Future of Life Institute published a list of 23 principles for the beneficial development of artificial intelligence. The signatories included Stephen Hawking, Elon Musk, Ray Kurzweil and more than 1,600 other researchers and endorsers.

The use of algorithms is spreading as massive amounts of data are being created, captured and analyzed by businesses and governments. Some are calling this the Age of Algorithms and predicting that the future of algorithms is tied to machine learning and deep learning that will get better and better at an ever-faster pace.

While many of the 2016 U.S. presidential election post-mortems noted the revolutionary impact of web-based tools in influencing its outcome, XPrize Foundation CEO Peter Diamandis predicted that emerging technologies will have an even greater effect on future elections. He said advances in quantum computing and the rapid evolution of AI and AI agents embedded in systems and devices in the Internet of Things will lead to hyper-stalking, influencing and shaping of voters, and hyper-personalized ads, and will create new ways to misrepresent reality and perpetuate falsehoods.

Analysts like Aneesh Aneesh of Stanford University foresee algorithms taking over public and private activities in a new era of "algocratic governance" that supplants "bureaucratic hierarchies." Others, like Harvard's Shoshana Zuboff, describe the emergence of "surveillance capitalism" that organizes economic behavior in an "information civilization."

The sizeable majority of experts surveyed for this report envision major advances in programmed artificial intelligence and other algorithmic advances in the coming decade.

Following are some themes that emerged.

Theme 1: Algorithms will continue to spread everywhere

Nearly all of these respondents see the great advantages of the algorithms that are already changing how connected institutions and people live and work. A significant majority expects them to continue to proliferate, mostly invisibly, and expects that there will be an exponential rise in their influence. They say this will bring many benefits and some challenges.

Jim Warren, longtime technology entrepreneur and activist, described algorithms: "Any sequence of instructions for how to do something (or how a machine that can understand said instructions can do it) is - by definition - an 'algorithm.' All sides - great and small, benevolent and malevolent - have always created and exercised such algorithms (recipes for accomplishing a desired function), and always will. Almost all of the 'good' that humankind has created - as well as all the harm (sometimes only in the eye of the beholder) - has been from discovering how to do something, and then repeating that process. And more often than not, sharing it with others. Like all-powerful but double-edged tools, algorithms are. ;-)"
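Warren's point - that an algorithm is nothing more exotic than a repeatable, shareable recipe - is as old as mathematics itself. A minimal illustration (Euclid's method for greatest common divisors, one of the oldest recorded "recipes"):

```python
def gcd(a, b):
    """Euclid's algorithm: a fixed sequence of instructions that,
    once written down, anyone (or any machine) can repeat exactly."""
    while b:
        a, b = b, a % b  # replace the pair with (b, remainder) until done
    return a

print(gcd(48, 36))  # -> 12
```

The same property Warren highlights - that the recipe runs identically every time and can be handed to others - is what makes algorithms both powerful and double-edged.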

Terry Langendoen, a U.S. National Science Foundation expert whose job is to support research on algorithms, is enthusiastic about what lies ahead. "The technological improvements in the past 50 years in such areas as speech recognition and synthesis, machine translation and information retrieval have had profound beneficial impacts ...," he said. "The field is poised to make significant advances in the near future."

The benefits will be visible and invisible and can lead to greater human insight into the world

Patrick Tucker, author and technology editor at Defense One, pointed out how today's networked communications amplify the impacts of algorithms. "The internet is turning prediction into an equation," he commented. "From programs that chart potential flu outbreaks to expensive (yet imperfect) 'quant' algorithms that anticipate bursts of stock market volatility, computer-aided prediction is everywhere. As I write in The Naked Future, in the next two decades, as a function of machine learning and big data, we will be able to predict huge areas of the future with far greater accuracy than ever before in human history, including events long thought to be beyond the realm of human inference. That will have an impact in all areas including health care, consumer choice, educational opportunities, etc. The rate by which we can extrapolate meaningful patterns from the data of the present is quickening as rapidly as is the spread of the internet because the two are inexorably linked."

Paul Jones, clinical professor at the University of North Carolina-Chapel Hill and director of ibiblio.org, was optimistic. "The promise of standardization of best practices into code is a promise of stronger best practices and a hope of larger space for human insight," he predicted. "Code, flexible and open code, can make you free - or at least a bit freer."

David Krieger, director of the Institute for Communication & Leadership IKF, predicted, "Data-driven algorithmic cognition and agency will characterize all aspects of society. Humans and non-humans will become partners such that identity(ies) will be distributed and collective. Individualism will become anachronistic. The network is the actor. It is the network that learns, produces, decides, much like the family or clan in collective societies of the past, but now on the basis of big data, AI and transparency. Algorithmic auditing, accountability, benchmarking procedures in machine learning, etc., will play an important role in network governance frameworks that will replace hierarchical, bureaucratic government. Not government, but governance."

An anonymous software security consultant noted, "There will be many positive impacts that aren't even noticed. Having an 'intelligent' routing system for cars may mean most people won't notice when everyone gets to their destination as fast as they used to even with twice the traffic. Automated decisions will indeed have significant impacts upon lots of people, most of the time in ways they won't ever recognize. Already they're being used heavily in financial situations, but most people don't see a significant difference between 'a VP at the bank denied my loan' and 'software at the bank denied my loan' (and in practice, the main difference is an inability to appeal the decision)."

Another anonymous respondent wrote, "Algorithms in general enable people to benefit from the results of the synthesis of large volumes of information where such synthesis was not available in any form before - or at least only to those with significant resources. This will be increasingly positive in terms of enabling better-informed choices. As algorithms scale and become more complex, unintended consequences become harder to predict and harder to fix if they are detected, but the positive benefit above seems so dramatic it should outweigh this effect. Particularly if there are algorithms designed to detect unintended discriminatory or other consequences of other algorithms."
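The watchdog algorithms this respondent imagines already exist in simple forms. One common check is demographic parity: compare a decision system's approval rates across groups. A minimal sketch, with invented loan-decision records and function names purely for illustration:

```python
from collections import defaultdict

def approval_rates(decisions):
    """Group binary decisions by a protected attribute and
    return the approval rate for each group."""
    totals = defaultdict(lambda: [0, 0])  # group -> [approved, seen]
    for group, approved in decisions:
        totals[group][0] += int(approved)
        totals[group][1] += 1
    return {g: a / n for g, (a, n) in totals.items()}

def parity_gap(decisions):
    """Demographic-parity gap: spread between the highest and lowest
    group approval rates. 0.0 means perfectly even rates."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit log of another algorithm's outputs: (group, approved?)
audit_log = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
print(f"parity gap: {parity_gap(audit_log):.2f}")  # -> parity gap: 0.50
```

A large gap does not prove discrimination, but it is exactly the kind of automated red flag that can route a decision system to human review.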

The many upsides of algorithms are accompanied by serious challenges

Respondents often hailed the positives while noting the need to address the downsides.

Galen Hunt, partner research manager at Microsoft Research NExT, reflected the hopes of many when he wrote, "Algorithms will accelerate in their impact on society. If we guard the core values of civil society (like equality, respect, transparency), the most valuable algorithms will be those that help the greatest numbers of people."

Alf Rehn, professor and chair of management and organization at Åbo Akademi University in Finland, commented, "New algorithmic thinking will be a great boon for many people. They will make life easier, shopping less arduous, banking a breeze and a hundred other great things besides. But a shaved monkey can see the upsides. The important thing is to realize the threats, major and minor, of a world run by algorithms. They can enhance filter bubbles for both individuals and companies, limit our view of the world, create more passive consumers, and create a new kind of segregation - think algorithmic haves and have-nots. In addition, for an old hacker like me, as algorithmic logics get more and more prevalent in more and more places, they also increase the number of attack vectors for people who want to pervert their logic, for profit, for more nefarious purposes, or just for the lulz."

Andrew Nachison, founder at We Media, observed, "The positives will be enormous - better shopping experiences, better medical experience, even better experiences with government agencies. Algorithms could even make 'bureaucrat' a friendlier word. But the dark sides of the 'optimized' culture will be profound, obscure and difficult to regulate - including pervasive surveillance of individuals and predictive analytics that will do some people great harm ('Sorry, you're pre-disqualified from a loan.' 'Sorry, we're unable to sell you a train ticket at this time.'). Advances in computing, tracking and embedded technology will herald a quantified culture that will be ever more efficient, magical and terrifying."

Luis Lach, president of the Sociedad Mexicana de Computación en la Educación, A.C., said, "On the negative side we will see huge threats to security, data privacy and attacks to individuals, by governments, private entities and other social actors. And on the positive we will have the huge opportunity for collective and massive collaboration across the entire planet. Of course the science will rise and we will see marvelous advances. Of course we will have a continuum between positive and negative scenarios. What we will do depends on individuals, governments, private companies, nonprofits, academia, etc."

Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information and professor of law at the University of Maryland, wrote, "Algorithms are increasingly important because businesses rarely thought of as high-tech have learned the lessons of the internet giants' successes. Following the advice of Jeff Jarvis' What Would Google Do, they are collecting data from both workers and customers, using algorithmic tools to make decisions, to sort the desirable from the disposable. Companies may be parsing your voice and credit record when you call them, to determine whether you match up to 'ideal customer' status, or are simply 'waste' who can be treated with disdain. Epagogix advises movie studios on what scripts to buy based on how closely they match past, successful scripts. Even winemakers make algorithmic judgments, based on statistical analyses of the weather and other characteristics of good and bad vintage years. For wines or films, the stakes are not terribly high. But when algorithms start affecting critical opportunities for employment, career advancement, health, credit and education, they deserve more scrutiny. U.S. hospitals are using big data-driven systems to determine which patients are high-risk - and data far outside traditional health records is informing those determinations. IBM now uses algorithmic assessment tools to sort employees worldwide on criteria of cost-effectiveness, but spares top managers the same invasive surveillance and ranking. In government, too, algorithmic assessments of dangerousness can lead to longer sentences for convicts, or no-fly lists for travelers. Credit scoring drives billions of dollars in lending, but the scorers' methods remain opaque. The average borrower could lose tens of thousands of dollars over a lifetime, thanks to wrong or unfairly processed data.
It took a combination of computational, legal and social scientific skills to unearth each of the examples discussed above - troubling collection, bad or biased analysis, and discriminatory use. Collaboration among experts in different fields is likely to yield even more important work. Grounded in well-established empirical social science methods, their models can and should inform the regulation of firms and governments using algorithms."

Cindy Cohn, executive director at the Electronic Frontier Foundation, wrote, "The lack of critical thinking among the people embracing these tools is shocking and can lead to some horrible civil liberties outcomes ... I don't think it's possible to assign an overall 'good' or 'bad' to the use of algorithms, honestly. As they say on Facebook, 'It's complicated.'"

Bernardo A. Huberman, senior fellow and director of the Mechanisms and Design Lab at HPE Labs, Hewlett Packard Enterprise, said, "Algorithms do lead to the creation of filters through which people see the world and are informed about it. This will continue to increase. If the negative aspects eventually overtake the positive ones, people will stop resorting to interactions with institutions, media, etc. People's lives are going to continue to be affected by the collection of data about them, but I can also see a future where they won't care as much or will be compensated every time their data is used for money-making purposes."

Marcel Bullinga, trend watcher and keynote speaker, commented, "AI will conquer the world, like the internet and the mobile phone once did. It will end the era of apps. Millions of useless apps (because there are way too many for any individual) will become useful on a personal level if they are integrated and handled by AI. For healthy robots/AI, we must have transparent, open source AI. The era of closed is over. If we stick to closed AI, we will see the rise of more and more tech monopolies dominating our world as Facebook and Google and Uber do now."

Michael Rogers, author and futurist at Practical Futurist, said, "In a sense, we're building a powerful nervous system for society. Big data, real-time analytics, smart software could add great value to our lives and communities. But at the same time they will be powerful levers of social control, many in corporate hands. In today's market economy, driven by profit and shareholder value, the possibility of widespread abuse is quite high. Hopefully society as a whole will be able to use these tools to advance more humanistic values. But whether that is the case lies not in the technology, but in the economic system and our politics."

An anonymous principal engineer commented, "The effect will depend on the situation. In areas where human judgment is required, I foresee negative effects. In areas where human judgment is a hindrance it could be beneficial. For example, I don't see any reason for there to be train accidents (head-on collisions, speeding around a curve) with the correct design of an intelligent train system. Positive and negative effects will also depend on the perception of the person involved. For example, an intelligent road system could relieve congestion and reduce accidents, but also could restrict freedom of people to drive their cars as they wish (e.g., fast). This could be generalized to a reduction in freedom in general, which could be beneficial to some but detrimental to others."

Theme 2: Good things lie ahead

Many respondents to the canvassing pointed out that algorithms are already the backbone for most systems and it is quite evident they have been mostly of great benefit and will continue to improve every aspect of life. Their driving idea is that great things will be achieved thanks to recent and coming advances in algorithm-based actions. These respondents said that algorithms will help make sense of massive amounts of data, and this will inspire breakthroughs in science, new conveniences and human capacities in everyday life, and ever-better capacity to link people to the information that will help them. As an anonymous senior researcher employed by Microsoft replied, "They enable us to search the web and sequence genomes. These two activities alone dwarf the negatives."

Demian Perry, director of mobile at NPR, said algorithmic "helpmates" add efficiencies. "An algorithm is just a way to apply decision-making at scale," he explained. "Mass-produced decisions are, if nothing else, more consistent. Depending on the algorithm (and whom you ask), that consistency is either less nuanced or more disciplined than you might expect from a human. In the NPR One app, we have yet to find an algorithm that can be trusted to select the most important news and the most engrossing stories that everyone must hear. At the same time, we rely heavily on algorithms to help us make fast, real-time decisions about what a listener's behavior tells us about their program preferences, and we use these algorithms to pick the best options to present to them at certain points in their listening experience. Thus algorithms are helpmates in the process of curating the news, but they'll probably never run the show. We believe they will continue to make our drudge work more efficient, so that we have more time to spend on the much more interesting work of telling great stories."

Stowe Boyd, chief researcher at Gigaom, said, "Algorithms and AI will have an enormous impact on the conduct of business. HR is one enormous area that will be revamped top to bottom by this revolution. Starting at a more fundamental level, education will be recast and AI will be taking a lead role. We will rely on AI to oversee other AIs."

Data-driven approaches achieved through thoughtful design are a plus

Jason Hong, an associate professor at Carnegie Mellon University, predicted, "On the whole, algorithms will be a net positive for humanity. Any given individual has a large number [of] cognitive biases, limited experiences, and limited information for making a decision. In contrast, an algorithm can be trained on millions or even billions of examples, and can be specifically tuned for fairness, efficiency, speed or other kinds of desired criteria. In practice, an algorithm will be deployed to work autonomously only in cases where the risks are low (e.g., ads, news) or where the certainty is high (e.g., anti-lock brakes, airplane auto-pilot) or good enough (e.g., Uber's algorithm for allocating passengers to drivers). In most cases, though, it won't be just a person or just an algorithm, but rather the combination of an expert with an algorithm. For example, rather than just a doctor, it will likely be a doctor working with an AI algorithm that has been trained on millions of electronic health records, their treatments and their outcomes. We have several thousand years of human history showing the severe limitations of human judgment. Data-driven approaches based on careful analysis and thoughtful design can only improve the situation."

Marti Hearst, a professor at the University of California-Berkeley, said, "For decades computer algorithms have been automating systems in a more-or-less mechanical way for our benefit. For example, a bank customer could set up automated payment for their phone bill. The change we are seeing more recently is that the algorithms are getting increasingly more sophisticated, going from what we might [have] called 'cut and dried' decisions like 'pay the balance of my phone bill' to much more complex computations resulting in decisions such as 'show products based on my prior behavior' or eventually (and menacingly) 'shut off access to my bank account because of my political posts on social media.' Every one of these advances is two-sided in terms of potential costs and benefits. The benefits can be truly amazing: automated spoken driving directions that take into account traffic congestion and re-route in real time is stunning - the stuff of science fiction in our own lifetimes. On the other hand, quiet side streets known only to the locals suddenly become full of off-route vehicles from out of town. These new algorithms are successful only because they have access to the data about the activity of large numbers of individual people. And the more reliant we become on them, the fewer options anyone has to go 'off the grid.' The rush toward 'big data' has not built in adequate protections from harm for individuals and society against potential abuses of this reliance. The bias issue will be worked out relatively quickly, but the excessive reliance on monitoring of every aspect of life appears unavoidable and irreversible."

Why is the "monitoring of every aspect of life" likely to be "unavoidable and irreversible"? Because all of these improvements are data-dependent. Among the data-reliant innovations expected to rapidly expand are cognitive AI "digital agents" or "assistants."

Scott Amyx, CEO of Amyx+, commented, "Within the field of artificial intelligence, there has been significant progress on cognitive AI as evidenced by Viv, IBM Watson, Amazon Echo, Alexa, Siri, Cortana and X.ai. Advancement in cognitive AI will usher in a new era of orchestration, coordination and automation that will enable humans to focus on human value-add activities (creativity, friendship, perseverance, resolve, hope, etc.) while systems and machines will manage task orientation. More exciting, in my opinion, is the qualitative, empathetic AI - AI that understands our deep human thoughts, desires and drivers and works to support our psychological, emotional and physical well-being. To that end, we are kicking off a research consortium that will further explore this area of research and development with emphasis on friend AI, empathetic AI, humorous AI and confidant AI. To enable hyper-personalization, these neural network AI agents would have to be at the individual level. All of us at some point in the future will have our own ambient AI virtual assistant and friend to help navigate and orchestrate life. It will coordinate with other people, other AI agents, devices and systems on our behalf. Naturally, concerns of strong AI emerge for some. There is active research, private and public, targeted at friendly AI. We will never know for sure if the failsafe measures that we institute could be broken by self-will."

Marina Gorbis, executive director at the Institute for the Future, suggested these as "main positive impacts": "Algorithms will enable each one of us to have a multitude of various types of assistants that would do things on our behalf, amplifying our abilities and reach in ways that we've never seen before. Imagine instead of typing search words and getting a list of articles, pushing a button and getting a narrative paper on a specific topic of interest. It's the equivalent of each one of us having many research and other assistants ... Algorithms also have the potential to uncover current biases in hiring, job descriptions and other text information. Startups like Unitive and Knack show the potential of this."

Ryan Hayes, owner of Fit to Tweet, said he is looking forward to added algorithmic assistance in his daily life. "There are a lot of ways in which the world is more peaceful and our quality of life better than ever before, but we don't necessarily feel that way because we've been juggling more than ever before, too," he said. "For example, when I started my career as a [certified public accountant] I could do my job using paper and a 10-key calculator, and when I left for the day I could relax knowing I was done, whereas today I have over 300 applications that I utilize for my work and I can be reached every minute of the day through Slack, texts, several email accounts and a dozen social media accounts. Technology is going to start helping us not just maximize our productivity but shift toward doing those things in ways that make us happier, healthier, less distracted, safer, more peaceful, etc., and that will be a very positive trend. Technology, in other words, will start helping us enjoy being human again rather than burdening us with more abstraction."

An anonymous deputy CEO wrote, "I hope we will finally see evidence-based medicine and integrated planning in the human habitat. The latter should mean cities developed with appropriate service delivery across a range of infrastructures."

An anonymous computer security researcher observed, "Algorithms combined with machine learning and data analysis could result in products that predict self-defeating behaviors and react and incentivize in ways that could push users far further than they could go by themselves."

Code processes are being refined; ethics and issues are being worked out

David Karger, a professor of computer science at MIT, said, "Algorithms are just the latest tools to generate fear as we consider their potential misuse, like the power loom (put manual laborers out of jobs), the car (puts kids beyond the supervision of their parents), and the television (same fears as today's internet). In all these cases there were downsides but the upsides were greater. The question of algorithmic fairness and discrimination is an important one but it is already being considered. If we want algorithms that don't discriminate, we will be able to design algorithms that do not discriminate. Of course, there are ethical questions: If we have an algorithm that can very accurately predict whether someone will benefit from a certain expensive medical treatment, is it fair to withhold the treatment from people the algorithm thinks it won't help? But the issue here is not with the algorithm but with our specification of our ethical principles."

Respondents predict the development of "ethical machines" and "iteratively improved" code that will diminish the negatives.

Lee McKnight, an associate professor at Syracuse University's School of Information Studies, wrote, "Algorithms coded in smart service systems will have many positive, life-saving and job-creating impacts in the next decade. Social machines will become much better at understanding your needs, and attempting to help you meet them. Ethical machines - such as drones - will know to sense and avoid collisions with other drones, planes, birds or people, recognize restricted air space, and respect privacy law. Algorithmically driven vehicles will similarly learn to better avoid each other. Health care smart-service systems will be driven by algorithms to recognize human and machine errors and omissions, improving care and lowering costs."

Jon Lebkowsky, CEO of Polycot Associates, wrote, "I'm personally committed to agile process, through which code is iteratively improved based on practice and feedback. Algorithms can evolve through agile process. So while there may be negative effects from some of the high-impact algorithms we develop, my hope and expectation is that those algorithms will be refined to diminish the negative and enhance the positive impact."

Edward Friedman, emeritus professor of technology management at the Stevens Institute of Technology, expects more algorithms will be established to evaluate algorithms, writing, "As more algorithms enter the interactive digital world, there will be an increase of Yelp-type evaluation sites that guide users in their most constructive use."

Ed Dodds, a digital strategist, wrote, "Algorithms will force persons to be more reflective about their own personal ontologies, fixed taxonomies, etc., regarding how they organize their own digital assets or bookmark the assets of others. AI will extrapolate. Users will then be able to run thought experiments such as 'OK, show the opposite of those assumptions' and such in natural-language queries. A freemium model will reveal whether inputting a user's own preferred filters is of enough value to pay for."

An anonymous chief scientist observed, "Short-term, the negatives will outweigh the positives, but as we learn and go through various experiences, the balance will eventually go positive. We always need algorithms to be tweakable by humans according to context, creating an environment of IA (intelligent assistants) instead of AI (artificial intelligence)."

Another anonymous respondent agreed, writing, "Algorithms will be improved as a reactive response. So negative results of using them will be complained about loudly at first, word-workers will work on them and identify the language that is at issue, and fine-tune them. At some point it will be 50-50. New ones will always have to be fine-tuned, and it will be the complaining that helps us fine-tune them."
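The "complain loudly, then fine-tune" cycle this respondent describes can be sketched as a toy feedback loop. Everything here is invented for illustration - the decision threshold, the complaint signal, and the step size - but it shows the shape of reactive tuning:

```python
def tune_threshold(threshold, complaint_batches, step=0.05):
    """Toy reactive tuning: after each batch of user complaints,
    nudge a decision threshold in the direction that reduces them.

    complaint_batches: list of (complaints_too_strict, complaints_too_lax)
    pairs, one per review cycle. Returns the threshold history."""
    history = [threshold]
    for too_strict, too_lax in complaint_batches:
        if too_strict > too_lax:
            threshold -= step   # loosen: too many wrongly rejected
        elif too_lax > too_strict:
            threshold += step   # tighten: too many wrongly accepted
        history.append(round(threshold, 2))
    return history

# Loud complaints early, then the system settles toward balance.
print(tune_threshold(0.50, [(9, 2), (6, 3), (2, 2), (1, 4)]))
```

Real systems tune far more than a single scalar, of course, but the dynamic is the same: complaints are the error signal that drives each round of refinement.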

'Algorithms don't have to be perfect; they just have to be better than people'

Some respondents who predicted a mostly positive future said algorithms are unfairly criticized, noting they outperform human capabilities, accomplish great feats and can always be improved.

An anonymous professor who works at New York University said algorithm-based systems are a requirement of our times and mostly work out for the best. "Automated filtering and management of information and decisions is a move forced on us by complexity," he wrote. "False positives and false negatives will remain a problem, but they will be edge cases."

An anonymous chief scientist wrote, "Whenever algorithms replace illogical human decision-making, the result is likely to be an improvement." And an anonymous principal consultant at a top consulting firm wrote, "Fear of algorithms is ridiculously overblown. Algorithms don't have to be perfect, they just have to be better than people."

Daniel Berleant, author of The Human Race to the Future, noted, "Algorithms are less subject to hidden agendas than human advisors and managers. Hence the output of these algorithms will be more socially and economically efficient, in the sense that they will be better aligned with their intended goals. Humans are a lot more suspect in their advice and decisions than computers are."

Avery Holton, an assistant professor and humanities scholar at the University of Utah, got into the details. "In terms of communication across social networks both present and future, algorithms can work quickly to identify our areas of interest as well as others who may share those interests. Yes, this has the potential to create silos and echo chambers, but it also holds the promise of empowerment through engagement encouragement. We can certainly still seek information and relationships by combing through keywords and hashtags, but algorithms can supplement those efforts by showing us not only 'what' we might be interested in and 'what' we might be missing, but 'who' we might be interested in and 'who' we might be missing. Further, these algorithms may be able to provide us some insights about others (e.g., their interests, their engagement habits) that help us better approach, develop and sustain relationships."

David Sarokin, author of Missed Information: Better Information for Building a Wealthier, More Sustainable Future (MIT Press), said algorithms are being applied to identify human bias and discrimination. "Apps/algorithms have a real capability of democratizing information access in important and positive [ways]. For example, phone apps have been developed to collect, collate and combine reports from citizens on their routine interactions - both positive and negative - with police. In widespread use, these can be an effective 'report card' for individual officers as well as overall community policing, and help identify potential problems before they get out of hand."

Dan Ryan, professor of sociology at Mills College in Oakland, California, wrote, "The worry that algorithms might introduce subtle biases strikes me as much social-science ado about very little. No more true than the ways that architecture, cartography, language, organizational rules, credentialing systems, etc., produce these effects."

An anonymous respondent said, "It would be a fallacy to say that without algorithms our society would be more fair. We can 'unteach' discrimination in computers more easily than we can in human beings. The more algorithms are capable of mimicking human behavior, the more we will need to reconsider the implications of what makes us human and how we interact."

An anonymous principal consultant at a consulting firm wrote, "People often confuse a biased algorithm for an algorithm that doesn't confirm their biases. If Facebook shows more liberal stories than conservative, that doesn't mean something is wrong. It could be a reflection of their user base, or of their media sources, or just random chance. What is important is to realize that everything has some bias, intentional or not, and to develop the critical thinking skills to process bias."

In the future the world may be governed by benevolent AI

An anonymous respondent projected ahead several hundred years, writing:

"Algorithms initially will be an extension of the 'self' to help individuals maintain and process the overload of information they have to manage on a daily basis. 'How' identities are managed and 'who' develops the algorithms will dictate the degree of usefulness and/or exploitation. Fast-forward 200 years - no governments or individuals hold a position of power. The world is governed by a self-aware, ego-less, benevolent AI. A single currency of credit (a la bitcoin) is earned by individuals and distributed by the AI according to the 'good' you contribute to society. The algorithm governing the global, collective AI will be optimized toward the common good, maximizing health, safety, happiness, conservation, etc."

Four themes that concentrate on concerns about algorithms

Participants in this study were in substantial agreement that the abundant positives of accelerating code-dependency will continue to drive the spread of algorithms. However, many argued that, as with all great technological revolutions, algorithms have their dark side. The remaining themes highlight answers that focused on the problems many foresee.

Theme 3: Humanity and human agency are lost when data and predictive modeling become paramount

Many respondents said that as people put too much faith in data, humanity can be lost. Some argued that because technology corporations and, sometimes, governments are most often the agencies behind the code, algorithms are written to optimize efficiency and profitability without much thought about the possible societal impacts of the data modeling and analysis. These respondents said people are considered an "input" to the process rather than real, thinking, feeling, changing beings. Some said that as the process evolves – that is, as algorithms begin to write the algorithms – humans may get left completely out of the loop, letting "the robots decide."

An anonymous respondent wrote, "We simply can't capture every data element that represents the vastness of a person and that person's needs, wants, hopes, desires. Who is collecting what data points? Do the human beings the data points reflect even know, or did they just agree to the terms of service because they had no real choice? Who is making money from the data? How is anyone to know how his/her data is being massaged and for what purposes to justify what ends? There is no transparency, and oversight is a farce. It's all hidden from view. I will always remain convinced the data will be used to enrich and/or protect others and not the individual. It's the basic nature of the economic system in which we live."

Peter Eckart's comment reflects the attitude of many in this canvassing: "We can create algorithms faster than we can understand or evaluate their impact. The expansion of computer-mediated relationships means that we have less interest in the individual impact of these innovations, and more on the aggregate outcomes. So we will interpret the negative individual impact as the necessary collateral damage of 'progress.'"

Axel Bruns, a professor at the Digital Media Research Center at Queensland University of Technology, said, "There are competitive, regulatory and legal disadvantages that would result from greater transparency on behalf of the platform operator, and so there is an incentive only to further obfuscate the presence and operations of algorithmic shaping of communications processes. This is not to say that such algorithms are inherently 'bad,' in the sense that they undermine effective communication; algorithms such as Google's PageRank clearly do the job that is asked of them, for instance, and overall have made the web more useful than it would be without them. But without further transparency ordinary users must simply trust that the algorithm does what it is meant to do, and does not inappropriately skew the results it delivers. Such algorithms will continue to be embedded deeply into all aspects of human life, and will also generate increasing volumes of data on their fields. This continues to increase the power that such algorithms already have over how reality is structured, measured and represented, and the potential impact that any inadvertent or deliberate errors could have on user activities, on society's understanding of itself, and on corporate and government decisions. More fundamentally, the increasing importance of algorithms to such processes also transfers greater importance to the source data they work with, amplifying the negative impacts of data gaps and exclusions."

An anonymous community advocate said, "There are a lot of places where algorithms are beneficial and helpful, but so far, none of them take into account the actual needs of humans. Human resources are an input in a business equation at the moment, not real, thinking, feeling symbiotes in the eyes of business."

An anonymous associate professor of political science at a major U.S. university said, "Algorithms are the typecasting of technology. They are a snapshot of behavior influenced by contextual factors that give us a very limited view of an individual. Typecasting is a bad way to be regarded by others and it is a bad way to 'be.'"

Rebecca MacKinnon, director of the Ranking Digital Rights project at New America, commented, "Algorithms driven by machine learning quickly become opaque even to their creators, who no longer understand the logic being followed to make certain decisions or produce certain results. The lack of accountability and complete opacity is frightening. On the other hand, algorithms have revolutionized humans' relationship with information in ways that have been life-saving and empowering and will continue to do so."

Programming primarily in pursuit of profits and efficiencies is a threat

A large number of respondents expressed deep concerns about the primary interests being served by networked algorithms. Most kept their comments anonymous, which makes sense since a significant number of the participants in this canvassing are employed by or are funded in some regard by corporate or government interests. As an anonymous chairman and CEO at a nonprofit organization observed, "The potential for good is huge, but the potential for misuse and abuse, intentional and inadvertent, may be greater." (All respondents not identified by name in this section submitted their comments anonymously.)

One participant described the future this way: "The positives are all pretty straightforward, e.g., you get the answer faster, the product is cheaper/better, the outcome fits the needs more closely. Similarly, the negatives are mostly pretty easy to foresee as well, given that it's fundamentally people/organizations in positions of power that will end up defining the algorithms. Profit motives, power accumulation, etc., are real forces that we can't ignore or eliminate. Those who create the algorithms have a stake in the outcome, so they are, by definition, biased. It's not necessarily bad that this bias is present, but it does have dramatic effects on the outputs, available inputs and various network effects that may be entirely indirect and/or unforeseen by the algorithm developers. As the interconnectedness of our world increases, accurately predicting the negative consequences gets ever harder, so it doesn't even require a bad actor to create deleterious conditions for groups of people, companies, governments, etc."

Another respondent said, "The algorithms will serve the needs of powerful interests, and will work against the less-powerful. We are, of course, already seeing this start to happen. Today there is a ton of valuable data being generated about people's demographics, behaviours, attitudes, preferences, etc. Access to that data (and its implications) is not evenly distributed. It is owned by corporate and governmental interests, and so it will be put to uses that serve those interests. And so what we see already today is that in practice, stuff like 'differential pricing' does not help the consumer; it helps the company that is selling things, etc."

An IT architect at IBM said, "Companies seek to maximize profit, not maximize societal good. Worse, they repackage profit-seeking as a societal good. We are nearing the crest of a wave the trough side of which is a new ethics of manipulation, marketing, nearly complete lack of privacy. All predictive models, whether used for personal convenience or corporate greed, require large amounts of data. The ways to obtain that are at best gently transformative of culture, and on the low side, destructive of privacy. Corporations' use of big data predicts law enforcement's use of shady techniques (e.g., Stingrays) to invade privacy. People all too quickly view law enforcement as 'getting the bad guys their due' but plenty of cases show abuse, mistaken identity, human error resulting in police brutality against the innocent, and so on. More data is unlikely to temper the mistakes; instead, it will fuel police overreach, just as it fuels corporate overreach."

Said another respondent, "Everything will be geared to serve the interests of the corporations and the 1%. Life will become more convenient, but at the cost of discrimination, information compartmentalization and social engineering."

A professor noted, "If lean, efficient global corporations are the definition of success, the future will be mostly positive. If maintaining a middle class with opportunities for success is the criterion by which the algorithms are judged, this will not be likely. It is difficult to imagine that the algorithms will consider societal benefits when they are produced by corporations focused on short-term fiscal outcomes."

A senior software developer wrote, "Smart algorithms can be incredibly useful, but smart algorithms typically lack the black-and-white immediacy that the greedy, stupid and short-sighted prefer. They prefer stupid, overly broad algorithms with lower success rates and massive side effects because these tend to be much easier to understand. As a result, individual human beings will be herded around like cattle, with predictably destructive results on rule of law, social justice and economics. For instance, I see algorithmic social data crunching as leading to 'PreCrime,' where ordinary, innocent citizens are arrested because they set off one too many flags in a Justice Department data dragnet."

One business analyst commented, "The outcome will be positive for society on a corporate/governmental basis, and negative on an individual basis."

A faculty member at a U.S. university said, "Historically, algorithms are inhumane and dehumanizing. They are also irresistible to those in power. By utilitarian metrics, algorithmic decision-making has no downside; the fact that it results in perpetual injustices toward the very minority classes it creates will be ignored. The Common Good has become a discredited, obsolete relic of The Past."

Another respondent who works for a major global human rights foundation said, "Algorithms are already put in place to control what we see on social media and how content is flagged on the same platforms. That's dangerous enough – introducing algorithms into policing, health care, educational opportunities can have a much more severe impact on society."

An anonymous professor of media production and theory warned, "While there is starting to be citizen response to algorithms, they tend to be seen as neutral if they are seen at all. Since algorithms are highly proprietary and highly lucrative, they are highly dangerous. With TV, the U.S. developed public television; what kind of public space for ownership of information will be possible? It is the key question for anyone interested in the future of democratic societies."

David Golumbia, an associate professor of digital studies at Virginia Commonwealth University, wrote, "The putative benefits of algorithmic processing are wildly overstated and the harms are drastically underappreciated. Algorithmic processing in many ways deprives individuals and groups of the ability to know about, and to manage, their lives and responsibilities. Even when aspects of algorithmic control are exposed to individuals, they typically have nowhere near the knowledge necessary to understand what the consequences are of that control. This is already widely evident in the way credit scoring has been used to shape society for decades, most of which have been extremely harmful despite the credit system having some benefit to individuals and families (although the consistent provision of credit beyond what one's income can bear remains a persistent and destructive problem). We are going full-bore into territory that we should be approaching hesitantly if at all, and to the degree that they are raised, concerns about these developments are typically dismissed out of hand by those with the most to gain from those developments."

An anonymous professor at the University of California-Berkeley observed, "Algorithms are being created and used largely by corporations. The interests of the market economy are not the same as those of the people being subjected to algorithmic decision-making. Costs and the romanticization of technology will drive more and more adoption of algorithms in preference to human-situated decision-making. Some will have positive impacts. But the negatives are potentially huge. And I see no kind of oversight mechanism that could possibly work."

Joseph Turow, a communications professor at the University of Pennsylvania, said, "A problem is that even as they make some tasks easier for individuals, many algorithms will chip away at their autonomy by using the data from the interactions to profile them, score them, and decide what options and opportunities to present them next based on those conclusions. All this will be carried out by proprietary algorithms that will not be open to proper understanding and oversight even by the individuals who are scored."

Karl M. van Meter, a sociological researcher and director of the Bulletin of Methodological Sociology, Ecole Normale Supérieure de Paris, said, "The question is really, 'Will the net overall effect of the next decade of bosses be positive for individuals and society or negative for individuals and society?' Good luck with that one."

Sometimes algorithms make false assumptions that place people in an echo chamber/ad delivery system that isn't really a fit for them. An engineer at a U.S. government organization complained, "Some work will become easier, but so will profiling. I, personally, am often misidentified as one racial type, political party, etc., by my gender, address, career, etc., and bombarded with advertising and spam for that person. If I had an open social profile, would I even have that luxury – or would everything now 'match' whichever article I most recently read?"

An anonymous respondent wrote, "One major question is: To what extent will the increased use of algorithms encourage a behaviorist way of thinking of humans as creatures of stimulus and response, capable of being gamed and nudged, rather than as complex entities with imagination and thought? It is possible that a wave of algorithm-ization will trigger new debates about what it means to be a person, and how to treat other people. Philip K. Dick has never been more relevant."

Evan Selinger, professor of philosophy at the Rochester Institute of Technology, wrote, "The more algorithmic advice, algorithmic decision-making and algorithmic action that occurs on our behalf, the more we risk losing something fundamental about our humanity. But because being 'human' is a contested concept, it's hard to make a persuasive case for when and how our humanity is actually diminished, and how much harm each diminishment brings. Only when better research into these questions is available can a solid answer be provided as to whether more positive or negative outcomes arise."

Algorithms manipulate people and outcomes, and even 'read our minds'

Respondents registered fears about the ease with which powerful interests can manipulate people and outcomes through the design of networked intelligence and tools.

Michael Kleeman, senior fellow at the University of California-San Diego, observed, "In the hands of those who would use these tools to control, the results can be painful and harmful."

Peter Levine, professor and associate dean for research at Tisch College of Civic Life, Tufts University, noted, "What concerns me is the ability of governments and big companies to aggregate information and gain insight into individuals that they can use to influence those individuals in ways that are too subtle to be noticed or countered. The threat is to liberty."

Freelance journalist Mary K. Pratt commented, "Algorithms have the capability to shape individuals' decisions without them even knowing it, giving those who control the algorithms (in how they're built and deployed) an unfair position of power. So while this technology can help in so many areas, it does take away individual decision-making without many even realizing it."

Martin Shelton, Knight-Mozilla OpenNews Fellow at The Coral Project + New York Times, wrote, "People's values inform the design of algorithms – what data they will use, and how they will use data. Far too often, we see that algorithms reproduce designers' biases by reducing complex, creative decisions to simple decisions based on heuristics. Those heuristics do not necessarily favor the person who interacts with them. These decisions typically lead software creators to optimize not for qualitative experiences but instead for click-through rates, page views, time spent on page, or revenue. These design decisions mean that algorithms use (sometimes quite misplaced) heuristics to decide which news articles we might be interested in; people we should connect with; products we should buy."

Chris Showell, an independent health informatics researcher based in Australia, said, "The organisation developing the algorithm has significant capacity to influence or moderate the behaviour of those who rely on the algorithm's output. Two current examples: manipulation of the prices displayed in online marketplaces, and use of 'secret' algorithms in evaluating social welfare recipients. There will be many others in years to come. It will be challenging for even well-educated users to understand how an algorithm might assess them, or manipulate their behaviour. Disadvantaged and poorly educated users are likely to be left completely unprotected."

Writer James Hinton commented, "The fact that the internet can, through algorithms, be used to almost read our minds means those who have access to the algorithms and their databases have a vast opportunity to manipulate large population groups. The much-talked-about 'experiment' conducted by Facebook to determine if it could manipulate people emotionally through deliberate tampering with news feeds is but one example of both the power, and the lack of ethics, that can be displayed."

An anonymous president of a consulting firm said, "LinkedIn tries to manipulate me to benefit from my contacts' contacts and much more. If everyone is intentionally using or manipulating each other, is it acceptable? We need to see more-honest, trust-building innovations and fewer snarky corporate manipulative design tricks. Someone told me that someday only rich people will not have smartphones, suggesting that buying back the time in our day will soon become the key to quality lifestyles in our age of information overload. At what cost, and with what 'best practices' for the use of our recovered time per day? The overall question is whether good or bad behaviors will predominate globally." This consultant suggested: "Once people understand which algorithms manipulate them to build corporate revenues without benefiting users, they will be looking for more-honest algorithm systems that share the benefits as fairly as possible. When everyone globally is online, another 4 billion young and poor learners will be coming online. A system could go viral to win trillions in annual revenues based on micropayments due to sheer volume. Example: The Facebook denumerator app removes the manipulative aspects of Facebook, allowing users to return to more typically social behavior."

Several respondents expressed concerns about a particular industry – insurers. An anonymous respondent commented, "The increasing migration of health data into the realm of 'big data' has potential for the nightmare scenario of Gattaca writ real."

Masha Falkov, artist and glassblower, said, "It is important to moderate algorithms with human judgment and compassion. Already we see every day how insurance companies attempt to wrest themselves out of paying for someone's medical procedure. The entire health care system in the U.S. is a madhouse presently moderated by individuals who secretly choose to rebel against its tyranny. Doctors who fight for their patients to get the medicine they need, operators within insurance companies who decide to not deny the patient the service, at the risk of their own job. Our artificial intelligence is only as good as we can design it. If the systems we are using presently do not evolve with our needs, algorithms will be useless at best, harmful at worst."

Systems architect John Sniadowski noted, "Predictive modeling will make life more convenient, but conversely it will narrow choices and confine individuals into classes of people from which there is no escape. Predictive modeling is unstoppable because international business already sees massive financial advantages by using such techniques. An example of this is insurance, where risk is now being eliminated in search of profits instead of the original concept of insurance being shared risk. People are now becoming uninsurable either because of their geographic location or social position. Premiums are weighted against individuals based on decisions over which the individual has no control, and therefore they cannot improve their situation."

Ryan Sweeney, director of analytics at Ignite Social Media, commented, "Every human is different, so an algorithm surrounding health care could tailor a patient's treatment plan. It could also have the potential to serve the interests of the insurance company over the patient."

All of this will lead to a flawed yet inescapable logic-driven society

Some who assessed the impacts of algorithms in the next decade expressed the opinion that they are unreliable, "oversold" and "cold," saying they "give a false impression" of efficacy and are "not easily subject to critique." An anonymous respondent said, "It's not that algorithms are the problem; it's that we think that with sufficient data we will have wisdom. We will become reliant upon 'algorithms' and data and this will lead to problematic expectations. Then that's when things will go awry."

Jason Hong, an associate professor at Carnegie Mellon University, said, "People will forget that models are only an approximation of reality. The old adage of garbage in, garbage out still applies, but the sheer quantity of data and the speed of computers might give the false impression of correctness. As a trivial example, there are stories of people following GPS too closely and ending up driving into a river."

An anonymous computer science PhD noted, "Algorithms typically lack sufficient empirical foundations, but are given higher trust by users. They are oversold and deployed in roles beyond their capacity."

Bob Frankston, internet pioneer and software innovator, said, "The negatives of algorithms will outweigh the positives. There continues to be magical thinking assuming that if humans don't intervene the 'right thing' will happen. Sort of the modern gold bugs that assume using gold as currency prevents humans from intervening. Algorithms are the new gold, and it's hard to explain why the average 'good' is at odds with the individual 'good.'"

An anonymous respondent observed, "Algorithms are opaque and not easily subject to critique. People too easily believe that they are scientific. Health care – there is not a single study that shows clinical improvement from the use of the electronic health record, and instead of saving costs, it has increased them. Resources going there are resources not going into patient care. Consumer choice – we only see what we are allowed to see in whatever markets we've been segmented into. As that segmentation increases, our choices decrease. Corporate consolidation also decreases choices. Likewise news, opportunities, access. Big data can be helpful – like tracking epidemics – but it can also be devastating because there is a huge gap between individuals and the statistical person. We should not be constructing social policy just on the basis of the statistical average but, instead, with a view of the whole population. So I am inclined to believe that big data may get us to Jupiter and may help us cope with climate change, but it will not increase justice, fairness, morality and so on."

B. Remy Cross, an assistant professor of sociology at Webster University in Missouri, said, "Algorithms in particular are prone to a sort of techno-fetishism where they are seen as perfectly unbiased and supremely logical, when they are often nothing of the sort."

An anonymous technology developer commented, "Algorithms will overestimate the certainty with which people hold convictions. Most people are pretty wishy-washy but algorithms try to define you by estimating feelings/beliefs. If I 'kind of like' something I am liable to be grouped with fervent lovers of that thing."

Some said the aura of definitive digital logic is already difficult to overcome. An anonymous software security consultant bemoaned the lack of quick and fair appeals processes for automated decisions. "It's already nearly impossible to correct an incorrect credit report, despite the existence of clear laws requiring support for doing so. It seems unlikely that similar problems will be easy to correct in the future unless significant regulation is added around such systems. I am hopeful the benefits will be significant, but I expect the downsides to be far more obvious and easy to spot than the upsides."

Some respondents said human managers will ignore people's needs or leave them unattended more as machine intelligence takes over more tasks. An anonymous participant commented, "The use of algorithms will create a distance between those who make corporate decisions and the actual decision that gets made. This will result in the plausible deniability that a manager did not actively control the outcome of the algorithm, and as a result, (s)he is not responsible for the outcome when it affects either the public or the employees."

Another anonymous participant wrote, "The downsides to these are any situations that do not fit a standard set of criteria or involve judgment calls – large systems do not handle exceptional situations well and tend to be fairly inflexible and complicated to navigate. I see a great deal of trouble in terms of connections between service providers and the public they serve because of a lack of empathy and basic interaction. It's hard to plan for people's experiences when the lived experience of the people one plans for is alien to one's own experiential paradigm."

An anonymous computer scientist wrote, "The tech industry is attuned to computer logic, not feelings or ethical outcomes. The industrial 'productivity' paradigm is running out of utility, and we need a new one that is centered on more human concerns."

James McCarthy, a manager, commented, "Sometimes stuff just happens that can't be accounted for by even a sufficiently complex rules set, and I worry that increasing our dependency on algorithmic decision-making will also create an increasingly reductive view of society and human behavior."

An anonymous respondent said, "An algorithm is only as good as the filter it is put through, and the interpretation put upon it. Too often we take algorithms as the basis of fact, or the same as a statistic, which they are not. They are ways of collecting information into subjects. An over-reliance on this and the misinterpretation of what they are created for shall lead to trouble within the next decade."

Some fear a loss of complex decision-making capabilities and local intelligence

Since the early days of widespread adoption of the internet, some have expressed concerns that the fast-evolving dependence upon intelligence augmentation via algorithms will make humans less capable of thinking for themselves. Some respondents in this canvassing noted this as likely to have a negative impact on the capabilities of the individual.

Amali De Silva-Mitchell, a futurist and consultant, wrote, "Predictive modeling will limit individual self-expression, hence innovation and development. It will cultivate a spoon-fed population, with those in the elite being the innovators. There will be a loss in the complex decision-making skills of the masses. Kings and serfs will be made, the opportunity for diversification lost, and then perhaps even global innovative solutions lost. The costs of these systems will be too great to overturn if built at a base level. The current trend toward the uniform will be the undoing rather than the building of platforms that can communicate with everything, so that innovation is left as key and people can get the best opportunities. Algorithms are not the issue; the issue is a standard algorithm."

An anonymous respondent said, "Automated decision-making will reduce the perceived need for critical thinking and problem solving. I worry that this will increase trust in authority and make decisions of all kinds more opaque."

Dave McAllister, director at Philosophy Talk, said, "We will find ourselves automatically grouped into classes (a caste system) by algorithms. While this may make us more effective in finding the information we need while drowning in a world of big data, it will also limit the scope of synthesis and serendipitous discovery."

Giacomo Mazzone wrote, "Unfortunately most algorithms that will be produced in the next 10 years will be from global companies looking for immediate profits. This will kill local intelligence, local skills, minority languages, local entrepreneurship because most of the available resources will be drained out by the global competitors. The day that a 'minister for algorithms toward a better living' is created is likely to be too late, unless new forms of social shared economy emerge, working on 'algorithms for happiness.' But this is likely to take longer than 10 years."

Jesse Drew, a digital media professor at the University of California-Davis, replied, "Certainly algorithms can make life more efficient, but the disadvantage is the weakening of human thought patterns that rely upon serendipity and creativity."

Ben Railton, a professor of English and American studies at Fitchburg State University in Massachusetts, wrote, "Algorithms are one of the least attractive parts of both our digital culture and 21st-century capitalism. They do not allow for individual identity and perspective. They instead rely on the kinds of categorizations and stereotypings we desperately need to move beyond."

Miles Fidelman, systems architect, policy analyst and president at the Center for Civic Networking, wrote, "By and large, tools will disproportionally benefit those who have commercial reasons to develop them – as they will have the motivation and resources to develop and deploy tools faster."

One respondent warned that looming motivations to apply algorithms more vigorously will limit freedom of expression.

Joe McNamee, executive director at European Digital Rights, commented, "The Cambridge/Stanford studies on Facebook likes, the Facebook mood experiment, Facebook's election turnout experiment and the analysis of Google's ability to influence elections have added to the demands for online companies to become more involved in policing online speech. All raise existential questions for democracy, free speech and, ultimately, society's ability to evolve. The range of 'useful' benefits is broad and interesting but cannot outweigh this potential cost."

Suggested solutions include embedding respect for the individual

Algorithms require and create data. Much of the internet economy has been built by groups offering "free" use of online tools and access to knowledge while minimizing or masking the fact that people are actually paying with their attention and/or allegiance – as well as complete algorithmic access to all of their private information plus ever-more-invasive insights into their hopes, fears and other emotions. Some say it has already gotten to the point at which the data collectors behind the algorithms are likely to know more about you than you do yourself.

Rob Smith, software developer and privacy activist, observed, "The major downside is that in order for such algorithms to function, they will need to know a great deal about everyone's personal lives. In an ecosystem of competing services, this will require sharing lots of information with that marketplace, which could be extremely dangerous. I'm confident that we can in time develop ways to mitigate some of the risk, but it would also require a collective acceptance that some of our data is up for grabs if we want to take advantage of the best services. That brings me to perhaps the biggest downside. It may be that, in time, people are – in practical terms – unable to opt out of such marketplaces. They might have to pay a premium to contract services the old-fashioned way. In summary, such approaches have a potential to improve matters, at least for relatively rich members of society and possibly for the disadvantaged. But the price is high and there's a danger that we sleepwalk into things without realising what it has cost us."

David Adams, vice president of product at a new startup, said, "Overreach in intellectual property in general will be a big problem in our future."

Some respondents suggested that assuring individuals the rights to and control over their identity is crucial.

Garth Graham, board member at Telecommunities Canada, wrote, "The future positives will only outweigh the negatives if the simulation of myself, the anticipation of my behaviours, that the algorithms make possible is owned by me, regardless of who created it."

Paul Dourish, chancellor's professor of informatics at the University of California-Irvine, said, "More needs to be done to give people insight into and control over algorithmic processing, which includes having algorithms that work on individuals' behalf rather than on behalf of corporations."

Marshall Kirkpatrick, co-founder of Little Bird, previously with ReadWriteWeb and TechCrunch, said, "Most commercial entities will choose to implement algorithms that serve them even at the expense of their constituents. But some will prioritize users, and those will be very big. Meeting a fraction of the opportunities that arise will require a tremendous expansion of imagination."

Susan Price, digital architect and strategist at Continuum Analytics, commented, "The transparent provenance of data and transparent availability of both algorithms and analysis will be crucial to creating the trust and dialog needed to keep these systems fair and relatively free of bias. This necessary transparency is in conflict with the goals of corporations developing unique value in intellectual property and marketing. The biggest challenge is getting humans in alignment on what we collectively hope our data will show in the future: establishing goals that reflect a fair, productive society, and then systems that measure and support those goals."

One respondent suggested algorithm-writing teams include humanist thinkers. Dana Klisanin, founder and CEO of Evolutionary Guidance Media R&D, wrote, "If we want to weigh the overall impact of the use of algorithms on individuals and society toward 'positive outweighs negative,' the major corporations will need to hold themselves accountable through increasing their corporate social responsibility. Rather than revenue being the only return, they need to hire philosophers, ethicists and psychologists to help them create algorithms that provide returns that benefit individuals, society and the planet. Most individuals have never taken a course in 'Race, Class and Gender,' and do not recognize discrimination even when it is rampant and visible. The hidden nature of algorithms means that it will take individuals and society that much longer to demand transparency. Or, to say it another way: We don't know what we don't know."

An anonymous respondent who works for the U.S. government cited the difficulties in serving both societal good and the rights of the individual, writing, "There is a tension between the wishes of individuals and the functions of society. Fairness for individuals comes at the expense of some individual choices. It is hard to know how algorithms will end up on the spectrum between favoring individuals over a functioning society because the trend for algorithms is toward artificial intelligence. AI will likely not work the same way that human intelligence does."

As code takes over complex systems, humans are left out of the loop

As intelligent systems and knowledge networks become more complex, and as artificial intelligence and quantum computing evolve over the next decade, experts expect that humans will be left further and further "out of the loop" as machine intelligence takes over more aspects of code creation and maintenance.
The vast majority of comments in this vein came from expert respondents who remained anonymous. A sampling of these statements:

An executive director for an open source software organization commented, "Most people will simply lose agency as they don't understand how choices are being made for them."

One respondent said, "Everything will be 'custom'-tailored based on the groupthink of the algorithms; the destruction of free thought and critical thinking will ensure the best generation is totally subordinate to the ruling class."

Another respondent wrote, "Current systems are designed to emphasize the collection, concentration and use of data and algorithms by relatively few large institutions that are not accountable to anyone, and/or if they are theoretically accountable are so hard to hold accountable that they are practically unaccountable to anyone. This concentration of data and knowledge creates a new form of surveillance and oppression (writ large). It is antithetical to and undermines the entire underlying fabric of the erstwhile social form enshrined in the U.S. Constitution and our current political-economic-legal system. Just because people don't see it happening doesn't mean that it's not, or that it's not undermining our social structures. It is. It will only get worse because there's no 'crisis' to respond to, and hence, not only no motivation to change, but every reason to keep it going, especially by the powerful interests involved. We are heading for a nightmare."

A scientific editor observed, "The system will win; people will lose. Call it 'The Selfish Algorithm'; algorithms will naturally find and exploit our built-in behavioral compulsions for their own purposes. We're not even consumers anymore. As if that wasn't already degrading enough, it's commonplace to observe that these days people are the product. The increasing use of 'algorithms' will only, very rapidly, accelerate that trend. Web 1.0 was actually pretty exciting. Web 2.0 provides more convenience for citizens who need to get a ride home, but at the same time (and it's naive to think this is a coincidence) it's also a monetized, corporatized, disempowering, cannibalizing harbinger of the End Times. (I exaggerate for effect. But not by much.)"

A senior IT analyst said, "Most people use and will in the future use the algorithms as a facility, not understanding their internals. We are in danger of losing our understanding and then losing the capability to do without. Then anyone in that situation will let the robots decide."

What happens when algorithms write algorithms? "Algorithms in the past have been created by a programmer," explained one anonymous respondent. "In the future they will likely be evolved by intelligent/learning machines. We may not even understand where they came from. This could be positive or negative depending on the application. If machines/programs have autonomy, this will be more negative than positive. Humans will lose their agency in the world."

And then there is the possibility of an AI takeover.

Seti Gershberg, executive producer and creative director at Arizona Studios, wrote, "At first, the shift will be a net benefit. But as AI begins to pass the Turing test and potentially become sentient and likely super-intelligent, leading to an intelligence explosion as described by Vernor Vinge, it is impossible to say what they will or will not do. If we can develop a symbiotic relationship with AI or merge with them to produce a new man-machine species, it would be likely humans would survive such an event. However, if we do not create a reason for AI to need humans then they would either ignore us or eliminate us or use us for a purpose we cannot imagine. Recently, the CEO of Microsoft put forth a list of 10 rules for AI and humans to follow with regard to their programming and behavior as a method to develop a positive outcome for both man and machines in the future. However, if humans themselves cannot follow the rules set forth for good behavior and a positive society (i.e., the Ten Commandments, not in a religious sense, but one of common sense), I would ask the question, why would or should AI follow rules humans impose on them?"

Theme 4: Biases exist in algorithmically organized systems

There are two strands of thinking that tie together here. One is that the algorithm creators (code writers), even if they strive for inclusiveness, objectivity and neutrality, build into their creations their own perspectives and values. The other is that the datasets to which algorithms are applied have their own limits and deficiencies. Even datasets with millions or billions of pieces of information do not capture the fullness of people's lives and the diversity of their experiences. Moreover, the datasets themselves are imperfect because they do not contain inputs from everyone, or a representative sample of everyone. This section covers the respondent answers on both those fronts.

Algorithms reflect the biases of programmers and datasets

Bias and poorly developed datasets have been widely recognized as a serious problem that technologists say they are working to address; however, many respondents see this as a problem that will not be remedied anytime soon.

Randy Bush, Internet Hall of Fame member and research fellow at Internet Initiative Japan, wrote, "Algorithmic methods have a very long way to go before they can deal with the needs of individuals. So we will all be mistreated as more homogenous than we are."

Eugene H. Spafford, a professor at Purdue University, said, "Algorithmic decisions can embody bias and lack of adjustment. The result could be the institutionalization of biased and damaging decisions with the excuse of, 'The computer made the decision, so we have to accept it.' If algorithms embody good choices and are based on carefully vetted data, the results could be beneficial. To do that requires time and expense. Will the public/customers demand that?"

Irina Shklovski, an associate professor at the IT University of Copenhagen, said, "Discrimination in algorithms comes from implicit biases and unreflective values embedded in implementations of algorithms for data processing and decision-making. There are many possibilities to data-driven task and information-retrieval support, but the expectation that somehow automatic processing will necessarily be more 'fair' makes the assumption that implicit biases and values are not part of system design (and these always are). Thus the question is how much agency will humans retain in the systems that will come to define them through data and how this agency can be actionably implemented to support human rights and values."

An anonymous freelance consultant observed, "Built-in biases (largely in favour of those born to privilege, such as Western Caucasian males and, to a lesser extent, young south-Asian and east-Asian men) will have profound, largely unintended negative consequences to the detriment of everybody else: women, especially single parents, people of colour (any shade of brown or black), the 'olds' over 50, immigrants, Muslims, non-English speakers, etc. This will not end well for most of the people on the planet."

Marc Brenman, managing partner at IDARE, wrote, "The algorithms will reflect the biased thinking of people. Garbage in, garbage out. Many dimensions of life will be affected, but few will be helped. Oversight will be very difficult or impossible."

Jenny Korn, a race and media scholar at the University of Illinois at Chicago, noted, "The discussion of algorithms should be tied to the programmers programming those algorithms. Algorithms reflect human creations of normative values around race, gender and other areas related to social justice. For example, searching for images of 'professor' will produce pictures of white males (including in cartoon format), but to find representations of women or people of color, the search algorithm requires the user to include 'woman professor' or 'Latina professor,' which reinforces the belief that a 'real' professor is white and male. Problematic! So, we should discuss the (lack of) critical race and feminist training of the people behind the algorithm, not just the people using the algorithm."

Adrian Hope-Bailie, standards officer at Ripple, noted, "One of the greatest challenges of the next era will be balancing protection of intellectual property in algorithms with protecting the subjects of those algorithms from unfair discrimination and social engineering."

David Wuertele, a software engineer for a major company innovating autonomous vehicles, noted, "I am optimistic that the services engineers build are capable of being free of discrimination, and engineers will try to achieve that ideal. I expect that we will have some spectacular failures as algorithms get blamed for this or that social tragedy, but I believe that we will have an easier time fixing those services than we will have fixing society."

Eric Marshall, a systems architect, said, "Algorithms are tweaked or replaced over time. Similar to open source software, the good will outweigh the bad, if the right framework is found."

Kevin Novak, CEO of 2040 Digital, commented, "Algorithms can lead to filtered results that demonstrate biased or limited information to users. This bias and limitation can lead to opinions or understanding that does not reflect the true nature of a topic, issue or event. Users should have the option to select algorithmic results or natural results."

Author Paul Lehto observed, "Unless the algorithms are essentially open source and as such can be modified by user feedback in some fair fashion, the power that likely algorithm-producers (corporations and governments) have to make choices favorable to themselves, whether in internet terms of service or adhesion contracts or political biases, will inject both conscious and unconscious bias into algorithms."

An anonymous respondent commented, "If you start at a place of inequality and you use algorithms to decide what is a likely outcome for a person/system, you inevitably reinforce inequalities. For example, if you were really willing to use the data that exist right now, we would tell African-American men from certain metro areas that they should not even consider going to college; it won't 'pay off' for them because of wage discrimination post-schooling. Is this an ethical position? No. But is it what a computer would determine to be the case based on existing data? Yes."
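The respondent's college-payoff example can be sketched in a few lines. All figures and group names below are invented for illustration; the point is that a formally "rational" expected-payoff rule faithfully reproduces whatever discrimination its input data contains.

```python
# Toy illustration: a payoff model fed biased historical wage data
# reproduces the bias. All numbers and names here are hypothetical.

# Hypothetical observed wage premium for a degree, by group. The gap
# reflects historical wage discrimination, not ability or effort.
observed_degree_premium = {"group_a": 22000, "group_b": 9000}

COST_OF_DEGREE = 15000  # hypothetical annualized cost of attending college

def recommend_college(group: str) -> bool:
    """Recommend college only if the *observed* premium beats the cost.

    The rule is internally consistent, yet because its inputs encode
    discrimination, it tells group_b not to attend -- entrenching the gap.
    """
    return observed_degree_premium[group] > COST_OF_DEGREE

print(recommend_college("group_a"))  # True
print(recommend_college("group_b"))  # False
```

The code is "correct" in a narrow sense; the harm lives entirely in the data it consumes, which is exactly the respondent's point.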

And another respondent, Jeff Kaluski, predicted that trying to eliminate all bias may cause new problems, commenting, "New algs will start by being great, then a problem will emerge. The creator will be sued in the U.S. The alg will be corrected. It won't be good enough for the marginalized group. Someone else will create a better alg that was 'written in part by marginalized group,' then we'll have a worse alg than the original+correction."

Lisa Heinz, a doctoral student at Ohio University, said, "Those of us who learn and work in human-computer areas of study will need to make sure our concerns about discrimination and the exclusionary nature of the filter-bubble are addressed in the oversight mechanisms of algorithm development. This means that all voices, genders and races need to be incorporated into the development of algorithms to prevent even unintentional bias. Algorithms designed and created only by young white men will always benefit young white men to the exclusion of all others."

Following are additional comments by anonymous respondents regarding bias:

– "The rise of unfounded faith in algorithmic neutrality coupled with the spread of big data and AI will enable programmer bias to spread and become harder to detect."
– "The positives of algorithmic analysis are largely about convenience for the comfortable; the negatives vastly outweigh them in significance."
– "Bias is inherent in algorithms. This will only function to make humans more mechanical, and those who can rig algorithms to increase inequality and unfairness will, of course, prevail."
– "Algorithms value efficiency over correctness or fairness, and over time their evolution will continue the same priorities that initially formulated them."
– "Algorithms can only reflect our society back to us, so in a feedback loop they will also reflect our prejudices and exacerbate inequality. It's very important that they not be used to determine things like job eligibility, credit reports, etc."
– "Algorithms purport to be fair, rational and unbiased but just enforce prejudices with no recourse."
– "Poor algorithms in justice systems actually preserve human bias instead of mitigating it. As long as algorithms are hidden from public view, they can pose a great danger."
– "How can we expect algorithms designed to maximize 'efficiency' (which is an inherently conservative activity) to also push underlying social reform?"
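The feedback-loop and justice-system worries voiced above can be made concrete with a toy simulation (the districts, counts and parameters are all hypothetical): an algorithm that directs attention wherever the most incidents were previously recorded generates its own confirming data, so a one-incident head start, not any difference in underlying behavior, decides everything thereafter.

```python
# Toy feedback loop in the spirit of the predictive-policing concern above.
# Patrols are sent where the most incidents were previously *recorded*, but
# incidents are only recorded where patrols go. All figures are invented.

def simulate(recorded_a: int, recorded_b: int, rounds: int = 10,
             found_per_round: int = 10) -> tuple:
    """Two districts with identical true incident rates (by assumption)."""
    for _ in range(rounds):
        if recorded_a >= recorded_b:       # algorithm targets district A...
            recorded_a += found_per_round  # ...so only A's incidents get logged
        else:                              # algorithm targets district B
            recorded_b += found_per_round
    return recorded_a, recorded_b

print(simulate(51, 50))  # (151, 50): the head start compounds every round
```

Swapping the initial counts flips the outcome entirely, which is the sense in which the algorithm "preserves human bias instead of mitigating it": it never collects the data that could contradict its starting record.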

Algorithms depend upon data that is often limited, deficient or incorrect

Some made the case that the datasets upon which algorithmic decisions are made may exclude some groups of people, eliminate consumer choice and fail to recognize exceptions. They may include limited, skewed or incorrect detail, and analysis based on them can cause harm. An anonymous respondent noted, "Until we begin to measure what we value rather than valuing what we measure, any insights we may gain from algorithms will be canceled out by false positives caused by faulty or incomplete data."

An anonymous senior program manager at Microsoft observed, "Because of inherent bias (due to a lack of diversity), many algorithms will not fully reflect the complexity of the problems they are trying to address, and solutions will tend to sometimes neglect important factors. Unfortunately, it will take time until biases (or simply short-sighted thinking) baked into these algorithms get detected. By then the American government will have banned innocent people from boarding planes, insurers will have raised premiums for the wrong people, and 'predictive crime prevention' will have gotten out of hand."

An anonymous respondent commented, "Automated systems can never address the complexity of human interaction with the same degree of precision as a person would." And artist Masha Falkov said, "Real life does not always mimic mathematics. Algorithms have a limited number of variables, and often life shows that it needs to factor in extra variables. There should always be feedback in order to develop better variables, and human interaction when someone falls through the cracks of the new normalcy as defined by the latest algorithm. A person may be otherwise a good person in society, but they may be judged for factors over which they do not have any control."

Randy Albelda, an economics professor at the University of Massachusetts-Boston, replied, "My research is on poor people. I've been doing this for a long time (close to 30 years). And no matter how much information, data and empirical evidence is presented about poor people, we still have horrible anti-poverty policies, remarkable misconceptions about poor people, and lots more poor people. Collecting and analyzing data does not 'set us free.' 'Facts' are convenient. Political economic forces shape the way we understand and use 'facts/data' as well as technology. If we severely underfund health care, or much of health care dollars get sucked up by insurance companies, algorithms will be used to allocate insufficient dollars to patients. It will not improve health care."

Will Kent, an e-resources specialist on the staff at Loyola University-Chicago, observed, "Any amount/type of discrimination could occur. It could be as innocent as a slip-up in the code or a mistranslation. It could be as nefarious as deliberate suppression, obfuscation or lie of omission."

An anonymous respondent said, "I don't think we understand intersectionality enough to engineer it in an algorithm. As someone who is LGBTQ, and a member of a small indigenous group who speaks a minority language, I have already encountered so many 'blind spots' online, but who do you tell? How do you approach the algorithm? How do you influence it without acquiescing?"

Hume Winzar, an associate professor of business at Macquarie University in Sydney, Australia, wrote, "Banks, governments, insurance companies, and other financial and service providers will use whatever tools they can to focus on who the risks are. It's all about money and power."

An anonymous lead field technician replied, "Predictive modeling is based on statistical analysis, which by its nature ignores edge cases. It will lead to less freedom of choice in products, more [subtle] coercive advertising and an inability for people to make human mistakes that don't haunt them for long periods or their whole lives."

M.E. Kabay, a professor of computer information systems at Norwich University in Vermont, said, "A dictatorship like that in Orwell's 1984 would love to have control over the algorithms selecting information for the public or for subsectors of the public. If information is power, then information control is supreme power. Warning bells should sound when individualized or group information bubbles generated by the selective algorithms diverge from some definition of reality. Supervisory algorithms should monitor assertions or information flows that deviate from observable reality and documentary evidence; the question remains, however, of whose reality will dominate."

An anonymous professor at the University of California-Berkeley observed, "Algorithms are, by definition, impersonal and based on gross data and generalized assumptions. The people writing algorithms, even those grounded in data, are a non-representative subset of the population. The result is that algorithms will be biased toward what their designers believe to be 'normal.' One simple example is the security questions now used by many online services. E.g., what is your favorite novel? Where did your spouse go to college? What was your first car? What is your favorite vacation spot? What is the name of the street you grew up on?"

An anonymous respondent commented, "There is a lot of potential for abuse here that we have already seen in examples such as sentencing for nonviolent offences. Less-well-off and minority offenders are more likely to serve sentences, or longer sentences, than others whose actions were the same. We also see that there is a lot of potential for malicious behaviour similar to the abuses corrected previously, when nasty neighbours would spread lies and get their victims reclassified for auto or other insurance rates."

Theme 5: Algorithmic categorizations deepen divides

Two lines of thinking about societal divisions were embodied in many respondents' answers. First, they predicted that an algorithm-assisted future will widen the gap between the digitally savvy, who are the most desired customers in the new information ecosystem, and those who are not nearly as connected or able to participate. The second observation about "divides" is that social and political divisions will be abetted by algorithms, as algorithm-driven insights encourage people to live in echo chambers of repeated and reinforced media and political content. As one respondent put it, "Algorithms risk entrenching people in their own patterns of thought and like-mindedness."

The disadvantaged are likely to be more so

Some respondents predicted that those individuals who are already being left out or disadvantaged by the digital age will fall even further behind as algorithms become more embedded in society. They noted that the capacity to participate in digital life is not universal because fast-evolving digital tools and connections are costly, complicated, difficult to maintain and sometimes have a steep learning curve. And they said algorithmic tools generate data-based profiles that categorize individuals in ways that are often to their disadvantage.

Pete Cranston of Euroforic Services wrote, "Smart(er) new apps and platforms will require people to learn how to understand the nature of the new experience, learn how it is guided by software, and learn to interact with the new environment. That has tended to be followed by a catch-up by people who learn then to game the system, as well as navigate it more speedily and reject experiences that don't meet expectations or needs. The major risk is that less-regular users, especially those who cluster on one or two sites or platforms, won't develop that navigational and selection facility and will be at a disadvantage."

Christopher Owens, a community college professor, said, "If the current economic order remains in place, then I do not see the growth of data-driven algorithms providing much benefit to anyone outside of the richest in society."

Tom Vest, a research scientist, commented, "Algorithms will most benefit the minority of individuals who are consistently 'preferred' by algorithms, plus those who are sufficiently technically savvy to understand and manipulate them (usually the same group)."

Some of these respondents argued that "upgrades" often do very little to make crucial and necessary improvements in the public's experiences. Many are incremental and mostly aimed at increasing revenue streams and keeping the public reputations of technology companies, and their shareholder value, high. An anonymous sociologist at the Social Media Research Foundation commented, "Algorithms make discrimination more efficient and sanitized. Positive impact will be increased profits for organizations able to avoid risk and costs. Negative impacts will be carried by all deemed by algorithms to be risky or less profitable."

Jerry Michalski, founder at REX, commented, "Algorithms are already reshaping (might we say warping?) relationships, citizenship, politics and more. Almost all the algorithms that affect our lives today are opaque, created by data scientists (or similar) behind multiple curtains of privacy and privilege. Worse, the mindset behind most of these algorithms is one of consumerism: How can we get people to want more, buy more, get more? The people designing the algorithms seldom have citizens' best interests at heart. And that can't end well. On the positive side, algorithms may help us improve our behavior on many fronts, offsetting our weaknesses and foibles or reminding us just in time of vital things to do. But on the whole, I'm pessimistic about algorithm culture."

Some of these experts said that, as smart, networked devices and big data combine to allow the creation of highly detailed data-based profiles of individuals that follow them everywhere and affect their transactions, people of lesser means and those with some socially questionable acts in their backgrounds will be left out, cheated or forced to come up with alternate methods by which to operate securely, safely and fairly in information networks.

Dave Howell, a senior program manager in telecommunications, said, "Algorithms will identify the humans using connected equipment. Identity will be confirmed through blockchain by comparison to trusted records of patterns, records kept by the likes of [Microsoft], Amazon, Google. But there are weaknesses to any system, and innovative people will work to game a system. Advertising companies will try to identify persons against their records, blockchains can be compromised (given a decade, someone will). Government moves too slowly. The Big Five (Microsoft, Google, Apple, Amazon, Facebook) will offer technology for trust and identity; few other companies will be big enough. Scariest to me is Alibaba or China's state-owned companies with power to essentially declare who is a legal person able to make purchases or enter contracts. Government does not pay well enough to persevere. I bet society will be stratified by which trust/identity provider one can afford/qualify to go with. The level of privacy and protection will vary. Lois McMaster [Bujold]'s fiction suddenly seems a little more chillingly realistic."

Nigel Cameron, president and CEO of the Center for Policy on Emerging Technologies, observed, "Positives: Enormous convenience/cost-savings/etc. Negatives: Radically de-humanizing potential, and who writes/judges the algos? In a consensus society all would be well. But we have radically divergent sets of values, political and other, and algos are always rooted in the value systems of their creators. So the scenario is one of a vast opening of opportunity, economic and otherwise, under the control of either the likes of Zuckerberg or the grey-haired movers of global capital or ..."

Freelancer Julie Gomoll wrote, "The overall effect will be positive for some individuals. It will be negative for the poor and the uneducated. As a result, the digital divide and wealth disparity will grow. It will be a net negative for society."

Polina Kolozaridi, a researcher at the Higher School of Economics, Moscow, wrote, "The Digital Gap will extend, as people who are good in automating their labour will be able to have more benefits."

An anonymous associate professor observed, "Whether algorithms positively or negatively impact people's lives probably depends on the educational background and technological literacy of the users. I suspect that winners will win big and losers will continue to lose. This is likely to occur through access to better, cheaper and more-efficient services for those who understand how to use information, and those who don't understand it will fall prey to scams, technological rabbit holes and technological exclusion."

An anonymous respondent wrote, "Algorithms are not neutral, and often privilege some people at the expense of those with certain marginalized identities. As data mining and algorithmic living become more pervasive, I expect these inequalities will continue."

Another respondent wrote, "The benefits will accrue disproportionately to the parts of society already doing well, the upper middle class and above. Lower down the socioeconomic ladder, algorithmic policy may have the potential to improve some welfare at the expense of personal freedom: for example, via aggressive automated monitoring of food stamp assistance, or mandatory online training. People in these groups will also be most vulnerable to algorithmic biases, which will largely perpetuate the societal biases present in the training data. Since algorithms are increasingly opaque, it will be hard to provide oversight or prove discrimination."

Algorithms create filter bubbles and silos shaped by corporate data collectors; they limit people's exposure to a wider range of ideas and reliable information and eliminate serendipity

Code written to make individualized information delivery more accurate (and more monetizable for the creators of the code) also limits what people see, read and understand about the world. It can create "echo chambers" in which people see only what the algorithms determine they want to see. This can limit exposure to opposing views and random, useful information. Among the items mentioned as exemplars in these responses were the United Kingdom's contentious vote to exit the European Union (which came to be known as "Brexit") and the 2016 U.S. presidential election cycle. Some respondents also expressed concerns over the public's switch in news diet from the pre-internet 20th century's highly edited, in-depth and professionally reported content to the algorithm-driven viewing and sharing of often less-reliable content via social media outlets such as Facebook and Twitter.
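A minimal sketch of the echo-chamber mechanism described above, with invented category names: an engagement-maximizing feed that always serves the category a user has clicked most, under the simplifying assumption that every served item is clicked, locks onto a single category almost immediately and never varies again.

```python
from collections import Counter

# Toy "engagement-maximizing" feed. The categories, the user's initial
# clicks and the serve-implies-click assumption are all hypothetical.

def serve(click_history: list, rounds: int = 10) -> list:
    """Repeatedly serve the user's most-clicked category, and record
    each served item as a new click (the engagement assumption)."""
    history = list(click_history)
    for _ in range(rounds):
        top_category = Counter(history).most_common(1)[0][0]
        history.append(top_category)  # the served item gets clicked
    return history

# One extra click on 'local_news' is enough to crowd out everything else:
history = serve(["local_news", "opposing_view", "local_news"])
print(set(history[3:]))  # {'local_news'}: the feed collapses to one topic
```

Real recommenders are far more sophisticated, but the structural problem is the same one respondents describe: optimizing for what the record says a person "wants" is a self-reinforcing choice that prices out serendipity.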

Valerie Bock of VCB Consulting commented, "It has definitely come to pass that it is now more possible than ever before to curate one's information sources so that they include only those which strike one as pleasurable. That's a real danger, which we're seeing the impact of in this time of the Brexit and the 2016 U.S. election season. Our society is as polarized as it has ever been. We are going to need to be disciplined about not surrendering to what the robots think we would like to see. I worry that because it will become a hassle to see stuff we don't 'like,' gradually fewer and fewer people will see that which challenges them."

M.E. Kabay, a professor of computer information systems at Norwich University, said, "We may be heading for lowest-common-denominator information flows. Another issue is the possibility of increasingly isolated information bubbles or echo chambers. If the algorithms directing news flow suppress contradictory information – information that challenges the assumptions and values of individuals – we may see increasing extremes of separation in worldviews among rapidly diverging subpopulations."

Vance S. Martin, an instructional designer at Parkland College, said, "Algorithms save me time when my phone gets a sense for what I will be typing and offers suggestions, or when Amazon or Netflix recommends something based on my history. However, they also close options for me when Google or Facebook determine that I read or watch a certain type of material and then offer me content exclusively from that point of view. This narrows my field of view, my exposure to other points of view. Using history to predict the future can be useful, but overlooks past reasons, rationales and biases. For example, in the past, the U.S. based its immigration quotas on historical numbers of people who came in the past. So if in the early 1800s there was a large number of Scottish immigrants and few Italian immigrants, they would allow in more Scots, and fewer Italians. So a historical pattern leads to future exclusionary policies. So if an algorithm determines that I am male, white, middle-class and educated, I will get different results and opportunities than a female African-American, lower-class aspirant. So ease of life/time will be increased, but social inequalities will presumably become reified."

Jan Schaffer, executive director at J-Lab, predicted, "The public will increasingly be creeped out by the nonstop data mining."

An anonymous assistant professor at a state university said, "I worry that the use of algorithms, while not without its benefits, will do more harm than good by limiting information and opportunities. Algorithms and big data will improve health care decisions, for example, but they will really hurt us in other ways, such as their potential influence on our exposure to ideas, information, opinions and the like."

Steven Waldman, founder and CEO of LifePosts, said, "Algorithms, of course, are not values-neutral. If Twitter thrives on retweets, that seems neutral but it actually means that ideas that provoke are more likely to succeed; if Facebook prunes your news feed to show you things you like, that means you'll be less exposed to challenging opinions or boring content, etc. As they are businesses, most large internet platforms will have to emphasize content that prompts the strongest reaction, whether it's true or not, healthy or not."

The many acts of exclusion committed in leveraging algorithms were a primary concern expressed by Frank Elavsky, a data and policy analyst at Acumen LLC. Outlining what he perceives to be the potential impacts of algorithmic advances over the next decade, he listed a series of points, writing, "Negative changes? Identity security. Privacy." He included the following on the list of concerning trends he ties to the "Algorithm Age":

• "Identity formation – people will become more and more shaped by consumption and desire.
• Racial exclusion in consumer targeting.
• Gendered exclusion in consumer targeting.
• Class exclusion in consumer targeting – see Google's campaign to educate many in Kansas on the need for a fiber-optic infrastructure.
• Nationalistic exclusion in consumer targeting.
• Monopoly of choice – large companies control the algorithms or results that people see.
• Monopoly of reliable news – already a problem on the internet, but consumer bias will only get worse as algorithms are created to confirm your patterns of interest."

An anonymous social scientist spoke up for serendipity. "We are mostly unaware of our own internal algorithms, which, well, sort of define us but may also limit our tastes, curiosity and perspectives," he said. "I'm not sure I'm eager to see powerful algorithms replace the joy of happenstance. What greater joy is there than to walk the stacks in a graduate library looking for that one book I have to read, but finding one I'd rather? I'm a better person to struggle at getting 5 out of 10 New Yorker cartoons than to have an algorithm deliver 10 they'd know I get. I'm comfortable with my own imperfection; that's part of my humanness. Efficiency and the pleasantness and serotonin that come from prescriptive order are highly overrated. Keeping some chaos in our lives is important."

An anonymous political science professor took the idea further, positing that a lack of serendipity can kill innovative thinking. He wrote, "The first issue is that randomness in a person's life is often wonderfully productive, and the whole purpose of algorithms seems to be to squash those opportunities in exchange for entirely different values (such as security and efficiency). A second, related question is whether algorithms kill experimentation (purposely or not); I don't see how they couldn't, by definition."

Several participants in this canvassing expressed concerns over the change in the public's information diets, the "atomization of media," an overemphasis of extreme, ugly, weird news, and the favoring of "truthiness" over more-factual material that may be vital to understanding how to be a responsible citizen of the world.

Respondent Noah Grand commented, "Algorithms help create the echo chamber. It doesn't matter if the algorithm recognizes certain content or not. In politics and news media it is extremely difficult to have facts that everyone agrees on. Audiences may not want facts at all. To borrow from Stephen Colbert, audiences may prefer 'truthiness' to 'truth.' Algorithms that recognize 'engagement' – likes, comments, retweets, etc. – appear to reward truthiness instead of truth."

An anonymous respondent said, "I am troubled by the way algorithms contribute to the atomization of media through Facebook and the like. We are quite literally losing the discursive framework we need to communicate with people who disagree with us."

An anonymous technician said online discourse choreographed by algorithms creates a sophomoric atmosphere. "Algorithms are just electronic prejudices, just as the big grown-up world is just high school writ large," he wrote. "We'll get the same general sense of everything being kind of okay, kind of sucking, and the same daily outrage story, and the same stupid commentary, except algorithms will be the responsible parties, and not just some random schmuck, and artificial intelligences composed of stacks of algorithms will be writing the stories and being outraged."

Robert Boatright, professor of political science at Clark University, said algorithms remove necessary cognitive challenges, writing, "The main problem is that we don't encounter information that conflicts with our prior beliefs or habits, and we're rarely prompted to confront radically new information or content – whether in news, music, purchasing or any of the other sorts of opportunities that we are provided."

An anonymous IT analyst noted, "Facebook, for example, only shows topics you've previously shown interest in on their platform to show you more of the same. You're far less likely to expand your worldview if you're only seeing the same narrow-minded stuff every day. It's a vast topic to delve into when you consider the circumstances a child is born into and how it will affect individuals' education."

Respondents also noted that the savviest tech strategists are able to take advantage of algorithms' features, foibles and flaws to "game the system" and "get the highest profit out of most people."

Theme 6: Unemployment numbers will rise

In the mid-1960s an ad hoc committee of 35 scientists and social activists including Linus Pauling and several other Nobel Prize winners sent U.S. President Lyndon B. Johnson a memorandum warning that in the future a "cybernation revolution" would create a "separate nation of the poor, the unskilled, the jobless." Concern over technological unemployment is nothing new, but it is seen as a much more imminent threat by many experts today. McKinsey, a global consulting company, estimates that "as many as 45 percent of the activities that individuals are paid to perform can be automated by adapting currently demonstrated technologies … these activities represent about $2 trillion in annual wages." The emergence of autonomous vehicles and industrial systems is expected to eliminate many jobs, but the number of white-collar jobs is also expected to decline.

One participant in this canvassing went into detail about ways in which small human teams assisted by algorithms will be able to accomplish much more than large human teams do today, creating efficiencies and eliminating jobs in the process.

Stephen Schultz, an author and editor, wrote, "Algorithms are to the 'white-collar' labor force what automation is to the 'blue-collar' labor force. Lawyers are especially vulnerable, even more so if those with competency in computer programming start acquiring law degrees and passing legislation and rewriting the syntax of current legal code to be more easily parsed by AI. Another profession that might benefit from algorithmic processing of data is nursing. In the United States, floor nursing is one of the most stressful jobs right now, in part because floor [registered nurses] are being given higher patient loads (up to six) and at the same time being required to enter all assessment data into the EMR (electronic medical record), and then creating/revising care plans based on that data, all of which subsequently leave little time for face-to-face patient care. The nursing process consists of five stages: assessment, diagnosis, planning, implementation and evaluation. Algorithmic information processing would be most helpful in the diagnosis and evaluation stages – with self-reporting monitoring devices directly feeding into the EMR."

Smarter, more-efficient algorithms will displace many human work activities

A number of respondents focused on the loss of jobs as the primary challenge of the Algorithm Age. They said the spread of artificial intelligence will create significant unemployment, with major social and economic implications. One respondent said that because they are "smarter, more efficient and productive and cost less – algorithms are deadly." One predicted "potential 100% human unemployment" and another imagined "in some places, a revolution."

Don Philip, a retired PhD lecturer, commented, "If this is improperly managed we will have a massively unemployed underclass and huge social unrest."

Peter Brantley, director of online strategy at the University of California-Davis, criticized American capitalism and predicted "significant unrest and upheaval," commenting, "The trend toward data-backed predictive analytics and decision-making is inevitable. While hypothetically these could positively impact social conditions, opening up new forms of employment and enhanced access and delivery of services, in practice the negative impacts of dissolution of current employment will be an uncarried social burden. Much as the costs of 1960s-80s deindustrialization were externalized to the communities which firms vacated, with no accompanying subvention to support their greater needs, so will technological factors continue to tear at the fabric of our society without effective redress, creating significant unrest and upheaval. Technological innovation is not a challenge well accommodated by the current …"

Seti Gershberg, executive producer and creative director at Arizona Studios, wrote, "AI and robots are likely to disrupt the workforce to a potential 100% human unemployment. They will be smarter, more efficient and productive and cost less, so it makes sense for corporations and business to move in this direction."

An anonymous respondent wrote, "The big issue in the use of these algorithms is what the function of a 'job' is. If it is to keep a person participating in society and earning a living, then algorithms are deadly; they will inevitably reduce the number of people necessary to do a job. If the purpose is to actually accomplish a task (and possibly free up a human to do more-human things), then algorithms will be a boon to that new world. I worry, though, that too many people are invested in the idea that even arbitrary work is important for showing 'value' to a society to let that happen."

Joe Mandese, editor-in-chief of MediaPost, wrote, "Algorithms will replace any manual-labor task that can be done better and more efficiently via an algorithm. In the short term, that means individuals whose work is associated with those tasks will either lose their jobs or will need to be retrained. In the long run, it could be a good thing for individuals by doing away with low-value repetitive tasks and motivating them to perform ones that create higher value."

Some seek a redefined global economic system to support humanity

While some predict humans might adjust well to a jobless future, others expect that – if steps aren't taken to adjust – an economic collapse could cause great societal stress and perhaps make the world a much more dangerous place.

Alan Cain commented, "So. No jobs, growing population, and less need for the average person to function autonomously. Which part of this is warm and fuzzy?"

An anonymous PhD candidate predicted, "Without changes in the economic situation, the massive boosts in productivity due to automation will increase the disparity between workers and owners of capital. The increase in automation/use of algorithms leads to fewer people being employed."

An anonymous director of research at a European futures studies organization commented, "We need to think about how to accommodate the displaced labour."

Mike Roberts, Internet Hall of Fame member and first president and CEO of ICANN, wrote, "The limits to human displacement by our own smart machines are not known or very predictable at this point. The broader question is how to redefine and reconstruct global economic systems to provide a decent quality of life for humanity."

Polina Kolozaridi, a researcher at the Higher School of Economics, Moscow, wrote, "It is a big political question, whether different institutions will be able to share their power, not knowing – obviously – how to control the algorithms. Plenty of routine work will be automated. That will lead to a decrease in people's income unless governments elaborate some way of dealing with it. This might be a reason for big social changes – not always a revolution, but – in some places – a revolution as well. Only regular critical discussion involving more people might give us an opportunity to use this power in a proper way (by proper I mean more equal and empowering)."

A universal basic income – a payment awarded to everyone in a community to cover general living expenses – is one potential solution that is often mentioned in discussions of a future with fewer jobs for humans.

Paul Davis, a director who participated in this canvassing, referred to this as a "Living Wage" in his response. He wrote, "The age of the algorithm presents the opportunity to automate bias, and render Labour surplus to requirements in the economic contract with Capital. Modern Western society is built on a societal model whereby Capital is exchanged for Labour to provide economic growth. If Labour is no longer part of that exchange, the ramifications will be immense. So whilst the benefits of algorithms and automation are widespread, it is the underlying social impact that needs to be considered. If Labour is replaced, in a post-growth model, perhaps a 'Living Wage' replaces the salary, although this would require Capital to change the social engagement contract."

Michael Dyer, a computer science professor at the University of California-Los Angeles who specializes in artificial intelligence, commented, "The next 10 years is transitional, but within the next 20 years AI software will have replaced workers' jobs at all levels of education. Hopefully, countries will have responded by implementing forms of minimal guaranteed living wages and free education past K-12; otherwise the brightest will use online resources to rapidly surpass average individuals and the wealthiest will use their economic power to gain more political advantages."

An anonymous respondent wrote, "The positives outweigh the negatives, but only if we restructure how society works. We need a societal change that accepts the dwindling availability of traditional work, or we'll have PhDs rioting because they can't afford to eat. Something like Basic Income will need to be implemented if increased automation is going to be a good for humanity."

Another anonymous respondent commented, "We will see less pollution, improved human health, less economic wastage, and fewer human jobs (which must be managed by increasing state-funded welfare)."

An anonymous professor observed, "Unless there is public support for education and continued training, as well as wage and public-service support, automation will expand the number of de-skilled and lower-paying positions paired by a set of highly skilled and highly compensated privileged groups. The benefits of increased productivity will need to be examined closely."

One respondent said time being freed up by AI doing most of the "work" for humans could be spent addressing oversight of algorithmic systems. Stewart Dickinson, digital sculpture pioneer, said, "Basic Income will reduce beholdenship to corporations and encourage participation in open-source development for social responsibility."

A theme describing a societal challenge

Theme 7: The need grows for algorithmic literacy, transparency, and oversight

The respondents to this canvassing offered a variety of ideas about how individuals and the broader culture might respond to the algorithm-ization of life. They noted that those who create and evolve algorithms are not held accountable to society and argued there should be some method by which they are. They also argued there is great need for education in algorithm literacy, and that those who design algorithms should be trained in ethics and required to design code that considers societal impacts as it creates efficiencies.

Glenn Ricart, Internet Hall of Fame member, technologist and founder and CTO of US Ignite, commented, "The danger is that algorithms appear as 'black boxes' whose authors have already decided upon the balance of positive and negative impacts – or perhaps have not even thought through all the possible negative impacts. This raises the issue of impact without intention. Am I responsible for all the impacts of the algorithm I invoke, or algorithms invoked in my behalf through my choice of services? How can we achieve algorithm transparency, at least at the level needed for responsible invocation? On the positive side, how can we help everyone better understand the algorithms they choose and use? How can we help people personalize the algorithms they choose and use?"

Scott McLeod, an associate professor of educational leadership at the University of Colorado, Denver, is hopeful that the public will gain more control. "While there are dangers in regard to who creates and controls the algorithms," he said, "eventually we will evolve mechanisms to give consumers greater control that should result in greater understanding and trust. Right now the technologies are far outpacing our individual and societal abilities to make sense of what's happening and corporate and government entities are taking advantage of these conceptual and control gaps. The pushback will be inevitable but necessary and will, in the long run, result in balances that are more beneficial for all of us."

It starts with algorithm literacy – this goes beyond basic digital literacy

Because algorithms are generally invisible – even often referred to as "black box" constructs, as they are not evident in user interfaces and their code is usually not made public – most people who use them daily are in the dark about how they work and why they can be a threat. Some respondents said the public should be better educated about them.

David Lankes, a professor and director at the University of South Carolina School of Library and Information Science, wrote, "There is simply no doubt that, on aggregate, automation and large-scale application of algorithms have had a net-positive effect. People can be more productive, know more about more topics than ever before, identify trends in massive piles of data and better understand the world around them. That said, unless there is an increased effort to make true information literacy a part of basic education, there will be a class of people who can use algorithms and a class used by algorithms."

An anonymous professor at MIT observed, "[The challenge presented by algorithms] is the greatest challenge of all. Greatest because tackling it demands not only technical sophistication but an understanding of and interest in societal impacts. The 'interest in' is key. Not only does the corporate world have to be interested in effects, but consumers have to be informed, educated and, indeed, activist in their orientation toward something subtle. This is what computer literacy is about in the 21st century."

Trevor Owens, senior program officer at the Institute of Museum and Library Services, agreed, writing, "Algorithms all have their own ideologies. As computational methods and data science become more and more a part of every aspect of our lives, it is essential that work begin to ensure there is a broader literacy about these techniques and that there is an expansive and deep engagement in the ethical issues surrounding them."

Daniel Menasce, a professor of computer science at George Mason University, wrote, "Algorithms have been around for a long time, even before computers were invented. They are just becoming more ubiquitous, which makes individuals and the society at large more aware of their existence in everyday life devices and applications. The big concern is the fact that the algorithms embedded in a multitude of devices and applications are opaque to individuals and society. Consider for example the self-driven cars being currently developed. They certainly have collision-avoidance and risk-mitigation algorithms. Suppose a pedestrian crosses in front of your vehicle. The embedded algorithm may decide to hit the pedestrian as opposed to ramming the vehicle against a tree because the first choice may cause less harm to the vehicle occupants. How does an individual decide if he or she is OK with the myriad decision rules embedded in algorithms that control your life and behavior without knowing what the algorithms will decide? This is a non-trivial problem because many current algorithms are based on machine learning techniques, and the rules they use are learned over time. Therefore, even if the source code of the embedded algorithms were made public, it is very unlikely that an individual would know the decisions that would be made at run time. In summary, algorithms in devices and applications have some obvious advantages but pose some serious risks that have to be mitigated."
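Menasce's point about learned rules can be illustrated with a deliberately tiny sketch. The function and data below are hypothetical, and the "learning" is just a midpoint rule, but they show how two deployments of identical, fully public source code can still make different decisions, because the decision boundary comes from training data rather than from anything visible in the code:

```python
# A minimal sketch of learned-rule opacity: the decision cutoff is not
# written anywhere in the source; it is derived from the training data.
# The midpoint "learning" rule and the sample data are invented for illustration.

def train_threshold(samples: list[tuple[float, bool]]) -> float:
    """Learn an accept/reject cutoff: the midpoint between the weakest
    accepted example and the strongest rejected example."""
    positives = [score for score, label in samples if label]
    negatives = [score for score, label in samples if not label]
    return (min(positives) + max(negatives)) / 2

# Two deployments run the very same code but were trained on different histories.
model_a = train_threshold([(0.9, True), (0.7, True), (0.5, False)])
model_b = train_threshold([(0.6, True), (0.4, True), (0.2, False)])

applicant_score = 0.5
print(applicant_score >= model_a)  # deployment A's decision
print(applicant_score >= model_b)  # deployment B's decision differs
```

Publishing `train_threshold` reveals neither cutoff; an auditor would also need the training data, which is why several respondents in this section pair calls for code transparency with calls for access to training sets.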

An anonymous policy adviser said, "There is a need for algorithmic literacy, and to critically assess outcomes from, e.g., machine learning, and not least how this relates to biases in the training data. Finding a framework to allow for transparency and assess outcomes will be crucial. Also a need to have a broad understanding of the algorithmic 'value chain,' and that data is the key driver and as valuable as the algorithm which it trains."

Alexander Halavais, director of the master's program in social technologies at Arizona State University, said teaching these complex concepts will require a "revolutionary" educational effort. "For society as a whole, algorithmic systems are likely to reinforce (and potentially calcify) existing structures of control," he explained. "While there will be certain sectors of society that will continue to be able to exploit the move toward algorithmic control, it is more likely that such algorithms will continue to inscribe the existing social structure on the future. What that means for American society is that the structures that make Horatio Alger's stories so unlikely will make them even less so. Those structures will be 'naturalized' as just part of the way in which things work. Avoiding that outcome requires a revolutionary sort of educational effort that is extraordinarily difficult to achieve in today's America; an education that doesn't just teach kids to 'code,' but to think critically about how social and technological structures shape social change and opportunity."

Justin Reich, executive director at the MIT Teaching Systems Lab, observed, "The advancing impact of algorithms in our society will require new forms and models of oversight. Some of these will need to involve expanded ethics training in computer science training programs to help new programmers better understand the consequences of their decisions in a diverse and pluralistic society. We also need new forms of code review and oversight that respect company trade secrets but don't allow corporations to invoke secrecy as a rationale for avoiding all forms of public oversight."

People call for accountability processes, oversight, and transparency

2016 was a banner year for algorithm accountability activists. Though they had been toiling largely in obscurity, their numbers have begun to grow. Meanwhile, public interest has increased as large investments in AI by every top global technology company, breakthroughs in the design and availability of autonomous vehicles, and the burgeoning of big data analytics have raised algorithm issues to a new prominence. Many respondents in this canvassing urged that new algorithm accountability, oversight and transparency initiatives be developed and deployed. After the period in which this question was open for comments by our expert group (July 1-Aug. 12, 2016), several important reports were released and the Partnership on AI, an industry-centered working group including Amazon, Facebook, Google, IBM and Microsoft, was announced (Apple joined the partnership in early 2017).

In this canvassing, Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information and professor of law at the University of Maryland, wrote: "Empiricists may be frustrated by the 'black box' nature of algorithmic decision-making; they can work with legal scholars and activists to open up certain aspects of it (via freedom of information and fair data practices). Journalists, too, have been teaming up with computer programmers and social scientists to expose new privacy-violating technologies of data collection, analysis and use – and to push regulators to crack down on the worst offenders. Researchers are going beyond the analysis of extant data and joining coalitions of watchdogs, archivists, open data activists and public interest attorneys to assure a more balanced set of 'raw materials' for analysis, synthesis and critique. Social scientists and others must commit to the vital, long-term project of assuring that algorithms are producing fair and relevant documentation; otherwise, states, banks, insurance companies and other big, powerful actors will make and own more and more inaccessible data about society and people. Algorithmic accountability is a big-tent project, requiring the skills of theorists and practitioners, lawyers, social scientists, journalists and others. It's an urgent, global cause with committed and mobilized experts looking for support."

Several participants in the canvassing said the law must catch up to reality.

Lee McKnight, an associate professor at Syracuse University's School of Information Studies, said, "Given the wide-ranging impact on all aspects of people's lives, eventually, software liability law will be recognized to be in need of reform, since right now, literally, coders can get away with murder. Inevitably, regulation of implementation and operation of complex policy models such as [the] Dodd-Frank Volcker Rule capital adequacy standards will themselves be algorithmically driven. Regulatory algorithms, code and standards will be – actually already are – being provided as a service. The Law of Unintended Consequences indicates that the increasing layers of societal and technical complexity encoded in algorithms ensure that unforeseen catastrophic events will occur – probably not the ones we were worrying about."

Mark Lemley, a professor of law at Stanford Law School, pointed out the urgent need to address new issues arising out of the abundance of previously unavailable data. He explained, "Algorithms will make life and markets more efficient and will lead to significant advances in health. But they will also erode a number of implicit safety nets that the lack of information has made possible. The government will need to step in, either to prevent some uses of information or to compensate for the discrimination that results."

Tse-Sung Wu, project portfolio manager at Genentech, used emerging concerns tied to autonomous vehicles as a compelling example of the need for legal reform. He wrote, "Perhaps the biggest peril is the dissolution of accountability unless we change our laws. Who will be held to account when these decisions are wrong? Right now, it's a person – the driver of a vehicle or, in the case of professional services, someone with professional education and/or certification (a doctor making a diagnosis and coming up with a treatment plan; a judge making a ruling; a manager deciding how to allocate resources, etc.). In each of these, there is a person who is the ultimate decision-maker and, at least at a moral level, the person who is accountable (whether they are held to account is a different question). Liability insurance exists in order to manage the risk of poor decision-making by these individuals. How will our legal system of torts deal with technologies that make decisions: Will the creator of the algorithm be the person of ultimate accountability of the tool? Its owner? Who else? The algorithm will be limited by the assumptions, world view/mental model and biases of its creator. Will it be easier to tease these out, will it be harder to hide biases? Perhaps, which would be a good thing. In the end, while technology steadily improves, once again, society will need to catch up. We live in a civilization of tools, but the one thing these tools don't yet do is make important decisions. The legal concepts around product liability closely define the accountabilities of failure or loss of our tools and consumable products. However, once tools enter the realm of decision-making, we will need to update our societal norms (and thus laws) accordingly. Until we come to a societal consensus, we may inhibit the deployment of these new technologies, and suffer from them inadvertently."

Patrick Tucker, author of The Naked Future, wrote, "We can create laws that protect people volunteering information, such as the Genetic Information Nondiscrimination Act, which ensures people aren't punished for data that they share that then makes its way into an algorithm. The current suite of encryption products available to consumers shows that we have the technical means to allow consumers to fully control their own data and share it according to their wants and needs, and the entire FBI vs. Apple debate shows that there is strong public interest and support in preserving the ability of individuals to create and share data in a way that they can control. The worst possible move we, as a society, can make right now is to demand that technological progress reverse itself. This is futile and shortsighted. A better solution is to familiarize ourselves with how these tools work, understand how they can be used legitimately in the service of public and consumer empowerment, better living, learning and loving, and also come to understand how these tools can be abused."

Many respondents agreed it is necessary to take immediate steps to protect the public's interests.

Sandi Evans, an assistant professor at California State Polytechnic University, Pomona, said, “We need to ask: How do we evaluate, understand, regulate, improve, make ethical, make fair, build transparency into, etc., algorithms?”

Lilly Irani, an assistant professor at the University of California-San Diego, wrote, “While algorithms have many benefits, their tendency toward centralization needs to be countered with policy. When we talk about algorithms, we sometimes are actually talking about bureaucratic reason embedded in code. The embedding in code, however, powerfully takes the execution of bureaucracy out of specific people's hands and into a centralized controller – what Aneesh Aneesh has called algocracy. A second issue is that these algorithms produce emergent, probabilistic results that are inappropriate in some domains where we expect accountable decisions, such as jurisprudence.”

Thomas Claburn, editor-at-large at InformationWeek, commented, “Our algorithms, like our laws, need to be open to public scrutiny, to ensure fairness and accuracy.”

One anonymous respondent offered some specific suggestions: “Regarding governance: 1) Let's start with it being mandatory that all training sets be publicly available. In truth, probably only well-qualified people will review them, but at least vested interests will be scrutinized by diverse researchers whom they cannot control. 2) Before any software is deployed, it should be thoroughly tested, not just for function but for values. 3) No software should be deployed in making decisions that affect benefits to people without a review mechanism and the potential to change them if people/patients/students/workers/voters/etc. have a legitimate concern. 4) No lethal software should be deployed without human decision-makers in control. 5) There should be a list of disclosures, at least about operative defaults, so that mere mortals can learn something about what they are dealing with.”

An anonymous senior fellow at a futures organization studying civil rights observed, “There must be redress procedures, since errors will occur.”

Another anonymous respondent wrote, “There are three things that need to happen here: 1) A 21st-century solution to the prehistoric approach to passwords; 2) A means whereby the individual has ultimate control over and responsibility for their information; and 3) Governance and oversight of the way these algorithms can be used for critical things (like health care and finance), coupled with an international (and internationally enforceable) set of laws around their use. Solve these, and the world is your oyster (or, more likely, Google's oyster).”

Robert Bell, co-founder of the Intelligent Community Forum, commented, “Transparency is the great challenge. As these things exert more and more influence, we want to know how they work, what choices are being made and who is responsible. The irony is that, as the algorithms become more complex, the creators of them increasingly do not know what is going on inside the black box. How, then, can they improve transparency?”

Micah Altman, director of research at MIT Libraries, noted, “The key policy question is: How [will we] choose to hold government and corporate actors responsible for the choices that they delegate to algorithms? There is increasing understanding that each choice of algorithm embodies a specific set of choices over what criteria are important to ‘solving’ a problem and what can be ignored. Incenting better choices in algorithms will likely require the actors using them to provide more transparency and to explicitly design algorithms with privacy and fairness in mind, and will require holding actors who use algorithms meaningfully responsible for their consequences.”

Timothy C. Mack, managing principal at AAI Foresight, said, “The use of attention analysis on algorithm dynamics will be a possible technique to pierce the wall of black-box decisions, and great progress is being made in that arena.”

Respondents suggested a range of oversight mechanisms, including a “new branch of the [U.S. Federal Communications Commission] made up of coders” and “some kind of a rainbow coalition,” and said it must “legislate humanely the protection of both the individual and society in general.”

Mary Griffiths, an associate professor in media at the University of Adelaide in South Australia, replied, “The most salient question everyone should be asking is the classical one about accountability – ‘quis custodiet ipsos custodes?’ – who guards the guardians? And, in particular, which ‘guardians’ are doing what, to whom, using the vast collection of information? Who has access to health records? Who is selling predictive insights, based on private information, to third parties unbeknown to the owners of that information? Who decides which citizens do and don't need additional background checks for a range of activities? Will someone with mental health issues be ‘blocked’ invisibly from employment or promotion? The question I've been thinking about, following UK scholar [Evelyn] Ruppert, is that data is a collective achievement, so how do societies ensure that the collective will benefit? Oversight mechanisms might include stricter access protocols; sign-off on ethical codes for digital management and named stewards of information; online tracking of an individual's reuse of information; opt-out functions; setting timelines on access; no third-party sale without consent.”

An anonymous cloud-computing architect commented, “Closed algorithms in closed organizations can lead to negative outcomes and large-scale failures. If there is not enough oversight and accountability for organizations and how they use their algorithms, it can lead to scenarios where entire institutions fail, leading to widespread collapse. Nowhere is this more apparent than in critical economic institutions. While many of these institutions are considered ‘too big to fail,’ they operate based on highly secretive and increasingly complex rules, with outcomes that are focused on only a single factor – short-term economic gains. The consequence is that they can lead to economic disparity, increased long-term financial risk and larger social collapse. The proper response to this risk, though, is to increase scrutiny into algorithms, make them open, and make institutions accountable for the broader social spectrum of impact from algorithmic decisions.”

An anonymous system administrator commented, “We need some kind of rainbow coalition to come up with rules to avoid allowing inbuilt bias and groupthink to affect the outcomes.”

Maria Pranzo, director of development at The Alpha Workshops, wrote, “Perhaps an oversight committee – a new branch of the FCC made up of coders – can monitor new media using algorithms of their own, sussing out suspicious programming – a watchdog group to keep the rest of us safely clicking.”

Fredric Litto, emeritus professor of communications at the University of São Paulo, Brazil, said, “If there is, built in, a manner of overriding certain classifications into which one falls, that is, if one can opt out of a ‘software-determined’ classification, then I see no reason for society as a whole not taking advantage of it. On the other hand, I have ethical reservations about the European laws that permit individuals to ‘erase’ ‘inconvenient’ entries in their social media accounts. I leave to the political scientists and jurists (like Richard Posner) the question of how to legislate humanely the protection of both the individual and society in general.”

An anonymous postdoctoral fellow in humanities at a major U.S. university commented, “The biases of many, if not most, of the algorithms and databases governing our world are now corporate. The recent debate over whether Facebook's News Feed algorithm is biased against conservative news in the U.S., for example, does little to address the bias Facebook has in presenting news that is likely to keep users on Facebook, using and producing data for Facebook. A democratic oversight mechanism aimed at addressing the unequal distribution of power between online companies and users could be a system in which algorithms, and the databases they rely upon, are public, legible and editable by the communities they affect.”

Lauren Wagner wrote hopefully about OpenAI, a nonprofit artificial intelligence research agency founded in December 2015 with $1 billion in funding from technologists and entrepreneurs including Sam Altman, Jessica Livingston, Elon Musk, Reid Hoffman and Peter Thiel. “Overall, artificial intelligence holds the most promise and risk in terms of impacting people's lives through the expanding collection and analysis of data. Oversight bodies like OpenAI are emerging to assess the impact of algorithms. OpenAI is a nonprofit artificial intelligence research company. Their goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

Some respondents said any sort of formal regulation of algorithms would not be as effective as allowing the marketplace to initiate debates that inspire improvements.

Writer Richard Oswald commented, “As the service industries use these tools more extensively, they will evolve or face discriminating use by consumers. The secret does not lie in government rules for the algorithms themselves but in competition and free choice, allowing consumers to use the best available service and to openly share their experiences.”

Michael Whitaker, vice president of emerging solutions at ICF International, expects the market to self-correct after public input. He wrote, “Algorithms are delivering and will continue to deliver significant value to individuals and society. However, we are in for a substantial near- to mid-term backlash (some justified, some not) that will make things a bit bumpy on the way to a more transparent future with enhanced trust and understanding of algorithm impacts. Over the next few years, scrutiny over the real-world impacts of algorithms will increase, and organizations will need to defend their application. Many will struggle, and some are likely to be held accountable (reputation or legal liability). This will lead to increased emphasis on algorithm transparency and bias research.”

Respondents said that, regardless of projected efficacy, attention has to be paid to the long-term consequences of algorithm development.

John B. Keller, director of eLearning at the Metropolitan School District of Warren Township, Indiana, replied, “As algorithms become more complex and move from computational-based operations into predictive operations, and perhaps even into decisions requiring moral or ethical judgment, it will become increasingly important that built-in assumptions are transparent to end users and perhaps even configurable. Algorithms are not going to simply use data to make decisions – they are going to make more data about people that will become part of their permanent digital record. We must advocate for the benefits of machine-based processes but remain wary, cautious and reflective about the long-term consequences of the seemingly innocuous progress of today.”

An anonymous respondent wrote, “A more refined sense of the use of data and algorithms is needed, along with a critical eye at their outputs to make sure that they are inclusive and relevant to different communities. User testing with different kinds of groups is needed. Furthermore, a more diverse group of creators for these algorithms is needed! If it is all young white men, those who have privilege in this country, then of course the algorithms and data will serve that community. We need awareness of privilege and a more diverse group of creators to be involved.”

Many are pessimistic about the prospects for policy rules and oversight

Is any proposed oversight method really going to be effective? Many have doubts. Their thoughts primarily fall into two categories. There are those who doubt that reliable and effective oversight and regulation can exist in an environment dominated by corporate and government interests, and there are those who believe oversight will not be possible due to the vastness, never-ending growth and complexity of algorithmic systems.

T. Rob Wyatt, an independent network security consultant, wrote, “Algorithms are an expression in code of systemic incentives, and human behavior is driven by incentives. Any overt attempt to manipulate behavior through algorithms is perceived as nefarious, hence the secrecy surrounding AdTech and sousveillance marketing. If they told us what they do with our data, we would perceive it as evil. The entire business model is built on data subjects being unaware of the degree of manipulation and privacy invasion. So the yardstick against which we measure the algorithms we do know about is their impartiality. The problem is, no matter how impartial the algorithm, our reactions to it are biased. We favor pattern recognition and danger avoidance over logical, reasoned analysis. To the extent the algorithms are impartial, competition among creators of algorithms will necessarily favor the actions that result in the strongest human response, i.e., act on our danger-avoidance and cognitive biases. We would, as a society, have to collectively choose to favor rational analysis over limbic instinctive response to obtain a net positive impact from algorithms, and the probability of doing so at the height of a decades-long anti-intellectual movement is slim to none.”

An anonymous respondent said, “I expect a weak oversight group, if any, which will include primarily old, rich, white men, who may or may not directly represent vested interests, especially in ‘intellectual property’ groups. I also expect all sorts of subtle manipulation by the actual organizations that operate these algorithms, as well as by single bad actors within them, to basically accomplish propaganda and market manipulation, as well as a further promulgation of the biases that already exist within the analog system of government and commerce as it has existed for years. Any oversight must have the ability to effectively end any bad actors, by which I mean fully and completely dismantle companies, and to remove all senior and any other related staff of government agencies should they be found to be manipulating the system or encouraging/allowing systemic discrimination. There would need to be strong representation of the actual population of whatever area they represent, from socioeconomic, educational, racial and cultural viewpoints. All of their proceedings should be held within the public eye.”

Dariusz Jemielniak, a professor of management at Kozminski University in Poland and a Wikimedia Foundation trustee, observed, “There are no incentives in capitalism to fight filter bubbles, profiling and the negative effects, and governmental/international governance is virtually powerless.”

John Sniadowski, a systems architect, noted that oversight is difficult if not impossible in a global setting. He wrote, “The huge problem with oversight mechanisms is that globalisation by the internet removes many geopolitical barriers of control. International companies have the resources to find ways of implementing methods to circumvent controls. The more controls are put in place, the more the probability of unintended consequences and loophole searching, the net result being more complex oversight that becomes unworkable.”

Some respondents said these complex, fast-evolving systems will be quite difficult if not impossible to assess and oversee, now and in the future.

Software engineer Joshua Segall said, “We already have the statistical tools today to assess the impact of algorithms, and this will be aided by better data collection. However, assessment will continue to be difficult regardless of algorithms and data because of the complexity of the systems we aim to study.”

An anonymous senior research scholar at a major university's digital civil society lab commented, “This is a question of the different paces at which tech (algorithmic) innovation and regulation work. Regulation and governing of algorithms lags way behind writing them and setting them loose on ever-growing (already discriminatory) datasets. As deep learning (machine learning) exponentially increases, the differential between algorithmic capacity and regulatory understanding and its inability to manage the unknown will grow vaster.”

An anonymous respondent warned, “Who are these algorithms accountable to, once they are out in the world and doing their thing? They don't always behave in the way their creators predicted. Look at the stock market trading algorithms, the ones that have names like ‘The Knife.’ These things move faster than human agents ever could, and collectively, through their interactions with each other, they create a non-random set of behaviors that cannot necessarily be predicted ahead of time, at time zero. How can we possibly know well enough how these interactions among algorithms will all turn out? Can we understand these interactions well enough to correct problems with algorithms when injustice invariably arises?”

Another anonymous respondent noted, “Algorithms affect quantitative factors more than relational factors. This has had a huge effect already on our society in terms of careers and in the shadow work that individuals now have to do. Algorithms are too complicated to ever be transparent or to ever be completely safe. These factors will continue to influence the direction of our culture.”

And an anonymous participant in this canvassing observed that the solution might be more algorithms: “I expect meta-algorithms will be developed to try to counter the negatives of algorithms,” he said. “Until those have been developed and refined, I can't see there being overall good from this.”

About this Canvassing of Experts

The expert predictions reported here about the impact of the internet over the next 10 years came in response to one of five questions asked by Pew Research Center and Elon University's Imagining the Internet Center in an online canvassing conducted between July 1 and Aug. 12, 2016. This is the seventh Future of the Internet study the two organizations have conducted together. For this project, we invited nearly 8,000 experts and members of the interested public to share their opinions on the likely future of the internet, and 1,537 responded to at least one of the questions we asked. This report covers responses to one of the five questions in the canvassing. Overall, 1,302 people responded. Some 728 of them gave answers to this follow-up question, which asked them to elaborate on their answers about the future impact of algorithms:

Algorithms will continue to have increasing influence over the next decade, shaping people's work and personal lives and the ways they interact with information, institutions (banks, health care providers, retailers, governments, education, media and entertainment) and each other. The hope is that algorithms will help people quickly and fairly execute tasks and get the information, products, and services they want. The fear is that algorithms can purposely or inadvertently create discrimination, enable social engineering and have other harmful societal impacts.

Question: Will the net overall effect of algorithms be positive for individuals and society or negative for individuals and society?

The answer options were:

• Positives outweigh negatives
• Negatives outweigh positives
• The overall impact will be about 50-50

Then we asked:

Please elaborate on your answer and consider addressing these issues in your response: What are the main positive changes you foresee? What are the main negative ones? What dimensions of life will be most affected – health care, consumer choice, the dissemination of news, educational opportunities, others? How will the expanding collection and analysis of data and the resulting applications of this information impact people's lives? What kinds of predictive modeling will make life more convenient for citizens? What kinds of discrimination might occur? What kind of oversight mechanisms might be used to assess the impact of algorithms?

No matter how they answered the question, nearly all respondents pointed out some negatives of algorithm-based decision-making, sorting, work activities and other applications. Some 38% opted for the prediction that the positive impacts of algorithms will outweigh negatives for individuals and society in general, while 37% said negatives will outweigh positives, and 25% said the overall impact of algorithms will be about 50-50, positive-negative.

While many of these respondents estimate that the impact of algorithms will be negative, most of these experts assume that – no matter what drawbacks may develop – algorithm-based decision-making will continue to expand in influence and impact.

The Web-based instrument was first sent directly to a list of targeted experts identified and accumulated by Pew Research Center and Elon University during the previous six “Future of the Internet” studies, as well as those identified across 12 years of studying the internet realm during its formative years. Among those invited were people who are active in global internet governance and internet research activities, such as the Internet Engineering Task Force (IETF), Internet Corporation for Assigned Names and Numbers (ICANN), Internet Society (ISOC), International Telecommunication Union (ITU), Association of Internet Researchers (AoIR), and the Organization for Economic Cooperation and Development (OECD). We also invited a large number of professionals and policy people from technology businesses; government, including the National Science Foundation, Federal Communications Commission and European Union; think tanks and interest networks (for instance, those that include professionals and academics in anthropology, sociology, psychology, law, political science and communications); globally located people working with communications technologies in government positions; technologists and innovators; top universities' engineering/computer science, business/entrepreneurship faculty and graduate students and postgraduate researchers; plus many who are active in civil society organizations such as the Association for Progressive Communications (APC), Electronic Privacy Information Center (EPIC), Electronic Frontier Foundation (EFF) and Access Now; and those affiliated with newly emerging nonprofits and other research units examining ethics and the digital age. Invitees were encouraged to share the survey link with others they believed would have an interest in participating, thus there was a “snowball” effect as the invitees were joined by those they invited to weigh in.

Since the data are based on a non-random sample, the results are not projectable to any population other than the individuals expressing their points of view in this sample. The respondents' remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their background and the locus of their expertise.

About 80% of respondents identified themselves as being based in North America; the others hail from all corners of the world. When asked about their “primary area of internet interest,” 25% identified themselves as research scientists; 7% as entrepreneurs or business leaders; 8% as authors, editors or journalists; 14% as technology developers or administrators; 10% as advocates or activist users; 9% as futurists or consultants; 2% as legislators, politicians or lawyers; and 2% as pioneers or originators; an additional 25% specified their primary area of interest as “other.”

More than half of the expert respondents elected to remain anonymous. Because people's level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their internet expertise or background, and this was noted where relevant in this report.

Here are some of the key respondents in this report:

Robert Atkinson, president of the Information Technology and Innovation Foundation; danah boyd, founder of Data & Society; Stowe Boyd, chief researcher at Gigaom; Marcel Bullinga, trend watcher and keynote speaker; Randy Bush, Internet Hall of Fame member and research fellow at Internet Initiative Japan; Jamais Cascio, distinguished fellow at the Institute for the Future; Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp.; David Clark, Internet Hall of Fame member and senior research scientist at MIT; Cindy Cohn, executive director at the Electronic Frontier Foundation; Anil Dash, technologist; Cory Doctorow, writer, computer science activist-in-residence at MIT Media Lab and co-owner of Boing Boing; Judith Donath, Harvard University's Berkman Klein Center for Internet & Society; Stephen Downes, researcher at the National Research Council of Canada; Bob Frankston, Internet pioneer and software innovator; Oscar Gandy, emeritus professor of communication at the University of Pennsylvania; Marina Gorbis, executive director at the Institute for the Future; Jon Lebkowsky, CEO of Polycot Associates; Peter Levine, professor and associate dean for research at Tisch College of Civic Life; Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future; Rebecca MacKinnon, director of Ranking Digital Rights at New America Foundation; John Markoff, author of Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots and senior writer at The New York Times; Jerry Michalski, founder at REX; Andrew Nachison, founder at We Media; Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information and professor of law at the University of Maryland; Demian Perry, director of mobile at NPR; Justin Reich, executive director at the MIT Teaching Systems Lab; Mike Roberts, Internet Hall of Fame member and first president and CEO of ICANN; Michael Rogers, author and futurist at Practical Futurist; Marc Rotenberg, executive director of the Electronic Privacy Information Center; David Sarokin, author of Missed Information: Better Information for Building a Wealthier, More Sustainable Future; Henning Schulzrinne, Internet Hall of Fame member and professor at Columbia University; Doc Searls, journalist, speaker, and director of Project VRM at Harvard University's Berkman Klein Center for Internet & Society; Ben Shneiderman, professor of computer science at the University of Maryland; Richard Stallman, Internet Hall of Fame member and president of the Free Software Foundation; Baratunde Thurston, a Director's Fellow at MIT Media Lab, Fast Company columnist, and former digital director of The Onion; Patrick Tucker, author of The Naked Future and technology editor at Defense One; Steven Waldman, founder and CEO of LifePosts; Jim Warren, longtime technology entrepreneur and activist; Amy Webb, futurist and CEO at the Future Today Institute; and David Weinberger, senior researcher at the Harvard Berkman Klein Center for Internet & Society.

Here is a selection of some of the institutions at which respondents work or have affiliations:

AAI Foresight, Access Now, Adobe, Altimeter, The Aspen Institute, AT&T, Booz Allen Hamilton, California Institute of Technology, Carnegie Mellon University, Center for Digital Education, Center for Policy on Emerging Technologies, Cisco, Computerworld, Craigslist, Cyber Conflict Studies Association, Cyborgology, DareDisrupt, Data & Society, Digital Economy Research Center, Digital Rights Watch, dotTBA, Electronic Frontier Foundation, Electronic Privacy Information Center, Ethics Research Group, European Digital Rights, Farpoint Group, Federal Communications Commission, Flipboard, Free Software Foundation, Future of Humanity Institute, Future of Privacy Forum, Futurewei, Gartner, Genentech, George Washington University, Georgia Tech, Gigaom, Gilder Publishing, Google, Groupon, Hack the Hood, Harvard University's Berkman Klein Center for Internet & Society, Hewlett Packard, Human Rights Watch, IBM, InformationWeek, Innovation Watch, Institute for Ethics and Emerging Technologies, Institute for the Future, Institute of the Information Society, Intelligent Community Forum, International Association of Privacy Professionals, Internet Corporation for Assigned Names and Numbers, Internet Education Foundation, Internet Engineering Task Force, Internet Initiative Japan, Internet Society, Jet Propulsion Laboratory, Karlsruhe Institute, Kenya ICT Network, KMP Global, The Linux Foundation, Lockheed Martin, Logic Technology, MediaPost, Michigan State University, Microsoft, MIT, Mozilla, NASA, National Institute of Standards and Technology, National Public Radio, National Science Foundation, Neustar, New America, New Jersey Institute of Technology, The New York Times, Nokia, Nonprofit Technology Network, NYU, OpenMedia, Oxford University's Martin School, Philosophy Talk, Privacy International, Queensland University of Technology, Raytheon BBN, Red Hat, Rensselaer Polytechnic Institute, Rice University Humanities Research Center, Rochester Institute of Technology, Rose-Hulman Institute of Technology, Semantic Studios, Singularity University, Social Media Research Foundation, Spacetel, Square, Stanford University Digital Civil Society Lab, Syracuse University, Tech Networks of Boston, Telecommunities Canada, Tesla Motors, U.S. Department of Defense, U.S. Ignite, UCLA, UK Government Digital Service, Unisys, United Steelworkers, University of California-Berkeley, University of California-Irvine, University of California-Santa Barbara, University of Copenhagen, University of Michigan, University of Milan, University of Pennsylvania, University of Toronto, Vodafone, We Media, Wired, Worcester Polytechnic Institute, Yale University, York University.