AI + Energy & Environment
This bibliography contains recent and relevant literature on AI and energy systems, automation, environmental decision-support systems (EDSS), and environmental ethics.
Please note that this bibliography is still in development.
55 items found
- 1. Victor Galaz, Fernanda Torre, Fredrik Moberg: “The Biosphere Code Manifesto” (n.d.)
- 2. Rob Kitchin: “Reframing, Reimagining, Rethinking smart cities” (2016)
Over the past decade the concept and development of smart cities has unfolded rapidly, with many city administrations implementing smart city initiatives and strategies and a diverse ecology of companies and researchers producing and deploying smart city technologies. In contrast to those that seek to realize the benefits of a smart city vision, a number of critics have highlighted a number of shortcomings, challenges and risks with such endeavors. This short paper outlines a third path, one that aims to realize the benefits of smart city initiatives while recasting the thinking and ethos underpinning them and addressing their deficiencies and limitations. It argues that smart city thinking and initiatives need to be reframed, reimagined and remade in six ways. Three of these concern normative and conceptual thinking with regards to goals, cities and epistemology, and three concern more practical and political thinking and praxes with regards to management/governance, ethics and security, and stakeholders and working relationships. The paper does not seek to be definitive or comprehensive, but rather to provide conceptual and practical suggestions and stimulate debate about how to productively recast smart urbanism and the creation of smart cities.
- 3. Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter and Luciano Floridi. “The Ethics of Algorithms: Mapping the Debate” (2016)
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organize the debate, reviews the current discussion of ethical aspects of algorithms, and assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
- 4. IEEE "Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems" (2016)
This report addresses the need for AI to align with human and societal values as a means of creating AI systems that benefit the environment. The report outlines principles for AI development and for embedding values into AI. It then discusses safety and ethical methodology for the development of AI, future law and policy, and issues that may arise at the intersection of AI with personal data access and weapons systems.
- 5. Thomson, A.J.: “Artificial intelligence and environmental ethics.” AI Applications. Vol. 11, Iss. 1 (1997): 69-73.
AI systems have the potential to address ethical issues. Case-based reasoning systems may be the most promising approach to environmental ethics. Power issues, considered in applied ethics, are a fundamental feature of issues arising from landscape-level management and may be studied using stakeholder modeling techniques, while rule-based systems may be appropriate for deontological ethical issues (based on duties or obligations). Semantic networks may be used to study and summarize the views of individuals in groups. Conflict between individual advantage and the common good can be explored through game theory.
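The abstract's last point, the tension between individual advantage and the common good, can be illustrated with the classic prisoner's dilemma. This sketch is purely illustrative and is not drawn from Thomson's paper; the payoff values are the conventional textbook ones.

```python
# Standard prisoner's dilemma payoff table:
# (row_action, col_action) -> (row_payoff, col_payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_action):
    """Return the action that maximizes the row player's own payoff."""
    return max(["cooperate", "defect"],
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defection is the dominant individual strategy...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual cooperation yields the higher joint payoff (the common good).
assert sum(PAYOFFS[("cooperate", "cooperate")]) > sum(PAYOFFS[("defect", "defect")])
```

The assertions make the conflict explicit: each agent's best response is to defect, even though both would be better off cooperating.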
- 6. Wikipedia: “Ten Commandments of Computer Ethics”
- 7. Etzioni, Amitai. Ethics and Information Technology; Dordrecht Vol. 18, Iss. 2. (2016): 149-156.
This article outlines the difficulties of enclosing artificial intelligence systems in a legal and ethical box. Using scenarios such as self-driving cars breaking speed limits in emergencies, the article highlights areas where the law might need to be broken in order to align with our ethical codes. How might we program these ethical codes and overrides into an algorithm accountable to the law? The authors also suggest creating and monitoring an oversight system to increase accountability for AI systems.
- 8. Rhea Butler. “Computer Hackers are Helping Illegal Loggers Destroy the Amazon Rainforest”, Mongabay. 2008.
- 9. Malia Wollan. “The End of Roadkill”, New York Times. November 8, 2017.
- 10. Nick Oliver, Thomas Calvard, and Kristina Potočnik, “The Tragic Crash of Flight AF447 Shows the Unlikely but Catastrophic Consequences of Automation”. Harvard Business Review, Sept. 25, 2017
This article reports on the crash of flight AF447 and finds that over-automation may have contributed to the accident. As systems become safer, accidents become less frequent but more consequential. With “almost totally safe systems,” human capacity to act in extreme situations erodes.
- 11. DeepMind. “DeepMind AI Reduces Google Data Centre Cooling Bill by 40%”. July 2016.
- 12. A. Shehabi, S. Smith, D.A. Sartor, R.E. Brown, M. Herrlin, J.G. Koomey, E.R. Masanet, N. Horner, I. Azevedo, W. Lintner. United States Data Center Energy Usage Report. (2016).
- 13. Gabrielle Coppola, and Esha Dey. “Driverless Cars Are Giving Engineers a Fuel Economy Headache”. Bloomberg. October 11, 2017.
- 14. Circella, G., Ganson, C., Caroline R., “Keeping VMT and GHGs in Check in a Driverless Vehicle World.” UC Davis. (2017).
The environmental impact of driverless vehicles will differ depending on how they are integrated into our society. This report publishes recommendations for introducing driverless vehicles in a manner that limits greenhouse gas emissions and vehicle miles travelled.
- 15. Brad Smith. “AI for Earth Can Be a Game Changer for Our Planet.” Microsoft AI for Earth Blog. December 11, 2017.
- 16. Jeremy Hsu, “FDA Assembles Team to Oversee AI Revolution in Health,” (2017)
- 17. Steve Hanley. “Microsoft AI for Earth Project will Democratize Access to Climate Change Data”. Clean Technica. December 28, 2017.
- 18. Boston Consulting Group, “Boston Test of Self-Driving Cars Reveals Five Key Lessons for Cities Worldwide.” October 17, 2017.
- 19. Kate Crawford & Ryan Calo: “There is a blind spot in AI research” (2016).
Calo and Crawford argue that the development and evaluation of algorithms should be holistic in nature and consider how an AI system will impact all aspects of society. The article outlines three methods used to assess the impacts of algorithms and proposes a fourth: a social systems approach.
- 20. M. L. Cummings: “Automation and Accountability in Decision Support System Interfaces”
Decision making instruments that use AI have the potential to create moral distance between a human user and an action being taken. This can reduce moral responsibility and increase reliance on an AI falsely characterized as omnipotent.
- 21. M. Kanevski, R. Parkin, A. Pozdnukhov, V. Timonin, M. Maignan, V. Demyanov, and S. Canu, Environmental data mining and modeling based on machine learning algorithms and geostatistics, Environmental Modelling & Software. Vol. 19, Iss. 9 (2004): 845-855
- 22. Andrew D. Selbst and Solon Barocas, “Regulating Inscrutable Systems” (draft)
Taken from the executive summary. This article takes seriously the calls for regulation via explanation to investigate how existing laws implementing such calls fare, and whether interpretability research can fix the flaws. Ultimately, it argues that while machine interpretability may make compliance with existing legal regimes easier, or possible in the first instance, a focus on explanation alone fails to fulfill the overarching normative purpose of the law, even when compliance can be achieved. The paper concludes with a call to consider where such goals would be better served by other means, including mechanisms to directly assess whether models are fair and just.
- 23. Christian Sandvig: “Algorithm Audit” (2014)
Algorithms affect ever larger numbers of people, use increasingly personal information, and operate through mechanisms that are not easily understood. Given these features, accountability for adverse effects is necessary, and the potential for discrimination is high and not always obvious. This article discusses methods for “auditing” algorithms and argues that we need a “consumer reports” for algorithms.
- 24. Matthias Spielkamp: “Inspecting Algorithms for Bias” (2017).
- 25. Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, ProPublica: “Machine Bias” (2016).
- 26. Danielle Keats Citron, Frank Pasquale: “The Scored Society: Due Process for Automated Predictions” (2014)
Big Data is increasingly mined to rank and rate individuals. Predictive algorithms assess whether we are good credit risks, desirable employees, reliable tenants, valuable customers — or deadbeats, shirkers, menaces, and “wastes of time.” Crucial opportunities are on the line, including the ability to obtain loans, work, housing, and insurance. Though automated scoring is pervasive and consequential, it is also opaque and lacking oversight. In one area where regulation does prevail — credit — the law focuses on credit history, not the derivation of scores from data. Procedural regularity is essential for those stigmatized by “artificially intelligent” scoring systems. The American due process tradition should inform basic safeguards. Regulators should be able to test scoring systems to ensure their fairness and accuracy. Individuals should be granted meaningful opportunities to challenge adverse decisions based on scores miscategorizing them. Without such protections in place, systems could launder biased and arbitrary data into powerfully stigmatizing scores.
- 27. Andrew Tutt: “An FDA for Algorithms” (2017).
The rise of increasingly complex algorithms calls for critical thought about how best to prevent, deter, and compensate for the harms that they cause. This paper argues that the criminal law and tort regulatory systems will prove no match for the difficult regulatory puzzles algorithms pose. Algorithmic regulation will require federal uniformity, expert judgment, political independence, and pre-market review to prevent - without stifling innovation - the introduction of unacceptably dangerous algorithms into the market. This paper proposes that a new specialist regulatory agency should be created to regulate algorithmic safety: an FDA for algorithms.
- 28. The Guardian “Discrimination by algorithm: scientists devise test to detect AI bias” (2016)
- 29. Association for Computing Machinery: “Statement on Algorithmic Transparency and Accountability” (2017)
- 30. Future of Life Institute: “Asilomar AI Principles”
- 31. Moritz Hardt, Eric Price, Nathan Srebro: “Equality of Opportunity in Supervised Learning” (2016)
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
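The post-processing idea in the abstract, adjusting a learned predictor per group rather than retraining it, can be sketched with per-group thresholds chosen so that true positive rates match. This is an illustrative reconstruction under simplifying assumptions, not the authors' code; all function names and the toy data are hypothetical.

```python
import numpy as np

def tpr(scores, labels, threshold):
    """True positive rate: fraction of actual positives scored at or above threshold."""
    positives = labels == 1
    return float(np.mean(scores[positives] >= threshold))

def threshold_for_tpr(scores, labels, target):
    """Largest threshold whose true positive rate reaches the target."""
    candidates = np.sort(np.unique(scores[labels == 1]))[::-1]
    for t in candidates:
        if tpr(scores, labels, t) >= target:
            return t
    return candidates[-1]

# Toy data: group B's scores are shifted down, so a single global threshold
# would give the two groups different true positive rates.
rng = np.random.default_rng(0)
n = 5000
labels = rng.integers(0, 2, size=2 * n)
scores = rng.normal(loc=labels.astype(float), scale=1.0)
group = np.repeat(["A", "B"], n)
scores[group == "B"] -= 0.5

target = 0.8  # desired true positive rate for both groups
thresholds = {
    g: threshold_for_tpr(scores[group == g], labels[group == g], target)
    for g in ["A", "B"]
}
# Each group now clears the same true positive rate, with group B
# compensated by a lower cutoff.
assert all(
    tpr(scores[group == g], labels[group == g], thresholds[g]) >= target
    for g in ["A", "B"]
)
```

Note that this sketch equalizes only the true positive rate; the paper's full criterion (equalized odds) also constrains false positive rates, which generally requires randomized thresholds.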
- 32. Ryan Calo “Robotics and CyberLaw” (2013)
Two decades of analysis have produced a rich set of insights as to how the law should apply to the Internet’s peculiar characteristics. But, in the meantime, technology has not stood still. The same public and private institutions that developed the Internet, from the armed forces to search engines, have initiated a significant shift toward robotics and artificial intelligence. This article is the first to examine what the introduction of a new, equally transformative technology means for cyberlaw (and law in general). Robotics has a different set of essential qualities than the Internet and, accordingly, will raise distinct issues of law and policy. Robotics combines, for the first time, the promiscuity of data with the capacity to do physical harm; robotic systems accomplish tasks in ways that cannot be anticipated in advance; and robots increasingly blur the line between person and instrument. Cyberlaw can and should evolve to meet these challenges. Cyberlaw is interested, for instance, in how people are hardwired to think of going online as entering a “place,” and in the ways software constrains human behavior. The new cyberlaw will consider how we are hardwired to think of anthropomorphic machines as though they were social, and ponder the ways institutions and jurists can manage the behavior of software. Ultimately the methods and norms of cyberlaw—particularly its commitments to interdisciplinary pragmatism—will prove crucial in integrating robotics, and perhaps whatever technology follows.
- 33. Bryce Goodman: “European Union regulations on algorithmic decision-making and a ‘right to explanation’” (2016)
We summarize the potential impact that the European Union’s new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which “significantly affect” users. The law will also create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them. We argue that while this law will pose large challenges for industry, it highlights opportunities for machine learning researchers to take the lead in designing algorithms and evaluation frameworks which avoid discrimination.
- 34. Sandra Wachter, Brent Mittelstadt, and Luciano Floridi. “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation” (December 28, 2016). International Data Privacy Law, 2017.
Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that the GDPR will legally mandate a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive meaningful, but properly limited, information (Articles 13-15) about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems, what we term a ‘right to be informed’. Further, the ambiguity and limited scope of the ‘right not to be subject to automated decision-making’ contained in Article 22 (from which the alleged ‘right to explanation’ stems) raise questions over the protection actually afforded to data subjects. These problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless. We propose a number of legislative and policy steps that, if taken, may improve the transparency and accountability of automated decision-making when the GDPR comes into force in 2018.
- 35. Jenna Burrell “How the machine ‘thinks’: Understanding opacity in machine learning algorithms”
This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: (1) opacity as intentional corporate or state secrecy, (2) opacity as technical illiteracy, and (3) an opacity that arises from the characteristics of machine learning algorithms and the scale required to apply them usefully. The analysis in this article gets inside the algorithms themselves. I cite existing literatures in computer science, known industry practices (as they are publicly presented), and do some testing and manipulation of code as a form of lightweight code audit. I argue that recognizing the distinct forms of opacity that may be coming into play in a given application is a key to determining which of a variety of technical and non-technical solutions could help to prevent harm.
- 36. Stuart Russell, Daniel Dewey, Max Tegmark: “Research Priorities for Robust and Beneficial Artificial Intelligence” (2015)
Understanding the pitfalls that can hinder the potential benefits of AI is an important research priority. This article outlines areas of research that could help to mitigate potentially negative impacts of AI, including the effect of AI on labor markets, liability for autonomous vehicles, and law and ethics.
- 37. Paris Innovation Review: “AI Regulation: understanding the real challenges”
2016 saw a striking technology breakthrough: the AI program AlphaGo beat world champion Lee Sedol with a score of 4:1. This remarkable victory was hailed as another milestone in the 60-year history of the artificial intelligence industry, but together with the euphoria came doubts and concerns: will robots replace humans? Will artificial intelligence ultimately endanger the human species? Such worries are not entirely unfounded; hence the frequent proposals that have called for government regulations on AI development, reminiscent of the tightening of regulations on gene technology research a decade ago. However, the essence of the problem is this: how do we impose effective, but not overly aggressive regulations on a threat that, for now, is imagined, one that has not yet become a reality? In fact, the dilemma of regulation is not how to balance the advantages and disadvantages of the technology, but how to thoroughly understand and contain the potential threats generated by artificial intelligence. In other words, is the hazard of artificial intelligence to be defined as “replacing humans”? If so, the only rational regulation would be to entirely ban further R&D efforts on this technology. But is it fair to sentence this emerging technology to death, while subjects as ethically challenging as gene engineering are still up and running? Artificial intelligence is already pervasive in our lives. It is widely used in search engines, social networks and news reporting. Its popularity begs us to reevaluate the concerns we hold against it. If we put aside these concerns, what would be the real dangers of artificial intelligence? The only way to devise appropriate, effective regulation is by finding the right answer to this question. This article will seek to achieve this.
- 38. Emiko Jozuka. “We Need to Control Our Algorithms Before They Destroy the Environment” (2015).
- 39. Nicholas Diakopoulos. “Algorithmic Accountability in Decision Making” (2016).
Algorithms used to write articles for media outlets process large data sets quickly but produce frequent errors and lack nuance. This article provides an overview of different algorithmic functions and the errors they can propagate. It further explains the need for adequate accountability mechanisms in these systems and argues that algorithms used by private companies and by the government should be held to different standards of accountability.
- 40. Vellido, Alfredo; Martín Guerrero, José D.; Lisboa, Paulo J.G. “Making Machine Learning Models Interpretable” (2012)
- 41. Columbia School of Journalism
- 42. Davide Castelvecchi “Artificial intelligence called in to tackle LHC data deluge” (2015)
- 43. Nick Stockton: “A Curious Plan to Save the Environment with the Blockchain” (2017)
- 44. Rodney Brooks “The Seven Deadly Sins of AI Predictions” (2017)
- 45. Davide Castelvecchi “Can we open the black box of AI?” (2016)
- 46. Cortés, U., Sànchez-Marrè, M., Ceccaroni, L. et al. Applied Intelligence (2000) 13: 77.
An effective protection of our environment is largely dependent on the quality of the available information used to make an appropriate decision. Problems arise when the quantities of available information are huge and nonuniform (i.e., coming from many different disciplines or sources) and their quality could not be stated in advance. Another associated issue is the dynamical nature of the problem. Computers are central in contemporary environmental protection in tasks such as monitoring, data analysis, communication, information storage and retrieval, so it has been natural to try to integrate and enhance all these tasks with Artificial Intelligence knowledge-based techniques. This paper presents an overview of the impact of Artificial Intelligence techniques on the definition and development of Environmental Decision Support Systems (EDSS) during the last fifteen years. The review highlights the desirable features that an EDSS must show. The paper concludes with a selection of successful applications to a wide range of environmental problems.
- 47. Andreas Hamann, David R. Roberts, Quinn E. Barber, Carlos Carroll, Scott E. Nielsen: “Velocity of climate change algorithms for guiding conservation and management” (2014)
The velocity of climate change is an elegant analytical concept that can be used to evaluate the exposure of organisms to climate change. In essence, one divides the rate of climate change by the rate of spatial climate variability to obtain a speed at which species must migrate over the surface of the earth to maintain constant climate conditions. However, to apply the algorithm for conservation and management purposes, additional information is needed to improve realism at local scales. For example, destination information is needed to ensure that vectors describing speed and direction of required migration do not point toward a climatic cul-de-sac by pointing beyond mountain tops. Here, we present an analytical approach that conforms to standard velocity algorithms if climate equivalents are nearby. Otherwise, the algorithm extends the search for climate refugia, which can be expanded to search for multivariate climate matches. With source and destination information available, forward and backward velocities can be calculated allowing useful inferences about conservation of species (present-to-future velocities) and management of species populations (future-to-present velocities).
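The ratio at the heart of the velocity concept, temporal rate of climate change divided by spatial climate gradient, can be sketched on a toy grid. This is the standard baseline calculation the abstract builds on, not the authors' refugia-search extension; the grid, resolution, and warming rate below are invented for illustration.

```python
import numpy as np

cell_km = 1.0   # grid resolution (km); illustrative
years = 50.0    # interval between the two climate surfaces; illustrative

# Toy temperature surfaces: an east-west gradient of 0.3 degC/km,
# warming uniformly by 1 degC over 50 years.
x = np.linspace(0.0, 10.0, 11)
temp_now = np.tile(0.3 * x, (11, 1))
temp_future = temp_now + 1.0

# Temporal rate of change (degC/yr) at each cell.
temporal_rate = (temp_future - temp_now) / years

# Spatial gradient magnitude (degC/km) from finite differences.
dy, dx = np.gradient(temp_now, cell_km)
spatial_gradient = np.hypot(dx, dy)

# Climate velocity (km/yr): how fast an organism must move across the
# landscape to hold its climate constant (floor guards against flat terrain,
# where the ratio would blow up).
velocity = temporal_rate / np.maximum(spatial_gradient, 1e-9)
# On this uniform surface every cell is (1 degC / 50 yr) / (0.3 degC/km).
```

The divide-by-gradient blowup on flat terrain is exactly the local-scale realism problem the paper addresses with its extended search for climate equivalents.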
- 48. Allison Linn “Building a better mosquito trap: How a Microsoft research project could help track Zika’s spread”
- 49. Peter Stone et al. "Artificial Intelligence and Life in 2030." 2016.
Taken from the executive summary. Starting from a charge given by the AI100 Standing Committee to consider the likely influences of AI in a typical North American city by the year 2030, the 2015 Study Panel, comprising experts in AI and other relevant areas, focused their attention on eight domains they considered most salient: transportation; service robots; healthcare; education; low-resource communities; public safety and security; employment and workplace; and entertainment. In each of these domains, the report both reflects on progress in the past fifteen years and anticipates developments in the coming fifteen years.
- 50. Cade Metz, “Teaching A.I. Systems to Behave Themselves” (2017)
- 51. Shane Legg et al, “Learning Through Human Feedback” DeepMind Blog. June 12, 2017.
- 52. “Algorithms and Explanations” (conference) NYU 2017
- 53. European Parliament “Artificial Intelligence: Potential Benefits and Ethical Considerations” (n.d.)
This document is a European Parliament briefing document that provides an overview of the benefits of AI as well as potential issues that may arise due to AI. The document discusses the need for trust in order to realize societal benefits from AI systems. The document describes the International Business Machines Corporation’s efforts and suggestions for AI ethics and policy.
- 54. European Parliament “DRAFT REPORT with recommendations to the Commission on Civil Law Rules on Robotics”
- 55. UK Government Office for Science “Artificial intelligence: opportunities and implications for the future of decision making”