AI for Social Good, AI for Datong

The Chinese government and technology companies assume a proactive stance towards digital technologies and AI and their roles in users'—and more generally, people's—lives. This vision of 'Tech for Good', i.e., the development of good digital technologies and AI or the application of them for good, is also shared by major technology companies around the globe, e.g., Google, Microsoft, and Facebook. Interestingly, these initiatives have invited a number of critiques of their feasibility and desirability, particularly in relation to the social and political conditions of liberal democratic societies. In this article, I discuss whether these critiques also apply to the Chinese context and contend that Confucian philosophy provides the normative resources to answer them. This cross-cultural analysis, therefore, allows us to formulate a different account of AI4SG, which I shall call 'AI for Datong', and helps us reimagine the normative vision for AI.


Introduction
On November 11, 2019, Tencent, one of the largest Internet and technology companies in China and the world, revised its mission and vision to "Value for Users, Tech for Good" and elaborated on the new mission and vision. On the company's website, it states: "Technology is powerful and evolving rapidly. The appropriate use of technology can have a significant impact on our social welfare. Technology is a tool, but the use of technology for good is a conscious choice. To us, the choice is to provide better products and services to users, to continually enhance their productivity and quality of life. We have strong convictions of right or wrong. To live up to our promise, we prioritize the needs of our users and incorporate the consideration of social responsibility in our products and services […]" (Tencent 2019). There are multiple ways to interpret Tencent's pronouncement of "Tech for Good". However, the new mission and vision clearly express a proactive stance of the company to choose (or, make available) good and right products and services that enhance productivity and quality of life and account for user needs and social responsibility, for its users and, more generally, for the public. In China, Tencent is not alone in advocating the view of "Tech for Good", as the view has been echoed by other major Chinese technology companies such as Alibaba Group and Huawei (Cheng 2019).
This proactive stance is also evident in China's digital and AI strategies, as digital technologies and AI are being developed and applied in public areas such as education, healthcare, urban operations, judicial services, social governance, national security, etc., in efforts to resolve problems China faces in those areas and to improve on existing practices (see Nesta 2020). Indeed, the three sets of AI governance principles published by a key research institute for AI in Beijing, i.e., the Beijing AI Principles (BAAI 2019); by the industry, i.e., the Joint Pledge on Artificial Intelligence Industry Self-Discipline (Webster 2019); and by the Ministry of Science and Technology of China, i.e., the Governance Principles for a New Generation of Artificial Intelligence: Develop Responsible Artificial Intelligence (Laskai & Webster 2019), have included "Do good", "Enhance wellbeing", and "Harmony and friendliness" (which amounts to "the objective of enhancing the common well-being of humanity"). While there is justified concern that the inclusion of "the good" and "well-being" is merely rhetorical and, in fact, an attempt to legitimize the use of technologies and AI that strengthen state control (Lucero 2019), the explicit mention of "the good" and "wellbeing", I think, demonstrates a marked difference between the Chinese perspective and the European approach to AI governance, which focuses primarily on rights (and what is right). This is, however, not to suggest that the vision of developing good digital technologies and AI or using them for good is absent outside China. Major technology companies, e.g., Google, Microsoft, and Facebook, have also introduced similar initiatives.

AI for Social Good: Definition, Critiques, and Responses
Luciano Floridi and colleagues define AI4SG broadly as "the design, development, and deployment of AI systems in ways that (i) prevent, mitigate or resolve problems adversely affecting human life and/or the wellbeing of the natural world, and/or (ii) enable socially preferable and/or environmentally sustainable developments" (2020, 1773-4). [1] As they acknowledge, however, what constitutes a "socially good outcome" will remain deeply contested (Floridi et al. 2020, 1774). Indeed, the contested nature of "the (social) good" is an inherent feature of liberal democratic societies, which John Rawls calls "the fact of reasonable pluralism", i.e., "a pluralism of comprehensive religious, philosophical, and moral doctrines [and, more importantly,] a pluralism of incompatible yet reasonable comprehensive doctrines" (1993, xvi); and a comprehensive doctrine includes, among other things, "conceptions of what is of value in human life" (Rawls 1993, 13). It is this contested nature of the (social) good that AI4SG projects risk glossing over: they tend to presuppose a particular account of what is socially desirable, thereby neglecting important questions such as who will benefit from the AI4SG project, who will be harmed by it, whether and how power will be shifted after the implementation, who should make these decisions, etc. Moreover, Moore notes that the language of "AI for the good" can be "strategically vague [to leave out] the intensely political nature of any one of the areas associated with [AI4SG]" (2019, 5). So construed, the lack of a substantive understanding of the good and/or of robust ways to specify the good calls into question the desirability of AI4SG. It should be acknowledged that this challenge from the multiplicity and/or vagueness of the good is not merely a problem in computer science and related fields; it is the background condition of liberal democratic societies. AI4SG, therefore, requires a clearer articulation of the good in use, and assurance that this good is not arbitrary or idiosyncratic but shared by the society.
In addition, Moore (2019) rightly observes that calling some AI technologies "for the good" seems to imply that the technologies and technological solutions are intrinsically better than the social systems, which presumably are the sources of the problems that AI is supposed to solve. Green (2019) raises a related worry about this implicit technological solutionism. Floridi and colleagues (2020) recognize the above difficulties for AI4SG and propose seven essential factors for AI4SG projects to be considered as genuinely advancing the good: (1) falsifiability and incremental deployment; (2) safeguards against the manipulation of predictors; (3) receiver-contextualized intervention; (4) receiver-contextualized explanation and transparent purposes; (5) privacy protection and data subject consent; (6) situational fairness; and (7) human-friendly semanticization. Some of these factors, e.g., (2), (5), (6), and (7), are intended to ensure that the AI4SG projects in question satisfy the basic moral requirement of doing no harm, thereby being just (or morally right); the other factors, i.e., (1), (3), and (4), address more directly the nature of the good in AI4SG projects.
For instance, the falsifiability (in (1)) of AI4SG projects allows us to assess whether the AI technologies and AI-based solutions in use realize the values (and the good) they are designed and intended for. Accordingly, falsifiability prevents vague claims for the good being made by AI4SG projects. Relatedly, receiver-contextualization (in (3) and (4)) aims at aligning the values and the understanding of the good in AI4SG projects with those of the people who will be affected by the projects, which is to be achieved through consultation in the process of design and implementation and through explanation and rational persuasion of the purpose of the AI technologies and AI-based solutions. Receiver-contextualization, therefore, enables AI4SG projects to be grounded on a shared understanding of the good. More importantly, when receiver-contextualization is done right, the inputs from the users should broaden the scope of assessment for AI4SG to include the social and political dimensions of the problems that AI technologies and AI-based solutions are introduced to resolve, thereby preventing AI4SG from focusing only on immediate results and/or technological details.
The seven essential factors identified by Floridi and colleagues helpfully provide a procedural account of the good for AI4SG (Mansbridge 1998). However, insofar as a substantive account of the good is missing, AI4SG projects remain vulnerable to the critiques outlined above.

Confucian Dao, Harmony, and AI for Social Good
It is well to state at the outset that Confucianism does not distinguish sharply between the right and the good. Unlike the liberal view, which seeks to confine the good to personal matters, i.e., something to be determined by individuals themselves, both the right and the good are subject to a substantive normative ideal in Confucian philosophy. Accordingly, Confucians would reject the liberal confinement of the good to the personal sphere. Berberich et al. (2020, 22), for instance, translate the Confucian value of harmony into design questions for AI systems: "How should the system interact with humans to achieve smooth interactions, avoid causing offense and positively mediate human-human interactions?" (tactful interaction and mediation); and "Which information should the system not ask for, record, extract or share?" (tactful privacy). The inclusion of harmony broadens the scope of assessment for AI4SG through the emphasis on various situational factors and the primacy of enriching all parties affected by AI4SG projects, thereby addressing the concerns about the myopic vision of the good in computer science and related fields as well as the danger of technological solutionism in AI4SG projects.
So far, I have argued that Confucian dao, which encompasses both the good and the right, would provide the normative ground for AI4SG. I have also argued that harmony, as a normative standard, would broaden the scope of assessment for AI4SG projects and require them to optimize for all parties by balancing their interests (and needs). Dao and harmony, therefore, afford a different way to conceive of AI4SG. What is still missing in this Confucian reconceptualization of AI4SG, however, is a more specific vision of the good society for AI4SG projects to aim for. [3] There are many ways to characterize the Confucian vision of the good society, but I shall introduce the idea of datong (Grand Union) for our purpose. As Albert H. Y. Chen (2015) shows, datong is the expression of the common good in Confucian philosophy.
Datong expresses the ideal society of Confucianism, and a datong society is described in Li Yun as follows: "When the Grand course was pursued, a public and common spirit ruled all under the sky [tianxia wei gong]; they chose men of talents, virtue, and ability; their words were sincere, and what they cultivated was harmony. Thus men did not love their parents only, nor treat as children only their own sons. A competent provision was secured for the aged till their death, employment for the able-bodied, and the means of growing up to the young. They showed kindness and compassion to widows, orphans, childless men, and those who were disabled by disease, so that they were all sufficiently maintained. Males had their proper work, and females had their homes. (They accumulated) articles (of value), disliking that they should be thrown away upon the ground, but not wishing to keep them for their own gratification. (They laboured) with their strength, disliking that it should not be exerted, but not exerting it (only) with a view to their own advantage. In this way (selfish) schemings were repressed and found no development. Robbers, filchers, and rebellious traitors did not show themselves, and hence the outer doors remained open, and were not shut. This was (the period of) what we call the Grand Union."

From this description of the datong society in Li Yun, several observations can be made. Firstly, the datong society is characterized by a world that belongs to, and is shared by, the general public (gong), where the ruling authority and ruled subjects work to contribute to the (social) good and not for individual gain. Relatedly, the people of the datong society are characterized by their altruistic care for others and their aversion to self-centeredness. As Julia Tao (2000) and Li-Hsiang Lisa Rosenlee (2017) summarize, a datong society is one in which all are cared for. Finally, the datong society induces a condition of moral (self-)transformation, where occasions for selfish desires and other vices do not arise.
We can now rethink AI4SG with the idea of datong. The idea of datong demands that AI4SG be public-centered, i.e., AI4SG projects ought to be motivated and justified by the good of the general public and not by the interests of specific groups of individuals. Moreover, AI4SG should be based on altruistic care for all and should not aim for any personal (or corporate) advantages. Most importantly, AI4SG should not merely attempt to prevent, mitigate, or resolve problems adversely affecting human beings and the environment, but should, more fundamentally, transform individuals and social conditions such that the problems do not arise. Here, the idea of datong helps to refocus attention on the causes of the problems rather than their symptoms; and it also helps to counter technological solutionism, as the transformation is ultimately about people and society, not technology per se.
It is instructive to note that the Confucian reconceptualization of AI4SG, which I shall call AI for Datong, is (or will be) compatible with the AI4SG projects aiming to achieve the SDGs. However, AI for Datong differs from them by being (i) public-centered, (ii) care-centric, and (iii) transformative. In other words, AI for Datong takes the general public as the basis of normative assessment (or, the source of the good), is motivated by care, and aims to transform the situations. Hence, I contend that AI for Datong offers a formulation of AI4SG with a more explicit normative ground than merely referring to SDGs. One may be tempted to explain the "success" of the Health Code systems exclusively in terms of state coercion or a lack of concern for privacy in China.

[1] The AI HLEG defines AI systems as systems "that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected […] data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behavior by analyzing how the environment is affected by their previous actions" (AI HLEG 2019).

Concluding Remarks
[2] It should be noted that the translation of Tian as "Heaven" is not uncontroversial. Notably, Ames and Rosemont have argued that "Heaven" has an unnecessary and undesirable connotation of the transcendental realm in the Judeo-Christian tradition, which is not apparent in the Confucian tradition; and the translation of Tian as "Heaven" also downplays the moral(-political) connotation of the concept of Tian. Accordingly, they think that it is misleading to translate Tian as "Heaven" (Ames and Rosemont 1999, 46ff). So, I use this translation only for lack of a better term to convey the multifarious meanings of Tian.
[3] Alternatively, we can also look at the Confucian vision of the good life and examine how to design and implement AI to achieve this vision, see Wong (2019).
[4] The author wrote this article in his personal capacity. The views expressed in this article are his own and do not represent the views of his employer.

Author contribution
The entirety of this manuscript was prepared by Pak-Hang Wong.

Editor's notes
The editor responsible for the publication of this article was Rafael Capurro.
Style editing and linguistic revision of the wording in this text were performed by Prof. Adj. Hugo E. Valanzano (State University, Uruguay).