Artificial intelligence ethics has a black box problem

  • Original Article
  • Published in AI & SOCIETY

Abstract

It has become a truism that the ethics of artificial intelligence (AI) is necessary and must help guide technological development. Numerous ethical guidelines have emerged from academia, industry, government and civil society in recent years. While they provide a basis for discussion on appropriate regulation of AI, it is not always clear how these guidelines were developed, or by whom. Using content analysis, we surveyed a sample of the major documents (n = 47) and analyzed the accessible information regarding their methodology and stakeholder engagement. Surprisingly, only 38% report some form of stakeholder engagement (with only 9% involving citizens), and only 15% report the methodology by which they developed their normative insights. Our results show that documents produced with stakeholder engagement offer more comprehensive ethical guidance with greater applicability, and that the private sector is least likely to engage stakeholders. We argue that the current trend of enunciating AI ethical guidance poses not only the widely discussed challenge of applicability in practice, but also challenges of transparent development (the process often behaves as a black box) and of active engagement of diversified, independent and trustworthy stakeholders. While most of these documents consider people and the common good as central to their telos, engagement with the general public is significantly lacking. As AI ethics moves from the initial race to enunciate general principles toward more sustainable, inclusive and practical guidance, stakeholder engagement and citizen involvement will need to be embedded into the framing of ethical and societal expectations towards this technology.




Data availability

See supplemental material.

Code availability

Not applicable.

Notes

  1. While BKC indicated sampling 36 documents, two of those documents report the exact same principles: the G20 Ministerial Statement on Trade and Digital Economy (G20 Trade Ministers and Digital Economy Ministers 2019) adopts verbatim the principles of the OECD (Recommendation of the Council on Artificial Intelligence 2019). For the purpose of this study, we counted them as a single item and relied solely on the OECD document data.

  2. To ensure consistency in the categorization of our sample, we followed Fjeld and colleagues’ classification of authoring bodies: government, private sector, multi-stakeholder, inter-governmental organization, and civil society.

  3. These scales attempt to structure the degree of normative outputs’ completeness. Our scales differ from simpler assessments of AI ethics principles and guidance, such as Hagendorff’s scale (Hagendorff 2020) for evaluating the technical implementation of ethical goals and values, which is structured around three rather undefined levels: “yes”, “yes, but very few”, and “none”.

  4. It may appear paradoxical that multi-stakeholder and inter-organizational initiatives do not entail stakeholder engagement. In the documents classified as not engaging stakeholders, the development and drafting of ethical guidelines was carried out solely by internal committees; apart from individuals associated with the authoring organizations, no external stakeholders appear to have been involved.

  5. One might suspect that the year 2020 is an outlier because of the health crisis. However, stakeholder engagement can just as easily be carried out online (Dilhac et al. 2020), and 2020 was a banner year for ethical subjects blending normative considerations with AI uses (notably contact tracing, epidemiological analysis, and the automation of human functions deemed essential). The year could therefore just as well have been conducive to the enunciation of ethical benchmarks across the many AI sectors affected by the crisis.

  6. The AI industry is already expressing complaints about too much government regulation (Crews 2020; Castro and McLaughlin 2019).


Funding

This project has been funded by a Partnership Engage Grant from the Social Sciences and Humanities Research Council (SSHRC) of Canada.

Author information


Contributions

JCBP designed the study and wrote the first draft. EM and JCBP read all the documents and extracted all pertinent information for content analysis. MCR and VC helped resolve divergent interpretations. MCR, VC and EM critically revised the paper.

Corresponding author

Correspondence to Jean-Christophe Bélisle-Pipon.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest regarding this project.

Ethical approval

This study does not involve human participants.

Consent for publication

This study has not been submitted elsewhere and the authors consent to publication in the journal.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file1 (DOCX 26 KB)


About this article


Cite this article

Bélisle-Pipon, JC., Monteferrante, E., Roy, MC. et al. Artificial intelligence ethics has a black box problem. AI & Soc 38, 1507–1522 (2023). https://doi.org/10.1007/s00146-021-01380-0

