<p dir="ltr">We examine the ways AI developers engage in ethical discourse. AI ethics is defined as the values, principles, and techniques that guide moral conduct in the development and deployment of AI technologies. By comparing the plurality of claims that developers make about AI ethics, we aim to answer the question: How do AI developers evaluate their responsibility for the social implications of AI technologies? Based on qualitative analysis, this paper argues that in constructing professional responsibility and accountability, AI developers attend to different sets of epistemic and normative concerns, organized around recursive forms of judgment that we term repertoires of evaluation. We identify four essential repertoires, each anchored in notions of technical efficiency; however, developers mobilize these efficiency criteria differently when combining them with other values. This study connects high-level AI ethics guidelines with qualitative accounts of developers’ values and experiences, and proposes a theoretical framework for examining them.</p>