
IMPACT ASSESSMENT TOOL – EN [V4 – current]

Checklist 1 & Checklist 2, as of 07.01.2025

Purpose of the tool

Algorithmic systems and systems based on artificial intelligence (AI) can affect people and societies in ethically relevant ways. For this reason, those who use these systems bear a special responsibility. This impact assessment tool is a method to enable users to responsibly deploy algorithmic systems.

This impact assessment tool consists of two parts:

  1. Triage: The first step is to use the triage checklist to determine the possible ethically relevant impacts that the algorithmic system entails and that should be documented. This allows you to determine whether a transparency report needs to be drawn up at all for the algorithmic system in question and, if so, which questions need to be answered in it.
  2. Transparency Report: In a second step, you answer the relevant questions identified by the triage checklist. You declare what goals you are pursuing, whether these goals and the means to reach them are ethically justifiable, which ethically relevant effects the system could have, how you measure these effects, and which measures you are taking to achieve your goals in an ethically sound way. The compilation of these answers results in a transparency report that creates internal and external transparency and thus enables control and accountability.

Guidelines for the use of the impact assessment tool:

→ When are we supposed to fill in the tool? We recommend that you start answering this questionnaire as early as possible – ideally when planning the project – in order to initiate an active reflection process. You should then revise your answers during implementation and only complete them once the system has been implemented. This ensures that relevant information is collected at appropriate stages.

→ How should we answer the questions in the tool? Your answers should be complete and comprehensible, including to people not involved in the project. In checklist 2, it is up to you whether you decide to use bullet points or full sentences. As you can continually revise the answers during the process, it is also possible to start with short bullet points and reformulate them at a later stage. The purpose of the tool is to help you use an algorithmic system responsibly – thus, it is up to you to ensure that the resulting transparency report fulfills this function.

→ To whom do we show the resulting transparency report? To ensure responsible use, we recommend making the transparency report transparent to relevant parties. This fulfills two functions: Firstly, it raises your own standards for the answers in the transparency report. Secondly, transparency enables the relevant bodies to monitor and exercise control, which are essential elements of responsible use.

→ How do we deal with ethical challenges? If you, as a responsible user, are unable or unwilling to adequately address the ethical issues identified, you should re-evaluate the use of the specific algorithmic system or allocate additional resources to find solutions to the ethical challenges.

→ When do we have to reassess the system? If significant changes are made to the system or your internal processes after implementation, the checklists will need to be revisited to ensure that the original assessments remain valid.

→ Does this impact assessment tool guarantee legal compliance and sufficient AI expertise? This tool is not a legal compliance checker and does not replace the important task of ensuring that you as a deployer comply with legal requirements. The tool also does not eliminate the various other aspects (e.g. expertise, resources, training) that are required for the responsible use of algorithmic systems.

→ Can we use the tool for free? AlgorithmWatch offers this Impact Assessment Tool free of charge. If you would like support when evaluating your system or have questions, drop us a line at info@algorithmwatch.ch (Switzerland) or info@algorithmwatch.org (Germany). Otherwise, we also appreciate donations – to AlgorithmWatch CH (Switzerland) or AlgorithmWatch (Germany).

→ Is the tool available in other languages? Yes, it is also available in German.

Checklist 1: Transparency Triage for Algorithmic Systems

Function: The triage checklist identifies ethically relevant possible effects produced by the algorithmic system that should be documented. This will help you to determine whether a transparency report needs to be written and, if so, which questions need to be answered in it.

Note: Such a checklist can never fully cover all potential impacts for all contexts (there may be other risks to people, assets and society as a whole that are not mentioned here). Rather, it is a process-oriented tool, but the responsibility to ultimately minimize problems remains with you as the user.

Sources of harm, risks and wrongs

[1.1a] Data protection: Does the algorithmic system deal with sensitive categories of personal data, as defined by applicable legal norms?
Note: Consider which user data is fed into the system. Can this include “particularly sensitive” data (in accordance with Art. 5c of the Swiss Data Protection Act) or “special categories” (in accordance with Art. 9 of the EU General Data Protection Regulation) of personal data?
[1.1b] Could the content generated by the algorithmic system, or a recommendation, prediction or decision influenced by the algorithmic system, have an impact on people's privacy or private lives, including their family life?
[1.2] Cybersecurity: a) Do malicious parties have especially strong motives to hack the algorithmic system? b) Can a hacked system be used to achieve financial gain—including by means of blackmailing? c) Or can a hacked system be used to achieve political goals (including expressing political opposition against the system)?
[1.3] Individual decisions: Is the algorithmic system used for decisions about individuals, either: fully automated (making decisions without human review), or semi-automated (providing recommendations, predictions, or other inputs that influence the decision)?
Note: Answer “Yes” if the system either makes decisions about individuals automatically OR influences decisions through recommendations, predictions, or other inputs that affect which decision is ultimately taken.
[1.4] Legal rights: Is the system used to take, recommend, predict, or in other ways affect a decision about a legal duty or right of an individual? Is it used to generate outputs that affect a legal duty or right of an individual?
[1.5] Could the use of the algorithmic system have a significant impact on individuals? Which of the following scenarios would be conceivable in principle (regardless of whether they are very realistic or rather unlikely)?
Note: To answer this question, consider the stakeholders identified in Part A. A detailed legal assessment is not yet required to answer this triage question; it is about your initial assessment. Therefore, try to answer it without further effort with the help of your project team.
[1.6-1.9] Individual impact of the algorithmic decision (or the recommendation, prediction or decision influenced by the algorithmic system)
Note: This question is about the effects of content generated by the algorithmic system (such as text, images, videos or audio files) or predictions, recommendations or decisions influenced by the algorithmic system. Reflect on whether these outputs can be avoided, reversed or compensated for and select the appropriate answer option. If none of the options apply, do not check any box.
[1.10] Does the decision taken, recommended, predicted, or otherwise affected by the algorithmic system or the output generated by the algorithmic system concern any of the following areas of life or society?
Note: Tick those that apply, multiple options are possible.
[1.11] When the algorithmic system used by the public administration is or would be delegated to a private third-party (in its entirety or for aspects of its implementation), does this result in a change in … ?
Note: This question is only relevant for public administrations. Reflect on the possible effects of outsourcing and, if necessary, draw on the assessment of relevant experts in your organization. Tick the answers that apply. Several answers are possible.
[1.14-1.15] Algorithmic fairness
Note: A statistical proxy risk exists when a system evaluates people on the basis of characteristics that may be representative of other, possibly sensitive characteristics of their person: for example, hobbies can provide indicators of gender identity, or the number of years of professional experience can be an indicator of a person’s age (watch explanatory video). Procedural non-regularity risk typically emerges with continuous learning systems, i.e., designs in which the model keeps changing in response to learning from interactions during its actual implementation (and not just in the training and testing phase). If you are unable to assess this yourself, consult appropriate experts from your organization.

Autonomy

[1.16] Fully automated decisions: Is the algorithmic system used to take a decision on an individual without any subsequent human checks, vetting, or adaptation?
[1.17] Complexity risk: Does the algorithmic system rely on parameters, features, factors, or decision criteria that are not normally considered for the task the system has to solve?
Note: Compare the algorithm’s input to the parameters, features, factors, or criteria most experts normally consider to solve the corresponding task.
For example: When a tax fraud algorithm introduces new criteria for selecting tax fraud suspects that have never been used before, select “yes”.
[1.18] Human in the Loop risk: Do users who handle the system’s result lack sufficient ability, right, power, authority, or resources to question the algorithmically generated content, prediction, recommendation, or decision?
Note: Select “Yes” if those using the system lack the time, processes, expertise, or permission to scrutinize the output of the system; lacking any one of these aspects is sufficient to answer “yes”.
[1.19] Third-party infrastructure: Does the technical system rely on third-party infrastructure that the deploying entity does not have unrestricted control over and/or access to, e.g., data sets, servers, or computing power?
Note: If you are not sure about this question, consult the relevant experts in your organization.

Generative AI

[1.20N] Do you use generative AI for the following purposes:
Note: “Generative AI” is able to generate entirely new content (text, pictures, and/or audiovisual content) from prompts. Tick those that apply, multiple options are possible.
[1.21N] Have you modified and adapted an open source or customizable AI model for use in a generative AI tool (such as fine-tuning outputs based on any of the GPT models by OpenAI, or adapting an open source model such as Meta’s Llama models)?
[1.22N] Have you developed generative AI tools internally?

Additional control

Do you believe the project raises additional ethical issues (not explicitly considered above)?

Result: You must write a transparency report

Please confirm your acknowledgement by selecting “acknowledgement” from the drop-down menu above. This will open the specific sections of the Transparency Report that you are required to fill in, customized based on your replies to the triage questions. Please read the instructions below the reply boxes carefully.

Checklist 2: Transparency Report

In this second checklist, you declare what goals you are pursuing with the AI application and what values these are based on, what ethically relevant effects it could have, how you measure these effects, and what measures you are taking to achieve your goals. The questionnaire is individualized and focuses on the key ethical issues previously identified in the triage checklist. The compilation of these answers results in a transparency report that creates internal and external transparency and thus enables control and accountability.

The transparency report follows the following structure (three sections):

  1. – Declare objectives: What objectives are pursued and implemented through the system, and what are the principles, standards, ethical frameworks and values it abides by?
  2. – Declare methods to measure achievement of these objectives: How do you measure whether you are achieving the objectives declared above?
  3. – Declare results: What did your measurements reveal, i.e. how is the system performing in relation to your objectives declared above?

Section 1: Objectives

Questions 2.1 to 2.6 of the checklist: Try to answer these questions before you design the algorithmic system. You can return to them at a later stage and check whether you need to revise your answers. However, make sure that the old answers and thus the previous methods and processes remain traceable and comprehensible for members of your organization.

Value Transparency

This section is about being transparent about what the system is, what problems you want to solve with it, what goals you are pursuing with it, and the ethical constraints it is subject to.
Guide to answer: Begin with a brief introduction to the algorithmic system, including its title, how it works technically, and a general outline. This should give a clear and succinct picture of what the system is about and how it works on a technical level; keep it short to get the main points across. It is possible to answer using bullet points or short notes.

Recommended cross-check with: Project team

If necessary, have your draft answer checked by the project team to ensure that all relevant perspectives are included.

Guide to answer: Clearly define the specific problem or challenge the algorithmic system is designed to address in more detail. It should be detailed enough to give an understanding of the domain and context in which the system will operate.

Recommended cross-check with: Project team

If necessary, have your draft answer checked by the project team to ensure that all relevant perspectives are present.

Guide to answer: Describe the intended purpose of the algorithmic system, including the outcomes that you foresee or wish to achieve. Be specific about what results or outcomes the system is designed to produce. This could include efficiency or performance improvements, decision-making support, automation of tasks, cost reduction, staff satisfaction, quality assurance, etc. If there are several objectives, you can also use a bulleted list.

Recommended cross-check with: Project team

Ethical and legal requirements and dealing with potential negative effects of the system

In this section, you make transparent which ethical and legal requirements the system must fulfill, which potential damages or legal infringements could arise and how they are countered.
Guide to answer: For this question, explain which legal and non-legal guidelines or specifications are applicable to the system and its objectives formulated here. These may be non-binding internal guidelines, internal process requirements, external guidelines or legal requirements.

Recommended cross-check with: Relevant internal experts

Consult internal experts (e.g. controlling, legal department, the relevant department responsible for the topic, etc.) to answer this question. Keep it short – an answer using short notes and a list is sufficient.

Guide to answer: Describe briefly and concisely what personal data is processed, what specifications you have made with regard to data protection and what data protection objectives and standards you meet.

How do you define and protect the privacy of persons affected by the algorithmic system? Explain what aspects of privacy are affected and consider different types of privacy risks (e.g. individuals may have less control over new information produced about them; confidential information about individuals may be exposed to the public or unwanted parties; there may be too many connections between areas of life that people prefer to keep separate – e.g. sports, health, family, friendship, work, politics, religion, charity work, etc.). Then explain how your privacy considerations influenced the decision to use or not use the algorithmic system for certain purposes or to exclude certain input data. For this question, please specifically address the topics listed in Checklist 1 – Question 1.1.

Recommended cross-check with: Data protection officer

To answer this question, you may need to contact your organization’s data protection officer or have them review your draft answer.

Guide to answer: In this part of the report, please specifically address the issues mentioned in checklist 1—question 1.2. Describe what cybersecurity risks exist (Do malicious actors have a particularly strong interest in hacking the algorithmic system? Can they use it to make a significant financial gain – including through blackmail? Or can a hacked system be used to achieve political goals?). State your specific cybersecurity requirements, if relevant.

Recommended cross-check with: Cybersecurity managers

To answer this question, you may need to contact the cybersecurity officers in your organization or have them review your draft answer.

Guide to answer: Please explain what you consider to be fair treatment of people affected by the system. Then address whether and how your system could have an unfair impact on different groups or individuals and/or unfairly interfere with their fundamental rights.

Consider the issues mentioned in Checklist 1 – Questions 1.14 to 1.15 (Statistical proxy risk: Are predictions made on the basis of human behavioral or personal characteristics?; Procedural non-regularity risk: Is the system continuously learning and changing its responses based on new knowledge and interactions during use or is it possible that it makes different decisions for two cases that differ only in the timing of the decision?) Answer here briefly and precisely.

Recommended cross-check with: Ethics officer

To answer this question, you may need to seek support from people who are responsible for the fairness of the system (and who have the appropriate technical, ethical and legal understanding to assess the question) or have them check your draft answer.

Guide to answer: Describe the requirements that must be met by explanations that make the results of the system interpretable. What are the inputs and outputs of predictions or decisions and who is expected to understand them? Why is explainability important in this context? Who has to understand what and for what purposes? Who should be able to distinguish errors from valid outputs to guarantee trustworthiness? What type of explanations would be important? Can the output be used even if the reason behind the output is poorly understood? Please specifically address the issues mentioned in checklist 1—questions 1.17 and 1.18 (Complexity risk; Human in the loop risk).

Answer briefly.

Recommended cross-check with: Project team

If you are unsure, think about who could help you answer the questions and ask the project team if necessary.

Guide to answer: Explain how you make outputs of generative AI recognizable as AI-generated. Evaluate the current state of knowledge regarding generative algorithms/AI (e.g. the timeliness and quality of training materials provided on this topic, the long-term plans for the introduction of generative AI, the involvement of experts, the structures for detecting and eliminating errors, and the usage guidelines).

Recommended cross-check with: Project team

If necessary, have your draft answer checked by members of the project team who are responsible for these topics.

Guide to answer: Refer to your answers to 1.5 and 1.10 and explain how the algorithmic system has an impact on the relevant areas of human life, if any. Try to be brief and precise despite the complexity of the topic. You are also required to fill in this section if the system makes decisions on individuals that are “high impact” (in the sense that all the answers to questions 1.6, 1.7, 1.8, and 1.9 were negative). In that case, please explain why none of the impact-mitigating strategies (1.6, 1.7, 1.8 or 1.9) is feasible.

Recommended cross-check with: Experts in ethics and law

To answer this question, seek advice from people in your organization who have the appropriate ethical and legal understanding.

Guide to answer:

Have you set goals and/or defined guidelines and requirements regarding the environmental performance of the system? Do you intend to measure the environmental performance of the system with regard to the following aspects – and if so, how?

  1. Energy consumption (during the development and use of the algorithmic system)
  2. CO2 and other greenhouse gas emissions
  3. Indirect resource consumption (e.g. production and disposal)

Further information and guidelines on the environmental compatibility of AI systems: Step by step to sustainable AI

Self-assessment tool for the sustainability of AI systems: SustAIn assessment tool

Recommended cross-check with: Sustainability managers

To answer these questions, seek advice from the relevant departments in your organization that work at the interface of IT and environmental sustainability.
Guide to answer: When answering this question, provide information and declare your objectives and constraints, if any, on:

I. Market diversity: Are you deploying a system from a dominant supplier? Are there risks/dependencies associated with that? For more information on these questions, please visit: Step by step to sustainable AI

II. Working conditions and jobs: If the algorithmic system is deployed at the workplace, what is the foreseeable impact on workers, and what measures have you taken to avoid or compensate for negative impact? What do you know about the working conditions before, during, and after development of the system?

Recommended cross-check with: External provider and/or internal contact person to provider

Guide to answer: Document here the intended functionality and specific requirements that went into the design or fine-tuning of an existing generative AI model. What specific features of a generative AI system have you implemented or modified? Also document the recognized model constraints and usage instructions. Example: If a software product is based on a commercial or open source model (e.g. ChatGPT from OpenAI or Llama from Meta) and customizations have been made for the specific use case, then you should indicate here the additional features or additional constraints that you have incorporated (or had incorporated).

Recommended cross-check with: Person responsible for development / CTO

Accountability / Transparency

This section is about making responsibilities around your algorithmic system transparent.

Who is responsible for the design of the algorithmic system within your institution (level of project organization)? And what procedures are used to document the design choices and development process of the algorithmic system?

Guide to answer: Who in your organization is responsible for the design of the algorithmic system (at project organization level)?

If an externally developed model is procured, who is responsible for defining and communicating design specifications and requirements?

What procedures are used to document the design decisions and the development process of the algorithmic system?

How were employees and other stakeholders for whom the algorithmic system is relevant involved in design decisions?

Have designers and developers received education or awareness training to help them become aware of biases and prejudices that the system may exhibit?

Recommended cross-check with: Project team, tech owner/CTO

Involve at least the team leads responsible for the design of the algorithmic system and their superiors (e.g. the CTO).

Guide to answer:

Who is responsible for the implementation of the system and its results (and which department or position does the person hold within your organization)? Name at least the people who will oversee the implementation of the project in your organization.

What procedures are in place to ensure that the algorithmic system is used in a lawful, ethical, safe and appropriate manner? List the procedures to inform employees for whom the system is relevant; to ensure that employees understand the system, its operation and limitations in order to use it appropriately; to ensure that employees can trust the system; and any mitigation measures you have put in place in the event that the system replaces certain tasks (e.g. retraining).

What contractual obligations exist, and are individual users liable for incorrect outputs of the algorithmic system?

Recommended cross-check with: Project team

Guide to answer: Who is responsible for managing responses and feedback from people who use or are supported by the system? At a minimum, identify the people who oversee feedback to and opposition to the system, including issues related to bias, discrimination, and poor performance of the system. (Mechanisms to flag and document these issues do not need to be described here; they are addressed below.)

Recommended cross-check with: Project team

Section 2: Methods for measuring the achievement of goals

Questions 2.7 to 2.19 of the checklist: Ideally, answer these questions after you have tested the algorithmic system. You can return to them at a later date and check whether you need to revise your answers. However, make sure that the old answers and thus the previous methods and processes remain traceable and comprehensible for the members of your organization.

Transparency on implementation and control

This section is about making procedures for implementing and monitoring the system transparent.
Guide to answer: What methods were used to test and measure the performance of the system in relation to its main objective (see question 2.1)? How accurate, robust, reliable and efficient are the recommendations, predictions, classifications or decisions made by the algorithmic system?

Recommended cross-check with: Project team

Seek advice on this question from the people responsible for the tests in the project team.

Guide to answer: What methods were used to identify stakeholders (individuals and groups) directly affected by the outputs/predictions/recommendations/decisions of the system? What methods, if any, were used to identify risks and rights associated with these stakeholders? How did you assess whether these are marginalized groups and consider whether their fundamental rights may be affected? If you used the stakeholder analysis methodology in Part A, it is sufficient to mention it. You do not need to list the stakeholders themselves; this will follow in question 2.8.3.

Recommended cross-check with: Project team

Guide to answer:

What methods were used to identify persons or groups who are (i) indirectly affected by the algorithmic system (i.e. who are exposed to different risks than those directly affected), or persons or groups who are (ii) generally affected by the digital transformation (e.g. employees, customers, public administration staff)? How were these persons/groups or their representatives informed and consulted?

For group (i), also refer to the stakeholder analysis in Part A.

For group (ii), consider the impact of the change on the public IT infrastructure, public data assets or intangible assets of the public sector (e.g. powers and competences – e.g. if internal experts are replaced by a commercial system so that the public sector is no longer able to provide the same competences internally).

Recommended cross-check with: Project team

Guide to answer: Who are your stakeholders? Insert a summary of your answers to Part A, “Stakeholder analysis”.

Guide to answer: What methods have been used to assess risks of adverse effects deriving from predictable and unavoidable errors due to the system’s imperfect accuracy (e.g., when the system uses statistics)? What procedures are in place to deal with the risks of such damage? What procedures are in place to deal with unforeseeable system errors and malfunctions? How can employees record and document feedback and complaints about the system? How are these processed and how are employees informed of the outcome?

Keep your answer short and precise.

Recommended cross-check with: Project team

Guide to answer: What methods and procedures have you used to screen generative AI-generated content for bias and/or potentially offensive material, including potentially offensive stereotypes (e.g., related to presumed or ascribed origin and gender identity, racialization, sexual orientation, ability, age, class, or culture)? What procedures are in place and what measures have been taken to avoid or reduce the risk of such content?

This question specifically concerns content created by generative AI. Measurements of the accuracy, robustness, reliability and efficiency of predictions, classifications, recommendations and automated decisions should be included in 2.7.

Recommended cross-check with: Project team

Guide to answer: What methods have you used to assess the risk of fundamental rights being adversely affected by the algorithmic system? Have you taken measures to avoid negative impacts of the algorithmic system on the fundamental rights of affected people (2.2.5) and ensure that these measures are fair, transparent and accountable – and if so, what are they? Refer to other sections of this report whenever possible, explaining how the design for fairness (2.2.3), explainability (2.2.4), feedback management (2.5), error correction (2.9) and monitoring (2.14) protects the rights of stakeholders. Important: It is not necessary to go into the assessments regarding data protection and cybersecurity here, as you have already answered or will answer these in 2.2.1 (data protection) and 2.11 (cybersecurity).

Recommended cross-check with: Project team and/or ethical or legal experts

Guide to answer: What methods have you used to assess cybersecurity risks? What cybersecurity measures are in place? Explain the methods used to protect the system from malicious interactions. Also consider whether cybersecurity risks could jeopardize privacy. Please specifically address the topics mentioned in Checklist 1 – Question 1.2.

Recommended cross-check with: Cybersecurity officer

Guide to answer: What procedures were used to define the fairness of the system and to measure, test and monitor any bias?

Describe your fairness measures and justify their use (due to differences between demographic groups or data quality issues). Refer to the individual fairness requirements described in 2.2.3. The issue mentioned in questions 1.14 (statistical proxy) and 1.15 (procedural regularity) must always be explicitly addressed in this section of the report.

Recommended cross-check with: Ethics experts

To answer this question, you might need to consult or check your draft answer with those responsible for the fairness of the system (who have the relevant technical, ethical and legal expertise to answer the question).

Guide to answer: How are outputs of the system explained to those interacting with the system and to individuals affected by the system?

In this part of the report, please specifically address the risks emerging from the issues mentioned in checklist 1—questions 1.17 and 1.18. Please also consider how to explain and justify decisions when procedural regularity (1.15) is not satisfied (it is possible for the system to take different decisions when provided with the same input at two different points in time).

Assess whether operators/users of the algorithmic system have the authority, resources (time, financial resources) or capacities to question the output – or whether there is a risk that they will accept the result unquestioningly due to a lack of resources and authorizations (risk of automation bias).

Recommended cross-check with: Project team

Guide to answer: Is system deployment continuously monitored after the testing phase … a) at all times? b) within a given timeframe? c) through which measures? (The role responsible for it should be mentioned in 2.5.)

Include here a discussion of how you actively counteract automation bias (if included in 2.13).

Recommended cross-check with: Project team

Guide to answer: Are people informed that they are affected by an algorithmic system’s prediction, recommendation or decision? If so, how? How can people challenge or reject recommendations/decisions influenced by the system? What motivations exist to object to the output of the algorithmic system?

Recommended cross-check with: Project team

Part 3: Results

Performance Transparency

Try to answer questions 2.16-2.19 when testing the algorithmic system. You can return to them at a later date and check whether you need to revise your answers. However, make sure that the old answers and thus the previous methods and processes remain findable and comprehensible for the members of your organization. This section is about making the performance of the system transparent.

Guide to answer:

How does the system perform relative to the chosen relevant metrics, to the process previously in place, if any, or to established benchmarks, if available? Are expectations of the system’s objectives, performance, accuracy, efficacy or cost-effectiveness, as outlined in the original plan, met? Evaluate the performance in relation to the problem definition (2.1.2) and objectives (2.1.3) and include any consequences (positive and negative) that occurred.

Also state here,

(i) what your measurements of the fairness of the system have shown, based on the methods declared in question 2.12 (if this applies to your system and you had to answer this question);

(ii) what your measurements of the explainability of the system have shown, based on the methods declared in question 2.13 (if this applies to your system and you had to answer this question);

(iii) what your measurements have shown in relation to sustainability, based on the sustainability objectives declared in question 2.2.6 (if applicable).

Recommended cross-check with: Ethics experts

Guide to answer: What are the remaining security and privacy risks? Are they reasonably acceptable and proportionate, and if so, why?

Recommended cross-check with: Data protection or cybersecurity officer

Guide to answer:

Please describe relevant unresolved biases or possible sources of unfairness in the system and explain why they cannot be remedied.

It is not to be expected that a system can be fair in every respect. Most measures of algorithmic fairness are in conflict with each other: for example, equal false positive and false negative rates across groups cannot, in general, be achieved at the same time as equal predictive accuracy for positive and negative predictions. A system that appears fair in one dimension may therefore appear unfair from a different point of view. You should try to make the system fair according to the idea of fairness that seems most defensible to you, and explain and justify this choice. Having done this, you need to be transparent about the fairness and “freedom from bias” definitions that you do not meet.
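The conflict between fairness measures can be made concrete with a small numerical sketch (the numbers are illustrative): when two groups have different base rates, a classifier with identical error rates in both groups cannot also have identical positive predictive value.

```python
# Sketch: why fairness metrics conflict when base rates differ.
# If both groups see the same TPR and FPR but have different base rates,
# the positive predictive value (PPV) necessarily differs between them.
# All numbers are illustrative.

def ppv(base_rate, tpr, fpr):
    """Positive predictive value for a group with the given base rate."""
    tp = base_rate * tpr          # expected true-positive share
    fp = (1 - base_rate) * fpr    # expected false-positive share
    return tp / (tp + fp)

tpr, fpr = 0.8, 0.1               # identical error rates in both groups
ppv_a = ppv(0.5, tpr, fpr)        # group A: 50% base rate -> PPV ~ 0.889
ppv_b = ppv(0.2, tpr, fpr)        # group B: 20% base rate -> PPV ~ 0.667
```

Equalizing PPV instead would force the error rates apart, which is why the report must state which fairness definition is prioritized and why.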

Recommended cross-check with: Ethics officer

Evaluation Transparency

Section 3: after deployment

Answer questions 2.20-2.22 after you have implemented the algorithmic system, i.e., when the system is being monitored during its use. You can return to them at a later date and check whether you need to revise your answers.

However, if you do so, do not simply delete and edit your previous answer, but add the new answer version with a note of the date, so that the old answers and thus the previous methods and processes remain visible and traceable.

Guide to answer: In retrospect, how did the system perform in relation to the requirements set out in 2.1 and 2.2? Does the performance during system use deviate from expectations?

Recommended cross-check with: Project team

Guide to answer: During monitoring, have predictions/recommendations/decisions by the system ever been challenged … a) by system end-users? b) by individuals affected by the system’s output?

Recommended cross-check with: Project team

Guide to answer: During monitoring, have there been mistakes due to a misleading or factually inaccurate output of generative AI? Has any output of generative AI been reported as potentially harmful? How has generative AI impacted performance or cost for you as a user?

Recommended cross-check with: Project team

Guide to answer: Please refer to section 2.2 and all subsequent questions related to the ethical requirements mentioned in 2.2.

Recommended cross-check with: Project team and/or ethics experts

If you click on “save”, your answers will be saved for 360 days. You will receive a link where you can continue to work on and edit your answers.