| Clause 1 (Preliminary) | 1. This Act may be cited as the Artificial Intelligence Act, 2026. | |
| Clause 2 | 2. In this Act— “artificial intelligence” means a machine-based system or collection of technologies that leverage machine learning, data processing, algorithmic systems, or other methods to operate with varying levels of autonomy and adaptiveness, inferring outputs such as predictions, content, recommendations, or decisions from inputs, and includes systems or technologies that perform tasks typically requiring human intelligence, such as automated decision making, language processing, and computer vision; | This definition is comprehensive and provides a clear meaning of the term artificial intelligence. |
| Clause 2 | “Cabinet Secretary” means the Cabinet Secretary for the time being responsible for matters relating to information, communication and technology; | The designated Cabinet Secretary aligns with the subject matter of the Bill. |
| Clause 2 | “Advisory Committee” means the advisory committee established under section 17; | This definition is clear and appropriate. |
| Clause 2 | “Artificial Intelligence Commissioner” means the person appointed under section 5; | This definition is clear and appropriate. |
| Clause 2 | “data subject” has the meaning assigned to it under the Data Protection Act; | This definition is a good use of existing laws as the Data Protection Act defines a data subject as an identified or identifiable natural person. However, the definition may carry a limitation as it excludes legal persons, yet artificial intelligence can potentially harm and affect businesses and other non-natural entities. |
| We propose that the Bill should be clear that its protection extends to both natural and legal persons. |
| Clause 2 | “deployer” means a natural or legal person who puts an artificial intelligence system into service or uses it under their authority, but excludes end-users acting in a personal non-professional capacity; | This definition appropriately identifies the person or entity responsible for putting an AI system into use and excludes individuals using AI for personal, non-professional purposes. This distinction is practical and reduces any unnecessary regulatory burden on ordinary users. |
| Clause 2 | “generative artificial intelligence” means an artificial intelligence system capable of generating text, images, audio, video, or other content based on learned patterns from data inputs; | The definition is clear and practical, particularly in identifying systems that create content such as text, images, audio, or video. It reflects current technological developments and is easy to understand. |
| Clause 2 | “high-risk artificial intelligence system” means an artificial intelligence system that poses significant risks to health, safety, fundamental rights, or societal welfare, as prescribed by regulations; | This definition clearly identifies high-risk artificial intelligence systems to include those that pose specific types of risks to society. |
| Clause 2 | “Office” means the office of the Artificial Intelligence Commissioner established under section 4. | This definition clearly identifies the entity that will be responsible for oversight and enforcement of the Act. |
| Clause 2 | “provider” means a natural or legal person who develops an artificial intelligence system or has it developed and places it on the market or puts it into service under their own name or trademark; | The definition of a provider aligns with international practice by covering those who develop or place AI systems on the market under their name. |
| Clause 2 | “regulatory sandbox” means a controlled environment for testing artificial intelligence under regulatory oversight. | This definition is useful in promoting innovation by allowing controlled testing of AI systems under regulatory supervision. |
| Clause 2 | “synthetic media” means media content generated or manipulated using generative artificial intelligence that depicts events, speech, or appearances that did not occur; and | This definition is particularly useful for clause thirty-five (35). |
| Clause 2 | “user” means a natural or legal person who interacts with or is affected by an artificial intelligence system, including end-users in a personal or professional capacity. | The definition of a user is broad and inclusive, covering both individuals and organizations that interact with or are affected by AI systems. |
| Clause 4 | 4. (1) There is established the Office of the Artificial Intelligence Commissioner. | Clause four (4) creates a regulator tasked with overseeing Artificial Intelligence in Kenya. Further, this clause centralizes expertise and accountability. |
| Establishment of the Office of the Artificial Intelligence Commissioner | (2) The Office is designated as a State Office in accordance with Article 260(q) of the Constitution. | However, there is a potential overlap with Kenya’s Office of the Data Protection Commissioner and cybercrime enforcement agencies. We propose that the roles be clearly demarcated to avoid any overlap. |
| (3) The Office shall comprise the Artificial Intelligence Commissioner as its head and accounting officer, and other staff appointed by the Commissioner. |
| (4) The Office shall be independent in the performance of its functions and exercise of its powers under this Act. |
| (5) The Office shall be a body corporate with perpetual succession and a common seal and shall, in its corporate name, be capable of— |
| (a) suing and being sued; |
| (b) taking, purchasing or otherwise acquiring, holding, charging or disposing of movable and immovable property; |
| (c) entering into contracts; and |
| (d) doing or performing such other things or acts necessary for the proper performance of its functions, and which may be lawfully done or be performed by a body corporate. |
| Clause 5 | 5. (1) The Public Service Commission shall, whenever a vacancy arises in the position of the Artificial Intelligence Commissioner, initiate the recruitment process. | This clause proposes a rigorous recruitment and appointment process and enhances meritocracy. Further, the checks and balances with the Public Service Commission allow for transparency and accountability in the recruitment exercise. |
| (2) The Public Service Commission shall, within seven days of being notified of a vacancy under subsection (1), invite applications from persons who qualify for nomination and appointment for the position of the Artificial Intelligence Commissioner. |
| (3) The Public Service Commission shall within twenty-one days of receipt of applications under subsection (2)— |
| (a) consider the applications received to determine their compliance with this Act; |
| (b) shortlist qualified applicants; |
| (c) publish and publicise the names of the applicants and the shortlisted applicants; |
| (d) conduct interviews of the shortlisted persons in an open and transparent process; |
| (e) nominate three qualified applicants in the order of merit for the position of Artificial Intelligence Commissioner; and |
| (f) submit the names of the persons nominated under paragraph (e) to the President. |
| (4) The President shall nominate and, with approval of Parliament, appoint the Artificial Intelligence Commissioner. |
| Clause 6 | 6. (1) A person shall be qualified for appointment as the Artificial Intelligence Commissioner if the person— | We note that the qualifications bar is high which will in turn ensure a high caliber in the eventual office holder, while also maintaining the same standard of qualification as that required for similar positions, including the Data Protection Commissioner. |
| (a) holds a master’s degree in artificial intelligence, computer science, information technology, engineering, data science, law, ethics or a related field from a university recognized in Kenya; |
| (b) has at least ten years' experience in a relevant field, including artificial intelligence governance, data protection, technology policy or regulatory oversight, ethics, human rights and risk management; |
| (c) has at least ten years’ experience in the management of public or private institutions; and |
| (d) meets the requirements of Chapter Six of the Constitution. |
| (2) The Artificial Intelligence Commissioner shall hold office for a term of five years and shall be eligible for re-appointment for one further term of five years. |
| Clause 7 | 7. Before assuming office, the Artificial Intelligence Commissioner shall take and subscribe to the oath or affirmation of office as prescribed in the Schedule. | A formal oath pledging fidelity to the Constitution and ethical conduct is standard for public officers. This ensures accountability and it aligns with good governance by legally obligating impartiality and integrity. |
| Clause 8 | 8. The Office of the Artificial Intelligence Commissioner shall become vacant if the holder — | This clause provides for clear scenarios through which the office of the Artificial Intelligence Commissioner might become vacant. We propose that the interim appointment process or an acting office holder should also be clearly provided for to avoid power vacuums and ensure continuity of functions pending recruitment of a substantive appointee. |
| (a) dies; |
| (b) resigns from office by notice in writing addressed to the President; |
| (c) is convicted of an offence and sentenced to imprisonment for a term exceeding six months without the option of a fine; |
| (d) is removed from office under section 9. |
| Clause 9 | 9. (1) The Artificial Intelligence Commissioner may be removed from office by the President on the recommendation by the Cabinet Secretary only for— | This clause protects the integrity of the Office and provides checks on abuse or incompetence. If implemented well, it safeguards the Commissioner from arbitrary dismissal, thus encouraging independent action. We propose that high standards of proof and a transparent process be maintained for removal, in order to preserve trust and independence. |
| (a) serious violation of the Constitution or any other law; |
| (b) gross misconduct, whether in the performance of the functions of the Office or otherwise; |
| (c) physical or mental incapacity to perform the functions of the office; |
| (d) incompetence; or |
| (e) bankruptcy. |
| (2) A person desiring the removal of the Artificial Intelligence Commissioner on any ground specified under subsection (1) may present a complaint to the Public Service Commission setting out the alleged facts constituting that ground. |
| (3) Subject to Article 47 of the Constitution, the Public Service Commission shall consider the complaint and, if satisfied that the complaint discloses a ground under subsection (1) shall— |
| (a) investigate the matter expeditiously; |
| (b) report on the facts; and |
| (c) make a recommendation to the Cabinet Secretary. |
| (4) Prior to any action under subsection (3), the Artificial Intelligence Commissioner shall be— |
| (a) informed, in writing, of the reasons for the intended removal; and |
| (b) offered an opportunity to put in a defence against any such allegations. |
| Clause 10 | 10. (1) The functions of the Office shall be to— (a) oversee the implementation and enforcement of this Act; | This clause centralizes all AI related oversight in one agency, which can provide coherent guidance. However, there exists an overlap risk because Kenya already has data protection, ICT and cybersecurity regulators. |
| (b) conduct risk assessments of artificial intelligence systems; | We propose that upon implementation the Office should prioritize collaborative frameworks to avoid conflicting directives. The Office should also provide clear guidance on each function, and adequate funding and expertise should be secured so that the Office can realistically fulfill these duties. |
| (c) perform conformity audits and post-market surveillance of artificial intelligence systems; |
| (d) assess high-risk artificial intelligence systems to ensure compliance with ethical standards and risk mitigation requirements; |
| (e) develop policies, guidelines, codes of practice and standards on artificial intelligence governance, ethics, safety, risk classification and responsible deployment in consultation with the relevant agencies and the public; |
| (f) promote responsible development, deployment and use of artificial intelligence systems in Kenya; |
| (g) establish and manage regulatory sandboxes to facilitate safe innovation, testing and piloting of artificial intelligence while mitigating risks and promoting local solutions; |
| (h) receive and investigate complaints relating to artificial intelligence systems, including harms such as bias, discrimination or infringement of rights; |
| (i) collaborate with relevant national, regional and international bodies to align Kenya’s artificial intelligence governance with best practices, standards and treaties; |
| (j) advise the National Government and its entities and the county governments on matters relating to policy integration of artificial intelligence in devolved functions and international best practices in artificial intelligence; |
| (k) promote public awareness and education on artificial intelligence; |
| (l) develop capacity building programs at national and county levels to enhance understanding, adoption and ethical use of artificial intelligence; |
| (m) conduct research and monitor trends in artificial intelligence, including foresight studies on environmental impacts and job displacement, to inform policy and recommend regulatory updates addressing evolving risks and opportunities; |
| (n) maintain a public register of high-risk artificial intelligence systems, including those used by county governments; |
| (o) promote equitable access to artificial intelligence infrastructure and benefits, including through partnerships for digital inclusion in underserved areas; and |
| (p) perform any other functions conferred on it by this Act or any other written law. |
| (2) The Office of the Artificial Intelligence Commissioner may, in the performance of its functions collaborate with such other relevant bodies as it considers appropriate. |
| Clause 11 | 11. (1) The Office shall have all powers necessary for the proper performance of its functions under this Act and, in particular, but without prejudice to the generality of the foregoing, the Office shall have power to— | This clause empowers the Commissioner to audit and inspect AI deployments, which is essential in promoting compliance. |
| (a) enter premises and inspect artificial intelligence systems, records or data upon reasonable notice; | As much as this clause aims to promote compliance, there may be a privacy trade-off where accessing training data could conflict with data protection laws. We propose that strict safeguards be put in place to prevent this. |
| (b) require the production of records, documents or information relating to artificial intelligence systems; |
| (c) issue enforcement notices, orders or directives to ensure compliance; |
| (d) impose administrative fines for non-compliance as prescribed by regulations; |
| (e) summon persons to give evidence or produce documents; |
| (f) collaborate with international bodies on artificial intelligence matters; |
| (g) establish an appeals mechanism for decisions made by the Office; and |
| (h) delegate any of its powers or functions to authorized officers. |
| (2) In the exercise of its powers under this section, the Office shall comply with the provisions of the Constitution relating to fair administrative action. |
| Clause 12 | 12. (1) The Office may appoint such deputy commissioners, assistant commissioners and other staff as may be necessary for the proper discharge of its functions under this Act, upon such terms and conditions of service as it may determine, in consultation with the Salaries and Remuneration Commission and the Public Service Commission. | This clause ensures the Office can employ technical and administrative personnel. The Office’s ability to recruit staff to aid the implementation of the legislation, once enacted, is essential. |
| (2) The staff appointed under subsection (1) shall possess such qualifications and experience as may be prescribed by regulations. |
| Clause 13 | 13. The Artificial Intelligence Commissioner and staff of the Office shall be paid such remuneration and allowances as the Salaries and Remuneration Commission may advise. | The Salaries and Remuneration Commission is best placed to provide remuneration and allowance advisories for staff of the Office of the AI Commissioner, which would allow for the hiring of appropriate talent from the market while maintaining harmony with the salary scale applicable to other public officers. |
| Clause 14 | 14. The Artificial Intelligence Commissioner may, subject to such conditions as they may impose, delegate any power conferred under this Act or any other written law to a regulator established through an Act of Parliament. | This clause provides operational flexibility. We propose that accountability be preserved by ensuring that the Commissioner remains responsible for the actions of delegates. |
| Clause 15 | 15. The Artificial Intelligence Commissioner or any staff of the Office shall not be held liable for having performed any of their functions in good faith and in accordance with this Act. | The standard immunity clause for regulators protects the Commissioner from lawsuits when performing statutory duties in good faith, encouraging bold enforcement decisions. However, this should not exempt wilful misconduct or gross negligence. |
| Clause 16 | 16. The Artificial Intelligence Commissioner, or any staff of the Office, shall not, unless with lawful authority, disclose any information obtained for the purposes of this Act. | Given the sensitive nature of AI systems, a confidentiality obligation is essential. We propose that this clause should however be limited to disclosure of confidential information rather than any information, as a blanket disclosure restriction might make other functions and roles, including the preparation and publication of annual reports, impossible to perform by virtue of a wide non-disclosure obligation. |
| Clause 17 | 17. (1) There is established an Advisory Committee on Artificial Intelligence which shall consist of— | The Committee created by this clause is well placed to provide technical and ethical guidance. |
| Establishment of the Advisory Committee | (a) the Artificial Intelligence Commissioner, who shall be the chairperson; | This is a good proposal as it is cross-cutting and representative of various stakeholders including governors, industry experts and civil society, thereby promoting inclusivity. |
| (b) a representative of the Cabinet Secretary responsible for information and communications technology; | We propose that this clause should provide for periodic meetings and reports to ensure transparency by publishing minutes. |
| (c) a representative of the Office of the Data Protection Commissioner; |
| (d) a representative of the National Commission for Science, Technology and Innovation; |
| (e) two persons with expertise in artificial intelligence ethics and human rights, nominated by relevant professional bodies; |
| (f) two persons, being one man and one woman, nominated by the Council of Governors; |
| (g) one person representing the private sector in technology, nominated through a consultative process by registered private sector organizations; and |
| (h) one person representing civil society organizations with expertise in technology or human rights, nominated through a consultative process by registered civil society organizations. |
| (2) The members under subsection (1)(e) shall be appointed by the Cabinet Secretary on the recommendation of the Artificial Intelligence Commissioner. |
| (3) A person appointed under subsection (2) shall hold office for a term of three years and shall be eligible for re-appointment for one further term of three years. |
| (4) In appointing members, the Cabinet Secretary and the Artificial Intelligence Commissioner shall ensure gender balance, regional representation and inclusion of persons with disabilities. |
| Clause 18 | 18. The functions of the Advisory Committee shall be to— | This clause clearly sets out the scope and functions of the Advisory Committee, ensuring that there is a clear demarcation between the role the Committee plays vis-à-vis the Office of the AI Commissioner. |
| (a) advise the Artificial Intelligence Commissioner on emerging trends, risks and opportunities in artificial intelligence; |
| (b) review and provide recommendations on guidelines, standards and regulations proposed under this Act; |
| (c) facilitate stakeholder engagement on artificial intelligence matters; |
| (d) promote multi-disciplinary research and collaboration on artificial intelligence governance; |
| (e) advise on strategies for workforce reskilling and transition in response to artificial intelligence induced job changes, promoting human-centric artificial intelligence; and |
| (f) perform any other advisory functions as may be requested by the Artificial Intelligence Commissioner. |
| Clause 19 | 19. (1) The Advisory Committee shall meet at least four times in a financial year. | This clause promotes good governance and sets minimum standards to ensure that the committee functions smoothly. |
| (2) The quorum for a meeting shall be half of the members. |
| (3) The Advisory Committee may regulate its own procedure. |
| Clause 20 | 20. The members of the Advisory Committee shall be paid such allowances as may be determined by the Cabinet Secretary in consultation with the Salaries and Remuneration Commission. | This clause is important as it helps to ensure that the allowances are fair, standardized, and aligned with national public sector compensation guidelines, reducing the risk of arbitrary or excessive payments. |
| Clause 21 | 21. The funds and assets of the Office shall consist of— | This clause is important as reliable funding is critical for any office. Further, this clause establishes the financial foundation of the Office of the Artificial Intelligence Commissioner by clearly identifying the sources of its funds and assets, which is essential for effective and continuous operations. |
| Financial Provisions | (a) monies allocated by the National Assembly for purposes of the Office; | |
| (b) any grants, gifts, donations, or other endowments given to the Office; and |
| (c) such funds as may vest in or accrue to the Office in the performance of its functions under this Act or any other written law. |
| Clause 22 | 22. (1) At least three months before the commencement of each financial year, the Artificial Intelligence Commissioner shall cause to be prepared estimates of the revenue and expenditure of the Office for that year. | This clause ensures the Office plans its activities, justifies spending and maintains discipline. |
| (2) The annual estimates shall make provision for all the estimated expenditure of the Artificial Intelligence Commissioner for the financial year concerned, and in particular, shall provide for the— | We propose that the process should be time bound so that delays in budget approval do not hamper the Office’s establishment. |
| (a) payment of salaries, allowances and other charges in respect of staff of the Office and the Advisory Committee; |
| (b) operations of the functions of the Office; |
| (c) the payment of pensions, gratuities and other charges in respect of retirement benefits which are payable out of the finances of the Office; |
| (d) the acquisition, maintenance, repair and replacement of the equipment and other movable property of the Office; |
| (e) funding of training, research and development of activities of the Office; |
| (f) the creation of such reserve funds to meet future or contingent liabilities or in respect of such other matters as the Data Commissioner may deem fit; |
| (g) any other expenditure for the purposes of this Act. |
| (3) The annual estimates shall be submitted to the Cabinet Secretary for tabling in the National Assembly. |
| Clause 23 | 23. The annual accounts of the Office shall be prepared, audited and reported in accordance with the provisions of Articles 226 and 229 of the Constitution, the Public Finance Management Act and the Public Audit Act. | Mandatory audits by the Auditor-General provide transparency and build public trust in the new Office. |
| Clause 24 | 24. (1) The Artificial Intelligence Commissioner shall, within three months after the end of each financial year, prepare and submit to the Cabinet Secretary a report of the operations of the Office for the immediately preceding year. | This clause is crucial for legislative oversight and public awareness. The annual reports to the National Assembly ensure accountability and enable policy review. They should cover enforcement actions, emerging risks, and progress on literacy and sandboxes. |
| (2) The annual report shall contain in respect of the year it relates— |
| (a) the financial statements and description of activities of the Office; | We propose that the Office consider requiring an independent expert review every few years to evaluate effectiveness. |
| (b) developments in artificial intelligence; |
| (c) risk assessments conducted; |
| (d) the impact of the exercise of any of the Artificial Intelligence Commissioner’s mandate or function; |
| (e) any impediments to the achievements of the object and purpose of this Act or any other written law; and |
| (f) any other information relating to its functions that the Artificial Intelligence Commissioner may consider necessary. |
| (3) The Cabinet Secretary shall, within fourteen days of receipt of the report under subsection (1), cause the report to be laid before Parliament. |
| Clause 25 | 25. (1) The Artificial Intelligence Commissioner shall classify artificial intelligence systems according to the level of risk they pose to health, safety, fundamental rights, the environment or societal welfare. | This clause proposes a risk-based approach which is globally recommended. It allows targeting regulations where needed, while letting innovation flourish, within bounds. |
| Governance of Artificial Intelligence | (2) The Cabinet Secretary shall, on the recommendation of the Artificial Intelligence Commissioner and by regulations, prescribe the categories of classification under subsection (1), including— | We propose that each tier be clearly defined, as vague categories could confuse industry. Key terms should be defined in the regulations, with illustrative examples provided in guidance. |
| (a) unacceptable risk, for systems that pose severe threats; |
| (b) high risk, for systems used in critical sectors including healthcare, finance, security, education, agriculture, employment or public administration; |
| (c) limited risk, for systems with moderate risks; |
| (d) minimal risk, for systems with negligible risks. |
| (3) A system classified as unacceptable risk is prohibited. |
| (4) The Artificial Intelligence Commissioner shall, having regard to international standards, update the classification criteria periodically through recommendations to the Cabinet Secretary. |
| Clause 26 | 26. (1) A provider or deployer of a high-risk artificial intelligence system shall— | These requirements mirror best practices and oversight which are critical for artificial intelligence. This clause fosters accountability for AI that could affect safety or human rights. |
| (a) conduct a risk assessment before deployment and implement mitigation measures, including human oversight; |
| (b) conduct a human rights impact assessment before deployment; |
| (c) ensure transparency, traceability and explainability of the system's decision-making processes; |
| (d) maintain records of data inputs, training datasets, outputs and performance metrics for at least five years; |
| (e) comply with the Data Protection Act, in relation to personal data processing, including conducting data protection impact assessments where required; |
| (f) incorporate measures for robustness, accuracy and cybersecurity; and |
| (g) where the system generates or manipulates images, voice or likeness, obtain explicit consent from the affected person or their legal representative, and ensure the output is clearly labelled as AI generated. |
| (2) The Artificial Intelligence Commissioner shall review assessments for systems used in the public sector and may prescribe guidelines for conducting assessments. |
| (3) The Artificial Intelligence Commissioner shall prescribe guidelines on risk management, security protocols, bias detection, and ethical standards for high-risk artificial intelligence systems. |
| Clause 27 | 27. The Artificial Intelligence Commissioner shall maintain a public register of high-risk artificial intelligence systems, including those used by county governments. | A public register lets the public know which systems are high-risk, enhancing oversight and trust. It aligns with global moves for artificial intelligence transparency and security. |
| Clause 28 | 28. (1) A provider or deployer of an artificial intelligence system shall disclose to users and affected persons— | This clause establishes transparency, accountability, and user protection obligations for artificial intelligence systems, particularly those classified as high risk, and it is one of the operational provisions in the Bill. |
| (a) the nature, purpose and limitations of the system; |
| (b) the extent to which decisions or outputs are generated by automated processes, including any human intervention; and |
| (c) measures taken to identify, mitigate and monitor biases and to ensure fairness. |
| (2) A user of an artificial intelligence system shall, where automated decisions produce significant legal or similar effects on a person, comply with the Data Protection Act by providing safeguards including the right to human intervention, to express views and to contest the decision. |
| (3) A provider of a high-risk artificial intelligence system shall submit annual compliance reports to the Artificial Intelligence Commissioner, and non-confidential information from the reports shall be made available to the public. |
| Clause 29 | 29. (1) The Artificial Intelligence Commissioner shall establish regulatory sandboxes for testing artificial intelligence systems in a controlled environment. | This clause is a pro-innovation measure; sandboxes have been successfully used in fintech to foster innovation while monitoring risks. |
| (2) The Artificial Intelligence Commissioner shall prescribe the conditions for participation in a regulatory sandbox, including safeguards for ethics, data protection and risk monitoring. |
| (3) The Artificial Intelligence Commissioner shall, in approving participation in a regulatory sandbox, give priority to innovations that address national priorities and encourage collaboration with county governments. |
| Clause 30 | 30. (1) The Artificial Intelligence Commissioner shall develop and publish ethical guidelines for the development, deployment and use of artificial intelligence systems. | This clause provides industry with non-binding best-practices and is a good measure for raising awareness of values, as the guidelines help disseminate statutory edicts in a manner that is clear, practical, digestible and generally allows for better understanding of the legislation. |
| (2) The guidelines under subsection (1) shall address— (a) prevention of bias, discrimination and exclusion, with particular regard to vulnerable groups; |
| (b) protection of privacy, personal data and human dignity; |
| (c) promotion of human oversight, accountability and redress for harms; |
| (d) environmental sustainability, including assessments of energy consumption and carbon footprint in artificial intelligence systems; |
| (e) equitable access to benefits; and |
| (f) prohibition of non-consensual use of personal images or likenesses in AI-generated content, and measures to prevent misinformation or harm to reputation. |
| (3) The Artificial Intelligence Commissioner shall develop the guidelines in consultation with the relevant agencies and stakeholders and shall review them periodically. |
| Clause 31 | 31. (1) The Artificial Intelligence Commissioner shall implement artificial intelligence literacy programmes to educate the public on the benefits, risks and ethical implications of artificial intelligence systems. | This clause is important for building capacity and raising awareness among members of the public. Further, it equips citizens and officials to understand the risks and benefits of AI. |
| (2) The programmes under subsection (1) shall be conducted at national and county levels, in partnership with relevant institutions, including educational bodies and information and communications technology hubs. | We propose that the Office partner with universities and industry to deliver training. |
| Clause 32 | 32. (1) A person who designs or deploys an artificial intelligence system shall— | This clause affirms the principle that technology serves people, not vice versa. It could potentially justify rejecting any AI system that undermines human values. |
| (a) design or deploy the system in a manner that enhances rather than replaces human capabilities; |
| (b) incorporate features that support human involvement in the system; and |
| (c) provide for human oversight in critical decisions made by the system. (2) The oversight referred to under subsection (1)(c) must include review mechanisms that allow a qualified person to intervene in or override the system's outputs where the decisions may affect human rights, safety or societal well-being. |
| (3) The Cabinet Secretary shall make regulations prescribing— |
| (a) the manner in which a system enhances human capabilities; (b) the features that support human involvement; |
| (c) the critical decisions requiring human oversight; and |
| (d) the review mechanisms for intervention or override. |
| Clause 33 | 33. (1) A provider or deployer of an artificial intelligence system that is likely to impact employment shall— | This clause addresses the impact of artificial intelligence on employment, as one of the fears of AI is that it will cause large-scale job losses. The provision is forward-looking and integrates technological advancement with labour and economic policy. The requirement to conduct a workforce impact assessment will go a long way to ensure that the risk of job losses upon the introduction of AI to the workplace is recognised in advance and sufficiently mitigated. |
| (a) conduct a workforce impact assessment, including an assessment of potential job displacement; and |
| (b) implement mitigation measures, including reskilling programmes, in collaboration with relevant national and county government agencies. |
| (2) The Artificial Intelligence Commissioner shall, in consultation with the relevant agencies, develop guidelines on workforce transition, including— |
| (a) partnerships for vocational training; and |
| (b) incentives for artificial intelligence adoption that creates jobs. |
| Clause 34 | 34. A public entity, including a county government, that uses an artificial intelligence system shall ensure compliance with this Act. | We propose that the Office provide technical support to ministries and counties for compliance, and leverage the Council of Governors' involvement to extend the rules to counties' use of AI. |
| Clause 35 | 35. (1) A person commits an offence if the person— | This provision establishes a comprehensive enforcement framework for regulating artificial intelligence by criminalising a wide range of non-compliant behaviours across both private and public sector use. The punishments prescribed for the offences are reasonable and proportionate and are sufficient for deterrent purposes. |
| General Provisions | (a) deploys or operates an artificial intelligence system classified as an unacceptable risk under section 25, except in circumstances prescribed by regulations; | |
| (b) deploys a high-risk artificial intelligence system without conducting the required risk assessment or implementing mitigation measures prescribed under section 26; |
| (c) fails to comply with disclosure or transparency obligations under section 28; |
| (d) participates in a regulatory sandbox under section 29 without adhering to the prescribed conditions; |
| (e) fails to conduct a workforce impact assessment under section 33; |
| (f) contravenes ethical guidelines published under this Act, resulting in bias, discrimination or harm to individuals; |
| (g) uses an artificial intelligence system in the public sector contrary to this Act, causing prejudice to public benefit or rights; |
| (h) obstructs the Office of the Artificial Intelligence Commissioner in the performance of its functions under this Act, including by providing false information or failing to submit required reports; or |
| (i) generates, deploys or distributes artificial intelligence generated content, including synthetic media, using a person's image, voice or likeness without their explicit consent, where such content causes or is likely to cause harm, misinformation, defamation or infringement of privacy. |
| (2) A person who commits an offence under subsection (1) is liable, on conviction— |
| (a) in the case of an offence under subsection (1)(a), (b), (d), (e), (g) or (i), to a fine not exceeding five million shillings or to imprisonment for a term not exceeding two years, or to both; and |
| (b) in the case of an offence under subsection (1)(c), (f) or (h), to a fine not exceeding one million shillings or to imprisonment for a term not exceeding six months, or to both. |
| (3) Where the offence is committed by a body corporate, every director or officer of the body corporate who had knowledge of the commission of the offence and who did not exercise due diligence to ensure compliance with this Act shall be guilty of the offence. |
| Clause 36 | 36. (1) The Cabinet Secretary shall, in consultation with the Artificial Intelligence Commissioner, make regulations for the better carrying out of the provisions of this Act. | This clause is important: ensuring that the process of making these regulations includes public consultation promotes transparency in rule-making and is in line with best practices. |
| (2) Without prejudice to the generality of subsection (1), regulations made under this section may provide for— | We propose that this clause be amended to mandate that draft regulations be published for public participation. |
| (a) the detailed criteria, examples, and procedures for classifying artificial intelligence systems according to risk levels; |
| (b) the forms, processes, and timelines for conducting risk assessments, human rights impact assessments, and conformity audits for high-risk artificial intelligence systems; |
| (c) the conditions, eligibility criteria, monitoring requirements, and exit strategies for participation in regulatory sandboxes; |
| (d) the content, format, and review processes for ethical guidelines, transparency disclosures, and compliance reports; |
| (e) frameworks for data governance, including standards for secure data sharing, anonymisation, dataset diversity, and localisation in artificial intelligence applications; (f) procedures for enforcement, investigations, and appeals against decisions of the Office; |
| (g) mechanisms for public participation, artificial intelligence literacy programs, and collaboration with county governments in devolved sectors; |
| (h) fees, forms, and administrative procedures necessary for the implementation of this Act; |
| (i) procedures for obtaining consent, labelling artificial intelligence generated content, and reporting non-consensual synthetic media; |
| (j) exceptions for legitimate uses of image manipulation in artificial intelligence with safeguards against abuse; and |
| (k) any other matter required to be prescribed under this Act. |
| Clause 37 | 37. (1) The Cabinet Secretary shall, every three years from the date of commencement of this Act, cause a review of this Act to be undertaken to determine its effectiveness and suitability in addressing technological advancements in artificial intelligence. | Mandating a statutory review ensures the law evolves with technology. This specific clause allows for alignment with international developments and correction of unforeseen issues. |
| (2) The review under subsection (1) shall include— |
| (a) consultations with the Artificial Intelligence Commissioner, the Advisory Committee, relevant stakeholders and the public; |
| (b) an assessment of emerging risks, opportunities and international best practices; and |
| (c) recommendations for amendments to this Act or related legislation. |
| (3) The Cabinet Secretary shall, within six months after completion of the review, submit a report on the review, including any recommendations, to Parliament for consideration. |