AI Code of Ethics

Our Commitment to Ethical and Secure AI Solutions

Artificial Intelligence (AI) is transforming the way we serve our customers, offering innovative solutions that enhance efficiency, decision-making, and user experience. At RethinkFirst, we are committed to ensuring that our AI-driven tools and technology are developed and deployed responsibly, with a focus on transparency and security.

We recognize our unique obligations to the employers, healthcare providers, and educators we serve, and the need for AI systems to comply with stringent ethical, legal, and clinical standards, ensuring safety, equity, and effectiveness for vulnerable populations. That’s why we are committed to continuously evaluating and refining our AI solutions to meet emerging challenges and ensure responsible, effective outcomes.

Using AI the Right Way

We use AI thoughtfully and responsibly to enhance—not replace—human expertise. Here’s how it supports our work across five key areas:

  1. Decision-Making: AI-generated insights and recommendations supplement the decision-making process but never supplant it.
  2. Efficiency: Automated repetitive tasks and streamlined workflows allow professionals to focus on high-value work.
  3. User Experience: Tailored solutions meet the unique needs of individuals, ensuring a more effective and intuitive user experience.
  4. Customer Support: Enhanced customer support through AI-driven tools provides timely, accurate, and high-quality assistance.
  5. Compliance: AI solutions are designed to protect personally identifiable information (PII) in accordance with regulatory frameworks including HIPAA, HITECH, FERPA, CCPA, GDPR, and AI-specific healthcare governance guidance such as the FDA’s Good Machine Learning Practice (GMLP) principles and AI/ML-based Software as a Medical Device (SaMD) standards. (A brief illustrative sketch follows this list.)
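As one hedged illustration of the compliance point above, the sketch below shows the kind of PII-minimization step a pipeline might apply before free text is logged or stored. It is a simplified example, not a production implementation: the patterns and function names are hypothetical, and real HIPAA- or FERPA-grade de-identification requires far more than pattern matching.

```python
import re

# Illustrative only: a minimal pre-logging scrubber for a few common PII
# patterns. Real de-identification (e.g., HIPAA Safe Harbor's 18 identifiers)
# requires named-entity detection and expert review, not just regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace recognized PII spans with typed placeholders before storage."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Reach Jane at jane.doe@example.com or 555-867-5309."))
# -> "Reach Jane at [EMAIL] or [PHONE]."
```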

AI with Purpose and Principle

Our approach to AI development and deployment is grounded in the following guiding principles:

1. Secure

We actively safeguard user data and privacy, complying with laws such as HIPAA and GDPR. For healthcare AI, we rigorously align with the FDA’s AI/ML Action Plan and other relevant regulations. We continuously monitor and adapt to evolving standards to ensure ongoing compliance.

2. Transparent

We document our AI technologies, detailing their functionality, data usage, optimization targets, and limitations. For healthcare applications, we ensure explainability, auditability, and clinical validation, adhering to AMA AI Policy and relevant health laws. Our goal is for users to understand the intent and appropriate use cases of our AI technologies.
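As a hedged sketch of what one explainability check can look like in practice, the example below uses permutation importance, a model-agnostic technique, to estimate which input features drive a model’s predictions. The model and data here are synthetic stand-ins, not one of our systems.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Permutation importance measures how much held-out performance drops when
# each feature is shuffled, giving a model-agnostic view of which inputs
# matter. Synthetic data and an off-the-shelf model for illustration only.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature_{i}: importance {mean:.3f} +/- {std:.3f}")
```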

3. Responsible

Just because we can doesn’t mean we will. We develop AI systems with a commitment to fairness, human rights, and accountability, following the OECD AI Principles. We understand the strengths and limitations of AI and use this knowledge to guide our development, especially in healthcare applications where outcomes could directly or indirectly impact an individual’s wellbeing.

4. Human-centric

We design our AI technologies to inform, augment, and enhance the decisions made by expertly trained personnel, never to replace them. In clinical or educational settings, AI-generated recommendations support but do not solely determine decisions impacting health, education access, or wellbeing, in compliance with HIPAA Security Rule safeguards and FDA GMLP guidance.

5. Significant

We strive to build AI technologies that meaningfully improve the lives of those impacted by our systems, in ways that are innovative and actionable for all relevant stakeholders. We assess potential impacts on protected populations and align with anti-discrimination laws.

6. Rigorous

We continuously engage in internal and external peer review through publications, presentations, and open discussions about how our AI technologies work and the results we obtain. To ensure our AI systems remain effective and reliable, we subject them to ongoing post-deployment monitoring and risk assessment, following the FDA’s AI/ML Action Plan and emerging AI audit standards.

From Principles to Practice: Our Ethical AI Process

To maintain ethical AI practices, we have implemented the following safeguards:

Expert in the Loop: Automation is never without oversight. We provide valuable resources, but the final decision always remains in the hands of the expert.

Validation and Testing: AI systems undergo thorough testing for bias, fairness, and potential risks before deployment.

Healthcare Applications: For clinical AI tools, we conduct bias testing and risk assessments as recommended by the FDA’s GMLP principles and other AI-specific ethical guidance. (An illustrative sketch follows below.)
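To make the idea concrete, here is a minimal, illustrative sketch of one such pre-deployment check: comparing true-positive rates across subgroups and flagging gaps beyond a chosen tolerance (an equal-opportunity-style test). The data, threshold, and function names are hypothetical; real clinical validation involves many more metrics and review steps.

```python
import numpy as np

# Hypothetical fairness check: compare the true-positive rate (TPR) across
# subgroups and flag any gap above a chosen threshold.
def tpr(y_true, y_pred):
    positives = y_true == 1
    return (y_pred[positives] == 1).mean() if positives.any() else float("nan")

def equal_opportunity_gap(y_true, y_pred, group, threshold=0.1):
    rates = {g: tpr(y_true[group == g], y_pred[group == g]) for g in np.unique(group)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > threshold

# Toy labels, predictions, and group membership for illustration only.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates, gap, flagged = equal_opportunity_gap(y_true, y_pred, group)
print(rates, f"gap={gap:.2f}", "FLAG" if flagged else "ok")
```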

Transparent Communication: We communicate how our AI systems work and continuously update our policies based on new findings.

Responsible Research and Development: We document data sources, model usage, and evaluation methods to ensure AI integrity.

Our North Star: The Ethical AI Checklist

RethinkFirst maintains a comprehensive checklist that guides our development. Adapted from the NeurIPS Code of Ethics, it emphasizes fairness, transparency, accountability, and privacy. Through Socratic questioning, we ensure ethical considerations are integrated at every stage of the development lifecycle.

Walk through our detailed, step-by-step process below, or download a copy for yourself.

Data-Related Concerns: The points listed below apply to all datasets used to develop AI products. For each item, the checklist asks whether it is relevant to your product and for notes on its relevance and how it is addressed (with links).

  1. Privacy: Have you minimized the use and exposure of any personally identifiable information (PII), personal health information (PHI), and student education records (SER)?
  2. Legal Use of Data: Have you confirmed that your use of the data complies with the end user license agreement of the product from which the data were collected?
  3. Deprecated Datasets: Have you documented the statistical distributions of your data and established boundary conditions for useful and appropriate uses of the model?
  4. Representative Evaluation Practice: Have you assessed and documented how well the data used to build the AI product aligns with the characteristics of the users who will use the product?
  5. Tracking Model Drift and Degradation: Have you established data pipelines to monitor, log, and report input drift and changes in loss metrics relative to development models? (An illustrative drift-monitoring sketch follows this list.)
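As a hedged illustration of the drift-monitoring item above, the sketch below computes the Population Stability Index (PSI), one common way to quantify input drift between a training baseline and live production data. The data and thresholds are illustrative, not drawn from our systems.

```python
import numpy as np

# Sketch of input-drift monitoring via the Population Stability Index (PSI).
# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate.
def psi(baseline, live, bins=10):
    # Bucket both samples on the baseline's bin edges. Simplification: live
    # values outside the baseline range are dropped by np.histogram.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    l_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Floor each bucket to avoid log(0).
    b_pct = np.clip(b_pct, 1e-6, None)
    l_pct = np.clip(l_pct, 1e-6, None)
    return float(np.sum((l_pct - b_pct) * np.log(l_pct / b_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)   # distribution seen during training
live     = rng.normal(0.4, 1.2, 5_000)   # shifted production inputs
print(f"PSI = {psi(baseline, live):.3f}")  # a large value signals drift
```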
Societal Impact and Potential Harmful Consequences: Developers should transparently communicate the known or anticipated consequences of product use. The following specific areas are of particular concern. For each item, the checklist asks whether it is relevant to your product and for notes on how it is addressed (with links).

  1. Safety: Are there foreseeable situations in which the technology can cause harm or injury through its direct application, side effects, or potential misuse?
  2. Security: Is there a risk that the application could open security vulnerabilities or cause serious accidents when deployed in real-world environments?
  3. Discrimination: Can the technology developed be used to discriminate against, exclude, or otherwise negatively impact people, including impacts on the provision of services such as healthcare or education?
  4. Bias and Fairness: Have you assessed and documented any potential biases or limitations in the scope of performance of models or the contents of datasets? For example, have you inspected these to ascertain whether they encode, contain, or exacerbate bias against people of a certain gender, race, sexuality, or other protected characteristic?
Impact Mitigation Measures: It is important to reflect on and take action to mitigate any potential harmful consequences that may result from an AI product. For each item, the checklist asks whether it is relevant to your product and for notes on how it is addressed (with links).

  1. Data and Model Documentation: Have you communicated the details of the dataset or the model via a structured template? (An illustrative template sketch follows this list.)
  2. Data and Model Artifacts: If releasing data or models for others to use, have you documented the intended use and limitations of these artifacts to prevent misuse or inappropriate use?
  3. Secure and Privacy-Preserving Data Storage and Distribution: Have you adhered to company standards for privacy protocols, encryption, and anonymization to reduce the risk of data leakage or theft?
  4. Responsible Release and Publication Strategy: If your model carries a high risk of misuse or dual use, have you released it with the safeguards necessary for controlled use, for example by requiring that users adhere to a code of conduct to access the model?
  5. Allowing Access to Research Artifacts: Have you made accessible the information required to enable scrutiny and auditing (e.g., information needed to understand your code, execution environment versions, weights, hyperparameters of systems, etc.), in a manner that allows sufficient reproduction of the described results?
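To illustrate the structured-documentation item above, here is a minimal, hypothetical sketch of a machine-readable model card, in the spirit of published model-card and datasheet templates. Every field name and value shown is a placeholder, not a description of an actual RethinkFirst model.

```python
import json

# Hypothetical model card: all names, metrics, and contacts are placeholders.
# The point is the structure: intended use, data provenance, evaluation, and
# limitations travel with the released artifact.
model_card = {
    "model_name": "example-risk-screener",   # placeholder name
    "version": "0.1.0",
    "intended_use": "Decision support for trained clinicians; not diagnostic.",
    "out_of_scope_use": ["Fully automated decisions",
                         "Populations absent from training data"],
    "training_data": "De-identified records (see accompanying data sheet).",
    "evaluation": {"metric": "AUROC",
                   "overall": 0.87,           # placeholder value
                   "reported_by_subgroup": True},
    "limitations": ["Performance unvalidated outside partner sites"],
    "contact": "ml-governance@example.com",   # placeholder address
}

with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```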

The Future of AI at Rethink

We recognize that AI is a rapidly evolving field and are committed to continuously adapting and refining our strategies in response to emerging challenges and opportunities. That means closely tracking advancements in AI regulation, including the EU AI Act, the U.S. Blueprint for an AI Bill of Rights, and state-specific AI and privacy legislation affecting healthcare, education, and behavioral health technology.

We will continue to refine our policies, collaborate with experts, and share insights with the broader community to ensure AI remains ethical, responsible, and thoughtfully adopted. We are committed to transparency and sharing our progress with all stakeholders so they can make the most informed decision possible for their specific use case.