At RethinkFirst, we believe Artificial Intelligence is a powerful tool—but one that must be approached with care, responsibility, and deep respect for the people it serves.
As a company at the intersection of healthcare and technology, we don’t take shortcuts when it comes to integrating AI into our work. Where others may rush forward, we pause and ask the tough questions. What’s best for the individuals and families we serve? How do we safeguard their privacy and dignity? What does it truly mean to build AI ethically?
Our answers to those questions guide everything we do.
More Than a Buzzword: AI With Purpose and Principle
AI isn’t just another feature in our toolbox—it’s a responsibility. That’s why we’ve built a robust AI Code of Ethics to ensure our solutions are developed with security, transparency, and human well-being at the center. Unlike many companies that tout AI as a cure-all, we see it as a way to thoughtfully enhance the expertise of human professionals—not replace them.
Our AI is:
- Secure – Built to meet and adapt to regulatory standards like HIPAA, HITECH, and FERPA, with a clear priority on protecting user data.
- Transparent – We explain how our AI works, what it’s designed for, and where its limitations lie—because users deserve to understand the tools they’re using.
- Responsible – We resist the urge to build something simply because we can. Our developers consider whether it should be built, who benefits, and who could be impacted.
- Human-centric – Every system we develop is designed to support professionals in making informed decisions—not replace their judgment.
- Significant – We aim for meaningful impact, not just innovation for its own sake.
- Rigorous – Our work is continuously tested, validated, and peer-reviewed—because quality and integrity go hand in hand.
Walking the Talk: Our Ethical AI Process
To turn principles into practice, we’ve embedded ethics into every phase of our AI development lifecycle.
- Expert Supervision ensures our tools enhance human work and never operate in a vacuum.
- Bias and Fairness Testing identifies and addresses potential risks before deployment.
- Transparent Communication keeps users informed and in control.
- Responsible R&D means we document data sources, model behaviors, and continually refine our methods.
We even maintain an Ethical AI Checklist, adapted from the NeurIPS Code of Ethics, which helps our teams reflect deeply on the societal impact of every AI tool we develop. This isn’t just compliance—it’s conscience.
Why We’re Different
In a fast-moving AI landscape, it’s easy to be dazzled by speed and scale. But at RethinkFirst, we measure success by something else: trust. Our customers trust us with sensitive, deeply personal information. They count on us to uphold the highest standards of privacy, ethics, and care.
That’s why our approach to AI isn’t just about what technology can do—it’s about what it should do.
Looking Ahead
AI is evolving, and so are we. We remain committed to continuous learning, improvement, and partnership with the broader research and healthcare communities. And as we innovate, we’ll keep putting people first—because that’s who we’re here to serve.
At RethinkFirst, AI isn’t just a capability. It’s a commitment.
About the Author
Dustin Carter
Head of AI at RethinkFirst
Dustin Carter is the Head of AI at RethinkFirst. He previously served as VP of Product for Rethink Behavioral Health and, before that, was President and Co-founder of TotalABA, Inc., a leading provider of software for therapists and clinics in the behavioral health industry that was later acquired by RethinkFirst. Earlier in his career, Dustin spent 15 years at LeonardoMD, Inc. and went on to serve as Director of Customer Relations at Azalea Health, where he managed teams responsible for implementing and supporting EHR and medical billing platforms.