
More work can be done to understand the ethical use of AI in the future of coaching

Coaching bodies and professionals will need to create clear guidelines for the ethical use of AI technologies that address challenges such as data security, bias, and information credibility.


Global leaders explore guidelines and policy for the ethical use of AI

Artificial intelligence (AI) is a powerful set of tools for processing, synthesizing, applying, and augmenting information. As these technologies become more sophisticated, they will transform the world of work and how we use the internet. However, these programs also present new risks: AI can falsify images, voice clips, and information, or be used to reinforce harmful biases. Without clear guidelines for the ethical use of AI, these technologies can support the misuse of personal data, worsen inequalities, and spread misinformation.

What is Artificial Intelligence?

Artificial Intelligence (AI) is an umbrella term for programs that analyze large amounts of data to identify patterns and produce outputs that mirror human decision-making.

Algorithms: a set of rules used to solve a problem. AI is built on algorithms that sort and categorize data. Algorithms may also include additional instructions that run when certain criteria are met.

Machine Learning: uses algorithms to process large data sets. As more data is analyzed, machine learning programs can predict future outputs based on pattern recognition.

Natural Language Processing: uses human language as a data source to reproduce human-like responses, including written and conversational outputs.
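
To make these definitions concrete, the sketch below shows pattern recognition in miniature. It is a minimal illustration in Python, assuming the scikit-learn library (the article names no specific tools): the model’s only “knowledge” is the patterns in a handful of labeled examples.

```python
# A minimal sketch of machine-learning pattern recognition using
# scikit-learn (an assumption; any ML library works the same way).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative data set: the program's only "knowledge" is the
# patterns present in these labeled examples.
texts = [
    "great session, very helpful",
    "a waste of time",
    "loved the coaching conversation",
    "unhelpful and slow",
]
labels = ["positive", "negative", "positive", "negative"]

# An algorithm (word counts + logistic regression) sorts and
# categorizes the data, then predicts labels for new text.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["really helpful conversation"]))  # likely ['positive']
```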

To address the potential threats and weaknesses of AI programming, industries, non-governmental organizations, and policymakers are developing ethical frameworks and policies to govern the responsible implementation of AI. For example, the Council of Europe’s Ethical Frameworks for the Application of Artificial Intelligence emphasizes the importance of human autonomy, harm prevention, fairness, explainability, and monitoring in AI systems. Building on the original report, the European Union is examining regulations to ensure that AI-generated information is accurate, unbiased, and privacy-conscious, and that it upholds human rights. Part of this regulation includes assigning liability to organizations that create and use AI so that existing data security and privacy laws are upheld.

In the coaching industry, AI is already being used to help coaches market their individual businesses, to support organizations in identifying development needs, and to facilitate coach-coachee matching. Early trials of coaching chatbots demonstrate how AI can replicate basic parts of a coaching conversation. AI chatbots cannot mirror or replace the holistic experience of human coaching; however, there is growing interest in how these technologies can supplement human coaching and expand coaching services to new audiences.

To set a foundation for the ethical use of AI, coaching researchers, accrediting bodies, and service providers will need to develop guidelines that address the following areas:

  1. Quality and guidelines for AI coaching
  2. Information reliability
  3. Data security and privacy
  4. Bias and discrimination



1. Will AI coaching bots replace human coaches?

AI is already being used to inform human decision-making. Companies can use apps to manage administrative tasks like scheduling meetings or taking minutes. In Human Resources, AI algorithms sort resumes to surface the most qualified applicants. AI is also used to generate marketing materials, translate content for different audiences, and ensure that written content follows brand tone and style guidelines. Because AI can cover these tasks with little cost or time burden, many fear that it will lead to mass unemployment by eliminating entry-level white-collar jobs and widening the skills gap.

The World Economic Forum predicts that AI will disrupt the future of work by eliminating specific roles, while noting that these tools cannot replace the creative, social, and emotional skills unique to humans. Coaching relationships cannot be fully replaced by AI because coaching conversations rely on these human skills. Instead, AI will streamline administrative tasks, allowing coaches to focus on core competencies such as collaboration, empathy, and curiosity. Coaches are also trained to identify when clients have needs outside of coaching, including continued education or behavioral health support. AI chatbots do not currently demonstrate these core coaching skills and competencies and should not be considered equivalent to traditional coaching.

With AI being tested to mimic conversations and ask coach-like questions, human coaches are understandably hesitant about coaching chatbots. Because there are no regulations outlining the training and skills required to be a coach, it will be up to individual consumers to decide what type of coaching experience works best for them. Regardless of quality and credentialing concerns, coaching bots may be used by coaches as a support tool or used to create new tiers of “coaching” services. Potential uses include chatbot integration in organizational onboarding, training, and development. These programs may also appeal to clients who cannot otherwise afford traditional coaching. However, credentialing for human coaches often signals a higher quality of service: the 2023 International Coaching Federation Global Coaching Study found that 80% of coaches reported that their clients expect coaches to be credentialed, indicating that demand for human coaches will remain strong.

What this means for coaches:

  • AI is a powerful tool to support administrative tasks, freeing up coaches to prioritize client interaction and professional development.
  • AI chatbots do not have the same emotional intelligence and empathetic understanding as humans. The ability to build trust, establish rapport, and navigate complex emotions and situations is still a crucial aspect of coaching that AI cannot fully replicate.

2. Is information produced by AI reliable and safe?

Machine learning and language processing programs require human oversight. These technologies produce “results” based on available data, human-generated prompts or commands, and their design specifications. For example, ChatGPT by OpenAI scans information from the internet or a provided resource to produce conversational responses or summarize information based on a given prompt. According to the OpenAI Safety and Best Practices guide, AI can “hallucinate” by creating false or fabricated information and can produce harmful responses that are inappropriate or biased. AI should not be used as a replacement for ethical, legal, financial, or medical advice. As an example of the risks of improper AI application, a lawyer used ChatGPT to write a legal brief that cited fake cases. In academia, institutions are concerned that these technologies could lead to a culture of rampant cheating, leaving young professionals unprepared for their careers.

Coaches using generative AI programs to create written materials and engage with clients need to remember that AI still requires human oversight. Information produced by AI may be incorrect, discriminatory, disrespectful, or unclear. In a marketing context, AI can also produce redundant, simplistic, or awkward language, leaving a bad impression on potential clients. More alarming still, AI chatbots may not be able to detect when a user needs mental health support and may provide dangerous information that causes harm.
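
As a lightweight illustration of that oversight, the sketch below wraps a generative AI call so that every draft is explicitly labeled as unreviewed. It assumes the openai Python package (v1 interface); the model name and labeling convention are illustrative, not a recommendation.

```python
# A hedged sketch of human-in-the-loop oversight for AI-generated drafts,
# assuming the openai Python package (v1 interface). The model name is
# illustrative; the point is that no draft ships without a human review.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_for_review(prompt: str) -> str:
    """Generate a draft and flag it for mandatory human fact-checking."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of model
        messages=[{"role": "user", "content": prompt}],
    )
    draft = response.choices[0].message.content
    # Generative models can hallucinate facts, so the draft is labeled
    # rather than published directly.
    return f"[DRAFT - REQUIRES HUMAN REVIEW]\n{draft}"

print(draft_for_review("Write a short bio for a leadership coach."))
```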

Overreliance on AI in the coaching profession may reduce the quality of communication and client engagement. If chatbots are integrated into coaching practice, they will need to be monitored for quality and safety, especially for vulnerable clients. Because chatbots are trained for a particular user group, they may be less beneficial for other types of clients, creating unequal levels of support based on factors including culture, race, age, gender, and language. The use of these programs without prior acknowledgment and consent could also lead to issues of data security and privacy.
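
One concrete form of such monitoring is an escalation check that pauses the bot and routes risky messages to a human. The sketch below is deliberately simplistic: the keyword list is hypothetical, and real safety screening would need trained classifiers and clinical review rather than keywords alone.

```python
# A minimal sketch of a safety check that escalates risky chatbot
# conversations to a human coach. The keyword list is hypothetical and
# far too crude for production use.
RISK_TERMS = {"hopeless", "self-harm", "can't go on"}

def needs_human_review(message: str) -> bool:
    """Flag messages that may indicate a need for mental health support."""
    lowered = message.lower()
    return any(term in lowered for term in RISK_TERMS)

def handle(message: str) -> str:
    if needs_human_review(message):
        # Pause the bot and notify a human rather than letting the
        # chatbot improvise a response it is not qualified to give.
        return "Connecting you with a human coach now."
    return "BOT_RESPONSE_PLACEHOLDER"

print(handle("I feel hopeless about work lately."))
```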

What this means for coaches:

  • Because AI can produce false, misleading, or biased information, coaches will still need to fact-check and supervise AI-generated content.
  • Coaches will need to consider the limitations and dangers of using AI generative language tools before applying them in coaching sessions.
  • Coaches should be mindful of ethical considerations and use AI chatbots in a way that maintains their professional boundaries and responsibilities.

3. Does AI uphold standards for confidentiality and data security?

As a tool, AI is dependent on data. Machine learning programs require large amounts of data to identify patterns and correlations, which then fuel “learning” and allow the AI to generate predictions. When a consumer uses an AI tool to analyze a document, recording, or video, they no longer have control over how that data is stored or used. An analysis by the European Union Scientific Foresight Unit explains how the collection of data and use of AI can lead to risks like surveillance, harassment, and discriminatory profiling. ISACA, a global association of information systems and information technology professionals, outlines how AI programs collect data that may then be repurposed for secondary uses. Another risk outlined by ISACA involves lack of consent, where information is shared without the knowledge of one or more parties. For example, if AI is analyzing a conversation or written correspondence, one or more parties may not have the ability to consent to, or opt out of, their data being analyzed.

A primary condition of the coaching agreement is client confidentiality. Coaches using AI tools to record, transcribe, summarize, or edit information from coaching sessions or client communications could potentially violate confidentiality agreements, because AI programs process and store information in ways that could expose sensitive client information, intellectual property, or business strategy to hackers. Coaches will also need to adhere to applicable local and international data privacy and security laws when using AI in their client interactions. Clients should be informed if coaching conversations or personal information are being analyzed by AI.
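
As one small, partial safeguard, identifying details can be stripped from notes before they are sent to any third-party AI tool. The sketch below uses simple regular expressions and is an illustration only: regex redaction cannot catch every identifying detail, and informed client consent is still required.

```python
# A minimal sketch of redacting obvious identifiers from session notes
# before sending them to a third-party AI tool. Regex redaction is a
# partial safeguard only -- it cannot catch every identifying detail.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

note = "Follow up with jane@example.com or call 555-123-4567 about Q3 goals."
print(redact(note))
# -> Follow up with [EMAIL REDACTED] or call [PHONE REDACTED] about Q3 goals.
```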

All major coaching accrediting bodies have ethics guidelines. For example, the International Coaching Federation Code of Ethics requires that coaches “explain…the nature and limits of confidentiality,” “comply with all applicable laws that pertain to personal data and communications,” and “maintain, store, and dispose of any records including electronic files and communications…in a manner that promotes confidentiality.” Similarly, the Association for Coaching and the European Mentoring and Coaching Council (EMCC) have a joint Global Code of Ethics for Coaches, Mentors, and Supervisors. Building on the Global Code of Ethics, the EMCC launched a global task force to create standards for digital coaching and AI. As these technologies become more sophisticated, coaching bodies and digital coaching organizations will need to collaborate to create and routinely update standards for the ethical use of AI in coaching.

What this means for coaches:

  • Coaches can consider how AI technologies might expose client information, including intellectual property, personal information, and voice and image recordings.
  • Clear communication should be established with clients regarding the role of AI in the coaching process, data privacy and security, and the limitations of AI chatbots.

4. How can we ensure AI is used ethically, without bias or discrimination?

Bias and discrimination can appear in many ways. When training data represents only one specific group, AI will generate patterns and conclusions about that data set that may not apply to wider audiences. For example, a program designed to analyze the English language may be less effective for English language learners or second-language users if it was trained only on inputs from native English speakers. Another form is algorithmic bias, where patterns are based on human judgments and associations. A program trained to prefer graduates from prestigious universities might also filter out students from marginalized communities or lower socio-economic backgrounds. AI can also promote bias and discrimination if it is intentionally designed to identify, track, or harass individuals from a specific background. In one notable case of AI surveillance, facial recognition was used to identify lawyers and bar them from entertainment venues in New York. A more sinister application could identify, track, and discriminate against political adversaries or minority groups.

AI may also introduce bias and discrimination into a coaching or development practice. Organizations using AI to identify potential candidates for coaching should consider how AI might inadvertently exclude groups of employees from coaching and development opportunities. Coaches can be mindful that AI tools and resources may be trained to work for only one client group and may not be helpful or appropriate for all audiences. In particular, many AI chatbots are trained to work with Western, individualistic cultures; these tools may not be useful for multi-national teams or for clients in non-Western, collectivist societies.

Coaching researchers, digital coaching platforms, and accrediting bodies are already looking to recruit diverse coach and client participants to reduce bias in AI coaching tools. Before coaching chatbots can be adopted at scale, AI programmers will need to anticipate how the training pool might shape chatbot behavior in both helpful and harmful ways. By predicting the unintended and negative consequences of AI bias, organizations can intentionally recruit a wider array of training participants and create a more balanced algorithm. If this is not possible, AI coaching chatbots should clearly describe the training audience to inform clients about potential limitations.
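
As a rough illustration of what auditing a training pool might look like, the sketch below tallies the share of participants in each self-reported category. The field names, records, and the idea of disclosing skew are hypothetical examples, not a description of any real program.

```python
# A simple sketch of auditing a training pool for representation,
# assuming participant records carry self-reported demographic fields.
# Column names and example records are hypothetical.
from collections import Counter

participants = [
    {"id": 1, "region": "North America", "language": "English"},
    {"id": 2, "region": "North America", "language": "English"},
    {"id": 3, "region": "Europe", "language": "German"},
    # ... a real audit would cover the full training pool
]

def representation(records: list[dict], field: str) -> dict:
    """Share of the training pool in each category of `field`."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {k: round(v / total, 2) for k, v in counts.items()}

print(representation(participants, "region"))
# e.g. {'North America': 0.67, 'Europe': 0.33} -- a skew worth disclosing
```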

What this means for coaches:

  • AI chatbots operate based on algorithms and predefined rules. While they can offer general guidance and resources, they may struggle with providing highly individualized coaching tailored to a specific client’s unique needs, circumstances, and context.
  • Coaches should focus on leveraging their expertise and intuition to provide personalized support.
  • Coaches can help to reduce bias in coaching AI by participating in digital coaching research or providing feedback to chatbot research programs like CoachHub’s AIMEY.

More Resources for Coaches

AI provides unique opportunities to supplement and streamline certain tasks. AI is already being applied to support administrative and marketing activities. Early trials with coaching indicate that “coaching bots” may be able to replicate certain coaching approaches aimed at behavior change with short-term success. However, these technologies cannot fully replicate the human coaching experience. Human social and emotional intelligence skills will continue to inform coaching conversations and contribute to coaching success. Human coaches will also be essential in guiding the development, testing, application, and refinement of AI coaching products. This includes monitoring client interactions with coaching bots if used as coaching support. 

Transformational Questions

The following questions can help professional coaches engage in critical thinking and exploration of the potential applications, challenges, and implications of AI in the coaching profession:

  • How can AI chatbots be effectively integrated into the coaching process without compromising the personalized and human-centered aspects of coaching?
  • How might the introduction of AI chatbots impact the relationship between coaches and clients, and what steps can coaches take to maintain a strong rapport and trust with their clients in this new context?
  • How can coaches leverage the data collected by AI chatbots to enhance their understanding of clients’ progress, preferences, and patterns, and how might this data be effectively utilized in the coaching process?
  • In what ways can coaches collaborate with AI chatbots rather than viewing them as competitors, and how might this collaboration lead to new coaching opportunities or innovations?
  • Should there be regulations or licensure for AI coaches? What are the implications of allowing unregulated AI coaching systems to operate without oversight, and how can ethical standards be enforced?
  • In the context of on-demand AI chatbots, should there be safeguards in place to prevent clients from becoming overly reliant on AI coaches, hindering their ability to develop self-efficacy and autonomy?

Discover research and discussions on the ethical use of AI in coaching

Learn about global initiatives to ensure the safe and ethical use of AI.
