Responsible AI Usage Policy
Private practice psychologists in Canada must comply with the Personal Information Protection and Electronic Documents Act (PIPEDA), which governs how they collect, use, and disclose personal information during commercial activities. This legislation requires psychologists to inform clients about their privacy rights and how their information will be handled.
Artificial Intelligence (AI) is an increasingly important tool in healthcare, offering unprecedented possibilities for data collection and practice efficiency. However, this new technology also carries potential risks. This AI usage policy is designed to guide Dr. Sira in the responsible, transparent, and ethical use of AI in her work. The aim of this policy is to ensure that Dr. Sira’s use of AI aligns with her obligations under the College of Health and Care Professionals Code of Conduct and the ethical practice guidelines outlined by the American Psychological Association and the Canadian Psychological Association, so that she continues to respect her clients’ rights. Clients may opt out of AI use and the recording of encounters without affecting the psychological care they receive.
Transparency and Informed Consent
Dr. Sira values transparency and trust. Before she uses AI as part of her psychological services for any client, Dr. Sira will obtain informed consent from that client regarding the use of AI by clearly communicating the purpose, application, and potential benefits and risks of relevant AI tools and services.
Mitigating Bias and Promoting Equity
Dr. Sira works to mitigate bias and to promote equity in her practice. She has evaluated the AI service she uses, with a focus on addressing bias and preventing the exacerbation of existing health care disparities. She strives to use AI responsibly, taking into account the full range of clients’ lived experiences in order to avoid unfair discrimination.
Data Privacy and Security
AI services handling sensitive behavioural health data pose risks related to privacy breaches and unethical data use. Dr. Sira will make efforts to ensure that the AI service she uses complies with PIPEDA and other relevant data privacy regulations; this includes advocating for robust cybersecurity strategies to protect client information. The AI service Dr. Sira uses has a reliable privacy policy and does not use client data to train its model. If it does use client data for training in the future, Dr. Sira will ensure the data is fully de-identified across 18 categories of identifiers, including:
Names (clients, schools, professionals), addresses, dates
Phone numbers, email addresses, health card numbers
Education identifiers (school names, grades)
Any unique identifying phrases ("only Farsi-speaking student in her grade")
Client record keeping is Dr. Sira’s responsibility as the health care provider and custodian. She has a professional responsibility to ensure the clinical records are accurate. Clients also have the right under PIPEDA to request corrections to their psychological records.
Data Retention
Recordings and transcripts used by the AI scribe will not be retained beyond the minimum time required to perform the service, and in no case longer than 30 days.
Data Residency
Dr. Sira ensures that the AI service she uses applies strong encryption to data both in transit and at rest. At rest, client data is held in Canada, in accordance with PIPEDA.
Annual Compliance Confirmation
Dr. Sira will assess whether the AI service’s ethical compliance remains acceptable for the entirety of its use by reviewing its regular updates and audits. For example, while an AI service may be ethically sound when first adopted in a clinical practice, a year later the developer and/or service provider may not have maintained regular updates for privacy or legal compliance, or addressed bias in its training data. To remain accountable to her clients for the service’s ethical use, Dr. Sira will evaluate how often the service is updated. This demonstrates accountability from the AI service provider and sets the parameters for Dr. Sira to remain accountable to her clients.
Accuracy and Misinformation Risks
Dr. Sira uses an AI system that has been rigorously validated before implementation in her psychological practice. She strives to critically evaluate AI-generated content before applying it in her clinical practice, and she likewise critically evaluates the AI tools she recommends to her clients. Dr. Sira strives to assess AI tools and services for their quality, performance, and appropriateness in behavioural health settings. She takes responsibility for the quality of information used in her practice, including promptly discontinuing the use of AI services if misinformation concerns arise.
Dr. Sira uses only a “closed” AI service for her clinical practice, in which client information is processed solely by algorithms previously “trained” on clinical knowledge and published resources (such as a diagnostic manual). The AI system does not “absorb” any additional data from clients for its own learning.
Human Oversight and Professional Judgment
AI augments but does not replace Dr. Sira’s decision-making. Dr. Sira remains responsible for her final decisions and does not blindly rely on AI-generated recommendations to ensure her clients are protected from potential harm.
If Dr. Sira uses predictive analysis in the AI system to aid her clinical decision making, she will continue to exercise clinical judgement supported by an understanding of how the AI system derives its output from the data it receives. Where possible, Dr. Sira will use AI tools and services that include confidence levels in their recommendations.
The AI service Dr. Sira uses has a mechanism for collecting feedback and processes for addressing and resolving issues. If Dr. Sira observes that the system’s notes or clinical guidance for a client who is originally from outside of Canada are noticeably less accurate than those for other clients, even though the tool was trained to recognize other accents and dialects, she will report this and follow up to ensure it is addressed and resolved. This demonstrates accountability by the AI service provider to Dr. Sira, and accountability by Dr. Sira to her clients.
Liability and Ethical Responsibility
The legal implications of AI in behavioural health are still emerging. Clinical psychologists like Dr. Sira are encouraged to consider liability risks when selecting AI tools, and to obtain proper training in and understanding of AI services, which can help mitigate legal and ethical risks.
Contact information:
If you would like to contact us to understand more about this Policy or wish to contact us concerning any matter relating to individual rights and our use of AI, you may send an email to admin (at) drclairesira.ca.
