
After 20 years of consulting across seven industries, I have watched a variety of novel technologies emerge and then evolve into part of every organization’s technology platform. What I haven’t seen very often is a planned approach to onboarding these new technologies: ensuring they are deployed intentionally, ethically, and with mechanisms to evaluate, optimize, and reduce the harm they might cause. AI looks poised to be the next technology to follow this path, and I already see it being deployed through the back door as “shadow tech” by one person or department, without any executive guidance (or even awareness).

It doesn’t have to be that way, though! Even if you are a paper-and-pencil, “print out the website for me” executive with no appetite for new technology, you can provide the framework and mandate for the ethical and intentional use of AI within your org.

Here’s how every executive can help ethically manage the use of AI:

1. Develop positive & clear feedback loops: Make it safe for your staff to share where they are experimenting with AI within the org. Trust me, it’s happening, and you want to know about it so you can offer feedback and guidance. Blanket bans or squashed initiatives will just turn it into shadow tech that you, as a leader, aren’t aware of, yet are legally and ethically accountable for.

2. Define clear risk evaluation criteria for staff: Get your staff thinking about how and where they should use AI, with risk and quality of delivery as their guiding stars. Ask your staff, by department, to identify the areas where AI automation or generated content is:

  • Risky for your organization’s reputation or those you serve
  • Helpful & reduces repetitive or unwelcome tasks

3. Ask staff to provide feedback on AI wins & losses: Define guidelines for evaluating how AI is working for each use case. Your staff will NOT be experts in this technology at first, so your focus as a leader is on blunting and deflecting use in areas with large reputational risk or possible harm to those you serve, while encouraging staff to evaluate, review, and improve their use month over month. Generative AI tools like ChatGPT, in particular, require learning a new skill called “prompt crafting”: the structure of the sentences and the words used in requests to these systems dramatically impact the quality and utility of the content generated.
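To make “prompt crafting” concrete, here is a minimal, hypothetical illustration (the nonprofit scenario and all wording are invented for this example): the same request phrased vaguely versus with the role, audience, length, tone, and review step spelled out.

```python
# Hypothetical example of prompt crafting: two ways to ask an AI
# for the same piece of content.

vague_prompt = "Write about our fundraiser."

crafted_prompt = (
    "You are a communications assistant for a food-bank nonprofit. "
    "Draft a roughly 150-word first-draft email inviting past donors "
    "to our spring fundraiser. Use a warm, plain-spoken tone and end "
    "with a single call to action. A human editor will revise it."
)

# The crafted prompt supplies role, audience, length, tone, and a
# human-review step -- the details that drive output quality.
print(crafted_prompt)
```

The specifics in the second prompt (who is “speaking,” who the reader is, how long, what tone) are exactly the kind of craft staff will develop through the month-over-month review loop described above.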

4. Ensure cross-department collaboration can occur: Define your AI use cases so everyone has the same vocabulary for discussing what they are trying, learning to use, and where your organization has major wins or challenges. A few to get you started:

  • Generating Content – Where in your organization will you use AI such as ChatGPT to create content (hopefully first drafts that will be heavily edited) such as blog posts, FAQs, and summaries of longer-form content?
  • Automated Classifying & Suggesting – You teach the AI some basic rules, review the AI’s choices a few times, and it starts automatically classifying or organizing things. CRMs are building this into a lot of functionality, like “suggested next engagement” with an audience member.
  • Algorithmic Selection – Find me the best X in this large group of things. This is being heavily deployed in HR systems to sift through resumes. It’s starting to be deployed more widely for questions like “Who is my most likely big donor?” or discovering prospects.
  • Training Content – Many of the AI models you see today are trained on “publicly available on the internet” data, a mixture of perfectly reasonable content and some truly deplorable stuff. It is increasingly possible, however, to “self-train” AI systems on your own content. This usually produces much higher-quality results and gives you deeper insight into what is influencing the AI’s choices. Encouraging your team to investigate self-training will pay dividends.
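The “Automated Classifying & Suggesting” pattern above can be sketched in a few lines. This is a deliberately simplified, hypothetical stand-in for a real CRM feature: staff-defined keyword rules do the classifying, and anything the rules can’t handle is flagged for human review, mirroring the teach-then-review loop described in the bullet.

```python
# Hypothetical sketch of the "classify, then human-review" loop.
# The categories and keywords below are invented for illustration.

RULES = {  # staff-defined category -> keywords
    "volunteer": ["volunteer", "shift", "signup"],
    "donation": ["donate", "gift", "pledge"],
}

def classify(message: str) -> tuple[str, bool]:
    """Return (suggested_category, needs_review).

    Flags the message for human review when no rule matches,
    so a person stays in the loop for anything the system
    hasn't been taught to handle.
    """
    text = message.lower()
    for category, keywords in RULES.items():
        if any(keyword in text for keyword in keywords):
            return category, False
    return "unclassified", True  # route to a person

suggestion, needs_review = classify("I'd like to pledge a monthly gift.")
```

Real systems replace the keyword rules with a trained model, but the governance point is the same: the review flag is part of the design, not an afterthought.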

5. Be open with your audience: Rather than being embarrassed to use AI, get ahead of the communications curve by explaining where, how, and why you are using it. Wired magazine recently did a great job of openly sharing and explaining its use of AI to its audience.

There are lots of places where AI might allow mission-driven organizations to direct more of their funders’ or donors’ dollars toward directly advancing their mission. Everyone who uses electricity knows that technology can be not just useful but essential for getting things done, and also dangerous and harmful to you and others if used without proper care. AI is no different: used ethically, thoughtfully, and with open intention, it can help you spend more time on making things better. If you’d like to collaborate on the ethical and intentional use of AI at your organization, please reach out and let’s start a conversation.