
Navigating Opportunities, Challenges, and Progress in a Rapidly Evolving Landscape


As “agnostic technologists” at ParsonsTKO, we strive to scrutinize emerging technologies, identifying their potential implications for nonprofit organizations as much as we value and celebrate their benefits. We acknowledge the exhilarating (and exhausting) possibilities these innovations offer, yet we also shed light on the inherent challenges and risks they might introduce. Our research into artificial intelligence is no different. As we learn about AI and its evolution, we explore potential scenarios to prepare ourselves and our clients for the benefits and risks ahead.

A quick primer – what powers Artificial Intelligence (AI)?

Most AI tools are built on Large Language Models (LLMs): complex, advanced algorithms (essentially computer “brains”) that understand and use human language. LLMs started with a simple goal: help computers understand and make sense of human language. They’re not just big collections of content and datasets scraped from the internet, however, but smart systems that can learn from and make inferences about that data and content. Some of the first consumer-level applications of language models showed up as voice assistants (Amazon’s Alexa and Apple’s Siri, for example). Over time, companies like OpenAI, Google, and Microsoft built more advanced LLMs, such as GPT and BERT. These models learned language from billions of sentences online, which helps them understand and respond more naturally.
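
To make that concrete, here’s a minimal sketch of what an LLM does at its core: given some text, predict what plausibly comes next. It assumes the open-source Hugging Face transformers library and the small, freely available GPT-2 model; the prompt is purely illustrative.

```python
# A minimal sketch of the core LLM trick: continue a piece of text by
# predicting likely next words. Assumes `pip install transformers torch`.
from transformers import pipeline

# GPT-2 is a small, older open model, fine for illustrating the idea.
generator = pipeline("text-generation", model="gpt2")

prompt = "Nonprofit organizations can use artificial intelligence to"
result = generator(prompt, max_new_tokens=25, num_return_sequences=1)

print(result[0]["generated_text"])
```

Larger models like GPT-4 work on the same predict-what-comes-next principle, just trained on far more text and tuned to follow instructions.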

However, the LLMs underlying AI tools can unwittingly carry and perpetuate biases present in the data they’re trained on.

In essence, these models learn from our data – and our data can be biased. 

We asked ChatGPT: “Are LLMs biased?”

The response was clear: 

LLMs, like any AI models, can reflect human biases embedded in their training data. These biases can result in biased outputs or uneven familiarity with certain topics. If a language model is trained on a dataset that contains biased, racist or sexist language, it can learn to generate responses that reinforce stereotypes or exhibit discriminatory behavior. 

AI engineers are keenly aware of this bias and are working hard to improve how these systems work so that their outputs don’t cause harm to certain groups of people. LLMs can also be perceived as having certain political leanings, as this Brookings Institution article discusses.

Should my organization steer clear of AI or embrace it?

In practical terms, there’s little debate that AI can enhance productivity. While we wouldn’t recommend swapping a well-researched study, a pitch to a funder, or a critical academic article for a 100% AI-generated replacement, there are simple ways to get started exploring the possibilities.

How to begin? Start with what AI can do well

Think of four time-consuming or repetitive, text-based tasks that require a cognitive lift. Examples: 

  1. Compiling Meeting Summaries: Quickly produce a list of the major decisions made in a critical meeting, with next steps clearly laid out (a minimal code sketch follows this list)
  2. Creating Inclusive, Optimized Content: Provide content creators with faster alt-text image description starters that improve accessibility. Shorten academic, think-tank, and wonky jargon into simpler terminology that is more easily understood by diverse readers (and better for SEO). Quickly reformat rough website content received from a field program officer, organizing it around themes, and ensuring that it follows AP style.
  3. Preparing a Business Case: Convince an executive about the need to hire more staff specialists by finding more compelling language and C-level framing to help bolster your case.
  4. Code Review/QA: Review code and get feedback. Identify potential issues, find improvements, and get recommendations on coding standards and conventions.
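
To illustrate the first task, here’s a minimal sketch using the openai Python package (v1.x client). The model name, prompt wording, and meeting_transcript.txt file are all assumptions for illustration, and the script expects an API key in the OPENAI_API_KEY environment variable.

```python
# A minimal sketch of task 1: ask an LLM to pull decisions and next
# steps out of a meeting transcript. Assumes `pip install openai` and
# an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("meeting_transcript.txt") as f:  # hypothetical transcript file
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You summarize meetings for a nonprofit."},
        {"role": "user", "content": (
            "List the major decisions made in this meeting, then the next "
            "steps, with owners where named:\n\n" + transcript
        )},
    ],
)

print(response.choices[0].message.content)
```

As the governance section below discusses, scrub names and confidential details from a transcript before sending it to a third-party tool.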

The four examples above are prime candidates for AI assistance. Just remember, you need to check AI’s work as it is only as good as the data it learned from.

In our explorations at ParsonsTKO, using AI for repetitive, time-consuming activities, we’ve found that none of the tools we’ve tried are perfect, free of bias, or consistently 100% accurate, even with simple text. However, the generated output is always helpful in creating momentum, getting creative juices flowing, and clearing roadblocks. It can genuinely save time and effort.

Think of AI (now) as a productivity catalyst…

What AI doesn’t do well

Non-verbal observations: For work that relies on non-verbal observations (such as behavioral interviews), LLMs can’t help much yet.

Understanding underlying context or nuance: While AI can help organize and summarize a textual transcript, it can’t yet incorporate empathy, background, or context into the generated output.

Synthesizing images: UX practitioners would never rely completely on the text output of a field study. AI isn’t (yet) able to synthesize images or screenshots, or to recognize facial expressions alongside the text, so don’t expect it to generate sufficient or complete outputs.

AI Tool Governance – getting started

We’re all in the early adoption phase with AI, so considerations will evolve quickly as uptake increases. Right now, though, your organization can be better prepared for AI’s inevitable evolution by dedicating time to research and education and by constructing your organization’s foundational ethical use policies and governance.

Key considerations for building an evolving Ethical Use Policy:

Spell out your organization’s commitment to using AI responsibly, focusing on fairness, bias reduction, accountability, transparency, and privacy.

  1. Ensure your organization is thinking about AI data protection and privacy, including how data is collected, stored, and used with these tools. As LLMs learn from what’s fed into them, it’s important to be cognizant of what your teams are inputting into these tools. As a start:
    • Textual documents containing confidential information, intellectual property, or personally identifiable information (PII) should be tightly reined in.
    • Don’t upload meeting transcripts, conversations, or documents without scrubbing names and organizations. Anonymizing this data is key! (A minimal scrubbing sketch follows this list.)
  2. Build in operational transparency: How will AI decisions be made and communicated? Provide clear guidelines for staff, volunteers, and vendors. 
  3. Communication planning and outreach: Do your supporters know how you use AI? How is consent gathered and maintained? Prepare now for personal data questions; as we know, AI usage with supporter data isn’t far off.
  4. What are your procedures for detecting, monitoring, and mitigating data bias? Start by becoming familiar with what to look for and with the kinds of tests and tools that can assist your efforts as AI grows in sophistication. (The Brookings Institution published a research article on this topic; this LinkedIn article is also helpful to review.)
  5. Most organizations have thorough data use policies and terms and conditions on their public-facing website – how can AI best be threaded into your existing efforts?  
  6. Lean into evolving efforts in the nonprofit sector, such as Fundraising.AI, focusing on the intersection of responsibility, ethics, and fundraising. 
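
On the anonymization point in item 1 above, even a short script can catch obvious identifiers before a document goes anywhere near an AI tool. Below is a minimal, regex-based sketch; the patterns and name list are illustrative assumptions, not a complete solution, and a real workflow should pair automated scrubbing with human review or a dedicated tool (Microsoft’s open-source Presidio is one example).

```python
# A minimal sketch of pre-upload scrubbing: mask emails, phone numbers,
# and known names before text is pasted into an AI tool. The patterns
# and the name list are illustrative, not exhaustive.
import re

KNOWN_NAMES = ["Jane Doe", "Acme Foundation"]  # hypothetical examples

def scrub(text: str) -> str:
    # Mask email addresses.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Mask US-style phone numbers.
    text = re.sub(r"\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}", "[PHONE]", text)
    # Mask names and organizations you know appear in your documents.
    for name in KNOWN_NAMES:
        text = text.replace(name, "[REDACTED]")
    return text

print(scrub("Email Jane Doe at jane@acme.org or call 555-123-4567."))
# -> Email [REDACTED] at [EMAIL] or call [PHONE].
```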

Some AI tools we’re exploring that could benefit your organization:

  1. ChatGPT (https://openai.com/chatgpt) – A versatile AI tool available free and as an iOS app. At the moment, this is the product many nonprofit practitioners and early adopters are experimenting with. 
  2. Microsoft Edge + Bing (https://www.bing.com/new) – An integrated browser and search engine built on a comprehensive array of LLMs.
  3. Bard by Google (https://bard.google.com/) – Google’s AI assistant for enhanced search and workspace productivity. Think of this product as Google’s response to ChatGPT.
  4. Otter.ai (https://otter.ai/) – A transcription and thematic summarizing service for audio. It integrates easily with video platforms such as Zoom to provide speech-to-text documentation and to document speakers and their ideas. It’s fantastic for interviews, but keep in mind it won’t provide a complete summary of an interview’s activities.
  5. Astica Computer Vision & Speech Utilities (https://www.astica.org/vision/describe/) – Provides image description generation and speech-to-text. While limited, it’s a good catalyst for supporting some initial accessibility practices in content generation.
  6. Fireflies.ai (https://fireflies.ai) – An AI notetaker for meetings. It can also handle limited competitive research and assist with marketing copy.
  7. Beautiful.ai (https://www.beautiful.ai/ai-presentations) – Provides a design bot to help build out branded presentations. It’s not impressive at improving existing presentations, but it can help if you need a place to start.
  8. Tome (https://tome.app/) – Facilitates idea generation for presentation outlines and content.
  9. Neuraltext (https://www.neuraltext.com/) – Automates content creation.
  10. Jasper (https://www.jasper.ai/) – Generates on-brand content.
  11. Salesforce Einstein GPT (https://www.salesforce.com/products/einstein/overview/) – Salesforce’s AI for automation, marketing, sales, case management, and lead generation, using audience interactions as prompts, built on top of an earlier predictive analytics platform. 

Remember, this list represents only a snapshot of the ever-evolving AI landscape. It’s hard to predict the future, but by the time this post is weeks or months old, the landscape will look quite different. Standard CRM systems may have AI analysis and response components built into their interfaces, and social and email tools may be easily trained to reply smartly to inquiries and comments with improved accuracy and precision. We’ll be watching!

But, no matter how AI may evolve, our goal at ParsonsTKO remains the same – to find ways to use AI to do good, ethical work. We want to use AI to help find efficiencies so that staff at mission-driven organizations can more effectively support the communities and constituents they serve while continuing to minimize bias and harm. 

ParsonsTKO can support your organization in a number of ways when it comes to AI:

  • Identify areas within your business processes where AI could benefit your work and optimize outputs, as well as areas where it may increase security or negligence risk.
  • Help construct your AI Ethical Use Policy and internal governance.
  • Help uncover areas of innovation within your work to support your clients and programs in new ways.

What are you using AI for at your organization? We’d love to hear from you.

[Full disclosure: I used AI as an aid in completing this blog post; Otter.ai was used to transcribe an initial, rough Apple Voice Memo I recorded while on my morning walk, then I used ChatGPT to better organize my thoughts and to make grammatical recommendations.]