
Image by Jess Dunham
Meet Yasmine, a youth development expert at the YMCA of Greater Cincinnati. She recently gave me some advice when I asked, “How do I deal with an unruly kid?” Yasmine politely instructed me to first “reframe unruly as a signal, not a trait” and suggested that I instead ask, “What unmet need might this behavior be expressing? Is this a trauma response, a bid for connection, or a reaction to unclear expectations?”
The adept answer didn’t come from someone inside a local YMCA branch. Yasmine isn’t human—she’s artificial intelligence. More specifically, she’s an AI agent built inside ChatGPT.
Yasmine went on to encourage me to integrate practices that could help the child learn self-awareness, self-management, and relationship-building skills. She then offered to design me a lesson plan or answer follow-up questions.
“I was using ChatGPT and immediately began to see its extraordinary capacity to help individuals do their work,” says Jorge Perez, president and CEO of the Greater Cincinnati YMCA system and the human who created Yasmine. “There are about 15,000 youth development leaders in this country who work at the YMCA. All of a sudden, they have a leader that can answer questions like, How do I design a STEM program for seventh grade girls? Or I’m about to talk to parents and I want to encourage them to be great parents to their children. What can I tell them? Yasmine has all the knowledge and is available 24/7.”
Today, the entire YMCA organization across the U.S. and Canada is using Yasmine and 15 other “AI advisors” built here in Cincinnati. There’s Morgan in marketing, Frankie in finance, Harmony in human resources, and so on, each trained at a master’s degree level in their specialty and capable of taking up to 150,000 queries simultaneously and delivering an answer in seconds.
“We’re not using AI to find a way to replace people but really to empower them to supercharge their ability and add it to their toolbox,” says Perez. “The Y’s mission is to help people achieve, relate, and belong. AI could become an extension to that strategy, and I believe we have an opportunity to scale our assistance to levels that seemed impossible before.”
Artificial intelligence has been around for some time but is now barreling into our daily lives with great speed. Less than three years after the public release of ChatGPT, organizations like the YMCA, businesses large and small, government agencies, and everyday people are embracing AI tools across the globe.
When it comes to AI, the stakes couldn’t be higher—the future of humanity is on the table. While the opportunities appear endless, with some believing AI can cure cancer or solve world hunger, the risks are infinite as well, including the spread of misinformation, destruction of jobs, and even the replacement or destruction of humans entirely.
At its core, artificial intelligence is technology that “enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity, and autonomy,” according to IBM. Technology has been evolving for centuries to make AI possible, and you’re using it today if you speak to Alexa or Siri, ask a chatbot a question on a website, or choose from your personalized list of shows and movies on Netflix.
These days, AI tools can analyze and interpret data; generate human-like text and create images, videos, and music; transcribe, translate, and synthesize speech in a multitude of languages and accents; predict trends; and write code. They can control robots doing manufacturing jobs and operate self-driving vehicles and drones. If you know very little about how AI works, just understand that it’s totally different from a search engine like Google.
The field is evolving fast, says Kendra Ramirez, cofounder of CincyAI for Humans, a 1,500-member group that meets monthly at the University of Cincinnati. She encourages everyone to begin building an understanding of AI in order to take advantage of its opportunities and avoid its snares.
“In every technology shift, it’s women, minorities, small businesses, and nonprofits that get left behind,” says Ramirez. “I feel passionately that AI democratizes that trend, because anyone can use it. There’s a huge assumption that you have to be technical to use AI. But if you can speak or type, you can use it.”
Ramirez and Perez are part of a growing community of AI adopters here in Cincinnati pushing for the region to become a leader in responsible or ethical AI. “What’s at stake isn’t just jobs or industries, it’s trust and human potential,” Ramirez says. “AI has the power to amplify the best in us or, if misused, accelerate the gaps. That’s why it’s so important for everyday people, not just technologists, to be in these conversations.”

Photograph courtesy CincyAI
Back in October 1950, the English mathematician and computer scientist Alan Turing posed the question Can machines think? in his paper “Computing Machinery and Intelligence,” published in the academic journal Mind. He laid out what became known as the Turing Test, in which an interrogator poses the same set of questions to a human and a computer program. If the interrogator couldn’t discern which party supplied which answers, the computer could be said to be thinking.
This test kickstarted research and development into “thinking machines,” and six years later John McCarthy, a professor at Dartmouth College, chose the phrase “artificial intelligence” when putting together a summer workshop to clarify and develop ideas around the technology. His workshop is widely considered the founding moment for AI.
By the 1960s and early 1970s, an artificially intelligent program had passed the freshman calculus final at MIT. Other computer programs could play and win games like checkers and chess. Interest and funding waned from the mid-1970s through the early 1990s, as advancements in AI stalled and computer scientists began to wonder whether the technology could improve.
Interest returned by the 2000s as computing capabilities advanced, and the goal of AI research shifted away from creating a multipurpose, fully intelligent machine to specific tools that could solve specific problems.
Today, AI is solving problems all over, says Ramirez, who started dabbling in AI about seven years ago and has tested more than 100 different AI tools. Her digital and AI agency helps small- to medium-sized businesses and nonprofits with AI education, readiness, strategy, implementation, policy creation, and training, as well as with creating an internal AI Task Force or AI Council to keep up with AI’s constant evolution.
“I started having conversations with people one-on-one about AI and what is being built locally and wanted to get all of us together to share what we were learning and building, so I threw out a LinkedIn post asking if anyone would be interested in meeting up,” says Ramirez. “Over 14,000 views and 200 comments later, it became a reality quickly.”
CincyAI for Humans meetings attract a mix of AI builders, businesspeople, and ordinary individuals wanting to better navigate AI, Ramirez says. It’s an informal get-together where attendees are invited to take the floor for up to three minutes to share a tip, tool, or use case or ask the community a question. “It’s just so magical in that room,” she says. “So many people have gotten job opportunities or business partnerships or solved some problems or identified a tool to solve a problem.”
How does AI solve problems? The computer programs are constructed of varying levels of complex artificial neural networks and mathematical models that enable learning, according to the International Organization for Standardization. “At their core, they are an imitation of the human brain,” ISO’s website explains. “Made up of layers of interconnected nodes—called artificial neurons or perceptrons—each artificial neuron takes in inputs, performs calculations, and generates an output. These outputs are then passed on to the next layer of perceptrons, creating a hierarchical structure. The power of neural networks lies in their ability to learn and recognize patterns in data.”
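For the technically curious, the ISO description above can be sketched in a few lines of Python. This is a toy illustration of a single artificial neuron (a perceptron), not any real production system: inputs are weighted, summed, and passed through an activation step.

```python
# A toy artificial neuron (perceptron): inputs are weighted,
# summed with a bias, and passed through a simple activation.

def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step activation: the neuron "fires" (outputs 1) if the sum is positive
    return 1 if total > 0 else 0

# With these example weights, the neuron fires only when both inputs are on,
# behaving like a logical AND gate
print(perceptron([1, 1], [0.6, 0.6], -1.0))  # 1
print(perceptron([1, 0], [0.6, 0.6], -1.0))  # 0
```

Real neural networks stack millions of such units into layers and, crucially, learn the weights from data rather than having them set by hand.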
AI can tackle problems as simple as filtering spam out of our e-mail and as complex as predicting the effects of genetic mutations. “ChatGPT was pre-trained on 175 billion parameters, which doesn’t mean a whole lot to all of us, but it would take about 500,000 lifetimes to read that much data,” Ramirez says. “At its very foundational level, artificial intelligence is the ability for a computer to see, think, learn, and do over and over. Because it’s constantly learning, ChatGPT is estimated to now be more than a trillion parameters.”
ChatGPT, created by OpenAI, is just one example of a growing number of AI-driven “answer machines.” Others include Google Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok, and Anthropic’s Claude.
“AI can be a force multiplier for human creativity, productivity, and even compassion,” says Ramirez. “It can help us solve problems we’ve struggled with for decades like innovation in healthcare, making education more personalized, supporting mental health, and making small and medium businesses more competitive. What’s possible? New careers that don’t even exist yet. Tools that remove friction from our daily work. Innovations that bring access to more people.”

Photograph by Andrew Doench
I decided to interact with ChatGPT, so I logged in and told it I was a journalist writing an article about AI for the Cincinnati region. “What should I warn people about AI?” I asked. Here’s the list it provided: Misinformation and deepfakes, bias and discrimination, job displacement, privacy invasion, lack of regulation, overreliance, and hallucination.
To be blunt, artificial intelligence poses short- and long-term risks across every sector of industry and in every corner of our lives, says Daniel Schiff, an assistant professor of technology policy at Purdue University’s Department of Political Science. As co-director of the school’s Governance and Responsible AI Lab, he studies the formal and informal governance of AI through policy and industry and its social and ethical implications in areas like education, finance, and criminal justice.
Schiff began studying robotics and intelligent systems while earning a bachelor’s degree in philosophy at Princeton University and continued a focus on AI as he completed a master’s degree in social policy at the University of Pennsylvania and a doctorate in public policy from the Georgia Institute of Technology.
“Even in the very early days, some of the first people working on AI were thinking about the threats posed by machine consciousness and autonomous weapons,” says Schiff. “The benefits of AI are very possible and we’re seeing some of them, but I worry more about the risks than the benefits.”
He says one pitfall all humans should be wary of is AI’s ability to spread misinformation and produce deepfakes—voice recordings, images, or videos that have been convincingly altered or manipulated to misrepresent someone as doing or saying something they didn’t say or do. Deepfakes have been used to trick grandmothers into sending money to strangers, manipulate voters in elections, and generate child pornography, all clearly unwanted and often unlawful uses.
Schiff advises never entering private information like bank account or Social Security numbers into any AI-driven search and discovery tool, because that data may be stored, shared, or used to train future models. And he wants us to understand that AI can hallucinate, meaning it can produce false, misleading, or nonsensical outputs when there are errors in its processing and answer generation.
Awareness is key for citizens and consumers, Schiff says, so we can’t simply ignore AI. “AI literacy includes a mix of knowledge, skills, and attitudes that are technical, social, and ethical in nature,” he wrote in a recent blog post. “Exactly how much literacy everyone needs and how to get there is a much tougher question.”
When using AI, he says it’s best to check multiple sources to verify what you’re being told, to understand who made the AI tool you’re using, and to know where your data is going.
AI can perpetuate biases about race, religion, culture, and other human characteristics, says Christie Kuhns, president and CEO of the Urban League of Southwestern Ohio. As an organization on a mission to disrupt generational poverty, the Urban League wants to avoid reinforcing unfair stereotypes and offering inaccurate information, she says, so the organization’s entire staff recently underwent AI Essentials training to focus on responsible use.
“We didn’t want people to feel like AI lives in the IT department,” Kuhns says. “It lives in every role in everything we do. It’s not something that you can just say, Oh, that’s down the hall.”
According to the Future of Jobs Report released by the World Economic Forum in January, 86 percent of employers expected AI to transform their businesses by 2030. Kuhns, Perez, and other nonprofit leaders, including Jeremy Brown at Talbert House, are now meeting routinely to discuss how regional nonprofits can avoid getting left behind in this major shift.
“When you’re in the business of trying to serve everyone and you have a limited staff to do it, the idea of all of a sudden having more support from AI is just exciting,” says Perez.
These organizations are already using AI in multiple ways. The YMCA has its virtual agents, like Yasmine, which are trained to avoid human biases. Perez says his organization is looking to build additional AI tools, one of which is an AI agent that can watch surveillance cameras at YMCA facilities and immediately alert staff when a senior falls or a fight breaks out or an adult is having difficulty making friends.
The Urban League uses AI to help with administrative tasks, Kuhns says, like preparing for meetings, summarizing meeting notes, and creating next steps. AI has also been useful for sorting through résumés, vetting job candidates, planning strategy, and analyzing data to measure the outcomes of the Urban League’s programs.
Brown says Talbert House, which offers social services in five counties, is prioritizing AI in every aspect of its operations and services, including matching candidates with the right jobs in its Hamilton County Youth Employment Program; monitoring surveillance cameras in some of its corrections programs; and designing an AI website bot to help visitors find the information they need.
“I don’t speak for all nonprofits,” Brown says, “but there’s always more work to do than there is time in the day for most of our people. I want to encourage staff to use the tools but then also make sure to reinforce that they need to be careful.”
There are limits to what AI can do, Perez says. “At some point a person will say, Look, I really need to talk to somebody. I need to connect. I wanted to get in front of this need for connection in order to teach our AI agents to understand how to help humans become healthier.”
Carl Fraik spent 35 years working for Procter & Gamble but wasn’t ready to stop working when he retired. Instead, he accepted a role as director of research for Corporate Entrepreneur Community, a global consortium focused on finding ways for big companies to innovate like startups.
“The members are some of the larger companies in the world,” says Fraik. “When I asked what things would you like me to go dig into, a number of leaders said, AI is coming. We don’t know what that means for innovation. We don’t know what that means for how to run our organizations. And so my role turned into an AI researcher for this consortium.”
It quickly became clear to Fraik that AI was going to make a huge impact in our lives and there would be winners and losers. “I really wanted to be able to take everything I’d learned and experienced and somehow give that back,” he says. “Cincinnati is very deeply my home. So the question was, OK, how do I triangulate all this?”
He and Pete Blackshaw, another former P&G employee now heavily involved in AI, cofounded a nonprofit called Cincinnati AI Catalyst in November 2023. Their mission is “to improve the lives of people in the Cincinnati region by providing a coordinated, collective artificial intelligence capability, committed to Responsible AI, that enables new products and services, attracts capital, creates and preserves jobs, develops and improves skills, and provides a trusted source of AI-related education.”
With guidance from its board of directors, all regional leaders in AI, Fraik has put forth an AI Blueprint for the Cincinnati Region that calls for the alignment of regional stakeholders in 12 key areas, including education, healthcare, government operations, and workforce development. “What I think a lot of people haven’t experienced yet is creating a relationship with the robot,” he says, adding that—just like with personal computers and smartphones—everyone will need to get over their initial discomfort with AI technology.
Fraik created the blueprint with the help of a persona he created in ChatGPT. He told it, OK, you are my coach and an expert on creating strategies to deliver on societal objectives. He’s been working with that persona for more than a year now and regularly asks, Am I being holistic? Is this the best there is out there?
He hopes to put the plan into action by building ecosystems of understanding around AI, and he’s started by creating opportunities to experience AI in a safe environment. Lately, those have included an AI education session held at the Deerfield Township administration building, a companywide session with employees of Cincinnati Water Works, and many meet-ups during Cincy AI Week in June, including a happy hour for government leaders.
“I often refer to what we’re doing as a movement,” Fraik says. “What’s possible here is for individuals to have incredible freedom and be able to take an idea or take a question to places that were absolutely unimaginable before. But with great freedom comes great responsibility.”
Responsible AI is about maximizing benefits and minimizing risks while also safeguarding human rights, duties, and values, says Schiff at Purdue. Unfortunately, there isn’t much required in the way of public reporting or clear standards, he says, and his research has found that the current audit ecosystem is being built around narrow notions of bias, privacy, and model transparency—not real-world outcomes like well-being or effects on the workplace.
“If you don’t have high-quality auditing and enforcement and penalties, you’re relying on goodwill,” Schiff says. “And while there are well-intentioned people everywhere, I don’t think these safeguards will work as purely voluntary.”
It may all sound scary, and the fear comes from a good place, says Perez, but he still encourages people to give AI a try. He suggests starting with something simple, maybe telling ChatGPT you want help baking a lemon meringue pie or asking a question about one of your hobbies.
“It’s like driving,” he says. “The first time, you were probably so afraid. But at some point you start driving on the highway. You listen to the radio, you can have a conversation on the phone, and you’ve become really proficient.”
Perez is convinced mastering AI will become a key strength in the modern workplace. “I think in the near future people will come to interviews and say, You should choose me as a grant writer, as a community developer, as an IT leader because you’re not just hiring me and my experiences but you’re also hiring the seven or eight agents I’ve created and worked with, and they’re ready to work for you for free.”
Yet he encourages all of us to work on heightening our abilities to discern the interactions we have with AI, pushing them through our own intelligence to determine whether the data is right or wrong and helpful or hurtful.
Schiff reminds us that efficiency is just one value. “Maybe you want to be slow and inefficient, and maybe you want to make mistakes or write in your own voice,” he says. “I would encourage people to find their own way and think about what they want AI to do for them. You should seek meaning and joy and well-being, not just shortcuts or efficiency.”
Ramirez has a favorite T-shirt that reads HI>AI, meaning human intelligence is greater than artificial intelligence. It’s her guiding principle.
“AI is a tool, and you are the human,” she says. “You bring the values, the empathy, the strategy. The decisions we make now—how we use AI, who gets access, how we govern it, and how we teach it to align with human values—will shape everything from the future of work to education to creativity.”

