by Stacey Lett, Director of Operations, Eastern U.S. – Proactive Technologies, Inc.
Nearly everyone will agree that new technology has been introduced at an alarming and accelerating rate – so much so that we seem to have little time to become competent with one version before it is replaced by another, undercutting the touted “it will make you more productive” claims meant to reassure us. Now we see Artificial Intelligence (“AI”) being pushed onto our phones, our desktop browsers and applications, and the mundane equipment we use daily; robots to replace workers; and now human resource department functions to make hiring decisions about humans (but, ironically, not about robots).
On matters concerning the implementation of technology and its effects on consumers, lawmakers have been slow to act or are absent from the discussion altogether. Consequently, decisions on ethical use, consumer privacy, theft of intellectual property and proprietary information, and cybersecurity in general are made by the profit-oriented initiators of the technology. And when Wall Street investors latch on, the push to proliferate the new technology to all corners of society is overwhelming…until the next fad comes along that investors can cash in on.
Specifically, with regard to cyber breaches, “there were 2,365 cyberattacks in 2023, with 343,338,964 victims. Around the world, a data breach cost $4.88 million on average in 2024. Business email compromises accounted for over $2.9 billion in losses in 2023.”
Concerning identity theft, “American adults lost a total of $43 billion to identity fraud in 2023…That includes $23 billion lost to traditional identity fraud, which affected about the same number of people – 15 million – as in 2022 (when the number was 15.4 million). But total losses grew by 13 percent last year, according to the report, “Resolving the Shattered Identity Crisis,” produced by Javelin Strategy & Research.”
Concerns over AI adoption have come from every angle, and yet little has been done to build enforceable “guard rails” to ensure its safe use – not just for the user but for collateral populations as well. Among the criticisms of AI that have emerged:
- Its use in deepfakes, from those witnessed in the recent election to outright crime. “Deepfake scams have robbed companies of millions. Experts warn it could get worse. A Hong Kong finance worker was duped into transferring $25 million to a fraudster that had deepfaked his chief financial officer and ordered the transfer via video call.”
- Regarding identity theft and identity faking, “…artificial intelligence (AI) has led to a significant increase in the sophistication of cybercrime. From deepfake technology to AI-powered hacking, cybercriminals are exploiting these advancements to orchestrate unique attacks.”
- AI “hallucinations.” Generative AI tools also carry the potential for misleading outputs. Tools like ChatGPT, Copilot, and Gemini have been found to provide users with fabricated data that appears authentic. These inaccuracies are so common that they have earned their own moniker: “hallucinations.” If you have to question or double-check AI results, isn’t it more efficient and less expensive to do it right the first time?
- Theft of intellectual and proprietary property while “teaching itself” and storing that information in an unknown location. “The intersection of AI and intellectual property rights is gaining attention, with concerns about AI’s potential for intellectual property theft. Notably, Zoom’s recent policy change raised eyebrows about user data usage for AI training, while authors like Sarah Silverman have sued tech giants OpenAI and Meta, alleging unauthorized use of their works to train AI models.” As AI evolves, balancing innovation and intellectual property protection becomes crucial.
- Detachment from an actual “customer relations experience.” “Forgetting to monitor and optimize automated systems over time could be the downfall of your AI-powered customer service… After all, the effectiveness of AI heavily depends on human input. Measure everything, double down on what works and cut off needless automation that does more harm than good. Never automate for automation’s sake.”
- AI’s use to ramp up denials of legitimate health insurance claims. A deep public resentment of the healthcare industry, in general, has developed in response to these practices.
- Rent algorithms used by “cooperating” apartment owners to artificially inflate rent prices, now being litigated. Cooperating investment industry players have used them to dominate housing markets, driving home values and rents to unreasonable levels that damage communities and have fueled an unaffordability crisis. Waiting for state agencies to intervene with consumer protections may be futile, since much of their budget comes from property taxes on those same overvalued residential and commercial properties.
- Use in Psychiatry comes with caution and warnings. “Despite the enthusiasm for the potential impact AI can make in psychiatry, the industry is cautious and slow to implement the technology given its limitations. Although there are many potential limitations, common concerns include accuracy, quality, and transparency of training data; confabulated outputs (hallucinations); and biases.”
- FBI’s New Warning About AI-driven Scams That Are After Your Cash – Understanding the Threat of Deepfakes. “The FBI is issuing a warning that criminals are increasingly using generative AI technologies, particularly deepfakes, to exploit unsuspecting individuals. This alert serves as a reminder of the growing sophistication and accessibility of these technologies and the urgent need for vigilance in protecting ourselves from potential scams.”
- ‘Should We be Concerned?’: During a Senate Judiciary Committee hearing in 2023, Senator Josh Hawley questioned OpenAI CEO Sam Altman over the potential threats of AI systems.
On May 30, 2023, hundreds of artificial intelligence experts and other notable figures signed a short Statement on AI Risk: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Even Geoffrey Hinton, often called the “godfather of AI” and a Nobel Prize winner for his work in machine learning, issued a warning: “I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control.”
According to the Equal Employment Opportunity Commission, “Between 2022-23, the number of charges filed with the EEOC related to hiring increased by 25% across key areas of discrimination. Before the pandemic, employment discrimination cases had declined, averaging around 60,000 per year. However, since 2020, there has been a steady rise: from approximately 63,000 cases in 2020 to 74,000 in 2022 and around 80,000 last year, marking a 10% increase in just one year.” Some of this can be attributed to the introduction of AI in hiring, some just misplaced HR decisions…hopefully not used to train AI.
“The most effective HR transformations will leverage AI’s strengths while preserving the irreplaceable human elements of empathy, judgment and creativity,” wrote Mary Faulkner in the Human Resource Executive article The promise, potential—and perils—of AI in HR transformation.
In an article entitled, The Promise and Peril of Artificial Intelligence written for SHRM by Theresa Agovino, “Many employers and employees have fallen in love with the capabilities of ChatGPT and other generative AI without considering the potential consequences of their use.”
In an article in Forbes entitled Navigating The Promise And Peril Of Generative AI In HR by Shay David, Forbes Councils Member, Forbes Human Resources Council and Co-founder, Chairman and CEO of retrain.ai, a responsible AI-driven Talent Intelligence Platform, “In HR, for example, this could mean an AI system inadvertently disseminates incorrect or outdated information about a company’s policies or job roles, leading to a ripple effect of confusion and potentially serious legal complications.”
In addition, bias remains a thorny issue. AI models learn from existing data, which may unintentionally reflect historical biases. Without careful management, these AI systems have the potential to perpetuate these biases, leading to skewed hiring or promotional decisions.
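To see why biased historical data alone can skew AI-assisted hiring, consider a deliberately simplified sketch. This is a hypothetical illustration, not any actual HR product or vendor algorithm: the records, group names, and “model” below are fabricated for demonstration. A system that learns nothing but historical hire rates will faithfully reproduce whatever skew existed in past decisions.

```python
# Hypothetical illustration of bias perpetuation: a toy screening "model"
# trained only on historical hiring outcomes. All data is fabricated.
from collections import defaultdict

# Fabricated historical records: (school_attended, was_hired).
# Suppose past recruiters favored "State U" graduates.
history = [
    ("State U", True), ("State U", True), ("State U", True),
    ("State U", False),
    ("City College", True), ("City College", False),
    ("City College", False), ("City College", False),
]

def train(records):
    """Learn the historical hire rate for each group."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, was_hired in records:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

def recommend(model, group, threshold=0.5):
    """Recommend interviews only for groups with a high historical rate."""
    return model[group] >= threshold

model = train(history)
print(model)                             # {'State U': 0.75, 'City College': 0.25}
print(recommend(model, "State U"))       # True
print(recommend(model, "City College"))  # False: the historical skew is reproduced
```

Nothing in the code is malicious; the skew comes entirely from the training data. Real screening systems are far more complex, but the same mechanism applies, which is why careful auditing of training data and outcomes is essential.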
Moreover, privacy and trust are critical concerns. The use of AI in HR often involves collecting and analyzing personal data, which raises privacy questions.
Let us not forget how AI can be used by job applicants in nefarious ways. In an article in HR Dive by Jill Barth entitled, Could That New Hire Be a Deepfake? These Pros Say the Risk is Growing, “Deepfakes—AI-generated fake videos, photos or audio—are increasingly being used to impersonate job candidates and company executives, creating new challenges for business leaders. In 2025, HR and recruiting professionals must consider: Is the person they’re interacting with real or an AI-generated mirage? Given the risks involved, this question is critical, experts say.”
If you are thinking that licensing packaged AI software for HR is the solution, the questions remain. Who trained it? Has it been properly vetted for biases and hallucinations that could place the firm in legal trouble? Are employee personal information, company intellectual property and proprietary information safe? Who has ultimate liability for harm caused? None of this has yet been defined by our lawmakers or the courts.
Before racing to embrace AI for human resources functions because the media makes it seem like everyone else is, take a deep breath and review the big picture. Do the benefits outweigh the risks…which can be substantial? Can AI be trusted? Can AI “hallucinations” lead to legal challenges?
Sometimes the simple way is the less expensive, more secure and effective way.
Stacey Lett is Director of Operations, Eastern U.S. – Proactive Technologies, Inc., which addresses entrenched worker development and management challenges. If you have shed your fear of even looking for solutions that build on established precedent, check out Proactive Technologies’ structured on-the-job training system approach to see how it might work at your firm, your family of facilities or your region. Contact a Proactive Technologies representative today to schedule a GoToMeeting videoconference briefing to your computer, which can be followed up with an onsite presentation for you and your colleagues. As always, an onsite presentation can be a first step as well.