“In a few words, tell me about your injuries”
Remember when companies started automating customer service calls? At first, it was “Press 1 for…,” but increasingly these systems are designed to sound human, greeting you with “In a few words, tell me why you’re calling…” or handing you off to an “AI” chatbot.
Artificial Intelligence (“AI”) is here. Well, kind of. It depends on whom you ask. Regardless, the “AI” of today (e.g., ChatGPT) is being implemented in almost every industry you can think of, from fast food and retail to medicine and law. Those at the top love it. It cuts labor, automates processes, and delivers results faster. They say it can be used to reduce “human error.” As this trend progresses, the effects will be dramatic and harmful not only to workers but also to injury victims.
You’re involved in an automobile accident. You (or your attorney) submit all of your information to the tortfeasor’s insurance company, and instead of a human adjuster (with emotions, life experiences, and industry experience), your information is dumped into a machine and processed by whatever form of machine learning is being marketed as “AI” at the time. The machine doesn’t care who you are. It doesn’t care about your particular circumstances. It has no life experience, no emotion, no industry experience. It’s a machine “made to think like a human” that doesn’t really think like a human.
The information you submit is then benchmarked by an underwriting system that categorizes, calculates, and ultimately assigns a value to your claim based on nothing but datasets. Do you have a herniation? Which level? Category Z1. Did you break an arm? Dominant or non-dominant? Category X9. Soft tissue? Denial. The single goal of that machine is to maximize shareholder profits, and to do that, it must collect as much as it can in premiums and minimize claim payouts. And if you’re the insured and the “system” deems you too much of a risk (i.e., incompatible with maximizing shareholder profits), your insurance will be canceled automatically, of course.
Maybe there will be an appeal process in which an actual human reviews the claim. But it’s only a matter of time until that process is itself automated.
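To make the concern concrete, here is a purely hypothetical sketch, in Python, of the kind of rules-based claim triage described above. The category codes Z1 and X9 come from the example; the payout figures, injury labels, and lookup logic are invented for illustration and do not reflect any real insurer’s system.

```python
# Hypothetical illustration of automated claim triage.
# All category codes, dollar amounts, and rules are invented.

def categorize_claim(injury: str, detail: str) -> tuple[str, int]:
    """Map an injury description to an (invented) category and payout in USD."""
    rules = {
        ("herniation", "L4-L5"): ("Z1", 15000),
        ("broken arm", "dominant"): ("X9", 12000),
        ("broken arm", "non-dominant"): ("X9", 9000),
    }
    if injury == "soft tissue":
        # A blanket rule: soft-tissue claims are denied regardless of the facts.
        return ("DENIED", 0)
    # Anything the rules don't cover is shunted to a generic bucket --
    # no human judgment, no consideration of individual circumstances.
    return rules.get((injury, detail), ("REVIEW", 0))

print(categorize_claim("soft tissue", ""))        # -> ('DENIED', 0)
print(categorize_claim("broken arm", "dominant")) # -> ('X9', 12000)
```

The point of the sketch is the absence of discretion: every claimant with the same labels gets the same outcome, and the edge cases that a human adjuster would weigh simply fall through to a default.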
Increase in lawsuits filed
The likely scenario, in an effort to get a real person to understand you, your claim, and your injuries, will be to litigate your claim (until that’s automated, too). You’ll have to file a lawsuit in court and hope that you get a sympathetic defense attorney or jury. Obviously, this is going to increase docket sizes, which will cause one of two things to happen: either your lawsuit will stagnate, or you’ll be put on a “rocket docket” where the court can’t hear everything necessary as the case develops.
The outlook isn’t positive. The days of talking through injuries, treatment, and negotiations with human adjusters are numbered.
What does ChatGPT say about AI?
Don’t take my word for it; just ask ChatGPT.
Me: Are there negative effects of AI?
ChatGPT: Yes. There are negative effects of AI (ChatGPT provides this long list):
- Job Displacement. Automation driven by AI can lead to job losses in certain industries, affecting employment opportunities for many individuals.
- Bias and Discrimination. AI algorithms can inadvertently perpetuate biases present in the data they are trained on, leading to discriminatory outcomes in areas like hiring, lending, and law enforcement.
- Privacy Concerns. The widespread use of AI-powered technologies can lead to privacy breaches and data misuse, raising concerns about the protection of personal information.
- Security Risks. AI can be used for malicious purposes, such as creating sophisticated cyber attacks or deep fake content, posing risks to security and trust.
- Dependence on AI. Over-reliance on AI systems can diminish human skills and decision-making abilities, potentially leading to human complacency or negligence in critical situations.
- Ethical Dilemmas. AI can raise complex ethical dilemmas, such as decisions made by autonomous vehicles in life-threatening situations or the use of AI in military applications.
- Psychological Impact. The integration of AI into various aspects of life may influence human behavior, social interactions, and emotional well-being in unforeseen ways.
- Control and Governance. The lack of proper regulation and governance of AI technologies can lead to misuse or abuse, undermining trust in AI systems.
Striking a balance
To mitigate these negative effects, it is essential for society to prioritize responsible AI development, transparent algorithms, unbiased data, and ethical considerations in AI applications. Striking a balance between the benefits and potential risks of AI will be critical for ensuring its positive impact on humanity.
“Why do our lives today feel so alienated when our technologies are supposed to improve lives? And why do we feel so disconnected when our inventions are meant to connect each and every one of us?” – Anicka Yi
Authored by Corey Friedman