Judge rebukes law firm using ChatGPT to justify $113,484.62 fee as “utterly and unusually unpersuasive”::Use of AI to calculate legal bill ‘utterly and unusually unpersuasive’
🤔 I should write a program which always outputs a big number.
How much should we charge our customers?
113,484.62
How many programmers did it take to create this AI?
113,484.62
What’s 2+2?
113,484.62
It just always gives you the legal advice you need.
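In the spirit of the joke, the whole "AI" fits in a few lines (only the number comes from the headline; the function name and everything else here is made up):

```python
def legal_fee_ai(question: str) -> str:
    """A hypothetical 'AI' that answers every question with the fee from the headline."""
    return "113,484.62"

for q in ["How much should we charge our customers?", "What's 2+2?"]:
    print(f"{q} -> {legal_fee_ai(q)}")  # always 113,484.62
```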
This is the best summary I could come up with:
The legal eagles at New York-based Cuddy Law tried using OpenAI’s chatbot, despite its penchant for lying and spouting nonsense, to help justify their hefty fees for a recently won trial, a sum the losing side is expected to pay.
NYC federal district Judge Paul Engelmayer, however, rejected the submitted amount, awarded less than half of what Cuddy requested, and added a sharp rebuke to the lawyers for using ChatGPT to cross-check the figures.
The briefs basically cited ChatGPT’s output to support their stated hourly rate, which does depend on things like the level and amount of research, preparation, and other work involved.
Cuddy told the court “its requested hourly rates are supported by feedback it received from the artificial intelligence tool ‘ChatGPT-4,’” Engelmayer wrote in his order [PDF], referring to the GPT-4 version of OpenAI’s bot.
“As the firm should have appreciated, treating ChatGPT’s conclusions as a useful gauge of the reasonable billing rate for the work of a lawyer with a particular background carrying out a bespoke assignment for a client in a niche practice area was misbegotten at the jump,” Judge Engelmayer said.
Benjamin Kopp, a lawyer at Cuddy Law, told The Register his firm’s use of AI wasn’t anything like cases that fabricated court documents; this particular situation had nothing to do with influencing the legal process.
The original article contains 671 words, the summary contains 221 words. Saved 67%. I’m a bot and I’m open source!
Bing Chat is good at maths because it has a special maths add-on, but most normal LLMs aren’t
I don’t think the concern here is whether or not it used PEMDAS
Woah there! You mean BEDMAS.
I think you mean Christmas
That’s cultural appropriation of Christmas for mathematical purposes! Burn the heretic!
I’ve asked ChatGPT to explain maths to me before. I can’t remember exactly what it was, but it was something where I knew the answer and was trying to work backwards to the starting value.
It told me the answer and I asked for the explanation. It went something like this (not actual, just a tribute):
- Step 1: 1 + 1 = 2
- Step 2: 2 * 2 = 4
- Step 3: 4 / 8 = 0.5
Me: Uh, the answer is supposed to be 9,000,000.
ChatGPT: Sorry, it seems you are right. Here’s the corrected version:
- Step 1: 1 + 1 = 2
- Step 2: 2 * 2 = 4
- Step 3: 4 / 8 = 9,000,000
nods knowingly as if he doesn’t suck ass at math and understands the mistake
Sometimes GPT says it’s using the correct values but somehow gets the wrong answer regardless; the opposite happens often too. That’s when I realized I was pushing it too hard.
Don’t ask it to calculate the ratio between the surface areas of the Moon and Earth. Instead, ask what the relevant radii are and calculate everything yourself.
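A quick sketch of doing that last step yourself, assuming the commonly cited mean radii (roughly 6371 km for Earth and 1737 km for the Moon) rather than anything a chatbot reported:

```python
# Moon-to-Earth surface area ratio, computed from the radii yourself.
# The radii below are the commonly cited mean values, not chatbot output.
EARTH_RADIUS_KM = 6371.0
MOON_RADIUS_KM = 1737.4

# A sphere's surface area is 4 * pi * r^2, so the 4 * pi cancels in the ratio.
ratio = (MOON_RADIUS_KM / EARTH_RADIUS_KM) ** 2
print(f"Moon/Earth surface area ratio: {ratio:.4f}")  # about 0.074, i.e. ~7.4%
```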
ChatGPT ain’t good at math from what I keep hearing.
For a quick shorthand for how long something has been in the family over X generations: 30 years per generation works well.
For example: 1 generation is me (30), 2 generations is my dad (60), 3 generations is grandpa (90), 4 generations is great-grandpa (120), etc.
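As a throwaway sketch of that rule of thumb (the 30-year default is just the shorthand above; swap in your own family’s spacing):

```python
def years_in_family(generations: int, years_per_generation: int = 30) -> int:
    """Rough age of something handed down: generations times the shorthand spacing."""
    return generations * years_per_generation

print(years_in_family(4))  # 4 generations ~ 120 years, the great-grandpa example above
```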
Well lookie at this highborn noble we got here! 30 year generations PAH! 15 years is how we do it. I coulda met my great great grandma if all that smokin’ didn’t do her in with emphysema.
Lol
Honestly my family’s generations are much closer together than in my example; I just used round numbers to keep the math simple.
Mine are closer to 20, and for the last two generations, on the lower side of 20. And one generation a while back was… too close by a good margin, if you catch my meaning.
Great great grandpa was an odd duck when I met him but he was a hoot