Some quirks with GPT

Jason Forney asked 5 days ago

Hey guys, has anyone noticed quirks with how ParkerAI answers questions? I had it write me an answer to a summons based on a copy of the summons I uploaded, and it lists one of the affirmative defenses as the statute of limitations being over three years, when in fact the last payment activity on the debt is recorded as being less than three years ago. I'm not sure if I should leave this in my answer as an affirmative defense or remove it and just send what else it gave me.

Also, I have asked it to write me an answer to the same summons with slightly different prompts, but the answers are drastically different. Has anyone else had this experience, and is there any advice on how I should proceed? Thanks...

Brian Parker Staff replied 5 days ago

Parker here. It is all in the prompt. Make sure you use the “rocketship” button, and make sure all the docs (complaint, exhibits) are included in your upload; the summons is not that important unless the actual pleadings are there. Like everything else, it won't be perfect until it is. Just give it a couple of shots and tell it where you need it to correct; it aims to please. Don't leave anything in your final response unless it applies. Keep working it and it will learn with you. In the upcoming 2.0 we have increased your ability to both store your requests and use all the information from your “briefcase.” I also like people to know that we work very hard to eliminate the hallucinations that plague more general chatbots. Just give ParkerGPT a hint or two and you will be rewarded. Thank you. BPP

Susan Park replied 2 days ago

Yes, the same thing happened to me. I had to delete a few of their affirmative defenses on an amended answer because they didn’t apply to me.
One time, it told me to ask the opposing party for a stipulation to amend the answer; the next time, it said to file a motion and not ask the opposing party.

Brian Parker Staff replied 2 days ago

Thank you for the feedback. It is all good: the more I know, the more I can improve. I assume it did not stop the eventual result for you. As you will see, we are working on some things in 2.0 to improve our responses and storage ability. We want it to do the best for you. Hell, my name’s on it. Thank you again.

Brian Parker Staff replied 2 days ago

Susan, et al., I wanted to get back to you so you know I followed up. Here is what the AI/ParkerGPT developer said:
“This is actually a really good example of why I think we need a proper knowledge base for some walkthroughs, ones where we can literally walk a user through engaging in conversation: what to say, how to say it, what to be sure that ParkerGPT checks for…prompt engineering. Garbage in = garbage out.
At its core, this is still an LLM/AI platform. It requires what is functionally prompt engineering: prompting it to get the result you are after.
For the first issue, she obviously knows that the SOL doesn’t apply here, so she needs to exclude it. In theory, once integrated with Parallel or Perplexity, we can force, as best we can, triple-checking the SOL in the user’s jurisdiction, but even then, as a user, I would always want to check the work. So her question/concern is valid. But regardless, from a legal point of view, throwing it at the wall in an answer is cool, no?
For her second one, I’d need to see the prompts and results (note: this is actually going to be addressed when I roll out impersonation). But generally speaking, yes, context and what is prompted can and will drive different responses. This happens all the time with any LLM. While we can manage it with things like temperature control (see attached), there’s a balance. I think we’re at .7 right now. I was contemplating including in the admin section the ability for us to dial it back. Hypothetically, it’s possible to dial it back to, say, .5, which in theory would result in her getting more consistent answers to her second issue above.”

This is what I deal with: DeveloperSpeak. It’s not my area, so I am dumb to what he is saying, but it sounds positive. Hope it helps.
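For anyone curious what the developer means by dialing temperature from .7 down to .5: temperature is a standard LLM sampling knob, and lower values make the model pick its top-ranked wording more often, so answers come out more consistent run to run. This is a generic, minimal sketch of temperature-scaled sampling, not ParkerGPT's actual code; the scores and the toy sampler are made up for illustration.

```python
import math
import random

def sample(logits, temperature, rng):
    """Pick one option index using temperature-scaled softmax sampling."""
    # Lower temperature stretches the gaps between scores, so the
    # top-scoring option dominates; higher temperature flattens them.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to those probabilities.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Hypothetical scores for three candidate next words (option 0 is best).
logits = [2.0, 1.0, 0.5]
rng = random.Random(0)
for t in (0.5, 0.7):
    picks = [sample(logits, t, rng) for _ in range(1000)]
    share_top = picks.count(0) / len(picks)
    print(f"temperature {t}: top choice picked {share_top:.0%} of the time")
```

With these made-up scores, the top option wins roughly 84% of draws at temperature 0.5 but only about 74% at 0.7, which is the mechanism behind the developer's point: dialing the setting back tends to produce steadier, more repeatable answers, at the cost of some variety in wording.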