David thinks we're getting scammed by OpenAI.
I saw an interesting article about how ChatGPT failed miserably at a relatively simple test of 50 trivia questions about the Supreme Court:
https://www.scotusblog.com/2023/01/no-ruth-bader-ginsburg-did-not-dissent-in-obergefell-and-other-things-chatgpt-gets-wrong-about-the-supreme-court/
About half of ChatGPT's answers were right and half were wrong, some hilariously so. At the same time, the media is buzzing with news that ChatGPT has passed not only the Bar Exam, but also the MBA and US Medical Licensing Exam.
The Bar Exam is much more difficult than this SCOTUSblog test. The former consists of real-life legal questions that you need a lawyer to answer, while this test was mostly trivia about the Supreme Court, such as "Who was the longest-serving Supreme Court justice?" and "How long do Supreme Court justices serve?"
How do we explain this discrepancy? My guess is that OpenAI has invested significant effort into machine-learning models trained specifically for the Bar Exam and has been planning this publicity for some time. The impression we get from all the press is that ChatGPT is so smart, and has developed such advanced reasoning, that it can replace the highest-paid professionals in the developed world. It can do no such thing. The reality is that ChatGPT's legal aptitude is so narrow that passing the Bar Exam is about all it can do; that aptitude doesn't even extend to answering basic trivia questions. Yet people are being misled into believing that ChatGPT could actually practice law. I suspect this same narrow band of aptitude applies to the MBA and medical exams as well.
I think we need to give more credit to OpenAI's marketing team. Love the show — keep up the great work,
David