AI passes university level law and economics exam
An artificial intelligence, dubbed Claude, has scored a "marginal pass" on a blindly graded law and economics exam at George Mason University, according to economics professor Alex Tabarrok, who reported the result on Marginal Revolution, the blog he runs with Tyler Cowen.
According to a Financial Times report, Claude essentially regurgitated a McKinsey report, but Tabarrok says this is still better than many of the actual human responses he gets.
As AI is now approaching the mania stage, it's hard not to be impressed by some of the recent results. Financial academics are certainly paying attention.
Last week, Michael Dowling and Brian Lucey of Dublin City University and Trinity College, respectively, published a paper on SSRN that explored whether ChatGPT could help write financial research.
They tested and compared ChatGPT's output in four stages of the typical research process: idea generation, literature review, data identification and processing, and empirical testing.
"ChatGPT can generate, even in its basic state, plausible-seeming research studies for well-ranked journals. With the addition of private data and researcher expertise iterations to improve output, the results are, frankly, very impressive. So, what do we do now? This is both a practical and an ethical question. Can ChatGPT be simply considered as an e-ResearchAssistant, and, therefore, just a new part-and-parcel tool of how research is normally carried out? Indeed, under this perspective the platform might even be viewed as democratising access to research assistants, hitherto the reserved domain of wealthier universities in wealthier countries," they wrote.
ChatGPT did particularly well in idea generation but struggled with literature reviews and testing frameworks. On the whole, though, the results were promising.
Dowling and Lucey argue this raises ethical questions, such as whether researchers can claim AI-generated research as their own and whether the AI should be credited as a co-author.