Large Language Models as Financial Analysts

4 minute read

More topics: OpenAI founds an (almost) independent “Safety Board”, and do LLMs have a dynamic memory similar to humans?

Magic AI News

Hi AI Enthusiasts,

Welcome to this week’s Magic AI News, where we present the most exciting AI news of the week. Today, we are talking about large language models (LLMs) in the financial sector and the memory capabilities of today’s LLMs.

This week’s Magic AI tool is a must-have for every software engineer or data scientist. Best of all, it’s free, and you can run it 100% locally. Stay curious! 😎

Let’s explore this week’s AI news together. 👇🏽


Top AI news of the week

🤑 Large Language Models as Financial Analysts

A new research paper shows that large language models like GPT-4o, Claude 3.5 Sonnet, and Gemini Advanced have strong capabilities in analyzing company financials.

The details:

  • The researchers tested these models on financial reports (Q1 2024) from Amazon, NVIDIA, Meta, Apple, and Tesla.
  • All three models were able to evaluate key financial metrics such as revenue trends, profitability, cost management, cash flow, and future outlooks.
  • GPT-4o excelled in providing detailed analysis and insights into the financial health and strategic direction of companies.
  • Claude 3.5 Sonnet provided a broader, high-level analysis suitable for quick overviews.
  • Gemini Advanced struck a balance between detail and conciseness.

If you want to learn more about this topic, we recommend reading the full paper.
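To make the setup concrete, here is a minimal sketch of how such an evaluation prompt could be assembled. This is purely illustrative: the function name, the company, and all metric values are our own assumptions, not taken from the paper.

```python
# Hypothetical sketch: building an analysis prompt from a few key
# financial metrics, similar in spirit to the study's setup.
# All names and numbers below are illustrative, not from the paper.

def build_financial_prompt(company: str, quarter: str, metrics: dict) -> str:
    """Assemble a plain-text prompt asking an LLM to assess key metrics."""
    lines = [f"You are a financial analyst. Analyze {company}'s {quarter} report."]
    lines.append("Key metrics:")
    for name, value in metrics.items():
        lines.append(f"- {name}: {value}")
    lines.append(
        "Discuss revenue trends, profitability, cost management, "
        "cash flow, and the future outlook."
    )
    return "\n".join(lines)

prompt = build_financial_prompt(
    "ExampleCorp",  # illustrative company, not one from the study
    "Q1 2024",
    {"Revenue": "$10.0B", "Net income": "$1.2B", "Free cash flow": "$0.8B"},
)
print(prompt)
```

The resulting text could then be sent to any of the tested models via its chat API; the study compared how differently each model responds to the same financial input.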

Our thoughts

The study shows how LLMs can improve the interpretation of complex financial data and give financial analysts valuable tools. In our view, LLMs will become essential for analyzing financial reports. The capabilities of today’s LLMs already add enormous value to such analyses.

More information

🤖 OpenAI founds an (almost) independent “Safety Board”

OpenAI’s Safety and Security Committee will become an independent committee. The new body is called the Board Oversight Committee!

The details:

  • Members of the new committee: Zico Kolter, Adam D’Angelo, Nicole Seligman, and Paul Nakasone.
  • All members are also members of the board of directors.
  • In the event of safety concerns, the new committee can halt the release of AI models.

Our thoughts

The question is how independent the new Board Oversight Committee really is, given that its members also sit on the board of directors. Safety seems to be a constant source of controversy at OpenAI. After all, the dismissal of OpenAI CEO Sam Altman last year was also reportedly related to safety concerns!

Meta also set up an independent oversight committee years ago following criticism of the safety and security of its platforms. At Meta, however, this board consists only of people who hold no other roles within the company.

More information

🧠 Do LLMs have a dynamic memory similar to humans?

A new study by researchers from Hong Kong suggests that large language models (LLMs) have a dynamic memory similar to human memory.

The details:

  • LLMs possess memory capabilities enabled by their Transformer-based architecture.
  • LLMs can recall entire passages of content from minimal input cues.
  • A comparison with the human brain shows that both dynamically fit outputs based on inputs.
  • A comparison between human thinking ability and LLMs suggests they have the same thinking mechanism.

If you want to dive deeper into this topic, we recommend reading the recently published paper about this topic!

Our thoughts

The study suggests that humans and LLMs have similar thinking mechanisms. The researchers believe that the “underlying mechanisms are the same: they both dynamically fit corresponding outputs based on inputs.”

We also think that today’s LLMs respond very similarly to humans. A chatbot’s response is often almost indistinguishable from a human one!

More information


Hand-picked AI tool list


Magic AI tool of the week

Have you ever wanted a local AI coding assistant? Perfect, we will show you how to create one! Not long ago, most coding assistants were only available via an API. Those days are a thing of the past.

Now, you can run state-of-the-art LLMs like Llama 3.1, Phi 3.5, CodeGemma, or Codestral locally on your laptop. The VSCode plugin Continue and the open-source tool Ollama make it happen.

With a local AI coding assistant, you can chat with your codebase locally and use functions like autocompletion.
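As a minimal setup sketch (assuming Ollama is already installed, and noting that model tags in the Ollama library may change over time), the local part boils down to a few terminal commands:

```shell
# Pull a code model from the Ollama library (tag is an assumption; check the library)
ollama pull codestral

# Quick smoke test: chat with the model directly in the terminal
ollama run codestral "Write a Python function that reverses a string."

# Ollama then serves a local API (port 11434 by default);
# configure the Continue VSCode plugin to use this local model
# for chat and autocompletion.
```

This is a setup fragment, not a full walkthrough; the article linked below covers the Continue configuration in detail.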

👉🏽 Learn more in our article “Mistral’s Codestral: Create a local AI Coding Assistant for VSCode”!


Articles of the week


💡 Do you enjoy our content and want to read super-detailed articles about AI? If so, subscribe to our blog and get our popular data science cheat sheets for FREE.


Thanks for reading, and see you next time.

- Tinz Twins

Leave a comment