AI in FP&A: 5 Use Cases Finance Leaders Are Using Today

The short version

You probably already know that using AI in FP&A can automate forecasting and scenario modeling. The problem is your results are still mediocre, and that's exactly what Christian Wattig tackles in this workshop.

You're likely using AI the same way you'd use a search engine: typing a vague question, copying the answer, and hoping it holds up. The output is shallow, and you're automating the wrong work. The real analytical use cases, like variance analysis, business partnering prep, and forecast back-testing, sit untouched because no one's shown you how to direct the tool.

Christian directs the FP&A Certificate Program at Wharton and spent 15+ years in FP&A leadership at P&G, Unilever, Squarespace, and Datarails. He draws a sharp line between what AI actually does for you and what you're probably using it for. He also breaks down a four-part prompt anatomy from OpenAI cofounder Greg Brockman, shares a Unilever ML rollout where the model worked but couldn't explain itself, and runs five live ChatGPT o3 demos.

Most useful for: FP&A managers, directors, and CFOs who already use ChatGPT casually and want a structured way to apply it to analytical work.

What is AI in FP&A?

AI in FP&A is the use of machine learning and generative AI inside the financial planning and analysis workflow. It covers how finance teams forecast revenue and cash, run variance analysis, generate scenarios, write narrative reports, and prepare for business partnering conversations.

The point is to give finance teams a faster read on what's actually happening and what's likely to happen next, instead of waiting for the next monthly close to tell them.

The term gets used loosely. In practice, "AI in FP&A" covers two distinct technologies that each do different work:

  • Machine learning: Forecasts revenue, expenses, and cash flow by analyzing historical data and external signals. It updates projections as new data arrives.
  • Generative AI: Handles the analytical and communication side, producing variance narratives, Excel formulas, budget-review questions, and scenario comparisons from typed instructions.

Most FP&A teams use at least one of the two already, often without naming it. The real upside comes when you start directing each tool deliberately instead of treating "AI" as one thing. Christian Wattig's five concepts in the workshop below build on this two-category framework, showing how to direct each kind of AI with intent.

Where AI applies in FP&A: 5 use cases

AI shows up across five concrete areas of the FP&A workflow. Most teams start with one and expand as they get the prompt anatomy right.

1. Predictive forecasting

ML models trained on historical results, sales pipeline, seasonality, and external market signals generate rolling forecasts that update as new data arrives. The output is more granular and more reactive than a static monthly forecast file, especially for teams whose business is volatile or seasonally sensitive.

Christian's Unilever working capital story below is one real-world version of this category, with an ML ensemble running alongside a slimmed-down manual forecast that carried the business narrative the model couldn't.

2. Anomaly detection and variance analysis

AI flags transactions and accounts that deviate from expected patterns, which speeds up the investigation. Instead of a finance analyst combing through a P&L looking for what changed, the model surfaces the candidates and the analyst spends time on the why.

3. Automated data management

A meaningful chunk of an FP&A team's week goes to pulling data from ERPs, source systems, and spreadsheets, then cleaning and reconciling it. AI tools accelerate the ingestion, mapping, and validation steps so the team spends less time prepping data and more time interpreting it.

4. Scenario planning

What used to be three carefully built scenarios in a deck can now be dozens of variants generated and compared in the time it takes to grab coffee. You stress-test revenue growth, headcount, cost inflation, currency, and macro assumptions in parallel, and the model tells you which assumptions the business is actually sensitive to.
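The "dozens of variants" idea is easy to see in code. A minimal sketch, with fabricated assumption ranges and a toy profit model rather than anything from the workshop, that enumerates every combination and ranks the results:

```python
from itertools import product

# Hypothetical assumption ranges, fabricated for illustration
growth_rates = [0.02, 0.05, 0.08]   # revenue growth
inflation_rates = [0.01, 0.03]      # cost inflation
headcounts = [95, 100, 110]

BASE_REVENUE = 1_000_000
COST_PER_HEAD = 8_000

# Generate every combination: 3 x 2 x 3 = 18 scenario variants
scenarios = []
for growth, inflation, heads in product(growth_rates, inflation_rates, headcounts):
    revenue = BASE_REVENUE * (1 + growth)
    costs = heads * COST_PER_HEAD * (1 + inflation)
    scenarios.append({"growth": growth, "inflation": inflation,
                      "heads": heads, "profit": revenue - costs})

# Rank to see which assumption combinations the profit line is sensitive to
scenarios.sort(key=lambda s: s["profit"], reverse=True)
```

Three assumption ranges already produce 18 variants; add currency and macro ranges and you're quickly at the "dozens" the paragraph describes, which is exactly the kind of grid a model can generate and compare for you.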

5. Natural language generation

AI drafts the narrative layer around the numbers, including variance explanations, board commentary, and exec summaries. You still own the interpretation, but you skip the blank-page problem. Christian's four-part prompt anatomy below is the technique he uses across his ChatGPT demos to get reliable output, including narrative writeups like these.

These five categories give you the landscape. The five concepts below from Christian Wattig show you how to direct the tool inside each one.

What does AI in FP&A look like?

If you're approaching AI by asking how the tool works, you're asking the wrong question. With machine learning, the algorithm is a black box, even to the data scientists who built it. With generative AI, the model is a probability engine that will hallucinate if you let it.

In both cases, you control the same things: the input on the way in and the structure of what you ask for on the way out. Christian calls those two levers the entire game.

His Unilever working capital story is about a team that stopped trying to crack open the model and started experimenting with what they fed it. The prompt anatomy he teaches later applies the same lesson to ChatGPT.

This shift has practical consequences. You don't need a data science degree to use machine learning well, and the difference between a useless ChatGPT answer and a usable one is almost always upstream of the model.

5 concepts you should understand when using AI for FP&A

1. Focus on your input data, not the algorithm

The algorithm isn't the bottleneck, your input data is. Christian learned this firsthand when his Unilever working capital model produced a variance that neither his team nor the third-party data scientists could fully explain. The data scientist understood the algorithm but not the business. Christian's team understood the business but not the algorithm.

The model was an ensemble of 12 algorithms, including Amazon's DeepAR and Google's temporal fusion transformer, competing against each other every month. Christian estimates he understood maybe 50–60% of the explanation when the data scientist walked him through it.

Rather than try to bridge that gap, the team accepted the model as a black box and focused on the inputs they could control. Adding inventory data improved accuracy. Running a slimmed-down manual forecast in parallel gave leadership the business narrative the model couldn't produce on its own.

The same logic applies to ChatGPT: stop trying to understand the model and direct it instead.

"With machine learning models, the model itself, it's a black box. Even data scientists sometimes can't explain why the model gives certain outputs. But what we can control is the input. And so once we realized that, we realized, okay, let's experiment with using different inputs."

2. When in doubt, do not upload it

If you're not sure whether data is safe to put into ChatGPT, don't upload it. Start by checking with your manager, then escalate to your data security team if you have one.

Safer alternatives exist. AI tools that run locally, on-premises, or in enterprise environments come with contractual guarantees that your data won't be used for training. ChatGPT Business and Enterprise tiers offer different data handling than the consumer product, and some finance-specific tools embed AI on top of data that already lives in a controlled environment.

In every live demo, Christian uses fabricated numbers or publicly available data, including NASA's published budget. The demos work because the prompts are tight, not because the data is real. You can practice on synthetic or public data before you ever touch a confidential file.

"Talk to your manager. If you have a data security department in your company, the larger company, talk to them. Because whenever you're unsure, don't do it. Unless you're working with AI that is run locally on premises or that has safeguards around it where you know that you can upload anything without it going anywhere."

3. Use ChatGPT o3 for analytical work

Reasoning models like ChatGPT o3 take longer to process but consistently produce better results for multi-step analytical tasks. Think building a business driver tree, generating an XLOOKUP formula tied to a data cell, or producing budget-review questions from an uploaded file. Save the standard model for meeting summaries and email drafts.

The Excel demo makes the difference visible. Christian asks o3 to generate a SUMIF formula that sums daily revenue actuals from January 1 through whatever date sits in a single cell. The model returns the formula with a short explanation. Pasted into Excel, it returns 47,498 on the first try.

When he changes the date cell, the number updates. A manual check against the underlying data confirms a match.

He repeats the pattern with an XLOOKUP combined with the MONTH function, then a percentage-of-month-elapsed formula. All three work on the first try, and Christian's rule of thumb when they don't is to paste the Excel error back into ChatGPT and let the model debug.
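The article doesn't reproduce the exact formula o3 generated, but the logic it describes, summing daily actuals from January 1 through whatever date sits in one cell, can be sketched in Python. The numbers below are fabricated, not the 47,498 from the demo:

```python
import datetime as dt

# Fabricated daily revenue actuals (date -> amount); not the demo's data
actuals = {
    dt.date(2025, 1, 1): 1_200.0,
    dt.date(2025, 1, 2): 950.0,
    dt.date(2025, 1, 3): 1_430.0,
    dt.date(2025, 1, 4): 1_010.0,
}

def revenue_through(cutoff: dt.date) -> float:
    """Sum daily actuals from January 1 through `cutoff`,
    mirroring the SUMIF-against-a-date-cell pattern from the demo."""
    return sum(amount for day, amount in actuals.items() if day <= cutoff)
```

Changing `cutoff` updates the total, which is the same check Christian runs when he edits the date cell and verifies the number against the underlying data.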

Want to see how reasoning models handle more complex finance applications? The webinar From Rules to Reasoning: How Context Graphs Are Powering Smarter AI goes deeper into advanced AI reasoning and what it means for the prompts you write.

"The regular vanilla ChatGPT 4o is a simple model, but what those reasoning models do differently is they take longer to think, they take longer to process, and you typically get better results for complex prompts."

4. AI is a brainstorming partner, not an interpreter

Ask AI for questions instead of answers; it's the simplest way to work around hallucination risk. When you ask a model to interpret data, you have to trust its interpretation. But when you ask it to generate questions, you can evaluate each one and discard the bad ones.

Christian demonstrates this with NASA's publicly published budget, uploading the file and asking o3 for 10 assumption-challenging questions an FP&A manager might ask the budget owner. The output covers scope changes, headcount assumptions the budget didn't address, critical-path procurements, and contingency percentages.

The contingency question is the standout. NASA's budget never mentions uncertainty, yet rocket programs are full of it, and that's a question that surfaces buried risk.

He runs the same pattern on a deliberately weak management report. ChatGPT returns five specific fixes you can apply to your own reports:

  • Start with a single "so what" headline instead of a list of bullets
  • Replace the data table with a variance waterfall or stacked bridge
  • Include ROI figures
  • Reframe people cost as productivity rather than "in line with plan"
  • Clarify compensation impact

Want to take the report auditing use case further and build the dashboards that replace those weak reports? Build 3 AI-Powered Dashboards in 60 Minutes with Nicolas Boucher is a hands-on webinar that walks through turning raw FP&A data into executive-ready visuals.

"This works around the issue that AI is sometimes making things up because I'm not asking it to interpret the data. I can do that myself, but I'm asking it to help me come up with questions. Some of these I may have been able to come up with myself, but others I may not have thought about."

5. The next decade of FP&A is not technical

The skills that'll define FP&A over the next decade aren't technical; they're relational. Entry-level roles used to reward Excel proficiency, reporting, and reconciliation, but AI is automating those tasks. What's replacing them at every level is the ability to build relationships, challenge assumptions, and translate data into decisions.

Track every activity for one week, then ask yourself: does spending more time on this help the company reach its goals? Reporting almost never passes that test. Variance analysis with a clear "so what" almost always does, and so does business partnering prep that earns you a seat in the CMO's planning meeting.

Christian closes with a use case from a CFO at a mid-sized company. The CFO builds detailed personas for each board member, uploads transcripts from past meetings, and asks the model to predict which questions each member is likely to ask. Then the CFO prepares by answering them, the same prompt anatomy applied at the highest stakes.

Translating data into a story and anticipating what stakeholders need to hear are the skills How to use AI to turn Raw Data into Executive-Ready Insights also covers. It's a practical webinar for finance leaders who want to close the gap between having the analysis and communicating it.

"As these number crunching tasks get automated, the skills that leaders look for even in more junior roles are much more geared towards relationship building, communication, and business partnering."

Benefits of using AI in FP&A

The case for AI in FP&A is operational, not theoretical. Three benefits show up consistently for teams that move past casual ChatGPT use into structured applications.

  • Time recovered from data work. A large share of an analyst's week goes to pulling, cleaning, and reconciling data before any actual analysis can happen. AI compresses that prep step, freeing capacity for variance analysis, scenario work, and business partnering. The shift is most visible in close cycles and quarterly forecast updates.
  • Better forecasts. Industry research consistently shows that teams using ML in their forecasting process rate their forecast quality higher than teams that don't. The improvement compounds when ML outputs run alongside a manual forecast so leadership gets both the prediction and the business narrative.
  • Capacity for strategic work. The output of the first two benefits is time and attention you can spend on the work that moves the business. Reporting and reconciliation give way to interpreting AI-driven insights, exercising judgment on the numbers, and partnering with operating leaders on the decisions that matter.

The trade-off is upfront effort. Each benefit assumes you've invested in directing the tool, cleaning the input data, and building the prompt patterns that work for your business. The teams that skip that investment get the mediocre output Christian's workshop is built to fix.

How do you apply Christian's advice this quarter?

Start by picking one task and run the full loop end to end.

  1. Audit one week of your calendar. Block out how you actually spend your time, then ask yourself: does spending more time on this help the company hit its goals? The activities that fail that test are your automation backlog
  2. Take one task from that backlog and rewrite the prompt using the four-part anatomy. Goal, return format, warnings, context dump. Run it on ChatGPT o3, not the default model. Compare the output to whatever you would've produced manually
  3. Build a business driver tree for your part of the business using ChatGPT o3. Even rough, it gives you a systematic structure for next month's variance analysis. Refine it with follow-up questions until it matches how the business actually works

Once you have the AI-assisted analysis working, the next layer is building the scenarios that stress-test it. Christian covers exactly that in How to Build Scenarios Like a Wharton Program FP&A Leader, a webinar that pairs directly with the techniques he teaches here.

The point of starting with one task is to see the technique work on something low-stakes before you bring it to a forecast or a board prep. Once it clicks, the pattern transfers.

Final thoughts

"AI will allow you to focus on things that are more interesting. Like building relationships, business partnering, influencing, and connecting the dots between disparate data across all the different data sources that the company has…. Because when you get out of spending most of your time on repetitive tasks, you can spend time on things that actually move the needle."

Christian's bet is that AI automates the work that was already grinding you down. The work that gets unlocked is the work that made you want to be in finance in the first place. If you move first, you'll define how that plays out at your company.

Pick one task and rewrite the prompt.

See how Ramp fits in

Christian's calendar diagnostic asks whether spending more time on a task helps the company reach its goals. The question lands hard on the upstream work that fills your FP&A days: reconciling card transactions, chasing receipts, and formatting expense data so you can finally analyze it.

Ramp replaces that work at the source. You get corporate cards, bill pay, expense management, and procurement on one platform, with categorized transaction data and insights feeding directly into your close and forecast.

Reclaim your time with Ramp

About the speaker

Christian Wattig directs the FP&A Certificate Program at the Wharton School of the University of Pennsylvania, where he teaches financial planning and analysis to corporate finance teams. Before Wharton, he spent 15+ years in FP&A leadership at Procter & Gamble, Unilever, Squarespace, and Datarails, partnering closely with sales, marketing, and logistics teams.

Common questions about using AI in FP&A

What is AI in FP&A?

AI in FP&A applies machine learning and generative AI to financial planning and analysis, including forecasting, variance analysis, scenario modeling, and business partnering prep. Machine learning forecasts numbers from historical patterns, while generative AI produces text, code, and structured outputs from typed instructions.

They fail in different ways. ML breaks when input data is wrong or conditions shift outside the training set. Generative AI fails by hallucinating. In both cases, the algorithm is a black box, so the lever you can actually pull is the input.

How is AI used in FP&A?

AI shows up across five core FP&A workflows:

  1. Predictive forecasting (ML on historical and external signals)
  2. Anomaly detection and variance analysis (flagging deviations from expected patterns)
  3. Automated data management (pulling and cleaning data from source systems)
  4. Scenario planning (running multi-variable simulations in minutes)
  5. Natural language generation (drafting variance narratives and exec summaries)

Most teams start with one workflow, prove the pattern, then expand.

What are the benefits of using AI in FP&A?

The three benefits that show up consistently are time recovered from data work, better forecasts when ML is used in the forecasting process, and capacity to spend on strategic and partnering work instead of reporting. Each benefit assumes the team has invested in directing the tool well, which is what the five concepts above are built to help with.

What are the challenges of adopting AI in FP&A?

The five most common obstacles are organizational culture (skepticism toward new ways of working), trust in model output, input data quality, the underlying technology stack, and the in-house expertise to direct the tool. Of those, input data quality is the one that decides whether the model produces anything useful, which is why Christian builds the first concept around it.

Will AI replace FP&A jobs?

AI is automating tasks inside FP&A roles, not eliminating the roles themselves. The work being automated is reconciliation, manual reporting, and routine data prep. The work being added is interpreting AI-driven insights, exercising judgment on the numbers, and business partnering. Entry-level and senior roles alike are shifting in the same direction.

How do I write a prompt that gets useful output from ChatGPT?

Use the four-part prompt anatomy from OpenAI cofounder Greg Brockman:

  • Goal: The specific outcome you want
  • Return format: How the output should be structured
  • Warnings: What to avoid or double-check
  • Context dump: A paragraph of background the model wouldn't otherwise know, including your role and seniority

The quality of your output scales with how completely you fill in each part.
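As a rough illustration (the labels and ordering below are a sketch of the anatomy, not Brockman's exact wording), the four parts can be assembled into one prompt programmatically:

```python
def build_prompt(goal: str, return_format: str, warnings: str, context: str) -> str:
    """Assemble a prompt from the four-part anatomy: goal, return format,
    warnings, then the context dump."""
    return "\n\n".join([
        f"Goal: {goal}",
        f"Return format: {return_format}",
        f"Warnings: {warnings}",
        f"Context: {context}",
    ])

prompt = build_prompt(
    goal="Draft 10 assumption-challenging questions about the attached budget.",
    return_format="A numbered list, one question per line.",
    warnings="Do not interpret the numbers; only ask questions about assumptions.",
    context="I am an FP&A manager reviewing a department budget before sign-off.",
)
```

Treating the anatomy as a fill-in-the-blanks template like this makes the "scales with how completely you fill in each part" point concrete: a blank warnings or context field is visible before you ever hit send.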

What core machine learning concepts should you understand?

Christian named five:

  • Back-testing: Have the model forecast a past period you already have actuals for, then measure how close it got
  • Feature selection: The data science term for which inputs you feed the model
  • Overfitting: A model trained too tightly on historical patterns breaks when conditions shift
  • Statistical significance: Depends on data volume and quality, checked partly through the p-value
  • Correlation vs. causation: Two metrics move together without one driving the other

You don't need to be a data scientist to use off-the-shelf ML, but understanding these five concepts makes a real difference in how you direct the tool.
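Back-testing is the one concept on this list you can try in minutes. A minimal sketch with a deliberately naive "repeat the last value" model and fabricated monthly actuals (nothing here is from the workshop):

```python
def backtest(history, forecast_fn, holdout=3):
    """Hold out the last `holdout` periods, forecast them from the rest,
    and score the forecast with mean absolute percentage error (MAPE)."""
    train, test = history[:-holdout], history[-holdout:]
    preds = forecast_fn(train, holdout)
    errors = [abs(p - a) / abs(a) for p, a in zip(preds, test)]
    return sum(errors) / len(errors)

def naive_forecast(train, periods):
    # Simplest possible model: repeat the last observed actual
    return [train[-1]] * periods

# Fabricated monthly revenue actuals
history = [100, 110, 120, 130, 140, 150]
mape = backtest(history, naive_forecast)
```

Swap `naive_forecast` for any model and the harness stays the same, which is the point of back-testing: the scoring procedure, not the algorithm, is what lets you compare forecasts on periods you already have actuals for.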

About the speakers

Christian Wattig's profile picture
Christian Wattig
Director, Wharton FP&A Certificate Program
Christian Wattig is an accomplished FP&A expert with over a decade of leadership experience in multinational corporations and fast-growing tech start-ups. He spent eleven years at Procter & Gamble and Unilever, where he led FP&A and accounting teams. After earning his MBA from NYU Stern, Christian transitioned to the tech sector and played a pivotal role in taking Squarespace public as an FP&A leader. Now, Christian shares his expertise as the Director of the Wharton School's FP&A certificate program, through his own courses, and via LinkedIn where he has more than 100,000 followers.