Are You Report-Ready? How to Know Before the Deadline

Most companies find out they're not ready for NGER on 31 October, when it's too late. A carbon data health score tells you at any point in the year whether your emissions data is complete, verified, current, and traceable. Here's what it measures and why auditors under ASSA 5010 will care.

Carbonly Team · April 5, 2026 · 12 min read

Data Quality · NGER Compliance · AASB S2 · ASSA 5010 · Carbon Reporting · Report Readiness · Audit Trail
Last October, the Clean Energy Regulator published the names of late NGER reporters on its website. Again. Every year it's the same pattern. September arrives, someone finally opens the spreadsheet, and within 48 hours the emails start flying: "Does anyone have the gas bills for Site 14?" and "Which emission factors are we using this year?" and "Wait, I thought someone already entered Q2."

The ANAO found that 72% of 545 NGER reports contained errors, with 17% containing significant ones. That's not a rounding issue. That's a data quality crisis dressed up as business-as-usual.

Here's the thing we keep telling teams who come to us in August: your problem didn't start in August. It started in July of the previous year, when the reporting period opened and nobody was tracking whether data was actually coming in. A single number could have told you months earlier exactly where you stood. We call it a data health score.

What a data health score actually is

It's a percentage. One number. "78% report-ready" or "91% report-ready." It tells you, at any point during the financial year, how close your emissions data is to being submission-grade for NGER and disclosure-grade for AASB S2.

Not a vague traffic light. Not a quarterly self-assessment someone fills out in a Word document. A calculated score that updates every time a document is processed, a record is verified, or a gap is filled. Think of it the way your accounting team thinks about month-end close readiness. They don't wait until the annual audit to find out whether the books balance. They know the state of the ledger every single day. Carbon data should work the same way.

Below 90%? You've got specific gaps to fill. Below 70%? You're in trouble, and the earlier you know that, the more time you have to fix it. Below 50%? Honestly, you need help, and you need it now, not in September.

The score isn't magic. It's built from five measurable dimensions, each of which maps directly to something your NGER submission and your AASB S2 disclosure (and eventually your auditor under ASSA 5010) will require.
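As a concrete sketch of how the five dimensions combine, here is one way to compute the single percentage. The dimension names come from this article; the weights are illustrative assumptions, not a prescribed standard (a real implementation would tune them to the portfolio).

```python
# Illustrative sketch: the health score as a weighted average of five
# dimension scores (each 0-100). Weights are assumed for illustration.
DIMENSION_WEIGHTS = {
    "scope_coverage": 0.25,
    "temporal_completeness": 0.25,
    "verification_rate": 0.20,
    "factor_currency": 0.15,
    "source_linkage": 0.15,
}

def health_score(dimension_scores: dict) -> float:
    """Combine per-dimension scores (0-100) into one report-readiness %."""
    total = sum(
        DIMENSION_WEIGHTS[name] * dimension_scores[name]
        for name in DIMENSION_WEIGHTS
    )
    return round(total, 1)

print(health_score({
    "scope_coverage": 85,
    "temporal_completeness": 82,
    "verification_rate": 70,
    "factor_currency": 90,
    "source_linkage": 65,
}))
```

Because the score is a plain weighted average, any drop traces back to a specific dimension, which is what makes it actionable rather than just a dashboard number.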

Dimension one: scope coverage

AASB S2 paragraph 29(a) requires absolute gross greenhouse gas emissions disclosed separately for Scope 1, Scope 2, and Scope 3. The GHG Protocol Corporate Value Chain Standard defines 15 categories of Scope 3 emissions, and AASB S2 requires you to consider all 15 and disclose which ones you've included.

The scope coverage component of your data health score checks whether you actually have data feeding into each scope. Not just Scope 1 and 2 (which NGER reporters have typically handled), but all 15 Scope 3 categories, including the ones you've assessed as zero or immaterial. Because "assessed as zero" is a very different thing from "never looked at." The first is a defensible position. The second is a gap your auditor will find.

For a construction company with fleet vehicles, on-site generators, purchased electricity across 40 project sites, and a supply chain full of concrete, steel, and transport, the scope coverage check alone can reveal that nobody's tracking employee commuting (Category 7), waste from operations (Category 5), or downstream leased assets (Category 13). Some of those might genuinely be immaterial. But you can't make that determination without assessing them first.

A good data health score weights scope coverage heavily. If you're missing an entire scope or haven't documented your Scope 3 category assessments, the number drops fast.
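A minimal version of the scope coverage check is just counting documented assessments. This is an equal-weight sketch over 17 checks (Scope 1, Scope 2, and the 15 GHG Protocol Scope 3 categories); a production score would likely weight Scope 1 and 2 mapping more heavily, and the data shape here is an assumption.

```python
# Sketch of a scope coverage check. "Assessed" includes categories
# documented as zero or immaterial -- the point is that an assessment
# exists, not that the category has emissions.
SCOPE3_CATEGORIES = range(1, 16)  # GHG Protocol categories 1-15

def scope_coverage(assessed_categories: set, scope1_mapped: bool,
                   scope2_mapped: bool) -> float:
    """Return 0-100 coverage across Scope 1, Scope 2, and 15 Scope 3 checks."""
    checks = [scope1_mapped, scope2_mapped]
    checks += [c in assessed_categories for c in SCOPE3_CATEGORIES]
    return 100 * sum(checks) / len(checks)

# Scope 1 and 2 mapped, six Scope 3 categories assessed: 8 of 17 checks pass
print(round(scope_coverage({1, 3, 4, 5, 6, 7}, True, True)))
```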

Dimension two: temporal completeness

This is where most companies fall apart. You're supposed to have emissions data for every month of the 12-month reporting period. In practice, utility bills arrive late. Fuel card statements get lost. Waste manifests from a subcontractor don't show up until someone chases them. Electricity retailers change billing cycles, so suddenly a site has a 45-day gap between invoices that doesn't match any calendar month.

Temporal completeness asks a simple question: for every emissions source, do you have data covering every month of the reporting period?

Consider a property portfolio with 30 buildings. Each has electricity and gas. That's 60 data streams, each needing 12 months of coverage. That's 720 data points. If 15 of those are missing (about 2%), your temporal completeness is 98%. Sounds fine. But those 15 gaps might be concentrated on your three highest-emitting buildings during winter, when gas consumption peaks. A 2% gap in data points could mean a 10% gap in actual emissions.

Temporal completeness scoring should weight by materiality, not just count. A missing month of electricity for a 20-person office matters less than a missing month for a data centre drawing 2 MW. The score should reflect that.
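The materiality weighting can be sketched directly: weight each stream's monthly coverage by its share of estimated emissions, so a gap at a high-emitting site costs more points than a gap at a small one. The field names and figures below are illustrative assumptions.

```python
# Sketch of materiality-weighted temporal completeness. Each stream
# (e.g. "Building 7 gas") should cover 12 months; missing months are
# weighted by the stream's estimated annual emissions.
def temporal_completeness(streams: list) -> float:
    """streams: [{"months_present": int, "annual_t_co2e": float}, ...]"""
    total_emissions = sum(s["annual_t_co2e"] for s in streams)
    covered = sum(
        s["annual_t_co2e"] * s["months_present"] / 12 for s in streams
    )
    return 100 * covered / total_emissions

streams = [
    {"months_present": 12, "annual_t_co2e": 40},    # small office, complete
    {"months_present": 9,  "annual_t_co2e": 2000},  # data centre, 3 months missing
]
print(round(temporal_completeness(streams)))
```

An unweighted count here would say 21 of 24 data points are present (87.5%), but the weighted score comes out meaningfully lower because the gaps sit on the dominant emitter. That is the point of weighting by materiality.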

The NGER legislation requires records to be maintained for five years from the end of the reporting year, per the Clean Energy Regulator's compliance guidance. If your data has temporal gaps, those gaps become compliance risks that compound over time. You can't go back three years later and reconstruct a missing gas bill.

Dimension three: verification rate

Not all emissions records are created equal. A record extracted from a utility bill, reviewed by a human, and confirmed as correct is not the same as a record auto-extracted from an invoice that nobody has looked at since.

The verification rate measures what percentage of your emission records have been reviewed and confirmed by a person with appropriate authority. There's nothing wrong with using AI to extract data from documents. We do it ourselves. But unreviewed AI-extracted data is a draft, not a fact. And ASSA 5010 requires your auditor to evaluate the "accuracy and completeness" of information prepared by the entity. If your assurance practitioner asks about a specific Scope 2 number and the answer is "the AI pulled it from a bill but nobody checked it," that's a finding.

We're honest about this: our own platform flags every record with a confidence score, and we actively push teams to review anything below 95% confidence before treating it as final. The temptation to let the machine handle everything and skip the review step is real. Don't do it. A graduated trust approach to AI-extracted data, where verification requirements scale with the materiality of the record, is the only model that holds up under audit.

A healthy verification rate for a submission-ready dataset is above 95%. That doesn't mean every record needs manual review. It means the material ones do, and you need a system that knows the difference.
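The graduated trust idea can be sketched as a rule: a record counts as verified if a human reviewed it, or if it is both low-materiality and extracted above a confidence threshold. The 95% threshold follows the figure above; the one-tonne materiality cutoff and record shape are assumed for illustration.

```python
# Sketch of a graduated-trust verification rate: manual review for
# material records, confidence-gated auto-acceptance for small ones.
def verification_rate(records: list, confidence_floor: float = 0.95,
                      small_record_t: float = 1.0) -> float:
    verified = [
        r for r in records
        if r["human_reviewed"]
        or (r["t_co2e"] < small_record_t
            and r["ai_confidence"] >= confidence_floor)
    ]
    return 100 * len(verified) / len(records)

records = [
    {"human_reviewed": True,  "ai_confidence": 0.99, "t_co2e": 120.0},
    {"human_reviewed": False, "ai_confidence": 0.97, "t_co2e": 0.2},   # small, high confidence: auto-accepted
    {"human_reviewed": False, "ai_confidence": 0.91, "t_co2e": 450.0}, # material and unreviewed: counts against you
]
print(round(verification_rate(records)))
```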

Dimension four: factor currency

Every emissions calculation depends on an emission factor. Multiply activity data (kWh, litres, tonnes) by the factor, and you get CO2-e. Simple arithmetic. But the factor itself changes every year.

The NGA Factors workbook is updated annually by the Department of Climate Change, Energy, the Environment and Water. The 2025 edition includes revised electricity Scope 2 and 3 factors for every state and territory, plus new Scope 1 factors for hydrogen combustion. If you're still applying 2023 factors to 2025-26 activity data, your numbers are wrong. Not wrong in a theoretical sense. Wrong in a way that the Clean Energy Regulator's "advanced data analysis tools and targeted audit program" is specifically designed to catch.

Factor currency scoring checks whether every emission factor applied in your calculations comes from the correct edition of the NGA Factors (or other applicable source) for your reporting period. It also checks for stale custom factors, which turn up more often than you'd think. A company sets up a custom emission factor for a specific fuel blend in 2023 and nobody updates it. Three years later, it's still sitting in the system, quietly producing inaccurate numbers.
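A factor currency check can be sketched as a simple edition audit: every applied factor must either come from the edition matching the reporting period or, for custom factors, have been reviewed within the current cycle. The edition mapping, review-year rule, and record shape here are illustrative assumptions.

```python
# Sketch of a factor currency check: flag factors from a stale edition
# and custom factors that haven't been reviewed this reporting cycle.
REQUIRED_EDITION = {"2025-26": "NGA 2025"}  # assumed mapping for illustration

def factor_currency(factors: list, period: str) -> float:
    """factors: [{"source_edition": str, "is_custom": bool, "last_reviewed": int}, ...]"""
    required = REQUIRED_EDITION[period]
    current = [
        f for f in factors
        if f["source_edition"] == required
        or (f["is_custom"] and f["last_reviewed"] >= 2025)
    ]
    return 100 * len(current) / len(factors)

factors = [
    {"source_edition": "NGA 2025", "is_custom": False, "last_reviewed": 2025},
    {"source_edition": "NGA 2023", "is_custom": False, "last_reviewed": 2023},  # stale edition
    {"source_edition": "custom",   "is_custom": True,  "last_reviewed": 2023},  # stale custom factor
]
print(round(factor_currency(factors, "2025-26")))
```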

The state-based differences are significant. Victoria's grid emission factor is roughly four times Tasmania's. Apply the wrong state factor to a facility and you've introduced a material error. Factor currency isn't an abstract data governance exercise. It's the difference between a correct number and a penalty.

Dimension five: source document linkage

Can every emission record in your system trace back to a source document? A PDF bill, a fuel card statement, a waste manifest, a supplier invoice?

This is the one that separates companies that will pass an ASSA 5010 assurance engagement from those that won't. When your auditor picks a number from your sustainability report and says "show me the evidence," the chain needs to go: reported figure, to emission record, to calculation (activity data times emission factor), to source document. If any link in that chain is broken, you have an orphan record with no evidentiary basis.

We've spent 18 years building enterprise data platforms at mining and energy companies. The pattern is the same in financial auditing, safety auditing, and now carbon auditing. The audit trail is the whole game. Everything else (the dashboards, the charts, the executive summaries) is presentation. The audit trail is substance.

Source document linkage scoring checks for orphan records: emission entries that exist in your system but don't link to any uploaded document. It also checks for documents that have been uploaded but never processed, which is a different kind of gap. You collected the evidence but never turned it into data. Both patterns indicate process breakdowns.

A dataset where 100% of records link to source documents isn't just audit-ready. It's a dataset you can actually trust.
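Both failure modes described above fall out of a simple cross-check between the record table and the document store: records with no document link are orphans, and documents linked to no record were collected but never processed. The data shape is an assumption for illustration.

```python
# Sketch of orphan-record and unprocessed-document detection for the
# source document linkage dimension.
def linkage_gaps(records: list, documents: list):
    orphan_records = [r["id"] for r in records if r.get("document_id") is None]
    linked_doc_ids = {r["document_id"] for r in records if r.get("document_id")}
    unprocessed_docs = [d["id"] for d in documents if d["id"] not in linked_doc_ids]
    linkage_pct = 100 * (1 - len(orphan_records) / len(records))
    return orphan_records, unprocessed_docs, linkage_pct

records = [
    {"id": "R1", "document_id": "D1"},
    {"id": "R2", "document_id": None},  # manual spreadsheet entry, no evidence
]
documents = [{"id": "D1"}, {"id": "D2"}]  # D2 uploaded but never processed

orphans, unprocessed, pct = linkage_gaps(records, documents)
print(orphans, unprocessed, round(pct))
```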

Why this maps to ASSA 5010

Group 2 entities start mandatory AASB S2 reporting for financial years from 1 July 2026. Under ASSA 5010, Year 1 requires limited assurance over governance disclosures and Scope 1 and 2 emissions. Year 2 expands to strategy, risk management, and Scope 3. By Year 4 (financial years from 1 July 2030), you're under reasonable assurance for everything.

Limited assurance isn't a free pass. The AUASB's guidance on ASSA 5010 makes clear that the practitioner must evaluate the accuracy and completeness of information used as evidence. They'll test your process controls. They'll sample your data. They'll follow the chain from reported number to source document.

A continuous data health score demonstrates exactly the kind of process control that auditors look for. It shows you're not just collecting data at year-end. You're monitoring quality throughout the period. You're identifying gaps when they arise, not when they're discovered during audit fieldwork. That's the difference between "this entity has a reactive approach to data quality" and "this entity has continuous monitoring with documented improvement over time."

We're not sure this framing has been tested extensively in Australian sustainability assurance engagements yet, given that Group 1 is still in its first reporting cycle. But the principle maps directly from financial audit: continuous controls monitoring beats point-in-time testing. Every auditor we've spoken to agrees on that much.

What 78% report-ready actually looks like

Imagine a mid-size manufacturer reporting under NGER with 12 facilities. Their data health score sits at 78% in April. Here's what that breaks down to:

  • Scope coverage: 85%. Scope 1 and 2 are fully mapped. Scope 3 has six categories assessed. Nine categories haven't been evaluated at all.
  • Temporal completeness: 82%. Eleven of twelve facilities have continuous data. One site is missing three months of gas data because the retailer changed billing systems.
  • Verification rate: 70%. Most AI-extracted records from the first half of the year haven't been reviewed. The sustainability manager has been focused on the ASRS governance narrative, not data verification.
  • Factor currency: 90%. Most factors are current NGA 2025 edition. Two custom factors for speciality fuels haven't been updated since 2023.
  • Source document linkage: 65%. Early-year records were entered manually from a spreadsheet migration and never linked to original documents.

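For what it's worth, the 78% in this example is approximately the unweighted mean of the five component scores. Equal weighting is an assumption for illustration; as noted earlier, a production score would weight dimensions by materiality.

```python
# The worked example's components, combined with equal weights.
components = {
    "scope_coverage": 85,
    "temporal_completeness": 82,
    "verification_rate": 70,
    "factor_currency": 90,
    "source_linkage": 65,
}

score = sum(components.values()) / len(components)
print(f"{score:.0f}% report-ready")  # 392 / 5 = 78.4, displayed as 78%
```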
78% in April gives this team six months to close every gap before the 31 October NGER deadline. That's achievable. The same 78% in September? Much harder.

The power of a single score isn't the number itself. It's the visibility it gives to people who aren't in the data every day. The CFO doesn't need to understand temporal completeness weighting. They need to know the number is 78% and the target is 95%. The board doesn't need a tutorial on emission factors. They need to see that the score improved from 65% in January to 78% in April to (hopefully) 95% by September.

The gap between knowing and fixing

We should be honest about something. A score alone doesn't fix anything. It tells you where to look. But someone still has to chase the missing gas bills, review the unverified records, update the stale factors, and link the orphan entries to source documents.

The value is in the specificity. A score that just says "78%" isn't enough. It needs to break down into a prioritised list: these three gaps account for 60% of your lost points. Fix those first. The missing gas data at your highest-emitting facility matters more than an unreviewed record for a $47 fuel docket at a regional office. Not all anomalies deserve the same urgency.
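The prioritisation itself is trivial once gaps carry an estimated point impact: sort descending and surface the top few. The gap descriptions and point estimates below are illustrative assumptions, not output from any real system.

```python
# Sketch of gap prioritisation: rank open gaps by estimated impact on
# the health score, so "fix these three first" falls out of the data.
gaps = [
    {"desc": "3 months gas missing, highest-emitting facility", "points": 6.0},
    {"desc": "200 unverified first-half records",               "points": 5.5},
    {"desc": "Early-year records unlinked to documents",        "points": 5.0},
    {"desc": "Unreviewed fuel docket, regional office",         "points": 0.1},
]

for gap in sorted(gaps, key=lambda g: g["points"], reverse=True)[:3]:
    print(f'{gap["points"]:>4.1f} pts  {gap["desc"]}')
```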

This is where the concept intersects with how we've built Carbonly. Every document processed, every record verified, every factor updated feeds back into the score in real time. It's not something you run once a year like a health check. It's a live signal. But we won't pretend the technology solves the human problem. If the sustainability manager has 200 unverified records and no time to review them, the score will sit there, stubbornly below target, until someone allocates the hours.

The best use of a data health score is as a management tool. Present it at the monthly sustainability steering committee. Include it in the CFO's dashboard. Make it visible. The problems that get measured, and reported upward, are the problems that get resources. The ones buried in a spreadsheet stay buried.

Start measuring before you need the answer

The 31 October NGER deadline is six months away. Group 2 entities start collecting data for AASB S2 reporting from 1 July 2026. ASSA 5010 limited assurance applies from Year 1. Three regulatory pressures converging on the same dataset.

If you don't know your data health score today, you're flying blind. And the companies that scramble in October will be the same ones who scramble in December when the auditor arrives.

Set up the measurement now. Define the five dimensions for your specific operations. Establish your baseline. Then improve the number every month, visibly, with evidence. That's not just good practice. Under ASSA 5010, it's the kind of continuous process control that turns an audit from an ordeal into a formality.

If you want to see what a live data health score looks like for your reporting portfolio, reach out at hello@carbonly.ai.