The standards have been in effect since December 2022. The first round of annual SOQM evaluations is done. The inspections are landing. The patterns are clear. Here's what is working, what isn't, and what better evidence looks like.
Picture a Quality Partner sitting opposite an audit inspector. The moment that matters does not come from a prepared presentation. It comes from a follow-up question.
The inspector has asked whether the firm has a pattern of unsupported numerical claims appearing in deliverables from a particular practice area, on a particular type of engagement, over the last 18 months.
The Quality Partner knows the answer to this question. They can feel it. There has been an uptick. They have spoken to the practice lead about it. There is a remediation plan.
What they do not have, sitting on their laptop, is the evidence pack. The cross-engagement view. The data that shows the pattern, when it started, who is involved, what has been done, and how the remediation is tracking.
So the meeting ends, the data gathering begins, and three weeks later the Partner sends through a careful response that does not quite have the analytical depth of the original observation. The inspector forms their own view about how robust the firm's monitoring really is.
This is the practical reality of ISQM 1 three years in.
A quick primer, for anyone newer to the standards or looking at this from outside the audit profession.
ISQM 1 (Quality Management for Firms that Perform Audits or Reviews of Financial Statements, or Other Assurance or Related Services Engagements) replaced the previous ISQC 1 from 15 December 2022. It requires every firm that performs audits or assurance work to design, implement and operate a System of Quality Management built on a risk-based approach. The standard is built on eight components: the firm's risk assessment process, governance and leadership, relevant ethical requirements, acceptance and continuance, engagement performance, resources, information and communication, and the monitoring and remediation process.
ISQM 2 (Engagement Quality Reviews) sets out the requirements for the appointment, eligibility, and responsibilities of the Engagement Quality Reviewer (EQR), and the documentation of an engagement quality review. It applies to all engagements where an EQR is required under ISQM 1.
ISA 220 (Revised) sits alongside ISQM 1 and 2 and addresses quality management at the engagement level, integrating with the firm-level system.
The annual evaluation requirement (ISQM 1, paragraphs 53 and 54 internationally; paragraph 53 in the UK version) requires the Individual with Ultimate Responsibility (IUR) to evaluate the SOQM at least annually, as of a point in time. The IUR concludes whether the system provides reasonable assurance that the firm's quality objectives are being achieved.
National variants follow the same architecture. The PCAOB's QC 1000 takes effect from 15 December 2025 and is broadly aligned. The AICPA's SQMS 1 covers private company audits in the US. CSQM 1 is the Canadian equivalent, regulated by CPAB. ASQM 1 applies in Australia, and ISQM (NZ) 1 in New Zealand. The shape of the obligation is the same everywhere.
After three years of operation and two complete annual evaluation cycles, several consistent patterns have emerged across regulators.
Recurring EQR deficiencies are the inspection theme of 2025. PCAOB inspection commentary flags that some firms are now in their second or third inspection report with EQR quality control criticisms. The PCAOB pays particular attention to recurring deficiencies and expects to see incremental remediation in each subsequent submission. The same training delivered three years in a row is no longer accepted as evidence of remediation.
Monitoring and remediation evidence is weaker than the design. CPAB's 2025 interim inspection commentary notes that even though firms have generally designed their SOQMs reasonably, the implementation of monitoring activity is uneven. The FRC's 2025 Annual Review of Audit Quality identifies similar themes, particularly around group audit oversight, revenue recognition, estimates, and the firm's ability to evidence root cause analysis and follow-through on remediation.
Technology use without sufficient validation. Both CPAB and the FRC have raised concerns about audit firms using AI and analytics tools without adequate validation. ISQM 1 explicitly requires governance over the resources used in audit work, including technology resources. Inspection commentary suggests that firms have moved faster on adoption than on the validation and oversight required.
Root cause analysis that does not identify root causes. ISQM 1 requires firms to perform root cause analysis on identified deficiencies. Inspection findings repeatedly flag RCAs that stop at the symptom level (a particular partner, a particular engagement, a particular issue) without addressing the systemic factors that allowed the deficiency to occur.
Recurring deficiencies in specific issue types. Across multiple inspection reports, the same categories appear: revenue recognition, accounting estimates, going concern assessments, internal control testing, group audit oversight. The PCAOB's 2025 priorities also explicitly call out technology and ethics and culture.
If you are a Quality Partner reading this, none of these will be a surprise. They are the conversations you are already having internally.
The inspection findings are visible at the surface. The deeper problem is structural, and it is what makes ISQM 1 genuinely hard.
ISQM 1 is, fundamentally, a standard about evidence. The IUR's annual evaluation is a conclusion. The conclusion has to be supported by the monitoring and remediation process. The monitoring and remediation process has to produce reliable, relevant, timely information about whether the system is working.
In practice, this means a Quality Partner needs to be able to answer questions like: which issue types are recurring, in which practices, and since when? Which quality responses are being overridden, and how often? Where is EQR coverage falling short of target? Is the remediation for last year's deficiencies actually changing the trajectory?
These are the questions that an annual evaluation needs to answer. They are also the questions that most firms cannot easily answer, because the data is fragmented across engagement files, EQR documentation, internal review records, time recording systems, training records, and conversations that never made it into a system at all.
Without that data, the IUR's conclusion may be honestly held, but it is not robustly evidenced. The monitoring activities can be documented, but the patterns they were supposed to surface cannot easily be shown.
This is what makes ISQM 1 a different challenge from ISQC 1. ISQC 1 was about having the right policies and procedures. ISQM 1 is about evidencing that they are working.
A point of caution before getting into this. Audit firms have been on the receiving end of a lot of AI sales pitches over the past three years. Most of them have been overclaimed. The CPAB and FRC commentary on technology use is clear: firms that adopt AI tooling without validating it are introducing inspection risk.
So the question is not whether AI helps. It is where AI helps and where it does not.
Where general AI tools do not help with ISQM evidence work.
ISQM monitoring and remediation is structured, deterministic work. The firm has a defined taxonomy of quality risks, quality responses, and engagement issue types. The data needs to be aggregated against that taxonomy in a way that is consistent across engagements and over time. Generic AI tools (ChatGPT, Claude, Copilot, Gemini) are probabilistic. They do not maintain a deterministic taxonomy. They cannot reliably tell you whether issue type R-117 has appeared more often this quarter than last, because they are not tracking R-117. They are summarising whatever you give them.
For evidence-grade work that needs to stand up in front of an inspector, this is the wrong shape of tool. The Stanford research discussed in our previous post on reference verification showed that even purpose-built RAG-based legal AI tools hallucinate 17% to 33% of the time. For SOQM evidence, that error rate is not acceptable.
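The deterministic counting this work requires is, by contrast, straightforward to implement conventionally. A minimal sketch (the engagement IDs, quarters and issue codes are hypothetical, not drawn from any real taxonomy) of tallying a coded issue type across engagements per quarter:

```python
from collections import Counter

# Hypothetical coded issues: (engagement_id, quarter, issue_code).
# In practice these would be drawn from engagement files coded
# against the firm's own taxonomy.
issues = [
    ("ENG-001", "2025-Q1", "R-117"),
    ("ENG-002", "2025-Q2", "R-117"),
    ("ENG-003", "2025-Q2", "R-117"),
    ("ENG-003", "2025-Q2", "R-410"),
    ("ENG-004", "2025-Q2", "R-117"),
]

# Deterministic tally: the same inputs always produce the same
# counts, which is exactly what a probabilistic model cannot
# guarantee.
per_quarter = Counter((quarter, code) for _, quarter, code in issues)

def occurrences(code: str, quarter: str) -> int:
    """How many times a coded issue type appeared in a quarter."""
    return per_quarter[(quarter, code)]

# "Has R-117 appeared more often this quarter than last?"
print(occurrences("R-117", "2025-Q2") > occurrences("R-117", "2025-Q1"))  # True: 3 > 1
```

The point is not that the code is clever; it is that the answer is reproducible, auditable, and tied to a fixed taxonomy.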
Where AI legitimately does help.
The places where AI actually moves the needle on ISQM work are narrow and specific: classifying issues consistently against the firm's taxonomy, surfacing cross-engagement patterns on that taxonomy, flagging drift in override rates and EQR coverage, and assembling evidence summaries that link back to the underlying data.
The common thread: AI surfaces the pattern. The Quality Partner exercises the judgement. The standard explicitly does not allow that judgement to be delegated, and it should not be.
If you are a Quality Partner trying to make your annual SOQM evaluation more defensible, what does the underlying evidence look like?
A consistent issue taxonomy applied across all engagements. The same issue types coded the same way, regardless of which practice the engagement sits in. This is the single biggest unlock. Without consistent coding, cross-engagement pattern detection is impossible.
Cross-engagement pattern detection on that taxonomy. Issue type R-117 has appeared in 8 of A. Hayes's last 12 deliverables across 4 different Quality Profiles. R-410 (Independence declarations) has 0% dismissal but 22% incompletion across Audit and Assurance. Risk Advisory practice shows reasoning-issue density 2.4 times the firm baseline. The numbers do not have to be perfect. They have to be consistent and defensible.
Override rate monitoring against thresholds. Three rules currently exceed the 20% override threshold. Each is a candidate for Quality Profile review by the Quality Partner. Override patterns are the early warning signal that a response is not fitting the work.
EQR coverage tracking by office and practice. With explicit firm targets and visibility into capacity issues. EQR coverage in Wellington office dropped to 78% over the last 30 days, 28 engagements pending, capacity issue likely.
Documented evidence prompts for the IUR's review. The Quality Partner does not need to find every issue themselves. They need a set of curated patterns surfaced for their attention, each with the underlying data accessible if they want to drill in.
An immutable audit trail. Every evaluation, decision, override and use of a Quality Profile version should be recorded with a timestamp and user. When the inspector asks how a particular decision was made eighteen months ago, the answer should be a click away.
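Several of the items above are mechanical checks rather than judgement calls. As an illustration, here is a minimal sketch (the rule IDs, the 20% threshold and the event data are hypothetical) of the override-rate monitoring described above:

```python
from collections import defaultdict

OVERRIDE_THRESHOLD = 0.20  # hypothetical firm-set threshold of 20%

# Hypothetical events: (rule_id, was_overridden), drawn from the
# firm's quality-response logs.
events = [
    ("RULE-A", True), ("RULE-A", False), ("RULE-A", True), ("RULE-A", False),
    ("RULE-B", False), ("RULE-B", False), ("RULE-B", True), ("RULE-B", False),
    ("RULE-C", False), ("RULE-C", False),
]

totals = defaultdict(int)
overrides = defaultdict(int)
for rule_id, was_overridden in events:
    totals[rule_id] += 1
    overrides[rule_id] += was_overridden

def override_rate(rule_id: str) -> float:
    """Fraction of applications of a rule that were overridden."""
    return overrides[rule_id] / totals[rule_id]

# Rules above the threshold become candidates for Quality Profile
# review by the Quality Partner.
flagged = sorted(r for r in totals if override_rate(r) > OVERRIDE_THRESHOLD)
print(flagged)  # ['RULE-A', 'RULE-B']
```

The computation is trivial; the hard part, as noted above, is getting consistently coded inputs in the first place.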
This is the kind of evidence pack that supports the IUR's conclusion under paragraph 53 of ISQM 1. It is also the kind of evidence pack that most firms are still building manually.
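On the audit-trail point specifically, "immutable" is commonly implemented as an append-only, hash-chained log, where each record's hash covers the previous record so later tampering is detectable. A minimal sketch (the field names and entries are illustrative, not a prescribed schema):

```python
import hashlib
import json

def append_entry(trail: list, entry: dict) -> None:
    """Append an entry whose hash chains to the previous record,
    so any later edit to history breaks the chain."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    trail.append({"entry": entry, "prev": prev_hash, "hash": digest})

def verify(trail: list) -> bool:
    """Recompute every hash in order; any tampering surfaces as a mismatch."""
    prev_hash = "0" * 64
    for record in trail:
        payload = json.dumps(record["entry"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

trail = []
append_entry(trail, {"ts": "2025-06-01T10:00Z", "user": "qp1", "action": "override", "rule": "RULE-A"})
append_entry(trail, {"ts": "2025-06-02T09:30Z", "user": "qp1", "action": "profile_update", "version": 4})
print(verify(trail))   # True

trail[0]["entry"]["user"] = "someone_else"  # tamper with history
print(verify(trail))   # False
```

A production system would add signing and durable storage, but the principle is the same: the trail proves its own integrity rather than relying on access controls alone.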
A few practical steps that do not require waiting for any new tool.
Make sure your issue taxonomy is consistent across practices. This sounds basic. It is rarely the case in practice. Different practices code issues differently. Until the coding is consistent, the pattern detection is unreliable.
Audit your last annual SOQM evaluation against the questions an inspector would ask. What were the key conclusions? What evidence supports each one? Could you produce that evidence within an hour if asked? If not, what would change in your data collection to make that possible?
Look at your EQR documentation for recurring patterns. The PCAOB inspection theme of 2025 is recurring EQR deficiencies. Before the next inspection cycle, run your own internal analysis. Where are the same issue types appearing in multiple engagement reviews? What does your remediation actually look like over time?
Review your root cause analysis output for the last 12 months. Are the root causes actually root causes, or are they descriptions of the symptom? "The partner did not adequately review the working paper" is not a root cause. "The firm has not invested in training on this issue type since 2022" might be.
Be sceptical of AI tooling that promises to "solve" ISQM 1. Ask the verification questions. How does the tool maintain a consistent taxonomy across engagements? Does it produce evidence the Quality Partner can defend in front of a regulator? Does it preserve the audit trail required by paragraphs 57 to 60 of ISQM 1?
The IAASB's choice to move from ISQC 1 to ISQM 1 was deliberate. The shift from "control" to "management" signalled that the obligation on firms was no longer to have the right policies, but to actively manage quality on an ongoing, evidence-based basis. The annual evaluation is the cap on a year of monitoring activity, not a separate exercise.
Three years in, the firms that are doing well on this are the ones who treat the SOQM as a live system rather than a documentation exercise. The firms that are struggling are the ones whose SOQM is described in detail in a 200-page manual but whose monitoring and remediation evidence is held together with spreadsheets and partner memory.
Qrtr is being built specifically to support this layer of work: structured cross-engagement evidence, drift detection on Quality Profiles, override rate monitoring, EQR coverage analytics, and audit-trail-preserving evidence pack generation against ISQM 1 and ISO 9001 frameworks. If that sounds like something your firm needs, you can register your interest for early access.
When did ISQM 1 and ISQM 2 take effect? ISQM 1 required systems of quality management to be designed and implemented by 15 December 2022, with the first annual evaluation of the SOQM performed within one year of that date. ISQM 2 and ISA 220 (Revised) took effect for audits and reviews of financial statements for periods beginning on or after 15 December 2022.
What is the System of Quality Management (SOQM)? The SOQM is the firm-level system that ISQM 1 requires to be designed, implemented and operated. It is built on eight components: the firm's risk assessment process, governance and leadership, relevant ethical requirements, acceptance and continuance, engagement performance, resources, information and communication, and the monitoring and remediation process. Its objective is to provide the firm with reasonable assurance that quality objectives are being achieved.
Who is the Individual with Ultimate Responsibility (IUR) under ISQM 1? The IUR is the individual (or individuals) within the firm assigned ultimate responsibility and accountability for the System of Quality Management. ISQM 1 requires the IUR to evaluate the SOQM at least annually and to conclude whether the system provides reasonable assurance that its objectives are being achieved.
What is the difference between ISQM 1 and ISQC 1? ISQC 1 was a quality control standard, focused on having policies and procedures in place. ISQM 1 is a quality management standard, focused on actively managing a System of Quality Management on a risk-based basis. The shift requires firms to identify quality risks, design responses, and continuously monitor whether the responses are working, with formal annual evaluation and root cause analysis on identified deficiencies.
What is ISQM 2? ISQM 2 sets out the requirements for Engagement Quality Reviews (EQRs). It addresses the appointment and eligibility of the Engagement Quality Reviewer, and their responsibilities relating to the performance and documentation of an engagement quality review. ISQM 2 applies to all engagements for which an EQR is required under ISQM 1.
Are ISQM 1 and ISQM 2 the same as the PCAOB's QC 1000? QC 1000 is the PCAOB's equivalent firm-level quality control standard, effective for periods beginning on or after 15 December 2025. It is conceptually aligned with ISQM 1 but contains specific PCAOB requirements. Firms registered with the PCAOB will need to comply with QC 1000 in addition to or instead of ISQM 1, depending on their jurisdiction. National equivalents include SQMS 1 (AICPA, US private company), CSQM 1 (Canada), ASQM 1 (Australia) and ISQM (NZ) 1.
What kind of evidence supports the annual SOQM evaluation? The evidence base for the annual evaluation typically includes: results of monitoring activities (including cold file reviews and ongoing monitoring), root cause analysis on identified deficiencies, evidence of remediation effectiveness, EQR coverage and findings, override rate analysis on quality responses, cross-engagement pattern analysis on issue types, and documentation of follow-up on previous evaluation cycles. The level of formality required scales with the firm's complexity.