
How to Fix Poor Database Structure in MS Access (Inventory, CRM, Reporting)

April 9, 2026 · 14 min read

If your MS Access database slows down after 50,000+ records, the problem is rarely hardware—it is structure and queries that were never designed for the row counts you have now. I have watched a wholesale inventory app grind because “Customer” lived in fifteen columns on the order header, and a CRM pipeline died because opportunities were keyed on free-text company names. Reporting looked fine at 2,000 rows; at 80,000, the same tables and joins became the bottleneck.

What you get when you fix structure: fewer table scans, predictable performance, and queries that scale with filters instead of cartesian explosions. You can ship reports on time and keep forms responsive for the team entering data all day.

  • Solve performance issues at the source: primary and foreign keys, selective indexes, and relationships that match how inventory, CRM, and finance actually work.
  • Improve database speed by aligning joins with indexed keys and moving heavy work out of nested queries and domain functions.
  • Handle large datasets with split front/back, archive tables, and (when needed) a staged move of hot data to SQL while keeping Access as the UI.

Why Poor Structure Shows Up as “Slow Access”

Indexing gaps — Jet/ACE can use indexes only where they exist and match your criteria. Joining a 200k-line shipment table to a header on an unindexed text field forces a scan. In CRM, filtering opportunities on an expression like Year(CloseDate) blocks index use even when CloseDate is indexed.
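
The sargable rewrite is mechanical: move the computation off the indexed column and onto the constants. A minimal sketch, assuming a CRM table named Opportunities with an indexed CloseDate column (names are illustrative):

SQL — range filter on the raw column instead of a function on it:

-- Blocks the CloseDate index: Year() must run on every row
SELECT OpportunityID, CloseDate
FROM Opportunities
WHERE Year(CloseDate) = 2025;

-- Uses the CloseDate index: a plain range on the raw column
SELECT OpportunityID, CloseDate
FROM Opportunities
WHERE CloseDate >= #1/1/2025# AND CloseDate < #1/1/2026#;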

Inefficient joins — Many-to-many implemented as wide tables with dozens of nullable columns invite OR logic and outer joins that multiply rows. A classic failure mode is a query that unintentionally becomes a partial cartesian product because keys are not enforced and duplicates slip in.

Bad table relationships — Without referential integrity, orphan rows accumulate. Developers compensate with extra WHERE clauses, DISTINCT, or outer joins “to be safe,” which hides duplicates until volume exposes the cost.

Large unoptimized queries — Stacked queries that pull everything “for flexibility,” DSum/DLookup in forms and reports, and subqueries correlated per row turn O(n) work into O(n²) as tables grow.

Real-world constraint: You rarely get a greenfield redesign mid-season. The fix is usually incremental: tighten keys and indexes first, then reshape the worst queries, then archive cold history—without stopping order entry.

Step 1 – Identify Bottlenecks

Why it matters: You need to know whether time goes to disk scans, join shape, locking on the share, or UI design. Guessing leads to random new indexes and frustrated users.

How:

  • Run Performance Analyzer on the slowest saved queries and on forms whose RecordSource points at them.
  • In SQL View, ask: does this SQL return a small rowset early, or does it drag large tables together before filtering? Trace stacked queries from the innermost query outward.
  • For multi-user files, note whether slowness tracks network/VPN or happens locally—structure still matters, but a split back-end on Wi‑Fi adds round-trip pain on top of bad SQL.
  • Compare row counts: tables that grew 20x while queries stayed unchanged are structure debt, not “Access limits.”
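
The row-count comparison can be automated with a short DAO loop (a sketch; it covers local and linked tables reachable through CurrentDb and skips system objects):

VBA — print row counts for every table (Standard Module):

Public Sub ListTableRowCounts()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    For Each tdf In db.TableDefs
        ' Skip system ("MSys") and temporary ("~") tables
        If Left$(tdf.Name, 4) <> "MSys" And Left$(tdf.Name, 1) <> "~" Then
            ' DCount works for both local and linked tables;
            ' tdf.RecordCount does not populate for linked tables
            Debug.Print tdf.Name & ": " & DCount("*", "[" & tdf.Name & "]") & " rows"
        End If
    Next tdf
End Sub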

Step 2 – Optimize Tables

Why it matters: Queries can only be as clean as the keys and grain they sit on. Normalization and explicit relationships are how you keep inventory lines, CRM activities, and GL detail from colliding.

How:

  • Define a natural or surrogate primary key on every entity table; use AutoNumber where there is no stable business key.
  • Move repeating groups (e.g., Contact1…Contact5) into child tables with a foreign key—your CRM forms become continuous subforms instead of 120-column nightmares.
  • Turn on referential integrity where business rules allow; cascade updates carefully, document deletes.
  • Index every foreign key used in joins and high-selectivity filter columns (SKU, OrderID, AccountID). Skip indexing ultra-low-cardinality fields unless measured.
  • For serious apps, avoid table-level lookup fields; use combos on forms bound to real FKs so imports and ODBC stay predictable.
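
The key-and-index work above can also be scripted as Access DDL, which is handy for repeatable deployments (run via CurrentDb.Execute or a query in ANSI-92 mode; the Orders/OrderLines names are illustrative):

SQL — add a primary key, an enforced foreign key, and a join index (run one statement at a time):

ALTER TABLE Orders
    ADD CONSTRAINT pkOrders PRIMARY KEY (OrderID);

ALTER TABLE OrderLines
    ADD CONSTRAINT fkOrderLinesOrders
    FOREIGN KEY (OrderID) REFERENCES Orders (OrderID);

CREATE INDEX idxOrderLinesItem ON OrderLines (ItemID);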

Real-world: A distributor had “one table per year” of sales because someone copied tables annually. Merging into a single fact table with a Year column plus archive tables for closed years cut report time and simplified queries overnight.
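
A sketch of that merge, assuming per-year tables Sales2023 and Sales2024 with identical columns (your table names and column lists will differ; run one statement at a time):

SQL — consolidate per-year tables into one fact table with a Year column:

-- Make-table query seeds the fact table from the first year
SELECT 2023 AS SalesYear, s.* INTO SalesFact
FROM Sales2023 AS s;

-- Append query folds in each subsequent year
INSERT INTO SalesFact
SELECT 2024 AS SalesYear, s.*
FROM Sales2024 AS s;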

Book Free Consultation

Walk through your worst form or report, or request a short structure audit: relationship map, missing indexes, and top SQL fixes—prioritized for your team.


Step 3 – Improve Queries

Why it matters: One saved query reused by forms and reports multiplies any fix. Structure work is wasted if SQL still selects * from stacked views.

How:

  • Filter in the innermost query; pass date ranges and IDs as parameters instead of wrapping fields in functions.
  • Replace DSum/DLookup patterns with grouped subqueries or small summary tables maintained by append queries on a schedule.
  • Prefer INNER JOIN when business rules require matches; reserve OUTER JOIN for true optional data.
  • Break “query on query on query” stacks when Access raises complexity limits or plans degrade—stage into temp tables for heavy month-end reports if needed.
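
Staging a heavy month-end stack might look like this (a sketch; tmpMonthEnd and the column names are placeholders for your own schema):

SQL — materialize the filtered detail once, then report off the small staged table:

-- 1. Make-table query: filter first, compute line amounts once
SELECT OrderID, TranDate, Quantity * UnitPrice AS LineAmount
INTO tmpMonthEnd
FROM OrderLines
WHERE TranDate >= [StartDate] AND TranDate < [EndDate];

-- 2. Month-end reports aggregate the staged rows, not the full history
SELECT OrderID, Sum(LineAmount) AS OrderTotal
FROM tmpMonthEnd
GROUP BY OrderID;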

Step 4 – Reduce Load

Why it matters: The fastest query is the one users do not run until they need it. Forms that load 100k rows to show ten are a structure and UI problem.

How:

  • Set form RecordSource to a query with WHERE that matches the form’s purpose (e.g., open orders only).
  • Use progressive combo box row sources (filter by prefix on indexed text) instead of DISTINCT on huge tables.
  • For reports, pre-aggregate nightly into summary tables if live detail at month-end is overkill.
  • Split the database: UI in a local front-end; tables in a network back-end; one front-end copy per workstation. Compact the back-end on a schedule after large deletes.
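
The progressive combo pattern can be sketched in VBA (this goes in the form's code-behind module, not a standard module; cboCustomer, Customers, and CustomerName are hypothetical names):

VBA — filter the combo's row source by typed prefix instead of loading the whole table:

Private Sub cboCustomer_Change()
    Dim prefix As String
    prefix = Me.cboCustomer.Text
    If Len(prefix) >= 2 Then   ' wait for two characters before hitting the table
        ' Prefix LIKE on an indexed text column stays index-friendly;
        ' doubling single quotes guards names like O'Brien
        Me.cboCustomer.RowSource = _
            "SELECT CustomerID, CustomerName FROM Customers " & _
            "WHERE CustomerName LIKE '" & Replace(prefix, "'", "''") & "*' " & _
            "ORDER BY CustomerName;"
    End If
End Sub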

VBA and SQL You Can Use Today

Practical pass: Use the Relationships window and Design view together—every line of a join should have a matching index on the joined columns. If Access suggests an index when you join large tables, accept it unless you have measured a reason not to.

SQL — replace a per-row domain aggregate with a join (pattern for line totals):

SELECT h.OrderID, h.OrderDate, Nz(d.LineTotal,0) AS LineTotal
FROM OrderHeaders AS h
LEFT JOIN (
    SELECT OrderID, Sum(Quantity * UnitPrice) AS LineTotal
    FROM OrderLines
    GROUP BY OrderID
) AS d ON h.OrderID = d.OrderID;

VBA — list indexes on a table (Standard Module; current .accdb projects include the required DAO reference by default via the Microsoft Office Access database engine Object Library, while legacy .mdb projects may need VBA Editor → Tools → References → Microsoft DAO 3.6 Object Library):

Public Sub ListIndexesForTable(ByVal tblName As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Dim idx As DAO.Index
    Set db = CurrentDb
    Set tdf = db.TableDefs(tblName)
    For Each idx In tdf.Indexes
        Debug.Print idx.Name
    Next idx
End Sub

VBA — log slow query opens (Standard Module):

Public Sub TimeOpenQuery(ByVal qryName As String)
    Dim t As Double
    t = Timer   ' seconds since midnight; re-run timings that straddle midnight
    DoCmd.OpenQuery qryName, acViewNormal, acReadOnly
    Debug.Print qryName & ": " & Format(Timer - t, "0.000") & " s"
End Sub

Where to put it: Standard module. How to run: Open Immediate Window (Ctrl+G), type ListIndexesForTable with your table name in quotes, or TimeOpenQuery with your saved query name—then Enter.

Common mistakes: Assuming VBA string comparisons (Option Compare Database vs. Binary) behave the same as query criteria; running DDL from queries without exclusive access; using SELECT * in stacked queries when downstream needs five columns—each extra wide column blocks index-only plans and bloats temp space.

Real Business Use Case

Inventory system, ~100k SKU movements per year. Before: Movement history lived in one wide table with item attributes duplicated every row; rolling reports by warehouse timed out; forms filtered client-side in VBA after loading the whole set. After: Normalized item master; movement lines keyed by ItemID with indexes on ItemID + TranDate; nightly summary table for valuation reports; forms opened on indexed date range only. Result: Month-end closed hours faster, fewer lock conflicts, and buyers stopped avoiding the “slow” screen.
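
The nightly summary step from this scenario can be sketched as a delete-and-append pair run from a scheduled macro or VBA (tblValuationSummary, MovementLines, and the columns are illustrative):

SQL — rebuild the valuation summary each night (run one statement at a time):

DELETE FROM tblValuationSummary;

INSERT INTO tblValuationSummary (ItemID, WarehouseID, OnHandQty, OnHandValue)
SELECT ItemID, WarehouseID, Sum(Qty) AS OnHandQty, Sum(Qty * UnitCost) AS OnHandValue
FROM MovementLines
GROUP BY ItemID, WarehouseID;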

Pair structural work with operational discipline: Access database development services for redesigns, and VBA automation when you need scheduled compacts, batch loads, or validation—not as a band-aid over cartesian queries.

Common Mistakes

  • No indexing on join and filter columns—especially foreign keys.
  • Loading full tables into forms or row sources “for flexibility.”
  • Poor normalization — repeating groups, multi-valued columns, and “spreadsheet tables” imported as-is.
  • Leaky keys — duplicates and nulls in fields that are supposed to identify one row.
  • Lookup fields on base tables used as shortcuts instead of real relationships.
  • Monolithic .accdb — no split, no compact plan, attachments bloating rows.
  • Overusing DISTINCT to hide bad joins instead of fixing keys.

Performance Optimization (Advanced)

Indexing strategy — Index selective predicates first; composite indexes only when queries consistently filter the same left-to-right sequence. Too many overlapping indexes slow writes and confuse the engine.

Split database — Front-end holds forms, reports, macros, and links; back-end holds tables. Reduces corruption risk and clarifies deployment.

Query optimization — Sargable date ranges, parameter queries, avoid functions on indexed columns in WHERE. For related patterns, see Access query performance troubleshooting.

File size control — Archive cold history, remove unused objects, compact after large deletes. When the back-end pushes past roughly 500MB with heavy multi-user writes, stop over-tuning Access alone.

Expert Insight

If your Access file exceeds 500MB and multiple users are hitting it during business hours, you should consider partial migration to SQL Server or Azure SQL instead of endless micro-optimizations. Keep Access as a front end with linked tables or pass-through queries where it shines; move the contested, high-volume tables to a server engine built for concurrency.
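
Wiring a pass-through query up in DAO might look like this (a sketch; the connect string, server, and object names are placeholders you must adapt to your environment):

VBA — create a pass-through query that runs on the server engine (Standard Module):

Public Sub CreatePassThrough()
    Dim db As DAO.Database
    Dim qdf As DAO.QueryDef
    Set db = CurrentDb
    Set qdf = db.CreateQueryDef("qryServerOrderTotals")
    ' Placeholder connect string: point it at your SQL Server / Azure SQL
    qdf.Connect = "ODBC;DRIVER={ODBC Driver 17 for SQL Server};" & _
                  "SERVER=YourServer;DATABASE=YourDb;Trusted_Connection=Yes;"
    ' SQL is sent as-is to the server, so use the server's dialect
    qdf.SQL = "SELECT OrderID, SUM(Quantity * UnitPrice) AS LineTotal " & _
              "FROM dbo.OrderLines GROUP BY OrderID;"
    qdf.ReturnsRecords = True
End Sub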

For Excel-heavy satellite processes, Excel consulting plus a defined export/import contract often beats forcing everything into one .accdb. Deeper performance context: why Access slows with large data.

Need More Help?

Our experts can help you resolve complex Excel and Access issues quickly and efficiently.

Book 30 Min Free Consulting