Usage and Billing Metrics FAQs
Where can I review my PowerSync Service usage?
- Individual instances: Visit the Usage metrics workspace in the PowerSync Dashboard to see metrics for a specific instance.
- Organization-wide: Go to your organization in the PowerSync Dashboard and check the Plan Usage section for aggregated metrics across all instances in your current billing cycle.
What are sync operations?
- Every row change (insert, update, delete) creates a new operation, and this operations history accumulates over time.
- When a new client connects, it downloads the entire history on first sync.
- Existing clients only download new operations since their last sync.
PowerSync keeps this history in check through:
- Daily automatic compacting (built into PowerSync Cloud)
- Regular defragmentation (recommended for frequently updated data)
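As a rough heuristic for when defragmentation may help, you can compare total operations downloaded against total rows. This is only a sketch: the 3x threshold below is an arbitrary assumption, not a PowerSync default.

```typescript
// Ratio of operations in the history to actual rows synced.
function operationsToRowsRatio(totalOperations: number, totalRows: number): number {
  if (totalRows === 0) return 0;
  return totalOperations / totalRows;
}

// Rough heuristic: when clients download far more operations than there
// are rows, a large operations history has likely built up.
// The default threshold of 3 is an arbitrary example value.
function mayBenefitFromDefrag(
  totalOperations: number,
  totalRows: number,
  threshold = 3
): boolean {
  return operationsToRowsRatio(totalOperations, totalRows) > threshold;
}
```

The Sync Diagnostics Client (mentioned in the troubleshooting section below) reports the two input numbers for a given device.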
What are concurrent connections?
When a client calls .connect(), it establishes one long-lived connection for streaming real-time updates.
Key points about concurrent connections:
- Billing is based on peak concurrent connections, which is the highest number of simultaneous connections during the billing cycle.
- Billing (Pro/Team): 1,000 connections are included, then $30 per 1,000 over the included amount.
- PowerSync Cloud Pro plan is limited to 3,000 concurrent connections.
- PowerSync Cloud Team plan is limited to 10,000 concurrent connections by default.
- PowerSync Cloud Free plans are limited to 50 peak concurrent connections.
- When limits are reached, new connection attempts receive a 429 HTTP response while existing connections continue syncing. Clients automatically retry after a delay and reconnect once capacity is available.
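The retry behavior above can be illustrated with a capped exponential backoff schedule. This is purely illustrative: the PowerSync SDKs implement their own retry logic internally, and the delay values below are arbitrary assumptions.

```typescript
// Compute a capped exponential backoff schedule (in milliseconds) for
// reconnection attempts after a 429 response.
// baseMs and capMs are arbitrary example values, not SDK defaults.
function backoffDelays(attempts: number, baseMs = 1000, capMs = 30000): number[] {
  const delays: number[] = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(baseMs * 2 ** i, capMs));
  }
  return delays;
}
```

Spacing retries out like this prevents a burst of reconnect attempts from piling onto an instance that is already at capacity.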
What is data synced?
- Data synced measures the total uncompressed bytes streamed from the PowerSync Service to clients. See the Data Synced troubleshooting section below for details.
What is data hosted?
Data hosted on the PowerSync Service consists of:
- A current copy of the data, which should be roughly equal to the subset of your source data covered by your Sync Streams (or legacy Sync Rules).
- A history of all operations on data in buckets, which can be larger than the source since it includes history and one row can be in multiple buckets.
- Data for parameter lookups, which is typically small.
What comprises data processing (legacy)?
What is the difference between data replicated vs data synced?
Billing FAQs
Where can I see details about my current billing cycle?
Where can I update my billing details, e.g. the email that receives billing receipts?
Can I view my historic invoices?
Which usage metrics are billed under the new pricing model?
- Data synced: 30 GB included, then $1.00 per GB over the included amount.
- Peak concurrent connections: 1,000 included, then $30 per 1,000 over the included amount.
- Data hosted: 10 GB included, then $1.00 per GB over the included amount (unchanged from before).
The following legacy metrics are no longer billed under the new pricing model:
- Replication operations (count)
- Data replicated (per GB)
- Sync operations (count)
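As a sketch of how the included amounts and overage rates above combine into a monthly overage charge, assuming connection overage is billed per started block of 1,000 connections (check your invoice for exact proration):

```typescript
interface Usage {
  dataSyncedGB: number;
  peakConnections: number;
  dataHostedGB: number;
}

// Overage charges per the Pro/Team figures above: 30 GB synced,
// 1,000 peak connections, and 10 GB hosted are included.
// Assumption: connection overage is rounded up to full blocks of 1,000.
function overageUSD(u: Usage): number {
  const syncedOver = Math.max(0, u.dataSyncedGB - 30) * 1.0;
  const connBlocks = Math.ceil(Math.max(0, u.peakConnections - 1000) / 1000);
  const connOver = connBlocks * 30;
  const hostedOver = Math.max(0, u.dataHostedGB - 10) * 1.0;
  return syncedOver + connOver + hostedOver;
}
```

For example, 50 GB synced, 2,500 peak connections, and 12 GB hosted would come to $20 + $60 + $2 = $82 in overage under these assumptions.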
Usage Troubleshooting
If you’re seeing unexpected spikes in your usage metrics, here’s how to diagnose and fix common issues:
Common Usage Patterns
More Operations Than Rows
If you’re syncing significantly more operations than you have rows in your database, this usually indicates that a large operations history has built up. This is common with frequently updated data.
Solution: Defragmentation reduces the operations history by compacting buckets. While defragmentation triggers additional sync operations for existing users, it significantly reduces operations for new installations. Use the Sync Diagnostics Client to compare total rows vs. operations synced to identify whether this is affecting you.
Repetitive Syncing by the Same User
If you see the same user syncing repeatedly in quick succession, this could indicate a client code issue. First steps to troubleshoot:
- Check SDK version: Ensure you’re using the latest SDK version.
- Review client logs: Check your client-side logs for connection issues or sync loops.
- Check instance logs: Review Instance logs to see sync patterns and identify which users are affected.
Concurrent Connections
The most common cause of excessive concurrent connections is opening multiple copies of PowerSyncDatabase and calling .connect() on each. Debug your connection handling by reviewing your code and Instance logs, and ensure you’re only opening one connection per user/session.
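One way to enforce a single connection is to share one database instance across the app, for example via a module-level singleton. The sketch below uses a hypothetical Connectable interface in place of the real PowerSyncDatabase so it stays self-contained; in a real app you would construct PowerSyncDatabase once and reuse it everywhere.

```typescript
// Hypothetical stand-in for PowerSyncDatabase, so this sketch is self-contained.
interface Connectable {
  connect(): void;
}

let connectCalls = 0;

function createDatabase(): Connectable {
  // In a real app: new PowerSyncDatabase({ ... })
  return {
    connect() {
      connectCalls += 1; // tracked here only to demonstrate a single connection
    },
  };
}

let instance: Connectable | null = null;

// Always returns the same instance; connect() runs exactly once.
function getDatabase(): Connectable {
  if (instance === null) {
    instance = createDatabase();
    instance.connect();
  }
  return instance;
}
```

Every part of the app that needs the database calls getDatabase() instead of constructing its own copy, so only one long-lived connection is opened per session.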
Sync Operations
Sync operations are not billed in our updated pricing model, but they’re useful for diagnosing spikes in data synced and understanding how data mutations affect usage. While sync operations typically correspond to data mutations on synced rows (those in your Sync Streams/Sync Rules), several scenarios can affect your operation count:
Key Scenarios
- New App Installations: New users need to sync the complete operations history. We help manage this by running automatic daily compacting on Cloud instances and providing manual defragmentation options in the PowerSync Dashboard.
- Existing Users: Compacting and defragmenting reduce operations history but trigger additional sync operations for existing users. See our defragmenting guide to optimize this.
- Sync Rule Deployments: When you deploy changes to Sync Streams/Sync Rules, PowerSync recreates buckets from scratch. New app installations sync fewer operations since the operations history is reset, but existing users temporarily experience increased sync operations as they re-sync the updated buckets. We’re working on incremental sync rule reprocessing, which will only reprocess buckets whose definitions have changed.
- Unsynced Columns: Any row update triggers a new operation in the logical replication stream, regardless of which columns changed. PowerSync tracks changes at the row level, not the column level. This means updates to columns not included in your Sync Streams/Sync Rules still create sync operations, and even a no-op update like UPDATE mytable SET id = id generates a new operation for each affected row. Selectively syncing columns helps with data access control and reducing data transfer size, but it doesn’t reduce the number of sync operations.
Data Synced
Data synced measures the total uncompressed bytes streamed from the PowerSync Service to clients. Spikes typically come from either many sync operations (high churn) or large rows (large payloads), and can also occur during first-time syncs, defragmentation, or Sync Rule updates. If your spikes in data synced correspond with spikes in sync operations, also see the Sync Operations troubleshooting guidelines above.
Diagnose Data Synced Spikes
- Pinpoint when it spiked: Use Usage Metrics to find the exact hour/day of the spike.
- Inspect instance logs for size: In Instance Logs, enable Metadata and search for “Sync stream complete” to see the size of data transferred and operations synced per stream. You may need to scroll to load more logs. If you need a CSV export of your logs for a limited time range, contact us; for certain scenarios, a CSV export is easier to search than the instance logs in the dashboard.
- Compare operations vs row sizes: If operations are high and size scales with it, you likely have tables being updated frequently, or a large operations history has built up. See our defragmenting guide. If operations are moderate but size is large, your rows likely contain large data (e.g., large JSON columns or blobs).
- Identify large payloads in your database: Check typical row sizes for frequently updated tables and look for large columns (e.g., long TEXT/JSON fields, embedded files).
- Consider recent maintenance and app changes: Defragmentation and Sync Rule deploys cause existing clients to re-sync content, temporarily increasing data synced. New app installs trigger initial full sync, so expect higher usage when onboarding new sets of users.
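If you do obtain a CSV export of the logs, a small aggregation helper can turn it into per-user totals. The column layout below (user_id, bytes_synced, operations) is an assumption for illustration only; the actual export format may differ.

```typescript
// Sum bytes synced per user from CSV rows of the assumed form:
//   user_id,bytes_synced,operations
// Returns a map of user_id -> total bytes, useful for spotting which
// users account for a spike in data synced.
function bytesPerUser(csv: string): Map<string, number> {
  const totals = new Map<string, number>();
  for (const line of csv.trim().split("\n")) {
    const [user, bytes] = line.split(",");
    totals.set(user, (totals.get(user) ?? 0) + Number(bytes));
  }
  return totals;
}
```

Sorting the resulting map by value quickly surfaces the heaviest users during the spike window.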
Data Hosted
Your hosted data size may be larger than your source database size because it includes the history of all operations on data in buckets, and one row can be in multiple buckets. Data hosted can also temporarily spike during Sync Rule deployments and defragmentation because buckets are reprocessed; during this window, both the previous and the new bucket data may exist concurrently.
Troubleshooting Strategies
1. Identify Timing
Use Usage Metrics to pinpoint usage spikes.
2. Review Logs
Use Instance Logs to review sync service logs during the spike(s). Enable the Metadata option, then search for “Sync stream complete” entries (use your browser’s search function) to review how many operations synced, the size of data transferred, and which clients/users were involved.
3. Compare Metrics
Use the Sync Diagnostics Client to compare total rows vs. operations synced to the user device. If you’re seeing significantly more operations than rows, you might benefit from defragmentation.
4. Detailed Sync Operations
Use the test-client’s fetch-operations command with the --raw flag:
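A possible invocation is sketched below, assuming the test-client is built from the powersync-service repository. The --endpoint and --token flags and the repo layout are assumptions here; check the test-client’s README for the authoritative usage.

```bash
# Build the test-client first (assumed layout of the powersync-service repo):
#   cd test-client && npm install && npm run build
# --endpoint and --token are assumed flags; see the test-client README.
node dist/bin.js fetch-operations --raw \
  --endpoint https://<your-instance>.powersync.journeyapps.com \
  --token <development-token>
```

The raw output lists the individual operations in each bucket, which makes it possible to see exactly what a new client would download on first sync.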