Incident History

  1. Feb 08, 2026

    1. Scheduled Maintenance on US Region Completed

      We’ve successfully completed the scheduled upgrade of our US database infrastructure. During the maintenance window, we observed approximately 2 seconds of downtime on the api.us.kombo.dev API.

      Performance and availability improvements are now fully in place. EU region customers were not affected. Thanks for your patience!

    2. Scheduled Maintenance Period on US Region

      On Sunday, February 8, 2026, at 8:00 AM UTC (3:00 AM EST / 12:00 AM PST), we will perform a scheduled upgrade of our US database infrastructure to improve performance and availability for customers in the US Region.

      This might result in a brief period of reduced availability of the api.us.kombo.dev API within a 20-minute window (i.e., until 8:20 AM UTC). During this window, some requests might fail; a client-side retry sketch for bridging the window is included at the end of this notice. We apologize for any inconvenience this may cause. Our team is available for any questions.

      Customers in the EU region will not be affected by this maintenance.
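
      If your integration calls the US API during the window, wrapping requests in a simple retry with exponential backoff should bridge the brief failures. Below is a minimal sketch in TypeScript; the helper and its parameters are illustrative, not part of our SDK, so tune them to your own workload.

      ```typescript
      // Minimal retry helper with exponential backoff for transient failures
      // during the maintenance window. Retries on network errors and 5xx
      // responses; 4xx responses are returned to the caller unchanged.
      async function fetchWithRetry(
        url: string,
        init: RequestInit = {},
        maxAttempts = 5,
        baseDelayMs = 1_000,
      ): Promise<Response> {
        for (let attempt = 1; attempt <= maxAttempts; attempt++) {
          try {
            const response = await fetch(url, init);
            if (response.status < 500 || attempt === maxAttempts) {
              return response;
            }
          } catch (error) {
            // Network-level failure (e.g. a dropped connection mid-upgrade).
            if (attempt === maxAttempts) throw error;
          }
          // Exponential backoff: 1s, 2s, 4s, ... capped at 30s.
          const delay = Math.min(baseDelayMs * 2 ** (attempt - 1), 30_000);
          await new Promise((resolve) => setTimeout(resolve, delay));
        }
        throw new Error("unreachable");
      }

      // Usage with a hypothetical endpoint path:
      // const res = await fetchWithRetry("https://api.us.kombo.dev/v1/...", {
      //   headers: { Authorization: "Bearer <token>" },
      // });
      ```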

  2. Jan 29, 2026

    1. US API Fully Recovered

      We've continued monitoring our fix and have not seen a single failure since 2026-01-29 04:28:04 UTC. Response times and connection pool saturation for the US API are also back in a healthy state since then.

      We're considering this incident resolved, but we'll be running a thorough post-mortem process over the coming days to prevent a similar failure in the future. The overall impact has been limited, as only a small percentage of API requests for a subset of environments experienced problems, but this is still not the standard we want to hold ourselves to.

      To all affected customers: We sincerely apologize for any impact this has had on your production workloads. We'll be working hard to act on the lessons learned from this incident.

      Please don't hesitate to reach out to our team if you need support, for example with retrying failed write actions from the affected time frame; a rough replay sketch is included below.
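
      For customers who log outbound requests, a replay along these lines can re-issue writes that failed during the incident window. This is only a sketch: the FailedRequest shape is a placeholder for however you capture requests, the window start time is an assumption (adjust it to your own impact window), and if your writes aren't idempotent you should de-duplicate before replaying.

      ```typescript
      // Sketch: replay write requests that failed during the incident window.
      // The FailedRequest shape is a placeholder for however you log outbound
      // requests; adapt it to your own setup.
      interface FailedRequest {
        url: string;       // e.g. an api.us.kombo.dev endpoint
        method: string;    // POST, PATCH, ...
        body: string;      // original JSON payload
        timestamp: Date;   // when the request originally failed
      }

      // ASSUMPTION: the window start below is illustrative; the window end is
      // the last observed failure (2026-01-29 04:28:04 UTC).
      const WINDOW_START = new Date("2026-01-29T00:00:00Z");
      const WINDOW_END = new Date("2026-01-29T04:28:04Z");

      async function replayFailedWrites(log: FailedRequest[], token: string) {
        const inWindow = log.filter(
          (r) => r.timestamp >= WINDOW_START && r.timestamp <= WINDOW_END,
        );
        for (const req of inWindow) {
          const res = await fetch(req.url, {
            method: req.method,
            headers: {
              Authorization: `Bearer ${token}`,
              "Content-Type": "application/json",
            },
            body: req.body,
          });
          // Surface anything that still fails so it can be handled manually.
          if (!res.ok) {
            console.error(`Replay failed (${res.status}): ${req.method} ${req.url}`);
          }
        }
      }
      ```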

    2. US API Health Recovering

      We've traced the root cause of the connection pool congestion to a slow query on our "/ats/applications" endpoint that disproportionately affects a handful of integrations; a generic illustration of this failure mode is sketched at the end of this update.

      We've temporarily reverted the affected integrations to a slower but compatible query approach and will investigate the performance issue in more detail in the coming days.

      Even before rolling out this fix, we had already seen a greatly reduced number of failures over the past hour (overall, only a small percentage of customers were affected), and we expect the remaining failures to stop within the next half hour.

      We're continuing to monitor the situation to ensure full recovery.
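
      For the technically curious, the failure mode here is generic: a fixed-size connection pool where one slow query holds connections long enough that unrelated requests start failing. The sketch below illustrates it with node-postgres; this is a generic illustration with an invented schema, not our actual stack, and the timeouts shown are one common mitigation rather than the fix we deployed.

      ```typescript
      import { Pool } from "pg";

      // Generic illustration: a fixed-size pool where a single slow query can
      // hold a connection for a long time. Bounding query duration with a
      // statement timeout keeps one hot endpoint from starving the service.
      const pool = new Pool({
        max: 10,                        // fixed-size pool: 10 connections
        connectionTimeoutMillis: 2_000, // fail fast instead of queueing forever
        statement_timeout: 5_000,       // cancel queries running longer than 5s
      });

      // Invented table and query, purely for illustration.
      export async function listApplications(cursor: string | null) {
        // If this query is slow and traffic is high, all 10 connections end up
        // busy serving it, and unrelated queries start timing out: the
        // congestion pattern described above.
        const { rows } = await pool.query(
          "SELECT * FROM applications WHERE id > $1 ORDER BY id LIMIT 100",
          [cursor ?? ""],
        );
        return rows;
      }
      ```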

    3. Elevated rate of 500 errors on US API

      We've identified an issue with database connection pooling on one of our API services and have deployed an initial mitigation; even before its rollout, we had already observed a significant drop in 500 errors. We're continuing to investigate the root cause and monitor the mitigation.

      Syncs (EU and US) remain unaffected. Customers in the EU region remain unaffected.

    4. Elevated rate of 500 errors on US API

      We're seeing an elevated rate of 500 API errors affecting some customers across our US region. We're investigating the cause of the issue. The EU region does not appear to be impacted.