
LaunchDarkly Outage History

Past incidents and downtime events

Complete history of LaunchDarkly outages, incidents, and service disruptions. Showing the 50 most recent incidents.

January 2026 (4 incidents)

minor · resolved · Jan 31, 10:02 AM — Resolved Jan 31, 12:20 PM

Data ingestion delays

4 updates
resolvedJan 31, 12:20 PM

This incident has been resolved and all data processing pipelines are fully caught up. No data was lost.

monitoringJan 31, 11:35 AM

A fix has been implemented, and our event processing pipelines for Observability and OpenTelemetry are fully caught up. We're continuing to monitor as the event processing pipeline catches up for Flag Status, Evaluations, and Contexts.

identifiedJan 31, 10:23 AM

We have identified the issue and are continuing our work to resolve it.

investigatingJan 31, 10:02 AM

All customers are experiencing data ingestion delays with the following:
- Observability sessions and errors
- OpenTelemetry logs, traces, and metrics
- Flag status
- Evaluations
- Contexts
We are investigating and will provide updates as they become available. No data loss is expected.

minor · resolved · Jan 26, 05:54 PM — Resolved Jan 26, 10:13 PM

Elevated error rate when configuring Okta SCIM

4 updates
resolvedJan 26, 10:13 PM

This incident has been resolved.

monitoringJan 26, 08:29 PM

We believe the issue is resolved for all customers. We're continuing to monitor the situation.

identifiedJan 26, 07:56 PM

Some customers using Okta SCIM are encountering errors when provisioning and managing LaunchDarkly members. We continue to work on a remediation and have engaged with Okta's support team.

identifiedJan 26, 05:54 PM

Some customers are experiencing errors when configuring LaunchDarkly with Okta SCIM. We have identified the issue and are continuing our work to resolve it.

minor · resolved · Jan 12, 05:48 PM — Resolved Jan 12, 07:44 PM

Unable to edit JSON flag variations

4 updates
resolvedJan 12, 07:44 PM

This incident has been resolved.

monitoringJan 12, 07:16 PM

A fix has been implemented for the issue that prevented editing some JSON flag variations.

identifiedJan 12, 06:49 PM

We've identified a front-end issue that prevents editing certain JSON flag variations and are working on a fix.

investigatingJan 12, 05:48 PM

Customers may experience issues editing JSON flag variations. We are investigating the root cause and will provide updates shortly.

minor · resolved · Jan 8, 06:46 PM — Resolved Jan 8, 10:00 PM

Guarded releases event ingestion delays

4 updates
resolvedJan 8, 10:00 PM

This incident has been resolved.

monitoringJan 8, 07:08 PM

Events are caught up.

monitoringJan 8, 06:57 PM

A fix has been implemented and we are monitoring the results. We expect to catch up on all events within the next 15 minutes; no data loss is expected.

identifiedJan 8, 06:46 PM

We are currently experiencing delays with guarded releases event ingestion. We are investigating and will provide updates as they become available.

December 2025 (5 incidents)

minor · resolved · Dec 17, 06:33 PM — Resolved Dec 17, 08:51 PM

Delay in Observability product data ingest

4 updates
resolvedDec 17, 08:51 PM

This incident has been resolved.

monitoringDec 17, 08:20 PM

A fix has been implemented and we are monitoring the results.

identifiedDec 17, 07:35 PM

We have identified the cause of the ingest delay and are catching up on the backlogged messages. We expect to be caught up on all delayed sessions and errors in the next hour. Data loss is not expected.

investigatingDec 17, 06:33 PM

Sessions and errors may be delayed by up to 3 hours. We are investigating the root cause.

minor · resolved · Dec 15, 02:14 PM — Resolved Dec 15, 04:38 PM

Investigating - Increase in SDK errors

8 updates
resolvedDec 15, 04:38 PM

This incident has been resolved.

monitoringDec 15, 04:20 PM

A fix has been implemented and we are monitoring the results.

identifiedDec 15, 03:43 PM

We are observing a reduction in SDK errors. We are continuing to work on a fix.

identifiedDec 15, 03:31 PM

The issue has been identified and a fix is being implemented.

investigatingDec 15, 03:22 PM

We are continuing to investigate this issue.

investigatingDec 15, 03:03 PM

We are also observing a small percentage of timeouts when modifying feature flags via our API or UI. We are continuing to investigate this error.

investigatingDec 15, 02:42 PM

We are continuing to investigate this issue.

investigatingDec 15, 02:14 PM

We are investigating an increase in SDK error rates affecting a small portion of requests, currently estimated at less than one percent. SDKs will automatically retry these errors, so the primary customer impact is expected to be longer SDK initialization times rather than request failures. We believe the issue is related to an ongoing incident affecting one of our vendors. Our team is actively working to mitigate the impact and will provide additional updates as more information becomes available.
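The update above notes that SDKs retry these errors automatically, so the main symptom is slower SDK initialization rather than failed requests. A minimal sketch of bounding application startup on that initialization, written against the @launchdarkly/node-server-sdk package; the SDK key and the 10-second budget are placeholders, not values from this incident:

```typescript
import { init } from '@launchdarkly/node-server-sdk';

// Placeholder SDK key, for illustration only.
const client = init('sdk-xxxxxxxx');

async function startUp(): Promise<void> {
  // Bound how long startup waits for SDK initialization; during an incident
  // like the one above, automatic retries can make this step take longer.
  const budgetMs = 10_000;
  await Promise.race([
    client.waitForInitialization().catch(() => undefined),
    new Promise((resolve) => setTimeout(resolve, budgetMs)),
  ]);

  if (!client.initialized()) {
    // Proceed anyway; flag evaluations fall back to their in-code defaults
    // until the SDK finishes initializing in the background.
    console.warn('LaunchDarkly SDK not initialized within the startup budget');
  }
}

startUp().catch((err) => console.error(err));
```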

minor · resolved · Dec 4, 09:16 PM — Resolved Dec 5, 03:47 AM

Event Processing Delays - Experiment Results Utilizing Attribute Filtering affected

2 updates
resolvedDec 5, 03:47 AM

We have recovered from delays in experimentation results that are sliced by attributes. No data has been lost.

investigatingDec 4, 09:16 PM

We are investigating an issue with delays in experimentation results that are sliced by attributes. No data has been lost.

minor · resolved · Dec 3, 10:52 PM — Resolved Dec 4, 01:05 PM

Delays in publishing data export events

4 updates
resolvedDec 4, 01:05 PM

All of the delayed data has been processed and this incident is resolved.

identifiedDec 4, 09:50 AM

We are continuing to process the delayed data. The data is now updated through 2025-12-04, 01:00:00 UTC.

identifiedDec 4, 05:09 AM

We are continuing to process the data and data is current as of 2025-12-03, 08:00:00 UTC. We'll continue to update as we process data.

identifiedDec 3, 10:52 PM

Some customers who have configured Snowflake, BigQuery, or Redshift data export destinations may be experiencing delays in published events. There is no data loss. Exported data events are currently 32 hours behind. We are recovering steadily and will continue to send updates.

minor · resolved · Dec 3, 04:00 PM — Resolved Dec 3, 04:00 PM

Elevated error rates in the APAC region for server-side SDKs

1 update
resolvedDec 3, 08:32 PM

Server-side SDKs in the APAC region experienced elevated error rates when attempting to make new connections to the streaming service from 8:05 AM PT to 8:11 AM PT. The issue is now resolved.

November 2025 (7 incidents)

minor · resolved · Nov 26, 08:13 PM — Resolved Nov 26, 08:55 PM

Intermittent issues accessing flag details

3 updates
resolvedNov 26, 08:55 PM

This incident has been resolved.

monitoringNov 26, 08:45 PM

We are no longer seeing any errors, and the issue was contained to the euw1 region. We'll continue to monitor and update this as necessary.

investigatingNov 26, 08:13 PM

We are currently investigating an issue intermittently preventing our flag details pages from loading.

minor · resolved · Nov 24, 07:14 AM — Resolved Nov 24, 10:45 AM

Delayed Event Processing

4 updates
resolvedNov 24, 10:45 AM

This incident has been resolved.

monitoringNov 24, 10:35 AM

A fix has been implemented and our event processing pipeline is fully caught up. We're continuing to monitor.

identifiedNov 24, 09:31 AM

We are continuing to work on a fix for this issue, and remain at an approximate 20 minute delay in flag event processing.

identifiedNov 24, 07:14 AM

We have identified a delay in our event processing pipeline, and are working to mitigate the issue. Features that show flag usage metrics are affected, and data is approximately 20 minutes delayed right now.

minor · resolved · Nov 13, 09:49 PM — Resolved Nov 14, 12:36 AM

Investigating elevated latency

2 updates
resolvedNov 14, 12:36 AM

The issue with the AI Configs list page has been resolved. Impacted services have returned to normal operation.

investigatingNov 13, 09:49 PM

We detected elevated latencies loading the flag list and AI configs list pages. The flag list’s performance has recovered, and we continue to investigate remediation on the AI configs list page.

minor · resolved · Nov 12, 04:00 PM — Resolved Nov 12, 04:00 PM

Elevated error rates for a small number of customers

1 update
resolvedNov 12, 08:38 PM

Between 7:37am and 8:19am PT, a small number of customers in the us-east-1 region encountered elevated error rates with Polling SDK and API requests. This was caused by a minor issue affecting a CDN POP, which has since been resolved.

minor · resolved · Nov 5, 04:42 PM — Resolved Nov 5, 06:02 PM

Customers unable to edit custom rules on flags

4 updates
resolvedNov 5, 06:02 PM

This incident has been resolved.

monitoringNov 5, 04:51 PM

A fix has been implemented and we are monitoring the results.

identifiedNov 5, 04:48 PM

The issue has been identified and a fix is being implemented.

investigatingNov 5, 04:42 PM

We are currently investigating this issue.

minor · resolved · Nov 3, 05:11 PM — Resolved Nov 3, 06:22 PM

AI Configs monitoring page tab failing to load

5 updates
resolvedNov 3, 06:22 PM

This incident has been resolved.

monitoringNov 3, 06:18 PM

A fix has been implemented and we are monitoring the results.

identifiedNov 3, 05:12 PM

We are continuing to work on a fix for this issue.

identifiedNov 3, 05:11 PM

We are continuing to work on a fix for this issue.

identifiedNov 3, 05:11 PM

The issue has been identified and a fix is being implemented.

minor · resolved · Nov 1, 04:05 PM — Resolved Nov 1, 04:05 PM

Delayed flag updates for small number of customers

1 update
resolvedNov 1, 05:34 PM

A limited number of customers (primarily in EU regions) with Polling SDK connections experienced elevated latency and error rates between 9:05am and 9:58am PT, caused by a service incident at our CDN provider.

October 2025 (7 incidents)

none · resolved · Oct 28, 05:04 PM — Resolved Oct 28, 05:53 PM

Live Events not loading

3 updates
resolvedOct 28, 05:53 PM

We've resolved an issue causing Live Events to not load.

identifiedOct 28, 05:40 PM

We've identified an issue that was causing Live Events to not load (starting Oct 23 11:03am PT) and are resolving the issue.

investigatingOct 28, 05:04 PM

We've received reports of Live Events not loading and are investigating.

major · resolved · Oct 28, 02:17 AM — Resolved Oct 28, 02:50 AM

Experiment results and metrics unavailable

3 updates
resolvedOct 28, 02:50 AM

We've resolved an issue causing Experiment results to fail to load.

identifiedOct 28, 02:38 AM

We've identified an issue affecting the display of Experiment results and are working on a fix.

investigatingOct 28, 02:28 AM

We are investigating reports of experiment results and metrics failing to load.

major · resolved · Oct 20, 07:25 AM — Resolved Oct 21, 10:00 AM

Elevated latencies and delays

30 updates
resolvedOct 21, 10:00 AM

This incident has been resolved. One of our mitigation steps involved adding new IPs for stream.launchdarkly.com to our public IP list. Some customers may need to update IP allowlists in their firewalls or proxy servers in order for their services to continue establishing streaming connections from server side SDKs to LaunchDarkly without disruption. Please refer to documentation at https://docs.launchdarkly.com/home/advanced/public-ip-list for more information. Refer to https://app.launchdarkly.com/api/v2/public-ip-list for complete list of public IPs. Customers who switched from streaming to polling mode as a workaround are clear to revert back to streaming mode.

monitoringOct 21, 09:53 AM

One of our mitigation steps involved adding new IPs for stream.launchdarkly.com to our public IP list. Some customers may need to update IP allowlists in their firewalls or proxy servers in order for their services to continue establishing streaming connections from server side SDKs to LaunchDarkly without disruption. Please refer to documentation at https://docs.launchdarkly.com/home/advanced/public-ip-list for more information. Refer to https://app.launchdarkly.com/api/v2/public-ip-list for complete list of public IPs. We will continue to actively monitor our services and provide updates if anything changes. We recommend that customers who switched from streaming to polling mode as a workaround remain in polling mode for now. We will continue to provide updates to this recommendation. We’ll provide another update within 60 minutes. The following stable IPs were added: - 52.22.11.124/32 - 98.90.74.184/32 - 44.214.199.141/32 - 54.158.1.193/32 - 52.20.244.244/32 - 3.222.86.128/32 - 3.209.231.150/32 - 98.87.97.132/32 - 54.243.249.198/32 - 52.205.29.16/32 - 52.200.155.176/32 - 72.44.54.239/32 - 44.193.41.212/32 - 44.193.145.213/32 - 3.230.174.47/32 - 34.193.141.46/32 - 54.145.215.104/32 - 54.83.149.69/32 - 54.167.133.6/32 - 98.86.214.67/32 - 3.210.111.117/32 - 44.198.65.246/32 - 3.223.193.186/32 - 54.164.149.203/32 - 52.202.164.129/32 - 54.211.161.195/32 - 52.44.175.163/32 - 54.87.94.27/32 - 34.196.162.28/32 - 3.229.200.95/32 - 34.206.243.165/32 - 44.198.216.81/32 - 98.85.64.100/32 - 34.193.205.73/32 - 54.82.179.12/32 - 35.169.61.114/32 - 3.225.212.129/32 - 44.214.230.241/32 - 44.197.94.28/32 - 54.225.42.164/32 - 3.232.151.250/32 - 98.88.212.98/32 - 44.206.106.7/32 - 44.219.171.95/32 - 54.81.117.83/32 - 3.212.29.247/32 - 52.207.48.173/32 - 52.21.24.75/32 - 44.209.163.213/32 - 3.212.26.71/32 - 3.232.245.239/32 - 44.214.85.107/32 - 54.85.9.44/32 - 3.212.63.158/32 - 44.214.25.250/32 - 34.225.52.183/32 - 54.144.244.40/32 - 13.216.151.182/32 - 34.205.184.16/32 - 54.243.39.147/32 - 52.21.118.82/32 - 44.208.247.20/32 - 44.209.6.233/32 - 98.85.24.70/32 - 52.206.193.249/32 - 52.203.145.124/32 - 34.207.21.226/32 - 52.6.144.34/32 - 3.221.55.92/32 - 54.160.1.221/32 - 54.236.171.5/32 - 3.210.143.243/32 - 18.204.254.23/32 - 34.224.206.32/32 - 54.152.40.39/32 - 52.201.30.87/32 - 98.86.87.228/32 - 52.70.143.213/32 - 34.199.166.40/32 - 54.225.71.167/32 - 100.26.67.253/32 - 13.219.10.149/32 - 52.203.44.182/32 - 3.215.17.57/32 - 3.217.93.49/32 - 3.215.154.205/32 - 3.224.166.159/32 - 44.205.194.1/32 - 54.162.82.157/32 - 54.175.84.251/32 - 54.211.58.167/32 - 52.22.199.197/32 - 35.169.162.188/32 - 44.205.162.192/32 - 54.224.162.1/32 - 50.16.48.228/32 - 52.203.187.144/32 - 52.22.34.71/32 - 52.44.226.138/32 - 35.169.87.104/32 - 50.17.142.209/32 - 34.226.53.28/32 - 50.16.209.122/32 - 54.173.173.176/32 - 54.197.143.76/32 - 52.45.14.195/32 - 54.84.144.50/32 - 52.205.140.231/32 - 52.1.64.188/32 - 23.22.17.50/32 - 44.213.219.16/32 - 54.211.63.220/32 - 34.236.195.69/32 - 100.29.106.41/32 - 107.20.48.118/32 - 107.22.84.205/32 - 107.23.47.163/32 - 174.129.120.2/32 - 174.129.25.155/32 - 18.204.101.179/32 - 18.207.77.1/32 - 18.214.59.159/32 - 3.208.63.99/32 - 3.209.142.240/32 - 3.210.8.83/32 - 3.211.0.174/32 - 3.211.171.106/32 - 3.211.40.100/32 - 3.211.78.169/32 - 3.212.153.172/32 - 3.212.215.241/32 - 3.212.69.145/32 - 3.215.132.92/32 - 3.215.85.74/32 - 3.217.156.217/32 - 3.217.33.194/32 - 3.222.172.85/32 - 3.225.49.136/32 - 3.226.201.70/32 - 3.232.113.99/32 - 3.81.156.201/32 - 3.94.227.253/32 - 34.192.228.56/32 - 
34.196.53.78/32 - 34.197.220.63/32 - 34.197.229.208/32 - 34.198.5.248/32 - 34.205.180.137/32 - 34.206.142.57/32 - 34.225.210.63/32 - 34.225.44.159/32 - 34.232.120.176/32 - 34.235.101.237/32 - 34.237.149.109/32 - 34.237.7.234/32 - 35.153.62.144/32 - 35.171.42.112/32 - 35.172.28.29/32 - 35.175.51.91/32 - 44.193.160.19/32 - 44.193.176.64/32 - 44.193.192.114/32 - 44.195.178.165/32 - 44.205.130.196/32 - 44.205.142.202/32 - 44.205.242.41/32 - 44.207.32.19/32 - 44.208.215.105/32 - 44.210.2.163/32 - 44.221.72.252/32 - 44.223.189.67/32 - 50.16.53.115/32 - 52.0.20.18/32 - 52.1.126.54/32 - 52.20.44.107/32 - 52.200.10.183/32 - 52.201.19.0/32 - 52.202.18.147/32 - 52.205.199.141/32 - 52.205.74.149/32 - 52.206.123.108/32 - 52.21.16.31/32 - 52.22.120.141/32 - 52.22.75.64/32 - 52.23.189.51/32 - 52.3.131.52/32 - 52.3.164.32/32 - 52.3.203.3/32 - 52.4.17.19/32 - 52.55.197.16/32 - 52.6.134.5/32 - 52.7.81.224/32 - 54.147.67.241/32 - 54.156.155.61/32 - 54.158.114.255/32 - 54.158.201.166/32 - 54.167.202.203/32 - 54.235.4.229/32 - 54.243.165.178/32 - 54.243.220.97/32 - 54.243.227.67/32 - 54.243.238.143/32 - 54.243.34.157/32 - 54.243.54.147/32 - 54.243.58.248/32 - 54.243.79.193/32 - 54.80.39.21/32 - 54.81.213.212/32 - 54.84.21.101/32 - 54.84.245.230/32 - 98.82.52.30/32 - 98.82.55.107/32
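Since the updates above ask customers to reconcile firewall and proxy allowlists with the public IP list endpoint, here is a minimal sketch of pulling that list programmatically. The URL comes from the update itself; the response field names (addresses, outboundAddresses) are assumptions to verify against LaunchDarkly's documentation, and Node 18+ is assumed for the built-in fetch:

```typescript
// Sketch only: pull the public IP list so it can be compared against an
// existing firewall or proxy allowlist. Field names below are assumptions.
interface PublicIpList {
  addresses?: string[];         // assumed: CIDRs for inbound service traffic
  outboundAddresses?: string[]; // assumed: CIDRs for outbound webhook traffic
}

async function fetchPublicIpList(): Promise<PublicIpList> {
  const res = await fetch('https://app.launchdarkly.com/api/v2/public-ip-list');
  if (!res.ok) {
    throw new Error(`Public IP list request failed with status ${res.status}`);
  }
  return (await res.json()) as PublicIpList;
}

fetchPublicIpList()
  .then((list) => {
    for (const cidr of list.addresses ?? []) {
      console.log(cidr); // review these against the current allowlist
    }
  })
  .catch((err) => console.error(err));
```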

monitoringOct 21, 09:19 AM

One of our mitigation steps involved adding new IPs for stream.launchdarkly.com to our public IP list. Some customers may need to update the IP allowlists in their firewalls or proxy servers to ensure that their services can continue establishing streaming connections from server-side SDKs to LaunchDarkly without disruption. Approximately 88% of traffic to stream.launchdarkly.com will continue to be routed to existing stable IPs. We are working with AWS to provide a list of additional stable IPs and will post another update as soon as they become available. We will continue to actively monitor our services and provide updates if anything changes. We recommend that customers who switched from streaming to polling mode as a workaround remain in polling mode for now. We will continue to provide updates to this recommendation. We’ll provide another update within 60 minutes.

monitoringOct 21, 08:08 AM

Server-side streaming is healthy. The load balancer upgrade, along with the addition of another load balancer, has restored our service to healthy levels. We will continue to actively monitor our services and provide updates if anything changes. We recommend that customers who switched from streaming to polling mode as a workaround remain in polling mode for now. We will continue to provide updates to this recommendation. We’ll provide another update within 60 minutes.

identifiedOct 21, 07:14 AM

We're seeing signs of recovery; reported error rates for server-side SDKs are dropping significantly. The initial load balancer unit was upgraded and has begun handling traffic successfully. The additional load balancer is online and is beginning to handle traffic. Customers may still experience delayed flag updates. We'll provide another update within 60 minutes.

identifiedOct 21, 06:53 AM

Server-side streaming API is still experiencing a Partial outage. An additional load balancer has been brought online and is being configured to receive traffic. When we confirm that this is successful, we'll bring the other additional load balancer units online to handle the increased volume in traffic and restore service to our customers. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.

identifiedOct 21, 06:00 AM

Server-side streaming API is still experiencing a Partial outage. We are in the process of deploying additional load balancer units that are about to go online. We expect them to successfully handle the increased volume in traffic and restore service to our customers. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.

identifiedOct 21, 04:52 AM

Server-side streaming API is still experiencing a Partial outage. We're still working on creating additional load balancer units to distribute and handle the increased volume in traffic. AWS is providing active support to LaunchDarkly as we work to restore service to our customers. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.

identifiedOct 21, 04:06 AM

Server-side streaming API is still experiencing a Partial outage and the reported error rates for server-side SDKs are reducing. We've added an additional load balancer unit to distribute the traffic which is helping. Based on the volume of traffic, we're going to add five additional load balancer units to give our service enough capacity to handle it. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.

identifiedOct 21, 02:59 AM

Server-side streaming API is still experiencing a Partial outage and the error rates for server-side SDKs are still high. We've escalated the recovery process with our AWS technical support team to accelerate the redeployment of our ALB for SDK connections to restore service. They are updating our ALB load balance capacity units (LCU) to accommodate increased levels of inbound traffic to our platform. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.

identifiedOct 21, 02:08 AM

Server-side streaming API is still experiencing a Partial outage and the error rates for server-side SDKs are still high. We're working with our AWS technical support team to accelerate the redeployment of our ALB for SDK connections to restore service. As a temporary workaround, we recommend switching server-side SDK configs from streaming to polling. Customers connecting their server-side SDKs directly to LD's streaming capabilities can reconfigure their SDKs to use polling to mitigate.

Node:
- Set LDOptions.stream to false
- https://launchdarkly.com/docs/sdk/features/config#expand-nodejs-server-side-code-sample
- https://launchdarkly.github.io/js-core/packages/sdk/server-node/docs/interfaces/LDOptions.html#stream

Python:
- Set Config.stream to false
- https://launchdarkly.com/docs/sdk/features/config#expand-python-code-sample
- https://launchdarkly-python-sdk.readthedocs.io/en/latest/api-main.html#ldclient.config.Config.stream

Java:
- Use Components.pollingDataSource() instead of the default Components.streamingDataSource()
- https://launchdarkly.com/docs/sdk/features/config#expand-java-code-sample
- https://launchdarkly.github.io/java-core/lib/sdk/server/com/launchdarkly/sdk/server/LDConfig.Builder.html#dataSource-com.launchdarkly.sdk.server.subsystems.ComponentConfigurer-

.NET:
- Create a builder with PollingDataSource(), change its properties with the methods of this class, and pass it to DataSource()
- https://launchdarkly.com/docs/sdk/features/config#expand-net-server-side-code-sample
- https://launchdarkly.github.io/dotnet-server-sdk/pkgs/sdk/server/api/LaunchDarkly.Sdk.Server.Integrations.PollingDataSourceBuilder.html

Enterprise customers connecting their server-side SDKs to a Relay Proxy cluster can reconfigure their Relay Proxy to be in Offline Mode to mitigate: https://launchdarkly.com/docs/sdk/relay-proxy/offline

We'll provide another update within 60 minutes.
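As a concrete sketch of the Node option named in the workaround above (the other SDKs follow their own option names as listed): the SDK key is a placeholder and the pollInterval value is only an example, not configuration taken from the incident.

```typescript
import { init } from '@launchdarkly/node-server-sdk';

// Temporary polling-mode configuration per the workaround above:
// setting LDOptions.stream to false makes the SDK poll for flag updates
// instead of holding a streaming connection.
const client = init('sdk-xxxxxxxx', {
  stream: false,    // disable streaming, fall back to polling
  pollInterval: 30, // seconds between polls (example value)
});

// Once the incident is resolved, remove these options to return to the
// default streaming configuration.
```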

identifiedOct 21, 01:02 AM

Server-side streaming API is still experiencing a Partial outage in our main US region and we're continuing our efforts to restore service. We're redirecting traffic to an EU region to help distribute the load to healthy servers while we work to restore our primary region. Customers connecting their server-side SDKs directly to LD's streaming capabilities can reconfigure their SDKs to use polling to mitigate.

Node:
- Set LDOptions.stream to false
- https://launchdarkly.com/docs/sdk/features/config#expand-nodejs-server-side-code-sample
- https://launchdarkly.github.io/js-core/packages/sdk/server-node/docs/interfaces/LDOptions.html#stream

Python:
- Set Config.stream to false
- https://launchdarkly.com/docs/sdk/features/config#expand-python-code-sample
- https://launchdarkly-python-sdk.readthedocs.io/en/latest/api-main.html#ldclient.config.Config.stream

Java:
- Use Components.pollingDataSource() instead of the default Components.streamingDataSource()
- https://launchdarkly.com/docs/sdk/features/config#expand-java-code-sample
- https://launchdarkly.github.io/java-core/lib/sdk/server/com/launchdarkly/sdk/server/LDConfig.Builder.html#dataSource-com.launchdarkly.sdk.server.subsystems.ComponentConfigurer-

.NET:
- Create a builder with PollingDataSource(), change its properties with the methods of this class, and pass it to DataSource()
- https://launchdarkly.com/docs/sdk/features/config#expand-net-server-side-code-sample
- https://launchdarkly.github.io/dotnet-server-sdk/pkgs/sdk/server/api/LaunchDarkly.Sdk.Server.Integrations.PollingDataSourceBuilder.html

Enterprise customers connecting their server-side SDKs to a Relay Proxy cluster can reconfigure their Relay Proxy to be in Offline Mode to mitigate: https://launchdarkly.com/docs/sdk/relay-proxy/offline

We'll provide another update within 60 minutes.

identifiedOct 20, 11:42 PM

Server-side streaming API is still experiencing a Partial outage and the error rates for server-side SDKs are still high. We're redeploying our ALB for SDK connections to restore service. As a temporary workaround, we recommend switching server-side SDK configs from streaming to polling. Error rates for client-side streaming SDKs are low, but flag updates are still delayed. All other service components are fully recovered and we've updated their status to Operational. We will provide our next update within 60 minutes.

identifiedOct 20, 11:18 PM

We're redeploying parts of our service to address the high error rates for client and server side SDK connections that we continue to see. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. We will provide our next update within 60 minutes.

identifiedOct 20, 10:28 PM

Server-side streaming connections continue to be impacted by this incident. The event ingestion pipeline is fully functional again. This means that the following product areas are functional for all customers, while data sent between Sunday Oct 19 11:45pm PT and Monday Oct 20 2:45pm PT may be unrecoverable:
- AI Configs Insights
- Contexts
- Data Export
- Error Monitoring
- Event Explorer
- Experimentation
- Flag Insights
- Guarded rollouts
- Live Events
Additionally, Observability functionality has recovered as mentioned in our previous update. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. We will provide our next update within 30 minutes.

identifiedOct 20, 09:55 PM

The LaunchDarkly web application is fully recovered for customer traffic. Flag Delivery traffic has been scaled back up to 100% and connection error rates are decreasing but non-zero. Active streaming connections should receive flag updates once successfully connected. If disconnected, these connections will automatically retry in accordance with our SDK behavior until being able to connect successfully. We've currently enabled 7.5% of traffic for the event ingestion pipeline and will continue to enable it progressively. As of 1:40pm PT Observability data is successfully flowing again and we are catching up on data backlog. Observability data between 1:50am PT and 1:40pm PT is unrecoverable due to an outage in the ingest pipeline. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. We will provide our next update within 60 minutes.

identifiedOct 20, 08:55 PM

We've hit our target of healthy, stable nodes that are available for LaunchDarkly web application and are increasing traffic from 10% to 20%. We'll continue to monitor as we scale the web application back up. Recovering the Flag Delivery service for all customers is our top priority. We're working on stabilizing the Flag Delivery Network. We are beginning to progressively enable the event ingestion pipeline for the LaunchDarkly service. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. We will provide our next update within 60 minutes.

identifiedOct 20, 07:47 PM

The impacted AWS region continues to recover and make resources available which we are using to improve the availability of the LaunchDarkly platform. As we continue to recover and scale up, so do our customers. This increase in traffic is slowing our ability to reduce the impact of the outage. For customers who are using the LaunchDarkly SDKs, we do not recommend making changes to your SDK configuration at this time as doing so will impact our ability to continue service during our recovery. For Flag Delivery, server-side streaming is back online and no longer impacted by the incident for most customers. Customers using big segments or payload filtering are still impacted. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. The event ingestion pipeline will remain disabled to limit the traffic volume within LaunchDarkly's services during our recovery. We will provide our next update within 60 minutes.

identifiedOct 20, 07:02 PM

We've made significant progress on our recovery from this incident. Our engineers are continuing to bring the LaunchDarkly web application into a healthy state and have more than tripled the number of healthy nodes to serve our customers. The status of many service components has been upgraded from Major outage to Partial Outage. The following components are still experiencing a Major Outage:
- Experiment Results Processing
- Global Metrics
- Feature Management Context Processing
- Feature Management Data Export
- Feature Management Flag Usage Metric
The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. The event ingestion pipeline will remain disabled to limit the traffic volume within LaunchDarkly's services during our recovery. We will provide our next update within 30 minutes.

identifiedOct 20, 06:28 PM

We continue to work towards recovering from this incident. We're actively working towards restoring the LaunchDarkly service into a healthy state. We now have 58% of the LaunchDarkly web application in a healthy state. The EU and FedRAMP LaunchDarkly instances are not impacted by this incident. While working towards a resolution for our customers, we disabled the event ingestion pipeline to limit the traffic volume within LaunchDarkly's services. This means that the following product areas have unrecoverable data loss:
- AI Configs Insights
- Contexts
- Data Export
- Error Monitoring
- Event Explorer
- Experimentation
- Flag Insights
- Guarded rollouts
- Live Events
- Observability
While recovering, there is continued impact to customers using our SDKs to connect to our Flag Delivery network. Our engineers are continuing to recover our service in our main region. We will provide our next update within 30 minutes.

identifiedOct 20, 04:41 PM

While we continue to resolve the ongoing impact, we want to clarify the ongoing impact to our Flag Delivery Network and SDKs:
- Customers using client-side or server-side SDKs should continue to see the last known flag values if a local cache exists, or fall back to in-code values.
- Customers using our Relay Proxy should continue to see last known flag values if a local cache exists.
- Customers using our Edge SDKs should continue to see last known flag values.
Additionally, our event ingestion pipeline is dropping events that power product features such as flag insights, experimentation, observability, and context indexing.
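To make the "fall back to in-code values" behavior above concrete, here is a minimal sketch of a server-side evaluation with an explicit in-code default, again using the Node server-side SDK; the flag key checkout-redesign and the SDK key are hypothetical:

```typescript
import { init } from '@launchdarkly/node-server-sdk';

const client = init('sdk-xxxxxxxx'); // placeholder SDK key

// The third argument to variation() is the in-code fallback described in the
// update above: it is what callers receive if the SDK has no cached flag data
// and cannot reach LaunchDarkly. 'checkout-redesign' is a hypothetical flag key.
async function showCheckoutRedesign(userKey: string): Promise<boolean> {
  const context = { kind: 'user', key: userKey };
  return client.variation('checkout-redesign', context, false);
}
```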

identifiedOct 20, 03:18 PM

We're continuing to work on resolving the immediate impact from this incident. We're actively working on recovering within our AWS us-east-1 region while also working on options to move traffic to a healthier region.

identifiedOct 20, 01:55 PM

We are continuing to work on a fix for this issue.

identifiedOct 20, 01:16 PM

We are aware that our web app and API are experiencing high error rates due to scaling issues in the AWS us-east-1 region.

identifiedOct 20, 01:06 PM

We are still experiencing delays in flag updates and the event ingestion pipeline, affecting experimentation, data export, flag status metrics, and others. Additionally, we are experiencing an elevated error rate on the client-side SDK streaming API in the us-east-1 region due to scaling issues in that AWS region.

identifiedOct 20, 11:55 AM

We are still experiencing delays in flag updates and the event ingestion pipeline, affecting experimentation, data export, flag status metrics, and others. Additionally, observability data (session replays, errors, logs, and traces) has also been impacted starting ~1:50am PT.

identifiedOct 20, 10:15 AM

We are seeing initial recovery for the following services:
- Flag updates
- SDK requests for environments using Big Segments
We are monitoring for the recovery of the rest of the services.

identifiedOct 20, 08:54 AM

We are continuing to work on the issue. Additional impacted services:
- Delayed flag updates to SDKs
- Dropped SDK events impacting Experimentation and Data Export

identifiedOct 20, 08:00 AM

We have identified an issue with elevated error rates and event pipelines. Currently impacted services are:
- SDK and Relay Proxy requests for environments using Big Segments in the us-east-1 region
- Guarded rollouts
- Scheduled flag changes
- Experimentation
- Data export
- Flag usage metrics
- Emails and notifications
- Integration webhooks

investigatingOct 20, 07:25 AM

We are investigating elevated latencies and delays in multiple services including scheduled flag changes, flag updates and events processing. We will post updates as they are available.

minor · resolved · Oct 10, 06:09 PM — Resolved Oct 10, 06:56 PM

Delays in event data

3 updates
resolvedOct 10, 06:56 PM

The issue with delays in event data has been resolved. Event data is up to date, and impacted services have returned to normal operation.

identifiedOct 10, 06:11 PM

Customers are experiencing up to 21 minute delays with product features using event data. We have identified the issue and are continuing our work to resolve it. Data loss is not expected. Customers may begin seeing recovery of affected services at this time.

investigatingOct 10, 06:09 PM

All customers are experiencing up to 21 minute delays with product features using event data. We are investigating and will provide updates as they become available. Data loss is not expected.

none · resolved · Oct 7, 10:40 PM — Resolved Oct 7, 10:50 PM

Delayed flag updates for small number of customers

2 updates
resolvedOct 7, 10:50 PM

The issue has been resolved. Flag updates have returned to normal operation.

monitoringOct 7, 10:40 PM

A small number of customers experienced delayed flag updates made between 15:24 and 15:34 PT. The issue has been mitigated and we will continue monitoring.

minor · resolved · Oct 3, 06:40 PM — Resolved Oct 3, 08:03 PM

Errors generating new client libraries

2 updates
resolvedOct 3, 08:03 PM

Users are now able to generate new client libraries.

investigatingOct 3, 06:40 PM

We're aware of intermittent difficulties generating new client libraries. We're investigating.

major · resolved · Oct 1, 03:12 PM — Resolved Oct 1, 03:46 PM

Delay in event processing

4 updates
resolvedOct 1, 03:46 PM

This incident has been resolved.

monitoringOct 1, 03:28 PM

We've implemented a fix and are monitoring the results. Impact to Data Export was limited to our streaming data export product.

investigatingOct 1, 03:21 PM

We've mitigated the impact on processing events for all features outside of Data Export. We're continuing to investigate.

investigatingOct 1, 03:12 PM

We are currently investigating an issue recording events; some flag, metric, and experimentation events won't show in the UI.

September 2025 (8 incidents)

minor · resolved · Sep 30, 05:00 PM — Resolved Sep 30, 07:53 PM

Self-serve legacy customers are unable to check out or modify plan

3 updates
resolvedSep 30, 07:53 PM

The issue with legacy self-serve checkout has been resolved.

monitoringSep 30, 06:44 PM

The issue with legacy self-serve plans has been identified and a fix has been implemented. We are continuing to monitor the performance of impacted services. We will continue to update this page until it is resolved.

identifiedSep 30, 05:00 PM

Customers on legacy plans (such as Starter or Professional) are unable to check out or modify their plan. We have identified a fix and will provide an update as soon as the fix is ready. Please contact Support if you need to make an immediate change to your plan.

minor · resolved · Sep 22, 06:30 PM — Resolved Sep 22, 06:30 PM

Increased error rate on flag status API

1 update
resolvedSep 22, 07:08 PM

From 11:38 am PT to 11:46 am PT, we experienced an elevated error rate on the flag evaluation and flag status APIs, used by the flag list, flag targeting, and feature monitoring endpoints.

minor · resolved · Sep 19, 06:46 PM — Resolved Sep 19, 07:18 PM

Goals Endpoint Initialization Failures Impacting Experiments

3 updates
resolvedSep 19, 07:18 PM

The issue with the Goals endpoint has been resolved. We will continue monitoring to ensure normal operation.

monitoringSep 19, 06:57 PM

A fix has been implemented for the Goals endpoint. We are actively monitoring the system to ensure experiments are functioning as expected.

identifiedSep 19, 06:46 PM

We’ve identified an issue where the Goals endpoint is failing to initialize in some instances. This is currently impacting experiments. Our team is actively working on a fix to address the cause.

minor · resolved · Sep 19, 04:13 PM — Resolved Sep 19, 04:49 PM

Customers are unable to edit flag JSON

2 updates
resolvedSep 19, 04:49 PM

The issue where the JSON failed to load when clicking “Edit JSON” has been fixed. Functionality is now fully restored.

identifiedSep 19, 04:13 PM

The JSON fails to load when clicking “Edit JSON.” We’ve identified the issue and are deploying a fix.

minor · resolved · Sep 19, 10:50 AM — Resolved Sep 19, 03:04 PM

Customers on Foundation plans are unable to invite new members

6 updates
resolvedSep 19, 03:04 PM

This incident has been resolved.

monitoringSep 19, 02:34 PM

A fix has been implemented and we are monitoring the results.

identifiedSep 19, 02:22 PM

We are continuing to work on a fix for this issue.

identifiedSep 19, 01:03 PM

We are continuing to work on a fix for this issue.

identifiedSep 19, 12:08 PM

The issue has been identified and a fix is being implemented.

investigatingSep 19, 10:50 AM

All customers on Foundation plans are experiencing issues inviting new members via the LaunchDarkly UI. As a workaround, these customers can navigate to https://app.launchdarkly.com/projects/default/onboarding to invite members using the onboarding page. We are investigating a fix and will provide updates as they become available.

minor · resolved · Sep 17, 06:20 PM — Resolved Sep 17, 07:54 PM

Trial Customers Blocked from Foundational Plan Upgrade

3 updates
resolvedSep 17, 07:54 PM

The issue preventing trial customers from upgrading to the Foundational plan has been resolved. All functionality is now operating as expected.

identifiedSep 17, 06:24 PM

We have identified the issue preventing trial customers from upgrading to the Foundational plan. A fix is being implemented.

investigatingSep 17, 06:20 PM

We are investigating an issue where trial customers are unable to upgrade to the Foundational plan.

none · resolved · Sep 16, 11:30 PM — Resolved Sep 16, 11:30 PM

Delayed flag updates for small number of customers

1 update
resolvedSep 17, 12:42 AM

A small number of customers experienced delayed flag updates made between 16:22 and 16:49 PT. The issue has been resolved, and all services are now operating normally.

critical · resolved · Sep 9, 12:22 PM — Resolved Sep 9, 02:53 PM

Guarded releases and Experiments are experiencing elevated errors

6 updates
resolvedSep 9, 02:53 PM

We have turned on Views for Early Access users and confirmed there are no issues with Guarded Releases or Experiments. This issue is fully resolved.

monitoringSep 9, 01:36 PM

We are continuing to monitor for any further issues.

monitoringSep 9, 01:10 PM

Only customers who had Early Access to the new Views feature were affected by this incident. We've temporarily turned off Views while implementing a long term fix.

monitoringSep 9, 12:39 PM

We have identified the issue, implemented a fix, and are monitoring the results.

investigatingSep 9, 12:31 PM

We are continuing to investigate this issue.

investigatingSep 9, 12:22 PM

We are currently investigating this issue.

August 2025 (8 incidents)

minor · resolved · Aug 28, 05:00 PM — Resolved Aug 28, 05:00 PM

Elevated TLS negotiation error rate for server side SDKs in streaming mode

1 update
resolvedAug 28, 06:30 PM

Between 10:12 AM and 10:52 AM PT, a small subset of customers using LaunchDarkly server-side SDKs in streaming mode on older TLS versions may have experienced TLS negotiation errors when initializing SDKs. The issue has been fully resolved, and all services are now operating normally.

minor · resolved · Aug 27, 04:00 AM — Resolved Aug 27, 04:00 AM

CORS errors for a small subset of customers

1 update
resolvedAug 27, 05:31 AM

A small number of customers using the LaunchDarkly JavaScript client-side SDK experienced CORS errors when initializing the SDK from secondary domains, in cases where a single browser session requested the same LaunchDarkly environment from two or more domains. The impact started on Aug 22 and was mitigated on Aug 26. This issue has now been resolved.

minor · resolved · Aug 25, 07:06 PM — Resolved Aug 25, 10:11 PM

Onboarding/Quickstart Not Visible

4 updates
resolvedAug 25, 10:11 PM

This incident has been resolved.

monitoringAug 25, 10:08 PM

A fix has been deployed and we will continue to monitor for a period of time.

identifiedAug 25, 07:24 PM

The issue has been identified and the team is working on a rollback to restore the onboarding and Quickstart experience.

investigatingAug 25, 07:06 PM

We are currently investigating an issue where onboarding and Quickstart functionality in the application are not visible to customers.

minor · resolved · Aug 21, 06:51 PM — Resolved Aug 21, 11:05 PM

Flag Evaluation Latency

3 updates
resolvedAug 21, 11:05 PM

We have addressed the source of the latency on flag evaluations and all affected endpoints.

investigatingAug 21, 07:59 PM

We are continuing to investigate the intermittent latency and have engaged our backend partner to facilitate the investigation.

investigatingAug 21, 06:51 PM

We are aware of some customers experiencing brief periods of latency in flag evaluations and flag list endpoints. We are continuing to investigate and will provide an update shortly.

minor · resolved · Aug 13, 01:09 PM — Resolved Aug 13, 01:17 PM

EU West region customers may experience failures in initial streaming requests

2 updates
resolvedAug 13, 01:17 PM

The streaming failures in the EU West region have been resolved. Impacted services have returned to normal operation.

investigatingAug 13, 01:09 PM

5% of customers are experiencing failures in initial streaming requests in the EU West region. SDKs will retry. We are investigating and will provide updates as they become available.

minor · resolved · Aug 11, 06:25 PM — Resolved Aug 11, 07:51 PM

Elevated TLS negotiation error rate for SDKs in streaming mode

4 updates
resolvedAug 11, 07:51 PM

The issue with TLS handshake errors from SDKs to our streaming service has been resolved.

monitoringAug 11, 07:29 PM

We are no longer experiencing elevated error rates. We are continuing to monitor the performance of our streaming service.

identifiedAug 11, 07:00 PM

We have identified the issue with TLS handshake errors from SDKs for a small number of customers. We have reverted the change and expect intermittent elevated error rates while the revert rolls out.

investigatingAug 11, 06:25 PM

We are investigating reports of TLS handshake errors from SDKs to our streaming service for a small number of customers. Polling SDKs are not affected.

none · resolved · Aug 8, 08:23 PM — Resolved Aug 8, 08:23 PM

Elevated error rates in SDK streaming connections

1 update
resolvedAug 8, 09:08 PM

From 8:23 PM UTC until 8:37 PM UTC we observed elevated error rates for streaming connections in SDKs across all regions. These errors self-resolved as clients successfully retried, and there is no ongoing impact.

minor · resolved · Aug 6, 05:46 PM — Resolved Aug 6, 11:43 PM

Account Usage Charts for Server Side SDKs Degraded

4 updates
resolvedAug 6, 11:43 PM

This incident has been resolved.

monitoringAug 6, 10:30 PM

We have implemented a fix for the Account Settings usage charts that were under-reporting connection counts for server-side SDKs, and we are monitoring the results. Customers should start seeing new connection counts show up in the charts. We are unable to backfill the missing under-reported data from July 26 to Aug 5.

identifiedAug 6, 07:02 PM

The issue with account usage charts for server-side SDK connection metrics has been identified and the fix is being implemented.

investigatingAug 6, 05:46 PM

We are investigating an issue with the Account Settings usage charts under-reporting connection counts for server-side SDKs for most customers since July 26.

July 2025 (9 incidents)

none · resolved · Jul 30, 08:18 PM — Resolved Jul 30, 08:18 PM

Observability [EAP] Logs and Traces Ingest Failures

1 update
resolvedJul 30, 08:18 PM

Between 18:42-19:52 UTC (11:42-12:52 PDT), Logs and Traces were not ingested due to an issue in the hosted OpenTelemetry collector. Logs and Traces sent during this time will not appear in the Observability product. We apologize for the inconvenience and have taken measures to prevent such an incident in the future.

none · resolved · Jul 24, 07:15 PM — Resolved Jul 24, 07:15 PM

Session Data Not Loading in App

1 update
resolvedJul 24, 10:17 PM

During the period from 12:15PM PT to 2:42PM PT, customers monitoring user sessions via the SDK may have experienced issues loading data. At 2:42PM PT, a fix was implemented to address this issue.

none · resolved · Jul 23, 07:34 PM — Resolved Jul 23, 07:34 PM

Blank Pages in the Web App

1 update
resolvedJul 23, 08:12 PM

A guarded rollout serving less than 1% of LaunchDarkly users rendered blank pages in the web application between 12:34pm PT and 12:59pm PT.

minor · resolved · Jul 23, 03:55 PM — Resolved Jul 23, 04:05 PM

Event Processing Delays

2 updates
resolvedJul 23, 04:05 PM

The intermittent delays have been resolved.

investigatingJul 23, 03:55 PM

Event processing is currently delayed. Some features may show stale data for a period of time until the issue is resolved.

minor · resolved · Jul 22, 01:44 PM — Resolved Jul 22, 03:10 PM

Elevated error rates in usage and flag endpoints

4 updates
resolvedJul 22, 03:10 PM

This incident has been resolved.

monitoringJul 22, 02:17 PM

We have addressed the backend performance issues and are no longer observing elevated error rates on the impacted endpoints.

identifiedJul 22, 01:45 PM

We are continuing to work on a fix for this issue.

identifiedJul 22, 01:44 PM

We have identified an issue causing elevated error rates on a small subset of requests on the LaunchDarkly platform. Some users may experience 5xx errors when calling the following endpoints:
- Usage data endpoints
- GET flag(s) endpoint
We are currently working to mitigate the underlying root cause.

major · resolved · Jul 21, 07:14 PM — Resolved Jul 21, 08:17 PM

Elevated Error Rates in Event Ingestion

5 updates
resolvedJul 21, 08:17 PM

This incident has been resolved.

monitoringJul 21, 07:56 PM

All event ingestion errors have been resolved and downstream services processing event data have been restored to normal operations.

identifiedJul 21, 07:26 PM

Engineers have scaled the event service to resolve the majority of error rates. Users should experience a reduction in the errors on events.launchdarkly.com. Some downstream services in the event data warehouse are still not being updated. We will update to monitoring once we observe the remaining services recover.

investigatingJul 21, 07:19 PM

We are continuing to investigate this issue and implementing steps to address the error rates. Users may experience errors when calling the events.launchdarkly.com endpoint.

investigatingJul 21, 07:14 PM

We are currently observing elevated errors in event data ingestion related to experiments, data export, and other areas of the LaunchDarkly platform. Users may experience stale data when using these features.

minor · resolved · Jul 16, 07:51 PM — Resolved Jul 16, 08:12 PM

Observability [EAP] Pages Outage

2 updates
resolvedJul 16, 08:12 PM

From 10:49:39 AM PDT to 1:11 PM PDT, observability pages were not loading correctly. This incident has been resolved, and there was no loss of customer observability data.

investigatingJul 16, 07:51 PM

Customers cannot load observability data in the app, but observability data is still being ingested.

minor · resolved · Jul 10, 08:30 AM — Resolved Jul 10, 08:30 AM

Partial Event Processing Interruption

1 update
resolvedJul 10, 10:10 AM

Between 1:20 AM and 1:40 AM PT on July 10, 2025, we experienced an issue that affected the processing of some analytical events. These events power several features, including flag evaluation graphs, experimentation results, and data exports. Flag evaluation functionality remained fully operational throughout the incident. All systems are now functioning normally, and we are actively monitoring performance while continuing to investigate the root cause to prevent recurrence. We apologize for any inconvenience this may have caused and appreciate your patience.

none · resolved · Jul 1, 04:02 AM — Resolved Jul 1, 04:01 PM

Docker Image for Relay Proxy (latest) incorrectly pointing to the v9 alpha release.

2 updates
resolvedJul 1, 04:01 PM

This incident has been resolved.

monitoringJul 1, 04:02 AM

There was an issue with the latest Relay Proxy Docker image pointing to the v9 alpha release between 12:11 PM and 8:52 PM PDT on Jun 30, 2025. This has been identified, and we have now replaced the latest Relay Proxy Docker image with the latest working v8 version. If you encountered an issue with the latest Relay Proxy image during the incident window, please pull the latest image again, and that should resolve the issue. Please do not use the v9 alpha release without reaching out to LaunchDarkly. The SDKs are not affected.

June 2025 (2 incidents)

minor · resolved · Jun 13, 04:43 PM — Resolved Jun 13, 07:33 PM

CUPED-Adjusted Iterations Delayed

2 updates
resolvedJun 13, 07:33 PM

All missing data has been successfully backfilled.

identifiedJun 13, 04:43 PM

Users of CUPED-adjusted iterations in experimentation may be missing data from June 11th 4:30AM PST to June 13th 10AM PST. There is no underlying data loss and our engineers are working to backfill this data as quickly as possible. We will provide another update once the missing data has been fully restored.

minor · resolved · Jun 12, 06:47 PM — Resolved Jun 12, 09:52 PM

Delays for Customers Using the Cloudflare Edge SDK

5 updates
resolvedJun 12, 09:52 PM

Our partner has confirmed full restoration of their platform. We began observing full recovery starting at 2:07PM PT.

monitoringJun 12, 08:20 PM

Our partner is reporting partial recovery on their platform. We are still observing intermittent delays in the Edge SDK and warehouse-native data export. We'll continue to monitor and update when full platform recovery has been completed by our partner.

identifiedJun 12, 07:37 PM

We have been notified our vendor handling data warehouse export functionality is also impacted. LaunchDarkly customers using warehouse-native data export may experience data delays until the partner issue is mitigated.

identifiedJun 12, 07:17 PM

We are continuing to monitor the status with our partners. Edge SDK customers continue to experience intermittent delays.

identifiedJun 12, 06:47 PM

LaunchDarkly customers leveraging the Cloudflare Edge SDK may experience intermittent delays in processing flag changes. This is due to broad platform-wide outages at Cloudflare. We're continuing to investigate the status with this partner. All other edge SDKs should be functioning normally. We'll post an update when available from Cloudflare.